# [Feat] Apply Visual Vocabulary to vizro-ai #1059
base: main
## Conversation
View the example dashboards of the current commit live on PyCafe ☕ 🚀 (updated on: 2025-03-14 00:14:56 UTC). Compare the examples using the commit's wheel file vs the latest released version:

- vizro-core/examples/scratch_dev
- vizro-core/examples/dev/
- vizro-core/examples/visual-vocabulary/
- vizro-core/examples/tutorial/
- vizro-ai/examples/dashboard_ui/
I just skimmed through at a high level and it generally looks good! 🙂 I just left a few suggestions.
I like the idea of including the "how should the graph be used" notes. Does it actually improve behaviour of the model?
vizro-core/examples/visual-vocabulary/tools/generate_vivivo_json.py
The "#### What is..." and "When should I use it?" sections are not used yet. I extracted them anyway because they look informative and could be useful if we ever need to build an agent to assist chart selection, for example.
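A minimal sketch of how those sections could be pulled out of a chart page's markdown and collected into JSON. The helper name, heading names, and page content here are illustrative only; the actual `generate_vivivo_json.py` may work differently.

```python
import json
import re


def extract_sections(markdown: str, headings: list[str]) -> dict[str, str]:
    """Pull the body text under each '#### <heading>' out of a chart page.

    Hypothetical helper; the real generate_vivivo_json.py may differ.
    """
    sections = {}
    for heading in headings:
        # Capture everything between this heading and the next '#' heading
        # (or the end of the file).
        pattern = rf"^#+\s*{re.escape(heading)}.*?\n(.*?)(?=^#|\Z)"
        match = re.search(pattern, markdown, flags=re.M | re.S)
        if match:
            sections[heading] = match.group(1).strip()
    return sections


page = """\
#### What is a waterfall chart?

A waterfall chart shows cumulative effects of sequential values.

#### When should I use it?

Use it to explain how an initial value is affected by gains and losses.
"""

info = extract_sections(page, ["What is", "When should I use it?"])
print(json.dumps(info, indent=2))
```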
Tried to skim over the vizro-ai part (not the json creation) and I think it's a bit late for me as I am struggling to make full sense of it :( Will have to wait for Monday if you would like a review from me...
In general I would suggest making types a little clearer, for example what data am I providing where. That might make it a bit easier for me to digest.
General questions:
- I take it you have opted for the route we discussed, where we do not package vivivo, but we provide it as an occasionally updated file?
- You have taken the route of rerunning the request again if `augment` is true?
As for the second point, I may have an alternative: have you tested how fast (in terms of latency) a single short LLM request is that just asks for the chart type, giving as context the available charts from vivivo (maybe with the when-to-use info), but of course allowing the model to go beyond the list if nothing fits?
If that is fast, I'd argue why not do that first, and then send a single enhanced request that includes the example code for that chart type, and not have the model produce code twice.
If that works well, maybe we can even scrap `augment`? If it's hardly slower, then why not. We even have the `minimal` argument, which still makes it super quick.
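The two-step flow suggested above could look roughly like this. Everything here is illustrative: the function names, the catalogue shape, and the stubbed LLM are not the actual vizro-ai API, just a sketch of "pick the chart type first, then make a single code-producing request enriched with the vivivo example."

```python
# Tiny stand-in for the vivivo catalogue (chart type -> when-to-use hint).
VIVIVO_CHARTS = {
    "bar": "Use for categorical comparisons.",
    "line": "Use for trends over time.",
    "waterfall": "Use to show cumulative gains and losses.",
}


def pick_chart_type(user_query: str, llm) -> str:
    # Step 1: a short request that only selects a chart type.
    catalogue = "\n".join(f"- {name}: {hint}" for name, hint in VIVIVO_CHARTS.items())
    prompt = (
        f"Available chart types:\n{catalogue}\n\n"
        f"User request: {user_query}\n"
        "Answer with one chart type; you may go beyond the list if nothing fits."
    )
    return llm(prompt).strip()


def generate_chart_code(user_query: str, llm) -> str:
    # Step 2: the single enhanced, code-producing request.
    chart_type = pick_chart_type(user_query, llm)
    hint = VIVIVO_CHARTS.get(chart_type, "")
    return llm(
        f"Write plotly code for a {chart_type} chart. Context: {hint}\n"
        f"Request: {user_query}"
    )


# Stubbed LLM so the sketch runs without any provider:
def fake_llm(prompt: str) -> str:
    return "line" if "Available chart types" in prompt else "# code for line chart"


result = generate_chart_code("show revenue over time", fake_llm)
print(result)  # → # code for line chart
```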
for more information, see https://pre-commit.ci
Yes! You and @antonymilne suggested a similar approach. And I really like the byproduct (the JSON representation of vivivo), which I believe could be useful in other genai settings. In terms of when to update the file, we could make …
I thought about the alternative and decided not to pursue it somehow. Let me give it another try. Either I will remember why I quit last time or prove it's a good route. You can take another look on Monday.
Ok interesting. Yes, thinking about it a little more, I think that could save some tokens and LLM confusion. Let me know once you have tried, or if you would like to catch up to discuss this a bit more. I saw that tomorrow is not possible anymore!
```
@@ -125,14 +125,20 @@ dependencies = [
    "plotly==6.0.0" # to leverage new MapLibre features in visual-vocabulary,
]
installer = "uv"
scripts = {example = "cd examples/{args:scratch_dev}; python app.py"}
```
Sorry I wasn't clearer before, but this line should stay in the default environment; otherwise we won't be able to do `hatch run example` and would need to do `hatch run examples:example` instead, which is annoying (or I'm just lazy).
The idea of the jobs in the default environment that refer to other environments is that they're shortcuts, like `hatch run lint`, that we run often so we don't have to explicitly specify the environment every time. For things like `gen-vivivo` that aren't run all the time we don't need the top-level wrapper, but for `example` we do still want it.
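The pattern being described might look like this in vizro-core's `pyproject.toml` (a sketch only; the real file and script bodies may differ, and the `lint` and `gen-vivivo` entries below are illustrative):

```toml
# Default environment: thin shortcuts for frequently used jobs.
[tool.hatch.envs.default.scripts]
lint = "hatch run lint:lint"  # illustrative
example = "hatch run examples:example {args:scratch_dev}"

# The examples environment owns the real implementation, plus jobs like
# gen-vivivo that are run rarely and so get no top-level shortcut.
[tool.hatch.envs.examples.scripts]
example = "cd examples/{args:scratch_dev}; python app.py"
gen-vivivo = "python tools/generate_vivivo_json.py"  # illustrative
```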
The whole `hatch run examples:gen-vivivo` flow feels much cleaner and simpler now there's just one hatch command for it all though, I like it 👍
You can still run

```
hatch run example
hatch run example visual-vocabulary
```

because in the default env there is a shortcut command:

```
example = "hatch run examples:example {args:scratch_dev}" # shortcut script to underlying example environment script.
```
Would you like me to replace this command in the default env:

```
example = "hatch run examples:example {args:scratch_dev}" # shortcut script to underlying example environment script.
```

with this command:

```
example = "cd examples/{args:scratch_dev}; python app.py"
```

so we can remove this line in the example env:

```
example = "cd examples/{args:scratch_dev}; python app.py"
```
```diff
@@ -143,7 +143,7 @@ class ChartGroup:
 part_to_whole_intro_text = """
-#### Part-to-whole helps you show how one whole item breaks down into its component parts. If you consider the size of\
+#### Part-to-whole helps you show how one whole item breaks down into its component parts. If you consider the size of \
```
Is this intentional? Does it still render ok, or has it squashed two words together?
Yes, it was squashing two words together and caused a linting error in the output json file.
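The squashing can be reproduced in two lines: inside a Python string literal, a trailing backslash joins the lines with no separator, so the space before the backslash is the only thing keeping the words apart.

```python
# A backslash at the end of a line inside a string literal continues the
# string on the next line with nothing inserted between them:
squashed = "size of\
the item"
# With an explicit space before the backslash, the words stay separated:
spaced = "size of \
the item"

print(squashed)  # size ofthe item
print(spaced)    # size of the item
```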
## Description

**How does vivivo become available in vizro-ai?**

With the current approach, whenever we make a vizro-ai release, run

```shell
# under vizro-core
hatch run examples:gen-vivivo
```

to generate the latest `visual_vocabulary.json` and copy it to the vizro-ai directory.

**How is vizro-ai leveraging vivivo now?**
Via the `chart_type` of `BaseChartPlan`. For example:

- `augment=False`: *(screenshot)*
- `augment=True`: *(screenshot)*
TODOs:

- `augment=False`
LIMITATIONS of vizro-ai:

- Since `.plot` needs to be compatible with `.dashboard`, we enforced that the code should always be in this format. This means the vizro-ai generated code couldn't be as flexible as Visual Vocabulary, e.g.:
vizro/vizro-core/examples/visual-vocabulary/pages/examples/waterfall.py
Lines 7 to 12 in 77c898b
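One way to picture the enforced format is as a structural check on the generated code: a single function taking a `data_frame` argument, rather than the free-form module-level code the Visual Vocabulary pages use. The function name `custom_chart` and the check itself are illustrative assumptions, not the actual vizro-ai validator.

```python
import ast


def follows_enforced_format(code: str, func_name: str = "custom_chart") -> bool:
    """Check that generated code is a single function taking `data_frame`.

    Illustrative only: the actual format vizro-ai enforces (so that .plot
    output stays compatible with .dashboard) may have more rules.
    """
    try:
        tree = ast.parse(code)
    except SyntaxError:
        return False
    funcs = [node for node in tree.body if isinstance(node, ast.FunctionDef)]
    if len(funcs) != 1 or funcs[0].name != func_name:
        return False
    args = [a.arg for a in funcs[0].args.args]
    return args[:1] == ["data_frame"]


ok = follows_enforced_format(
    "def custom_chart(data_frame):\n    return data_frame"
)
# Free-form module-level code, like the vivivo example pages, fails the check:
bad = follows_enforced_format("fig = px.bar(df)")
print(ok, bad)  # True False
```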
Screenshot