Better LLM assistance for Statamic and Antlers syntax? #11288
Unanswered · jensolafkoch asked this question in Q&A
Replies: 1 comment 1 reply
-
We tried to feed the Statamic docs into an LLM, but it produced wrong results very often, so we stopped trying. What have you tried searching for in our docs that you haven't found?
-
I’m curious about your experiences with current LLMs like Claude 3.5 Sonnet or GPT-4o in relation to Statamic and its Antlers syntax, particularly when it comes to tags, modifiers, and inline PHP.
It seems that Statamic-specific material is sparse in the training data, as one might expect. Do you know of any models that are better suited or specifically trained for Statamic and Antlers? Alternatively, are you aware of any initiatives aimed at fine-tuning LLMs for this purpose?
There’s always the option of injecting such knowledge into the context manually—has anyone explored or shared their experiences with this approach?
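For the manual context-injection approach, a minimal sketch of what this can look like in practice: prepend a short Antlers reference to each prompt so the model grounds its answers in real syntax instead of guessing. The `build_prompt` helper and the cheat-sheet contents below are illustrative, not an official Statamic resource or any particular vendor's API.

```python
# Sketch: manually injecting Statamic/Antlers reference material into an LLM
# prompt. No model API is called here; this only builds the prompt string,
# which you would then pass to whichever chat endpoint you use.

# Illustrative excerpt of Antlers syntax (variables, modifiers, tag pairs,
# conditions) -- the kind of material the question proposes injecting.
ANTLERS_CHEATSHEET = """\
Antlers basics:
- Variables: {{ title }}
- Modifiers: {{ title | upper }}
- Tag pairs: {{ collection:blog limit="5" }} {{ title }} {{ /collection:blog }}
- Conditions: {{ if featured }} ... {{ /if }}
"""


def build_prompt(question: str, reference: str = ANTLERS_CHEATSHEET) -> str:
    """Prepend reference material and instruct the model to stay within it."""
    return (
        "You are assisting with Statamic's Antlers templating language.\n"
        "Use ONLY the syntax shown in the reference below; if something is "
        "not covered there, say so instead of guessing.\n\n"
        f"--- REFERENCE ---\n{reference}--- END REFERENCE ---\n\n"
        f"Question: {question}"
    )


prompt = build_prompt("How do I loop over the 5 latest blog entries?")
print(prompt)
```

The same idea scales up: instead of a hand-written cheat sheet, you could paste in the relevant Statamic docs pages per question, which is essentially a manual form of retrieval-augmented prompting.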
BTW, I mostly use ChatGPT Plus and Codeium in PhpStorm, as well as experimenting with the new and promising Windsurf editor from Codeium.