Perhaps for array languages LLMs would do a better job running on a q/APL parse tree (produced with tree-sitter?), with the output compressed back into the traditional array-language "line noise" only just before display, outside the agentic workflow.
This is the dream, but it keeps running aground on reality. It seems intuitive that running language models on the AST should work better than running them on the source code, but as far as I'm aware every attempt to do this has resulted in much worse performance. There's so much more training data available as source code, and working in source-code form gives the model access to so much more outside context (comments, documentation, Stack Overflow posts), that those advantages more than cancel out the drawbacks of the surface syntax.
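For concreteness, the round-trip the parent comment proposes can be sketched with Python's built-in `ast` module standing in for a (hypothetical) tree-sitter grammar for q/APL: parse the source into a tree the model would operate on, then render the tree back to source only at display time. This is just an illustration of the pipeline shape, not a claim that such tooling exists for array languages.

```python
import ast

# Stand-in for the proposed pipeline: the model would see (and edit) a
# serialized parse tree, and the tree is rendered back to source only at
# display time. Python's ast module substitutes here for a hypothetical
# tree-sitter q/APL grammar.

source = "total = sum(x * x for x in data)"

tree = ast.parse(source)               # source -> AST (what the model would work on)
serialized = ast.dump(tree, indent=2)  # the tree form an LLM could be fed

# ... the model would edit `tree` here ...

rendered = ast.unparse(tree)           # AST -> source, just before display
print(rendered)
```

Round-tripping through `ast.unparse` may normalize whitespace and parentheses, but re-parsing the rendered text yields a structurally identical tree, which is the property the proposed workflow relies on.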