I find it extremely useful as a research tool. It can talk to probably over 100 models at this point, providing a single interface to all of them and logging full details of prompts and responses to its SQLite database. This makes it fantastic for recording experiments with different models over time.
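Since the log database is plain SQLite you can poke at past experiments directly. Here's a minimal sketch; the `responses` table and its `model`/`prompt`/`id` columns are my assumptions about the schema, so check the real layout with `sqlite3 "$(llm logs path)" .schema` first:

```python
import sqlite3

def recent_prompts(db_path, limit=5):
    """Return the most recent (model, prompt) pairs from an llm log database.

    Assumes a `responses` table with `id`, `model` and `prompt` columns --
    verify against the actual schema before relying on this.
    """
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT model, prompt FROM responses ORDER BY id DESC LIMIT ?",
            (limit,),
        ).fetchall()
    finally:
        conn.close()
    return rows
```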
The ability to pipe files and other program outputs into an LLM is wildly useful. A few examples:
llm -f code.py -s 'Add type hints' > code_typed.py
git diff | llm -s 'write a commit message'
llm -f https://raw.githubusercontent.com/BenjaminAster/CSS-Minecraft/refs/heads/main/main.css \
-s 'explain all the tricks used by this CSS'
I'm getting a whole lot of coding done with LLM now too. Here's how I wrote one of my recent plugins:
llm -m openai/o3 \
-f https://raw.githubusercontent.com/simonw/llm-hacker-news/refs/heads/main/llm_hacker_news.py \
-f https://raw.githubusercontent.com/simonw/tools/refs/heads/main/github-issue-to-markdown.html \
-s 'Write a new fragments plugin in Python that registers issue:org/repo/123 which fetches that issue
number from the specified github repo and uses the same markdown logic as the HTML page to turn that into a fragment'
LLM was also used recently in that "How I used o3 to find CVE-2025-37899, a remote zeroday vulnerability in the Linux kernel’s SMB implementation" story - to help automate running hundreds of prompts: https://sean.heelan.io/2025/05/22/how-i-used-o3-to-find-cve-...
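Automating hundreds of prompts like that is mostly a matter of fanning out over a worker pool. A rough sketch - `run_prompt` here is a hypothetical stand-in for whatever actually calls the model (shelling out to the `llm` CLI, say, or hitting an API), and since model calls are I/O-bound, threads give real parallelism:

```python
from concurrent.futures import ThreadPoolExecutor

def run_many(prompts, run_prompt, max_workers=8):
    """Run run_prompt over every prompt concurrently, preserving input order.

    `run_prompt` is a placeholder callable: in practice it might invoke
    the `llm` CLI via subprocess or call a model API directly.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # pool.map yields results in the same order as the input prompts
        return list(pool.map(run_prompt, prompts))
```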
Wow, what a great overview; is there one big doc that lists all these options?
I'd love to try it -- I've been trying the `gh` Copilot plugin but this looks more appealing.
> had I used o3 to find and fix the original vulnerability I would have, in theory [...]
they ran a scenario that they thought could have led to finding it, which is pretty much not what you said. We don't know how much their foreshadowing crept into the LLM context, and even the article says it was also partly chance. Please be more precise and don't give in to these false beliefs about productivity. Not yet, at least.
Very fair. I expect others will confuse what you mean, the productivity of your tool called LLM, with the doubts many people have about the actual productivity of LLMs in the large-language-model sense.
It can process images too! https://simonwillison.net/2024/Oct/29/llm-multi-modal/
LLM plugins can be a lot of fun. One of my favorites is llm-cmd: it proposes a shell command to run, and you hit enter to run it. I use it for ffmpeg and similar tools all the time now. https://simonwillison.net/2024/Mar/26/llm-cmd/
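That propose-then-confirm loop is easy to sketch. A minimal version, where the proposed command would come from a model call (not shown here) and `ask` is injectable so the confirmation step can be tested:

```python
import subprocess

def confirm_and_run(command, ask=input):
    """Show a proposed shell command; run it only if the user hits enter.

    `ask` defaults to input() but can be any prompt function. Returns the
    CompletedProcess on a run, or None if the user aborted.
    """
    answer = ask(f"Run? {command} [Enter to run, anything else to abort] ")
    if answer.strip():
        return None  # user typed something: abort without running
    return subprocess.run(command, shell=True, capture_output=True, text=True)
```

The nice property of this design is that the model only ever *suggests*; nothing executes until a human has seen the exact command.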
I wrote about that one here: https://simonwillison.net/2025/Apr/20/llm-fragments-github/