AIConfig Extension for VS Code (visualstudio.com)
101 points by cybereporter on March 8, 2024 | 16 comments


Alternative extension to do something similar, but using VSCode's native notebooks: https://marketplace.visualstudio.com/items?itemName=jaaxxx.l.... It's more tailored to crafting long-form few-shot prompts, rather than a single text box. Think ChatGPT playground but with a text editor interface that isn't absolutely terrible.

The nice part about that one is that there's no separate server, no telemetry, and the backing file format is simple JSON you can import directly from your production application. However, the range of supported models is smaller (basically only LLaMA-style and OpenAI-style interfaces are supported).
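For illustration, here's what consuming that kind of prompt-config JSON from an app can look like with nothing but the standard library. The field names below are made up for the sketch, not any extension's actual schema:

```python
import json

# Hypothetical prompt-config file shared between the editor and an app.
# The schema here is illustrative only.
config_text = """
{
  "prompts": [
    {
      "name": "summarize",
      "template": "Summarize the following text:\\n{text}",
      "model": "gpt-4",
      "parameters": {"temperature": 0.2, "max_tokens": 256}
    }
  ]
}
"""

config = json.loads(config_text)
prompt = config["prompts"][0]
print(prompt["name"])                       # summarize
print(prompt["parameters"]["temperature"])  # 0.2
```

Since it's plain JSON, the same file can be checked into version control and read by any language's standard JSON parser, which is what makes the "no separate server" point attractive.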


I don't understand what the use case is here at all. It saves prompt parameters as JSON? Is that it?


I thought it supported a few use cases:

* Testing prompt behavior across various LLMs

* Sharing those prompts across multiple applications

We currently use a Jupyter notebook to iterate, test, and validate prompts, then move those prompts to our production app written in C#. If there were a C# SDK, I could use this tool to create a prompt config file and share it between the Jupyter notebook and the C# app. The config file could also be added to version control.
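A rough sketch of that workflow on the notebook side, assuming a hypothetical shared config file: both the notebook and the app read the same version-controlled JSON and fill in the template parameters at runtime (here with Python's `string.Template`; the C# side would do the equivalent with its own templating):

```python
import json
from string import Template

# Hypothetical version-controlled prompt config; names are illustrative.
shared_config = json.loads("""
{
  "prompts": {
    "classify_ticket": {
      "template": "Classify this support ticket as bug/feature/question:\\n$ticket",
      "model": "gpt-4",
      "temperature": 0.0
    }
  }
}
""")

def render_prompt(name: str, **params: str) -> str:
    """Fill the named prompt's template with runtime parameters."""
    entry = shared_config["prompts"][name]
    return Template(entry["template"]).substitute(params)

print(render_prompt("classify_ticket", ticket="App crashes on login"))
```

The point is that the prompt text and model parameters live in one reviewable artifact rather than being duplicated across the notebook and the app.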

Having said that, I don't understand why it saves the output of the LLM so maybe I'm missing something.

https://aiconfig.lastmileai.dev/docs/basics


If you enjoy OpenAI's playground, this looks pretty much like that, but you can use other LLMs.


JSON and YAML, but it also supports connectivity to a bunch of LLMs (including image models like DALL-E 3) and some prompt chaining.


It seems to be a config file format, library for loading it, interactive editor for it, and playground that uses it.


Looks like you can switch models and use the same prompts very quickly.
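To sketch what that buys you (with made-up model names and a stubbed `call_model`, not any real SDK): once the prompt is decoupled from the model settings, comparing providers is just a loop over configs:

```python
# Illustrative sketch: run one prompt against several model configs.
# Model names and call_model are stand-ins for a real provider API.
prompt = "Explain recursion in one sentence."
model_configs = [
    {"model": "model-a", "temperature": 0.0},
    {"model": "model-b", "temperature": 0.7},
]

def call_model(model: str, prompt: str, temperature: float) -> str:
    # Stub: a real version would call the provider's API here.
    return f"[{model} @ {temperature}] response to: {prompt}"

for cfg in model_configs:
    print(call_model(cfg["model"], prompt, cfg["temperature"]))
```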


Using models from any provider or modality in a single playground


VSCode is a good target for single-player editing, and I can see something like this being helpful, but what does a collaborative experience look like, eg if you have a team of folks all working on the same prompts?


Yeah, beyond source-controlling the configurations, cross-collaboration on teams isn't supported, but it's something interesting to explore, especially with engineers and PMs collaborating on prompting.


So one challenge we’ve had is that folks would copy-paste their prompt into a Google Doc, write under it, try to run it, and then paste back what worked well. A collaborative experience would be super useful so we don’t have to jump out and can just append to and build on each other’s prompts inline.


Isn't that what Git is for? Or do you mean lots of people editing the same prompt at the same moment, a la Google Docs? That sounds... hectic.


So I guess the idea is that you do your experimenting and iteration with this, then plug the config into your app?


Yeah, it seems like it. It's nice to have a way to share your prompts + logic without asking others to compile and build your app.


It contains telemetry, but it's not clear whether it collects your prompts or not.


It doesn't collect prompts, and there's a way to disable telemetry as well - https://github.com/lastmile-ai/aiconfig/blob/8a5a59d47cef474...



