I built a unified Python library for AI batch requests (50% cost savings) (github.com/agamm)
4 points by funerr 5 months ago | 4 comments


I needed a Python library to handle complex batch requests to LLMs (Anthropic & OpenAI) and couldn't find a good one - so I built one.

Batch requests take up to 24h but cut costs by ~50%. Features include structured outputs, automatic cost tracking, state resume after interruptions, and citation support (Anthropic only for now).
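For context, this is roughly what the raw OpenAI Batch API flow looks like without a wrapper: write requests to a JSONL file, upload it, create a batch with a 24h completion window, then poll and fetch results. (A minimal sketch with placeholder model and prompts; batchata's own interface may differ.)

    import json
    from openai import OpenAI

    client = OpenAI()

    # One JSON object per line; each is an independent chat completion request.
    requests = [
        {
            "custom_id": f"doc-{i}",
            "method": "POST",
            "url": "/v1/chat/completions",
            "body": {
                "model": "gpt-4o-mini",  # placeholder model
                "messages": [{"role": "user", "content": f"Summarize document #{i}."}],
            },
        }
        for i in range(3)
    ]
    with open("requests.jsonl", "w") as f:
        for r in requests:
            f.write(json.dumps(r) + "\n")

    # Upload the file and create the batch; the 24h window is what buys the ~50% discount.
    batch_file = client.files.create(file=open("requests.jsonl", "rb"), purpose="batch")
    batch = client.batches.create(
        input_file_id=batch_file.id,
        endpoint="/v1/chat/completions",
        completion_window="24h",
    )

    # Later: poll until done, then download results keyed by custom_id.
    batch = client.batches.retrieve(batch.id)
    if batch.status == "completed":
        print(client.files.content(batch.output_file_id).text)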

It's open-source, feedback/contributions welcome!

GitHub: https://github.com/agamm/batchata


Neat! What’s the use case exactly? Kinda hard to figure out from skimming.


If you have LLM requests you don't mind waiting up to 24h for, you can save ~50% on costs. Great for document processing, image classification at scale, or anything where you don't need an immediate result from the LLM provider and cost matters.
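For example, on the Anthropic side the underlying Message Batches API looks roughly like this (a sketch with placeholder model, prompts, and IDs; not batchata's own API):

    import anthropic

    client = anthropic.Anthropic()

    # Each request carries a custom_id so results can be matched back to the input.
    batch = client.messages.batches.create(
        requests=[
            {
                "custom_id": f"img-{i}",
                "params": {
                    "model": "claude-3-5-sonnet-latest",  # placeholder model
                    "max_tokens": 256,
                    "messages": [
                        {"role": "user", "content": f"Classify description #{i} as cat, dog, or other."}
                    ],
                },
            }
            for i in range(100)
        ]
    )
    print(batch.id, batch.processing_status)

    # Later (up to 24h): once processing has ended, stream the results.
    for entry in client.messages.batches.results(batch.id):
        if entry.result.type == "succeeded":
            print(entry.custom_id, entry.result.message.content[0].text)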


Any concrete use cases where that 50 percent savings actually matters?



