
The ones that are gone are comments/documentation in code, unit testing, design patterns, and data model/db design. I'm happy documentation is gone, but I miss design, especially db design. AI generates SQL/code with no regard for backward compatibility or future maintenance.


I see. Do you have any examples where the design was just egregious?

From my experience generating db models/SQL schemas at my last job, it did a fine job based on existing patterns and the outlined requirements. One thing to note: we optimised our AI prompts and LLM rules a lot, to the point where we were able to get end-to-end unit-tested, functional PRs out rather quickly.


Not as much as the US. Everyone from Trump to Altman is betting on fusion. But China is more pragmatic, focusing on making fission resistant to supply-chain shocks in uranium. Since they are a fast follower, their plan might be to catch up once fusion is viable for practical use.


Their government is chipping in a lot - there's a CNBC video about it (https://youtu.be/nyn0HUqluVM) with footage of a lot going on there and in the US. It says China has 10x as many fusion PhDs and more patents. It'll be interesting to see how it pans out. They kind of overtook the US in batteries, solar and EVs by doing the same 10x-as-many-engineers thing.


Yes. We use Gemini with MCP, which supports photos and PDFs natively.


Ideally, you want to start small and iterate. With Promptrepo, you can use versioning to compare model outputs across different datasets. In the test UI, we calculate confidence scores using @promptrepo/score [1], which parses OpenAI’s logprobs and shows field-level reliability. Fields with low confidence are highlighted in red, making it easy to catch signs of overfitting or data drift.

[1] https://github.com/ManiDoraisamy/promptrepo-score
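For a rough idea of how logprob-based confidence can work: averaging a field's token logprobs and exponentiating gives the geometric mean of the token probabilities. This is a minimal sketch of the general technique, not the actual @promptrepo/score API; the function name and sample numbers are illustrative.

```python
import math

def span_confidence(token_logprobs):
    """Geometric mean of token probabilities for one extracted field.

    OpenAI's API can return a logprob per generated token; averaging the
    logprobs for the tokens that make up a field's value and applying
    exp() maps the result back to a 0..1 confidence score.
    """
    if not token_logprobs:
        return 0.0
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_logprob)

# Illustrative logprobs for the tokens of two extracted fields.
price_logprobs = [-0.02, -0.15, -0.01]  # model was fairly sure
sku_logprobs = [-1.2, -2.3, -0.9]       # model was guessing

print(round(span_confidence(price_logprobs), 3))  # ~0.94: looks reliable
print(round(span_confidence(sku_logprobs), 3))    # ~0.23: flag for review
```

A UI can then threshold these per-field scores, e.g. highlighting anything below 0.5 in red.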


Our target customers are product teams like ours. I'm the technical founder and was building a feature to extract product prices from unstructured text. But my co-founder, who understands all the pricing formats, is non-technical. I built Promptrepo so he could finetune the model himself and improve accuracy without relying on me. My bet is that this same dynamic exists in many other product teams.

Thanks again for the feedback and recommendation. We’ll update our site with clearer examples and target use cases!


Finetuning is technical, but OpenAI hides things like layers and learning rates, and uses LoRA under the hood. You just upload examples, usually around 50.

But even that's too much for most business users. Choosing input/output fields and generating and validating JSONL still feel like coding. That's why we built Promptrepo: it turns finetuning into a guided, no-code workflow using spreadsheets.
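For reference, OpenAI's chat fine-tuning endpoint expects one JSON object per line with a `messages` array. A quick sketch of generating that JSONL from spreadsheet-style rows (the rows and system prompt here are invented for illustration):

```python
import json

# Hypothetical spreadsheet rows: input text plus the structured output
# a domain expert filled in.
rows = [
    {"input": "Acme widget, $19.99 each",
     "output": {"product": "Acme widget", "price": 19.99}},
    {"input": "Bolt pack of 100 for $5",
     "output": {"product": "Bolt pack of 100", "price": 5.0}},
]

# One JSON object per line, each holding a complete chat example.
lines = []
for row in rows:
    lines.append(json.dumps({
        "messages": [
            {"role": "system", "content": "Extract product and price as JSON."},
            {"role": "user", "content": row["input"]},
            {"role": "assistant", "content": json.dumps(row["output"])},
        ]
    }))

jsonl = "\n".join(lines)  # upload this file to the fine-tuning API
```

Hiding this step behind a spreadsheet is exactly the kind of glue a no-code workflow can take care of.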


It depends on the use case. For many business workflows, where structured data is key, spreadsheets are already the source of truth. But for chat-based or unstructured tasks, a custom UI might make more sense.


Can you please clarify what you mean by "OpenAI projects"? Are you referring to the playground or the API for prompting or fine-tuning?


In OpenAI Pro ($20/mo), one can start a project with a set of files. Various chats can be had about the project topic, with the files providing additional information. I've discovered that projects are isolated: they use the memory configurable in settings, but they don't use chat history outside the project. This can give ChatGPT chats inside projects a different tone.

My question is this: is this fine-tuning with those project documents, or RAG? And what's the difference?


Thanks for the clarification. OpenAI projects in ChatGPT are meant for end users to get personalized help using their own documents, inside the ChatGPT UI.

Promptrepo is for developers and product teams to build new AI-powered features in their product. It’s about creating custom models that run behind the scenes in apps, not just improving a personal chat experience.

So while OpenAI projects use RAG for better chats, Promptrepo helps teams build and deploy fine-tuned APIs that serve structured outputs like JSON, labels, or extracted fields, so you can build your own AI-powered product.
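The contrast between the two approaches can be sketched in a few lines (the `retrieve` and `llm` callables below are placeholders, not a real SDK):

```python
def rag_answer(question, documents, retrieve, llm):
    """RAG: the base model is unchanged; relevant documents are looked
    up and injected into the prompt on every request."""
    context = retrieve(question, documents)  # e.g. embedding search
    return llm(f"Context:\n{context}\n\nQ: {question}")

def finetuned_answer(question, finetuned_llm):
    """Fine-tuning: the examples were baked into the model's weights up
    front, so inference needs no extra context in the prompt."""
    return finetuned_llm(question)
```

RAG pays a retrieval cost per request but stays current as documents change; fine-tuning pays a training cost once and is a better fit for fixed, structured tasks.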


Yeah, just like Dropbox was a passthrough for AWS S3.

Edit: Sorry about the snide comment. But if this ends up as a simple utility for finetuning, I'd be happy with that too. I just want to share a tool that's been very useful for building AI features in our products.


That's the biggest problem with showcasing to developers: they're just not representative of the user base at large. For example, any desktop application showcased will get annihilated for being Electron. You take a real risk putting a product in front of developers first, because they will leave commentary about it this way, forever - this is the internet, it's not going anywhere. We eat our own, basically.


I'm not saying there isn't value in this, just describing what it does. I might actually use it if they don't tie the fine-tuned models to being used only within their platform.


We don’t have automated evals, latency, or cost comparisons yet. But, Promptrepo does offer versioning and lets you deploy the same model across providers for comparison. Automating these comparisons is definitely on our roadmap.

