
We have a prompt that takes a job description and categorizes it based on whether it's an individual contributor role, manager, leadership, or executive, and also tags it based on whether it's software, mechanical, etc.

We scrape job sites and use that prompt to create tags which are then searchable by users in our interface.
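A minimal sketch of that tagging flow, assuming a hypothetical `call_llm` helper standing in for whatever chat-completion API you use; the fixed label sets and validation are the only "software 2.0" code left:

```python
# Tag a scraped job description with seniority and discipline labels.
# `call_llm` is a placeholder for a real model call, not a specific API.

SENIORITY = {"individual contributor", "manager", "leadership", "executive"}
DISCIPLINE = {"software", "mechanical", "electrical", "other"}

PROMPT = (
    "Classify this job description. Reply with exactly two lines:\n"
    "seniority: one of individual contributor, manager, leadership, executive\n"
    "discipline: one of software, mechanical, electrical, other\n\n"
)

def call_llm(prompt: str) -> str:
    # Placeholder: wire this to your model provider in production.
    return "seniority: individual contributor\ndiscipline: software"

def tag_job(description: str) -> dict:
    reply = call_llm(PROMPT + description)
    tags = {}
    for line in reply.splitlines():
        key, _, value = line.partition(":")
        tags[key.strip()] = value.strip()
    # Models occasionally invent labels, so reject anything outside the
    # fixed vocabularies instead of storing a bad tag.
    if tags.get("seniority") not in SENIORITY or tags.get("discipline") not in DISCIPLINE:
        raise ValueError(f"unexpected labels: {tags}")
    return tags
```

The validation step matters in practice: storing only labels from a closed vocabulary is what keeps the tags searchable in a UI.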

It was a bit surprising to see how Karpathy described software 3.0 in his recent presentation because that's exactly what we're doing with that prompt.



In other words, are you using an LLM as a text classifier?


This is what I'm using it for as well; it's really simple to use for text classification of any sort.


Are there currently services (or any demand) for a text classifier that you fine-tune on your own data, that is tiny, and that you can own forever? For example, using ChatGPT plus synthetic data to fine-tune a nanoBERT-type model.


Can you elaborate on what makes this “software 3.0”? I didn’t really understand what the distinction was in Karpathy’s talk, and felt like I needed a more concrete example. What you describe sounds cool, but I still feel like I’m not understanding what makes it “3.0”. I’m not trying to criticize, I really am trying to understand this concept.


> Can you elaborate on what makes this “software 3.0”?

Software 2.0: We need to parse a bunch of different job ads. We'll have a rule engine, decide based on keywords what to return, do some filtering, maybe even semantic similarity to descriptions we know match with a certain position, and so on.

Software 3.0: We need to parse a bunch of different job ads. Create a system prompt that says "You are a job description parser. Based on the user message, return a JSON structure with title, description, salary-range, company, position, experience-level," pass it the JSON schema of the structure you want, and you have a parser that is slow and sometimes incorrect but (most likely) covers a much broader range of inputs than your Software 2.0 parser.

Of course, this is wildly simplified and doesn't include everything, but that's the difference Karpathy is trying to highlight. Instead of programming those rules for the parser yourself, you "program" the LLM via prompts to do the same thing.
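To make that concrete, here's a sketch of the Software 3.0 parser described above. `call_llm` is a hypothetical stand-in for a real chat-completion call (ideally one with a JSON/structured-output mode); the field names mirror the system prompt in the comment:

```python
import json

SYSTEM_PROMPT = (
    "You are a job description parser. Based on the user message, return a "
    "JSON object with title, description, salary-range, company, position, "
    "experience-level."
)

REQUIRED_KEYS = {"title", "description", "salary-range", "company",
                 "position", "experience-level"}

def call_llm(system: str, user: str) -> str:
    # Placeholder for a real model call. Returns a canned JSON reply here
    # so the sketch is self-contained.
    return json.dumps({
        "title": "Backend Engineer",
        "description": user[:80],
        "salary-range": "unknown",
        "company": "Acme",
        "position": "engineer",
        "experience-level": "mid",
    })

def parse_job_ad(ad_text: str) -> dict:
    # The "program" is the prompt; this function is just plumbing plus
    # a sanity check that the model returned the shape we asked for.
    parsed = json.loads(call_llm(SYSTEM_PROMPT, ad_text))
    missing = REQUIRED_KEYS - parsed.keys()
    if missing:
        raise ValueError(f"model reply missing keys: {missing}")
    return parsed
```

Note there are still two classic-code pieces you can't prompt away: JSON parsing (the model may emit malformed output) and schema validation (it may omit or invent fields).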


Thank you for the explanation, I appreciate it.





