Hacker News

If that's a thing, does learning how to write a "poisoned" statement that misleads an LLM but is still factual become a thing too?


As someone alluded to, the narrative that management drives has been examined and studied many times over: what management is saying, what they are not saying, what they are saying but not loudly, and what they said before that they no longer speak about. There are insights to glean, but nothing that gives you an unknown edge. Sentiment analysis and the like go back to at least the late '80s and early '90s.


Maybe, but it sounds hard if there are multiple LLMs out there that people might use to analyze such text. Tricking multiple LLMs with one poisonous combination of words and phrases sounds a lot like crafting a file that collides under several different hash functions at once: theoretically possible, but practically impossible.


It has already been done: quants have analyzed company statements for at least the last decade, counting positive and negative words, etc...
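The word-counting approach mentioned above can be sketched in a few lines. This is a minimal illustration, not any particular quant shop's method; the word lists are tiny stand-ins (real work uses domain lexicons built for financial text), and the scoring function is an assumed, simple net-count ratio.

```python
# Minimal sketch of dictionary-based sentiment scoring of a company
# statement: count positive and negative words, return a net score.
# The word sets below are illustrative placeholders, not a real lexicon.

POSITIVE = {"growth", "strong", "record", "improved", "gain"}
NEGATIVE = {"decline", "weak", "impairment", "loss", "uncertainty"}

def sentiment_score(text: str) -> float:
    """(positive count - negative count) / total words; 0.0 if empty."""
    words = [w.strip(".,;:!?").lower() for w in text.split()]
    if not words:
        return 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / len(words)

statement = "Record growth this quarter, despite some uncertainty in margins."
print(round(sentiment_score(statement), 3))
```

A poisoned statement in the sense discussed above would be one written to push such a score up (or down) while remaining literally true, which is exactly why a crude counter like this is easy to game.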





