
Yes, especially given that nobody can adequately define what AGI even means.

But hey, if we did, then we couldn't have so many meandering, unproductive conversations about it on Fridman or Rogan.

Recently, I used GPT-4 to read my cousin's writings on his fictional world and generate a chart of the timeline and concepts using Mermaid syntax. I think one of the best things about LLMs at the moment is that they can convert content between representations like that. Even if they don't get it totally right the first time, they can correct themselves on instruction, and it still saves time over coding something or downloading software.
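
A minimal sketch of the kind of call described above, assuming the pre-1.0 openai Python package; the model name, prompt wording, and input file are placeholders rather than the commenter's actual setup:

    # Ask GPT-4 to turn world-building notes into a Mermaid diagram.
    # Model, prompts, and file name are illustrative assumptions.
    import openai

    openai.api_key = "sk-..."  # your API key

    with open("worldbuilding_notes.txt") as f:  # hypothetical input file
        notes = f.read()

    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You convert fiction world-building notes into Mermaid diagrams."},
            {"role": "user",
             "content": "Produce a Mermaid timeline/flowchart of the events and "
                        "concepts in these notes:\n\n" + notes},
        ],
    )

    # The Mermaid source, ready to paste into a renderer.
    print(response.choices[0].message["content"])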



Were you able to paste his writings all in one prompt without exhausting token space? Or did you have to do something tricky to get around that?


The summary of his world-building wasn't super huge, so after I trimmed it a bit it was just small enough to fit within the ~8,000-token limit for the GPT-4 model on OpenAI. I, too, would like to know how to properly get around these technical limitations.
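
One way to check whether a document will fit before sending it is to count tokens locally with OpenAI's tiktoken library; a small sketch, with the file name as a placeholder:

    # Count tokens so you know whether the text fits the ~8k context of
    # the original GPT-4 model. The file name is a placeholder.
    import tiktoken

    enc = tiktoken.encoding_for_model("gpt-4")

    with open("worldbuilding_notes.txt") as f:
        notes = f.read()

    n_tokens = len(enc.encode(notes))
    print(f"{n_tokens} tokens")  # leave headroom for the system prompt and the reply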


I know it is something along these lines: Install a vector database. Use the API to get vector embeddings for the manuscript by feeding it in chunks. Apparently this is possible with the API even though it's not with normal ChatGPT? Then, think of the query you want to ask. Use the API not to answer the query, but to get the vector embedding for the query. Then do a search in the vector database to get the vectors that are "near" the vector for the query. Then, finally, send the query's vector and all the "near" vectors to the API, and you'll get your answer.

I don't know how to do any of that yet. So far it seems like milvus might be the easiest vector db to install locally. But vectors for text passages are very large, so I'm not sure why I'd expect the final query of multiple vectors to be small enough for the token limit. And I'm not really sure yet how to send a query to ChatGPT in vector form.

(Ideally this could work against an open model that isn't ChatGPT.)
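
A rough sketch of the retrieval flow described above, with an in-memory cosine-similarity search standing in for a dedicated vector database like Milvus; the chunking, model names, and file name are illustrative assumptions, and note that the final call sends the retrieved text rather than raw vectors:

    # Retrieval-augmented generation sketch (openai pre-1.0 style).
    # An in-memory list plus cosine similarity stands in for a vector DB.
    # Chunk size, model names, and file name are illustrative assumptions.
    import openai
    import numpy as np

    openai.api_key = "sk-..."

    def embed(text):
        resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
        return np.array(resp["data"][0]["embedding"])

    # 1. Chunk the manuscript and embed each chunk.
    with open("worldbuilding_notes.txt") as f:
        manuscript = f.read()
    chunks = [manuscript[i:i + 2000] for i in range(0, len(manuscript), 2000)]
    index = [(chunk, embed(chunk)) for chunk in chunks]

    # 2. Embed the question and find the nearest chunks by cosine similarity.
    question = "How did the two kingdoms end up at war?"
    q_vec = embed(question)

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    ranked = sorted(index, key=lambda cv: cosine(q_vec, cv[1]), reverse=True)
    top_chunks = [chunk for chunk, _ in ranked[:3]]

    # 3. Send the question plus the *text* of the nearest chunks (not the
    #    vectors) to the chat model, so the prompt stays under the token limit.
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Answer using only the provided excerpts."},
            {"role": "user", "content": "Excerpts:\n" + "\n---\n".join(top_chunks)
                                        + "\n\nQuestion: " + question},
        ],
    )
    print(response.choices[0].message["content"])

The embeddings are only used for the nearest-neighbour lookup; what goes back to the chat model is ordinary text, which is why the final prompt can stay under the token limit even though the vectors themselves are large.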


Ask GPT-4 to compress the prompt in several shots, then do one last shot with the compressed chunks.
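
A hedged sketch of that multi-shot compression idea, reusing the same assumed openai setup as above; the prompts, chunk size, and file name are illustrative:

    # Multi-shot compression: summarize each chunk separately, then run one
    # final shot over the concatenated summaries. Prompts and chunk size are
    # illustrative assumptions.
    import openai

    openai.api_key = "sk-..."

    def chat(prompt):
        resp = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message["content"]

    with open("worldbuilding_notes.txt") as f:
        manuscript = f.read()
    chunks = [manuscript[i:i + 6000] for i in range(0, len(manuscript), 6000)]

    # Shots 1..N: compress each chunk.
    summaries = [chat("Summarize these notes as tersely as possible, keeping all "
                      "names, dates, and causal links:\n\n" + c) for c in chunks]

    # Final shot: work from the compressed chunks.
    answer = chat("Using these compressed notes, produce a Mermaid timeline of "
                  "the major events:\n\n" + "\n\n".join(summaries))
    print(answer)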



