
I just got a project running where I used Python + pdfplumber to read in 1,100 PDF files, most of my Humble Bundle collection. I extracted the text and dumped it into a 'documents' table in PostgreSQL. Then I used sentence-transformers to embed each 1K-character chunk as a single 384D vector, which I wrote back to the db. Then I averaged these to produce a document-level embedding as a single vector.
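
For the curious, a minimal sketch of that extract-and-embed step (the model name is an assumption based on the 384 dimensions, embed_pdf is just an illustrative helper, and the PostgreSQL writes are omitted):

    import pdfplumber
    from sentence_transformers import SentenceTransformer

    # any 384D model works; all-MiniLM-L6-v2 is a common choice
    model = SentenceTransformer("all-MiniLM-L6-v2")

    def embed_pdf(path, chunk_size=1000):
        # extract the full text of the PDF
        with pdfplumber.open(path) as pdf:
            text = "\n".join(page.extract_text() or "" for page in pdf.pages)
        # split into ~1K-character chunks
        chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
        # one 384D vector per chunk
        chunk_vecs = model.encode(chunks)
        # document-level embedding = average of the chunk vectors
        return chunk_vecs, chunk_vecs.mean(axis=0)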

Then I was able to apply UMAP + HDBSCAN to this dataset, and it produced a 2D plot of all my books. Later I put the discovered cluster assignments back in the db and used them to compute tf-idf for my clusters, from which I could pick the top 5 terms to serve as crude cluster labels.
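
The clustering and labeling step might look roughly like this (clustering the 2D UMAP output is an assumption — HDBSCAN can also run on the raw 384D vectors; min_cluster_size and the vectorizer settings are placeholders, and doc_vecs/doc_texts are the per-book embeddings and texts from the previous step):

    import umap
    import hdbscan
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer

    # doc_vecs: (n_docs, 384) array of document embeddings
    xy = umap.UMAP(n_components=2, metric="cosine").fit_transform(doc_vecs)
    labels = hdbscan.HDBSCAN(min_cluster_size=5).fit_predict(xy)

    # crude cluster labels: top-5 tf-idf terms per cluster
    vec = TfidfVectorizer(stop_words="english", max_features=5000)
    tfidf = vec.fit_transform(doc_texts)  # doc_texts: one string per book
    terms = np.array(vec.get_feature_names_out())
    for c in sorted(set(labels) - {-1}):  # -1 marks HDBSCAN noise points
        scores = tfidf[labels == c].mean(axis=0).A1
        print(c, ", ".join(terms[scores.argsort()[-5:][::-1]]))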

It took about 20 to 30 hours to finish all these steps, and I was very impressed with the results. I could see my cookbooks clearly separated from my programming and math books. I could drill in and see subclusters for baking, BBQ, salads, etc.

Currently I'm packaging it as a two-container Docker Compose setup: a base PostgreSQL image plus a Python container I'm still working on.
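
Something like this two-service compose file, as a sketch of the shape (image tags, env vars, and the build context are placeholders, not the actual setup):

    # docker-compose.yml
    services:
      db:
        image: postgres:16
        environment:
          POSTGRES_PASSWORD: example
        volumes:
          - pgdata:/var/lib/postgresql/data
      app:
        build: .              # the Python container under development
        depends_on:
          - db
        environment:
          DATABASE_URL: postgresql://postgres:example@db:5432/postgres
    volumes:
      pgdata: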



This sounds like an interesting project. Do you have any plans to publish a tutorial/journal article + source code?



