Hacker News

I index anything interesting I come across in my DAG-backed blogging platform.

1) I run a local instance. When I see an interesting link, I paste it into the textbox at http://localhost:2784/. This creates a new parent item.

2) I create a sub-item under it, which may include several tags, such as #perl #toread.

3) When I read the page, I create new text nodes under the item to annotate and locally store the information.

4) Whenever I want to publish something to my public blog, I add a child item with the #publish tag, and it's automatically pushed (using curl over HTTP GET or POST).
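A minimal sketch of step 4, assuming a hypothetical /item/<id>/children endpoint on the local instance (only the port comes from step 1; the path and parameter names are my own illustration). The function prints the curl invocation instead of running it, so the sketch has no live dependency:

```shell
# Build (but do not run) the curl call that would attach a #publish-tagged
# child item to an existing item. Endpoint path is hypothetical.
build_publish_cmd() {
  printf 'curl -s -X POST http://localhost:2784/item/%s/children --data-urlencode text=%s\n' "$1" '#publish'
}

cmd=$(build_publish_cmd 42)
echo "$cmd"
```

In the real setup you would run the curl command directly; printing it here just makes the shape of the request visible.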

5) My public blog is unauthenticated, but I could also limit publishing rights to e.g. only items signed by my particular PGP key.
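One hypothetical way to implement the gate in step 5: verify a detached PGP signature over the item's payload before publishing. The file names (item.txt, item.txt.sig) and the gate itself are illustrative, not part of the platform; here no signature exists, so the item is rejected:

```shell
# Accept an item only if gpg verifies a detached signature over its payload
# against the local keyring. With no .sig file present, verification fails.
workdir=$(mktemp -d)
echo "candidate post" > "$workdir/item.txt"

if gpg --verify "$workdir/item.txt.sig" "$workdir/item.txt" 2>/dev/null; then
  verdict="publish"
else
  verdict="reject"
fi
echo "$verdict"   # -> reject (no valid signature)
```

The same check could run server-side on each pushed item, so publishing rights reduce to possession of the signing key.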

6) When my notepad gets full, I archive it into a zip file and start afresh. This is how I deal with "information overload bankruptcy".

7) If I'm looking for something I annotated in the past, I use zipgrep on my pile of zip files.



Sounds interesting, have you written more about it anywhere? Is the tool available for others to use?


Yes, there's a demo link in my profile, and the source is on GitHub.


The demo link seems to be asking for credentials - is there a different URL? The organization scheme sounds interesting, would love to know more.


Look carefully.


Ah, nice! The `l` threw me off, expected `u`.



