I don't disagree on any particular points. When I first read about it and started playing with it I definitely felt like my expectations were set to something other than what IPFS actually provides.
That said, I think systems of this nature are worth pursuing, and perhaps IPFS itself can be improved for more general-purpose use cases. For my part, I think it'd be awesome to be able to write some HTML and CSS, make some images, run `ipfs add ~/website`, and then be able to send anyone a link to my content with reasonable guarantees of its existence for the rest of my life. I can host my own websites, but it's not a particularly enjoyable experience.
> This is a lot more convenient than someone having to explicitly spider it etc (although a combination would probably work/be good idea -- eg: an IPFS "dmoz.org" where authors could register content index-pointers for others to spider/download into their IPFS nodes -- and index for search).
IIRC it's possible to follow announcements of new hashes on the network and retrieve them automatically. I picked this up from #ipfs on FN, I believe, so I'm not 100% sure about it. Doing that would make an IPFS search engine fairly robust (and interesting to build, actually).
ipfs dev here! This is indeed possible: you can listen for announcements (provider messages) of hashes that are near your node's peer ID in the Kademlia metric space. To get a complete picture of all hashes on the network, you would need to ensure your nodes have reasonable coverage of the keyspace (enough that the K-closest-peers lookup for any hash would return at least one of your nodes).
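To make the coverage idea concrete, here's a minimal toy simulation (not real libp2p/DHT code; the peer counts and the K=20 replication parameter are just illustrative assumptions): it sorts a set of random peer IDs by XOR distance to a content hash and checks whether at least one of "your" nodes lands in the K-closest set, i.e. whether you'd hear the provider announcement for that hash.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"math/big"
	"sort"
)

// Kademlia compares keys by XOR distance. A provider announcement for a
// hash is stored on the K peers whose IDs are closest to that hash, so to
// see (nearly) every announcement you want your own node IDs spread out
// enough that at least one of them falls in the K-closest set for any key.

const k = 20 // Kademlia replication parameter (20 in the IPFS DHT)

// xorDistance returns the XOR of two 256-bit IDs as a big.Int.
func xorDistance(a, b [32]byte) *big.Int {
	var d [32]byte
	for i := range a {
		d[i] = a[i] ^ b[i]
	}
	return new(big.Int).SetBytes(d[:])
}

// coversKey reports whether at least one of ourIDs would be among the
// K closest peers to key, given all peer IDs on the (toy) network.
func coversKey(key [32]byte, allPeers, ourIDs [][32]byte) bool {
	sorted := append([][32]byte(nil), allPeers...)
	sort.Slice(sorted, func(i, j int) bool {
		return xorDistance(sorted[i], key).Cmp(xorDistance(sorted[j], key)) < 0
	})
	closest := sorted
	if len(closest) > k {
		closest = closest[:k]
	}
	ours := map[[32]byte]bool{}
	for _, id := range ourIDs {
		ours[id] = true
	}
	for _, p := range closest {
		if ours[p] {
			return true
		}
	}
	return false
}

func main() {
	// Toy network: 1000 random peer IDs, the first 50 of which are "ours".
	all := make([][32]byte, 1000)
	for i := range all {
		all[i] = sha256.Sum256([]byte(fmt.Sprintf("peer-%d", i)))
	}
	ours := all[:50]

	// Estimate what fraction of random content hashes at least one of
	// our nodes would receive a provider announcement for.
	covered := 0
	const trials = 500
	for i := 0; i < trials; i++ {
		key := sha256.Sum256([]byte(fmt.Sprintf("content-%d", i)))
		if coversKey(key, all, ours) {
			covered++
		}
	}
	fmt.Printf("covered %d/%d random keys\n", covered, trials)
}
```

Running something like this with different ratios of "your" nodes to total network size gives a rough feel for how many nodes you'd need to deploy before the K-closest sets for arbitrary hashes almost always include one of yours.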
I really want to build something like this, just haven't had the time to do so.