I don't know that it was a bad thing, though. As soon as I saw that, I started to think about how it might be possible, or even whether it could be possible at all. Fundamentally there has to be some kind of common discovery protocol underlying it; it just doesn't appear to be possible (yet) for two unknown systems to talk to each other over an unknown protocol. That'd be like two monoglots, a German speaker and a Russian speaker, figuring out how to converse fluently with each other. I suppose it would be possible using gestures and props, but those non-verbal cues could themselves be thought of as a kind of discovery protocol for working out the more efficient protocol that enables verbal communication.
You should look into hypermedia APIs. The entire point is to have a discoverable API where the developer doesn't need to know the low-level details. Theoretically, you could write a library to parse, adapt to, and act on another API. Rough idea of what a client looks like in the sketch below.
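
To illustrate, here's a minimal sketch in Python of a HAL-style hypermedia client; the entry-point URL and the "orders"/"first" link relations are made up for the example, and it assumes the requests library and a server that actually returns application/hal+json. The point is that the client hard-codes only the entry point and the link relations it understands; every other URL is discovered from the responses.

    import requests  # third-party HTTP library, assumed installed

    BASE_URL = "https://api.example.com"  # hypothetical entry point

    def follow(resource: dict, rel: str) -> dict:
        """Follow a link relation advertised by the resource itself."""
        href = resource["_links"][rel]["href"]
        response = requests.get(
            BASE_URL + href,
            headers={"Accept": "application/hal+json"},
        )
        response.raise_for_status()
        return response.json()

    # Start at the root; all subsequent URLs come from the server's links.
    root = requests.get(
        BASE_URL, headers={"Accept": "application/hal+json"}
    ).json()
    orders = follow(root, "orders")        # server said where "orders" lives
    first_order = follow(orders, "first")  # and where to go next from there

If the server later moves /orders somewhere else, this client keeps working as long as the link relations stay the same, which is the discoverability being described above.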