no mechanism that allows or even requires content authors to create their own metadata will ever take off.
That's (mostly) true if there's no trust model. Once you insert some sort of trust or authorization model, this problem gets mitigated pretty quickly. The most obvious counterexample to your point is the rich ecosystem of API-based mashups. Mashups exist as integration points for multiple content sources where content authors provide their own metadata.
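To make the mashup point concrete, here's a minimal sketch of the pattern: pull two providers' feeds, each carrying that provider's own metadata, and join them on a shared key. The endpoint URLs and field names below are invented for illustration.

    # Minimal mashup sketch: join two hypothetical JSON feeds on a shared key.
    # The endpoint URLs and field names are made up for illustration.
    import json
    from urllib.request import urlopen

    def fetch(url):
        """Fetch and decode a JSON document from an API endpoint."""
        with urlopen(url) as resp:
            return json.load(resp)

    def mashup(events_url, venues_url):
        """Combine an events feed with a venues feed, keyed on venue_id.

        Each provider publishes its own metadata; the mashup simply trusts
        and integrates whatever they expose.
        """
        events = fetch(events_url)  # e.g. [{"title": ..., "venue_id": ...}, ...]
        venues = {v["id"]: v for v in fetch(venues_url)}
        return [{**ev, "venue": venues.get(ev["venue_id"], {})} for ev in events]

Every piece of metadata here comes straight from the content authors' APIs; the integration works, or doesn't, based entirely on how much you trust those providers.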
While there's a lot of talk these days about the idea of "semantic search engines," most practical uses of the semantic web are actually just souped-up mashups -- solutions to integration problems combined with some lightweight AI.
Even disregarding the authentication aspect, there's laziness. People (myself at least, and I presume many or most others) can't be arsed to take the time to add tags that nobody will ever really see. It's a whole lot of work to tag anything more than the minimum.
That's about as true for mashups as it is for the semantic web; in both cases there is no "official" trust model used across the web, though it's obviously possible for semantic web providers to do what every other API provider does and simply trade on their reputation. After all, what makes you trust the data you get from any API, if not the provider's reputation? I do agree that a more uniform approach is needed to get wide usage of the technology beyond the enterprise, academia, and medicine.
This apparent lack of an official trust model won't be the case for long, with POWDER [1] on the horizon. POWDER is on its way to becoming a ratified W3C standard, and it takes steps toward addressing the idea of a "web of trust." Who knows what kind of traction it will get in the near term, but I think it's a step in the right direction.
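For the curious, a POWDER Description Resource is just an XML document that says "this authority asserts these descriptors about these URLs." Below is a rough sketch of reading one; the element names follow the published POWDER drafts, but the host, issuer, and descriptor text are invented.

    # Rough sketch of reading a POWDER Description Resource. Element names
    # follow the POWDER drafts; the host, issuer, and descriptors are invented.
    import xml.etree.ElementTree as ET
    from urllib.parse import urlparse

    POWDER_NS = "{http://www.w3.org/2007/05/powder#}"

    SAMPLE_DR = """<?xml version="1.0"?>
    <powder xmlns="http://www.w3.org/2007/05/powder#">
      <attribution>
        <issuedby src="http://authority.example.org/about.rdf#us"/>
        <issued>2009-06-01</issued>
      </attribution>
      <dr>
        <iriset>
          <includehosts>example.com</includehosts>
        </iriset>
        <descriptorset>
          <displaytext>Content on example.com is suitable for all ages</displaytext>
        </descriptorset>
      </dr>
    </powder>"""

    def describes(powder_xml, url):
        """Return the descriptor text if the URL's host is covered, else None."""
        root = ET.fromstring(powder_xml)
        host = urlparse(url).hostname or ""
        for dr in root.findall(f"{POWDER_NS}dr"):
            hosts = [e.text for e in dr.iter(f"{POWDER_NS}includehosts")]
            if any(host == h or host.endswith("." + h) for h in hosts):
                text = dr.find(f"{POWDER_NS}descriptorset/{POWDER_NS}displaytext")
                return text.text if text is not None else ""
        return None

    print(describes(SAMPLE_DR, "http://www.example.com/page"))

Note that nothing in the format itself tells you whether to believe the claim; that still hinges entirely on who is named in issuedby.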
POWDER is merely a standard way to pass around "Good Housekeeping's Seal of Approval", which has less substantive content than "Ron Jeremy gives this 4 stiffies". (The problem is not passing around these tokens, it's what they actually mean. POWDER, at best, punts, and is more likely to end up saying something like "Good Housekeeping means good stuff".)
The standard is also full of moronic blather like "Online child protection, as well as the continuation of offline child protection, is a priority for any responsible site or service provider, whether directed at children or not."
Okay, I'm being a bit unfair. It also has material that will produce bad results: "Web pages and whole Web sites containing any type of rich assets such as video/streaming video or audio can be tagged with that information using POWDER. A search engine, content aggregation or adaptation service can then determine whether a user is accessing content via a low or high bandwidth connection and return only those pages that contain assets and images that will be supported by that user's connection speed." (Think of all the ways this will do the wrong thing.)
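For what it's worth, that use case boils down to something like the sketch below. The descriptor field names and the bandwidth guess are my own assumptions, and the failure modes are easy to spot.

    # Hedged sketch of the bandwidth-adaptation use case quoted above. The
    # descriptor field names and the connection-speed estimate are assumptions.
    LOW_BANDWIDTH_KBPS = 256  # arbitrary threshold

    def filter_results(results, estimated_kbps):
        """Drop pages whose described assets exceed the user's guessed bandwidth.

        `results` is a list of dicts like:
            {"url": ..., "has_video": bool, "has_audio": bool}
        where the flags come from POWDER-style descriptions supplied by the
        publisher (and are only as accurate as the publisher keeps them).
        """
        if estimated_kbps >= LOW_BANDWIDTH_KBPS:
            return results  # guessed "high bandwidth": return everything
        # On a guessed "low bandwidth" connection, silently hide rich-media
        # pages -- wrong if the estimate is stale, the user is behind a proxy,
        # or the page degrades gracefully without its video.
        return [r for r in results if not (r.get("has_video") or r.get("has_audio"))]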
And then there's a new way to do stuff that already works: "A user pays an extra fee to his ISP in order to have privileged access to third-party premium content. When he accesses a premium page on one of these third-party Web sites via his ISP, the server is able to recognize him as a paying customer and deliver the content that has been described as premium by an associated Description Resource."
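Right: sites already do this with plain old authentication and no Description Resources in sight. A sketch, with every name (session store, plan field, pages) invented:

    # Sketch of the "premium content" use case done the way it already works
    # today: the site authenticates the user itself rather than relying on the
    # ISP to vouch for them. All names here are hypothetical.
    SESSIONS = {"abc123": {"user": "alice", "plan": "premium"}}

    def serve(path, session_id, pages):
        """Return a premium page only to authenticated premium subscribers."""
        page = pages[path]
        if not page.get("premium"):
            return page["body"]
        session = SESSIONS.get(session_id)
        if session and session["plan"] == "premium":
            return page["body"]
        return "402 Payment Required"  # or an upsell page

    pages = {"/report": {"premium": True, "body": "secret sauce"}}
    print(serve("/report", "abc123", pages))  # -> "secret sauce"
    print(serve("/report", "nope", pages))    # -> "402 Payment Required"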