Autism is a hoax orchestrated by the Democrats.


This is the way.


Just like the Epstein files and climate change! Why, those damn dirty Democrats are always up to no good!


"No Semantic Web until the Incentive Problem and the Trust Problem are solved."

No. The semweb is already functional as is (see my other comments here). Trust is orthogonal and can be, and is being, solved in different ways (centralized or decentralized, as in Wikidata/ORCIDs/org-ID-URIs).


Talking about “the incentive problem” as if it’s some minor fixable issue ignores all of human psychology and economics.

The climate crisis is a somewhat comparable example: it requires changing behavior on a massive scale for an abstract benefit. In the climate case the benefit is much more fundamental than what the semweb promises, and despite massive pain and effort we are still very far from addressing it. Thinking the semweb would happen just because it sounds cool is super naive.


"However, like many visions that project future benefits but ignore present costs, it requires too much coordination and too much energy to effect in the real world" ... Wikipedia, Wikidata, OpenStreetMaps, Archive.org, ORCID science-journal stores, and the thousands of other open linked-data platforms are proofing Clay wrong each day. He has not been relevant for a long time IMHO. Semweb > tag-taxonomies.


Many of the biggest companies in the world are using semweb tech: http://sparql.club

Open linked-data has been growing very fast over the last few years. Many governments are now demanding LD from their executive/subsidized organizations. These data stores are then made accessible using REST and/or SPARQL.
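
To give an idea, here is a minimal TypeScript sketch of querying such a store over plain HTTP. It uses the public Wikidata endpoint as a stand-in for any SPARQL-enabled data store; the query itself is just an example:

```typescript
// Minimal sketch: run a SPARQL query against a public endpoint over HTTP.
// The Wikidata endpoint stands in for any SPARQL-enabled open-data store.
const endpoint = "https://query.wikidata.org/sparql";

// Example query: ten items that are instances of "country" (wd:Q6256).
const query = `
  SELECT ?country ?countryLabel WHERE {
    ?country wdt:P31 wd:Q6256 .
    SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
  }
  LIMIT 10
`;

async function main() {
  const res = await fetch(`${endpoint}?query=${encodeURIComponent(query)}`, {
    headers: { Accept: "application/sparql-results+json" },
  });
  const data = await res.json();
  // Standard SPARQL JSON results: one binding object per row.
  for (const row of data.results.bindings) {
    console.log(row.countryLabel.value, "->", row.country.value);
  }
}

main().catch(console.error);
```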


Wikidata is already providing a nearly globally accepted store of concept IDs. Wikipedia adds a lot of depth to this knowledge graph too.

Schema.org has become very popular and Google is backing this project. Wordpress and others are already using it.
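
As a rough illustration, this is the kind of snippet a plugin can generate and embed in a page's HTML (the product data here is made up):

```typescript
// Sketch: build a schema.org JSON-LD snippet for a (made-up) product and
// wrap it in the <script> tag that gets embedded in the page's <head>.
const productJsonLd = {
  "@context": "https://schema.org",
  "@type": "Product",
  name: "Example Widget",
  description: "A hypothetical product, for illustration only.",
  offers: {
    "@type": "Offer",
    price: "19.99",
    priceCurrency: "EUR",
  },
};

const tag = `<script type="application/ld+json">${JSON.stringify(productJsonLd)}</script>`;
console.log(tag);
```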

Governments are requiring not just "open data", but also "open linked-data" (which can then be ingested into a SPARQL engine), because they want this data to be usable across organizations.

The financial industry is moving to the FIBO ontology, and on and on...


1)

- SPARQL is _a lot better_ than the many different forms of SQL.

- Adding some JSON-LD can be done through simple JSON metadata, something people using Wordpress can already do. All of this will become more and more automated.

- The benefit is ontological cohesion across the whole web: simple integration with many different stores of information in a semantically precise way. Take a look at the https://conze.pt project to see what this can bring you; the benefit is huge.

2) AI/NLP is never completely precise and requires huge resources (which require centralization). The basics of the semantic web will be based on RDF (whether created through some AI or not), SPARQL, and ontologies, extended and improved by AI/NLP. It's a combination of the two that is already being used for Wikipedia and Wikidata search results.


> The benefit is ontological cohesion across the whole web

This has no benefit for the person who has to pay to do the work. Why would I pay someone to mark up all my data, just for the greater good? When humans are looking at or using my products, none of this is visible. It's not built into any tools, it doesn't get me more SEO, and it doesn't get me any more sales.


Why are people editing Wikipedia and Wikidata? What would it bring you if your products were globally linked to that knowledge graph and Google's machines understood the metadata from the tiny JSON-LD snippet on each page? The tools are here already, the tech is still evolving, but the knowledge-graph concept is going to affect web shop owners soon enough, too.


It's unclear to me at this point why people are contributing to Wikipedia, and certainly Wikidata, but they're getting something out of it (perhaps notoriety), and a lot probably has to do with contributing to the greater good. It's all non-profit. The rest of the web is unlike these standout projects.

Meanwhile, why would, say, Mouser or Airbnb pay someone to mark up their docs? WebMD? Clearly nothing has been compelling them to do so thus far, and when you're talking about harvesting data and using it elsewhere, it's a difficult argument to make. Google already gets them plenty of traffic without these efforts.


They do it because it benefits them too. OpenStreetMap links with Wikidata (WD), GLAMs link with WD, journals/ORCIDs link with WD, all sorts of other data archives link with WD. Whoever is not linking with it may see a crawler pass by to collect license-free facts.

Also, I just checked: WebMD is using a ton of embedded RDF on each page. They understand SEO well as you said :)


What is "unsafe, untenable or hard" about embedding some JSON-LD (which is just some JSON metadata, transformed using a small JS library), like I did here: https://twitter.com/conzept__/status/1552719001826074625

Whether you trust the URIs or the data that was placed there is not a problem for the semantic web. The fact that you _can_ state these things and relate them to other resources and concepts on the web is already wonderful and useful in itself. Google is reading this metadata and relating it to their trust/ranking graph. The semantic web 'community' could do the same later, in a more decentralized way (blockchain web IDs perhaps?). For now it all works fine.


People should use something like JSON Schema to publish their structure. This doesn't solve the root denotation problem, but it would help a lot.
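
A minimal sketch of that idea with Ajv (https://ajv.js.org), using a made-up schema and record:

```typescript
// Sketch: publish a JSON Schema for your records and validate against it.
// npm install ajv
import Ajv from "ajv";

// A made-up schema describing the structure of a product record.
const schema = {
  type: "object",
  properties: {
    name: { type: "string" },
    price: { type: "number" },
  },
  required: ["name", "price"],
};

const ajv = new Ajv();
const validate = ajv.compile(schema);

console.log(validate({ name: "Example Widget", price: 19.99 })); // true
console.log(validate({ name: "missing a price" })); // false
```

Consumers still have to agree on what "name" and "price" denote (the root problem mentioned above), but at least the shape of the data becomes machine-checkable.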


I'm building a front end app for Wikipedia & Wikidata called Conzept encyclopedia (https://conze.pt) based on semantic web pillars (SPARQL, URIs, various ontologies, etc.) and loving it so far.

The semantic web is not dead, it's just slowly evolving and growing. Last week I implemented JSON-LD (RDF embedded in HTML with a schema.org ontology); it was super easy, and now any HTTP client can comprehend what any page is about automatically.
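
As a rough sketch of what that means in practice (a real client would use an HTML parser instead of a regex; the URL is just an example):

```typescript
// Sketch: fetch a page and pull out its embedded JSON-LD blocks, so an
// HTTP client can see what the page is about. A regex is good enough for
// a sketch; production code should parse the HTML properly.
async function extractJsonLd(url: string): Promise<unknown[]> {
  const html = await (await fetch(url)).text();
  const re = /<script type="application\/ld\+json">([\s\S]*?)<\/script>/g;
  const blocks: unknown[] = [];
  for (const match of html.matchAll(re)) {
    blocks.push(JSON.parse(match[1]));
  }
  return blocks;
}

extractJsonLd("https://conze.pt").then((blocks) =>
  console.log(JSON.stringify(blocks, null, 2)),
);
```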

See https://twitter.com/conzept__ for many examples of what Conzept can already do. You won't see many other apps do these things, certainly not ones built without semantic-web tech!

The future of the semantic web is in: much more open data, good schemas and ontologies for various domains, better web extensions understanding JSON-LD, more SPARQL-enabled tools, better and more lightweight/accessible NLP/AI/vector compute (preferably embedded in the client also), dynamic computing using category theory foundations (highly interactive and dynamic code paths, let the computer write logic for you), ...


The future of the semantic web is in big companies, where handling data exchange at scale is becoming a massive waste of time, resources and sanity.


That looks cool, thank you!


Just tried it out. Looks nice (except for that unremovable crypto button on the toolbar), but I need SpeechSynthesis (TTS) support. Will stay with FF and use MS Edge for its excellent TTS.

The search engine is pretty weak judging from my initial queries.


The crypto button is removable. Have a look through your settings.


Specifically, brave://settings/?search=hide+brave+rewards+button

