riskable's comments | Hacker News

Summary: Ground.news puts far too much faith in the legitimacy of many right-wing headlines, giving them front-page placement as "blindspots". Meaning: even though the story itself is complete BS/non-factual, the algorithm presents it as a legitimate story that simply isn't being covered by left-leaning or centrist outlets.

To fix this, Ground.news should update their algorithm to apply much more scrutiny to headlines trending on right-leaning sites, especially considering that nearly all of their right-leaning sources have very low or basically non-existent factuality ratings.

...or just remove (entirely) any "news" source that doesn't have a high factuality rating. That would be the most logical choice, but I do recognize that it would remove nearly all right-leaning "news" sites.

If I were in charge I'd reorient the leanings so that sources like NPR show up not as "left-leaning" but as centrist (i.e. a more European alignment, which is more historically accurate, IMHO), since that's where they actually sit. Sites like Breitbart would be discarded entirely for failing tests of factuality.


I believe the main purpose of Ground News—especially the Blindspot feature—is to highlight what the "other side" is reading, rather than to lend legitimacy to the reported content. I still find that information valuable, but perhaps that distinction could have been made more explicit.


I don't know what to do with that information. "The other side believes absurd new thing" is subsumed by "The other side continuously believes absurd new things", which I already knew -- and I already knew I couldn't change it.


It’s really about understanding the broader spectrum. You could assume that what you’re reading is the whole picture—but it’s still useful to see how the existence of these so-called “absurd things” influences and shapes the narratives within the media you do choose to consume.


Sources should really be evaluated on a multi-dimensional axis:

- Factuality (low -> high)

- Partisanship (low -> high)

- Political orientation: social issues (left -> right)

- Political orientation: economic issues (left -> right)

- Political orientation: foreign policy (left -> right)

etc.

Sources like NPR are decidedly not centrist w.r.t. social issues and partisanship. There has been a distinct change in their reporting over the last ~decade.

(also, I'm aware the left-right scale is very lacking. Economics itself has so many dimensions that left/right is meaningless there.)
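To make that concrete, a rating along these axes could be modeled as something like the sketch below (hypothetical type and field names, purely illustrative; this is not how Ground News or any real rating service actually scores sources):

    // Hypothetical multi-axis rating for a single news source.
    struct SourceRating {
        factuality: f32,     // 0.0 (low) to 1.0 (high)
        partisanship: f32,   // 0.0 (low) to 1.0 (high)
        social: f32,         // -1.0 (left) to 1.0 (right)
        economic: f32,       // -1.0 (left) to 1.0 (right)
        foreign_policy: f32, // -1.0 (left) to 1.0 (right)
    }

    impl SourceRating {
        // One possible filter: drop anything below a factuality threshold
        // before even looking at the political axes.
        fn passes_factuality(&self, threshold: f32) -> bool {
            self.factuality >= threshold
        }
    }

The point of separating the axes is that "low factuality" and "right-leaning" stop being the same column, which is exactly the conflation being complained about upthread.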


It'd be nice, but I can't see it being effective at changing broad consumer behavior. Giving people more information to make a choice doesn't add much value when they're already making poor choices based on the information that already exists.

Put another way, I wish we had the nuanced shit to sift through where some nice multi-dimensional analysis could save the day. But the issue is that people are consuming shit that is demonstrably shit from the first whiff/taste. I can't see how having some pretty vectors showing them where their shit lives in shitspace is going to be helpful.


I don't think anyone could (or even necessarily should) design a service like this for the masses. (Designing anything with the masses in mind is probably a way to end up with a mediocre product regardless, but I digress.) This would be more for people like us and the typical HN crowd: interested in truth and nuance for the sake of knowing the world accurately.

The problem of people consuming low-information, rage-baiting, distorted, highly-partisan slop masquerading as news is one I don't have the first idea how to solve. The issue isn't even the existence of the shit itself: that sort of content has always existed and always will, regardless. The issue is (i) the volume of people consuming and accepting it, and more importantly, (ii) how much influence that garbage has on national discourse and policy. For example, Musk consumes that sort of content and then feeds it into Grok (hello, MechaHitler), which then propagates to elected officials and the electorate and has real-world consequences in policy.


I feel like the whole concept of "left/right", when applied to news reporting especially, just makes the "false balance" issue impossible to deal with.

We don't have "right leaning facts", we either have facts or we don't.

And yes, omitting facts or focusing on others can certainly influence people, but we aren't even at that point any more, we have lies being positioned as equally valid "other side" arguments.


> If I were in charge I'd reorient the leanings so that sources like NPR show up not as "left-leaning" but as centrist (i.e. a more European alignment, which is more historically accurate, IMHO)

It is not historically accurate. The terms left and right come from the French Revolution.

Those on the right supported a strong monarchy, those in the center supported a weak monarchy within a republic, and those on the left supported a republic and no monarchy.

Pretty much every news outlet, regardless of whether it is now considered "right wing", would meet the historical definition of left wing.


Yes, these things mutate over time. Their historical roots are interesting but have no bearing on their common usage today. A similar story for those who claim left-wing politicians in the USA are really "European centrists" -- possibly true, but pedantic and irrelevant in the context of American left-right discussions.


> Yes, these things mutate over time. Their historical roots are interesting but have no bearing on their common usage today.

I agree. I was just pointing out that the historical usage is not more accurate than today's usage, and that his definition of "historical" was still quite modern.

> A similar story for those who claim left-wing politicians in the USA are really "European centrists" -- possibly true, but pedantic and irrelevant in the context of American left-right discussions.

I also agree, but I think it's hard to know what politicians actually believe. I'm sure there are a number of communists in Congress; they just aren't open about it and support more moderate legislation so as not to look radical. For all we know, many left-wing US politicians are further left than European ones.

Just looking at what they say and how they vote isn't a true reflection of their beliefs.


Exactly! If Anthropic is guilty of copyright infringement for the mere act of downloading copyrighted books, then so are Google, Microsoft (Bing), DuckDuckGo, etc. Every search engine in existence downloads pirated material every day. They'd all be guilty.

Not only that, but all of us are guilty too, because I'm positive we've all clicked on search results that contained copyrighted content copied without permission. You may not even have known it at the time.

Remember: Intent is irrelevant when it comes to copyright infringement! It's not that kind of law.

Intent can guide a judge when they determine damages but that's about it.


It's the first step in a two-step process:

    1. Rewrite in (unsafe) Rust.
    2. Update the code over time, moving towards safe Rust.
It's the old "get it working, then fix it" process. In business that's normally a bad idea, because you end up wasting more time than if you'd just done things correctly from the start, but for a hobby project it's fine: you're more likely to learn something and possibly, ultimately, end up with a better end product.
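As a minimal sketch of what that two-step migration can look like in practice (a hypothetical function, not taken from any particular project):

    // Step 1: a direct port that keeps the C-style raw-pointer interface.
    unsafe fn sum(ptr: *const i32, len: usize) -> i32 {
        let mut total = 0;
        for i in 0..len {
            // SAFETY: caller guarantees `ptr` is valid for `len` reads.
            total += unsafe { *ptr.add(i) };
        }
        total
    }

    // Step 2: the same logic, reworked to take a safe slice instead.
    fn sum_safe(values: &[i32]) -> i32 {
        values.iter().sum()
    }

The unsafe version gets you something that compiles and runs on day one; the safe version is what you migrate callers to over time.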

To a business, time your developers spend learning things (the hard way) is wasted.

To a hobbyist, taking the time to learn things is time well-spent.


In business it can also be a good idea, because if you wait for it to be done correctly you may never deliver a product at all, even when you have a working (but not 100% ideal) one. A compromise is to get a subset of your target capabilities working correctly, leave the rest unimplemented, and deliver that before continuing on.


Lines of Code has always been a tertiary indicator at best. It's only supposed to be used as a (very) rough gauge when you're trying to figure out the overall complexity of a project. As in, "there's 50 million lines of code in Windows."

Knowing a figure like that, you can reason that it's too big for a single developer. Therefore, you'll likely need at least two; and maybe a few thousand marketing people to sell it.


Speak for yourself! Some of us don't think about the problem at all, write the code, then don't think about it afterwards either!

This is how "features" get added to most Microsoft products these days :thumbsup:


Your comment reminds me of people complaining about how using emoji in communications/text has become normalized. Generating images with AI is pretty fun and seems like an appropriate thing to do for a personal blog. As in, this is the exact sort of place where it's most appropriate.

It's not like this person was ever going to pay someone to make a cartoon drawing, so nobody lost their livelihood over it. It seems like a harmless visual identifier (one that helps you remember whether you've read the article if you stumble across it again later).

Is it really such a bad thing when people use generative AI for fun or for their hobbies? This isn't the New York Times.


If only this worked with image generation! There are vastly more applications for this kind of thing in that space. They're more fun too :)


Everyone in the cult knows what cargo is.


> Keep in mind, you can legally engineer EULAs in such a way that merely purchasing the work surrenders all of your fair use rights.

That has yet to be determined in a court of law. It's just like how you can write a contract to kill, but that won't make it legal.

The Supreme Court ruled that fair use is an essential component that makes copyright law compatible with the First Amendment. I highly suspect that if it ever comes up before SCOTUS, they will rule that only signed contracts can override fair use. Meaning: clickwrap agreements or broad contracts required by ebook publishers (e.g. when you use their apps) don't count.

Also, if you violate a contract by posting an excerpt of an ebook you purchased online, the publisher would have to sue you in court (or at least force arbitration) over that contract violation. They could not use tools like the DMCA in such an instance to enforce a takedown request.

There's no "Hey! They're violating our contract, I swear!" takedown mechanism in contract law like there is in copyright law (the DMCA).


> Anthropic is liable for that copying

That's yet to be determined. The judge ruled that an entirely separate trial will be necessary to determine if Anthropic violated specific copyrights when they downloaded books from pirate websites and what the damages would be if they did so.

So far no court case has ruled that downloading alone is a violation of copyright. In Sony BMG Music Entertainment v. Tenenbaum and Capitol Records, Inc. v. Thomas-Rasset, the courts ruled that downloading and then sharing the content constituted a violation of copyright law. Those are the only two relevant cases I'm aware of where a ruling was made.

The courts need to be very careful with any such ruling, because search engines download pirated content all day, every day. If the mere act of downloading it violated copyright law, that would break the Internet (as we know it).

