
What if money gets sent to websites that host hate speech?


Then they are saying something people want to hear, since it's all completely subjective anyway.


Why do you think anyone reserves the right to block funding for whatever the government deems “hate speech”? If someone wants to pay for hate speech, they can already do it right now with donations.

Ideas like this quickly devolve into the government marking any message that criticizes it as “hate speech”. Democracies like Japan, and to an extent Singapore, already do that officially, by law.

Nations like Canada, India, Israel, and Hungary do it indirectly, in unofficial but rampant ways.

Speech is not a crime; listening to hate speech and then becoming unjustly hateful towards people is a crime.

Punish the people who discriminate and act on the advice of hate speech. Do not punish speech itself, or soon enough you’ll be confused when the government starts changing what “hate” means.


> Speech is not a crime; listening to hate speech and then becoming unjustly hateful towards people is a crime.

I'm confused. You start off talking about the merits (or lack thereof) of blocking payments related to hate speech, and then you point to the danger of government deciding what "hate" means.

That is all well and good, but to get back to the original issue: I'm wondering whether you feel paying money to people spreading hate speech amounts to a crime or not.


Unfettered speech can lead to a cult following, which, as we here in Europe know very well, can be extremely fucking dangerous. Which is precisely why some speech will land you in prison.


Yes, exactly. We wouldn't have had Hitler if we'd had hate speech laws back in the day!11!1


Germany had plenty of hate speech laws back when Hitler was elected. Still, the question that remains is: which government gets to decide what's hate speech? Even if we could have a payment web API that could "block" hate speech (not sure how that would be possible), do we go by the standards of the US? Europe? Russia? How would it work concretely?

Trying to discuss the implications of hate speech for something international is asinine, IMO; the term basically means everything and nothing when used in a global context.


Why do you care about the two degrees of separation?

The people posting hate speech already get in legal trouble.

And the websites hosting that can eventually get in trouble too (if they can't show that they did due diligence / were sufficiently close to a "common carrier" status).

And if it gets bad enough (typically if these first two categories violate a bunch of other laws), their funding might get investigated too. Money flows for that kind of use are typically fairly easy to track, as long as law enforcement actually bothers to.


What happens now when money gets sent to websites that host hate speech?

Twitter/X hosts plenty of hate speech and is the #6 site in the world according to Wikipedia currently.


In the US the receiver gets unbanked without warning or explanation and permanently loses the ability to accept money through any service via systems like MATCH.


Right... so far this has happened to 3 sites in the top 10 of this list: https://en.wikipedia.org/wiki/List_of_most-visited_websites

2 of the sites in the top 10 of that list have been criticized for hosting hate speech.

The 3 sites which have had trouble receiving payments don't overlap with the 2 which have been criticized for hosting hate speech.

And notably, many sites which are more hateful than those and have experienced deplatforming efforts, are still thriving, such as KiwiFarms and 4Chan. As it turns out, there are enough hateful people out there that they can manage technical/financial solutions to deplatforming efforts.

Hate speech is becoming a go-to justification for policing the internet, but the reality is that those policies are more effective in harming user privacy and freedom than they are in curbing hate speech.


Porn can be an understandable concern because of the number of chargebacks it can create for processors, though the rule should target the excessive chargebacks, not the porn itself. As for hate speech, they never tried to explain it, AFAIK. FWIW, repealing fair banking[1] was an immediate priority for Biden, up there with fortifying elections[2].

1: https://reason.com/2021/02/11/biden-administration-suspends-...

2: https://www.congress.gov/bill/117th-congress/house-bill/1


You're not saying anything I don't already know, you're just changing the topic.

My point is that when you said, "In the US the receiver gets unbanked without warning or explanation and permanently loses the ability to accept money through any service via systems like MATCH", that's flat wrong in the most prominent cases, and in less-prominent cases where deplatforming has been applied it hasn't been effective.

It's absurd to try to block a direct payment model to all content creators because you think it will be an effective payment model for hate speech, when hate speakers already have working payment models.


The regime doesn’t want people saying or thinking things they aren’t currently being told to say and think. The regime is technically not allowed to use the legal system directly to punish people for this quasi-crime, but it is very happy to ruin everything else in the same pursuit. So why should a protocol for p2p financial transactions the regime doesn’t control be allowed to exist?


Who said anything about p2p? Crypto is poorly-suited for microtransactions due to its inefficiency.

"The regime" has nothing to do with it; "the regime" has already effectively absorbed cryptocurrency. Coinbase is listed on NASDAQ, and all the major cryptocurrencies have zero privacy, allowing the powers that be to see everything that happens on the public ledgers.


Incredible effort. The text of the scroll is essentially "drink more Ovaltine", which is on theme.


Did they ever explain the logo thing? https://longform.asmartbear.com/logo/


> The Smart Bear logo was made by a freelance designer in Austin, TX.

Probably both design teams ripped off the same stock artwork for a bear by tracing it.


That's possible but not likely, IMO.


It is very likely. Or perhaps they both just googled a bear and traced the same image.


Why do you say that? I work in marketing and this stuff happens all of the time.


Because either it would have been wildly popular, and there would be a lot of other copies, or it would have been easier for Drizzly to discover it from a popular startup blog than from not-so-popular stock artwork. Perhaps they could have carried out a search and found the stock source as justification for copying it, but that wouldn't have the same story of independent discovery.

Also, if you take asmartbear at his word, that sort of implies he at least somewhat believes his designer :) If he thought his designer had no scruples about claiming they made it, I don't think he'd have said so plainly that it "was made by a freelance designer".


> or it would have been easier for Drizzly to discover it from a popular startup blog than from not-so-popular stock artwork

It's just a traced vector of a bear. Again, as someone who is familiar with this process, if we were just going to steal something, we would have put a bit of effort into not looking stolen.

Even an amazing designer can't make a silhouette of a bear from memory, and he probably didn't take the photo himself. So he surely stole from something else too.


Seems very likely to me.


I'm probably missing context, or I'm desensitized by all the blatant copyright violations of the last year, but this is the outline of a bear that, by your link's own admission, is not exactly the same, is flipped, has a different color, and has some text over it. Even if they just took that person's design and modified it, is there really something to explain?


And even then, my 8-year-old could easily create a grizzly silhouette from a photo in 2 minutes. Is this really something to struggle over?


What lawsuits? They send updated terms of service ahead of the change, so it’s all totally legal.


How would it collapse? People will pay $100/mo instead of buying a new car, and even then, what would they buy? Other car companies will see Tesla getting free money and push the same kind of updates themselves.


So you're saying that people get a new Tesla for free, then only pay $100 per month for 'extras'?

Tesla would definitely collapse, seeing that they're not getting paid for making the car in the first place. They'd be broke within a couple of months.


No, Tesla already sold these cars and took people's money, and they continue to sell cars the same way. This is about charging existing and new customers $100/mo because it's a number Tesla thought of. It's not for extras; it's for all functionality that isn't legally required for safety, e.g. the tablet screen, A/C, the ability to adjust your seat.


Context: a couple of nights ago, a set of my lightbulbs lost all functionality, now insisting I get an account, sign in with a 15-character password with certain symbols and mixed case, receive constant marketing emails whether I accept them or not, accept terms of service numbering 10,000+ words, and then re-pair the bulbs, which could no longer be recognized anyway after long cycles of failed attempts. Humiliation ritual complete; the bulbs are now e-waste, I guess.

My $1,000 Peloton bike signed me out of my account and insisted on payment before I could sign back in at all, though it still functions as a $20 salvage bike with no features.

Is it optimal for all companies to completely cripple products once they've been sold? What's holding them back here?


Absolutely nothing.

Welcome to why software clickwrapping and EULAs are a blight on the concept of contract law. Also welcome to why "consideration" has been so dubiously defined as to become meaningless in the modern era.

In a world where everything is gated behind licensing, and all the mediums for executing software are increasingly headed toward proprietary lock-in via cryptographic hash locks burned into firmware, the right of first sale is a vanishing luxury that is simply not being offered anymore.


Yes, he’s the poster of the linked video.


> Toner maintains that safety wasn’t the reason the board wanted to fire Altman. Rather, it was a lack of trust

> Toner declined to provide specific details on why she and the three others voted to fire Altman from OpenAI

Does she see herself as more trustworthy? She can't even be bothered to give an excuse for the firing.

Someone like this couldn't be trusted in literally any function. How did she get a board seat on OpenAI?


> Does she see herself as more trustworthy? She can't even be bothered to give an excuse for the firing.

I agree that outsiders can't weigh in on this topic without statements from the board.

But as the (former) board member of a private organization, she's not accountable to the public. Making statements about why she fired Altman can carry substantial legal risk.


So what’s the point of doing a public interview claiming you were right but refusing to provide the reasoning?


To preserve your reputation without getting sued.


> how did she get a board seat on OpenAI?

She replaced Holden Karnofsky as a representative of the "effective altruist" group Open Philanthropy, which donated money early on to OpenAI.

From the article:

"She succeeded her former manager from Open Philanthropy, Holden Karnofsky, on the OpenAI board in 2021 after he stepped down. His wife co-founded OpenAI rival Anthropic."


I have been wondering the same about Altman. Dude had a failed startup, and somehow wormed his way into being the president of YC, a job he f'd up and got fired from, and then managed to parachute into being the CEO of OpenAI.

Some people just fail upwards. Mind boggling.


> a job he f'd up and got fired from

Was he fired, or did he step down to become the CEO of OpenAI?

> Some people just fail upwards. Mind boggling.

Didn't he hire the initial team and get investor money? That is a pretty good skill.


He was fired from YC: https://archive.ph/fLzoF

Brockman hired the top researchers with help from Yoshua Bengio.

Elon Musk was the biggest initial donor, and he is on record saying that Ilya was the key hire.

https://en.wikipedia.org/wiki/OpenAI#History


> Does she see herself as more trustworthy? She can't even be bothered to give an excuse for the firing.

I can almost promise you there is a lawyer telling her not to talk about the firing.


> how did she get a board seat on OpenAI?

She's the leading academic on the issue of AI safety. It's really ridiculous people don't even know her name and say random things about her, not realizing she's a rock star in her field.

Her leaving the board is a tremendous loss for OpenAI, about as terrible as Ilya leaving the board. These are two giants of AI currently.


She comes across as a childish idealist, typical of many uni graduates who've yet to encounter the real world, also typical of the EA cult. Giving people like her real power before they've had the uni nonsense knocked out of their heads is a ludicrous idea.

I'm embarrassed she comes from my country, but not surprised about the particular uni she attended within my country.


I have my issues with EA, but I don't think the "cult" description is accurate. For example, in 2022 they handed out $120K in prizes to the best criticisms of EA: https://forum.effectivealtruism.org/posts/YgbpxJmEdFhFGpqci/...

I'm doubtful that the "childish idealist" description is accurate. But even if it was, given the choice between having existentially dangerous AI developed by a childish idealist vs a slippery machiavellian like Sam, I would choose the childish idealist.

The big risk with a childish idealist is that they become the pawn of a slippery machiavellian. Choosing the slippery machiavellian just seems like speedrunning failure.


You’re welcome to disagree, but it’s not just me who thinks EA has cult-like attributes: https://www.google.com/search?q=is+ea+a+cult

I’m quite happy with the childish idealist description though. Putting childish idealists in charge of $billions of other people’s money might sound like a great idea to you, but would you put them in charge of your money?


"Cult" makes it sound like Scientology. Scientology doesn't run criticism contests of itself. As another datapoint, many of the most-upvoted posts on the EA Forum are EA critiques of one sort or another: https://forum.effectivealtruism.org/allPosts?timeframe=allTi...

Wikipedia says you have to apply to attend some EA events. That's true for lots of things. TED isn't open to just anyone either. If you want to go to an EA event without applying, most group events are pretty casual and don't require any application: https://forum.effectivealtruism.org/groups (Note: I heard a rumor that EA events in Germany are less likely to be open to the public)

>Putting childish idealists in charge of $billions of other people’s money might sound like a great idea to you, but would you put them in charge of your money?

Well, I have in fact donated significant sums to some EA organizations, and I have a recurring donation to the Good Food Institute (working on meat alternatives). I don't agree with the "childish idealist" description of EA though. From my perspective, it's remarkable how easily Sam Altman seems to manipulate people, and he seems more successful at manipulating people outside of the EA community.


I had a quick skim through this one from your list: https://forum.effectivealtruism.org/posts/8xNSiwj5gjoDTRquQ/... I note this post by the high priest of the cult makes no mention of his recent moral relativist defence of bestiality - surely an oversight?: https://twitter.com/PeterSinger/status/1723269850930491707


The post is from May. The tweet is from November. I'm not sure why you'd expect the post to mention the tweet, because (a) the tweet was 6 months in the future, relative to the time of the post, and (b) the tweet doesn't appear very relevant to the topic of the post.

Peter Singer is more of an EA popularizer than an EA leader. The post you linked is the only post he has made in the past ~9 years: https://forum.effectivealtruism.org/users/peter_singer The tweet you linked is related to his Journal of Controversial Ideas, not EA.

It's certainly not the case that Singer should be taken as representative of the average EA, you'd have to look at surveys for that: https://forum.effectivealtruism.org/s/FxFwhFG227F6FgnKk However EA tends to be friendly to contrarian thought experiments (not usually taking it to Peter Singer levels). And EAs usually prefer to focus on getting facts right over condemning each other in an inflammatory way.


So the high priest / philosophical originator (guardian.com) of the cult should not "be taken as representative of the average EA," and nobody saw fit to mention his... unusual views (owing to his naive utilitarian philosophy) later in that EA forum thread, the first thread he's posted in for 9 years, a thread whose topic is closely related to the topic of the Twitter thread I linked?

Is EA distancing itself from Singer (whose views are, by all accounts, extremely influential in the EA community), as it did with SBF (whose hedge fund's initial $170m seed capital was raised from EA devotees)? When multiple high-profile people associated with EA are proven to be... problematic, should we pretend the common factor linking them is insignificant?

Inflammatory? I thought I was quite restrained compared to some of the commenters on that Twitter thread.


> "Cult" makes it sound like Scientology.

Other than telling me that you have a narrow understanding of a "cult", I'm not sure what this is supposed to communicate.


You're welcome to say EA is a cult if you include the caveat that "by cult, I mean a thing that gives lots of money and upvotes to people who criticize it". That way it will be clear that your definition of "cult" is broad enough to include things like that. Not everyone defines the word so broadly.

BTW, for people reading this discussion, here's an alternative explanation of why EA tends to be controversial: https://www.richardhanania.com/p/effective-altruism-thinks-y...


>"She's the leading academic on the issue of AI safety."

From article, this is what I know about her:

>"Toner graduated from the University of Melbourne, Australia, in 2014 with a degree in chemical engineering and subsequently worked as a research analyst at a series of firms, including Open Philanthropy, a foundation that makes grants based on the effective-altruism philosophy."

>"In 2019, she spent nine months in Beijing studying its AI ecosystem. When she returned, Toner helped establish a research organization at Georgetown University, called the Center for Security and Emerging Technology, where she continues to work."

She's been working in the field of AI safety for what, 3 years at most? I find it very hard to believe that she's "thee" leading academic on the issue of safety. Stuart Russell's group, the Center for Human-Compatible AI at UC Berkeley; MIT's Algorithmic Alignment Group; and David Krueger's group at Cambridge are all academic leaders and groups with great expertise in this subdomain.

Further, it's not even clear what "studying Beijing's AI ecosystem" actually means. Does this refer to the city? The university? The CCP's surveillance state (not likely)? It's very hand-wavy.

I don't have any problem with what she did or her views. I also think it's a loss for the company. I do question her bona fides for being called the leading academic on the issue though.


What impact has she made? What has she built? Did she actually make AI safer? If so, how did she do it? Are current AI models safer because of her?

It sounds like someone who talked a lot but didn’t actually create anything, change anything, or make anyone’s lives better.

Happy to revise my perspective if someone can provide any concrete data on her impact.


> What impact has she made? What has she built? Did she actually make AI safer?

She was on the board that fired Sam Altman, that had the courage to fire Sam Altman, presumably (according to the article at least) because he is a liar and she and three other board members did not trust him to be loyal to the non-profit's mission of working to keep AI safe.


The parent post's question was presumably about the impact she has had as an academic or researcher (as "the leading academic on the issue of AI safety"), where political actions on some board are not relevant.


That particular impact was undone in a single weekend. She may be an absolute rockstar otherwise but "firing Altman" is not impact when he is still in control of OpenAI.


From the article, the paragraph right before that one explains how she got on the board:

"In 2019, she spent nine months in Beijing studying its AI ecosystem. When she returned, Toner helped establish a research organization at Georgetown University, called the Center for Security and Emerging Technology, where she continues to work."


It's interesting that on her LinkedIn profile she instead states that in Beijing she spent those 9 months in "intensive Mandarin Chinese language training" at a Chinese language program in Tsinghua University, not studying its AI ecosystem.


Credential inflation is how all these nepo babies seemingly outpace everyone else.


She's a lobbyist or policy wonk, not a researcher.

Yann LeCun is a leading academic...


Lobbyists and policy wonks are dramatically different vocations.


What are you talking about? She has no qualifications, and her publications are juvenile. Her h-index is 8. She has no background in CS and has a few publications in Foreign Affairs. If she is the leading academic, then AI safety is not a serious academic field. She is in no way comparable to Ilya. This is the worst take I've seen on HN in a while.


> the leading academic on the issue of AI safety.

Sounds like a made-up position to me. Just another bureaucrat with a cool-sounding name.


Why don't you reserve your ignorant judgement until you do due diligence?

https://scholar.google.com/citations?user=NNnQg0MAAAAJ&hl=en


As an academic from a different field: this is not a publication record anywhere close to that of "the leading academic on the issue of AI safety". It indicates that she has been one of very many participants on two impactful papers (with zero indication of what her individual contribution was; with so many authors, it might as well be membership in the funding organization), that she has done a few collaborations with strong researchers, and that her personal first-author or sole-author research is barely relevant.

Like, it's not a bad publication record, especially for a junior researcher (heck, her first participation in other people's papers was just in 2018, and her own work starts in 2021-2022, so she's effectively just getting started and may have many opportunities to prove herself in the future). It indicates a couple of years of research and might qualify her for an okayish faculty job when (if?) she gets her PhD, but it's not the record of a senior researcher who is the pillar of anything and has had real impact. A grad student lucky enough to study in a strong leading researcher's lab, who does some work on two of the advisor's papers, might have a similar publication record at graduation, before their career has even seriously started.

Many of her co-authors have significantly larger research impact and personal research (https://scholar.google.com/citations?hl=en&user=VclFrJ8AAAAJ or https://scholar.google.com/citations?hl=en&user=MbBntPgAAAAJ, for example, or https://scholar.google.com/citations?hl=en&user=0-G2eiEAAAAJ). That also raises the question of how much each co-author did in a two-author paper where the lead author is a strong researcher and the second author is a "Director of Strategy and Foundational Research Grants", i.e. the person getting money for that research. So labeling her as a leading researcher (much less the leading researcher) of some field seems misleading, as the publication record indicates she's not even on the same level as her co-authors.


She has one real paper of note, and she was one of a dozen authors on it. Those authors include people with significantly greater reputations.

https://scholar.google.com/citations?hl=en&user=6-e-ZBEAAAAJ


And what happens when they read the articles and realize that you could give an average college freshman any of those subjects, and they could produce a comparable paper in a few weeks' time with a full course load and a part-time job? Seriously, I've never seen a more underwhelming set of articles on Google Scholar from a supposed "leading" expert in a field... and what's worse is that she wasn't even the lead writer on most of them.


> she's a rock star in her field

I don't think "rock star" is quite the right term.

Unless you think David Geffen or Lester Bangs are rock stars. Normally rock stars sing or play an instrument. Someone who got a nice title in a nonprofit by writing about rock is not a rock star.


> “Fancy titles like ‘Director of Strategy at Georgetown’s Center for Security and Emerging Technology’ can lead to a false sense of understanding of the complex process of entrepreneurial innovation,” Khosla wrote in an essay

Exactly. Being a rent-seeking grifter doesn't make you an expert. In anything.


Fancy titles like Founder and Managing Director of Khosla Ventures can lead to a false sense of understanding about what a researcher like Helen Toner understands.

I'm amazed anyone can side with Khosla after he so blatantly showed his $ > ethics side in the Martins Beach scandal[0]. In a battle between $ and ethics, finding yourself agreeing with Khosla is a smoking gun for where someone stands.

That we have VCs character-assassinating someone, in contrast to all the people actually around her who can vouch for her, is quite a sad legacy.

[0] https://www.washingtonpost.com/technology/2020/01/08/califor...


Instead of making this about Khosla, can you tell us what exactly has she done, why is she an expert on AI safety?


She’s one of the only people bridging the AI communities of the US and China, which will be a crucial factor in preventing a state-level arms race toward AGI. BSc in chemical engineering, MA from Georgetown, and the rest spent doing research on what the future looks like.

https://scholar.google.com/citations?user=NNnQg0MAAAAJ&hl=en


A politician, in other words. And a junior one at that.

"bridge the AI communities of the US and China": what does that mean? She organized some meetings where platitudes were intoned?


a "researcher"? that's what you call her?

Those who can, do. Those who can't, write about it.


Albert Cory, 'Author of "Inventing the Future," a historical novel'...


whoa, he did 15 seconds of research.

I was there, as an engineer. I'm a character in the novel.


And yet now it’s Helen Toner who is “there,” as a chemical engineering BSc, involved in one of the most impactful boards because she’s respected by the important people around her, and you’re writing about it.

https://scholar.google.com/citations?user=NNnQg0MAAAAJ&hl=en


Are these papers supposed to impress us with her AI there-ness, or what?

None of them are about chemical engineering, which has nothing much to do with AI anyway, so I don't know why you mention that.

When did she go over to the Dark Side?


Looks like AI art has the same issue with joystick buttons as with fingers



