Why do you think anyone should reserve the right to block funding to whatever the government determines is "hate speech"? If someone wants to pay for hate speech, they can already do it right now with donations.
Ideas like this quickly devolve into the government marking any message that criticizes it as "hate speech". Democracies like Japan, and to an extent Singapore, already do that, officially by law.
Nations like Canada, India, Israel, and Hungary do it indirectly, in unofficial but rampant ways.
Speech is not a crime; listening to hate speech and then acting unjustly hateful towards people is.
Punish the people who discriminate and act on the advice of hate speech. Do not punish speech itself, or soon you'll be left confused when the government starts changing what "hate" means.
> Speech is not a crime; listening to hate speech and then acting unjustly hateful towards people is.
I'm confused. You start off talking about the merits (or lack thereof) of blocking payments related to hate speech. And then you point to the danger of government deciding what "hate" means.
That is all well and good, but to get back to the original issue: I'm wondering whether you feel paying money to people spreading hate speech amounts to a crime or not?
Unfettered speech can lead to a cult following which, as we here in Europe know very well, can be extremely fucking dangerous. Which is precisely why some speech will land you in prison.
Germany had plenty of hate speech laws back when Hitler was elected. Still, the question remains: which government gets to decide what's hate speech? Even if we could have a payment web API that could "block" hate speech (not sure how that would even be possible), do we go by the standards of the US? Europe? Russia? How would it work concretely?
Trying to discuss the implications of hate speech for something that would be international is asinine, IMO; the term basically means everything and nothing when used in a global context.
Why do you care about the two degrees of separation?
The people posting hate speech already get in legal trouble.
And the websites hosting it can eventually get in trouble too (if they can't show that they did due diligence / were sufficiently close to "common carrier" status).
And if it gets bad enough (typically if these first two categories violate a bunch of other laws), their funding might get investigated too. Money flows for that kind of use are typically fairly easy to track, as long as law enforcement actually bothers to.
In the US, the receiver gets unbanked without warning or explanation and permanently loses the ability to accept money through any service, via systems like Mastercard's MATCH list.
2 of the sites in the top 10 of that list have been criticized for hosting hate speech.
The 3 sites which have had trouble receiving payments don't overlap with the 2 which have been criticized for hosting hate speech.
And notably, many sites that are more hateful than those and have experienced deplatforming efforts are still thriving, such as KiwiFarms and 4Chan. As it turns out, there are enough hateful people out there that they can manage technical/financial workarounds to deplatforming efforts.
Hate speech is becoming a go-to justification for policing the internet, but the reality is that those policies are more effective at harming user privacy and freedom than at curbing hate speech.
Porn can be an understandable concern because of the volume of chargebacks it can create for processors, though the rule should be about excessive chargebacks, not about porn itself. Hate speech they never tried to explain, AFAIK. FWIW, repealing fair banking[1] was an immediate priority for Biden, up there with fortifying elections[2].
You're not saying anything I don't already know; you're just changing the topic.
My point is that when you said, "In the US the receiver gets unbanked without warning or explanation and permanently loses the ability to accept money through any service via systems like MATCH", that's flat wrong in the most prominent cases, and in less-prominent cases where deplatforming has been applied it hasn't been effective.
It's absurd to try to block a direct payment model to all content creators because you think it will be an effective payment model for hate speech, when hate speakers already have working payment models.
The regime doesn't want people saying or thinking things they aren't currently being told to say and think. The regime is technically not allowed to use the legal system directly to punish people for this quasi-crime, but is very happy to ruin everything else in the same pursuit. So why should a protocol for p2p financial transactions that the regime doesn't control be allowed to exist?
Who said anything about p2p? Crypto is poorly suited for microtransactions due to its inefficiency.
"The regime" has nothing to do with it; "the regime" has already effectively absorbed cryptocurrency. Coinbase is listed on NASDAQ, and all the major cryptocurrencies have zero privacy, allowing the powers that be to see everything that happens on the public ledgers.
Because either it would have been wildly popular, and there would be a lot of others, or it would have been easier for Drizzly to discover from a popular startup blog than from a not-so-popular piece of stock artwork. Perhaps they could have carried out a search and found the stock source as justification for copying it, but that wouldn't have the same story of independent discovery.
Also, if you take asmartbear at his word, that sort of implies he at least somewhat believes his designer :) If he thought his designer had no scruples about claiming they made it, I don't think he'd have stated so plainly that it "was made by a freelance designer".
> or it would have been easier for Drizzly to discover from a popular startup blog than from a not-so-popular piece of stock artwork
It's just a traced vector of a bear. Again, as someone who is familiar with this process, if we were just going to steal something, we would have put a bit of effort into not looking stolen.
Even an amazing designer can't draw a silhouette of a bear from memory, and he probably didn't take the photo himself. So he surely stole from something else too.
I'm probably missing context, or I'm desensitized by all the blatant copyright violations of the last year, but this is the outline of a bear that, by your own link's admission, is not exactly the same: it's flipped, has a different color, and has some text over it. Even if they just took that person's design and modified it, is there really something to explain?
How would it collapse? People will pay $100/mo instead of buying a new car, and even then, what would they buy? Other car companies will see Tesla getting free money and push the same kind of updates themselves.
So you're saying that people get a new Tesla for free, then only pay $100 per month for 'extras'?
Tesla would definitely collapse, seeing that they're not getting paid for making the Tesla car in the first place. They'd be broke within a couple of months.
No, Tesla already sold these cars and took people's money. They continue to sell cars the same way. This is about charging existing and new customers $100/mo because it's a number Tesla thought of. It's not for extras; it's for all functionality beyond what's legally required for safety, e.g. the tablet screen, A/C, the ability to adjust your seat.
Context:
A couple of nights ago, a set of my lightbulbs lost all functionality, now insisting I had to create an account, sign in with a 15-character password with certain symbols and mixed case, receive constant marketing emails whether I accept them or not, accept terms of service running 10,000+ words, and then re-pair the bulbs, which could no longer be recognized anyway after long cycles of failed attempts. Humiliation ritual complete; the bulbs are now e-waste, I guess.
My $1,000 Peloton bike signed itself out of its account and insisted on a payment to sign back in at all, though it still functions as a $20 salvage bike with none of its features.
Is it optimal for all companies to completely cripple products once they've been sold? What's holding them back here?
Welcome to why software clickwrapping and EULAs are a blight on the concept of contract law. Also welcome to why "consideration" has been so dubiously defined as to become meaningless in the modern era.
In a world where everything is gated behind licensing, and all the mediums for executing software are increasingly headed toward ensuring proprietary lock-in by burning cryptographic hash locks into firmware, the right of first sale is a vanishing luxury that simply isn't being offered anymore.
> Does she see herself as more trustworthy? She can't even be bothered to give an excuse for the firing.
I agree that outsiders can't weigh in on this topic without statements from the board.
But as a (former) board member of a private organization, she's not accountable to the public. Making statements about why she fired Altman could carry substantial legal risk.
She replaced Holden Karnofsky as a representative of the "effective altruist" group Open Philanthropy, which donated money to OpenAI early on.
From the article:
"She succeeded her former manager from Open Philanthropy, Holden Karnofsky, on the OpenAI board in 2021 after he stepped down. His wife co-founded OpenAI rival Anthropic."
I have been wondering the same about Altman. Dude had a failed startup, and somehow wormed his way into being the president of YC, a job he f'd up and got fired from, and then managed to parachute into being the CEO of OpenAI.
She's the leading academic on the issue of AI safety. It's really ridiculous people don't even know her name and say random things about her, not realizing she's a rock star in her field.
Her leaving the board is a tremendous loss for OpenAI, about as terrible as Ilya leaving. These are currently two giants of AI.
She comes across as a childish idealist, typical of many uni graduates who've yet to encounter the real world, also typical of the EA cult. Giving people like her real power before they've had the uni nonsense knocked out of their heads is a ludicrous idea.
I'm embarrassed she comes from my country, but not surprised about the particular uni she attended.
I'm doubtful that the "childish idealist" description is accurate. But even if it were, given the choice between having existentially dangerous AI developed by a childish idealist vs. a slippery Machiavellian like Sam, I would choose the childish idealist.
The big risk with a childish idealist is that they become the pawn of a slippery Machiavellian. Choosing the slippery Machiavellian just seems like speedrunning failure.
I’m quite happy with the childish idealist description though. Putting childish idealists in charge of $billions of other people’s money might sound like a great idea to you, but would you put them in charge of your money?
"Cult" makes it sound like Scientology. Scientology doesn't run criticism contests of itself. As another datapoint, many of the most-upvoted posts on the EA Forum are EA critiques of one sort or another: https://forum.effectivealtruism.org/allPosts?timeframe=allTi...
Wikipedia says you have to apply to attend some EA events. That's true for lots of things. TED isn't open to just anyone either. If you want to go to an EA event without applying, most group events are pretty casual and don't require any application: https://forum.effectivealtruism.org/groups (Note: I heard a rumor that EA events in Germany are less likely to be open to the public)
>Putting childish idealists in charge of $billions of other people’s money might sound like a great idea to you, but would you put them in charge of your money?
Well, I have in fact donated significant sums to some EA organizations, and I have a recurring donation to the Good Food Institute (working on meat alternatives). I don't agree with the "childish idealist" description of EA though. From my perspective, it's remarkable how easily Sam Altman seems to manipulate people, and he seems more successful at manipulating people outside of the EA community.
The post is from May. The tweet is from November. I'm not sure why you'd expect the post to mention the tweet, because (a) the tweet was 6 months in the future, relative to the time of the post, and (b) the tweet doesn't appear very relevant to the topic of the post.
Peter Singer is more of an EA popularizer than an EA leader. The post you linked is the only post he has made in the past ~9 years: https://forum.effectivealtruism.org/users/peter_singer The tweet you linked is related to his Journal of Controversial Ideas, not EA.
It's certainly not the case that Singer should be taken as representative of the average EA, you'd have to look at surveys for that: https://forum.effectivealtruism.org/s/FxFwhFG227F6FgnKk However EA tends to be friendly to contrarian thought experiments (not usually taking it to Peter Singer levels). And EAs usually prefer to focus on getting facts right over condemning each other in an inflammatory way.
So the high priest / philosophical originator (guardian.com) of the cult should not "be taken as representative of the average EA," and nobody saw fit to mention his... unusual views (stemming from his naive utilitarian philosophy) later on in that EA Forum thread, the first thread he's posted to in 9 years, a thread whose topic is closely related to the topic of the Twitter thread I linked?
Is EA distancing itself from Singer (whose views are, by all accounts, extremely influential in the EA community), as they did with SBF (whose hedge fund's initial $170m seed capital was raised from EA devotees)? When multiple high-profile people associated with EA are proven to be... problematic, should we pretend the common factor linking them is insignificant?
Inflammatory? I thought I was quite restrained compared to some of the commenters on that Twitter thread.
You're welcome to say EA is a cult if you include the caveat that "by cult, I mean a thing that gives lots of money and upvotes to people who criticize it". That way it will be clear that your definition of "cult" is broad enough to include things like that. Not everyone defines the word so broadly.
>"She's the leading academic on the issue of AI safety."
From the article, this is what I know about her:
>"Toner graduated from the University of Melbourne, Australia, in 2014 with a degree in chemical engineering and subsequently worked as a research analyst at a series of firms, including Open Philanthropy, a foundation that makes grants based on the effective-altruism philosophy."
>"In 2019, she spent nine months in Beijing studying its AI ecosystem. When she returned, Toner helped establish a research organization at Georgetown University, called the Center for Security and Emerging Technology, where she continues to work."
She's been working in the field of AI safety for what, three years at most? I find it very hard to believe that she's "the" leading academic on the issue of safety. Stuart Russell's group, the Center for Human-Compatible AI at UC Berkeley; MIT's Algorithmic Alignment Group; and David Krueger's group at Cambridge are all academic leaders and groups with great expertise in this subdomain.
Further, it's not even clear what "studying Beijing's AI ecosystem" actually means. Does this refer to the city? The university? The CCP's surveillance state (not likely)? It's very hand-wavy.
I don't have any problem with what she did or her views. I also think it's a loss for the company. I do question her bona fides for being called the leading academic on the issue though.
> What impact has she made? What has she built? Did she actually make AI safer?
She was on the board that fired Sam Altman, the board that had the courage to fire him, presumably (according to the article, at least) because he is a liar and she and three other board members did not trust him to be loyal to the non-profit's mission of working to keep AI safe.
The parent post's question was presumably about the impact she has had as an academic or researcher (as "the leading academic on the issue of AI safety"), where political actions on some board are not relevant.
That particular impact was undone in a single weekend. She may be an absolute rockstar otherwise, but "firing Altman" is not impact when he is still in control of OpenAI.
From the article, the paragraph right before that explains how she got on the board:
"In 2019, she spent nine months in Beijing studying its AI ecosystem. When she returned, Toner helped establish a research organization at Georgetown University, called the Center for Security and Emerging Technology, where she continues to work."
It's interesting that on her LinkedIn profile she instead states that she spent those nine months in Beijing in "intensive Mandarin Chinese language training" at a Chinese language program at Tsinghua University, not studying its AI ecosystem.
What are you talking about? She has no qualifications, and her publications are juvenile. Her h-index is 8 (i.e., only 8 of her papers have been cited at least 8 times). She has no background in CS and has a few publications in Foreign Affairs. If she is the leading academic, then AI safety is not a serious academic field. She is in no way comparable to Ilya. This is the worst take I've seen on HN in a while.
As an academic from a different field, I'd say this is not a publication record that comes anywhere close to someone being "the leading academic on the issue of AI safety". It indicates that she has been one of very many participants in two impactful papers (with zero indication of what her individual contribution was; with so many authors, it might as well just reflect being part of the funding organization), that she has done a few collaborations with strong researchers, and that her personal first-author or sole-author research is barely relevant.
Like, it's not a bad publication record, especially for a junior researcher (heck, her first participation in other people's papers was just in 2018 and her own work starts in 2021-2022, so she's effectively just getting started and may have many opportunities to prove herself in the future). It indicates a couple of years of research and might qualify her for an okayish faculty job when (if?) she gets her PhD, but it's not something that indicates a senior researcher who is the pillar of anything and has had real impact. A grad student who's lucky enough to study in a strong leading researcher's lab and does some work on two of the advisor's papers might have a similar publication record at graduation, before their career has even seriously started.
Many of her co-authors have significantly larger research impact and stronger personal research (https://scholar.google.com/citations?hl=en&user=VclFrJ8AAAAJ or https://scholar.google.com/citations?hl=en&user=MbBntPgAAAAJ, for example, or https://scholar.google.com/citations?hl=en&user=0-G2eiEAAAAJ). That also raises the question of how much each co-author did in a two-author paper where the lead author is a strong researcher and the second author is a "Director of Strategy and Foundational Research Grants", i.e. the person getting money for that research. So labeling her as a leading researcher (much less the leading researcher) of some field seems misleading, when the publication record indicates she's not even on the same level as her co-authors.
And what happens when they read the articles and come to realize that you could give an average college freshman any of those subjects and they could produce a comparable paper in a few weeks' time, with a full course load and a part-time job? Seriously, I've never seen a more underwhelming set of articles on Google Scholar from a supposed "leading" expert in a field... and what's worse is that she wasn't even the lead writer on most of them.
I don't think "rock star" is quite the right term.
Unless you think David Geffen or Lester Bangs are rock stars. Normally rock stars sing or play an instrument. Someone who got a nice title in a nonprofit by writing about rock is not a rock star.
> “Fancy titles like ‘Director of Strategy at Georgetown’s Center for Security and Emerging Technology’ can lead to a false sense of understanding of the complex process of entrepreneurial innovation,” Khosla wrote in an essay
Exactly. Being a rent-seeking grifter doesn't make you an expert. In anything.
Fancy titles like Founder and Managing Director of Khosla Ventures can lead to a false sense of understanding about what a researcher like Helen Toner understands.
I'm amazed anyone can side with Khosla after he blatantly showed his "$ > ethics" side when it came to the Martin's Beach scandal[0]. In a battle between $ and ethics, finding yourself agreeing with Khosla is a smoking gun for where someone stands.
That we have VCs character-assassinating someone, in contrast to all the people actually around her who can vouch for her, is quite a sad legacy.
She's one of the only people bridging the AI communities of the US and China, which will be a crucial factor in preventing a state-level arms race over AGI. A BSc in chemical engineering, an MA from Georgetown, and the rest of her time spent doing research on what the future looks like.
And yet now it's Helen Toner who is "there," a chemical engineering BSc involved in one of the most impactful boards, because she's respected by the important people around her, and you're writing about it.