Hacker News | SCdF's comments

I disagree: I have noticed this *far* more with AI than with any other advancement / fad (depending on your opinion) before it.

This also tracks with every app and website injecting AI into every one of your interactions, with no way to disable it.

I think the article's point about non-consent is a very apt one, and expresses why I dislike this trend so much. I left Google Workspace, as a paying customer for years, because they injected gemini into gmail etc and I couldn't turn it off (only those on the most expensive enterprise plans could at the time I left).

To be clear I am someone that uses AI basically every day, but the non-consent is still frustrating and dehumanising. Users–even paying users–are "considered" in design these days as much as a cow is "considered" in the design of a dairy farm.

I am moving all of the software that I pay for to competitors who either do not integrate AI, or allow me to disable it if I wish.


To add to this, it's the same attitude that they used to create the AI in the first place by using content which they don't own, without permission. Regardless of how useful it may be, the companies creating it and including it have demonstrated time and again that they do not care about consent.


> the same attitude that they used to create the AI in the first place by using content which they don't own, without permission

This was a massive "white pill" for me. When the needs of emerging technology ran head first into the old established norms of ""intellectual property"" it blew straight through like a battle tank, technology didn't even bother to slow down and try to negotiate. This has alleviated much of my concern with IP laws stifling progress; when push comes to shove, progress wins easily.


For big corps yes.

For everyone else, chains.


You haven't taken to the high seas?


How can you get a machine to have values? Humans have values because of social dynamics and education (or lack of exposure to other types of education). Computers do not have social dynamics, and it is much harder to control what they are being educated on if the answer is "everything".


It wouldn't be hard if the people in charge had any scruples at all. These machines never could have done anything if some human being, somewhere in the chain, hadn't decided "yeah, I think we will do {nefarious_thing} with our new technology". Or should we start throwing up our hands when someone gets stabbed to death, like "well, I guess knives don't have human values"?

Human beings are doing this.


> How can you get a machine to have values?

The short answer is a reward function. The long answer is the alignment problem.

Of course, everything in the middle is what matters. Explicitly defined reward functions are complete, but not consistent. Data-defined rewards are potentially consistent but incomplete. It's not a solvable problem for machines, but likewise not for humans. Still, we practice, improve, and muddle through despite this, and hopefully approximate improvement over long enough timescales.


Well, it’s pretty clear to me that the current reward function of profit maximization has a lot of down sides that aren’t sufficiently taken into account.


The only thing worse than it is anything-else-maximisation.


That sounds like the valued-at-billions-and-drowning-in-funding company’s problem. The issue is they just go “there are no consequences for solving this, so we simply won’t.”


Maybe if we can't build a machine that isn't a sociopath, the answer should be "don't build the machine" rather than "oh well, go ahead and build the sociopaths".


This has real Torment Nexus[0] energy

[0] Sci-Fi Author: In my book I invented the Torment Nexus as a cautionary tale.

Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create The Torment Nexus


I’d argue that a lot of the scrape-and-train is just the newest and most blatant exploitation of the relationship that always existed, not a renegotiation of it. Stack overflow monetized millions of hours of people’s work. Same thing with Reddit and Twitter and plenty of other websites.

Legally it is different with books (as Anthropic found out) but I would argue morally it is more similar: forum users and most authors write not for money, but because they enjoy it.


I don't know, it feels odd to declare people wrote "because they enjoy it" and then get irritated when someone finds a way to monetize it retrospectively.

Like you're either doing this for the money or you're not, and its okay to re-evaluate that decision...but at the same time there's a whole lot of "actually I was low key trying to build a career" type energy to a lot of the complaining.

Like I switched off from Facebook some years after discovering it, when it increasingly became "look at my new business venture...friends". LinkedIn is at least just upfront about it, and I can ignore the feed entirely (I use it for job listings only).


The shift from "you just don't understand" to damage control would be funny if it wasn't so transparent.

> We have identified a bug in our system... we take communication consent very seriously

> There was a bug, and we fucked up... we take comms consent seriously

These two actors were clearly coached into the same narrative. I also absolutely don't believe them at all: some PM made the conscious decision to bypass user preferences to increase some KPI that pleases some AI-invested stakeholder.


> only those on the most expensive enterprise plans could at the time I left.

lol. so the premium feature is the ability to turn off the AI? That's one way to monetise AI I suppose.


Hahaha. It's like a protection racket for the new age.

"Nice user experience you got there. Would be a real shame if AI got added to it."


> I left Google Workspace, as a paying customer for years, because they injected gemini into gmail

I wonder if this varies by territory. In the UK, none of the Gmail accounts I use has received this pollution.

> I am moving all of the software that I pay for to competitors who either do not integrate AI, or allow me to disable it if I wish.

The latter sounds safer. The former may add "AI" tomorrow.


I am in the UK. TBC this isn't a gmail.com email address, this is a paid "small business" workspace against a custom domain.

Eventually they backtracked and allowed (I think?) all paid customers to disable gemini, but I had already migrated to Fastmail so :shrug:


Ah. My addresses are @gmail.com.

Perhaps the fact you paid got you marked as a likely gull :)


I think in that case you have even less ability to turn that stuff off? If it's not there for you yet, perhaps it's a slow rollout still?


Perhaps yes. We'll see :(


Gmail <> Google Workspaces


Maybe not equal but when I launch Gmail the page says "Google Workspace" and I get Gmail, Docs etc. as per https://workspace.google.com/intl/en_uk/resources/what-is-wo... .


Google has always released to Workspaces and Gmail separately. In this case the Gemini button is in Workspaces (because they’re a paid tier) but not yet Gmail.


Yeah this is not a new thing with AI, you can unsubscribe all you want, they are still gonna email you about "seminars" and other bullshit. AWS has so many of those and your email is permanently in their database, even if you delete your account. I also still get Oracle Cloud emails even though I told them to delete my account as well, so I can't even log in anymore to update preferences!


Fun fact: requiring login to unsubscribe is illegal per the CAN-SPAM Act. The most a company can do is require you to verify your email address to them.


> I disagree: in as much as I have noticed this far more with AI than any other advancement / fad (depending on your opinion) than anything else before

Isn't that because most of the other advancements/fads were not as widely applicable?

With earlier things there was usually only particular kinds of sites or products where they would be useful. You'd still get some people trying to put them in places they made no sense, but most of the places they made no sense stayed untouched.

With AI, if well done, it would be useful nearly everywhere. It might not be well done enough yet for some of the places people are putting it so ends up being annoying, but that's a problem of them being premature, not a problem of them wanting to put AI somewhere it makes no sense.

There have been previous advancements that were useful nearly everywhere, such as the internet or the microcomputer, but they started out with limited availability and took many years to become widely available so they were more like several smaller advancements/fads in series rather than one big one like AI.


This is a very strange argument. If AI were so bloody revolutionary, then you wouldn't have to sneak it into your products without consent.

Very often AI seems to be a solution looking for a problem.


> With AI, if well done, it would be useful nearly everywhere.

I fundamentally disagree with this.

I never, now or in the future, want to use AI to generate or alter communication or expression primarily between me and other humans.

I do not want emails or articles summarised, I do not want emails or documents written for me, I do not want my photos altered or yassified. Not now, not ever.


Keep in mind I said "if well done". That was not meant to imply that I think the current AI offerings are well done. I'd take "well done" to mean that it performs the tasks it is meant for as well as human assistants perform those tasks.

> I never, now or in the future, want to use AI to generate or alter communication or expression primarily between me and other humans. [...] I do not want emails or articles summarised, I do not want emails or documents written for me, I do not want my photos altered or yassified.

That's fine, but generally the tools involved in doing those things are designed to be general purpose.

A word processor isn't just going to be used by people writing personal things for example. It will also be used by people writing documentation and reports for work. Without AI it is common for those people to ask subordinates, if they are high enough in their organization to have them, to write sections of the report or to read source material and summarize it for them.

An AI tool, if good enough to do those tasks, would be useful to those users, and so it makes sense for such tools to be added by the word processor developer.

Again, I'm not saying that the AI tools currently being added to basically everything are good enough.

The point is that

(1) a large variety of tools and products have enough users that would find built-in AI useful (even if some users won't) that it makes a lot of sense for them to include those tools (when they become good enough), and

(2) AI may be unique compared to prior advances/fads in how wide a range of things this applies to and the speed it has reached a point that companies think it has become good enough (again, not saying they have made the right judgement about whether it is good enough).


How about machine translation and fixing grammar in languages you're not very familiar with? That's the only use of "AI" I've found so far. I'd rather read (and write) broken English in informal contexts like this forum, but there are enough more formal situations.


Remember, I am responding to this:

> With AI, if well done, it would be *useful nearly everywhere.*

I'm not saying it doesn't have uses.

Having said that, there are two things I never want AI to do: a) degrade or remove the need for me to express myself as a human being, b) do work I'd have to redo to prove it did it correctly.

On translation, sycophancy is a problem. I can't find it now, but there was an article I read about an LLM mistranslating papers to exclude data it thought the user wasn't interested in. So no, I wouldn't trust it for anything I cared about.

I do use AI: I'm literally reviewing some Claude generated code at the moment. But I can read that and know that it's done it right (or not, as the case often is). This is different from translation or summarisation, where I'd have to do the whole task again to prove correctness.


If you're not familiar, how could you possibly know if what you're conveying is accurate to your intention? And if you don't, why bother at all?


I don't want those added to anything either - if I want to translate something I'll use a dedicated tool.


Even WhatsApp has it in the search bar


For me it’s just a multi-coloured ring like a gamer’s mood light, but it’s literally just slapped in the corner of the UI the same way a shitty Intercom widget would be.

Totally a thing a growth hacking team would do, injecting an interface on top of a design.


>I disagree: in as much as I have noticed this far more with AI than any other advancement / fad

I agree with gp that new spam emails that override customers' email marketing preferences is not an "AI" issue.

The problem is that once companies have your email address, their irresistible compulsion to spam you is so great that they will deliberately not honor their own "Communication Preferences" that supposedly lets customers opt out of all marketing emails.

Even companies that are mostly good citizens about obeying customers' email marketing preferences still end up making exceptions. Examples:

Amazon has a profile page to opt out of all email marketing and it works... except ... it doesn't work to stop the new Amazon Pharmacy and Amazon Health marketing emails. Those emails do not have an "Unsubscribe" link and there is no extra setting in the customer profile to prevent them.

Apple doesn't send out marketing messages and obeys their customers' marketing email preferences ... except ... when you buy a new iPhone and then they send emails about "Your new iPhone lets you try Apple TV for 3 months free!" and then more emails about "You have Apple Music for 3 months free!"

Neither of those aggressive emails have anything to do with AI. Companies just like to make exceptions to their rules to spam you. The customer's email inbox is just too valuable a target for companies to ignore.

That said, I have 3 gmail.com addresses and none of them have marketing spam emails from Google about Gemini AI showing up in the Primary inbox. Maybe it's commendable that Google is showing incredible restraint so far. (Or promoting Gemini in Chrome and web apps is enough exposure for them.)


> That said, I have 3 gmail.com addresses and none of them have marketing spam emails from Google about Gemini AI showing up in the Primary inbox.

That's because they put their alerts in the gmail web interface :-/

"Try $FOO for business" "Use drive ... blah blah blah"

All of these can be dismissed, but new ones show up regularly.


>That's because they put their alerts in the gmail web interface :-/

I agree and that's what I meant by Google's "web apps" having promos about Gemini.

But in terms of accessing Gmail accounts via the IMAP protocol in Mozilla Thunderbird, Apple Mail client, etc, there are no spam emails about Gemini AI. Google could easily pollute everybody's gmail inboxes with endless spam about Gemini such that all email clients with IMAP access would also see them but that doesn't seem to happen (yet). I do see 1 promo email about Youtube Premium over the last 5 years. But zero emails about Google's AI.


> Apple doesn't send out marketing messages and obeys their customers' marketing email preferences ... except .. when you buy a new iPhone and then they send emails about "Your new iPhone lets you try Apple TV for 3 months free!" and then more emails about "You have Apple Music for 3 months free!"

That's "transactional" I'm sure. It makes sense that a company is legally allowed to send transactional emails, but they all abuse it to send marketing bullshit wherever they can blur the line.


How is it transactional in any way? It looks to me like post-transaction upsell, pure and simple.


I 100% agree with you, but it seems like the courts do not. Even while they were functioning.


Has this actually been tested in court, though?


It's not, but it's their justification


> Maybe it's commendable that Google is showing incredible restraint so far.

Or the Gmail spam filter is working.


This is not an issue in Europe, due to effective regulation.


>This is not an issue in Europe, due to effective regulation.

This article's author complaining about Proton overriding his email preferences is from the UK. Also in this thread, more commenters from UK and Germany say companies routinely ignore the law and send unwanted spam. Companies will justify it as "oops it was a mistake", or "it's a different category and not marketing", etc.


Imagine making this argument for other technologies. There is no opt-out button for machine learning, choosing the power source for their datacenters, the coding language in their software, etc. Conceptually there is a difference between opting out of an interaction with another party vs opting out of a specific part of their technology stack.


The three examples you listed are implementation details, so it's not clear if this is a serious post. Which datacenter they deploy code in is an implementation detail (other than the territory, for legal reasons, which is something you may wish to know about and pick).

A better example would be: imagine every single operating system and app you use adds spellcheck. They only let you spell check in American[1]. You will get spell check prompts from your Operating System, your browser, and the webapp you're in. You can turn none of them off.

[1] in this example, you speak the Queen's English, so you spell "colour", not "color", etc.


Unrelated, but it's interesting to think about terms like "the Queen's English" now that the Queen is gone. Will we be back to "the King's English" some day? I suppose the monarchy might stay too irrelevant to bother changing phrases.


They’re already calling it the King's Birthday public holiday in Australia and it just seems wrong.


Do we know if this counts for Prime?

Though tbh the writing has been on the wall for a while. It's really frustrating, because it otherwise just gets out of your way, which is why I like it.

I guess I have to dedicate an afternoon to finding an alternative.


same thing here. com.teslacoilsw.launcher.prime is a different version and from what I see it doesn't have the tracking ... changing launchers when you have 200+ apps neatly grouped in folders would suck.


One option is to move to a keyboard-based launcher that no longer requires you to organize your apps in colorful grids. KISS is like this. Tiny, fast, free.


How does the climate crisis fold into that, or do you not think that's a problem, or do you think it's a problem with no ramifications?


> and you are asking them to decide on COVID or climate change

In case you didn't mean this, do you agree that the propaganda you're referencing above is the "you" in this sentence? eg the propaganda is the thing that is asking them to decide on covid or climate change.

I don't think anyone who is genuine expects the public to have expertise in these topics. The propaganda seems centred around a constant war against intellectualism and expertise, such that people think they should have an opinion on things they are woefully unqualified to have opinions on, and politicians just align themselves to what they think will get votes.


> “…do you agree that the propaganda you're referencing above is the "you" in this sentence?”

?? The op is making “propaganda” by some assertions in their comment?


Sorry I think I worded it badly. The OP is not the you, I mean the propaganda is. As in, a large part of the propaganda is the idea that people should have opinions about this kind of thing, as opposed to accepting the idea that there are people who are more well versed in these topics than they are. eg the scientific consensus is that global warming is real and it's a big problem. There isn't really an opinion to have there, broadly.


Ironically, the pub it suggested near me that was the most fucked closed down years ago (it's not just them, quite a few databases don't know that), so yeah, good call.


Same here, at least three nearby no longer exist and are now flats already.

I guess prepare for an acceleration of the same.


I don't think with any confidence we can say it will be for the better. Or at least, not on balance for the better.


To be clear you believe that you do a year's worth of work in one week? Every week? So halfway through this year you will have done 25 years of work?


OP has got 200 years of experience under their belt now.


That's a FAANG level resume right there


50x more code? Absolutely plausible. 50x more ideas implemented, or 50x better ideas? Doubtful. Generating code doesn't generate value.


He was just not doing his job 95% of the time.


In a sense, yes. If you compare how long it would take me to do it manually about 15 years ago.


Yes, it is not fair. The entire gambling industry is unfair and exploitative.


Late to this, but my interpretation of the parent's point was eg: LLMs still often produce bad code, despite "reading" every book about programming ever written. Simplistically, they aren't taking the knowledge from those books, and applying them to the knowledge of the code they've scraped, they are just using the scraped output. You can then separately ask them about knowledge from those books, but then if you go back and get them to code again, they still won't follow the advice they just gave you.


More power to you obviously. But I have mixed feelings about this.

There is so much information that curation is inevitable. Sure. But I don't want that curation to be "fun". I don't _want_ tiktok in my life, or really anything whose goal is "engagement". I don't want time killers.

One of the reasons for getting back into RSS for me was to have a direct feed to authors I'm interested in.

But I understand that quickly can become unmanageable.

When that time comes, I think I'd be interested in the curation being about compressing content down, not expanding it out. That is to say: use the algorithm to select from a large pool of what you're interested in, down to a manageable static size (like a weekly newsletter), as opposed to using it to infinitely expand outward to keep engaging you.
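That "compress, don't expand" idea is easy to sketch. Here's a minimal toy version (the scoring rule and all names are hypothetical, just to show the shape): score a fixed pool of feed items, favouring authors you deliberately follow, and keep only a bounded top-N as the weekly digest. Crucially, the output ends.

```python
# Sketch: compress a pool of feed items into a fixed-size weekly digest,
# instead of ranking an infinite scroll. Scoring is a hypothetical stand-in.

from dataclasses import dataclass

@dataclass
class Item:
    title: str
    author: str
    length_words: int

FOLLOWED_AUTHORS = {"alice", "bob"}  # the "direct feed" you curated yourself

def score(item: Item) -> float:
    s = 0.0
    if item.author in FOLLOWED_AUTHORS:
        s += 10.0                                # strongly prefer chosen authors
    s += min(item.length_words / 1000, 3.0)      # mild bias toward substantial posts
    return s

def weekly_digest(pool: list[Item], n: int = 5) -> list[Item]:
    """Select a static, bounded digest; no infinite feed."""
    return sorted(pool, key=score, reverse=True)[:n]

pool = [
    Item("Deep dive", "alice", 4000),
    Item("Hot take", "stranger", 200),
    Item("Notes", "bob", 1200),
]
for item in weekly_digest(pool, n=2):
    print(item.title)  # "Deep dive", then "Notes"
```

The point of the sketch is the `[:n]` at the end: the algorithm's job is selection down to a fixed budget, not engagement-maximising expansion.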

