>AI is basically text-predict combined with data mining. That’s it. It’s a super-Google that goes into the body of texts and rearranges the words into a very pleasing facsimile of a cogent argument. There’s no “intelligence” behind it, in the sense of a computer actually thinking.
>AI is more or less the same thing — it uses our wonder to convince us of a simulacrum of intelligence when what we are really witnessing is, in a sense, our own childish excitement at a trick of anthropomorphisation
People keep bringing up the "fake intelligence" vs "real intelligence" argument but it actually doesn't matter.
What's important is if it is useful.
E.g... create an ad-skipping device that uses so-called "artificial intelligence" to detect commercials during sports broadcasts and automatically mutes the tv. People would embrace AI like that instead of complaining about it.
If a pundit tried to advise the consumer who wants to avoid ads, "you know, that ad-skipping technology is _just_ fancy linear algebra and there's no _real_ intelligence behind it! You're dumbing down your brain by letting the AI mute the ads automatically instead of you doing it yourself." ... that's not a compelling argument. The usefulness of blocking ads outweighs any theoretical thresholds for real intelligence.
A lot of generative AI is not useful, so people will complain about it by falling back on the "it's not real intelligence" argument.
> People keep bringing up the "fake intelligence" vs "real intelligence" argument but it actually doesn't matter.
It matters because you have to keep it in mind when judging its output EVERY TIME. You can't trust it to not tell you to poison yourself, or invent people/things that don't exist, or make up stories about people that _do_ exist.
Except that telling someone a lie, especially in a professional context, can be completely discrediting, especially when trust is crucial. I work with people able to say: "I don't know", "I'm not sure", "Hard to say", "I need to do some extra work to verify this". Working with such people is very efficient because you don't waste context on fact-checking their every utterance.
Can they be wrong? Absolutely, but it's relatively rare. Will they decide to just lie to me? Extremely unlikely, given the stakes.
Furthermore, it also matters if we’re replacing existing systems with new ones with lower reliability, e.g. replacing technical documentation written by humans with ones generated by AI.
but that is not relevant to ad-detecting. the worst ad-detecting can do is false positives, e.g. detecting an ad inside a scene where someone is watching said ad on tv as part of the story or something like that. so your basic point still stands, you have to check if what it detects is really an ad, but beyond that the use of AI for ad-detecting is benign and harmless.
> You can't trust it to not tell you to poison yourself, or invent people/things that don't exist, or make up stories about people that _do_ exist.
My mum was a big believer in homeopathy and Bach flower remedies. Kept sneakily dosing me and my dad with one "for memory". She ended up with Alzheimer's just shy of 20 years younger than her mum.
I could name a lot of elected officials over the years that made up stories about people; the hard part is picking one sufficiently uncontroversial that nobody will object to the example.
What matters is the rate at which these things happen, given AI is now sufficiently competent at presenting as a human to be a problem for those who need to know they're discoursing with a human (job interviews, grading essays, is this video call really with your relative who really needs an emergency payment or is it all fake, political propaganda).
You've completely missed the point of the parent comment. It doesn't matter if it occasionally hallucinates, because there are many use cases where that's okay and you can generate enormous value anyway.
i disagree with that. the point is that there are good and harmless uses of AI. the possibility of dangerous uses does not necessarily warrant the complete dismissal of AI unless a complete dismissal is the only way to protect us from harm caused by use of AI.
Despite it being cherry-picked, it is a valid case. Hence it would seem you're knowingly disagreeing with a fact because it doesn't fit your purpose, which is disingenuous.
> People keep bringing up the "fake intelligence" vs "real intelligence" argument but it actually doesn't matter.
> What's important is if it is useful.
Except the usefulness expectations almost completely rely on the intelligence (it's right there in the name)! Without this you wouldn't have so much hype & money in this topic. So the complaint is rightfully directed at the core selling point.
Why not just... use the product and see if it's useful? What does it matter what it's called, what does it matter if it's hyped or not, what does it matter if investors are or aren't throwing too much money into it.
You, individually, can try out the product and see if it's useful for you. Many people have done so and found it extremely useful for themselves (myself included).
Strange question, why would you "just" ignore all the information out there? But then why would you pick this specific product among a million others? You wouldn't even know about it without the hype!
> Strange question, why would you "just" ignore all the information out there?
Some of the information out there is many, many people saying they do use Genai products and find them valuable. You shouldn't ignore that info either.
> But then why would you pick this specific product among a million others? You wouldn't even know about it without the hype!
That's true of everything ever, to different extents. You wouldn't have heard of any product if someone hadn't told you about it, either through word of mouth, advertising or you seeking it out in some way. I don't see how that possibly matters.
And yes, the fact that there's a lot of hype - a lot of people very excited and saying this is something that is super useful for them - is absolutely the reason most people have heard about it. That's good! A lot of people finding value in a product and telling others is pretty good evidence.
Your analysis assumes that we can readily know if it’s useful. But we often can’t.
Consider these absurdities: “I don’t need to test this new drug before releasing it to market, as long as it is safe and effective!”
Or “I don’t need to worry about phishing scams. I can click on any link as long as it is safe.”
Or more to the point: “This food tastes good, so it must be good for me. There’s no need for hygiene and food safety laws.”
I don’t trust LLMs, because I have tested them, and continue to test them, and they are patently unreliable. They can still be useful in a momentary, self-contained way. But my studies have led me to use them only for answers that are easily verified.
> People keep bringing up the "fake intelligence" vs "real intelligence" argument but it actually doesn't matter.
It matters because they called it artificial intelligence. The point I see people bring up, though, boils down to whether it's intelligent at all, not fake vs real. Debating whether "artificial" and "fake" are synonymous would be an odd stance to take; it's the "intelligence" part that is the sticking point.
Beyond just the name, it matters because the risk of it going wrong is different. If it is a useful tool but not intelligent, the risk is mainly in how people will use it. That's no different than any technology.
If it is intelligent, the risk is in how it will use itself. It depends on how you want to define intelligence but, at least in my opinion, the ability to determine your own goals and desires is a prerequisite for intelligence. From that view, we can only hope that an AI is aligned with us...and given that we have all but abandoned both the alignment problem and the interpretability problem that doesn't seem likely.
I don't think we have a different usage based on the label "intelligent" or "not intelligent". I would leave this discussion to (armchair) philosophers, and focus on the utility and risks - which are exactly the same, regardless of the label we put on the box, because the products in the box do exist and act in the same way. An automaton with access to the red button can kill humanity just the same as an AI with access to the red button - the problem is giving it access to the red button, which the AI fans often seem all too happy to give.
Words matter though; if one wants to call it intelligence we first have to define what that means and then use that definition. Plenty of people have tried to define it and there isn't one answer for that, but I've never seen anyone argue that intelligence boils down only to computation. To call these tools intelligent means that there is more to it than computation, and that's an important difference.
The risks aren't the same here. Computation, if we want to say these "AI" tools are nothing more than complex math, is only as risky as the person using it.
Intelligence is potentially more dangerous, regardless of what traits you may think is required in addition to computation to make it intelligent.
We don't actually know how these LLMs work at time of inference and we can't analyze the trained dataset to understand why it would give an answer. That's fairly benign under the "its just computation" view, but that black box is full of unknown risk with any definition of intelligence because we don't actually understand how the thing works, why it does what it does, or what it will do next.
An unhinged man with a gun is significantly riskier than an unhinged man without a gun, so I'm not sure I get your argument. But if you mean it in the philosophical/categorizational way, I'll just move on because I don't have the knowledge or the interest to label big things while they're flying towards my head - I'd rather duck first and see afterwards what they were.
Right, an unhinged man with a gun is worse than without a gun. A gun that can itself become unhinged is likely worse, or at least more of an unknown.
When intelligence comes into the picture the question is what the intelligent thing will do, not how a person will use it. If it's just a tool that a person can use, it almost certainly isn't itself intelligent at all.
> E.g... create an ad-skipping device that uses so-called "artificial intelligence" to detect commercials during sports broadcasts and automatically mutes the tv.
We had that in 1990's VCRs with no "AI" required. It also fast-forwarded through the commercials during recorded programs.
>We had that in 1990's VCRs with no "AI" required. It also fast-forwarded through the commercials during recorded programs.
Yes, I owned several of those Panasonic VCRs with the "Commercial Advance™" feature. Also had a Hitachi VCR with the same feature licensed from ADLE. The heuristics used a combination of detecting fade-to-black screen transitions and the higher audio levels of loud commercials. The criteria were simplistic but "good enough".
The problem is that VCR required 2 separate passes of the VHS tape. The 1st pass was to record the video and then the VCR automatically rewound the tape and the 2nd pass played it back to itself to analyze and mark the ad segments.
That approach does not work to watch live feeds of sports broadcasts. To use the Panasonic VCR approach, one would have to "record" it first -- which defeats the purpose of watching the game live. To instantly block live ads without any waiting, you need technology with more "intelligence" or "smarts" or whatever people want to call it.
My point is that if you hype up a "live-tv-ads-skipping device" with "Unicorn Fairy Dust Technology" -- people won't complain that the company called it "unicorn whatever" -- as long as it actually works and improves their lives. It's when such a device does not work (e.g. the Apple Intelligence fiasco) that the meta-analysis and lectures begin, such as "you know, that device doesn't actually have any horses inside of it with a horn coming out of its head".
Precisely why I don't use AI. It is not. I can't trust it, except for some "research this for me" followed by a thorough review of the source material, which is something I am already really adept at doing myself, so no value here.
Is search, even legacy search like a table of contents or the Dewey decimal system, not useful? It may not point you to what you want either.
LLMs are an ok iteration on search with up and downsides.
One of the upsides is better contextual hinting from the users input. One of the downsides is that it also makes it trivial to spew out so much bullshit content that soon I doubt it will be able to train on most of the public internet anymore.
> create an ad-skipping device that uses so-called "artificial intelligence" to detect commercials during sports broadcasts and automatically mutes the tv.
OK so there are a couple of approaches to this issue, e.g. based on audio signal level, constant audio patterns, constant video patterns - and you don't really need to involve machine learning here (see the sketch below). You can call it automation and it is perfectly fine.
People have been using the term "AI" in so many contexts for so many things that it's almost meaningless because it's so vague.
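For what it's worth, two of those cues are detectable with stock ffmpeg filters today, no ML involved. A minimal sketch, assuming a recorded broadcast; the filename and thresholds are placeholders to tune:

    # Log fade-to-black and near-silent moments, both classic ad-break cues
    ffmpeg -i recording.mp4 \
        -vf blackdetect=d=0.5:pix_th=0.10 \
        -af silencedetect=noise=-50dB:d=0.5 \
        -f null - 2>&1 | grep -E 'black_start|silence_start'

blackdetect and silencedetect only flag candidate boundaries; deciding which of them are actually commercials is the part where the fancier "intelligence" gets sold.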
This reminds me of a very narrow set of people who consume meat. I consume meat as well, but I attempt to balance it despite knowing no animal can consent. I will use a meat substitute and I think there can be a world where synthetic meats and other changes long term displace that suffering.
But, there is a very small set of meat consumers that seem to "need" that element of the product. They want something to have suffered or died and removing that component of it is not desirable for them.
Why does anyone care to want these kinds of "real" things? If suffering can be reduced who cares?!
the problem with synthetic meat is the same problem as everything synthetic: we don't know if it is safe. it took decades to find out that some artificial sweeteners cause cancer, for example.
for me the conclusion is that everything synthetic is more likely dangerous in some form than it is helpful. i'd rather give up meat before eating synthetic meat. i also avoid synthetic materials in clothing, etc...
i am all for reducing the suffering of animals, but health and the environment come first.
> I still have not downloaded an AI app of any kind to my phone.
If he has an iPhone, there’s no need to download it. It’s already there.
That said, he has some good points.
Unfortunately (or maybe fortunately), we won’t be able to “opt out,” forever. At some point, ML is bound to become endemic.
It’s like those stupid scan-guns that supermarkets in my area are starting to ask customers to use. They are scan guns that you pick up, as you go in, and scan each purchase. When you check out, you just scan a barcode on the cashier stand, and Bjorn Stronginthearm’s your uncle.
I refuse to use them, as the only reason they exist is to fire cashiers.
Sooner or later, however, I am unlikely to be able to avoid them.
> I refuse to use them, as the only reason they exist is to fire cashiers.
That's great, but why would I pay someone to do something I absolutely don't mind doing myself, and even save some time while doing it? Do you also still pay someone to pump fuel into your car?
> Do you also still pay someone to pump fuel into your car?
Until like two years ago I did (now-rural Oregonian). Now I pay the same for gas, get it no faster, have to do the work myself, and sometimes even have to deal with loud commercials they put on the pumps. The only upside is it's easier to fill up late at night, yay.
I don't voluntarily assume legal risks that come in to play if I make a mistake.
That and I've been in public restrooms and seen how few people wash their hands. I hate having to touch public screens. Touch public screens and then handle my food? Pass.
If I'm expected to start doing something that they formerly paid employees to do, I'd expect to get something in return, like a grocery discount or something.
It's like if restaurants started making you cook your own food. Why even bother to go?
> I refuse to use them, as the only reason they exist is to fire cashiers.
This reminds me of when the bus company of my hometown transitioned to having a driver + cashier to only having a driver that takes cash. Half of the workforce just gone overnight. Of course the writing was on the wall with electronic ticketing, but still.
It makes me feel I'm part of a "First They Came" situation. Sure maybe these people found other jobs for themselves (or so I tell myself) but still.
This is a strange example because, have you heard of the combine harvester? Together with other farming machines, it made useless what 90% of people used to do.
The combine wouldn't have been useful without monocropping practices, and those are only possible due to chemical pesticides and herbicides.
I'd point to the poisons we're willing to dump on our fields as the primary reason for so many jobs being replaced. The combine is just what replaced them, not the root cause of why they could be replaced.
In most cities, public transport service frequency and coverage have expanded massively. So they halved the workers per bus and doubled the number of buses.
If they can be easily automated they should be. Keeping a human doing a job that doesn't need to be done by a human is a bullshit job. It might not have been a bullshit job twenty years ago, that doesn't matter. Things change.
No, the bullshit jobs I see people complain about are often linked to "Couldn't someone automate this?", like taking notes from inbound phone calls and simply relaying messages inside the office.
"I refuse to use them, as the only reason they exist, is to fire cashiers"
But that's not a great job in the first place, and I am not a fan of keeping jobs for the sake of it. So we do need to figure out a source of income for those who cannot make the transition, but my mission is not to keep cashiers.
> Unfortunately (or maybe fortunately), we won’t be able to “opt out,” forever. At some point, ML is bound to become endemic.
I don't hear people wanting to opt out of ML though, they want to opt out of AI.
Beyond just the bad name of these tools though, one absolutely can opt out of them. It just depends on how far one is willing to go to avoid using them and what all they will give up in the process.
>I refuse to use them, as the only reason they exist is to fire cashiers
They are a much faster, more convenient experience than standing in line waiting for a cashier lane, scanning items after picking. It's just great, as the customer.
I'm glad they still have 1-4 cashier lanes for refuseniks, pensioners and people without the loyalty card (etc) who don't want to deal with it. Everyone's happy.
> They are a much faster, more convenient experience than standing in line waiting for a cashier lane, scanning items after picking.
I'll have to start shopping in your town.
At every supermarket in my city, there is a long line of people waiting to use the self checkouts. I keep an eye on the last person in that line, and I'm almost always done first by using a human cashier.
People just buy into the "tech=better" meme without even thinking.
I also noticed this. I think we just want to avoid human interaction, I also do this myself, but I have no illusions that the self-checkout is faster, or that I'm doing the right thing.
You only have to hear "place the item in the bagging area, and then scan the next item" 100x, sometimes extra aggressively and at random. Oh, and then something scans twice and you have to wait 5 minutes because they only have one human helping with 8 different scanning stations. Or the scan gun starts beeping really loudly and not scanning anymore for some reason, and you have to return it to the dock, take it out again and hope it starts working. Or you're left-handed or standing in a way the system doesn't like and the camera thinks you're trying to steal something, so you have to wait, again, for the one human to come over, review the footage, look puzzled and tell the computer that yeah, everything is okay.
I travel a lot for work (200+ days a year) and so I see the best and the worst.
In every country I've been, the self checkout is implemented differently.
Only in certain countries, low-trust societies, do you have to wait for the scale to calculate the weight of the goods to prevent shoplifting.
In Switzerland as an example, self checkout is a breeze. There is no such scale at all. Fast, highly efficient with good software.
Same in Sweden, though for some reason some grocery stores in Sweden insist on being a member before you can use the self checkout lanes. I think because they were early adopters of the technology and weren't sure they could trust their customers (hint: they could). Another annoying thing is that you have to scan your receipt to exit the store. This you usually don't have to do in Switzerland.
I've seen the same type of inefficient system you're accustomed to in the UK at pretty much all the stores over there and in Rimi in Riga.
Frankly it works a lot better in Latvia than the UK, though still annoying and disrupts the workflow.
That describes most of them around here (suburban NY, USA).
The [REDACTED DRUG STORE CHAIN] ones are the worst. They get confused at the drop of a hat and have some employee constantly intervening. I suspect they run “brogrammer” code.
They also don’t work well for large orders or things that aren’t bar coded. They’re mostly good for a small number of bar coded items. And some stores have probably excessively trimmed human cashiers.
I don't agree at all. I think they're a terrible customer experience. I stopped using them a while back and am much happier for that. I'm not alone -- some stores have already started staffing more checkout lanes and removing some self-checkout kiosks.
They’re maybe more convenient for the shop but the lines seem equally long where I am as people aren’t as fast as a cashier. I don’t notice any difference in efficiency other than I now need to scan and look everything up myself. So a net loss for me.
Why should we protect the existence of jobs that are both unnecessary and make the customer experience worse? Aren't we still in the situation that there are more positions for unskilled labor than applicants?
Because we still live in an economic and societal system that requires everyone to have a job and spend more of their money on useless shit.
If we truly want to automate away jobs, we should also change our economic system such that people don't have to work like they do today. Without that we'll just create more jobs for everyone to fill, automating away some while creating others, and all we'll really do is shake things up and stress out those involved.
> I refuse to use them, as the only reason they exist is to fire cashiers.
I use them because it lets people not be cashiers and they can be stocking the store and doing something more useful than being a glorified barcode scanning robot.
How is stocking shelves more fulfilling than being a cashier? Cashiers at least interact and talk with people.
I have nothing against people using self-checkout, but making it sound like you're doing something good for people seems absurd and is only there to make oneself feel better when, in reality, jobs WILL be lost because of it. No need to sugarcoat it.
Of course that's an issue - but I don't think it's the main reason.
If you go back in time to when the original supermarket designs were made, prices were on items with printed labels and the person at the checkout actually did the adding up by entering the numbers!
i.e. checkouts originally existed because adding up the total cost was a non-trivial job, and then you had to hand over cash and somebody needed to calculate the change, work out how to split that into notes and coins, and count it out.
Over time these steps were automated away - first with barcodes for cost totalling and then with both electronic payment methods and payment machines that take cash and automatically give change.
Then the job just became swiping the items across the barcode scanner and pressing a couple of buttons to enable the electronic payment and it took a while for people to realise the original reason for the job had gone and customers could do it themselves.
Why do the cashiers (that still work there), call them "scan and steal," then?
From what I have seen, they actually make shoplifting much easier. It's just that the product loss is less than salary and benefits of the cashiers, and I'll bet that the company has some way to recoup the losses.
Oh yeah. This is the big open secret. Theft is way up with these. It turns out that when you take people out of the equation and combine that with prices going up, a lot more shoppers figure it is only natural to compensate themselves a little. Just some little things you can plausibly explain as having overlooked them if someone finds out. Buy three chocolate bars, scan two.
I have no idea what the next iteration will be, or if this is considered acceptable in terms of profits.
Admittedly, it's been a while since I've gone, but I used to simply not stop for the door-checkers. They'd want to look in the cart/at the receipt and I'd keep a nice brisk pace about my business.
You think I stole it? Get the real cops. I actually paid for everything, enjoy.
Why'd I stop going? Armed private security with more ego than training. Much better options. Absolutely not advice, but usually the right look/pace does wonders. It helps to be defensible.
> AI is basically text-predict combined with data mining. That’s it.
That sounds fine.
It's like when you hear a conservative pundit claim that all antifa are weak people who need extra genders, and then in the next breath complain that they are an effective, brutal, brick-throwing, pundit-punching street militia.
Pick a lane.
>As for a machine writing a novel for me in a matter of milliseconds — I have no idea how that could possibly generate authentic pride or produce anything other than a cavernous inner emptiness?
So now the issue is that it doesn't give you good feelings?
> As for a machine writing a novel for me in a matter of milliseconds — I have no idea how that could possibly generate authentic pride or produce anything other than a cavernous inner emptiness?
From my experience trying this, it isn't quite that easy. The AI has a massive amount of difficulty staying on track and remembering earlier facts when writing a story. There is zero risk of them replacing writers for anything but short stories for now.
Sudowrite literally breaks out the story into chapters, and has you arrange all the facts of the story so they can be summarily accessible at all times.
Even making a massive snowflake outline for the whole story didn’t quite do it for me. It would forget relatively unimportant things, like that a character had said something to someone, then three scenes later misremember what they actually said.
The issue is that some work is just something we want to be done, and other work is something we want to do for the joy/satisfaction of it. And a big part of the conflict is that the same task is likely to fall into different categories for different people. When I read a novel I don't really care about the author or what they went through to write it, but the author obviously feels very differently.
> It's like when you hear a conservative pundit claim that all antifa are weak people who need extra genders, and then in the next breath complain that they are an effective, brutal, brick-throwing, pundit-punching street militia.
Those two aren't mutually exclusive.
I haven't met many (any?) Antifa members and wouldn't begin to categorize them, but a person can both want non-binary genders and throw bricks.
I believe the claim is that *saying* "only weak people need extra genders" is incompatible with also *saying* "they are physically dangerous due to capacity for violence".
The comment does not itself appear to be making the claim "wanting extra genders is weak", but rather is criticising those who do say that.
> It's not saying "wanting extra genders is weak", it's criticising those who do say that.
Sure, I get that. I wasn't trying to make that point.
I take issue with the idea that weak people can't be violent or physically dangerous.
It depends a bit on what kind of weakness they mean, but physically weak people can still throw bricks or pull triggers and emotionally weak people can still lash out when they feel they have no other choice.
I'd actually argue that weak people are more likely to be more violent if/when they do lash out because it may be a last resort for them. Bottle things up for too long and they tend to blow up.
> So now the issue is that it doesn't give you good feelings?
The issue is that novels used to have value because they required effort and time. Now being able to produce one means its value dropped significantly. Only for the artificial ones, though, of course.
> But I’m still paying the price for that: every time I log in to my bank account now, it’s like peeling barnacles off the hull of a ship to get rid of all the new charges that Apple and Google have concocted.
That part isn't very convincing. I have no charges on my bank account from Apple or Google.
Anyway, I think a potential reason to reject AI is what Kurt Vonnegut laid out in his 1952 novel "Player Piano": do we want automation to take away jobs we actually like? I highly recommend reading this book; it is once again very relevant today.
I think this statement concisely embeds the premise that might be the root cause of much of the problem.
That premise is "For something to have worth, someone has to be willing to pay for it".
That is only true for the narrowest definition of worth. If worth is reduced to a property to facilitate trade and nothing else then it works.
Worth can also carry a sense of non-tradable value; it can be meaningful. You can do something because you like doing it and someone else could be glad you were doing it. Requiring a financial transaction to turn that goodwill into self-esteem seems to be a fundamental problem with our society.
I think that the concern that AI will take away creative jobs is missing the forest for the trees. I want to write more about my thoughts on this but the gist of it is those jobs may become rarer, but that doesn't have to mean that creative endeavors will cease. I think we need a broader reimagining of what employment means to individuals and to society.
I enjoy understanding what my programs do to the deepest level, so making or using AI are both boring; they remove the fun part of programming and leave only the boring parts (mainly debugging). I haven't liked the current ML field since the beginning (early 2010s in my case) for this reason.
I want a tool, not a slave. I don't want it to be "smart", but an extension of my body. A thinking body part is always more annoying to deal with, because you have to reverse-engineer what it's doing to get it to do what you want.
I don't think this reasoning applies to everyone. I think it's fine for other people to use ML algorithms. I just don't want them myself.
Also don't forget about deskilling. Right now it's fine because you are able to debug. But it would gradually get a lot harder if you are not flexing that muscle.
> The upside was a few years feeling like I was part of the future as I sipped my lattés and floated through the dawning post-industrial era with my sleek silver Apple gadgets.
This too. The author just has the drama on everything turned up to a thousand percent. It’s incredibly irritating and made me stop reading.
There's quite a bit of hyperbole in the article, but that part reads like a marketer's dream. Imagine that: The idea that all you need to do to solve your social cohesion problems is to buy something. How incredible would such a proposition be!
We all know very well that buying something isn't the difference between whether our peers will or won't engage with us. However, devaluing others into superficial strawmen is an accessible coping mechanism for social rejection.
When children say something like this, it's because they're not aware they're being bullied and buying the <thing> isn't going to fix it.
When an adult says it, it's a bit more concerning - it shows that they perhaps have certain unaddressed social anxieties, or are avoiding dealing with their antisocial behaviours that are leading to their social rejection.
Probably, I was also objecting to that (and also don't live in the US). But I wouldn't read too much into that sentence: it's just part of the author's self-indulgent style.
Really? Even not being from the US, I can attest that simply carrying a silvery laptop or a phone with the half-eaten apple logo literally opened doors. Salespeople would have nearly empty iPhones besides their personal Androids just for the purpose of showing off.
The "one of us" mentality has shifted to other things over the years, but some industries still live on the image.
In the US, Apple has done an all time great job marketing their products.
I don't think there is much more to it. "If I didn't buy Apple it would hurt my career" is obviously completely absurd, but how else can you grow a company to be worth 3.3 trillion selling tech gadgets at a massive premium?
It is easy to convince oneself that this marketing is factual reality after spending so much money. I mean, that new iPhone wasn't an expense, it was a career investment!
I have a serious question: is there any good collection or list of people who have rejected technologies when introduced?
I'm not talking about entire societies of people like the Amish.
I'm talking about people who otherwise use technology, but then do something like this publicly etc.
I want to know if time even remembers these people beyond that one tidbit. I am curious if they went on to do anything else or what technologies "caused" them to defect.
>is there any good collection or list of people who have rejected technologies when introduced?
Sounds hard to curate. Do you have a specific threshold of notoriety in mind? There could be tons of average people doing this and no good way to find out about it.
I mean, Stallman, if you count software commercialization as technology.
There are plenty of game developers who reject game engines in favor of building their own from varying levels of scratch. Jonathan Blow, for example, made a number of successful and artistically meaningful games that heavily leveraged his custom tech stack.
I refused to use ratcheting spanners and air tools and other useful things when working on cars or bikes. I was so very wrong.
There is a time and place for an air powered ratchet or big rattle gun. To think “my muscles will atrophy and I’ll stop thinking of clever ways to loosen that thing” is just wrong. It’s confusing the desired outcome with the method.
I did resist copilot up until recently and now laugh at myself. Use it to power through the boring template crap and leave yourself the juicy morsels. It’s faster and more satisfying.
> I did resist copilot up until recently and now laugh at myself. Use it to power through the boring template crap and leave yourself the juicy morsels.
I don't want a tool to help me power through the boring template crap.
I don't want to power through the boring template crap.
I don't want the boring template crap to exist.
I want to be actively working on things that make the boring template crap cease to exist.
Advanced tools are really useful when you get to the intermediate level. Then you can speed up the boring stuff and spend more time on advanced things and learning.
There is still value for beginners to do things the hard way for a while. Write the unit tests by hand, see how they all repeat the same pattern 90% of the time. Now you see the pattern and you can use LLM-assisted intellisense to speed it up.
And because you know what they're supposed to look like, you can see when the LLM goes off the rails.
> Because AI is no good for us — no good for our minds, creativity, or competence — and as it gets jammed down our throats, we are the only ones with the power to refuse.
I think there are a lot of risks with AI, but I'm not convinced that it's intrinsically bad for "our minds, creativity, or competence". In many ways it's let me be MORE creative in the ways I want to be, by helping me overcome obstacles that were always blockers in the past.
"AI" automates the simple things with crazy efficiency.
Like I did a bunch of RSS-feeds for sites that don't have them (or they're crap quality). I just gave Cursor (Claude 3.7 I think) the HTML page and told it to write a parser that generates an Atom feed with Go from the page.
In most cases it was right on the first go, a few I had to adjust to make them look right in FreshRSS.
It even automatically suggested caching entries in a sqlite db to reduce load on the original site. This was for feeds that only have a link with no content, the application opens the link, fetches the relevant content and adds it to the custom feed.
random CLI app I don't know how to use > explain in plain English what I want to achieve > ChatGPT outputs the command I need
Ok, not random but I use it for FFmpeg all the time. Example from yesterday when I needed to convert a TrueHD audio file into stereo FLAC: https://i.imgur.com/5ib99qh.png
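For the curious, a plausible shape of that command (filenames hypothetical; the actual one is in the screenshot):

    # Downmix the TrueHD track to stereo (-ac 2) and encode it as FLAC
    ffmpeg -i input.thd -ac 2 -c:a flac output.flac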
I'm still waiting for someone to create a SLM (small language model) specifically focused on just being an interactive man-page.
"I want to find all mp4 files, convert them to AV1 and move them to this directory" -> SLM generates script and maybe even runs it automatically Claude Code -style. All locally with no internet needed.
This exists, just use https://github.com/sigoden/aichat with local ollama and a model like qwen2.5-coder:7b or better (e.g. gemma3:12b).
Add this to ~/.bashrc:

    # Bind Alt-e on the command line to replace the typed text with a command
    _aichat_bash() {
        if [[ -n "$READLINE_LINE" ]]; then
            # Ask aichat to turn the natural-language line into a shell command
            READLINE_LINE=$(aichat -e "$READLINE_LINE")
            # Put the cursor at the end of the generated command
            READLINE_POINT=${#READLINE_LINE}
        fi
    }
    bind -x '"\ee": _aichat_bash'
But if you don't like the result, you can press Ctrl-Shift-_ (bash has emacs keybindings) to undo and try something else. You can also put a # mark in front and hit enter, then up arrow, then Alt-e, so you can see what created the command.
You may be able to boycott it now, but I am pretty sure there will come a time where that's not feasible if you want to participate in society, the same way it's no longer feasible to "boycott the internet".
Lots of people never use the internet, just as lots of people don't have a smartphone. Both are minorities, but they happily get along and fully participate in society anyway.
Oh, I do know a few people who legitimately don't use the internet and still participate in society. It's more possible than it seems when most people are online constantly.
> create an ad-skipping device that uses so-called "artificial intelligence" to detect commercials during sports broadcasts and automatically mutes the tv
Is someone working on this? I'd love to contribute. I have this idea every time I see someone watching commercial TV. The volume (and volume!) of ads is insane.
I was easily able to make a podcast ad remover with LLMs, but real-time ad muting of a video stream is something I haven't tried yet (possibly easier because of closed captioning?).
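A crude sketch of how the real-time version might look, assuming a Linux box with PulseAudio and the broadcast available as a capture device (the device, threshold and the 90-second ad-pod length are all guesses):

    # Mute the default sink whenever ffmpeg spots a fade-to-black,
    # then unmute after an assumed ~90-second ad pod.
    ffmpeg -f v4l2 -i /dev/video0 -vf blackdetect=d=0.3 -an -f null - 2>&1 |
      grep --line-buffered black_start |
      while read -r _; do
          pactl set-sink-mute @DEFAULT_SINK@ 1
          sleep 90
          pactl set-sink-mute @DEFAULT_SINK@ 0
      done

Closed captions would likely be the better signal, as you suggest, but that needs a caption decoder in the loop rather than a one-liner.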
> But we can, for example, click down below Google’s AI offerings to look at actual links. And we can generally go about living our lives in our sad old way without the benefit of AI “personal assistants”
I don't think this will be possible for much longer. Google is getting so unusable, I am actually starting to believe that they are intentionally screwing their traditional search product to force users to "onboard" on to their AI results instead.
I'm not afraid of AI (LLMs). In fact, I think it has many useful applications that will come to light in the next few years.
I'm terrified of AI companies. They have no qualms about destroying economies and societies, burning the planet down with carbon emissions, whatever they think will make it more likely for them become the Google or Amazon of AI. They will fuck the rest of us over without a second thought, if it means they might win.
100% agree. AI is trash; increasingly invasive, life-destroying trash. There is no worse category of trash. There is nothing it cannot do which people cannot already do. The difference might be measured with various KPIs, but the material difference is that the output of AI is bereft of meaning. By definition, because it is not sentient it lacks intent. So it is all for nothing, means nothing and is nothing. Reject it.
> Unwary, I fell for the techno-optimism of the past two decades and ended up with a diminished attention span and a bunch of mysterious subscription charges to show for it. Well. Fool me once, shame on me. Fool me twice, shame on Sam Altman. I know, much better now, the folly of turning over my own mental powers to a bunch of techies promising a brilliant future.
> I’m not making that mistake again. Nor should you.
Context:
> The people pushing AI now are the same sorts who spent the 2010s promoting web 2.0 as a new vision of freedom and global connectivity, all while destroying traditional media and ripping off as much private data as they possibly could and cheerfully selling it to advertisers.
> This recent history raises an important question: why in the world should we trust these people ever again?
> These are remarkable achievements. But they do infantilise us. I’m pretty sure that, if my phone were taken away from me, I couldn’t find my own way from my home to my place of work.
So many issues in this article, but I'll only address a couple.
The author seems to be saying that AI will somehow deprive people of their creativity, but I can't fathom that at all. It's a crystallization of the AI-making-art conundrum that so many have. But I see no reason why people won't continue to be creative on their own. Not if their creativity is genuinely springing from within. Nobody is being forced to use AI to create works of art, or even assist in it. Just as nobody is being forced to use the spell check or auto-complete features in word processors. They're just there as an option for those who choose to utilize them.
I do see how these things can lower the financial value of artistic works though, which I think is the actual issue that many have. It's the fear that AI doing art will lead to less earning potential by artists, which is a different issue.
And the author alludes to the creation of further dependency on Big Tech such as OpenAI. This totally ignores the fact that there are quite a few (actually) open models out there. One can do a fully local setup, use a model service such as OpenRouter, or even self-host a model on a GPU service such as Runpod. There are all these options available, depending on user preference and skill, so one doesn't have to become dependent on Big Tech's gated offerings.
Overall, the only thing I can really say is I hope the author is very close to retirement or already there, because if they keep this stance on AI they'll eventually fall very far behind, unless they're into plumbing or a similar extremely hands-on field.
I watch synthwave videos on youtube. I focus on low view videos to encourage/validate people that create. My feed is now full of AI synthwave to the point I stopped. So now the hundreds of people I used to encourage no longer have my encouragement. Multiply this by everything. This isn't free trade where it lifts everyone up. People only have so much attention to give out, and AI slop is crowding out actual artists for attention.
> AI is more or less the same thing — it uses our wonder to convince us of a simulacrum of intelligence when what we are really witnessing is, in a sense, our own childish excitement at a trick of anthropomorphisation
This claim would stand better if it wasn't able to solve math and physics problems I struggle with.
I guess for every grift riding the AI hype there's the other side, seeking to capture an audience that rejects it by making super hyperbolic statements, lol. This doom and gloom is also getting just as tiring.
Like imagine rejecting the internet in its entirety because the "dotcom" era was way overhyped. That's what the author sounds like right now. Whatever rises out of the ashes of this hype will be far more useful than what we had before, and we'll all move forward.
I used to enjoy learning to understand the small technical details of the design work I did. I spent a lot of time learning HTML, CSS and JS and how to use them properly and effectively. It used to be easy, when my brain was more plastic. Now, not so much (I got old).
So, along comes AI, which promises to solve my problems - I won't need to learn the messy details because I just sort of describe what I want and AI handles the details.
God, what a disappointment. I don't know what I was thinking that AI was supposed to do for me, but this doesn't seem like what I wanted. I'm missing the details, which is where the interesting bits lie.
Probably why I avoid management - sure, managing people and things has details, but those details aren't interesting to me. I probably do need AI to help with that bit … except, hahaha, upper management doesn't need me there, or anywhere, because AI can do that stuff (but probably not the messy details - those were "handled by someone else's code" which was just copied into the system).
The future is kind of disappointing right now, in that regard.
Leaving aside the question of whether AI is/will ever be actually capable of what tech bros claim, I don't think there is anything inherent to AI that will cause these issues. The problem will be with society's mode of production. We have a society now where people could be doing less work but are instead forced to do—as Graeber calls them—Bullshit Jobs.
Bertrand Russell and John Maynard Keynes said almost a century ago that as technology advanced people would have to do less work. I think if AI lives up to the hype it could be a tremendous boon for everyone if we can restructure our societies to take advantage.
I found one of the comments more interesting. It shows how people can be so terribly wrong in one case and correct in another.
>Text predict combined with data mining’…is all 99.9% of [what we have quaintly come to define as] human intelligence is, too.
>Learn a trade, Sam. Start a manufacturing company. Go and dig ditches. Perform brain surgery. Fly a chopper. Provide hands-on care for someone who is ill. Be a stay-at-home dad, even. AI can’t do anything,
I agree with boycotting AI, and personally go as far as refusing to ever speak to a voice recognition system. As one commenter here puts it....I picked my lane.
And sure, it's sometimes difficult, and I feel a bit bad for forcing and blundering my way through to the human who then has to deal with my obstinacy.....some of whom...are truly outraged at my flat-out refusal to just "go along to get along".
Though to be perfectly honest, I am a pragmatist and so will exploit whatever technology becomes mainstream, for my benefit......but at one remove.....through business-only services and identities that I have.
And as a hard-core refusenik, I can see how sophisticated fingerprinting across different platforms has become, and how various AIs are getting better at faking a human presence attempting to interact. With, what, 5 billion people online, trying but now failing to connect because AI out-clicks them, there is a very real chance of things going horribly weird and becoming non-functional.
See if it's useful, ignoring the religion-like hype, and use it for what it's worth. And stop calling it "AI" because it doesn't have anything to do with "intelligence".
"LLMs will write the code for you" is bullshit. "LLMs will get you some starter or reference info if it's a widely discussed subject" is true.
Apple products are crap, software wise. They're just less crap than the competition so I use them.
Those automate things that humans find tedious. Modern AI automates things that humans find give their lives meaning. If we automate creative and intellectual work, what else is there?
Junk writing from someone who doesn't understand AI.
I use ChatGPT in many useful ways. For instance, I ask ChatGPT to explain my texts back to me to help prevent misunderstandings due to misleading sentences. How's that a bad thing? This is incredibly useful when there's nobody around to proofread my texts.