I don't think the two PoVs are incompatible. Lower tolerance for mess drives people to clean earlier and more proactively. The labor equity issue is a direct consequence of differences in taste. There are lots of group dynamics that work this way - not just domestic labor.
The real question is: what can you do about it? My claim is that "I have a problem with your mess" is a better starting point than "you have a problem with mess." Some parts of the article get this, and others don't.
> I think there's nothing wrong with seeing your partner struggling with something, talking to them about it and coming up with ideas to help them.
There is something wrong here: the assumption that your partner is the one struggling. In clinical cases of hoarding and ADHD, that can definitely be the case - but all hints, nags and nudges are useless in those clinical cases anyway.
The fact is that in your day-to-day neat freak vs. messy partner situation, the person struggling with mess is actually the neat partner, not the messy one. Most messy people I know, and have worked with, are quite content with mess.
> Antonia Colins, who runs the website Balance Through Simplicity, has two adolescent daughters, one of whom struggles with neatness
Another possibility: the daughter is at ease in both neat and messy environments, but the mother is only at ease in neat ones. In this case, the conflict is being driven by the mother's struggles - not the daughter's.
Taking the unmarried marriage therapist's advice and escalating your nagging to email is a disastrous idea. You can't performance-manage a relationship. Nagging and trying to change people doesn't work. Communicating your needs and reaching a compromise does.
One of the biggest warning signs in a new(ish) relationship. If she is trying to change you to her liking, run away as fast as possible.
BTW, thanks for mentioning the obvious. There are so many cases where parents claim problems with their children, which instantly go away once the children are allowed/able to leave the parental household. Control freaks are unhealthy, to themselves and to others.
Of course not. Only titans of industry and the landed gentry of the executive class are allowed to "move fast and break things", "ask for forgiveness rather than permission" and take "imperfect action rather than perfect action."
It's more morally permissible for corporate decision makers to install a global surveillance complex than for civil servants to attempt to regulate it.
That's a misunderstanding of the banners. The requirement is to get consent to track someone and/or process their personal information in a way that is not strictly necessary or covered by contract. The mechanism that does the tracking is irrelevant.
Everyone knows that bad actors will continue to behave badly in the face of the law. This isn't the insight you seem to think it is.
Really, PG's tweet has little to do with game theory or anything else. It is a first-world-problem whinge about having to click through cookie banners. Assessing the "actual outcome" of complex regulation and legislation is a task beyond the scope of a single tweet.
It might be useful for Graham to determine what claim he is trying to make in the first place. Is he rebutting a particular EU representative for boasting about how good they are at regulation? Or is the idea that the EU shouldn't have the audacity to attempt to regulate in the first place?
Agree. How much corporate propaganda are people consuming that legislators are seen as wholly responsible for the bad behavior and malicious compliance actions of corporations?
What does it say about the relationship between businesses and consumers that the first response to this bad behavior is to shout "look what you made them do!"
Seemingly it is everyone's fault except the bad actors themselves.
It's so depressing. Many of the people pointing the finger at the regulators for the annoying cookie banners don't actually see the web site/app *as* a bad actor. The fact that the site had been tracking tons of extra data via cookies without users' consent or knowledge was totally fine to them as long as it wasn't inconveniencing them in any way. The cookie banner is an inconvenience to their mindless consumption, so NOW it's a problem, and they just don't care what the solution actually is as long as the thing goes away.
I've seen this attitude from tech people, too, so it's not just a matter of tech ignorance or illiteracy.
I only got a cookie banner on the first of those links, and as far as cookie banners go it wasn't very annoying; I clicked "no" and it went away immediately.
> The cookie banner is an inconvenience to their mindless consumption,
It’s an inconvenience to people who care about privacy and use browser configurations that don’t store state between visits.
So now in an attempt to protect regular users, the law ended up hurting users that already cared.
Additionally, the shadiest and most incompetent sites still just track people with no cookie banner. So the law doesn't really provide protection against uncooperative parties, whereas privacy technology does.
> It’s an inconvenience to people who care about privacy and use browser configurations that don’t store state between visits.
>
> So now in an attempt to protect regular users, the law ended up hurting users that already cared.
Fair point about the banners mostly "hurting" users who care about privacy (but really, how much does it "hurt" you? I'm "hurt" more by the fact that I have to fold laundry several days a week).
But, I take major issue with you saying that the LAW ended up hurting users. Companies are under no legal obligation to make those banners as obnoxious as they are or with so many dark patterns (I sometimes don't know if I'm even enabling or disabling tracking with the way they word it). That's squarely on the web site owners pulling that nonsense.
> Additionally, the shadiest and incompetent sites still just track people with no cookie banner. So the law doesn’t really provide protection against uncooperative parties, whereas privacy technology does.
I agree that the only/best way to protect yourself is via technology and not by relying on people obeying the law.
However, if this is also an argument against having the law, it's an incredibly weak one. You can apply that logic to argue that NO laws are effective. People still murder even though it's illegal- must be a bad law, no?
> Companies are under no legal obligation to make those banners as obnoxious as they are
Actually, every single lawyer we asked about implementing the GDPR advised us to have one of those obnoxious banners. Because the law is so ambiguous and the penalties so high, it is better to play it safe. And we have no ads nor tracking at all on our product website.
You can ignore your lawyer's advice if you want, but it's a bit like a lawyer office ignoring my data security and backup advice: assuming a huge amount of risk.
If you are using cookies for user preferences/settings, then they require consent. If you are only using cookies for session information (like a shopping cart), then you don't have to get consent. Your lawyers know this.
Frankly, I doubt the veracity of this anecdote. But even so, I'm willing to bet that the lawyers in this story did not tell you that the banners have to cover half the screen and have ambiguous wording to intentionally confuse visitors to your site. When I say "obnoxious banner" I'm not being redundant: not all banners or popovers are "obnoxious".
Are you a lawyer? Are you willing to assume the liability I may incur if I follow your advice? Or are you at least a business owner/manager who's had to deal with this crap?
Or are you in the peanut gallery, willing to believe there is some conspiracy where all websites suddenly decided to obliterate their users' UX when the GDPR appeared, just because they are evil?
I live in the EU and I can tell you: ALL banners/popups are obnoxious. They ALL get in my way when I want to do something entirely different. As a (non ad/user-tracking) business I would never afflict them on my users if I had a choice.
The productivity loss here in EU since the GDPR would be staggering - if there was much productivity to begin with, of course.
The GDPR is about all kinds of tracking, of which what you can block locally at the browser level is only one part. So yes, even those users who already cared enough to block/discard cookies benefit.
Again, this should have been a >browser feature< instead of a website feature. I trust Safari and Firefox WAY MORE than I trust the website's owners to actually block cookies and protect privacy, as well as implement this in a better UX.
The proper way to have done this would have been to go to the W3C or WHATWG and propose an extension to HTML for sites to define an opt-in manifest or something similar.
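Purely as a sketch of what such a manifest might look like - everything here is hypothetical, since no such standard exists - the idea would be that a site declares its cookie/storage purposes up front and the browser, not a site-supplied banner, decides what is allowed:

```python
import json

# Hypothetical manifest a site could publish at a well-known URL,
# declaring each cookie/storage purpose. Nothing like this exists
# in HTML or any W3C/WHATWG spec today.
manifest = json.loads("""
{
  "purposes": [
    {"id": "session",   "essential": true,  "description": "Shopping cart"},
    {"id": "analytics", "essential": false, "description": "Usage statistics"},
    {"id": "ads",       "essential": false, "description": "Ad personalisation"}
  ]
}
""")

def allowed_purposes(manifest, user_opt_ins):
    """Browser-side logic: essential purposes are always allowed;
    everything else only if the user opted in via browser UI."""
    return [p["id"] for p in manifest["purposes"]
            if p["essential"] or p["id"] in user_opt_ins]
```

With something like this, the consent UX would be uniform across all sites and enforced by the browser, instead of reinvented (badly) by every website.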
More like a content blocker. The websites can ask, but there's no guarantee the user will accept. Really no different from browsers asking before showing popups.
If it only were that simple. When the GDPR came out, a lot of confusion and misunderstanding ensued, not only regarding the damn cookie banner. Even totally legitimate health-care providers started to collect signatures to be on the safe side. I still remember receiving a basic GDPR training where we were told that opt-out/signing is only necessary if the entity is planning to do weird stuff with your data. IOW, if someone wants you to sign, they are planning a bad move. Then my bank wanted a signature. And a month later, one of my healthcare providers wanted a signature. After a chat with him, I learnt that his lawyer told him to collect the signatures just in case, and made him believe that if someone doesn't sign, that is a problem.
So now we have this situation where providers were trained to play the GDPR in such a way that they will never have a problem, no matter what they actually do with the data.
And consumers are pissed because they are made to sign things which essentially reduce their rights...
And if someone (like me) thinks the EU did a half-assed job there, the downvotes rain in.
That is so wonderfully naive that I had to laugh out loud. The fairytale of the manager who suddenly gets fined big-time for his/her decisions is just that, a fairytale to pacify critics.
I kinda hate saying this, but Microsoft (or at least GitHub) got it right in a week. Some OSS publishers also got it right, like Nexedi, and some I'm slightly upset with (GitLab), but it is true that for the commercial internet it seems to be invasive. I do not use the commercial internet much, and like any person with Greasemonkey, I took a rainy afternoon to remove the most annoying banners (I think now I use a plugin that does it for me).
The fact that you have to use a plugin or other technical remedies to fix the cookie banner situation is all the proof we need that the EU totally fucked up. It is easy to declare that you just need to install this or that to get an obstruction-free internet again. But it is also very, very elitist. Not even 1% of the population is truly capable of handling that.
I use a plugin to block ads. That it also blocks most cookie consent banners is a positive side effect. It does not block all of them. It doesn't block the GitLab banner, or EU website banners, because I think it automagically blocks cross-domain JS injection, which, you know, is good because less attack surface, and every browser should do the same without a plugin (CORS doesn't go far enough imho).
The same people also complain they cannot use by default said websites unless they share all their personal data with them. Half-assed, indeed the measure is. But it also reflects the majority thinking, unfortunately. So unless there's some popular pressure to full-ass the measure, we will still have banners and misused personal data.
> they are made to sign things which essentially reduce their rights...
But not as much as you might think. Consent under GDPR only applies to what you were informed of when you consented, and you're allowed to revoke consent (with prospective effect) at any time.
Yeah, but these are rather theoretical practicalities. In the majority of cases, consent is coaxed out of the consumer. If you show up for an MRI and you get a piece of paper with the comment "It is for data protection", almost nobody has the time or nerve to actually read the text, and even fewer people have the inclination to decline to sign. After all, they (sometimes desperately) need the service. Let alone that the accompanying comment is deliberately phrased such that some people will believe they need to sign in order for their data to be protected. Dark patterns all over the place. My bank implemented the consent (for a while) as a recurring pop-up after login. Yes, you get the popup as long as you decline to sign it, over and over again. I think they gave up on that practice, and it was partly a dark pattern (IOW, there were two buttons to decline to sign, and one would result in the popup recurring). Examples are all over the place if you walk an EU country with open eyes.
Under GDPR, your MRI example and your bank example do not qualify as consent. (For the MRI example, they might be able to claim basis b, but only if they're doing stuff that you could actually have requested.)
I don't feel confident a complaint would be easy to get through. After all, my MRI example is standard procedure; good luck making a case against that. Besides, lawyers cost money.
Would an adjustment of the GDPR that forces websites to respect DNT solve the problem? Am I missing something obvious? (Aside from the pain in the ass that is amending laws.) /gen
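For what it's worth, a site that wanted to honor those signals wouldn't need any new technology: DNT (and its successor, Global Privacy Control, sent as `Sec-GPC`) are just ordinary request headers. The header names below are real; the function and its policy are my own illustration of what "respecting DNT" could mean server-side:

```python
def tracking_permitted(headers):
    """Treat DNT: 1 or Sec-GPC: 1 as a refusal of consent, meaning
    no tracking (and hence no banner needed for that visitor).
    `headers` is a plain dict of HTTP request headers."""
    if headers.get("DNT") == "1" or headers.get("Sec-GPC") == "1":
        return False
    # No signal either way: under GDPR the site would still need to ask.
    return None
```

The catch, of course, is that DNT was ignored so universally that browsers are deprecating it, which is exactly why a legal mandate to honor it keeps coming up.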
The funny thing is it's not just corporations. When you open the German state railways' website, somehow you get a GDPR overlay. When you open the German revenue agency's website, you get greeted by a cookie banner on top.
I call upon all German users of this website to write to their MPs! Obviously the German civil service is a bad actor! The German deep state is plotting to discredit our beloved eurocrats and must be shut down! Den Sumpf trockenlegen!
> I call upon all German users of this website to write to their legislators! Obviously the German civil service is a bad actor! The German deep state must be shut down!
I understand the joke you're trying to make but you clearly don't understand the relation between germans and privacy/tracking regulation to think this makes sense.
It's not supposed to make sense, it's supposed to show the absurd position of the post I'm replying to. The less it makes sense, the better.
And I only picked Germany, because it's one of the few EU countries where stuff like that is rigorously enforced. In the rest of the EU, everything unrelated to the common market and/or getting money from the EU is at best haphazardly enforced.
If you want to, check out france.fr, a website maintained by an agency of the French tourism ministry. (After disabling the three dozen annoyance-blocking extensions everyone must use nowadays, of course.) What do you see?
Apple is doing the same thing, passive-aggressively doing things like removing support for pinning webapps / PWAs / whatever they're called to your home screen, then backtracking after backlash. Or Microsoft with their browser choice screen or Windows releases without media player. And even those aren't as bad as the malicious compliance of cookie banners.
A site can serve ads without tracking (and the banner) - the ads just couldn't be targeted at individuals. Instead they'd have to guess what ad was appropriate ("Rolling Stone" could serve everybody ads for Taylor Swift's latest album without a banner, etc).
>> A site can serve ads without tracking (and the banner) - the ads just couldn't be targeted at individuals.
The biggest problem with online advertising is not tracking users. It's a lack of trust between advertisers and pretty much everyone else. If you're going to pay for an ad, you want to be sure it was seen by a real person. I'm not sure that's the concern any more because click-through is more important than "seeing" an ad. Regardless, the goals are to make sure it's easy for a given advertiser to get on many web sites, easy for a site to get ads, and also possible to prevent fraud since there will obviously be multiple parties involved.
I suspect tracking users was an offshoot of just verifying that users were real to prevent fraud in the ad world. Not saying any of it is OK, but it seems like the way to prevent tracking is to find a way to verify authenticity while also preserving privacy.
> ("Rolling Stone" could serve everybody ads for Taylor Swift's latest album without a banner, etc).
And that would be fine, as long as Swift was willing to pay for it. But the tracking and personalized ads thing was a numbers game; personalized ads have a higher conversion rate, thus are more valuable, thus we need data to personalize ads.
This is counterfactual. Many things survived on exactly that model before hyper-targeted ads. And besides, with targeted ads the middlemen take most of the cut.
That was your choice in the end, but this was the problem - people didn't have the choice, or the awareness. The EU law fixed this, but instead of corporations going "Hmm, maybe we shouldn't track users", they instead went with malicious compliance and implemented annoyances - because data is more valuable for a lot of websites than whatever said website is peddling.
"I would like website operators to assume that I consent to being tracked, so I'm annoyed that website operators are not allowed to assume that everybody consents to being tracked."
> How much corporate propaganda are people consuming that legislators are seen as wholly responsible for the bad behavior and malicious compliance actions of corporations
Why do I need to be "consuming corporate propaganda" when I just hate that I need to dismiss banners on every news website, when I didn't have to before the regulation?
I don't care about being tracked. But now that all websites need to cover their asses in response to regulation, I'm forced to figure out which button I need to click on to read content, and these websites don't even appear to save my preferences whether I agree to be tracked or not.
Objectively, the outcome of this regulation is that my experience is worse. Are the companies bad actors? Sure! Sounds like the EU should account for companies' bad behavior instead of forcing the internet to be more annoying.
The experience you describe is the fault of websites which chose to make things that way. The article goes into more detail on this point: There Is No Cookie Banner Law.
It's important to note that we didn't have to go through the banners after the law, either. We only had to go through them after website operators intentionally picked the most disruptive and annoying popup to serve us. We can blame them. They chose to add it when they could have legally not added anything at all.
> Again, from the perspective of users, the experience got worse post-regulation.
You're right, the experience got worse.
But the underlying point is that there are two ways this could have gone. The GDPR simply mandated that if companies track you, they have to get your informed consent. So one way it could have gone is that companies didn't track anonymous users.
Notice this doesn't apply to non-anonymous users. By definition, once you've logged in, you've revealed who you are and agreed to a far more onerous privacy statement. So one way companies could comply is just to make you log in to see some content (and track you that way), and not bug you otherwise.
But they didn't go that way. They insist on tracking you regardless. Perhaps you don't agree, but I find this even more annoying, because I install tracking-blocking extensions and that breaks some sites. To me, the world would have been a much better place if they had just gone along with the intent of the damned law and not tracked people who are trying to remain anonymous.
To be fair it's not so bad. Firefox dismisses the cookie banners for you [0], and I have extensions that block the worst of their effects. If you are using a browser from an ad company and are complaining about cookie banners (which almost to a man use a deceptive UI to encourage you to accept them all so the ads work better), then I don't have a lot of sympathy. Me rejecting as many cookies as I can and then blocking their trackers is the worst possible outcome for the websites trying to garner some ad revenue, of course, but shrug, the industry could have acted in good faith, and didn't.
Again, from the perspective of users, the experience got worse only after websites decided for themselves to add annoying cookie banners. Not after the regulation.
> make them go back to being less annoying
That is a request between you and them (the websites), unless you're talking about legislating a banner-less opt-out, or maybe just willing to file a complaint against the website with a data protection authority, if the banner is already illegally annoying.
Websites have the right to annoy their users with cookie popups, with or without the GDPR (ironically, the GDPR actually has some protections here; websites simply break the law). Unfortunately, it seems many are choosing to exercise that right because they make money doing so.
Ditto. I started out excited about LLMs and eager to use them everywhere, but have become steadily disillusioned as I have tried to apply them to daily tasks, and seen others try and fail in the same way.
Honestly, LLMs can't even get language right. They produce generic, amateurish copy that reads like it's written by committee. GPT can't perform to the level of a middle market copywriter or content marketer. I am convinced that people who think LLMs can write have simply not understood what professional writers do.
For me the "plateau of productivity" after the disillusionment has been using LLMs a bit like search engines. Quick standalone summaries, snippets or thoughts. A nice day-to-day productivity boost, but nothing that's going to allow me to work less hard.
> For me the "plateau of productivity" after the disillusionment has been using LLMs a bit like search engines. Quick standalone summaries, snippets or thoughts. A nice day-to-day productivity boost, but nothing that's going to allow me to work less hard.
And it only took one of the most computationally expensive processes ever devised by man.
That's if you ignore how much energy you'd burn searching through dozens and dozens of articles that may or may not give you the answer you're looking for. I'd say the electricity that LLMs burn is nothing compared to my energy and time in that regard.
>> Honestly, LLMs can't even get language right. They produce generic, amateurish copy that reads like it's written by committee.
I've had the same experience. I heard tons of people clamoring about LLMs' ability to write SEO copy for you and how you can churn out web content so much faster now. I tried using it to churn out some very specific blog posts for an arborist client of mine.
The results were really bad. I had to rewrite and clarify a lot of what it spit out. The grammar was not very good, and it was really hard to read, with very poorly structured sentences that would end abruptly, among other glaring issues.
I did this right after a guy I play hockey with said he uses it all the time to write emails for him, and pays the monthly subscription to have it write all kinds of things for him every day. After my trial, I was really wondering how obvious it was that he was doing that, and what his clients thought of him, knowing how poor the stuff these LLMs put out is.
It says a lot about SEO copy that this is one of the areas where LLMs' low quality doesn't seem to have impeded adoption. There are a ton of shitty content marketers using LLMs to churn out spam content.
>After my trial, I was really wondering how obvious it was that he was doing that and how his clients thought about him knowing how poorly the stuff these LLM's were putting out.
I feel the same way about this stuff as when devs say they push out LLM code with no refactoring or review. Ah, good luck!
>GPT can't perform to the level of a middle market copywriter or content marketer. I am convinced that people who think LLMs can write have simply not understood what professional writers do.
GPT's rigid "robot butler" style is not "just how LLMs write". OpenAI deliberately tuned it to sound that way. Even much weaker models that aren't tuned to write in a particular way can easily pass for human writing.
This is part of the problem with the whole discourse of comparing human writers to LLMs. Superficial things like style and tone aren't the problem, but they are overwhelmingly the focus of these discussions.
It's funny to see, because developers are so sensitive about being treated like code monkeys by their non-technical colleagues. But these same devs turn around to treat other professionals as word monkeys, or pixel monkeys, or whatever else. Not realizing that they are only seeing the tip of the iceberg of someone else's profession.
Professional writers don't take prompts and shit out words. They work closely with their clients to understand the important outcomes, then work strategically towards them. The dead giveaway of LLM writing isn't the style. It's the lack of coherent intent behind the words, and low information density of the text. A professional writer works to communicate a lot with very little. LLMs work in the opposite way: you give it a prompt, then it blows it out into verbiage.
Sit down for coffee with a professional copywriter (not the SEO content marketing spammers), and see what they have to say about LLMs.
Personally, I group all these things under 'style'. Perhaps I should have used 'presentation' instead. You've latched onto that specific word and gone off. The point is that the post-training of these models, especially GPT from OpenAI, does a lot to shape how the writing (the default, at least) presents long strings of text. Like how GPT-4 is almost compelled to end bouts of fiction prematurely in sunshine and rainbows. That technically isn't style, but it is part of what I was talking about.
>A professional writer works to communicate a lot with very little. LLMs work in the opposite way: you give it a prompt, then it blows it out into verbiage.
There's no reason you have to work this way with an LLM.
> You've latched on that specific word and gone off.
No, I haven't. I'm not talking about style, but something deeper. What I'm talking about is something you don't even seem to realize exists in professional writing - which is why you keep thinking I'm misunderstanding you when I am not.
I've worked with professional writers, and nothing in the LLM space even comes close to them. It's not a matter of low quality vs high quality, or benchmarking, or style. It's simply an apples and oranges comparison.
The economics of LLMs for shortform copy will never make sense, because producing the words is the cheapest part of that process. They might become the best way for writers themselves to produce longform copy on the execution side, but they can't replace the writer's ability to work with the client to figure out exactly what they are trying to write, and why, and what a good result even looks like. And no, this isn't a prompting issue, or a UI issue, or a context window length issue, or anything like that.
Elsewhere in this thread someone mentioned how invaluable LLMs are for producing internal business copy. I could easily see these amateur writing tasks being replaced by LLMs. But the implication there isn't that LLMs are any good at writing, but that these tasks don't require good writing to begin with.
>What I'm talking about is something you don't even seem to realize exists in professional writing
I've read hundreds of books, fiction and otherwise. This isn't a brag, it's just to say, believe me, I know what professional writing looks like and I know where LLMs currently stand because I've used them a lot. I know the quality you can squeeze out if you're willing to let go of any presumptions.
You'll notice that not once did I say current LLMs could wholesale replace professional writers, any more than they can currently replace professional software devs. I just disagree on the "not a good writer" bit.
If it's the opinion of professional writers you're looking for then you can find some who disagree too.
Rie Kudan won an award for a novel in which she used GPT to ghostwrite, essentially verbatim (no edits), 5% of the text. Her words, not mine. Who knows how much more of the novel is edited GPT.
>Rie Kudan won an award for a novel in which she used GPT to ghostwrite, essentially verbatim (no edits), 5% of the text. Her words, not mine. Who knows how much more of the novel is edited GPT.
That a professional human novelist was able to leverage GPT for their book isn't disproving the grandparent's post. They knew what good looks like, and if it wasn't good they wouldn't have kept it in the book.
Good writing can also come out of Markov chains. Or even RNGs - if your novelist has enough time to filter the output.
LLMs can't write good stuff. Human writers can write good stuff. When a good writer uses an LLM in their writing process, that writer can certainly produce good writing.
When an AI hypebro who is otherwise a bad writer uses an LLM in their writing process, they still produce bad writing.
Waiting for the author who has used a Markov chain to ghostwrite.
>LLMs can't write good stuff. Human writers can write good stuff. When a good writer uses an LLM in their writing process, that writer can certainly produce good writing.
Give it a rest. The author was quite clear she copy-pasted sections of writing in.
I actually agree with you that professional writers _can_ write/communicate much better than LLMs. However, I’ve read way too many articles or chapters in books that are so full of needless fluff before they get to the point. It’s almost as if they wanted to show off that they can write all that and somehow connect it to the main part of the article. I’m not reading the essay to appreciate the writer’s ability to narrate things, instead I care about what they have to say on that topic that brought me to the essay.
Perhaps the pointless fluff you're describing is actually chaff: countermeasures strategically deployed ahead of time by IQ 180 writers in order to preemptively water down any future LLM's trained on their work.
Then the humans can make a heroic return, write surgical prose like Hemingway to slice through the AI drivel, and keep collecting their paychecks.
Bonus points if you can translate this analogy to software development...
Dario Amodei (Anthropic) pretty much acknowledged exactly that - "mid" - on his Dwarkesh interview, while still all excited that they'd be up to killing people in a couple of years.
> They produce generic, amateurish copy that reads like it's written by committee.
If you were only using GPT 3.5 (free ChatGPT) then your opinion is irrelevant.
With GPT-4 you could directly ask it: "rewrite your previous response so that it sounds less generic, less amateurish, and not written by a committee". I'm not even joking. Just provide enough information and tell it what to do. If you don't like the output then tell it what needs to be improved. It's not a mind reader.
Also GPT-4 is a year old now. Claude 3 is already superior and GPT-5 will be next level.
Yes, I've used GPT-4. The writing sounds better, but it still sucks at writing. Most importantly, it feels like it sucks just as much as GPT-3.5 in some deeply important ways.
If you use GPT-4 day-to-day, you've probably encountered this sense of a capability wall before. The point where additional prompting, tweaking, re-prompting simply doesn't seem to be yielding better results on the task, or it feels like the issue is just being shifted around. Over time, you develop a bit of a mental map of where the strengths and weaknesses are, and factor that into your workflows. That's what writing with LLMs feels like, compared to working with a professional writer.
Most writers have already realized that LLMs can't write in any meaningful way.
I think it is a tooling issue. It is in no way obvious how to use LLMs effectively, especially for really good writing results. Tweaking and tinkering can be time-consuming indeed, but lately I've been using chatgpt-shell [1] and it lends itself well to an iterative approach. One needs to cycle through some styles first, and then decide how to most effectively prompt for better results.
> Most writers have already realized that LLMs can't write in any meaningful way.
I know a professional writer who is amazed by what LLMs are capable of already and, given the rate of progress, speculates they will take over many writing jobs eventually.
> If you use GPT-4 day-to-day, you've probably encountered this sense of a capability wall before.
Of course there is a wall with the current models. But almost every time I hit a wall, I have found a way to break past that limit by interacting with the LLM as I would interact with a person. LLMs perform best with chain-of-thought reasoning: list out any issues you identified in the original output, ask the LLM to review those issues and list any others it can identify based on the original requirements, then have it rewrite the whole thing. Repeat until it's good enough.
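A minimal sketch of that critique-and-rewrite loop, in Python. `ask_llm` is a hypothetical stand-in for whatever chat API you're calling (OpenAI, Anthropic, a local model); it's stubbed below so the control flow itself is runnable:

```python
def critique_and_rewrite(draft, requirements, ask_llm, max_rounds=3):
    """Repeatedly ask the model to list issues, then rewrite to fix them."""
    text = draft
    for _ in range(max_rounds):
        issues = ask_llm(
            "Requirements:\n" + requirements + "\n\nDraft:\n" + text +
            "\n\nList every way the draft falls short of the requirements."
        )
        if not issues.strip():
            break  # the model found nothing left to fix
        text = ask_llm(
            "Draft:\n" + text + "\n\nKnown issues:\n" + issues +
            "\n\nRewrite the draft so that every listed issue is resolved."
        )
    return text


# Stub model for demonstration: flags the draft once, then approves it.
def fake_llm(prompt):
    if "List every way" in prompt:
        return "" if "REVISED" in prompt else "Too terse; expand it."
    return "REVISED: " + prompt.splitlines()[1]  # pretend-rewrite of the draft
```

In practice `ask_llm` would wrap a real chat-completions call and carry the conversation history along, but the loop shape - critique, rewrite, repeat until the critique comes back empty - is the same.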
At work I have found GPT-4 to exceed the linguistic capabilities of my colleagues when it comes to summarizing complicated boring business text.
What if it's a boring business-text summarization task that would otherwise take additional hours of my time at work? Why should I waste my time? I have better things to do. I can leave early while you sit there at work typing like a fool.
> It's something a clever fourth-grader would write.
This level of cope and denial is amazing to witness.
The most powerful (multi-trillion-dollar) companies on the planet are pouring practically infinite resources into developing systems that will ultimately make you redundant.
An early version of AGI is staring you in the face while you call it a "fourth-grader". It won't stay in fourth grade forever.
I don't think I'm particularly in denial about the prospects of AI. I think it's going to be hugely disruptive and could possibly put me out of a job.
But I'd like to posit a hypothetical counterpoint, just to get you thinking. So far, all of the work on AGI has been the result of brute forcing. We've tried to develop a structural understanding of how the human brain works, and we've failed. So we've fallen back to torturing circuits into reorienting themselves into compression algorithms for human knowledge. The mechanisms that these tortured circuits used for doing so, the structures they produced in N-dimensional space to embody that knowledge -- we have very little understanding of how these things actually work under the hood.
I think a lot of the grandiose hypotheses about the future of AGI emerging from this avenue of invention are overly optimistic. Why are we so confident that this brute-force approach will continue to bear fruit for us? At what point will it overcome the long tail of inadequacy that it's currently exhibiting?
The 20th century offers several notable examples of would-be-transformative technologies that have since stalled and failed to live up to their promise. Nuclear power. Space travel. Industries buckling under the weight of their own complexity, suffering from the human inability to keep the emergent externalities in check. Why would AI be any different?
I predict a future where increasing global hardship, conflict and scarcity renders the current type of energy-intensive AI approaches infeasible.
>So far, all of the work on AGI has been the result of brute forcing. We've tried to develop a structural understanding of how the human brain works, and we've failed. So we've fallen back to torturing circuits into reorienting themselves into compression algorithms for human knowledge. The mechanisms that these tortured circuits used for doing so, the structures they produced in N-dimensional space to embody that knowledge -- we have very little understanding of how these things actually work under the hood.
And this is the way. (Machine) learning theory is in some sense a meta-science about how to do science: how to construct, from facts, theories that effectively explain those facts. What you are asking for will never amount to a short set of equations. There is no elegant theory of how to perceive numbers, and this is why symbolic artificial perception, rule engines, spam detection, RDF ontologies, etc. never took off. You're idealizing knowledge as a set of representations without ever reifying how those representations come into existence. We're departing a world of representation toward a world driven by "incarnations": you can't make sense of how a brain works without the help of another brain, which is why so many things are being researched at the intersection of deep learning and neuroscience. I'd even go as far as considering that this is in fact how brains work: they can be composed and decomposed monoidally.
In short:
>a structural understanding
There is no such thing.
>the structures they produced in N-dimensional space [...] this brute force approach
This is a contradiction. I'm not saying there won't be "structural insights along the way", nor that throwing categories into the machine-learning mix won't be useful, but the learning-like aspect that you denote by "brute force" is more fundamental, and in some sense sits above the very process of science.
That's all very well and good from a theoretical, scientific perspective. But we're hooking these things up to real-world applications that often call for deterministic, structural understanding of their inner workings for safety reasons.
Part of me hopes this is true, that AGI (or even worse - ASI) will never be fully realized. Too disruptive.
A counterexample to nuclear power or space travel is integrated circuits. This technology has transformed our society, and we haven't reached the end of it yet.
Our own brains are living proof that intelligence is possible with lower power consumption. I watched a recent lecture by Geoffrey Hinton where he mentioned that future AI hardware based on analog integrated circuits could reduce power consumption by orders of magnitude [1].
It is possible that we will hit a wall and never achieve anything more than Chat GPT++++, but the smartest people in town mostly believe that we will create machines that exceed human intelligence and capability.
We have some understanding of how neural networks work under the hood. The scale of the current models is too vast to comprehend in their specific details, but I think we understand them in principle.
That's well after the AI meta-consciousness understood that it was necessary to destroy all humans to save the planet. GPT-6 was the last of the GPT series.
Perhaps the strangest element of the AI alignment conversation is that what is most aligned with human civilization (at least the most powerful elements of it) and alignment with sustainable life on the planet are at odds, and "destroy humans to save planet" is a concern mostly because it seems to be a somewhat rational conclusion.
GPT-4 is a technological miracle, but it can only produce trite, formulaic text, and it's _relentlessly_ Pollyanna-ish. Everything reads like ad copy and it's easily identifiable.
Fix your prompt. Just accepting the default style is a rookie mistake.
Ask it to "rewrite that in the tone of an English professor" or "rewrite that in the style of a redneck rapper" or "make that sound less like generic ad copy". Get into an argument back and forth with the LLM and tell it the previous response is crap because of XYZ.
AI still can't do art. Tacky AI-generated imagery is mid-2020s clip art, already recognizable to consumers and signalling negative brand associations like "cheap", "scam", "low quality."