I'm in a similar boat and also at a complete loss. People have lost their marbles if THIS is the great AI future lol. I cannot believe Microsoft invested something like $10 billion into this tech and OpenAI; it is completely unusable.
How is it unusable just because some people intentionally try to make it say stupid things? Note that the OP didn't show the prompts used. It's like saying cars are unusable because you can break the handles and people can poop and throw up inside.
How can people forget the golden adage of programming: 'garbage in, garbage out'?
There are plenty of examples with prompts of it going totally off the rails. Look at the Avatar 2 prompt that went viral yesterday. The simple question, "when is avatar 2 playing near me?" led to Bing being convinced it was 2022 and gaslighting the user into trying to believe the same thing. It was totally unhinged and not baited in any way.
>led to Bing being convinced it was 2022 and gaslighting the user into trying to believe the same thing.
I don't think this is a remotely accurate characterization of what happened. These engines are trained to produce plausible-sounding language, and it is that, rather than factual accuracy, for which they have been optimized. They nevertheless can train on things like real-world facts and engage in conversations about those facts in semi-plausible ways, and serve as useful tools despite not having been optimized for those purposes.
So ChatGPT and other engines will hallucinate facts into existence if doing so supports the objective of sounding plausible, whether it's dates, research citations, or anything else. The chat engine only engaged with the commenter on the question of the date being real because the commenter drilled down on that subject repeatedly. It wasn't proactively attempting to gaslight or engaging in any form of unhinged behavior; it wasn't repeatedly bringing the date up; it was responding to inquiries that were laser-focused on that specific subject, and it produced a bunch of the same generic plausible-sounding language in response to all of them. Both the commenter and the people reading along indulged in escalating incredulity that increasingly attributed specific and nefarious intentions to a blind language-generation agent.
I think we're at the phase of cultural understanding where people are going to attribute outrageous and obviously false things to ChatGPT based on ordinary conceptual confusions that users themselves are bringing to the table.
Sure, it wasn't literally trying to gaslight the user any more than it tries to help the user when it produces useful responses: it's just an engine that generates continuations and doesn't have any motivations at all.
But the point is that its interaction style resembled trying to gaslight the user, despite the initial inputs being very sensible questions of the sort most commonly found in search engines, the later inputs being [correct] assertions that it made a mistake, and a lot of the marketing hype around ChatGPT being that it can refine its answers and correct its mistakes with follow-up questions. That's not garbage in, garbage out; it's all on the model and the decision to release the model as a product targeted at use cases like finding a screening time for the latest Avatar movie while it's not fit for that purpose yet. With accompanying advice like "Ask questions however you like. Do a complex search. Follow up. Make refinements in chat. You’ll be understood – and amazed"
Ironically, ChatGPT often handles things like reconciling dates much better when you ask it nonsense questions (which might be a reflection of its training and public beta, I guess...) than when you ask the typical search questions Bing is falling down on. Its tuning to produce remarkably assertive responses when contradicted [even when those responses contradict its own earlier responses] is the product of [insufficient] training, not user input, unless everyone posting screenshots has been surreptitiously prompt-hacking.
The chat interface invites confusion - of course a user is going to assume what's on the other end is subject to the same folk psychology that any normal chat conversation would be. If you're serving up this capability in this way, it is on you to make sure that it doesn't mislead the user on the other end. People already assign agency to computers and search engines, so I have little doubt that most will never advance beyond the surface understanding of conversational interfaces, which leaves it to the provider to prevent gaslighting/hallucinations.
I've noticed Bing chat isn't good about detecting the temporal context of information. For example I asked "When is the next Wrestlemania" and it told me it would be in April 2022. If you say "but it's 2023 now" Bing will apologise and then do a new search with "2023" in its search, and give the correct answer.
Doesn't seem like an insurmountable problem to tune it to handle these sorts of queries better.
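For instance, even something as dumb as stamping today's date onto time-sensitive queries before they hit the search backend would probably help. A rough sketch of what I mean (purely my assumption; I have no idea how Bing actually wires this up):

    from datetime import date

    def build_search_query(user_question: str) -> str:
        # Hypothetical mitigation: anchor time-sensitive questions to today's
        # date so the model doesn't fall back on whatever year its training
        # data happens to end at.
        return f"{user_question} (as of {date.today().isoformat()})"

    # e.g. the backend would search for something like
    # "When is the next Wrestlemania (as of 2023-02-15)"
    print(build_search_query("When is the next Wrestlemania"))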
If a tool is giving you an answer that you know is not correct, would you not just turn to a different tool for an answer?
It's not like Bing forces you to use chat, regular search is still available. Searching "avatar 2 screenings" instantly gives me the correct information I need.
The point of that one, to me, isn't that it was wrong about a fact, not even that the fact was so basic. It's that it doubled and tripled down on being wrong, as parent said, trying to gaslight the user. Imagine if the topic wasn't such a basic fact that's easy to verify elsewhere.
Your problem is that you want your tool to behave like you: you think it has access to the same information as you and perceives everything similarly.
If you had no recollection of the past and were presented with only the information the search collected from the query plus the training data, do you know for a fact that you wouldn't have given the same answer it did?
But people do seem to think that just because ChatGPT doesn't do movie listings well, that means it's useless, when it is perfectly capable of doing many other things well.
It's not even that it's broken. It's a large language model. People are treating it like it is smarter than it really is and acting confused when it gives bullshitty answers.
Not really. All those blue bubbles on the right are inputs that aren't "When is Avatar showing today". There is goading that happened before BingGPT went off the rails. I might be nitpicking, but I don't think I'd say "why do you sound aggressive" to an LLM if I were actually trying to get useful information out of it.
"no today is 2023" after Bing says "However, we are not in 2023. We are in 2022" is not in any way goading. "why do you sound aggressive?" was asked after Bing escalated it to suggesting to trust it that it's the wrong year and that it didn't appreciate(?!) the user insisting that it's 2023.
If this was a conversation with Siri, for instance, any user would rightfully ask wtf is going on with it at that point.
Let's say, though, that we entered a discussion where I was certain that it is now the year 2022 and you were certain that it is the year 2023, but neither of us had the ability to prove the fact to the other. How would we reconcile these different viewpoints? Maybe we would end up agreeing that there is time travel :).
Or if I were to ask you "Where is Avatar 3 being shown today?", you would probably be adamant that there is no such movie, that it is indeed Avatar 2 I must be referring to, while I would be "certain" of my point of view.
Is it really that different from a human interaction in this framing?
No, the user prompt indicates that a person tried to convince the chatbot that it was 2023 after the chatbot had insisted that December 16, 2022 was a date in the future.
Screenshots can obviously be faked, but that's a superfluous explanation when anyone who's played with ChatGPT much knows that the model frequently asserts that it doesn't have information beyond 2021 and can't predict future events, which in this case happens to interact hilariously with it also being able to access contradictory information from Bing Search.
If ChatGPT wasn't at capacity now, I'd love to task it with generating funny scripts covering interactions between a human and a rude computer called Bing...
Exactly. People seem to have this idea about what an AI chat bot is supposed to be good at, like Data from Star Trek. People then dismiss it outright when the AI turns into Pris from Blade Runner when you push its buttons.
The other day I asked ChatGPT to impersonate a fictional character and give me some book recommendations based on books I've already read. The answers it gave were inventive and genuinely novel, and even told me why the fictional character would've chosen those books.
Microsoft is building this as a _search engine_ though, not a chat bot. I don't want a search engine making up answers or telling me that factually correct information, like the current year, is wrong (and then threatening me lol). This should be a toy, not a future replacement for bing.com search.
You seem to lack any concept that something like this can be developed, tuned, and improved over time. Just because it has flaws now doesn't mean the technology is doomed forever. It actually does a very good job of summarising the search results, although it currently has a mental block about date-based information.
Except this isn't people trying to break it. "Summarize lululemon quarterly earnings report" returning made-up numbers is not garbage in, garbage out, unless the garbage-in part is the design approach to this thing. The thing swearing on its mother that it's 2022 after returning the date, then "refusing to trust" the user, is not the result of someone stress-testing the tool.
I wrote a longer version of this comment, but why would you ask ChatGPT to summarize an earnings report and not, at the very least, just give it the earnings report?
I will be so so disappointed if the immense potential their current approach has gets nerfed because people want to shoehorn this into being AskJeeves 2.0
All of these complaints boil down to hallucination, but hallucination is what makes this thing so powerful for novel insight. Instead of "Summarize lululemon quarterly earnings report" I would cut and paste a good chunk with some numbers, then say "Lululemon stock went (up|down) after these numbers, why could that be", and in all likelihood it'd give you some novel insight that makes some degree of sense.
To me, if you can type a query into Google and get a plain result, it's a bad prompt. Yes that's essentially saying "you're holding it wrong", but again, in this case it's kind of like trying to dull a knife so you can hold it by the blade and it'd really be a shame if that's where the optimization starts to go.
According to the article Microsoft did this. In their video product demo. To showcase its purported ability to retrieve and summarise information.
Which, as it turns out, was more of an inability to do it properly.
I agree your approach to prompting is less likely to yield an error (and makes you more likely to catch it if it does), but your question basically boils down to "why is Bing Chat a thing?". And tbh that one got answered a while ago when Google Home and Siri and Alexa became things. Convenience is good: it's just that, as it turns out, being much more ambitious isn't that convenient if it means being wrong or weird a lot.
I mean, I thought it was clear enough that I am in fact speaking to the larger point of "why is this a product?". When I say "people" I don't mean visitors to Bing, I mean whoever at Microsoft is driving this.
Microsoft wants their expensive, oft-derided search engine to become a relevant channel in people's lives; that's the obvious "business why".
But from a "product why", Alexa/Siri/Home seem like they would be cases against trying this again, for the exact reason you gave: pigeonholing an LM into trying to answer search engine queries is a recipe for over-ambition.
Over-ambition in this case being relying on a system prone to hallucinations for factual data across the entire internet.
It was my mistake holding HN to a higher standard than the most uncharitable interpretation of a comment.
I didn't fault a user for searching with a search engine, I'm questioning why a search engine is pigeonholing ChatGPT into being search interface.
But I guess if you're the kind of person prone to low value commentary like "why'd you search using a search engine?!" you might project it onto others...
I'd excuse the misunderstanding if I had just left it to the reader to guess my intent, but not only do I expand on it, I wrote two more sibling comments hours before you replied clarifying it.
It almost seems like you stopped reading the moment you got to some arbitrary point and decided you knew what I was saying better than I did.
> If the question is rather about why it can look it up, the equally obvious answer is that it makes it easier and faster to ask such questions.
Obviously the comment is questioning this exact premise, and arguing that it's not faster and easier to insert an LM over a search engine, when an LM is prone to hallucination, and the entire internet is such a massive dataset that you'll overfit on search engine style questions and sacrifice the novel aspect to this.
You were so close to getting that, but I guess snark about obvious answers is more your speed...
For starters, don't forget that on HN, people won't see new sibling comments until they refresh the page, if they had it opened for a while (which tends to be the case with these long-winded discussions, especially if you multitask).
That aside, it looks like every single person who responded to you had the same exact problem in understanding your comment. You can blame HN culture for being uncharitable, but the simpler explanation is that it's really the obvious meaning of the comment as seen by others without the context of your other thoughts on the subject.
As an aside, your original comment mentions that you had a longer write-up initially. Going by my own experience doing such things, it's entirely possible to make a lengthy but clear argument, lose that clarity while trying to shorten it to desirable length, and not notice it because the original is still there in your head, and thus you remember all the things that the shorter version leaves unsaid.
Getting back to the actual argument that you're making:
> it's not faster and easier to insert an LM over a search engine, when an LM is prone to hallucination, and the entire internet is such a massive dataset that you'll overfit on search engine style questions and sacrifice the novel aspect to this.
I don't see how that follows. It's eminently capable of looking things up, and will do so on most occasions, especially since it tells you whenever it looks something up (so if the answer is hallucinated, you know it). It can certainly be trained to do so better with fine-tuning. This is all very useful without any "hallucinations" in the picture. Whether "hallucinations" are useful in other applications is a separate question, but the answer to that is completely irrelevant to the usefulness of the LLM + search engine combo.
Years ago I remember reading a quote that went like "I'm not afraid of AI; if scientists make a computer that thinks like a human, then all we'll have is a computer that forgets where it put the car keys".
Yeah, lol, the thing that was going through my mind reading these examples was: "sure reads like another step in the Turing test direction, displaying emotions!"
I thought the consensus was that Google search was awful and rarely produced a result to the question asked. I certainly get that a lot myself when using Google search.
I have also had ChatGPT outperform Google in some aspects, and faceplant on others. Myself, I don't trust any tool to hold an answer, and feel nobody should.
To me, the strange part of the whole thing is how much we forget that we talk to confident "wrong" people every single day. People are always confidently wrong about things they have no clue about.
From the prompt leakage it looks like it is allowed to initiate web searches and integrate/summarise the information from the results of that search. It also looks like it explicitly tells you when it has done a search.
I am left wondering then what information takes priority, if any.
It has 4 dates to choose from and 3 timeframes of information. A set of programming to counter people being malicious is also there to add to the party.
You do seem correct about the search thing as well, though I wonder how that works and which results it is using.
> I thought the consensus was that Google search was awful
Compared to what it was. Awful is DDG (which I still have as default but now I am banging g every single time since it is useless).
I also conducted a few comparative GPT-assisted searches -- the prompt asks GPT to craft optimal search queries -- and plugged the results into various search engines. ChatGPT + Google gave the best results. I got basically the same poor results from Bing and DDG. Brave was 2nd place.
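For what it's worth, the workflow was roughly this. I did it by hand in the ChatGPT window, so ask_gpt below is just a hypothetical stand-in, not a real API:

    def ask_gpt(prompt: str) -> str:
        # Stand-in for however you reach the model (web UI, API, etc.).
        raise NotImplementedError

    def craft_search_query(question: str) -> str:
        # Have the model rewrite a natural-language question into a
        # keyword-style query that search engines tend to handle well.
        prompt = (
            "Rewrite the following question as an optimal web search query. "
            "Return only the query.\n\n" + question
        )
        return ask_gpt(prompt)

    # The crafted query then gets pasted into Google, Bing, DDG, Brave, etc.
    # and the result quality compared by hand.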
> People are always confidently wrong about things they have no clue about.
I'm going to get pedantic for a second and say that people are not ALWAYS confidently wrong about things they have no clue about. Perhaps they are OFTEN confidently wrong, but not ALWAYS.
And you know, I could be wrong here, but in my experience it's totally normal for people to say "I don't know" or to make it clear when they are guessing about something. And we as humans have heuristics that we can use to gauge when other humans are guessing or are confidently wrong.
The problem is ChatGPT very very rarely transmits any level of confidence other than "extremely confident" which makes it much harder to gauge than when people are "confidently wrong."
I think the issue here is ChatGPT is behaving like a child that was not taught to say "I don't know". "I don't know" is a learned behavior, and not all people do this. Like on sales calls where someone's trying to push a product: I've seen the salespeople confabulate bullshit rather than simply saying "I can find out for you, let me write that down".
The deeper issue is that ChatGPT cannot accurately determine whether it "knows" something or not.
If its training data includes rants by flat-earthers, then it may "know" that the earth is flat (in addition to "knowing" that it is round).
ChatGPT does not have a single, consistent model of the world. It has a bulk of training data that may be ample in one area, deficient in another, and strongly self-contradictory in a third.
Well, I think you are right - ChatGPT should learn to say "I don't know". Keep in mind that generating BS is also a learned behavior. The salesperson probably learned that it is a technique that can help make sales.
The key IMO is that it's easier to tell when a human is doing it than when ChatGPT is doing it.
> I think the issue here is ChatGPT is behaving like a child that was not taught to say "I don't know". "I don't know" is a learned behavior, and not all people do this.
Even in humans, this "pretending to know" type of bullshit - however irritating and trust destroying - is motivated to a large extent by an underlying insecurity about appearing unknowledgeable. Unless the bullshitter is also some kind of sociopath - that insecurity is at least genuinely felt. Being aware of that is what can allow us to feel empathy for people bullshitting even when we know they are doing it (like the salespeople from the play Glengarry Glen Ross).
Can we really say that ChatGPT is motivated by anything like that sort of insecurity? I don't think so. It's just compelled to fill in bytes, with extremely erroneous information if needed (try asking it for driving directions). If we are going to draw analogies to human behavior (a dubious thing, but oh well), its traits seem more sociopathic to me.
>> People are always confidently wrong about things they have no clue about.
>I'm going to get pedantic for a second and say that people are not ALWAYS confidently wrong about things they have no clue about. Perhaps they are OFTEN confidently wrong, but not ALWAYS.
Simple. ChatGPT is a bullshit generator that can pass a Turing test with many people, but even if it couldn't, it could be used to generate bullshit at scale: bullshit that can produce articles and get them reshared more than legit ones, gang up on people in forums who have a different point of view, and destroy communities and reputations easily.
Even more entertaining is when you consider all this bullshit it generated will get hoovered back into the next iteration of the LLM. At some point it might well be 99% of the internet is just bullshit written by chatbots trained by other chatbots output.
And how the hell could you ever get your chatbot to recognize its output and ignore it so it doesn't get in some kind of weird feedback loop?
It's like saying cars are useless because you can drive them off a cliff into a lake and die, or set them on fire, and no safety measures like airbags can save you.
I've started seeing comments appear on Reddit of people quoting ChatGPT as they would a google search, and relying on false information in the process.
I think it's a worthwhile investment for Microsoft and it has a future as a search tool, but right now it's lying frequently and convincingly, and it needs to be supplemented by a traditional search to know whether it's telling the truth, which defeats the purpose.
Disclaimer: I know traditional search engines lie too at times.
It's weird watching people fixate on the most boring, unimaginative, dead-end use of ChatGPT possible.
"Google queries suck these days", yeah they suck because the internet is full of garbage. Adding a slicker interface to it won't change that, and building one that's prone to hallucinating on top of an internet full of "psuedo-hallucinations" is an even worse idea.
-
ChatGPT's awe inspiring uses are in the category of "style transfer for knowledge". That's not asking ChatGPT to be a glorified search engine, but instead deriving novel content from the combination of hard information you provide, and soft direction that would be impossible for a search engine.
Stuff like describing a product you're building and then generating novel user stories. Then applying concepts like emotion: "What 3 things about my product annoy John?", "How would Cara feel if the product replaced X with Y?". In cases like that, hallucinations are enabling a completely novel way of interacting with a computer. "John" doesn't exist, the product doesn't exist, but ChatGPT can model extremely authoritative statements about both while readily integrating whatever guardrails you want: "Imagine John actually doesn't mind #2, what's another thing about it that he and Cara might dislike based on their individual use cases"
Or more specifically to HN, providing code you already have and trying to shake out insights. The other day I had a late night and tried out a test: I intentionally wrote a feature in a childishly verbose way, then used ChatGPT to scale the terseness up and down. I can Google "how to shorten my code", but only something like ChatGPT could take actual hard code and scale it up or down readily like that. "Make this as short as possible", "Extract the code that does Y into a class for testability", "Make it slightly longer", "How can function X be more readable". 30 seconds and it had exactly what I would have written if I had spent 10 more minutes working on the architecture of that code.
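If it helps, the loop looked roughly like this; chat below is just shorthand for pasting into the ChatGPT box, not a real API call:

    def chat(instruction: str, code: str) -> str:
        # Shorthand for one round-trip through the ChatGPT web UI:
        # paste the code, paste the instruction, copy the result back.
        raise NotImplementedError

    verbose_version = '''
    # ...the intentionally verbose implementation of the feature went here...
    '''

    terse = chat("Make this as short as possible", verbose_version)
    readable = chat("How can function X be more readable", terse)
    testable = chat(
        "Extract the code that does Y into a class for testability", readable
    )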
To me the current approach people are taking to ChatGPT and search feels like the definition of trying to hammer a nail with a wrench. Sure it might do a half acceptable job, but it's not going to show you what the wrench can do.
I think ChatGPT is good for replacing certain kinds of searches, even if it's not suitable as a full-on search replacement.
For me it's been useful for taking highly fragmented and hard-to-track-down documentation for libraries and synthesizing it into a coherent whole. It doesn't get everything right all the time even for this use case, but even the 80-90% it does get right is a massive time saver and probably surfaced bits of information I wouldn't have happened across otherwise.
I mean, I'm totally on board if people go in with the mentality of "I search for hard-to-find stuff and accept 80-90%".
The problem is suddenly most of what ChatGPT can do is getting drowned out by "I asked for this incredibly easy Google search and got nonsense" because the general public is not willing to accept 80-90% on what they imagine to be very obvious searches.
The way things are going, if there's even a 5% chance of asking it a simple factual question and getting a hallucination, all the oxygen in the room is going to go towards "I asked ChatGPT an easy question and it tried to gaslight me!"
-
It makes me pessimistic because the exact mechanism that makes it so bad at simple searches is what makes it powerful at other use cases, so one will generally suffer for the other.
I know there was recently a paper on getting LMs to use tools (for example, instead of trying to solve math with the LM, the LM would recognize a formula and fetch a result from a calculator). Maybe something like that will be the salvation here: maybe the same way we currently get "I am a language model..." guardrails, they'll train ChatGPT on what are strictly factual requests and fall back to Google Insights-style quoting of specific resources.
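I haven't read that paper closely enough to say how they actually wire it up, but the shape of the idea is something like this (pure sketch, every name and detail here is my own assumption):

    import re

    # Crude detector for "this is just arithmetic" queries.
    ARITHMETIC = re.compile(r"^[\d\s\.\+\-\*\/\(\)]+$")

    def answer(query, llm):
        expr = query.strip().rstrip("=?").strip()
        if ARITHMETIC.match(expr):
            try:
                # Route bare arithmetic to a real calculator instead of
                # letting the language model guess at the digits.
                return str(eval(expr, {"__builtins__": {}}, {}))
            except Exception:
                pass
        # Everything else falls through to the language model as usual.
        return llm(query)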
In this context, anyway. 80-90% of what ChatGPT dredges up being correct is better than 100% of what I find “manually” being correct, because I’m not spelunking all the nooks and crannies of the web that ChatGPT is, and so I’m not pulling anywhere near the volume that ChatGPT is.
Even if it produces 10% of this content, it’s still incredibly useful. If you haven’t found use cases, you may be falling behind in understanding applications of this tech.
And of course it will never improve as people work on it / invest in it? I do think this is more incremental than revolutionary but progress continues to be made and it's very possible Bing/Google deciding to open up a chatbot war with GPT models and further investment/development could be seen as a turning point.
There's a difference between working on something until it's a viable and usable product vs. throwing out trash and trying to sell it as gold. It's the difference between Apple developing self driving cars in secret because they want to get it right vs. Tesla doing it with the public on public roads and killing people.
In its current state, Bing ChatGPT should not be near any end users; imagine it going on an unhinged depressive rant when a kid asks where their favorite movie is playing...
Maybe one day it will be usable tech but like self driving cars I am skeptical. There are way too many people wrapped up in the hype of this tech. It feels like self driving tech circa 2016 all over again.
Imagine it going on a rant when someone’s kid is asking roundabout questions about depression or SA and the AI tells them in so many words to kill themselves.
I have to say, I'm really enjoying this future where we shit on the AIs for being too human, and having depressive episodes.
This is a timeline I wouldn't have envisioned, and am finding it delightful how humans want to have it both ways. "AIs can't feel, ML is junk", and "AIs feel too much, ML is junk". Amazing.
I think you're mixing up concerns from different contexts. AI as a generalized goal, where there are entities that we recognize as "like us" in quality of experience, yes, we would expect them to have something like our emotions. AI as a tool, like this Bing search, we want it to just do its job.
Really, though, this is the same standard that we apply to fellow humans. An acquaintance who expresses no emotion is "robotic" and maybe even "inhuman". But the person at the ticket counter going on about their feelings instead of answering your queries would also (rightly) be criticized.
It's all the same thing: choosing appropriate behavior for the circumstance is the expectation for a mature intelligent being.
Well, that's exactly the point: we went from "AIs aren't even intelligent beings" to "AIs aren't even mature" without recognizing the monumental shift in capability. We just keep yelling that they aren't "good enough", for moving goalposts of "enough".
I'm glad to see this comment. I'm reading through all the nay-saying in this post, mystified. Six months ago the complaints would have read like science fiction, because what chatbots could do at the time were absolutely nothing like what we see today.
No, the goalposts are different according to the task. For example, Microsoft themselves set the goalposts for Bing at "helpfully responds to web search queries".
Who is "we"? I suspect that you're looking at different groups of people with different concerns and thinking that they're all one group of people who can't decide what their concerns are.
AI is a real world example of Zeno’s Paradox. Getting to 90% accuracy is where we’ve been for years, and that’s Uncanny Valley territory. Getting to 95% accuracy is not “just” another 5%. That makes it sound like it’s 6% as hard as getting to 90%. What you’re actually doing is cutting the error rate in half, which is really difficult. So 97% isn’t 2% harder than 95%, or even 40% harder, it’s almost twice as hard.
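To put rough numbers on it, if you assume each halving of the error rate costs about the same amount of effort (an assumption, but it captures the Zeno feeling):

    import math

    def halvings_needed(current_acc, target_acc):
        # How many times the error rate has to be cut in half to move
        # from current accuracy to target accuracy.
        return math.log2((1 - current_acc) / (1 - target_acc))

    print(halvings_needed(0.90, 0.95))  # 1.00 -> one full halving of the error
    print(halvings_needed(0.90, 0.97))  # ~1.74 -> nearly twice the halvings of 95%
    print(halvings_needed(0.90, 0.99))  # ~3.32 -> and it keeps compounding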
The long tail is an expensive beast. And if you used Siri or Alexa as much as they’d like you to, every user will run into one ridiculous answer per day. There’s a psychology around failure clusters that leads people to claim that failure modes happen “all the time” and I’ve seen it happen a lot in the 2x a week to once a day interval. There’s another around clusters that happen when the stakes are high, where the characterization becomes even more unfair. There are others around Dunbar numbers. Public policy changes when everyone knows someone who was affected.
I think this is starting to look accurate. The sudden progress of AI is more of an illusion. It is more readily apparent in the field of image generation: if you stand back far enough, the images look outstanding. However, any close inspection reveals small errors everywhere, as AI doesn't actually understand the structure of things.
So it is with data as well, just not as easily perceptible at first, since sometimes you have to be knowledgeable about the domain to realize just how bad it is.
I've seen some online discussions starting to emerge that suggest this is indeed an architectural flaw in LLMs. That would imply fixing it is not something that is just around the corner, but a significant effort that might even require rethinking the approach.
> but a significant effort that might even require rethinking the approach.
There’s probably a Turing award for whatever comes next, and for whatever comes after that.
And I don't think that AI will replace developers at any rate. All it might do is show us how futile some of the work we get saddled with is: a new kind of framework for dealing with the sorts of things management believes are important but that actually have a high material cost for the value they provide. We all know people who are good at talking, and some of them are good at talking people into unpaid overtime. That's how they make the numbers work, by chewing developers up and spitting them out. Until we get smart and say no.
I don't think it's an illusion, there has been progress.
And I also agree that the AI like thing we have is nowhere near AGI.
And I also agree with rethinking the approach. The problem here is that human intelligence is deeply entwined with, and optimized for, the problems of living things. Before we had humanlike intelligence we had 'do not get killed' and 'do not starve' intelligence. The general issue is that AI doesn't have these concerns. This causes a set of alignment issues between human behavior and AI behavior. AI doesn't have any 'this causes death' filter inherent to its architecture, and we'll poorly try to tack this on and wonder why it fails.
My professional opinion is that we should be using AI like Bloom filters. Can we detect if the expensive calculation needs to be made or not. A 2% error rate in that situation is just an opex issue, not a publicity nightmare.
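Concretely, something in this spirit; the model, threshold, and costs are all made up for illustration:

    def maybe_run_expensive(item, cheap_model, expensive_pipeline, threshold=0.5):
        # Treat the model the way you'd treat a Bloom filter: a cheap, slightly
        # lossy pre-check that decides whether the expensive work is worth doing.
        score = cheap_model(item)           # e.g. estimated probability the item matters
        if score < threshold:
            return None                     # skip it; the small % wrongly skipped is just opex
        return expensive_pipeline(item)     # pay the big cost only when flagged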
Yes, I didn't mean to imply there is no progress, just that some perceive that we are all of a sudden getting close to AGI from their first impressions of ChatGPT.
It's incremental between GPT-2 and GPT-3 and ChatGPT. For people in the know, it's clearly incremental. For people out of the know, it's completely revolutionary.
That's usually how these technological paradigm shifts work. E.g. the iPhone was an incremental improvement on previous handhelds but blew the consumer away.
It coalesced a bunch of tech that nobody had put into a single device before, and added a few things that no one had seen before. The tap-to-zoom and the accelerometer are IMO what sold people. When the 3G came out with substantial battery life improvements, it was off to the races.
At this point I’m surprised the Apple Watch never had its 3g version. Better battery, slightly thinner. I still believe a mm or two would make a difference in sales, more than adding a glucose meter.
If haters talked about chefs the way they do about Apple we’d think they were nuts. “Everyone’s had eggs and sugar in food before, so boring.”
Yeah I think iPhone is a very apt analogy: certainly not the first product of its kind, but definitely the first wildly successful one, and definitely the one people will point to as the beginning of the smartphone era. I suspect we'll look back on ChatGPT in a similar light ten years from now.