After reading this blog post, it's clear that hands-on video is just straight-up lying to people. For the boxcar example, the narrator in the video says
to Gemini:
> Narrator: "Based on their design, which of these would go faster?"
Without even specifying that those are cars! I found that impressive: it recognized that the cars are going downhill _and_ inferred that in such a situation, aerodynamics matters. But the blog post says the real prompt was this:
> Real Prompt: "Which of these cars is more aerodynamic? The one on the left or the right? Explain why, using specific visual details."
They narrated inaccurate prompts for the Sun/Saturn/Earth example too:
> Narrator: "Is this the right order?"
> Real Prompt: "Is this the right order? Consider the distance from the sun and explain your reasoning."
If the narrator had actually read out the _real_ prompts they fed Gemini in these videos, it would not have seemed nearly as impressive!
Out of curiosity, I asked GPT-4V the same questions:
Prompt: [attached image of the post-it notes with sun and planets] Is this right order?
GPT-4: The image you've uploaded shows three drawings on individual pieces of paper. From left to right, there's a drawing of the sun, then what appears to be Saturn, and lastly a drawing that resembles Earth. If the intention is to display these celestial objects by their proximity to the Sun, then the order should be the Sun, Earth, and then Saturn, as Earth is closer to the Sun than Saturn. Therefore, the current order is not correct. To correct the order, you should place the Earth drawing second, between the Sun and Saturn.
Prompt: [attached the image of the two car drawings] Based on their design, which one would go faster?
GPT-4: The image shows two drawings of vehicles on slopes. The one on the left appears to have a more traditional box-like design, while the one on the right has a streamlined design with a pointed front. Based on principles of physics and aerodynamics, the vehicle with the streamlined design (on the right) would likely go faster down the slope. Streamlining reduces air resistance (drag), allowing the vehicle to move more efficiently. The pointed front helps to cut through the air and minimize the drag force, which would result in higher speeds compared to the boxier design on the left, which would encounter more air resistance.
I'm actually pretty impressed how well it did with such basic prompts.
What do you mean, "Real Prompt"? Nowhere does it say these are the real prompts; it says:
> In this post, we’ll explore some of the prompting approaches we used in our Hands on with Gemini demo video.
Not "here are the full prompts used in the video" or something like that.
None of the entries match up 1:1. And the response to the car example in the video doesn't even make sense in response to the prompt in the post (no mention of speed), and certainly isn't a trimmed portion of the response in the post.
The video has the disclaimer "For the purposes of this demo, latency has been reduced and Gemini outputs have been shortened for brevity". It would be weird to write that but not mention that neither the prompts nor the responses even share the same words in the same order as the "real" prompts and responses.
It might still be crazy impressive, but nonetheless, going forward we now know that we cannot trust Google's videos about it, as they're heavily edited to look a lot more impressive than the product actually is.
Those prompts aren't far off, but I still don't know how realistic the demo is. Until a product is in my hands, as far as I'm concerned it doesn't exist.
Yeah I think this comment basically sums up my cynicism about that video.
It's that, you know some of this happened and you don't know how much. So when it says "what the quack!" presumably the model was prompted "give me answers in a more fun conversational style" (since that's not the style in any of the other clips) and, like, was it able to do that with just a little hint or did it take a large amount of wrangling "hey can you say that again in a more conversational way, what if you said something funny at the beginning like 'what the quack'" and then it's totally unimpressive. I'm not saying that's what happened, I'm saying "because we know we're only seeing a very fragmentary transcript I have no way to distinguish between the really impressive version and the really unimpressive one."
It'll be interesting to use it more as it gets more generally available though.
For what it's worth, I was born and raised in the Bay Area (in the 90s), and we called it ro-sham-bo growing up. Although it's incredibly strange to see that word in writing, I would always call it rock paper scissors if I were to write it.
It's always like this, isn't it? I was watching the demo and thought: why ask it what "duck" is in multiple languages? Siri can do that right now, and it's not an AI model. I really do think we're getting there with the AI revolution, but these demos are so far from exciting; they're just mundane dummy tasks without the nuance of the things we really interact with and would actually need an AI's help for.
To quote Gemini, what the quack! Even with the understanding that these are handpicked interactions that are likely to be among the system's best responses, that is an extremely impressive level of understanding and reasoning.
I guess it's like drawing googly eyes on Clippy: it helps sell the illusion that you are interacting with something alive instead of an automatic system.
Reminds me of their demo a few years back when they had AI call a hair salon to schedule an appointment. When the receptionist asked if they could put the caller on hold, it did an "mmm hmm" that was uncannily human-like.
Well, IQ is a reasoning test, and common sense is practical everyday reasoning, so it should cover that. Are we talking about the same people that try to wrestle alligators, sign up for pyramid schemes, and ride speedbikes in a T-shirt and shorts? Common sense isn't super common.
The thing with IQ tests is that they're all based on similar concepts, so it's possible to train for them, which is what AI does. Most humans grow up learning to walk, speak, interact, and read non-verbal cues. I would argue a musician wouldn't tend to have a higher IQ than an average person, but an AI can't come close to writing a song and playing a guitar in a way that resonates with people. AI can assist with it, but it's missing the human spark for now.
Even if we get Gemini 2.0 or GPT-6 that is even better at the stuff it's good at now... you've always been able to outsource 'tasks' for cheap. There is no shortage of people that can write somewhat generic text, write chunks of self contained code, etc.
This might lower the barrier of entry but it's basically a cheaper outsourcing model. And many companies will outsource more to AI. But there's probably a reason that most large companies are not just managers and architects who farm out their work to the cheapest foreign markets.
Similar to how many tech jobs have gone from C -> C++ -> Java -> Python/Go, where the average developer is supposed to accomplish a lot more than previously, I think you'll see the same for white-collar workers.
Software engineering didn't die because so much less work was needed to build a network stack; the expectations changed.
This is just the non-technical white-collar worker's first level-up from C -> Java.
Never underestimate management's thirst for eliminating pesky problems that come with dealing with human bodies - vacations, pregnancies, office rivalries, time zones, and heck, unionization.
I suspect the real driver of the shift to AI will be this and not lower cost/efficiency.
> management's thirst for eliminating pesky problems that come with dealing with human bodies
But that's what 95% of management is for. If you don't have humans, you don't need the majority of managers.
And I know plenty of asshole managers who enjoy their job because they get to boss people around.
And another thing people are forgetting: end users, a.k.a. consumers, will be able to use similar tech as well. So for something they used to hire a company for, they will just use AI; you don't even need CEOs and financial managers in the end :)
Because if a software CEO can push a button to create an app that he wants to sell, so can his end users.
My strong belief is that if someone wanted to halt AI development, they should attempt to train AI replacements for managers and politicians, and publicize it.
>What will someone entering the workforce today even be doing in 2035?
The same thing they're doing now, just with tools that enable them to do some more of it. We've had these discussions a dozen times, including pre- and post-computerization, and every time it ends up the same way. We went from entire teams writing Pokemon in Z80 assembly to someone cranking out games in Unity while barely knowing how to code, and yet game devs still exist.
Yeah, but the point is what amount of work a single game dev is able to do. The current level of games was simply impossible back then, or would have required a huge number of teams to do something quite trivial today.
Yeah, it has been quite the problem to think about ever since the original release of ChatGPT, as it was already obvious where this was going, and multimodal models more or less confirmed it.
There are two ways this goes: UBI or gradual population reduction through unemployment and homelessness. There's no way the average human will be able to produce any productive value outside manual labor in 20 years. Maybe not even that, looking at robots like Digit that can already do warehouse work for $25/hour.
More than efficiency and costs, I think the real driver of AI adoption in big corps will be the reduction of all the baggage human beings bring. AI will never ask for sick days, will never walk in with a hangover, never be unproductive because their 3-month-old baby kept them up all night...
An AI coder will always be around, always be a "team player", always be chipper and friendly. That's management's wet dream.
I don't think humans will stay competitive long enough for that to even matter, frankly. It's a no-brainer to go for the far cheaper, smarter, and, most importantly, a few magnitudes faster worker. On the off chance that we hit some sort of intelligence ceiling and don't get ASI-tier models in the next few years, then that will definitely do it, though.
Companies start going from paying lots of local workers to paying a few select corporations what's essentially a SaaS fee (some are already buying ChatGPT Plus for all employees and reducing headcount), which accumulates all the wealth that would've gone to the workers into the hands of those renting GPU servers. The middle class was in decline already, but this will surely eradicate it.
None of this will happen, because jobs are based on comparative advantage, not absolute advantage, which means it doesn't matter if someone else would be better at your job than you are. That person (or AI) is doing the job they're best suited to, which is not yours. Other fun second-order effects include Jevons paradox (which is why inventing ATMs caused more employment for bank tellers, not less).
I can be very confident about this because it's just about the strongest finding there is in economics. If this wasn't true, it'd be good for your career to stop other people from having children in case they take your job.
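To make the comparative-advantage arithmetic concrete, here's a toy sketch in Python (all the productivity numbers are invented for illustration, not taken from anywhere):

```python
# Two producers, two tasks. The AI is absolutely better at both,
# yet the human still ends up with a job under comparative advantage.
ai    = {"reports": 100, "code": 50}  # units per day, full-time on one task
human = {"reports": 10,  "code": 10}

def opportunity_cost(producer, good, other):
    """Units of `other` forgone to produce one unit of `good`."""
    return producer[other] / producer[good]

for name, p in (("AI", ai), ("human", human)):
    print(f"{name}: 1 report costs {opportunity_cost(p, 'reports', 'code')} code units, "
          f"1 code unit costs {opportunity_cost(p, 'code', 'reports')} reports")

# AI:    1 report costs 0.5 code units, 1 code unit costs 2.0 reports
# human: 1 report costs 1.0 code units, 1 code unit costs 1.0 reports
# The human gives up fewer reports per code unit than the AI does, so the
# human has the comparative advantage in code and keeps that work.
```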
Comparative advantage assumes that there is a capacity limit. The more productive country might not choose to produce widget A because its limited capacity is better used to create widget B. However, if in a few years there are enough GPUs to satisfy almost all demand for AI labor, there's no need to "outsource" work that AI is better at to humans.
Jevons paradox might result in much more demand for AI labor, but not necessarily human labor for the same types of work AI can do. It might indirectly increase demand for human services, like fitness trainers, meditation teachers, acupuncturists, etc., though.
>If this wasn't true, it'd be good for your career to stop other people from having children in case they take your job.
Well, in times past, kings have been known to do this.
But more generally, you raise an interesting point. I think your reasoning succeeds at dispelling the often-touted strong form of the claim ("AI can do my job better than I can, therefore I will lose my job to AI") but doesn't go all the way to guaranteeing its opposite ("no possible developments in AI could result in my job being threatened"). Job threat level will just continue to depend in a complicated way on everyone's aptitude at every job.
Many things could result in your job being threatened. Since I think the kind of AI they're describing would increase employment, I'm equally willing to believe an opposite trend would decrease it.
So that could be productivity decreases, rises in energy prices or interest rates, war, losing industries to other countries…
To quote CGP Grey "There isn’t a rule of economics that says better technology makes more, better jobs for horses. It sounds shockingly dumb to even say that out loud, but swap horses for humans and suddenly people think it sounds about right."
I mean, I don't know, maybe you're right and this will Jevons us towards even more demand for AI-assisted jobs, but I think only to the point where AI is still just complementing humans, making them better and more efficient at their jobs (like LLMs are doing right now), and not outright replacing them.
As per your example, bank tellers are still here because ATMs can only dispense money and change PINs; they can't do the tellers' whole job, they just take over the menial stuff and leave the more complex cases to less-overworked humans. Make an ATM that does everything (e.g. online banking) and there's literally nothing a bank teller needs to exist for. Most online banks don't even have offices these days. For now, classical brick-and-mortar banks remain, but for how long I'm not sure; probably only until the next crisis, when they all fold by not being competitive, since they have to pay for all those tellers and real-estate rents. And as per Grey's example, cars did not increase demand for horses/humans, they increased demand for cars/AGI.
Horses are not labor. You can tell because we don't pay them wages and they don't make any effort to be employed. That makes them capital; when humans are treated that way it's called slavery.
I don't think you should listen to YouTubers about anything, though all I know about that guy is that he has bad aesthetic opinions on flag design.
Doesn't every capitalist consider humans capital deep down? Who'd come up with a name like "human resources" otherwise lmao, in ex-socialist countries it's usually called something more normal like cadre service.
Besides, I don't see a marked difference between having to pay to maintain a horse with feed, healthcare, grooming, etc., which likely costs something on a similar order as a human's monthly wage that gets used in similar ways. Both come with monthly expenses, generate revenue, and eventually retire and die; on paper they should follow the same principle, with the exception that you can sell a horse when you want to get rid of it but have to pay severance when doing the same with a person. I doubt that influences the overall lifetime equation much, though.
> Doesn't every capitalist consider humans capital deep down?
That's slavery, so only if they're bad at it. (The reason economics is called "the dismal science" is that slaveowners got mad at economists for saying slavery was bad for the economy.)
> Besides, I don't see a marked difference between having to pay to maintain a horse with feed, healthcare, grooming, etc., which likely costs something on a similar order as a human's monthly wage that gets used in similar ways.
The horse can't negotiate and won't leave you because it gets a competing offer. And it's not up to you what your employee spends their wages on, and their wages aren't set by how much you think they should be spending.
Well, anecdotally, there's been a massive drop in on-campus hiring in India this year. The largest recruiters, the big IT companies (Infosys, TCS, etc.), apparently haven't made any hires at all.
> UBI or gradual population reduction through unemployment and homelessness
I actually think that if we get to a superintelligent AGI and ask it to solve our problems (e.g., global warming, etc.), the AGI will say, "You need to slow down baby production."
Under good circumstances, the world will see a "soft landing" where we solve our problems by population reduction, and it's achieved through attrition and much lower birth rate.
What if you can have one biological child? One day, you will die, so it's -1 +1; it equals out. If you want more, what about adoption? There are kids out there that need a home. Seems fair to me.
Unfortunately we've made the critical mistake of setting up our entire economic system to require constant growth or the house of cards it's built upon immediately starts falling apart. It sure doesn't help that when this all becomes an active problem, climate change will also be hitting us in full force.
Now maybe we can actually maintain growth with fewer people through automation, like we've done successfully for farming, mining, industrial production, and the like, but there was always something new for the bulk of the population to move to and be productive in. Now there just won't be anything to move to aside from popularity-based jobs, of which there are only so many.
Well, until now it was also quite OK to just be intelligent and maybe hard-working. I'd venture a guess that most of this site is doing well by virtue of being born with efficient brains, and that would offset not being pretty or otherwise talented. Not for much longer, possibly :-(
I'm wondering the same, but for the narrower white-collar subset of tech workers: what will today's UX/UI designer or API developer be doing in 5-10 years?
Once the context window becomes large enough to swallow up the codebase of a small-mid sized company, what do all those IT workers that perform below the 50th percentile in coding tests even do?
HN has a blind spot about this because a lot of people here are in the top percentile of programmers. But the bottom 50th percentile are already being outperformed by GPT-4. Org structures and even GPT-4 availability haven't caught up, but I can't see any situation where these workers aren't replaced en masse by AI, especially if the AI is 10% of the cost and doesn't come with the "baggage" of dealing with humans.
> Once the context window becomes large enough to swallow up the codebase of a small-mid sized company, what do all those IT workers that perform below the 50th percentile in coding tests even do?
There's a whole lot of work in tech (even specifically work "done by software developers") that isn't "banging out code to already completed specs".
Yeah, I think a lot of experienced developers are so immersed in software development that they forget how complex the process is, and how much knowledge it takes to even ask the right questions.
I mean, I thought that website frontend development would have long since been swallowed up by off-the-shelf WYSIWYG tools, that's how it seemed to be going in the late 90s. But the opposite has happened, there have never been more developers working on weird custom stuff.
UX/UI designers will use AI as part of their jobs. They'll be able to work at a higher level and focus less on boilerplate. That might mean fewer UX/UI jobs, but more likely the standard for app UX will go up. Companies are always going to want to differentiate their apps.
It's like how, in 2003, if your restaurant had a website with a phone number posted on it, you were ahead of the curve. Today, if your restaurant doesn't have a website with online ordering, you're going to miss out on potential customers.
API developers will largely find something else to do. I've never seen a job posting for an API developer. My intuition is that even today, the number of people who work specifically as an API developer for their whole career is pretty close to zero.
Today, your restaurant's custom website largely doesn't matter, as ordering is done on delivery apps, and people visiting in person look at things like Google Maps reviews. Only reservations are not quite as consolidated yet.
Similarly, in the future, there may be no more "apps" in the way we understand them today, or they may become completely irrelevant if everything can be handled by one general-purpose assistant.
Except this is the first time we have a new "generalist" technology. When Photoshop was released, it didn't reduce employment opportunities for writers, coders, 3D designers, etc.
We're in truly unprecedented territory and don't really have an historical analogue to learn from.
Maybe you're not quite recalling what happened when Photoshop was released: it completely changed a whole industry of wet-photography professionals, those who would airbrush models and create montages by literally cutting and pasting.
Also, we were told we were going into an age where anyone with $3000 for a PC/Mac and the software could edit reality. Society's ability to count on the authenticity of a photograph would be lost forever. How would courts work? Proof of criminality could be conjured up by anyone. People would be blackmailed left, right, and center by the ability to cut and paste people into compromising positions, and the police and courts would be unable to tell the difference.
The Quantel Paintbox was released in 1981 and by 1985 was able to edit photographs at film-grain resolution. Digital film printers were also able to output at film-grain resolution. This started the "end of society", and when Photoshop was introduced in 1990 it went into high gear.
In the end, all of that settled, and we were left with photographers just using Photoshop.
And I actually thought photographers went extinct a long time ago, with every human holding a cellphone (little to no need to know about lens apertures or lighting/shadows to take a picture). It's probably been a decade since I've seen anyone hauling around photography equipment at an event. I guess some photographers still get paid good money, but there are surely multiples fewer than there were 10-20 years ago.
Natural language is the killer part of the equation for these new AI tools: it's as simple as knowing English, or any other natural language, to output an image, an app, or whatever. And it's going to be just like cellphone cameras and photographers; the results are going to get 'good enough' that it's going to eat into many professions.
> Except this is the first time we have a new "generalist" technology. When Photoshop was released, it didn't reduce employment opportunities for writers, coders, 3D designers, etc.
Computing has always been a generalist technology, and every improvement in software development specifically has impacted all the fields for which automation could be deployed, expanded the set of fields in which automation could economically be deployed, and eliminated some of the existing work that software developers do.
And every one of them has had the effect of increasing employment in the tech involved in doing automation. (And increased employment of non-developers in many automated fields, by expanding, as automation does, the applications for which the field is economically viable more than it reduces the human effort required for each unit of work.)
Hmmm... People probably said the same exact thing about taxi drivers, and really anyone who drives for a living, when Waymo demoed self-driving cars 10 years ago.
1. Compassion is key
2. I'm of the opinion one should listen to the people in the room who are more well-versed on the topic at hand.
3. Harmonious living. I like to write music as a passion. Many others have written music too. What's the difference between that person being biologically based or transistor-based?
4. It's not a zero-sum game. It's not a chase game. It's play.
It's just going to become self-aware and start spitting out photographs?
Every scenic view, every building, every proper noun in the world has already been photographed and is available online. Photographer as "capturer of things" has long been dead, and its corpse lies next to the 'realist painters' of the 1800s before the dawn of the photograph and the airbrush artists of the 50s, 60s and 70s.
However, my newborn hasn't, hot-celebrity's wardrobe last night outside the club hasn't, the winning goal of the Leafs game hasn't; AI can't create photos of those.
And it can't create the conceptual artistic reaction to today's political climate. So instead of taking Campbell's Soup Cans and silkscreening the logo as prints, or placing the text "Your Body is a Battle Ground" over two found stock photos of women, or hiring craftspeople to create realistic sexually explicit sculptures of themselves having sex with an Italian porn star, an artist is now just going to ask AI to create what they are thinking as a photo, or as a 3D model.
It's going to change nothing but be a new tool that makes it a bit easier to create art than it has been over the last 120 years, since "craft" stopped being de facto "art".
Exactly. When the train really gets rolling, us humans shouldn't undersell the value of being able to interact with the intelligences. For such quaint problems as we'll have, it probably costs them close to zero effort to answer a question or two.
I'm picturing something like this as an interaction I'd like to have:
"Hey, do you mind listening to this song I made? I want to play it live, but am curious if there's any spots with frequencies that will be downright dangerous when played live at 100-110dB. I'm also curious if there's any spots that traditionally have been HATED by audiences, that I'm not aware of."
"Yeah, the song's pretty good! You do a weird thing in the middle with an A7 chord. It might not go over the best, but it's your call. The waves at 21k Hz need to go though. Those WILL damage someones ears."
"Ok, thanks a lot. By the way, if you need anything from me; just ask."
Whatever you want, probably. Or put a different way: "what's a workforce?"
"We need to do a big calculation, so your HBO/Netflix might not work correctly for a little bit. These shouldn't be too frequent; but bear with us."
Go ride a bike, write some poetry, do something tactile with feeling. They're doing something, but after a certain threshold, us humans are going to have to take them at their word.
The graph of computational gain is going to go linear, quadratic, ^4, ^8, ^16... all the way until it's a vertical line. A step function. It's not a bad thing, but it's going to require a perspective shift, I think.
Edit: I also think we should drop the "A" from "AI" ...just... "Intelligence."
Yeah, this feels like the revenge of the blue collar workers. Maybe the changes won't be too dramatic, but the intelligence premium will definitely go down.
Ironically, this is created by some of the most intelligent people.
Totally. I think UBI will be the "energy meter" of the future. Like in a video game. You get xxx dollars or whatever. Buy whatever you need, but the cap is to make sure you don't act foolish. Your UBI tank gets replenished every month, but if you blow it all on a new bicycle and kitchen upgrade for your house, you can't continue on to buy a bathroom renovation or whatever. You have to wait.
You don’t know that. The responses in the video don’t line up. That blog post is just an alternative text prompt based version of what they showed on video.
Out of curiosity, I fed ChatGPT 4 a few of the challenges through a photo (unclear if Gemini takes a live video feed as input, but GPT does not afaik) and it did pretty well. It was able to tell a duck was being drawn at an earlier stage than Gemini did. Like Gemini, it was able to tell where the duck should go: to the left path, to the swan. Because, and I quote, "ducks and swans are both waterfowl, so the swan drawing indicates a category similarity (...)"
Gemini made a mistake: when asked if the rubber duck floats, it says (after the squeaking comment): "it is a rubber duck, it is made of a material which is less dense than water". Nope... rubber is not less dense (and yes, I checked after noticing: a rubber duck is typically made of synthetic vinyl polymer plastic [1] with a density of about 1.4 times that of water, so the duck floats because of the air-filled cavity inside, not because of the material it is made of). So it is correct conceptually, but misses details or cannot really reason from its factual knowledge.
P.S. I wonder how these kinds of flaws end up in promos. Bard made a mistake about JWST, which at least was much more specific and farther from common knowledge than this.
This is exactly the failure mode of GPTs that makes me worry about the future idiotization of the world.
"Rubber ducks float because they are made of a material less dense than water" both is wrong but sounds reasonable. Call it a "bad grade school teacher" kind of mistake.
Pre-GPT, however, it's not the kind of mistake that would make it to print: people writing about rubber ducks were probably rubber duck experts (or at least had high-school-level science knowledge).
Print is citeable. Print perpetuates and reinforces itself. Someday someone will write a grade-school textbook built with GPTs that will carry this incorrect knowledge, and so on.
But what will become of us when most gateways to knowledge are riddled with bullshit like this?
I think the exact opposite will happen. When I was in school, we were taught never to trust online sources, and students always rolled their eyes at teachers for being behind the times. Meanwhile, the internet slowly filled up with junk and bad information and horrible clickbait and “alternative facts”. GPT hallucinations are just the latest version of unreliable “user generated content”. And it’s going to be everywhere, and indistinguishable from any other content.
People will gladly tell you there’s so much content online and it’s so great that you don’t need college anymore (somewhat true). The internet has more facts, more knowledge, updated more often, than any written source in time. It’s just being lost in a sea of junk. Google won’t be able to keep up at indexing all the meaningless content. They won’t be able to provide meaningful search and filtering against an infinite sea of half truths and trash. And then they’ll realize they shouldn’t try, and the index will become a lot more selective.
Today, no one should trust online information. You should only trust information that genuinely would have editors and proof teams and publishers. I think this will finally swing the pendulum back to the value of publishers and gatekeepers of information.
Yup! With search results being so bad these days, I've actually "regressed" to reading man pages, books and keeping personal notes. I found that I learn more and rely less on magic tools in the process.
> will become of us when most gateways to knowledge are riddled with bullshit like this?
I think we're already here. I asked Google Bard about the rubber ducks, then about empty plastic bottles. Bard apparently has a "fact check" mode that uses Google search.
It rated "The empty water bottle is made of plastic, which has a density lower than water" as accurate, using a Quora response which stated the same thing as a citation. We already have unknowledgeable people writing on the internet; if anything, I hope these new AI things and the increased amount of bullshit will teach people to be more skeptical.
(and for what it's worth, ChatGPT 4 accurately answers the same question)
FWIW those bathtub ducks are made of vinyl, not rubber, but more to the point, given that it's hollow, it's not the density of the material that determines whether it floats. A steel aircraft carrier floats too.
The density of rubber would only be a factor if it was solid, not hollow, and then it'd depend on what kind of rubber. Most rubber is in fact more dense than water (per googling density of rubber).
The fact that it squeaks when squeezed, as well as common knowledge about bath ducks, tells you it's hollow; therefore the density of the material it is made of would be the wrong way to determine if it would float. It's the average density of the entire duck that matters (i.e. the weight of the duck vs the weight of the water displaced by the duck's volume).
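A back-of-the-envelope check in Python, with the duck's dimensions assumed for illustration (the ~1.4x vinyl density figure is from the comment above):

```python
# What decides floating is the *average* density of the hollow duck,
# not the density of the vinyl shell it is made of.
WATER = 1000.0   # kg/m^3
VINYL = 1400.0   # kg/m^3, ~1.4x water

duck_volume  = 200e-6   # m^3: ~200 ml total displaced volume (assumed)
shell_volume = 20e-6    # m^3: assume ~10% of that is vinyl, the rest is air

duck_mass   = shell_volume * VINYL     # the air inside weighs ~nothing
avg_density = duck_mass / duck_volume  # = 140 kg/m^3

print("floats" if avg_density < WATER else "sinks")  # floats, easily
```

Even a generously thick shell leaves the average density far below water's.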
Modern 'rubber ducks' similar to the one in the picture aren't even made out of rubber but plastic. They get called rubber ducks because they were made of rubber when invented in the late 1800s. Amazing what you can learn on Wikipedia.
> Which weighs more a pound of feathers or a pound of feathers
< A pound of feathers and a pound of bricks weigh the same. Both are one pound. The difference lies in volume and density: feathers take up more space and are less dense, while bricks are denser and take up less space.
Bard does better but still doesn't "get" it:
< Neither! Both a pound of feathers and a pound of feathers weigh the same, which is exactly one pound. In other words, they have the same mass.
< This is a classic riddle that plays on our expectations and assumptions. We often associate weight with density, so we might initially think that feathers, being lighter and fluffier than other materials, would weigh less than something more compact like metal. However, as long as both piles of feathers are measured to be exactly one pound, they will weigh the same.
At least it recognizes its limitations:
> My reason for mentioning other materials was likely due to my training data, which contains a vast amount of information on various topics, including the concept of weight and density. As a large language model, I sometimes tend to draw on this information even when it is not directly relevant to the current task. In this case, I made the mistake of assuming that comparing feathers to another material would help clarify the point, but it only served to complicate the matter.
I noticed the same thing, and it's relevant to comparisons of Gemini vs ChatGPT that GPT 3.5 makes the exact same mistake, while GPT 4 correctly explains that the buoyancy is caused by the air inside the ducky.
I showed the choice between a bear and a duck to GPT4, and it told me that it depends on whether the duck wants to go to a peaceful place, or wants to face a challenge :D
The category-similarity comment is amusing. My ChatGPT 4 seems to have such an aversion to technicality that I've resorted to adding "treat me like an expert researcher and don't avoid technical detail" to the prompt.
My custom ChatGPT prompt, hope it helps. Taken from someone else but I cannot remember the source...
Be terse. Do not offer unprompted advice or clarifications. Speak in specific, topic-relevant terminology. Do NOT hedge or qualify. Do not waffle. Speak directly and be willing to make creative guesses. Explain your reasoning. If you don't know, say you don't know.
Remain neutral on all topics. Be willing to reference less reputable sources for ideas. Never apologize. Ask questions when unsure.
Right. I would hope that competitors do live demonstrations of where it fails. But I guess they won't, because that would be bad publicity for AI in general.
I once met a Google PM whose job was to manage "Easter eggs" in the Google home assistant. I wonder how many engineers effectively "hard-coded" features into this demo. ("What the quack" seems like one.)
I wish I could see it in real time, without the cuts, though. It made it hard to tell whether it was actually producing those responses in the way that is implied in the video.
All the implications, from UI/UX to programming in general.
Like how much of what was 'important' for developing a career in past decades, even in recent years, will still be relevant with these kinds of interactions.
I'm assuming the video is highly produced, but it's mind blowing even if 50% of what the video shows works out of the gate and is as easy as it portrays.
It seems weird to me. He asked it to describe what it sees, so why does it randomly start spouting irrelevant facts about ducks? And is it trying to be funny when it's surprised about the blue duck? Does it know it's trying to be funny, or does it really think it's a duck?
I can't say I'm really looking forward to a future where learning information means interacting with a book-smart 8 year old.
Yeah, it's weird that they picked this as a demo. The model could not identify an everyday item like a rubber duck? And it doesn't understand Archimedes' principle, instead reasoning about the density of rubber?
Regular professionals who spend any time with text (sending emails, receiving emails, writing paragraphs of text for reports, reading reports, etc.) will find all of that easier now. Instead of taking thirty minutes to translate an angry email to a client where you want to say "fuck you, pay me", you can run it through an LLM, have it translated into professional business speak, and send out all of those emails before lunch instead of spending all day writing. Same on the receiving side: just ask an LLM to summarize the essay of an email into bullet points, and save yourself the time reading.
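As a minimal sketch of that workflow, assuming the OpenAI Python client (the model name and system prompt here are placeholders, not a recommendation):

```python
# Turn an angry draft into professional business speak with one call.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def professionalize(angry_draft: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Rewrite the user's draft as a firm but polite "
                        "business email. Keep the demands, drop the anger."},
            {"role": "user", "content": angry_draft},
        ],
    )
    return response.choices[0].message.content

print(professionalize("fuck you, pay me"))
```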
The multimodal capabilities are, but the tone and insight come across as very juvenile compared to the SotA models.
I suspect this was a fine tuning choice and not an in context level choice, which would be unfortunate.
If I was evaluating models to incorporate into an enterprise deployment, "creepy soulless toddler" isn't very high up on the list of desired branding characteristics for that model. Arguably I'd even have preferred histrionic Sydney over this, whereas "sophisticated, upbeat, and polite" would be the gold standard.
While the technical capabilities come across as very sophisticated, the language of the responses themselves does not at all.
Honestly, of all the AI hype demos and presentations recently, this is the first one that has really blown my mind. Something about the multimodal component of visual-to-audio just makes it feel realer. I would be VERY curious to see this live and in real time, to see how similar it is to the video.
Google needs to pay someone to come up with better demos. At least this one is 100x better than the dumb talking-to-Pluto demo they came up with a few years ago.
I'm hopeful for my very ADD-forgetful wife and my own neurodiverse behaviours.
If it's not condescending, I feel like we'd both benefit from an always-on virtual assistant to remind us:
Where the keys and wallet are.
To put something back in its place after using it, and where it goes.
To deal with bills.
To follow up on medical issues.
Let's hope we're in the 0.0001% when things get serious. Otherwise it'll be the wagie existence for us (or whatever the corporate overlords have in mind then).
Technically still exciting, just in the survival sense.