None of the people making these decisions care about the long-term best interest of the company. Sundar doesn't give a shit about Google's future; he is laser-focused on what really matters to him and the people he reports to: the stock price. A big round of layoffs can juice the stock, and it's a nice way to keep the numbers going up in between industry events where they can show off deceptively edited product demos and knowingly lie about the capabilities of their current and future AI offerings.
To put it another way: Google doesn't want to be a software company anymore. Google does not care about making software, or products, or the people who make or use their products. Google wants to be a growth company where the stock price goes up by two-digit percentages every quarter. That is absolutely the only thing that Google cares about. Google has realized that the best way to make this happen is to commit securities fraud by lying to their investors about their products, and by drip-feeding layoffs to show that they're serious about their underlying financials. It's theater, playing pretend at being business people. The individual products are allowed to go about their business as long as they don't cost too much money, but Google doesn't want to make money by having good products that people love to use; Google wants to make money by being a hyper-growth unicorn again, and they will do anything at all to recapture that kind of growth even if they're slitting the throat of the company to do it.
Whether this attitude is good for Google or its users is left as an exercise to the reader.
We already have a term for prompting a computer in a way that causes it to predictably output useful software; we called that programming, and people on this website used to think that knowing how to do that was a worthwhile field of study.
Google simply does not have a culture of giving a shit about people's experiences with their products. If you're having a problem, you'd better either have it so frequently and severely that it shows up on whatever monitoring system they're using to evaluate release health, or get comfortable with it for the long haul.
Google also routinely removes AI suggestions for searches that produce embarrassing results (you don't get them for searches about keeping cheese on your pizza anymore, for example), so it's even harder to validate a result once it goes viral.
My friend described the original symptom cluster as "Havana few too many syndrome" (as in a few too many drinks), which I think is probably about right. From there it's just people working themselves into a panic over nothing, like American cops with fentanyl "exposure": https://www.npr.org/2023/05/16/1175726650/fentanyl-police-ov...
#2 was definitely a problem for them; Teslas are out of action longer because of the wait times, AND they are more expensive to repair (which means they're more expensive to insure, and some carriers won't cover them at all). Being out longer costs Hertz a lot of money, and having to eat higher maintenance costs on top of that is a brutal double-dip.
edit just to add: This is much less of a problem for individual owners; I know people who are still happy with their Teslas, and a single person needing a single replacement car while theirs is awaiting repair is not a big deal. But a car company needing a thousand replacements while a thousand cars are sitting in storage is pretty bad for them.
I love that too. You definitely don't see as many of them these days. By 2006 they were kind of a punchline (cf. the TV series "30 Rock" and its portrayal of them as a goofy dead-end tech for weirdos, sold by Dennis Duffy).
This might or might not be an interesting digression (apologies if it's the latter!) but many medical professionals still carry beepers or pagers of some kind. Not like "an app on their phone that will ring your phone at you even through Do-Not-Disturb" (I have one of those), but something that is very recognizably an old school beeper. They often have a SIM card in them, and the newer ones sometimes have wifi as well for redundancy.
My wife is a nurse at a cancer treatment center; she coordinates care for extremely sick people who are getting very specialized treatments, and she's kind of the front-line person for dealing with them and project-managing emergency situations, so she and all the doctors she works with carry them. I thought it was actually pretty cool :)
I asked her about it once, and apparently the hospital system looked at the more modern app-based paging stuff and decided that while it was cheaper, the reliability hit wasn't worth it to them. The physical hardware for these things is outrageously sturdy: it has a lifespan of like a decade, and it's extremely easy to replace. Sure, your wifi might be out or your telephony might be down, but that's a problem your app has to deal with too. Apps are easier to provision, but they're an extra layer of stuff that can go wrong (your phone is getting an update or is out of battery, you left it in your car because you were playing music with it and forgot to take it out of the console, it got stolen because phones are recognizably valuable), so they just stuck with the old familiar form factor that does one thing, extremely reliably.
This isn't a criticism of the app-based paging systems or anything; they're quite reliable in my experience. I just thought it was a neat additional data point about the considerations that go into the thought process about provisioning an alarm for your employees when the alarm almost always means either "I have a time-sensitive question about a patient's ongoing medical emergency" or "your patient is about to die".
Hospitals in my area of the US still use POCSAG pagers, totally unencrypted. They do mention patient information, but I guess the obscurity makes it ok.
> Hospitals in my area of the US still use POCSAG pagers, totally unencrypted. They do mention patient information, but I guess the obscurity makes it ok.
Nope, the obscurity doesn't make it okay. If it takes place over the phone lines, it is arguably exempt from encryption requirements under HIPAA (much like a fax).
Otherwise, they're just turning a blind eye and hoping nobody notices (which is surprisingly common when it comes to HIPAA).
The good news (for them, not for patients) is that, even if they get caught, the maximum fine is $2 million per calendar year per category of violation, so if they're flush enough they don't even need to bother being compliant in this area.
It's over the air, not even phone lines. PDW, SDRSharp, and an rtl-sdr dongle is all that's needed.
And yes, there is a lot of patient info in that traffic. It's not illegal for the hospital to broadcast this, and it's not illegal to listen in and decode the signals, but it is very much illegal to do anything with the information gathered.
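If you want to see it for yourself without the Windows GUI tools, the same decode can be done from the command line by piping rtl_fm into multimon-ng's POCSAG decoders, which I believe do the same job as PDW. Here's a rough sketch of that pipeline wrapped in Python; the frequency is only a placeholder for whatever your local paging system uses, and it assumes both tools and an RTL-SDR dongle are present:

    # Sketch: narrowband FM demodulate with rtl_fm, then hand the raw audio to
    # multimon-ng's POCSAG decoders. Decoded pages print to the terminal.
    # 152.840 MHz is a placeholder frequency; 22050 Hz is the raw sample rate
    # multimon-ng expects. /dev/stdin assumes a Linux-ish system.
    import subprocess

    rtl = subprocess.Popen(
        ["rtl_fm", "-f", "152.840M", "-s", "22050"],
        stdout=subprocess.PIPE,
    )
    decoder = subprocess.Popen(
        ["multimon-ng", "-t", "raw",
         "-a", "POCSAG512", "-a", "POCSAG1200", "-a", "POCSAG2400",
         "/dev/stdin"],
        stdin=rtl.stdout,
    )
    decoder.wait()

Same legal caveat as above: receiving and decoding is one thing, doing anything with what you read is another.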
> It's over the air, not even phone lines. PDW, SDRSharp, and an rtl-sdr dongle is all that's needed. And yes, there is a lot of patient info in that traffic. It's not illegal for the hospital to broadcast this, and it's not illegal to listen in and decode the signals, but it is very much illegal to do anything with the information gathered.
I'm not familiar with this particular technology, which is why I didn't make a definitive claim in my previous comment. But I am quite intimately familiar with HIPAA and related regulations, and I am extremely skeptical of the third sentence you wrote.
Maybe it uses a particular slice of spectrum that's considered illegal to intercept or tamper with, just like analog cell phone signals, and HIPAA (inappropriately, IMHO) leans on that to justify an exemption from encryption?
I don't think I have any logs of these any more, but when I was listening to the local hospital's pager traffic, I seem to recall messages that were along the lines of [last name][room number][sexually transmitted disease test is complete].
It surprised me at the time too, because I used to do work processing CDA documents into FHIR data and I know how crazy HIPAA can be with PHI/PII, but at the same time these legal frameworks often have carveouts, or super serious adoption deadlines that keep getting pushed to next year (and then next year, and then next year).
At least twice, I've accidentally set my iOS devices to the Do Not Disturb focus mode. The first time made me miss a job interview calendar reminder, which led me to (1) learn that there's no way to disable this 'feature' (the 'do not disturb' focus cannot be deleted), and (2) set the DnD-focus-mode-specific wallpaper to something radically different from normal just so I'd spot it faster next time. It did happen again, but the second time I knew what was up even on the lock screen and switched the focus back to normal before it did any harm.
Fixing the company sounds good, but you have to remember that the people who would be fixing it are the people who got it to this point in the first place.
I think it's very likely that nobody currently at Boeing has both the ability and the willingness to make the kinds of changes they would need to make in order to become a functional company again, because Boeing has spent over two decades systematically purging senior engineers from management and leadership. What's left is another crappy company full of empty suits with MBAs, who don't understand the product they're making and don't care if they're literally killing people and the company is rotting out from under them, as long as they can monetize the rot to make their quarterly numbers.
That's correct. LLMs are plausible sentence generators; they don't "understand"* their runtime environment (or any of their other input), and they're not qualified to answer your questions. The companies providing these LLMs to users will typically include a disclaimer along these lines, because LLMs tend to make up ("hallucinate", in the industry vernacular) outputs that are plausibly similar to the input text, even if they are wildly and obviously wrong and complete nonsense to boot.
Obviously, people find some value in some output of some LLMs. I've enjoyed the coding autocomplete stuff we have at work, it's helpful and fun. But "it's not qualified to answer my questions" is still true, even if it occasionally does something interesting or useful anyway.
*- this is a complicated term with a lot of baggage, but fortunately for the length of this comment, I don't think that any sense of it applies here. An LLM doesn't understand its training set any more than the mnemonic "ETA ONIS"** understands the English language.
**- a vaguely name-shaped presentation of the most common letters in the English language, in descending order. Useful if you need to remember those for some reason like guessing a substitution cypher.
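To make that footnote concrete, here's a toy sketch of the kind of first-guess frequency analysis that ordering supports. The constant below is the standard approximate English frequency order, and the ciphertext is just a Caesar-shifted pangram I made up for illustration:

    # Toy frequency analysis: rank the ciphertext's letters by how often they
    # occur, then line them up against English letter frequencies as a first
    # guess at a substitution key.
    from collections import Counter
    import string

    # Standard approximate descending English frequency order ("ETAOIN SHRDLU...").
    ENGLISH_ORDER = "ETAOINSHRDLCUMWFGYPBVKJXQZ"

    def guess_substitution_key(ciphertext):
        counts = Counter(c for c in ciphertext.upper() if c in string.ascii_uppercase)
        ranked = [letter for letter, _ in counts.most_common()]
        # The most common ciphertext letter is guessed to stand for E, and so on.
        return dict(zip(ranked, ENGLISH_ORDER))

    print(guess_substitution_key("WKH TXLFN EURZQ IRA MXPSV RYHU WKH ODCB GRJ"))

On a text that short the guess comes out mostly wrong, which is sort of the point: the mnemonic gives you a starting guess to refine, not understanding.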
If you can watch the video demo of this release, or for that matter the Attenborough video, and still claim that these things lack any form of "understanding," then your imagination is either a lot weaker than mine, or a lot stronger.
Behavior indistinguishable from understanding is understanding. Sorry, but that's how it's going to turn out to work.
Why are people so eager to believe that people can? When it comes to the definitions of concepts like sentience, consciousness, thinking and understanding, we literally don't know what we're talking about.
It's premature in the extreme to point at something that behaves so much like we do ourselves and claim that whatever it's doing, it's not "understanding" anything.
We've studied human behavior enough to understand that there are differences between animals in the level of cognition and awareness they (outwardly) exhibit.
Are we not generally good at detecting when someone understands us? Perhaps it's because understanding has actual meaning. If you communicate to me that you hit your head and feel like shit, I not only understand that you experienced an unsatisfactory situation, I'm capable of empathy -- understanding not only WHAT happened, but HOW it feels -- and offering consolation or high fives or whatever.
An LLM has an understanding of what common responses were in the past, and repeats them. Statistical models may mimic a process we use in our thinking, but that is not the entirety of our thinking. Just as computers are limited by the programmers who code their behavior, LLMs are limited by the quality of the data corpus fed to them.
A human, you can correct in real time and they'll (try to) internalize that information in future interactions. Not so with LLMs.
By all means, tell us how statistically weighted answers to "what's the next word" correlates to understanding.
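To make what I mean by that phrase concrete, here's a toy sketch of "statistically weighted next word" in its most literal form, a word-bigram sampler. Actual LLMs are enormous neural networks over tokens rather than a lookup table like this, but the training objective (predict the next token) has the same shape:

    # Toy bigram "next word" sampler: count which word follows which in a tiny
    # corpus, then sample the next word in proportion to those counts.
    import random
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the dog slept on the mat".split()

    following = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        following[current][nxt] += 1

    def next_word(word):
        counts = following[word]
        words, weights = zip(*counts.items())
        return random.choices(words, weights=weights)[0]

    sequence = ["the"]
    for _ in range(8):
        sequence.append(next_word(sequence[-1]))
    print(" ".join(sequence))

The argument is whether scaling that mechanism up amounts to understanding; the sketch is only there to pin down what "statistically weighted" refers to.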
> By all means, tell us how statistically weighted answers to "what's the next word" correlates to understanding.
By all means, tell me what makes you so certain you're not arguing with an LLM right now. And if you were, what would you do about it, except type a series of words that depend on the previous ones you typed, and the ones that you read just prior to that?
> A human, you can correct in real time and they'll (try to) internalize that information in future interactions. Not so with LLMs.
Not so with version 1.0, anyway. This is like whining that your Commodore 64 doesn't run Crysis.
We've put an awful lot of effort into figuring that out, and we have some answers. Much of the difficulty in exploring the brain is ethical, because people tend to die or suffer greatly if we experiment on them.
Unlike LLMs, which are built by humans and have literal source code and manuals and SOPs and shit. Their very "body" is a well-documented digital machine. An LLM trying to figure itself out has MUCH less trouble than a human figuring itself out.
Just wanted to say that I think your rendition of Zellige is really nice, thank you for sharing! This is a very cool drawing language, and I think your art here is a really solid example of how creative you can be even with very limited tools.
I made a pinwheel, which is maybe not that exciting, but was a lot of fun for me. I like the bracket syntax a lot, and figuring out how to make this work actually recaptures some of the old sense of exploration and fun that I felt before programming was my day job.
Thanks, please consider sharing your pinwheel, I'm sure it was awesome to make too!
You're touching on an important point. I tried to figure out CFR[] by trial and error and it was a lot of fun, the kind I hadn't felt in a long time. I think this boils down to two things:
- Bounded complexity, which encourages you to explore it to its fullest
- Immediate feedback: the result is not disconnected from the act of editing the code, so your iteration cycle is super short.