Dumb question, but if this is "open source" is there source code somewhere? Or does that term mean something different in the world of models that must be trained to be useful?
I've known people to abandon veganism (for vegetarianism) over cheese, since it's such a common ingredient in restaurant food. Butter feels a little less likely.
Butter is high in saturated fats which are bad for you. It is also very calorie dense. Eating a lot of calorie dense foods makes it difficult to control your weight.
It's probably better than margarine but I wouldn't describe it as a health food.
N=1, but I’ve been doing low carb paleo for 15+ years, from about age 20 to my current late 30s. I live off of butter, tallow, and lard. My weight has only crept up when I’ve eaten a lot of processed food. I get quite lean even with high fat if I fast more frequently or dip into ketosis. I’m trying to pack on some extra muscle with weight lifting right now and it’s not easy to get enough clean calories short of eating spoonfuls of (happy, pastured) bacon grease.
All I’m trying to say is that butter isn’t the enemy. Maybe commercial dairy production practices are the enemy, won’t argue with that.
> I’m trying to pack on some extra muscle with weight lifting right now and it’s not easy to get enough clean calories short of eating spoonfuls of (happy, pastured) bacon grease.
I thought I was the only one with that issue! I'm not paleo but my diet is heavy on whole foods, salads, and I don't eat much in carbs so once I started weight lifting, getting enough calories during the bulking phase has always been a struggle.
Protein is pretty easy by just chugging a few protein shakes a day, but calories? I had to start drowning my salads in olive oil and sneaking olive oil, avocado oil, or butter into every dish I cooked. A stick of butter only has like 800 calories!
Problem with eating lots of animal fats isn't necessarily weight gain, but increased risk of ASCVD from raised LDL-c/ApoB. Can't really see/feel that until you keel over with an MI.
Some people call themselves vegans but will still use animal products that they feel are ethical. Also, some vegans do occasionally use animal products just because they want to.
I don’t think it’s a conspiracy but it’s weird that the vegan topic even came up in this article because it is immaterial to the main topic.
I've seen this story making the rounds, but this isn't news, is it?
All self-driving companies maintain teams that make a decision when the cars get confused or stuck, and they report the number of such handoffs to NHTSA.
Is it just that there are teams in the Philippines specifically?
1) "Often" is a gross mischaracterization. It's so infrequent you wouldn't believe. Nearly all rides are performed fully autonomously without human intervention. But "often" sure sounds spicy!
2) "its autopilot is just guys from the Philippines": no, it's not. A human is in the loop to help hint to the Waymo Driver AI platform what action to take if its confidence level is too low or it's facing a particularly odd edge case where it needs to be nudged to take an alternate route. This framing makes it sound like some dude in Manila is remote controlling the car. They're not. They're issuing hints to and confirming choices by the Waymo Driver, which remains in full control of the vehicle at all times.
Because lay people, even technically unsophisticated ones, naturally start wondering "well, isn't there some delay between a person in the Philippines and the car in the US? how could that be safe? what if the internet dips out or the connection drops?" Which are good and valid points! And why this framing is so obnoxious and lazy. The car is always driving itself.
They finally issued a correction in the linked article that makes it clear they're not remote controlling the cars, but the headline is still really slanted and a frustrating framing. When you ride in these things, you can see just how incredible this technology is and how far we've come.
There's also the implicit xenophobia/offshoring angle that people in a call center in the Philippines must be doing low quality work and/or being exploited.
Well they are being exploited for potentially illegal purposes.
Forget self driving cars for a minute. If Domino’s wants to deliver a pizza to me, the delivery car driver needs to be licensed (pretty much in that state).
It doesn’t matter even if Domino’s extends some sort of liability insurance. Laws are laws. Legally driver must be licensed. It doesn’t matter if they drive while on a speakerphone call with a licensed driver.
It also wouldn’t matter if the delivery driver had no license but carried a licensed passenger at all times. It wouldn’t matter even if the passenger owned the car.
Having a person drive a car in a country in which they are not licensed to drive seems fundamentally illegal. It’s not a technology issue. It doesn’t matter if there are sensors and satellite links involved. The driver must be licensed.
Somehow society has decided, for a little convenience, to forget all principles and let tech companies run roughshod over laws, societal norms, and basic human decency.
This is worse than the 1970s mentality of “if it came from the computer it must be correct”. Now it is AI…
Should probably be licensed to drive in the US if “explicitly proposing a path for the vehicle to consider” as Waymo has disclosed…
I would not personally be comfortable “explicitly proposing a path” for a vehicle operating in the Philippines since I’ve never even been there, let alone driven there. Why would I be comfortable with somebody doing the reverse?
It seems possible that people in the Philippines providing advice to Waymo vehicles in the US get some training on US road signage, traffic regulations, etc. (I can't see how it would make any sense for Waymo to pay people to do this and not give them the information they need to do it reasonably well, since the whole point is for them to handle difficult cases.)
And it would be difficult for whatever training Waymo provides to its employees to be less stringent than the lax license requirements of most US states.
Tourists can drive in the US on their foreign license. Can that be used as a loophole for a call center?
Also, maybe it is a gray area where they are not asking what they don't want to hear. Those offshore subcontractors already break any US law they want because they aren't hiring humans inside the US, they are providing a service from abroad.
Specifically, how do you know the operator can drive?, as you ask. But also, how do you know your operator won't steal your PII / bank account details out of your law enforcement physical jurisdiction?
As far as I understand it, they aren't being allowed to drive. They are doing the equivalent of "ignore that, it's not a real obstacle" or "try to go around this way", and then the car takes that input into account and does the actual driving (steering, control of throttle/brake) on its own as usual.
I don't need, legally, to demonstrate any knowledge of this to drive on US roads currently (or even, strictly speaking, to know what side of the road I should drive on).
No, I'm saying that no one should be "concerned that non-registered drivers in one country are being allowed to drive remotely in a different country" because they aren't driving.
One that I heard a lot is that if you're in the US during the day talking to an offshored tech support person, it's the middle of the night for them. The A-team doesn't work overnight, so you're getting at best second tier. blah blah
The guy says there are workers abroad, not exclusively in the Philippines. Philippine call centers work when it is night in the US. There are almost certainly other centers in other locations that work when it is daytime in the US.
Because night shifts are always more expensive. Nothing to do with any A, B, or C team.
Edit: "Markey then asked about where the operators are located, to which Peña says they have "some in the U.S. and some abroad," however he did not know an exact percentage of those located elsewhere."
He gave a non-answer, quite surely on purpose. Since the interviewer didn't explicitly ask "Only in the Philippines?", I can see the guy retorting "I never said there weren't operators in other places" (again, without saying which other places, or even whether there is any other place).
It probably has more to do with the fact that Filipinos speak English. There are no other countries like that in Asia. I mean, Singapore I guess, but they're busy with their own things.
I'm not saying it's true or not true, I'm saying I don't know what "xenophobia" has to do with evaluating the quality of workers being used in potentially life-saving situations.
I'd have a way easier time buying the idea that there's genuine concern for the quality of this work if say, few Americans old enough to do so were licensed to drive. But er, actually it's estimated at almost 90% because the standards are extremely lax.
What "potentially life-saving situations" are you envisioning?
Nobody has mentioned any evaluation of anything. The grandparent mentioned that xenophobia makes the headline more spicy. The phrase "remote operators" is not as attention-grabbing as "remote operators from the Philippines" or even "Pinoys" can be.
Edit: "They finally issued a correction in the linked article that makes it clear they're not remote controlling the cars, but the headline is still really slanted and a frustrating framing"
They are being exploited. They live in a lower cost-of-living country than where their services are rendered, and so neither demand nor receive the same wages as someone in the USA. The contracting company profits - quite intentionally! - from labour arbitrage.
The Western companies who employ or contract people in these other countries aren't altruistically investing. They're on the hunt for people who will accept lower wages, and for governments that won't insist on workers rights, health and safety.
Hiring specifically in Texas or Arizona because you heard it's lower cost-of-living than the Bay Area, and not being willing to offer Bay Area salaries to people there... that's still exploitation.
If you were instead hiring from anywhere (because you'd be happy with a remote worker, and they have the same employment rights) and willing to pay the same as you'd pay your Bay Area workers, i.e. it's about the hunt for talented/capable employees whereever they might be, rather than a hunt for cheap ones, that's no longer exploitation.
Yeah but ironically it's actually the workers in the US who are being exploited. The workers in the developing countries are largely beneficiaries since they get access to wages and a labor market far beyond their local region. (Obviously the companies still benefit the most.)
They are being exploited. I've traveled to Cebu City where many of these call centers are located. My wife is from the area. To Filipinos, it's a good job, but the quality of life for these workers is still very poor. It's not a living wage; most can't afford to live on their own.
No, I have a friend from Madagascar where they have the same type of 'jobs' (basically classifying stuff for AI, or checking reported images to see if it's porn or worse). It is a 'good job' in the sense that it's a 'desk' job you can do at home that also signals education, so its social value is high. It is also very 'competitive', so the pay is low and the hours needed to live on it approach 90/week (it's a 12h/day job).
No I'm familiar with the call center jobs op means, they're good paying (for the region) but you have to go in to the physical location in the city. Which usually means paying rent or a long commute.
It's not lazy framing, this is what "journalism" is now. Push your agenda as far as you can, misrepresenting as many facts as you like. At the very end of your story -- which >85% will never get to -- walk back your misdirections with a paragraph or two of facts, right next to your bolded "sign up" text. None of this is unintentional or accidental.
I think in response to the propaganda and opinion that has been passed as journalism there are very compelling new journalism outlets like bellingcat. So there is hope and probably space for journalism that fills this gap.
Well one concern could be something like - ride share companies already extracted a lot of the profit share of local taxi companies out of their local economies and moved it to Silicon Valley. But at least there were local jobs so a good amount of money stayed in the local economy.
Now with driverless all the money leaves the local economy to go to Silicon Valley. And then what human labor is required is then offshored.
I assume you have sources for the claims you're making above? Like actual data on the number of people employed doing this work, how often they "guide" the car, etc? Otherwise it's hard to believe your claims.
Interesting, an immediate downvote asking for sources.
Yes, the display will chime and indicate "We're working to help you get moving again" and "Sit tight and keep your seatbelt fastened." Support may call in if necessary. Or if you've reached out to Support, they'll explain that they're going to send a command to the vehicle or modify the route.
It's not unusual and it's quite a routine mode. It will happen if there's a jam in front of you, like a parking lot or narrow passage. Twice, we were behind a marked police car that was sort of double-parked. You'll know Support is imminent when the car is hesitating, and the in-car visualization may show multiple candidate paths.
It happened to me once today at a "valet parking" stand, in a very busy drop-off circle. (I traveled about 25 miles.) I was also cracking up, because it chose a lot of backroads for the routes, which could've been done on arterials. So I was treated to a real "scenic route" at no extra charge. But Support never called in, and the delay was so brief that I am unsure whether their input was necessary to clear the way.
I did once have a human driver sent out to move the car. The dropoff was in the wrong lot, and when I told them I couldn't exit there, they said the battery was so low that it needed to return to the depot ASAP and wouldn't change the route, so a human was dispatched and moved it a couple hundred yards. There was a distinct process for her to climb in and disengage the Waymo Driver, but otherwise it was a normal handoff that could've happened anywhere.
This defense is missing the point. Yes, humans aren't remote-driving the cars, and yes, most miles are autonomous. But the relevant question isn't how often a human intervenes; it's how many humans must be continuously available for the system to function at all. Even if interventions are rare, Waymo still needs operators on shift, fully alert, low-latency, and trained for local conditions, and that cost exists whether they're doing something or not. Capacity planning is driven by correlated failures, not averages: blackouts, construction, special events, and weather can cause many vehicles to request help at once, and we've already seen queues form. That means the human layer is sized for worst-case concurrency, not "99.99% of miles." So no, it's not "just guys in the Philippines driving cars," but it's also not "so infrequent you wouldn't believe." It's a highly autonomous system with a permanent human ops shadow, and the fact that this work is offshored strongly suggests that shadow is economically material. Miles are autonomous. Ops are not.
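The "sized for worst-case concurrency, not averages" point is easy to see with a toy simulation. All numbers below (fleet size, per-minute intervention rates, the 30-minute "correlated event" window) are made-up assumptions for illustration, not Waymo figures:

```python
import random

random.seed(0)

FLEET = 1000            # hypothetical fleet size (assumption)
MINUTES = 24 * 60       # one simulated day
BASE_RATE = 0.0001      # per-vehicle, per-minute chance of needing help (assumption)
EVENT_RATE = 0.02       # elevated rate during a correlated event, e.g. outage (assumption)
EVENT = range(600, 630) # a hypothetical 30-minute citywide disruption

concurrent = []
for minute in range(MINUTES):
    rate = EVENT_RATE if minute in EVENT else BASE_RATE
    # count vehicles requesting help this minute
    requests = sum(1 for _ in range(FLEET) if random.random() < rate)
    concurrent.append(requests)

avg = sum(concurrent) / MINUTES
peak = max(concurrent)
print(f"average concurrent requests per minute: {avg:.2f}")
print(f"peak concurrent requests per minute:    {peak}")
```

Even with interventions vanishingly rare on average, the peak during the correlated window is an order of magnitude higher than the daily mean, so staffing to the average would leave the queue to build exactly when help matters most.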
Yes, it's just because of the Philippines. The mention of the Philippines is triggering some additional scrutiny. When Waymo themselves announced this back in 2024[1], they made no mention of the country where these humans are located. Now people are raising questions about data sovereignty and local training, such as U.S. driving licenses. Or, if you are a cynic, you can say it's xenophobia.
Partially, but also as an easy attack on "bad big tech" and AI - it wins votes right as primary season starts gearing up [1] during what is being treated by both the DNC and the GOP as a highly competitive election [0].
Ed Markey is going to face a severely harsh primary this election cycle (as are other incumbents in both parties this season).
Yes, I think this is counting on the ignorance that people will believe there are "drone operators" at the console, halfway across the world, who are driving our cars [A.I. stands for "Actually Indians"?]
The way I understood the liability conversation, several years ago, was that each "autonomous vehicle" would have a corresponding operator of record, a licensed driver, who would be the responsible person for the vehicle's behavior. That there would be a designated person to carry insurance and licensing and be personally responsible and personally answer to criminal or civil charges if "their" vehicle got in a fix.
Honestly this model doesn't make any sense, as Waymo has set it up so that the only driver is the Waymo Driver making decisions, because the Waymo Driver is the only one who's privy to 100% of the real-time data.
The remote CSRs, whether they're in Philippines or stateside engineers on an escalation, are explicitly not driving the car but giving it suggestions. If they need someone to "drive the car" they literally dispatch a human who gets behind the wheel, and that's how it works.
>Yes, I think this is counting on the ignorance that people will believe there are "drone operators" at the console, halfway across the world, who are driving our cars [A.I. stands for "Actually Indians"?]
...
>Honestly this model doesn't make any sense, as Waymo has set it up so that the only driver is the Waymo Driver making decisions, because the Waymo Driver is the only one who's privy to 100% the real-time data.
Their competitor Tesla does use teleoperation in their "robotaxis". So what is ignorant about believing it to be the case in this scenario?
The article you link literally says that Tesla's teleoperation is the same kind as Waymo's, and there is nothing that the company has ever deployed that will enable "remote drone operators" so I don't know what your point is.
Tesla and Waymo both offer systems to provide sensor insight to remote observers, and the remote observers can send suggestions and nudges to the vehicles. The general public does not understand the nuance here, and they imagine someone is sitting with a steering wheel and pedals, like a radio-controlled toy or a USAF Reaper drone.
> Our remote operators are transported into the device’s world using a state-of-the-art VR rig that allows them to remotely perform complex and intricate tasks. Working with hardware teams, you will drive requirements, make design decisions and implement software integration for this custom teleoperation system.
The article notes that this is very unlike what Waymo is doing:
> This should enable Tesla to launch a service similar to Waymo without having to achieve a “superhuman level of miles between disengagement.”
> similar to Waymo
> taking a page out of Waymo’s book
> something that [...] Waymo has already deployed
> one thing that Tesla is taking from Waymo’s approach
> interesting to learn the level of teleoperation Tesla plans to deploy
Basically, this article you linked is reporting only on a job description. The job posting is for an engineer, not a teleoperator! The job posting touts the VR environment that will be used to "drive requirements", not vehicles! What company would hire a highly-skilled and credentialed engineer to be a drone pilot? It is absurd.
The general public may not fully understand this nuance. The entire point of autonomous operation is to remove humans from the decision loop and permit the machine to use its own sensors to make rapid decisions in real-time. As autonomy is refined, remote operators will intervene less and less. And as sensors are refined, humans will have less insight than the AI onboard, due to our inability to directly process those signals.
The author does not know "what level" of teleops Tesla wants to implement. But why even attempt to implement FSD or top-level autonomy, if your operators are doing the driving anyway?
This would never scale. We already discussed the incident where Waymo's disengagements overwhelmed their remote techs, and that was an undesirable edge case. In order to operate a robotaxi fleet, the disengages and takeovers need to be safe, legal, and rare.
> The job posting touts the VR environment that will be used to "drive requirements", not vehicles!
No, it says it will bring the user into the vehicle. They are driving requirements for building this teleoperating system. The role description couldn't be more clear.
> What company would hire a highly-skilled and credentialed engineer to be a drone pilot?
They're hiring an engineer to build the VR driving system, not to operate it.
> But why even attempt to implement FSD or top-level autonomy, if your operators are doing the driving anyway?
Obviously, they would like to remove the need for the teleoperators, just like Waymo would like to remove the need for its driver assistants, but Tesla is nowhere near being able to do that.
> This would never scale.
You know it. I know it. This is just to fool Cathie Wood into believing that robotaxi works for long enough until they can get Optimus working. Then if they're behind on Optimus, they'll presumably have backup teleoperators for those in small deployments (just like they're doing with robotaxi now) until they can get the next big thing working or until SpaceX buys out Tesla.
It's the question of whether these teams are composed of people who could pass a driving test in Waymo's areas of operation. I'd guess they could, but there appears to be no way for external verification of any kind.
I'm skeptical of this. Something as simple as knowing the meaning of curb painting color, turning on red, or knowing when to move past an emergency or other special vehicle requires non-universal knowledge of regulations, sometimes hyper-local. The idea that nothing they do would be affected by country-specific regulations is dubious.
Taking a driver's test, and knowing the meanings of road symbols are two different things. At no time did I imply the workers are completely ignorant of locale-specific driving details- I imagine they receive extensive training on this, but do not take a driver's test per se.
This is analogous to a situation where a passenger in a vehicle, for example, asks the driver to pull over or to drive to a given spot. I believe the passenger does not need to have a driving license to perform this task.
I have no specific knowledge of the law or the tech requirements here, but I don't think that the state of CA makes Google get CA driving licenses for its Philippines service employees.
There may be an is/ought confusion in your exchange.
It is probably true that California has no such law today. It's also true that regulation always takes a while to catch up to technological advances, and so there is a useful, separate conversation to be had about whether California (and anywhere else) ought to have such a law.
To be clear, California's legislators are paying close attention to Waymo, both because it's being deployed in their major cities, and because Alphabet is a California company.
Depending on which legislator you listen to, Waymo is either the devil that is constantly running people and cats down everywhere, or savior that will rapidly replace all human drivers because it's safer. At least for now, they are keeping a fairly light touch on the legislation for self-driving cars, both because they want to see the technology expand without unnecessary regulation, and because they want to know what the baseline fatality rate is compared to humans.
Likely when the image is clearer (personally I expect that self-driving will expand to all major US cities, and also demonstrate that it is safer than humans) they will find some regulation around remote operator qualifications.
For the time being, they have a free "inept foreign CSR whom we don't employ" card to play when something happens; and something always happens, given enough time.
It’s often illegal to make a U-Turn to avoid a police checkpoint for example. There’s no way someone can unstick a confused car without being able to make legally relevant choices.
For example, I said they operate in Texas, while you called out California as your example. Texas doesn't allow sobriety checkpoints, but that's an entirely new state's worth of legal issues right there.
I would assume that unsticking it requires forcing it to do maneuvers that it would otherwise refuse to do (or it would just unstick itself), so you'd need some knowledge of laws to do that
Think it is a much bigger deal than you’re making out, because we don’t have figures on how often the cars need assistance.
We assume it’s just occasionally but we don’t actually know that. They could be requesting assistance constantly and Waymo would have an incentive to keep that hush-hush. Certainly would not be the first time a big SV company has faked it until they technically worked.
We do know it's not all cars constantly, though. The PG&E outage in San Francisco proved it: anytime a Waymo came across an unpowered traffic light, it was configured to ask for assistance. This led to disaster, as there weren't close to enough humans to provide guidance to all the Waymos.
This is not proof of anything, it’s just an explanation that Waymo provided. We have no way of validating any part of that is true.
The way the cars behaved during the power outage could have been the result of anything - even eg. requiring full remote control and losing connectivity to a local facility
At worst, it's the outsourcing of cab drivers to remote roles in cheaper countries. No problem for the investors who are banking on another market disruption, since they're leaving the local society to hold the bag.
That makes sense to me. I really enjoyed the personal anecdotes and I thought they made the article a lot stronger for me, but a dry gas mask review would have also been an excellent, albeit different, article.
But that is exactly what the flag button is there for?! - but this discussion has been had numerous times, and the two sides will never agree.
Safest to flag (or not) as you see fit, because you are a good person rather than an evil one. Then rely on the admins to rescue needlessly ultraflagged articles as appropriate. They are pretty good at doing the right thing.
You're saying the discussion of which chemical respirators to wear to protests has been had numerous times?
I'd say this is a productive topic of conversation for many HN users. There are not "two sides" on this topic, unless we're talking 3M vs MSA. The people flagging or commenting with opposing political views are disrupting conversation, likely because they disagree with how the topic has been framed. This is exactly like PHP fans going into a Python thread and telling everyone Python sucks, disrupting the people who just wanted to discuss getting things done within the framework of Python. They might have some valid points, but they're not germane to civil discussion.
No, I was referring to discussion of the semantics of flagging. Apologies; I thought it was phrased clearly enough, but, perhaps not. (Maybe I should have said "that discussion" rather than "this discussion"? This is my native tongue, so you can't trust me to get this stuff perfectly right.)
Ah, I don't know that there was a problem with your phrasing. Rather it's that the meta-discussion of flagging is so inactionable that the possibility didn't even cross my mind. Mea culpa.
I guess I can see how pre-installing some LLM agents makes it potentially seem "AI-centric", but I don't understand at all how this could be "US-centric".