Hacker News | pornel's comments

People innately admire difficult skills, regardless of their usefulness. Acrobatic skateboarding is impressive, even when it would be faster and safer to go in a straight line or use a different mode of transport.

To me, skill and effort are misplaced and wasted when they're spent on manually checking invariants that a compiler could check better automatically, or on implementing clever workarounds for language warts that no longer provide any value.

Removing busywork and pointless obstacles won't make smart programmers dumb and lazy. It lets smart programmers spend their brainpower on bigger, more ambitious problems.


To me a compiler's effort is misplaced and wasted when it's spent on checking invariants that could be checked by a linter or a sidecar analysis module.

Checking of whole-program invariants can be accurate and done basically for free if the language has suitable semantics.

For example, if a language has non-nullable types, then you get this information locally for free everywhere, even from 3rd-party code. When the language doesn't track it, you need a linter that can do symbolic execution, construct call graphs and data flows, and find every possible assignment, and you still end up with a lot of unknowns and waste your time on false positives and false negatives.
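A minimal Rust sketch of the point (the `find_user` function here is hypothetical): the type signature alone carries the "can this be absent?" information to every caller, with no whole-program analysis required.

```rust
// Hypothetical API: the return type alone tells every caller, locally,
// whether absence is possible.
fn find_user(id: u64) -> Option<String> {
    if id == 1 { Some("alice".to_string()) } else { None }
}

fn main() {
    // The compiler forces the caller to handle `None` right here:
    match find_user(2) {
        Some(name) => println!("found {name}"),
        None => println!("no such user"),
    }
    // A signature returning plain `String` would instead guarantee presence,
    // and callers of third-party code get the same guarantee for free.
}
```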

Linters can't fix language semantics that create dead-ends for static analysis. It's not a matter of trying harder to make a better linter. If a language doesn't have clear-enough aliasing, immutability, ownership, thread-safety, etc. then a lot of analysis falls apart. Recovering required information from arbitrary code may be literally impossible (Rice's theorem), and getting even approximate results quickly ends up requiring whole-program analysis and prohibitively expensive algorithms.

And it's not even an either-or choice. You can have robust checks for fundamental invariants built into the language/compiler, and still use additional linters for detecting less clear-cut issues.


If the compiler is not checking them then it can't assume them, and that reduces the opportunities for optimizations. If the checks don't run on the compiler then they're not running every time; if you do want them to run every time then they may as well live in the compiler instead.
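One concrete illustration of checks enabling optimization, sketched in Rust: because the compiler both enforces and can assume that references are never null, it reuses the null bit pattern to store `Option`'s discriminant (the "niche" optimization), something no external linter could grant it.

```rust
use std::mem::size_of;

fn main() {
    // References are statically guaranteed non-null, so `None` can be
    // represented by the otherwise-impossible null bit pattern:
    assert_eq!(size_of::<Option<&u8>>(), size_of::<&u8>());

    // For a type where every bit pattern is valid, the compiler cannot
    // assume anything and must add a separate discriminant:
    assert!(size_of::<Option<u64>>() > size_of::<u64>());
}
```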

These types of comments always remind me how quickly we forget where we come from in terms of computation.

It's important to remember Rust's borrow checker was computationally infeasible 15 years ago. C & C++ are much older than that, and they come from an era where variable name length affected compilation time.

It's easy to publicly shame people who do hard things for a long time in the light of newer tools. However, many people who like these languages have been using them since before the languages we champion today were mere ideas.

I personally like Go these days for its stupid simplicity, but when I'm going to do something serious, I'll always use C++. You can fight me, but you'll never pry C++ from my cold, dead hands.

For the record, I don't like C & C++ because they are hard. I like them because they provide a more transparent window into the processor, which is a glorified, hardware implemented PDP-11 emulator.

Last, we shall not forget that all processors are C VMs, anyway.


> It's important to remember Rust's borrow checker was computationally infeasible 15 years ago.

The core of the borrow checker was being formulated in 2012[1], which is 13 years ago. No infeasibility then. And it's based on ideas that are much older, going back to the 90s.

Plus, you are vastly overestimating the expense of borrow checking: it is very fast, and not the reason Rust's compile times are slow. You absolutely could have done borrow checking much earlier, even with less computing power available.

1: https://smallcultfollowing.com/babysteps/blog/2012/11/18/ima...


> It's important to remember Rust's borrow checker was computationally infeasible 15 years ago.

IIRC borrow checking usually doesn't consume that much compilation time for most crates - maybe a few percent or thereabouts. Monomorphization can be significantly more expensive and that's been much more widely used for much longer.
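For illustration, a small Rust sketch of the trade-off being referenced: a generic function is compiled once per concrete type (monomorphization), which is where much of the codegen cost comes from, while a `dyn` version compiles to a single body with dynamic dispatch.

```rust
use std::fmt::Debug;

// Monomorphized: a separate copy is generated for every T it's used with,
// adding compile-time work per instantiation (but enabling inlining).
fn describe_mono<T: Debug>(x: T) -> String {
    format!("{x:?}")
}

// Dynamically dispatched: one compiled body, called through a vtable.
fn describe_dyn(x: &dyn Debug) -> String {
    format!("{x:?}")
}

fn main() {
    assert_eq!(describe_mono(42), "42");
    assert_eq!(describe_mono("hi"), "\"hi\"");
    assert_eq!(describe_dyn(&42), "42");
}
```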


> It's important to remember Rust's borrow checker was computationally infeasible 15 years ago. C & C++ are much older than that, and they come from an era where variable name length affected compilation time.

I think you're setting the bar a little too high. Rust's borrow-checking semantics draw on much earlier research (for example, Cyclone had a form of region-checking in 2006); and Turbo Pascal was churning through 127-character identifiers on 8088s in 1983, one year before C++ stream I/O was designed.

EDIT: changed Cyclone's "2002" to "2006".


I remember; I was there coding in the 1980s, which is how I know C and C++ were not the only alternatives, rather the ones that eventually won in the end.

> the processor, which is a glorified, hardware implemented PDP-11 emulator.

This specific claim seems like just gratuitously rewriting history.

I can get how you'd feel C (and certain dialects of C++) are "closer to the metal" in a certain sense: C supports very few abstractions, and with fewer abstractions there are fewer "things" between you and "the metal". But this is as far as it goes. C does not represent - by any stretch of imagination - an accurate model of the computation or memory of a modern CPU. It does stay close to the PDP-11, but calling modern CPUs "glorified hardware emulators of the PDP-11" is just preposterous.

The PDP-11 was an in-order CISC processor with no virtual memory, cache hierarchy, branch prediction, symmetric multiprocessing, or SIMD instructions. Some modern CPUs (namely the x86/x64 family) do emulate a CISC ISA on a core that is internally more RISC-like, but that's as far as we can say they are trying to behave like a PDP-11 (even though the intention was to behave like a first-gen Intel Pentium).


> we shall not forget that all processors are C VMs

This idea is some 10yrs behind. And no, thinking that C is "closer to the processor" today is incorrect

It makes you think it is close, which in some sense is even worse.


> This idea is some 10yrs behind.

Akshually[1] ...

> And no, thinking that C is "closer to the processor" today is incorrect

THIS thinking is about 5 years out of date.

Sure, this thinking you exhibit gained prominence and got endlessly repeated by every critic of C who once spent a summer doing a C project in undergrad, but it's been more than 5 years since this opinion was essentially nullified by

    Okay, if C is "not close to the processor", what's closer?
Assembler? After all, if everything else is "just as close as C, but not closer", then just what kind of spectrum are you measuring on, that has a lower bound which none of the data gets close to?

You're repeating something that was fashionable years ago.

===========

[1] There's always one. Today, I am that one :-)


Standard C doesn't have inline assembly, even though many compilers provide it as an extension. Other languages do.

> After all if everything else is "Just as close as C, but not closer", then just what kind of spectrum are you measuring on

The claim about C being "close to the machine" means different things to different people. Some people literally believe that C maps directly to the machine, when it does not. This is just a factual inaccuracy. For the people that believe that there's a spectrum, it's often implied that C is uniquely close to the machine in ways that other languages are not. The pushback here is that C is not uniquely so. "just as close, but not closer" is about that uniqueness statement, and it doesn't mean that the spectrum isn't there.


> Some people literally believe that C maps directly to the machine, when it does not.

Maybe they did, 5 years (or more) ago when that essay came out. It was wrong even then, but repeating it is even more wrong.

> This is just a factual inaccuracy.

No. It's what we call A Strawman Argument, because no one in this thread claimed that C was uniquely close to the hardware.

Jumping in to destroy an argument no one is making is almost a textbook example of strawmanning.


Claiming that a processor is a "C VM" implies that it's specifically about C.

Lots of languages at a higher level than C are closer to the processor in that they have interfaces for more instructions that C hasn't standardized yet.
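As one small illustration (a Rust sketch, using only standard integer methods), operations that ISO C long had no portable spelling for are ordinary, portable library calls in other languages, and typically compile down to single machine instructions:

```rust
fn main() {
    let x: u64 = 0b1011_0001;
    // count_ones / leading_zeros are plain std methods that usually lower
    // to single instructions (e.g. POPCNT, LZCNT on x86):
    assert_eq!(x.count_ones(), 4);
    assert_eq!(x.leading_zeros(), 56);
}
```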

> Lots of languages at a higher level than C are closer to the processor in that they have interfaces for more instructions that C hasn't standardized yet.

Well, you're talking about languages that don't have standards; they have a reference implementation.

IOW, no language has standards for processor intrinsics; they all have implementations that support intrinsics.


> Okay, if C is "not close to the processor", what's closer?

LLVM IR is closer. Still higher level than Assembly

The problem is thus:

    char a, b, c;
    c = a + b;  /* note: plain char signedness is implementation-defined,
                   e.g. signed on common x86 ABIs, unsigned on AAPCS ARM */

This could not be more different between x86 and ARM.


> LLVM IR is closer. Still higher level than Assembly

So your reasoning for repeating the once-fashionable statement is because "an intermediate representation that no human codes in is closer than the source code"?


It seems likely that C++ will end up in a similar place as COBOL or Fortran, but I don't see that as a good future for a language.

These languages are not among the top contenders for new projects. They're a legacy problem, and are kept alive only by a slowly shrinking number of projects. It may take a while to literally drop to zero, but it's a path of exponential decay towards extinction.

C++ has strong arguments for sticking around as a legacy language for several too-big-to-rewrite C++ projects, but it's becoming less and less attractive for starting new projects.

C++ needs a better selling point than being a language that some old projects are stuck with. Without growth from new projects, it's only a matter of time until it's going to be eclipsed by other languages and relegated to shrinking niches.


It will take generations to fully bootstrap compiler toolchains, language runtimes, and operating systems that depend on either C or C++.

Also depending on how AI assisted tooling evolves, I think it is not only C and C++ that will become a niche.

I already see this happening with the amount of low-code/no-code augmented with AI workflows, that are currently trending on SaaS products.


Apple got spooked by GPL v3 anti-tivoization clauses and stopped updating GNU tools in 2007.

macOS still has a bunch of GNU tools, but they appear to be incompatible with GNU tools used everywhere else, because they're so outdated.


And Apple is doing a lot of Tivoization these days. They're not yet actually stopping apps that they haven't "notarized" but they're not making it easier. One of the many reasons I left the Mac platform, both private and at work. The other reason was more and more reliance on the iCloud platform for new features (many of its services don't work on other OSes like Windows and Linux - I use all those too)

The problem with the old tools is that I don't have admin rights at work so it's not easy to install coreutils. Or even homebrew.

I can understand why they did it though. Too many tools these days advocate just piping some curl into a root shell which is pretty insane. Homebrew does this too.


Couldn't you simply use macOS without the iCloud features? Which features require iCloud to work?

You can but there's just not much point anymore.

I don't remember all the specifics but every time there was a new macos I could cross most of the new features off. Nope this one requires iCloud or an apple ID. Nope this one only works with other macs or iPhones. Stuff like that. The Mac didn't use to be a walled garden. You can still go outside of their ecosystem (unlike on iOS) but then there's not much point. You're putting a square peg in a round hole.

Now, Apple isn't the only one doing this. Microsoft is making it ever harder to use windows without a Microsoft account. That's why I'm gravitating more and more to foss OSes. But there are new problems now, like with Firefox on Linux I constantly get captcha'd. M365 (work) blocks random features or keeps signing me out. My bank complains my system is not 'trusted'. Euh what about trusting your actual customers instead of a mega corp? I don't want my data locked in or monitored by a commercial party.


The rexif crate supports editing, so you can apply rotation when resizing, and then remove the rotation tag from the EXIF data. Keeping EXIF isn't necessary for small thumbnails, but could be desirable for larger versions of the image.

Rust has a combo: people come for safety, stay for usability.

Languages struggle to win on usability alone, because outside of passion projects it's really hard to justify a rewrite of working software to get the same product, only with neater code.

But if the software also has a significant risk of being exploited, or is chronically unstable, or it's slow and making it multi-core risks making it unstable, then Rust has a stronger selling point. Management won't sign off a rewrite because sum types are so cool, but may sign off an investment into making their product faster and safer.


Generally speaking I was more prone to agreeing with Rust-haters, and thought the whole idea of how Rust lifetimes are implemented is flawed, and the borrow checker is needlessly restrictive. I also disagree with some other ideas like the overreliance on generics to do static dispatch leading to large executables and slow compiles.

To be clear, I still think these criticisms are valid. However, after using the language in production, I've come to realize these problems are manageable in practice. The language is nice, decently well supported, and has a relatively rich ecosystem.

Every programming language/ecosystem is flawed in some way, and I think as an experienced dev you learn to deal with this.

Having an actual functioning npm-like package manager and build system makes multiplatform software trivial. The lack of that is what kills my desire to deal with C++ on a voluntary basis.

The ecosystem is full of people who try to do their best and produce efficient code, and try to understand the underlying problem and the machine. It feels like it still has a culture of technical excellence, while most libraries seem to be also well organized and documented.

This is in contrast to JS people, who often try to throw together something as fast as possible, and then market the shit out of it to win internet points, or Java/C# people who overcomplicate and obfuscate code by sticking to these weird OOP design pattern principles where every solution needs to be smeared across 5 classes and design patterns.


Unicode wanted the ability to losslessly round-trip every other encoding, in order to be easy to adopt partially in a world where other encodings were still in use. It merged a bunch of different incomplete encodings that used competing approaches. That's why there are multiple ways of encoding the same characters, and there's no overall consistency to it. It's hard to say whether that was a mistake. This level of interoperability may have been necessary for Unicode to actually win, and not be another episode of https://xkcd.com/927
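The "multiple ways of encoding the same characters" is easy to demonstrate with nothing but string literals, as in this Rust sketch: precomposed U+00E9 and the decomposed pair U+0065 U+0301 both render as "é", yet compare unequal without normalization.

```rust
fn main() {
    let precomposed = "\u{00E9}";   // é as one codepoint (Latin-1 round-trip)
    let decomposed  = "e\u{0301}";  // e + combining acute accent
    assert_ne!(precomposed, decomposed);         // bytewise different
    assert_eq!(precomposed.chars().count(), 1);
    assert_eq!(decomposed.chars().count(), 2);
    // Unicode normalization (NFC/NFD) exists precisely to bridge the two forms.
}
```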

Why did Unicode want codepointwise round-tripping? One codepoint in a legacy encoding becoming two in Unicode doesn't seem like it should have been a problem. In other words, why include precomposed characters in Unicode?

It's quite the opposite. The number of high-quality maintained libraries is growing.

Rust keeps growing exponentially, but by Sturgeon's law for every one surviving library you're always going to have 9 crap projects that aren't going to make it. Unfortunately, crates.io sorts by keyword relevance, not by quality or freshness of the library, so whatever you search for, you're going to see 90% of crap.

There was always a chunk of libraries destined for the dustbin, but it wasn't obvious in the early days when all Rust libraries were new. Now Rust has survived long enough to outlive waves of early-adopter libraries, and to grow a pile of obviously dead ones. The ecosystem is so large that the old part is large too.

https://lib.rs/stats#crate-time-in-dev

Rust is now mainstream, so it's not a dozen libraries made by dedicated early adopters any more. People learning Rust publish their hello-world toys, class assignments, and the first voxel renderer they call a game engine. Startups drop dozens of libraries for their "ecosystem". Rust also lived through the peak of the cryptocurrency hype, so all these moon-going coins, smart contract VMs and NFT exchanges now have a graveyard on crates.io.

When you run into dead libraries in Python or Java you don't think these languages are dying, it's just the particular libraries that didn't make it. JavaScript has fads and mass extinctions, and keeps going strong. Rust is old enough that it too has dead libraries, and big enough that it has both a large collection of decade-old libraries, as well as fads and fashions that come and go.


Google Pixel Buds have a translation feature, and a bunch of other "Gemini AI" gimmicks, available in the EU.

Apple managed to get approvals for medical devices and studies (highly regulated everywhere), custom radios and satellite communication (highly regulated everywhere).

Apple already has machine translation, voice recognition, voice recording, and dictation features shipped in the EU.

But when EU hurt Apple's ego by daring to demand to give users freedom to run software they want on devices they bought (that could break them out of a very lucrative duopoly), Apple suddenly is a helpless baby who cannot find a way to make a new UI available in the EU.


The EU has not declared that Android gatekeeps headphone technology, so the comparison to Pixel Buds is totally irrelevant. There is no interop requirement placed on them.

So all Apple needs to do is stop gatekeeping headphone technology.

What is the definition of gatekeeping technology?

In Apple’s case it mostly means that the APIs Apple uses for the AirPods should be available for others to use as well. Apple is not allowed to deny or punish headphones from other manufacturers that want to use those APIs.

This is not quite the problem.

There are multiple issues at play.

The two main issues are:

1 - Sometimes processing is done in private cloud servers for complex translations. Apple is not allowed to do that for EU users. Full Stop. Even if it were not prohibited by EU law, you still have issue 2.

2 - It's unclear whether or not Apple can charge. If another dev uses the APIs, and it triggers a call to the cloud, who pays for the inference? Until Apple gains clarity on that, charging could be considered "punish"-ing a dev for using their APIs.

My own opinion? Issue 2 will get worked out, but it won't matter, because I don't think the EU will move at all on issue 1. I think they see data privacy as serving a twofold purpose: one, protecting their citizens from US surveillance (i.e. national security); and two, as part of their long-term strategy to decrease the influence of US tech firms in the EU. Both of which, I think, European policymakers and common people feel are critical to Europe.


On-device API use is what is relevant here; services such as servers and inference services are out of scope. The DMA clearly allows companies to charge for service use, but they cannot deny API use to any competitor who wants, for example, to use the quick-pairing feature or low-latency communication.

They can design the API in such a way that you can provide your own inference solution, or just disable the cloud inference. This is purely a business decision.

It’s some combination of a market companies in the EU care about where Apple sells a product that has some amount of market share, where the threshold and market definition are totally made up and seem to only impact foreign multinationals.

If it's any consolation, Apple is in a league of their own. Any fair, proportional legislation would impact them more than anyone else.

So Apple is welcome to divest AirPods into a separate company and problem solved. Who knows, "AirPods Inc" may discover there are a great many phone brands out there that could use a nice integration and extra features. Win for consumers.

I agree, the Beats takeover should have never happened. The US is basically allowing everything to be swallowed by big-tech.

> the Beats takeover should have never happened.

I agree from a business perspective, those headphones were all brand and a bad fit for apple from a quality perspective. Do we really need regulators deciding when businesses are wasting their money?


The integration works so well because airpods and apple phones use a protocol that isn't bluetooth, their "Magic protocol". You have to own the whole stack to make it work so well.

I wish this would happen. I loved my AirPods Pro 2, but the Android/Windows support was so abysmal that I eventually ditched them.

That's literally the Boeing model

So, the problem is indeed the EU.

On the contrary, the EU is the solution.

Or you know Linux/Windows PCs that would definitely be a big win.

So many other features, such as iPhone mirroring also don't work. Quite ridiculous.

Vote with your wallet?

Easier said than done, but this relatively small annoyance is not a dealbreaker to change the phone. I had it before this feature was announced :D

Half of the new Google Pixel AI features are not enabled in EU. Magic cue, text image editing... These are on-device features too, so really not sure why

I'm a disappointed Pixel 10 owner living in Germany


Instead of these conspiracy theories, the more likely answer is that it takes time to get through these additional regulations, and they didn't want that to hold back their US rollout. Its a pattern that we've seen plenty of times already in the tech industry.

So Samsung could nail the regulation part with their earbuds, but Apple with Airpods can't.

Samsung isn’t a “gatekeeper” under the DMA. The regulations in question here don’t apply to them.

Most probably as you say they can't ship the capability yet, so they're blaming the regulations.

Or really the headphones actively register and send data outside of the EU. There's been some pushback recently on this front (ie. recent MSFT case [1]) since it's a known fact in the field that the approved 2023 EU-US DPF is basically BS, as it doesn't really address the core issues for which US companies were deamed not-compatible with GDPR.

[1] https://www.senat.fr/compte-rendu-commissions/20250609/ce_co...


In my quest to check if deamed was the correct spelling, I stumbled upon an interesting read https://reginajeffers.blog/2024/03/04/damned-or-deemed-or-de...

Indeed. But in my case it’s quite easy as it was a typo. Deemed is the correct one

EU isn't forcing Google to let random 3rd parties replace Gemini AI with TotallyHonestAndNotStealingYourData Corp's AI.

You already can replace the default assistant app with any app that declares itself as an assistant on Android, and have always been able to.

Google is a designated gatekeeper carrying all the same DMA obligations for:

Google Search, YouTube, Chrome, Shopping, Maps, Ads, the Play Store, and Android.

https://digital-markets-act.ec.europa.eu/gatekeepers_en


You can replace Gemini with Perplexity or whatever you want on your Android phone, that includes the main system-wide assistant.

It's almost like you can only shakedown your victims so many times before they say no mas.

Did you just describe Apple as a victim?

GDPR is about collection and processing of personally identifiable information. These are specific legal terms that depend on the context in which the data is collected and used, not just broadly any data anywhere that might have something to do with a person.

GDPR is aimed at companies building user databases, preventing them from completely ignoring security, accuracy, and user complaints, or from selling anything to anybody while lying about it. It doesn't limit individual people's personal use of data.


GDPR doesn’t mention “personally identifiable information” once; it’s concerned with personal data, which is “any information relating to an identified or identifiable natural person (‘data subject’)”.

The rest is correct: the restrictions are aimed at organisations, not individuals.

[1] https://eur-lex.europa.eu/eli/reg/2016/679/oj/eng#art_4.tit_...


The restrictions are not aimed at organisations, but to protect individuals.

https://www.gov.uk/government/publications/domestic-cctv-usi...

"If your CCTV system captures images of people outside the boundary of your private domestic property – for example, from neighbours’ homes or gardens, shared spaces, or from public areas – then the GDPR and the DPA will apply to you. You will need to ensure your use of CCTV complies with these laws. If you do not comply with your data protection obligations you may be subject to appropriate regulatory action by the ICO, as well as potential legal action by affected individuals."

You, as an individual, have data protection obligations, if your ring doorbell captures audio/video about someone outside your property boundaries. The apple translation service seems analogous.


The ICO is pretty zealous though in this regard. To quote recital 18:[1]

This Regulation does not apply to the processing of personal data by a natural person in the course of a purely personal or household activity and thus with no connection to a professional or commercial activity.

[1] https://eur-lex.europa.eu/eli/reg/2016/679/oj/eng#rct_18


It's likely taken the view that "purely personal or household activity" only covers the recording of audio/video in a domestic setting.

GDPR does cover individuals' use of e.g. Ring doorbells insofar as they record video and audio outside of your own property. This would seem to be analogous.

GDPR is aimed at protecting _individuals'_ personal information, irrespective of what or who is collecting or processing it.


It applies to Ring and not other doorbell cameras because Amazon is collecting and selling access to Ring video feeds.

Apple's response to EU's attempt to open up App Store has been full of pettiness, tantrums, and malicious compliance.

Apple is most likely withholding features in EU as a bargaining chip in antitrust negotiations, and to discredit EU's consumer protections. Pretending things in Europe are randomly unknowably illegal for no reason supports Apple's narrative and popular opinion in the US.


Apple is using the conservative approach, which is to misrepresent their starting position by moving the goal posts to an extreme. Then they bargain towards the "middle". It creates the illusion of bargaining.

So Apple is throwing a huge tantrum and withholding features from the EU to act like this is a much bigger deal than it is. This gives Apple a lot more bargaining room after the EU bitch slapped them.

Apple likely already has an API they could enable and be done with this. They won't do that. Apple needs exclusivity with new feature releases because they don't do things all that well anymore(Siri, maps, etc, nobody uses those because there are better alts available on ios).

But yeah Apple is just starting way to the extreme so they have more room to bargain. Hopefully the EU sees through this, again, and doesn't budge.


Essentially irrelevant to your main point, but:

>Apple needs exclusivity with new feature releases because they don't do things all that well anymore(Siri, maps, etc, nobody uses those because there are better alts available on ios).

Siri was okay for a very brief window after release and then dreadful ever since and Apple Maps was never good, but has gotten better. Etc maybe more valid idk


I keep seeing accusations of tantrums. Can a company say no without it being a tantrum?

Is there any evidence for this at all? The EU has plenty of regulation surrounding audio recording, as other comments have said. Instead of jumping to the assumption of malicious intent, I think those make more sense up front. I don't think this is a real bargaining chip for Apple to use against the EU for the side loading stuff.

I dislike Apple's malicious compliance with the EU too, but it seems unrelated here, at least without any proof.


Google Pixel Buds and Samsung Galaxy Buds basically provide the same feature of realtime translation. Either Apple is withholding the feature without any real cause, or the cause lies in some aspect where Apple doesn’t allow third-party manufacturers to provide the same feature under iOS, while Android does. I don’t know which is the case, but both put Apple in a bad light, along with the fact that they don’t explain the exact reason for the limitation.

It would not remotely surprise me to discover that either Google or Samsung were doing something untoward that Apple is not willing to do. In fact, that would be one of the least surprising things I'd ever heard.

In this case, it's apple doing the untoward thing, by artificially limiting users' devices, seemingly only for anticompetitive reasons.

As this is HackerNews, you should expect to see at least a couple commenters who believe they should have control over devices they own, including interoperability without artificial, anticompetitive limitations.


> In this case, it's apple doing the untoward thing, by artificially limiting users' devices, seemingly only for anticompetitive reasons.

Not really. They are complying by not offering features that would be considered anti-competitive. It’s not untoward, it’s just following their interpretation of the law. We obviously don’t know the discussions between Apple and the EC, but in public it’s American nerds who are complaining that the EU is bad.


The iOS feature is not anti-competitive, it is apple's choice to artificially restrict the feature if you use non-apple earbuds which is anti-competitive.

It is my understanding that this is what apple has chosen to do in areas where this iOS feature is available. Is that not the case?


If the “untoward” thing was unlawful, it would be straightforward for Apple to take Google and Samsung to court for anticompetitive practices. If it isn’t, then Apple can’t really blame the EU, and could at least advertise how they’re doing things less untowardly.

This isn’t the first time that Apple has been withholding features from the EU without ever providing a clear and understandable explanation, so there isn’t much basis for giving them the benefit of the doubt.


Do no US states have similar laws regarding recording strangers?

They do, but most states only require one party to consent.

https://en.wikipedia.org/wiki/Telephone_call_recording_laws


Notably California, home to apple, is a two party state.

The EU said that everything that Apple creates for its own devices has to have APIs for third parties. The translation feature only works for AirPods.

Ok so it's not "airpods live translation" really, but "ios live translation" and there's no technical reason to limit it to airpods?

The audio input comes from the AirPods not the iPhone. It’s processed on the iPhone.

The audio is captured by the outward-facing microphones used for active noise cancellation. That’s why it only works with AirPods Pro 2, 3 and AirPods 4 with ANC. That wouldn’t just work with any headphones.

Even the AirPods Pro 2 will need a firmware update; they won’t work with just any old headphones. That even the AirPods Pro 2 need a firmware update tells me it is something they are doing with the H2 chip in the headphones, in concert with the iPhone.


I mean, technically, any competitors with noise cancelling headphones able to pick up a voice stream would be able to use the same processing on the iPhone to offer an equivalent feature.

That it only works with AirPods is just Apple discriminating in favour of their own product which is exactly what the EU was going after.


Sure, if they also want to train a model that supports their sound profile, build an app that captures the audio, etc.

But their $60 ANC headphones with cheap audio processing hardware in the headphones aren’t going to be sufficient.

They may even be able to use the exposed models on the phone.


> But their $60 ANC headphones with cheap audio processing hardware in the headphones aren’t going to be sufficient.

The equivalent feature on Android tells me it would. I mean it already does technically.

Are we supposed to treat Apple being late to the party, as usual, as some kind of exceptional thing only they could do?


According to the specs, it only works with Google’s own headphones:

https://support.google.com/googlepixelbuds/answer/7573100?hl...

Which are the same price as Apple’s AirPods with ANC.

So Google also didn’t try to support the feature with generic earbuds.


The contrary is literally written in a large yellow box on the page you linked: “Note: Google Translate works with all Assistant-optimized headphones and Android phones.”

But I mean, you are free to buy overpriced Apple headphones which sound worse than Sony’s, only properly work when paired with an Apple phone or laptop, and whose killer feature was available on their competitors’ buds years ago, if that floats your boat.


I have both a pair of the over-ear Sony XM4s and AirPods Pro 2, and I’m not sure I’d characterize the Sonys’ sound as “better”, even when using lossless audio. They sound good, but the sound profile is mostly just different, with the Sonys leaning more bassy and the AirPods more balanced.

The noise cancellation is neck and neck, but the AirPods have much less of that “pressure” sensation when in use. AirPods transparency is just plain better. Comfort for long sessions is better on the Sonys. Mic is better on the AirPods.


There's no EQ in the Sony iPhone app?

There is, but I haven’t had the patience to tweak that. My phone also isn’t the device that I usually use those headphones with.

You didn’t look at the prices of other Google Assistant-compatible headphones, did you?

And those Sony ones aren’t cheap.

The first review I found comparing them:

https://wasteofserver.com/sony-wf-1000xm4-vs-apple/

Why would I want to buy a non-Apple laptop with horrible battery life that’s loud and produces enough heat to ensure that I don’t have offspring if I actually put it on my lap?


Over the course of this thread your argument went from "It's not technically possible" and "they will have to train their own models" to "I don't want to buy certain devices".

No, I said it wasn’t technically possible on just any cheap headphones because, while the processing was done on the phone, the audio capture was done by the outward microphones on Apple headphones with ANC, and even the older AirPods required Apple to update their firmware to work in concert with the phone.

This is no different than Google not supporting just any old headphones.

Then came the argument that Apple’s AirPods are “overpriced”, even though the cheapest AirPods that support the feature, the AirPods 4 with ANC, are in the same price range as Google’s and cheaper than the worse-sounding Sony earbuds.


I prefer the Apple ecosystem myself but the Sony WF-1000XM are frequently available on sale (refurb WF-1000XM5 are $110 right now). I used to have the WH-1000XM3 (over the ear) and those are good too.

The whole argument seems kind of silly. Just buy the platform you want that has the features you want. If the European thinks Apple is overpriced then it's no harm that they aren't bringing features to Europe. He wasn't going to buy them and now is going to not buy them even harder.


As a reminder, the initial argument was that Apple doesn’t bring their feature to Europe because they would have to open it via an API to their competitors. Someone replied that it’s not a refusal but a technical impossibility which is easily countered by Google having done just that for years. The fact that it’s heavily downvoted despite being factually completely correct is actually hilarious to me.

The rest, which is to say that everything Apple sells besides laptops is subpar, that their strategy regarding European regulations deprives them of any credibility when they pretend to care about consumers, and that their price conversion in Europe is daylight robbery, is just my opinion and incidental to the discussion. I just couldn’t help myself.


No one said it’s a “technical impossibility”. The original statement was that it wouldn’t work on just any cheap headphones. It’s assumed that you thought the iPhone was capturing the audio. Even then, there was some work done between the headphones and the phone, and the firmware of the AirPods Pro 2 had to be updated.

You aren’t going to save any money by getting a pair of $50 ANC headphones and hoping they work with the system - the Android variant doesn’t.


> It’s assumed that you thought the iPhone was capturing the audio.

Absolutely not. The assumption was that the AirPods Pro 2’s unique processing was required, which it clearly isn’t.

Nobody ever talked about saving money.

The whole discussion is about the EU mandating Apple play fair which would mean letting competitors access their phone processing exactly like Google is already doing.


> Nobody ever talked about saving money.

You didn’t say this?

> But I mean, you are free to buy overpriced Apple headphones

> which sounds worse than Sony,

And the Sony headphones sound worse and are more expensive.

> only properly works paired with an Apple phone or laptop

Which also isn’t true.


The fact that I rightfully call Apple products overpriced doesn’t magically make the discussion about saving money.

Sony headphones sound noticeably better than the AirPods Pro 2, by the way, and their EQ is better. AirPods have great noise cancellation, but their sound quality is not that great.

> > only properly works paired with an Apple phone or laptop

> Which also isn’t true.

Care to explain how I set what the presses do on AirPods without an Apple product? How do I disable noise cancellation and pass-through? Where do I set the level of noise cancellation?

Yes, exactly.


So noise-cancelling headphones that are worse at noise cancelling are better?

And the headphones are “overpriced” even though they are the same price as comparable devices that have worse ANC?

And a simple Google search tells you how to pair AirPods to non-Apple devices:

https://support.apple.com/en-sg/guide/airpods/dev499c9718b/w...


> But their $60 ANC headphones with cheap audio processing hardware in the headphones aren’t going to be sufficient.

Maybe, maybe not. Assuming Apple's motivation isn't pure self-dealing, it's very consistent with Apple's behavior to forbid or impede doing things that are absolutely possible but sometimes result in a sub-par experience.


How many $60 headphones work with Google’s version?

It's oddly difficult to find solid answers to this with a web search, but it appears that it just needs protocol support, not a mic that meets specific standards. The (discontinued?) JBL 110GA is $40 on Amazon.

Which I’m not able to find on Amazon…


7-year-old used headphones, with one in stock, using Micro USB in 2025?

All of them.

Not according to the official specs….

Or other services, such as translation using Google Translate.

Also, the feature doesn't work on Android, so it is not an 'AirPods' feature but an 'iOS + AirPods' feature.

> Europe are randomly unknowably illegal for no reason

I mean, they absolutely are, especially as EU regulators categorically refuse to review anything in advance, just in case they get a budget shortfall and need to go looking for fines.

