I haven't trudged through Chromium's commit statistics, but has Microsoft been upstreaming many contributions? I'm skeptical that they're ready to take on the full brunt of Chromium maintenance on a whim; it would take a decent while to build up the teams and expertise for it.
Before they swapped Edge over to use Chromium they were capable of maintaining their own engine just fine. Probably not overnight, but in the past they have shown that they have the budget to support a browser engine if they want to.
Because no amount of money was going to solve the problem of people saying they think Microsoft's browser is slower/worse/etc. Switching to Chromium negated that in a way nothing else could.
When Microsoft beat Netscape with IE, it was by building a far better browser. Google is a stronger competitor than Netscape ever was though. Without Google dropping the ball (like Netscape), Microsoft would never exceed Chrome's performance by enough to be the fastest, most compatible (with Chrome), etc.
It is also just classic Microsoft when they are hungry. Like making Word read WordPerfect files and support its keyboard shortcuts. Only today it's that their browser is mostly Google's, Linux is built into Windows 11, SQL Server ships on Linux, and their most popular IDE is open source, built on open tech (Electron) they didn't create.
When they get threatened, nothing is too sacred for Microsoft to kill or adopt.
I feel like they burnt enough browser goodwill with IE that no one who was on the internet back then wants to touch a Microsoft browser, regardless of the engine.
They are on the record about why they switched to a Chromium-based browser. It's been a while, but if I'm remembering correctly, at the time Google was making changes to YouTube that made it actively slower and use more power in the old (pre-Chromium) Edge. Microsoft realized that while they could compete as a browser, they couldn't compete while also fighting Google doing underhanded things to sabotage their browser.
Because they could achieve the same product using Chromium at less cost. Should that change, their investment in that area would probably increase as a consequence.
No, because using Chromium was the only way they could stay relevant in the browser space. They were just unable to build the same product with their own stack.
They were facing the same problem that everybody is: Google adds features too fast to keep up. If Google went in a bad direction with Chrome, Microsoft would just have to keep up with Mozilla and Apple.
Microsoft lands many changes in Chromium first before they show up in Edge (logistically it's easier to do things this way for merging reasons), but they do also upstream changes to Chromium that show up in Edge first.
Glad this feature is built into most modern operating systems these days.
For macOS (Sequoia+) you can just forget the network and reconnect to get a new MAC address [1].
Android's documentation on when it decides to generate a new address per connection is a little vague [2], but I'm guessing forgetting and reconnecting works as well; you may also need to flip the "Wi-Fi non-persistent MAC randomization" bit in developer settings.
On Windows, flipping the "Random hardware address" switch seems to cause it to generate a new seed/address for me.
Yeah, I had to flip the developer setting toggle, but it worked flawlessly for my flight (American Airlines has a watch-an-ad-for-20-minutes-of-free-internet offer that only works once per MAC).
Are you saying that on iOS 18, if you enable developer mode, then each time you forget the network it gets a new MAC? But without developer mode it does not get a new MAC each time you forget it? The Apple docs linked elsewhere in this thread suggest it only gets a new MAC once per 24 hours when you forget the network normally. I'm going on a long boat trip in the next week where this trick might work for me if so!
I have a generic Android phone from many years ago where the manufacturer didn't even bother to program the WiFi NVRAM, so every time you load and unload the driver, you get a new randomly generated MAC address. Interesting that that has become a feature these days.
> it includes instructions for stack manipulation, binary operations
Your example contains some integer arithmetic; I'm curious whether you've implemented any other Python data types like floats/strings/tuples yet. If you have, how does your ISA handle binary operations on two different types like `1 + 1.0`? Is there some sort of dispatch table based on the types on the stack?
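Naively I'd picture something like a handler table keyed on the operand type pair. This is a purely hypothetical sketch; none of these names are from your project:

```python
# Hypothetical sketch of type-pair dispatch for a stack-machine BINARY_ADD.
def add_int_int(a, b): return a + b
def add_int_float(a, b): return float(a) + b
def add_float_float(a, b): return a + b

BINARY_ADD_DISPATCH = {
    (int, int): add_int_int,
    (int, float): add_int_float,
    (float, int): lambda a, b: add_int_float(b, a),
    (float, float): add_float_float,
}

def binary_add(stack):
    # Operands are popped from the evaluation stack, right operand on top.
    rhs = stack.pop()
    lhs = stack.pop()
    handler = BINARY_ADD_DISPATCH[(type(lhs), type(rhs))]
    stack.append(handler(lhs, rhs))

stack = [1, 1.0]
binary_add(stack)
print(stack)  # [2.0]
```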
> I'd prefer to move forward based on clear use cases
Taking the concrete example of the `struct` module as a use-case, I'm curious if you have a plan for it and similar modules. The tricky part of course is that it is implemented in C.
Would you have to rewrite those stdlib modules in pure Python?
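Just to make the question concrete: a pure-Python stand-in for a trivial case is easy enough (sketch below, only handling little-endian unsigned 32-bit), but the full format-string machinery and the C-level performance are a different story:

```python
import struct

# Rough pure-Python stand-in for struct.pack("<I", n) / struct.unpack("<I", data),
# just to illustrate what a rewrite of the simplest case would entail.
def pack_u32_le(n: int) -> bytes:
    return n.to_bytes(4, "little")

def unpack_u32_le(data: bytes) -> int:
    return int.from_bytes(data[:4], "little")

assert pack_u32_le(1234) == struct.pack("<I", 1234)
assert unpack_u32_le(struct.pack("<I", 1234)) == 1234
```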
What you're proposing is reminiscent of Keybase's account verification system. You make a post or equivalent on each platform with cryptographic proof that it's you. (e.g here's mine for GitHub https://gist.github.com/ammaraskar/0f2714c46f796734efff7b2dd...).
This depends on the Python version, but if it has the specializing interpreter changes, the `COMPARE_OP` comparing the integers there is probably hitting a specialized `_COMPARE_OP_INT` [1].
This specialization has a ternary that does
`res = (sign_ish & oparg) ? PyStackRef_True : PyStackRef_False;`.
This might be the branch that ends up getting predicted correctly?
Older versions of Python go through a bunch of dynamic dispatch first and then end up with a similar sort of int comparison in `long_richcompare`. [2]
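If you want to see which specialization your comparison actually hits on your build, `dis` can show the adaptive instructions once the function has warmed up (Python 3.11+; exact instruction names vary by version):

```python
import dis

def f(a, b):
    return a < b

# Warm the function up so the specializing interpreter has a chance to
# quicken the generic COMPARE_OP into a specialized form.
for _ in range(1000):
    f(1, 2)

# adaptive=True shows the specialized instructions, e.g. COMPARE_OP_INT
# instead of the generic COMPARE_OP.
dis.dis(f, adaptive=True)
```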
Any plans on open-sourcing your ATC speech models? I've long wanted a system to take ATIS broadcasts and do a transcription to get sort of an advisory D-ATIS, since that system is only available at big commercial airports. (And, according to my very busy local tower, apparently nearly impossible to get the FAA to give you.)
Existing models I've tried just do a really terrible job at it.
I've thought about the same thing; transparently, we were trying to get a reliable source of ATIS to inject into our model context and had the same issue with D-ATIS. What airport are you at? Maybe we'll whip up a little ATIS page as a tool for GA folks.
I was actually a little surprised to see that in there; I wouldn't really consider those features to be "memory safety" as I traditionally see it, at least not as an advantage over Java.
They don't really lead to exploitable conditions, except maybe DoS if you have poor error handling and your application dies with null pointer exceptions.
> I was hopeful, and I've tried to just see if this long thread results in anything constructive, but this seems to be going backwards (or at least not forwards).
So probably just hoping some people on either side of the issue would see the light. But he also probably shouldn't have chimed in earlier then.
The VOR Minimum Operational Network [1] in the US is basically supposed to be that. They're decommissioning a lot of the VORs, but at least guaranteeing that you'll be within 100 NM of a working VOR and of an airport with an approach that can be accomplished with VORs for the initial fixes.
Still definitely feels like putting a lot of reliance on GPS but at least there's a backup for the worst case.
There's also a DME Minimum Operational Network, for airliners that can use DME-DME RNAV. (That's too expensive for smaller aircraft to install, though.)
It's too bad that DME/DME RNAV isn't more widely available. The only real reason it's so expensive is that there isn't much demand for it since GPS (usually) works fine. Electronics-wise, it's not much more complicated than a transponder. Unlike a GPS, it does have to transmit, so it will always be somewhat more expensive than GPS.
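Computationally it really is modest; the core of a DME/DME fix is just intersecting two range circles around known station positions. A toy flat-earth sketch (made-up station coordinates, no slant-range or great-circle corrections):

```python
import math

# Toy DME/DME fix on a flat local plane: intersect two range circles.
# Coordinates and ranges in nautical miles; station positions are made up.
def dme_dme_fix(s1, r1, s2, r2):
    (x1, y1), (x2, y2) = s1, s2
    d = math.hypot(x2 - x1, y2 - y1)        # distance between the stations
    a = (r1**2 - r2**2 + d**2) / (2 * d)    # along-baseline offset from s1
    h = math.sqrt(max(r1**2 - a**2, 0.0))   # perpendicular offset
    xm, ym = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
    # Two mirror-image solutions; a real system resolves the ambiguity with
    # a third DME or a rough prior position.
    return ((xm + h * (y2 - y1) / d, ym - h * (x2 - x1) / d),
            (xm - h * (y2 - y1) / d, ym + h * (x2 - x1) / d))

# Example: stations 60 nm apart, measured ranges of 50 nm and 40 nm.
print(dme_dme_fix((0.0, 0.0), 50.0, (60.0, 0.0), 40.0))  # (37.5, +/-33.07)
```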
The other problem is that there's a limit to how many aircraft a DME station can serve at a time (about 100), but I believe that could be greatly expanded if aircraft weren't pinging the DME so often. A position fix every second is generally fine, and it could be even more infrequent if you have a cheap inertial system to fuse with it that can fill in the track for a few seconds between pings.
Probably the required accuracy. VOR is on the order of a degree for accuracy. DME is around 0.1nm. So if you’re 50nm from the VOR, then you may have a position fix error of 0.87nm across the radial, if I did my math right.
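(The cross-radial error is roughly the distance times the tangent of the bearing error, for anyone who wants to check:)

```python
import math

# Cross-radial position error from a 1-degree bearing error at 50 nm:
# error ~= distance * tan(bearing_error)
distance_nm = 50.0
bearing_error_deg = 1.0
print(round(distance_nm * math.tan(math.radians(bearing_error_deg)), 2))  # ~0.87 nm
```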
For altitudes above 12,900 ft AGL, the official service volume for a DME is 100-130nm.
Below that it's considered "line of sight"... and some quick math shows that you'd be able to get line of sight >50 nm for all altitudes above 1700 ft AGL (which is very low).
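(Using the usual radio-horizon rule of thumb, range in nm is about 1.23 times the square root of altitude in feet, assuming the station antenna is near ground level; solving for 50 nm gives roughly 1,650 ft:)

```python
import math

# Radio line-of-sight approximation: range_nm ~= 1.23 * sqrt(altitude_ft),
# assuming a station antenna near ground level.
def los_range_nm(altitude_ft):
    return 1.23 * math.sqrt(altitude_ft)

def altitude_for_range_ft(range_nm):
    return (range_nm / 1.23) ** 2

print(round(los_range_nm(1700)))         # ~51 nm
print(round(altitude_for_range_ft(50)))  # ~1652 ft
```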
If you're 100nm away, chances are there are more than 100 aircraft nearer to the DME than you from at least one of the two required DMEs. Especially if GPS has failed and many aircraft are trying to use backup DME-DME. Unless you're in a very sparse area.