ammar2's comments | Hacker News

> Microsoft would fork it within hours

I haven't trudged through Chromium's commit statistics but has Microsoft been upstreaming many contributions? I'm skeptical that they are ready to take on the full brunt of Chromium maintenance on a whim, it would take a decent while to build up the teams and expertise for it.


Before they swapped Edge over to use Chromium they were capable of maintaining their own engine just fine. Probably not overnight, but in the past they have shown that they have the budget to support a browser engine if they want to.


Why do you think they moved to Chromium then? They switched because they could not support a competitive engine by themselves.


Because no amount of money was going to solve the problem of people saying they think Microsoft's browser is slower/worse/etc. Switching to Chromium negated that in a way nothing else could.

When Microsoft beat Netscape with IE, it was by building a far better browser. Google is a stronger competitor than Netscape ever was though. Without Google dropping the ball (like Netscape), Microsoft would never exceed Chrome's performance by enough to be the fastest, most compatible (with Chrome), etc.

It is also just classic Microsoft when they are hungry. Like making Word use WordPerfect files and keyboard shortcuts. Only today it is that their browser is mostly Google, Linux is built into Windows 11, SQL Server ships on Linux, and their most popular IDE is open-source built on open tech (Electron) they didn't create.

When they get threatened, nothing is too sacred for Microsoft to kill or adopt.


We have enough people of working age now who haven't lived through the Microsoft of old and don't remember what they can/could do.

Microsoft firing on all cylinders, when they want to, is a terrifying force.


I feel like they burnt enough browser goodwill with IE that no one who was on the internet back then wants to touch a Microsoft browser, regardless of the engine.


They are on the record about why they switched to a chromium based browser. It’s been a while, but if I’m remembering correctly, at the time Google was making changes to YouTube to make it actively slower, and use more power on IE. Microsoft realized that while they could compete as a browser, they couldn’t compete and fight google trying to do underhanded things to sabotage their browser.


Because they could achieve the same product using Chromium at less cost. Should that change, their investment in that area would probably increase as a consequence.


No, because using Chromium was the only way they could stay relevant in the browser space. They were simply unable to build the same product with their own stack.


Unable is not the right reason, more like management wasn't willing to fund the team as it needed.

Just like management doesn't give a F about the state of UWP, WinUI and anything related to it.


They were facing the same problem that everybody is: Google adds features too fast to keep up with. If Google went in a bad direction with Chrome, Microsoft would just have to keep up with Mozilla and Apple.


Yes, Microsoft actively contributes to Chromium.

Microsoft lands many changes in Chromium first before they show up in Edge (logistically it's easier to do things this way for merging reasons), but they do also upstream changes to Chromium that show up in Edge first.


Glad this feature is built into most modern operating systems these days.

For macOS (Sequoia+) you can just forget the network and reconnect to get a new MAC address [1].

Android's documentation on whether it decides to generate a new address per connection is a little vague [2], but I'm guessing forgetting and reconnecting works as well; you may also need to flip the "Wi-Fi non-persistent MAC randomization" bit in developer settings.

On Windows, flipping the "Random hardware address" switch seems to cause it to generate a new seed/address for me.

[1] https://support.apple.com/en-euro/102509

[2] https://source.android.com/docs/core/connect/wifi-mac-random...
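Tangentially, the randomized addresses these OSes hand out are just ordinary locally-administered unicast MACs: the "locally administered" bit of the first octet is set and the multicast bit is cleared, so the result can't collide with a vendor's burned-in address. A quick Python sketch of generating one (the helper name is mine, not any OS API):

```python
import random

def random_mac() -> str:
    """Generate a random locally-administered, unicast MAC address.

    OS-level randomization sets the locally-administered bit (0x02)
    and clears the multicast bit (0x01) in the first octet, so the
    address is distinguishable from a vendor-assigned (burned-in) MAC.
    """
    octets = [random.randint(0, 255) for _ in range(6)]
    octets[0] = (octets[0] | 0x02) & ~0x01  # locally administered, unicast
    return ":".join(f"{o:02x}" for o in octets)

print(random_mac())  # e.g. "5e:3a:..." -- varies on every call
```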


Per [1], this only works once per 24 hours on new iOS/macOS versions, and only once per two weeks on older ones though.


Yeah, I had to flip the developer setting toggle, but it worked flawlessly for my flight (American Airlines has a watch-an-ad-for-20-minutes-of-free-internet deal that only works once per MAC).


Are you saying that on iOS 18, if you enable developer mode, then each time you forget the network it gets a new MAC? But without developer mode it does not get a new MAC each time you forget it? The Apple docs linked elsewhere in this thread suggest it only gets a new MAC once per 24 hours when you forget the network normally. I'm going on a long boat trip in the next week where this trick might work for me if so!


I have a generic Android phone from many years ago where the manufacturer didn't even bother to program the WiFi NVRAM, so every time you load and unload the driver, you get a new randomly generated MAC address. Interesting that that has become a feature these days.


I think the rotating address is limited to 3, right? The script here generates one at random.


> it includes instructions for stack manipulation, binary operations

Your example contains some integer arithmetic, I'm curious if you've implemented any other Python data types like floats/strings/tuples yet. If you have, how does your ISA handle binary operations for two different types like `1 + 1.0`, is there some sort of dispatch table based on the types on the stack?
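For illustration, here's roughly what I mean by a dispatch table — a hypothetical sketch of how a small stack VM might handle a binary add across mixed operand types, loosely modeled on CPython's approach of trying each operand's handler in turn (all names here are made up, not from the project in question):

```python
def int_add(a, b):
    # int handler: only knows how to add two ints
    if isinstance(a, int) and isinstance(b, int):
        return a + b
    return NotImplemented

def float_add(a, b):
    # float handler: also accepts ints by promoting them
    if isinstance(a, (int, float)) and isinstance(b, (int, float)):
        return float(a) + float(b)
    return NotImplemented

HANDLERS = {int: int_add, float: float_add}

def binary_add(stack):
    """Pop two operands and dispatch on their types, like a BINARY_ADD op."""
    b, a = stack.pop(), stack.pop()
    # Try the left operand's handler, then the right's (akin to CPython
    # trying each type's nb_add slot until one succeeds).
    for ty in (type(a), type(b)):
        handler = HANDLERS.get(ty)
        if handler is None:
            continue
        result = handler(a, b)
        if result is not NotImplemented:
            stack.append(result)
            return
    raise TypeError(f"unsupported operands: {type(a)}, {type(b)}")

stack = [1, 1.0]  # the `1 + 1.0` case from above
binary_add(stack)
print(stack)  # [2.0]
```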


> I'd prefer to move forward based on clear use cases

Taking the concrete example of the `struct` module as a use-case, I'm curious if you have a plan for it and similar modules. The tricky part of course is that it is implemented in C.

Would you have to rewrite those stdlib modules in pure python?


As in my sibling comment, pypy has already done all this work.

CPython's struct module is just a shim importing the C implementations: https://github.com/python/cpython/blob/main/Lib/struct.py

Pypy's is a Python(-ish) implementation, leveraging primitives from its own rlib and pypy.interpreter spaces: https://github.com/pypy/pypy/blob/main/pypy/module/struct/in...

The Python stdlib has enormous surface area, and of course it's also a moving target.


Aah, neat! Yeah, piggy-backing off pypy's work here would probably make the most sense.

It'll also be interesting to see how OP deals with things like dictionaries and lists.


What you're proposing is reminiscent of Keybase's account verification system. You make a post or equivalent on each platform with cryptographic proof that it's you. (e.g here's mine for GitHub https://gist.github.com/ammaraskar/0f2714c46f796734efff7b2dd...).


Edit: Analyzed the wrong thing earlier.

This depends on the Python version, but if it has the specializing interpreter changes, the `COMPARE_OP` comparing the integers there is probably hitting a specialized `_COMPARE_OP_INT` [1].

This specialization has a ternary that does `res = (sign_ish & oparg) ? PyStackRef_True : PyStackRef_False;`. This might be the branch that ends up getting predicted correctly?

Older versions of Python go through a bunch of dynamic dispatch first and then end up with a similar sort of int comparison in `long_richcompare`. [2]

[1] https://github.com/python/cpython/blob/561965fa5c8314dee5b86...

[2] https://github.com/python/cpython/blob/561965fa5c8314dee5b86...


This isn't actually timing the sorting, but just the (dumb) function f.


Oh whoops, that's right. I totally missed that.


Any plans on open-sourcing your ATC speech models? I've long wanted a system to take ATIS broadcasts and do a transcription to get sort of an advisory D-ATIS since that system is only available at big commercial airports. (And apparently according to my very busy local tower, nearly impossible to get FAA to give to you).

Existing models I've tried just do a really terrible job at it.


I've thought about the same thing; transparently, we were trying to get a reliable source of ATIS to inject into our model context and had the same issue with D-ATIS. What airport are you at? Maybe we whip up a little ATIS page as a tool for GA folks.


That would be awesome! My airport is KPDK (sadly it doesn't have a good liveatc stream for its ATIS frequency).

I did collect a bunch of ATIS recordings and hand-transcribed ground-truth data for it a while ago. I can put it up if that might be handy for y'all.


If you're willing, that'd be great. I think our model will do well out of the box, but more data is more better as they say.

I spent a lot of time out at PDK when I worked briefly in aircraft sales. Nice airport!

Let me work on this and come back! I think we can ship you an API for ATIS there...


I was actually a little surprised to see that in there, I wouldn't really consider those features to be "memory safety" as I traditionally see it over Java.

They don't really lead to exploitable conditions, except maybe DoS if you have poor error handling and your application dies with null pointer exceptions.


He does open with:

> I was hopeful, and I've tried to just see if this long thread results in anything constructive, but this seems to be going backwards (or at least not forwards).

So probably just hoping some people on either side of the issue would see the light. But he also probably shouldn't have chimed in earlier then.


The VOR Minimum Operation Network[1] in the US is basically supposed to be that. They're decommissioning a lot of the VORs but at least guaranteeing that you'll be 100NM away from a working VOR and an airport with an approach that can be accomplished with VORs for the initial fixes.

Still definitely feels like putting a lot of reliance on GPS but at least there's a backup for the worst case.

[1] https://www.faa.gov/about/office_org/headquarters_offices/at...


There's also a DME Minimum Operational Network, for airliners that can use DME-DME RNAV. (That's too expensive for smaller aircraft to install, though.)


It's too bad that DME/DME RNAV isn't more widely available. The only real reason it's so expensive is that there isn't much demand for it since GPS (usually) works fine. Electronics-wise, it's not much more complicated than a transponder. Unlike a GPS, it does have to transmit, so it will always be somewhat more expensive than GPS.

The other problem is that there's a limit to how many aircraft a DME station can serve at a time (about 100), but I believe that could be greatly expanded if aircraft weren't pinging the DME so often. A position fix every second is generally fine, and it could be even more infrequent if you have a cheap inertial system to fuse with it that can fill in the track for a few seconds between pings.


One thing I have yet to understand is why DME-DME is preferred over VOR-VOR, since the latter can support unlimited aircraft, unlike DME.


Probably the required accuracy. VOR is on the order of a degree for accuracy. DME is around 0.1nm. So if you’re 50nm from the VOR, then you may have a position fix error of 0.87nm across the radial, if I did my math right.
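The math checks out — it's just the cross-track distance subtended by the bearing error at that range (plain trigonometry, nothing aviation-specific):

```python
import math

distance_nm = 50.0       # distance from the VOR
bearing_error_deg = 1.0  # rough VOR bearing accuracy, per the comment above

# Position error across the radial: the offset a 1-degree bearing
# error produces at 50 nm (tangent ~ arc at small angles).
cross_error_nm = distance_nm * math.tan(math.radians(bearing_error_deg))
print(f"{cross_error_nm:.2f} nm")  # 0.87 nm
```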


That's a good point. But if you're 50nm from a DME, I think you're unlikely to be able to get a lock in practice.


For altitudes above 12,900 ft AGL, the official service volume for a DME is 100-130nm.

Below that it's considered "line of sight"... and some quick math shows that you'd be able to get line of sight >50 nm for all altitudes above 1700 ft AGL (which is very low).

Source: https://www.faa.gov/air_traffic/publications/atpubs/aim_html...
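That quick math is presumably the usual VHF radio-horizon rule of thumb, d(nm) ≈ 1.23 × √h(ft), which folds in a bit of atmospheric refraction. Sketching it:

```python
import math

def radio_horizon_nm(alt_ft: float) -> float:
    """Approximate VHF line-of-sight range in nm from altitude in feet.

    Common rule of thumb d ~= 1.23 * sqrt(h); the 1.23 factor (vs. the
    pure geometric ~1.06) accounts for slight atmospheric refraction.
    """
    return 1.23 * math.sqrt(alt_ft)

# An aircraft at 1700 ft AGL has line of sight to a ground station
# out to roughly:
print(f"{radio_horizon_nm(1700):.0f} nm")  # 51 nm
```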


Yes but only to about 100 aircraft at a time, favouring stronger (and therefore closer) signals: https://en.wikipedia.org/wiki/Distance_measuring_equipment#S...

If you're 100nm away, chances are there are more than 100 aircraft nearer to the DME than you from at least one of the two required DMEs. Especially if GPS has failed and many aircraft are trying to use backup DME-DME. Unless you're in a very sparse area.

