These HLE emulators for Microsoft consoles (the ones built on the idea "hey, it can't be that different from Windows, right?"), like CXBX for the original Xbox, don't tend to work super well. There's such a huge impedance mismatch between what this code expects, which is actual Xbox hardware (remember that games run in kernel mode, and quite a few drivers are statically compiled and optimized into the games), and what an HLE layer on a PC can provide, that it's just way less surface area to go the full system emulator approach in the first place.
So what you see is some tech demo samples running perfectly and maybe a couple of bespoke games working OK-ish, but then the complexity burden overwhelms the project and it kind of stagnates.
This emulator isn't taking the 'CXBX approach'. CXBX does not recompile the binary; it relies on common system/graphics API calls not being inlined, so they can be easily replaced with Windows equivalents, a condition that absolutely doesn't hold in any 360 game binary for all the important things.
The point is that it's still HLEing all of the imports (like CXBX). Just going the extra mile and full system emulating pays massive dividends. No, it's not close enough to Windows (the mismatches are super subtle), so don't take the bait of the import names being the same.
> The point is that it's still HLEing all of the imports (like CXBX).
You have a fundamental misunderstanding here. HLE in the style that CXBX does it is not possible for 360 games: every userland library is statically linked and LTCG is enabled by default (maybe required for MS cert?) for release builds. Nearly everything is inlined and therefore impossible to replace with calls to matching Windows APIs. Xenia and rexdex both emulate the kernel at the syscall level and don't touch userland at all.
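To make that concrete, here's a rough sketch of what emulating at the kernel boundary can look like. Names, ordinals, and signatures are made up for illustration; this is not Xenia's actual code:

    #include <stdint.h>
    #include <stdio.h>

    /* Sketch: the emulator leaves the statically linked, inlined userland
       alone and only intercepts calls that cross the kernel boundary.
       Guest kernel imports resolve to host-side handlers. */

    typedef uint64_t (*kernel_export_fn)(uint64_t *guest_args);

    /* Host implementation of one guest kernel export (illustrative name). */
    static uint64_t host_ExCreateThread(uint64_t *args) {
        printf("guest created a thread, entry point 0x%llx\n",
               (unsigned long long)args[0]);
        return 0; /* STATUS_SUCCESS */
    }

    /* Dispatch table indexed by export ordinal (ordinal picked arbitrarily). */
    static kernel_export_fn kernel_exports[512] = {
        [13] = host_ExCreateThread,
    };

    /* The CPU emulator calls this whenever guest code jumps through a
       kernel import thunk; everything below that boundary stays untouched. */
    uint64_t dispatch_kernel_call(uint32_t ordinal, uint64_t *guest_args) {
        kernel_export_fn fn = kernel_exports[ordinal];
        return fn ? fn(guest_args) : 0;
    }

The key point is that the dispatch table is the *only* place where host code substitutes for guest code; everything the game statically linked runs as-is under the CPU emulator.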
> Just going the extra mile and full system emulating pays massive dividends.
Maybe for security research and correctness purposes; it has little practical value for PS3/360-gen emulation and beyond.
On the original Xbox even more libraries are statically linked than on the 360 (it had nothing like xam.xex), and they're heavily inlined even without LTO because the code style at the time leaned heavily on C macros.
Something like this was even easier for XNA games: you could download a bundle of C# DLLs and EXEs from your console and, after a quick decryption process, patch them to swap the dependencies. I reverse engineered a couple of games this way and was able to boot them part-way within a couple of hours (of course, endianness and console-only shader instructions were a problem).
I vaguely remember this being done for FEZ back in the day. Funnily enough, the same is true for a number of cross-platform frameworks. One that sticks out was a port of a David Cage game (which was largely written in Lua), where someone made huge progress on porting the runtime from PS3 to PC.
At this point, is it still worth it to post these manually?
Checking all submissions, keeping only the ones with sufficient engagement, and adding a little summary line ("7 other submissions ignored, the oldest dating from Oct 2018")...
Maybe I'm too lazy, but that really sounds like something I would let a computer do automatically.
You can do exact URL matching that way, but that leaves out a great many related threads. That limitation doesn't show up in this case, but see e.g. https://news.ycombinator.com/item?id=36713263 - same story, but very different articles and URLs.
If anyone knows how to write code to build those sorts of lists, I'd love to know about it! The nice thing is that the relevant data is all public, so anyone who wants to work on it can.
Technologically the task can be automated by scraping each submission when it’s submitted, calculating the sentence embeddings of the article, and comparing new submissions against the stored embeddings. However, that’s how you become a Facebook style social network with “more you might like” and whatnot.
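The comparison step itself is tiny once the embeddings exist. Assuming some model has already turned each article into a float vector (the model choice and the 0.85 threshold below are arbitrary illustrative assumptions), the dupe check is just cosine similarity over the stored vectors:

    #include <math.h>
    #include <stddef.h>

    /* Cosine similarity between two embedding vectors; a score near 1.0
       means the articles are probably about the same story. */
    float cosine_similarity(const float *a, const float *b, size_t dim) {
        float dot = 0.0f, norm_a = 0.0f, norm_b = 0.0f;
        for (size_t i = 0; i < dim; i++) {
            dot    += a[i] * b[i];
            norm_a += a[i] * a[i];
            norm_b += b[i] * b[i];
        }
        return dot / (sqrtf(norm_a) * sqrtf(norm_b));
    }

    /* Flag a new submission as a likely dupe if it scores above the
       (arbitrarily chosen) threshold against any stored embedding. */
    int is_probable_dupe(const float *new_emb, const float **stored,
                         size_t count, size_t dim) {
        for (size_t i = 0; i < count; i++)
            if (cosine_similarity(new_emb, stored[i], dim) > 0.85f)
                return 1;
        return 0;
    }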
Even that is surprisingly hard! Hard enough that we had to stop working on it when we realized that it's a startup, or at least a major undertaking, in its own right.
I agree; HN clearly stays minimal on purpose, which is why I added that last disclaimer line.
> with “more you might like”
In itself that's a very good feature; it becomes a problem when combined with a host of other patterns that transform discoverability into addiction. Most often these 'bad recommendations' are high-engagement content (controversial, or low-value but highly attractive clickbait) that the platform knows full well is only tangentially related to the content, if at all.
In addition to the examples other commenters have given, it’s also how Rosetta 2 works, translating x86_64 to arm64 to support Intel binaries on M1/M2 Macs.
Well, think about it this way - suppose you have a block of C code and you compile it to object code, like:
if (hitByMissile == true) {
    lives--;
    if (lives == 0) playing = false;
}
// next address is 0x8f02
it might turn into:
ld a, (4011h)    ; a = hitByMissile
or a             ; ld doesn't set flags, so test a against itself
jr z, 8f02h      ; not hit? skip the whole block
ld a, (4020h)    ; a = lives
dec a            ; lives--
ld (4020h), a
jr nz, 8f02h     ; lives remaining? skip
xor a            ; a = 0
ld (4021h), a    ; playing = false
; next address is 8f02h
That might decompile back to C as:
if (var4011 != 0) {
    var4020--;
    if (var4020 == 0) {
        var4021 = 0;
    }
}
like that. It's less readable, because we have no variable names, and it's not identical to the source code, but it's probably close enough; the important thing is that we don't care whether we understand it or not. If our original code stores a variable at 4020h in a 16-bit address space, but it ends up somewhere wildly different in our 32-bit address space because we're recompiling on a newer machine, we don't care - we just care that the name gets used consistently.
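For instance, the recompiled output (illustrative, not from any real decompiler) just turns each absolute address into a named global, and the host compiler is free to place those wherever it likes:

    #include <stdint.h>

    /* One named global per guest address. The original 16-bit addresses
       survive only in the names; where the host compiler actually puts
       these variables is irrelevant. */
    static uint8_t var4011;  /* was (4011h): hitByMissile */
    static uint8_t var4020;  /* was (4020h): lives        */
    static uint8_t var4021;  /* was (4021h): playing      */

    void block_8ee0(void) {  /* hypothetical label for this basic block */
        if (var4011 != 0) {
            var4020--;
            if (var4020 == 0)
                var4021 = 0;
        }
    }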
If you then read through the decompiled source you could start to piece together what the variables are, though.
It's not so much the variables, but that compilation must be a lossy process. There may be many ways to interpret the assembly, and the compiler might generate different asm than the one that was decompiled.
Much less lossy than you probably would think if you've never dug through what your typical compiler outputs. There usually aren't that many ways to interpret the assembly and it doesn't matter whether it generates different assembly as long as it does the same thing.
The typical code generator for a compiler uses all kinds of boilerplate for common constructs (loops, function calls, data access), and once you know about these you can usually recognize them on sight.
Optimizing compilers can make this quite a bit harder, by the way: extra passes do all kinds of reshuffling to get rid of instructions, to combine them, and to move things from memory into registers.
You can usually tell compilers to output assembly code; doing that for a program you wrote yourself is a good exercise to see how your high-level code translates into lower-level code. And with optimization off, all you see is the code generator's output.
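For example, with gcc (clang takes the same flag), try this on a trivial function:

    /* save as sum.c and run:  gcc -S -O0 sum.c
       then read sum.s: the stack-frame setup, the load/store pair for
       every variable access, and the loop's compare-and-branch skeleton
       are exactly the boilerplate patterns you learn to recognize on
       sight. Recompile with -O2 to see how much the optimizer reshuffles. */
    int sum_to(int n) {
        int total = 0;
        for (int i = 1; i <= n; i++)
            total += i;
        return total;
    }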
Yeah, this is where you start to see decompilers outputting C code that just looks like assembler written out as C. There was one I used to use years ago - can't remember the name, some odd tool from one of the many commercial C products that never really took off - that would do a good job *mostly*, but at some point it would start outputting blocks of code with lots of `register` variables in, and you knew it had gone out to lunch on that bit.