I worked with a guy like you once. He got fired. Three guesses why.
For most game development, the code is written and compiled on PCs and then shipped to a development version of the console hardware for testing. Naughty Dog in particular had a way of "live coding" their games in a Lisp dialect and shipping individual compiled functions to a running game instance on the PS2 dev box.
You think the demoscene kids can't match that?
Besides which, today's PCs have CPUs with sixteen (64-bit!) GPRs, GPUs and audio chipsets that vastly exceed the capabilities of any Amiga, gigs of RAM, and cutting-edge development tools that the Amiga never had on its best day. I could be the biggest Jay Miner fanboy ever and I'd still write all my Amiga software from the comfort of a Linux PC.
> For most game development, the code is written and compiled on PCs and then shipped to a development version of the console hardware for testing.
When doing cross-compilation, is there a way to know whether your cross-compiled code is as efficient/fast as code natively compiled on the target architecture?
A cross-compiler can generate the exact same code a native compiler can. If you have a powerful development box, it will compile much faster than if you compiled on the native architecture. So there's that too.
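To make that concrete, here's a sketch of what the two builds look like, assuming a GCC-style cross-toolchain for the classic 68k Amiga (m68k-amigaos-gcc is one real-world toolchain name; substitute whatever cross-compiler you actually use):

    /* hello.c -- the same source feeds both builds */
    #include <stdio.h>

    int main(void) {
        puts("Hello, Amiga!");
        return 0;
    }

    /*
     * cross (fast PC host):  m68k-amigaos-gcc -O2 hello.c -o hello
     * native (on the Amiga): gcc -O2 hello.c -o hello
     *
     * Given the same compiler version and flags, both invocations run the
     * same m68k code generator, so the emitted 68k code is identical;
     * only the compile time differs.
     */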
The NES probably couldn't have run an assembler capable of handling a large program at all. NES games were developed and assembled on Unix workstations and wired to the console via an in-circuit emulator that simulated a NES cart.
Based on the griping I've seen from the Amiga community, it seems like cross-compiling on normal PCs is the only way to develop for the new PPC-based "Amigas" - no native toolchain.
The same way you write code for a 64-bit machine on a 32-bit operating system, or target any other hardware feature not present on your development box. Your box doesn't have to support the feature, it's enough to have a compiler that does.
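A minimal illustration of that, assuming a 32-bit x86 host with a biarch gcc installed (one that can emit 64-bit code via -m64):

    /* wide.c -- uses a 64-bit type the 32-bit host never handles natively */
    #include <stdint.h>

    uint64_t square(uint64_t x) {
        return x * x;   /* done in the *target's* registers, not the host's */
    }

    /* gcc -m64 -c wide.c
     * emits x86-64 code from a 32-bit host; the compiler only has to
     * model the target's 64-bit registers, it never has to execute
     * 64-bit instructions itself. */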
I'm not sure if you're joking, or if you need to learn more about cross-compilation.
Works pretty well... I managed to use it to write a couple of stupid demo-type effects a couple of years ago, even though I have no Amiga, and no real idea how to use one for day-to-day stuff anyway. I wrote the copper effects as a sequence of macro-driven dc.w lines ;)
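For anyone wondering what those dc.w lines amount to: a copper list is just raw words in chip RAM, pairs of MOVE/WAIT instructions for the Copper coprocessor. A minimal sketch, written here as a C array instead of assembler dc.w directives (the register offset is the standard custom-chip one; the raster line numbers are arbitrary):

    /* a simple raster bar: MOVE writes a custom chip register,
       WAIT stalls the Copper until the beam reaches a raster line */
    unsigned short copperlist[] = {
        0x0180, 0x0000,   /* MOVE: COLOR00 ($180) = black     */
        0x6401, 0xFFFE,   /* WAIT: beam reaches line $64      */
        0x0180, 0x0F00,   /* MOVE: COLOR00 = red              */
        0x6901, 0xFFFE,   /* WAIT: line $69                   */
        0x0180, 0x0000,   /* MOVE: back to black              */
        0xFFFF, 0xFFFE    /* impossible WAIT = end of list    */
    };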
68000 wasn't as much fun as I remembered, though it was an enjoyable nostalgia trip I suppose. The separate address and data registers are annoying (you actually get 8 of each, though one address register is the stack pointer, and on the Amiga you often lose a6 because it holds the address of the library jump table), all the instructions are really... damn... slow, and the addressing mode selection is a bit crap.
It was also funny to think how far technology has come on. My PC can write to disk - a 5400rpm USB disk, NTFS, via fwrite - more quickly than the Amiga's video scanout hardware can read pixel data :-|
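That claim sounds extreme, but the arithmetic backs it up. A back-of-envelope sketch, assuming a PAL lowres screen (320x256, 5 bitplanes, 50 Hz; other screen modes give different numbers):

    #include <stdio.h>

    int main(void) {
        long bits_per_frame = 320L * 256 * 5;          /* pixels x bitplanes */
        long bytes_per_sec  = bits_per_frame / 8 * 50; /* x PAL refresh rate */
        printf("%ld bytes/s\n", bytes_per_sec);        /* 2,560,000          */
        return 0;
    }

That's about 2.5 MB/s of scanout DMA; even a slow 5400rpm USB disk sustains sequential fwrite rates an order of magnitude higher.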
I never worked with a real 68000, but I did do a bunch of stuff for a ColdFire once --- they stripped some of the weirder addressing modes out of the 68000 and rebadged it as 'RISC'. (Well, I laughed.)
I thought it was an okay architecture; I've worked with weirder. The address/data register split required special compiler support (our code generator didn't distinguish between the two kinds of register). I thought the addressing modes were pretty useful, although the ColdFire didn't support memory-to-memory operations, which was a shame. What did you miss?
Good question. I had a look at the code again, and... it didn't look too bad? My recollection was that the lack of a base+index*scale+disp32 addressing mode (like on x86/x64) was annoying, but I'm not sure why any more.
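For what it's worth, the place that mode earns its keep is struct-array indexing. A hypothetical illustration (the struct and function are made up; the field offsets are chosen so the scale and displacement are visible):

    /* p[i].y on x86-64 is a single load using base + index*scale + disp:
     *     movl 4(%rdi,%rsi,8), %eax
     * A plain 68000 only offers (d8,An,Xn) - an 8-bit displacement and
     * no index scaling - so the same access needs extra shifts/adds. */
    struct particle { int x, y; };    /* sizeof = 8, offsetof(y) = 4 */

    int get_y(struct particle *p, long i) {
        return p[i].y;
    }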
Are you seriously asserting that modern Intel CPUs have that few registers? Maybe you're talking about something I don't understand, but I can't imagine what.
It's perfectly possible to cross-compile code. There is a point, though, where demoscene coding requires running the result on the real hardware for testing, as it usually relies on tricks that get lost in emulation.