
Excellent. Genuine innovation on a platform many had feared was abandoned.

My only regret is the confusing x86/x64 messaging. Many will interpret the perf improvements as being due to the 64-bitness of the new compiler.



Can you expand on what you mean by "confusing x86/x64 message"? I'd love to help clear it up. Are you saying that people will believe that the reason the JIT is so much faster is because it's 64 bits? That's the exact opposite of reality: 64 bit programs tend to be slower, because they have to manipulate more data (all pointers take twice as much space, the Win64 ABI requires a minimum of 48 bytes of stack per non-leaf function, etc...)


Do all 64-bit programs tend to be slower all the time, though? I'd think they would be faster because you're processing more data per clock cycle. Or is it that the x64 JIT hasn't been optimized to take advantage of the latest generation of 64-bit processors? (I heard something about not making use of the latest Math.Pow a while ago.)


Programs that are pointer-heavy tend to be a little slower. There's really not that much 64-bit arithmetic going on in the average application, so the more data per clock cycle only helps in a particular class of apps. Cryptography tends to do significantly better, for that exact reason. You do seem to be conflating code quality with compiler throughput, though. I'll try to clarify in much more detail in a CLR Codegen blog post soon.


> I'd think it would be faster because you're processing more data per clock cycle.

This is only true if you happen to be dealing with integers larger than 32 bits (rarely in most of today's code). The real performance benefit of x64 has more to do with a larger number of general-purpose registers. More registers allow (but don't guarantee) programs to spend less time accessing main memory, thus gaining some speed.


Yes, I knew about the registers. I guess I overestimated the number of applications that involve large number computations.


Or perhaps I underestimated them...


Why can't the CLR come up with its own "X32" target? IIRC, they made the default for projects in VS explicitly 32-bit because of better codegen and lower overhead. If an app is OK with 4GB of RAM, why not let the process run in 64-bit mode but use 32-bit pointers?

Also, why should the Win64 ABI constrain everything? Certainly it'd only be needed around the edges, but for .NET code calling other .NET code, you're free to do interesting things. (Like pass a GUID or tuple in a single register if it'd help.)


Keep in mind that Windows emphasizes a unified ABI on AMD64 largely because of what happened on x86. The x86 ABI sort of evolved into this "wild west" kind of situation, with various calling conventions for different languages/runtimes and usage scenarios (i.e. stdcall, cdecl, fastcall, pascal, etc.), which ends up making things unnecessarily complicated for the OS. (Kevin has an old blog post where he discusses this in more depth: http://blogs.msdn.com/b/freik/archive/2006/03/06/x64-calling...)

Being a good OS citizen and sticking to the Win64 ABI also makes some things significantly simpler for the rest of the runtime. An obvious example of where this pays off is native interop (e.g. P/Invoke, COM interop, C++/CLI), but one less obvious example of where this comes into play is actually managed exception handling (which is built on top of the underlying Structured Exception Handling mechanism that Windows provides). Not only does adhering to the unified ABI allow managed exceptions to interop with native exceptions, but it also makes things simpler for debuggers, anything that needs to walk the stack, etc.

Remember, the CLR is really more of an execution engine, and not so much a "virtual machine". We try not to disrupt the architectural conventions of the underlying platform, since we're not trying to replace the OS environment itself.

--Henry Baba-Weiss [MSFT]


> Genuine innovation on a platform many had feared was abandoned.

Are you equating .NET in general with Silverlight? Honest question.


No, but the way MS came out with WinRT and excluded existing .NET code from executing on it certainly didn't reassure anyone.


There is a lot to criticize about WinRT, but basing it on COM was the right call. In fact I would say absolutely essential for high-performance apps. Make all the costs of .NET (GC, memory usage, JIT) optional. CLR can still call into COM objects without problems, so all the C# fans can still go nuts.

It's hard not to see this as informed by how badly Longhorn failed. Microsoft tried to make .NET the basis of their OS platform during Longhorn and failed miserably, in part because it was simply not built for that. CLR belongs as a layer on top.

[Disclaimer: I used to work at MS. Did not work on the Windows Runtime. These are my personal opinions.]


That's not what I was questioning - see my other answer for why WinRT was not "reassuring". Now it makes sense, and I can safely not care about WinRT.

As far as Longhorn failing, that was certainly a disaster of management as much as technology. After all, MS Corp felt generics were an impractical, academic, theoretical idea that couldn't be properly implemented in a language like C# or the CLR.


If that's not what you're questioning then IMO don't phrase it as a .NET problem; it is broader than that. There is a pocket of the .NET community that talks as if they shafted .NET in favor of native code, but WinRT is probably more disruptive to the existing Win32/C workflow than it is for people writing C#.


> After all, MS Corp felt generics were an impractical, academic, theoretical idea that couldn't be properly implemented in a language like C# or the CLR.

What?!

Generics were only added in .NET 2.0 because they weren't going to be done in time for the .NET 1.0/1.1 releases.

There are papers from the .NET Beta days already describing the way generics could be implemented, but additional work was still needed at the time.


Generics were only added to .NET 2.0 because MSR got them implemented. Redmond would never have done it alone; they called it an academic, theoretical feature. They may have ended up with a lameass implementation of generics, à la Java.

Here's a history by Don Syme, who was one of the main people on this project:

http://blogs.msdn.com/b/dsyme/archive/2011/03/15/net-c-gener...

Quotes:

  "But I do want to say one thing straight: Generics for .NET and C# in their current form almost didn't happen: it was a very close call, and the feature almost didn't make the cut for Whidbey (Visual Studio 2005)"
  
  "being told by product team members that "generics is for academics only""
  
  "It was only through the total dedication of Microsoft Research, Cambridge during 1998-2004, to doing a complete, high quality implementation in both the CLR [...] and the C# compiler, that the project proceeded."

Microsoft's C# track record shows MS Corp's commitment to higher-level programming pretty well, IMO. LINQ got added just to hit the "LINQ" target (and Erik Meijer may have been a big force there). But even then: the C# 3.0 features were only implemented to enable LINQ, not added as general language features. One big example: declaration type inference is half-assed and only exists to facilitate anonymous types. They've had years since to clean up the design and haven't shown any indication of doing so.

(I still think C#'s the best out of the "mainstream" languages, but they could be doing a whole ton better.)


I know those papers, like anyone who cares about compiler design should.

As for MSR vs MS Corp, as you put it.

It is called Microsoft Research; it is still a Microsoft unit, with researchers on Microsoft's payroll.

So it is plain and simple, Microsoft.


C# was first to introduce most mainstream functional concepts into C-like languages: generics, lambdas, LINQ etc.


I never understood where people got this idea from.

You can target WinRT with the usual set of .NET languages, the only difference being the classes one uses.

Does C stop being C, if you don't use libc?


There are multiple things at play. Until the F# stdlib was ported, F# couldn't target WinRT. Then there's the ton of fanfare JS/HTML were given, and the fact that C#'s development has been moving at a snail's pace (and the "face" of C# is now working on making JS suck less) -- eh, it makes people nervous. Plus, MS's messaging wasn't all that clear. Now I see WinRT as being for cutesy tablet apps and I'm not so concerned. But before Win8, there was a lot of confusion going on.


> But before Win8, there was a lot of confusion going on.

Which was cleared up for anyone who cared to access the information available after the BUILD conference.

But hey, it's easier to form opinions based on Twitter-spread rumours, or something like that.


WinRT was definitely a screw-up, but at this point I don't see it really affecting developers that much. WinRT has pretty much failed as a platform. At this point, is it even worth developer time to deliver an app to that platform?

If you're talking about apps in the windows app store (so-called metro-style apps or whatever the current name is), then you're right that it meant throwing out a lot of code, but it's not true that all existing .NET code was excluded. It took some rejiggering. But once again, that's only if you want to deliver via the app store.


I think you might be confusing WinRT with the Surface RT tablets. They have relatively little to do with each other. For example, the "metro-style apps" you mention are WinRT just as well.


Uh... WinRT excluded all existing third-party code from executing on it.

EDIT: asveikau has rightly pointed out I have confused two matters.


Surely you would not confuse WinRT, the API, with Windows RT, the ARM tablet product. Why, these names are totally unambiguous!

(Microsoft is not great at naming products.)


I agree :(


The other one is that it's available only on 8.1 - I wish 7 was supported too.


The compiler is not 64 bit. :)


Which compiler are you talking about? The JIT compiler is 64 bit, the C++ x64 targeting compiler is available as both 32 & 64 bit. Roslyn is 32 bit because, well, the x64 JIT is dog slow :-)


I was referring to C#, but I may have misunderstood the parent. The C# compiler isn't 64-bit, but unfortunately it's not because of the JIT.



