This API is for people (both program authors and users) who know quite well what they're doing, for very rare occasions, and the status quo alternative was not being able to get this info from the kernel at all. I imagine someone at MS thought they were doing a favor to people doing performance optimization by giving them something instead of nothing; perhaps they even had to fight to include the feature in the product, because obviously Windows would sell rather well without it. They must have had a bad day reading this criticism.
(I'm not saying "poor Microsoft" - it's definitely not poor - and I'm not saying "you should thank people for whatever API they give you" - a lot of programs/features do more harm than good - and I'm not saying someone shouldn't vent having had a lot of time burnt by some ugly API. I'm saying specifically that this here is a dark corner of the OS that the API in question sheds at least some light on and I have a hunch, perhaps an entirely mistaken one, that it was someone's pet initiative and they thought it was definitely better than nothing. "Little did they know." To take a simple example of the other kind of API: sprintf, the version that doesn't take a buffer size, is a really bad API because that's something everyone's gonna use a lot and you just shouldn't give this kind of thing to people (give them snprintf at the very least) and you certainly can't be excused by "one should know what they're doing" in this case because it's between hard and uneconomical to use safely while passing the buffer size is the obvious thing to do, also one seems entitled to care-free string formatting, definitely 100x more so than care-free kernel status monitoring on the grounds of it being something many more people do much more often.)
Normally, I'd agree with your optimistic assumption, but having spent a bunch of the early-to-mid 2000s writing code for Windows applications, I found the oddness of this API relative to the simplicity of what you're actually doing to be nearly universal across Microsoft, including for APIs (like Direct3D in the first couple of releases, DirectShow, COM in general from any non-Visual Basic language, etc.) that were wholly intended to be public-facing APIs by their very nature.
I can't speak to the conspiracy-theoryish idea that Microsoft wrote bad APIs on purpose to make it easier to compete with other software companies at the application level; I suspect the issues were more subtle than that (an overly bike-sheddy code-review culture, maybe?). But I did find them nearly universally bad to use as a programmer for a Windows "ISV" at the time.
Having said that, most modern Microsoft/Windows APIs I've used have been quite sane. Not bashing Microsoft in general here, just their APIs from about the mid 90s to the mid 2000s.
I've used Win32 (I guess calling it that in the 64-bit era dates me), including some of the COM stuff, though I guess not as extensively as you did. (TFA definitely brought back some memories; the code looks a lot like the usual Win32 stuff.)
I think it's generally hard to make an easy-to-use API in C, unless the memory management involved is really trivial. (Let's ignore C++ for the moment, including Microsoft's attempts like MFC and ATL, on the theory that binary compatibility issues make C that much more practical than C++ for this kind of thing.) IMO even poll or select (and the rest of the socket API, actually) aren't a picnic, and the only way to have a nice experience with these is through a higher-level wrapper. That's because everything involving variable-sized data structures, or any sort of data structure nesting, or literal initialization of this sort of thing, or lifecycles and memory management, or callbacks with private state, etc. etc. is just gnarly in C. (In C++ everything of course is just ducky, and hence we never tire of articles discussing why smart pointers should really be passed by reference, and how shared_ptr is as good as GC except it's slower if used throughout and it doesn't work with circular references and you shouldn't use it unless you really have to, etc. etc.)

I haven't used the standard X C API, but I glanced at it and it didn't look very appetizing. That Microsoft's C APIs for GUI and OO look worse than most of the standard Unix APIs kinda results from MS's APIs doing more, IMO; and X in particular, or Motif, which do more of what Win32 does, do not seem superior to Win32 in terms of usability - though if someone with experience in both strongly disagrees, I'd take their word for it.
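To make the select point concrete, the minimal dance looks roughly like this (a sketch; assume sock is an already-open socket descriptor):

    #include <sys/select.h>

    /* Wait up to one second for sock to become readable. */
    fd_set readfds;
    FD_ZERO(&readfds);             /* the fd_set must be cleared first */
    FD_SET(sock, &readfds);        /* then every fd registered by hand */

    struct timeval tv = { 1, 0 };  /* 1 second, 0 microseconds */
    int ready = select(sock + 1, &readfds, NULL, NULL, &tv);
    if (ready > 0 && FD_ISSET(sock, &readfds)) {
        /* sock is readable. Note select() may modify both readfds
           and tv, so all of the above has to be redone on each call. */
    }

And that's the easy case - no nesting, no callbacks, no ownership questions.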
Now if you're saying that their C APIs of today are better, then my argument is off. If you're saying that, say, writing a Windows program in C# is better than a C/C++/VB COM-infested program, then I think it's more a testament to the strength of .NET and similar runtimes than anything else.
If by "worst API ever" he means "didn't do it the way I'd do it", he might be right. But if what he really means is "biggest pain in the arse to use", I give the crown to the AirWatch API. Not only do few of the functions work they way that they are documented, the sample code will never work (nor is it even close to a representation of how one might use it in the real world"), and my fave: you have to misspell the parameter name going in, but when it comes out it will be spelled right. So my code is littered with "no, that's not a typo; it's what the API requires" and "don't forget to spell the parameter correctly when referencing the return value". What amounted to maybe 100 lines of code against that API took me days of experimentation and cursing the useless docs. On our internal wiki, I documented what I was writing with a preface: "in 30 years of writing against APIs of all shapes and sizes, including ISAM data engines on IBM 370 mainframes from the 70s and open source projects written by some dude during an all nighter of coding and bong hits, the AirWatch API is by far the most steaming pile of poo I've ever had the displeasure of working with."
To me it's obvious that no one within the AirWatch organization ever used the API. They couldn't without running into the egregious bugs I have. My suspicion is that someone said "we should have an API", wrote a spec and shipped it offshore, then never checked the work that came back.
ETW code isn't supposed to be written like this at all. MS has an entire codegen toolchain that lets you define events in an XML manifest as part of your build process. This codegens a C header file which lets you fire events in your own code with a simple function call.
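For the curious, the workflow is roughly this (a sketch from memory; the manifest symbols "MyProvider" and "FrameStart" are made up, and the generated macro names follow whatever symbols you pick in the manifest):

    /* myevents.man defines a provider MyProvider with an event FrameStart;
       running mc.exe -um myevents.man at build time generates myevents.h. */
    #include <windows.h>
    #include "myevents.h"

    int main(void)
    {
        EventRegisterMyProvider();   /* generated: registers the ETW provider */
        EventWriteFrameStart(42);    /* generated: fires the event; the payload
                                        is strongly typed per the manifest */
        EventUnregisterMyProvider();
        return 0;
    }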
When you want to correlate events from your own code and OS events you use a tool like XPerf which knows the OS events and can read your application manifest to get strong typing on the events your application fires. This also lets you take traces on deployed software on customer machines.
If you want to roll your own event consumers you can do that too. But XPerf probably already does what you want.
This all existed back in like 2010 when I was using ETW. I'm sure the tooling has only gotten better since then. So yeah, you can go write your own bad, incomplete version of XPerf and not leverage your existing build process and MS tooling. But then is the API really bad? Or are you just not googling correctly?
This seems to be Microsoft's pattern. Make the APIs super low level and difficult to use, and build tooling on top to make it palatable. If you're doing simple stuff with Visual Studio, no problem, but as soon as you need to go deeper, good luck.
I think it's probably embedded in MS developer culture, leftover from the old days when the ulterior motive was to make Windows "easy to develop for, difficult to port away from".
The point is that there was no effort to use meaningful abstractions and a coherent interface at the API level. The art of building a good product is finding the right level of abstraction for your audience, and I don't think 3rd party developers are well served here, although internal Windows performance engineers might be.
I think most of the author's comments apply to the Win32 API in general. Windows game developers are probably among the few tech demographics that still have to encounter it on a regular basis. Having written an entire game to the Win16 API and then later ported it to Win32, I lost a significant chunk of my life drilling into GDI and DirectDraw (I think that's what it was called then) structures and entry points. Pretty much the whole native API looks like his examples.
I also thought it was perhaps a tad strange to equate CSS, DirectShow, and the Android SDK as examples of tough "APIs" to master. I get his point, but those are three pretty wildly different levels of abstraction there.
Win32 API is the most horrendous collection of badly named, nonsensical types and functions I've ever encountered in my career.
Back in the mid 90s I was learning how to program, and understanding a Win32 "Hello world" was a royal pain compared to everything else I was doing. Functions with more than 8 parameters were the norm, and structs with tens of members had to be manually initialised before being passed in. And don't get me started on the WPARAM/LPARAM idiocy.
Easy to call it idiotic today, after twenty years of evolution, but it's a little silly to do so. The Windows API was written in C, and as such they needed some way to accommodate multiple dispatch in the message loop using statically typed parameters. That same entry point handled literally hundreds of different message types for every operation in the system. From my perspective it was pretty well designed for its time.
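Concretely, the shape being defended is roughly this (a sketch; OnResize and OnKey stand in for your own handlers):

    #include <windows.h>

    /* One statically typed entry point dispatching hundreds of message
       types; the meaning of WPARAM/LPARAM changes per message. */
    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        switch (msg) {
        case WM_SIZE:
            /* For WM_SIZE, LPARAM packs the new client width and height. */
            OnResize(LOWORD(lParam), HIWORD(lParam));
            return 0;
        case WM_KEYDOWN:
            /* For WM_KEYDOWN, WPARAM is a virtual-key code instead. */
            OnKey((int)wParam);
            return 0;
        default:
            return DefWindowProc(hwnd, msg, wParam, lParam);
        }
    }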
SDL is written in C, and it has a much more pleasant way of dealing with heterogeneous event types. There is a struct called SDL_Event, which you fill by calling SDL_PollEvent() until there are no more events to read. The struct is a (discriminated) union of all the possible event types. (It even wraps the Win32 API, so it is very directly comparable -- it is returning the same events!)
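From memory, the whole consumer side is about this much (a sketch):

    #include <stdio.h>
    #include <SDL.h>

    /* Drain the queue, switching on the tag of the union. */
    void pump_events(void)
    {
        SDL_Event event;
        while (SDL_PollEvent(&event)) {
            switch (event.type) {
            case SDL_KEYDOWN:
                /* event.key is the union member valid for keyboard events */
                printf("key: %d\n", (int)event.key.keysym.sym);
                break;
            case SDL_QUIT:
                /* user closed the window */
                break;
            }
        }
    }

Same C, same underlying Win32 events, but the type-per-message knowledge lives in one tagged union instead of WPARAM/LPARAM folklore.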
Most of that sample is actually about how to deal with the windowing API to get a window that redraws itself when resized. All of that has nothing to do with Direct2D. Drawing the rectangle is just three lines, actually.
Yep, it's a difficult API, which has unfortunately meant the functionality, which works amazingly well, hasn't been learnt or adopted as much as it should have been.
However, reading this, it seems the author simply wanted to collect trace information. It's probably a documentation issue, but typically you would just use tools like xperf or logman to collect and analyze traces based on built-in Windows providers. There's no need in this use case to utilise the API directly at all.
> You can use tools like PerfMon to view logged information about your game, like how much working set it was using or how much disk I/O it did. But there is one specific thing that directly accessing Event Tracing gives you that you can’t get anywhere else: context switch timing.
Well, I don't know about the API, but that's certainly the worst blog software ever made, requiring me to enable JavaScript and execute code in order to read some simple HTML and CSS. Why‽
Perhaps consider that coming on here and being condescending about the impact your own browsing habits have on your web experience - that of a very small but very vocal JavaScript-disabling demographic - makes the rest of us web developers really not very interested in building sites that meet your demands.
I agree that the site shouldn't need JavaScript to function, but I don't think comments like these help the goal of helping the content creators achieve progressive enhancement.
I read the entire article on my cell phone. The only problem I had (not sure what's causing this) is that if I accidentally selected text, it would jump to the top of the page, selecting everything along the way.
Disable all Javascript? No, that does not work. That's why extensions like NoScript exist, so that you can whitelist safe domains.
This is actually quite usable, but on about 3% of websites, it doesn't work. These sites are usually those horrible abominations of bloat-pages that, for no good reason whatsoever, need javascript files from 20 different domains in order to display a static page (I'm looking at you, Wired.com). In these cases, it becomes too tedious to pick the domains that should be whitelisted.
Most times, I simply close the offending website. In the rare case that I actually want to visit the site, I temporarily switch to Chrome.
I still go through the hassle of using NoScript, because it un-breaks pages that would otherwise, for no good reason whatsoever, decide it's OK to intercept common key combinations (CTRL+t), disable right-clicks, serve pop-ups, pop-overs, or pop-unders, start playing videos or sound without me asking for that (thank you very much), or generally try to hijack my browser, my data, or my computer.
> People still disable JS in 2016? I take it that 80%+ of the web is horribly broken for you guys.
Yeah, it is — but it's better to have to enable JavaScript on a one-by-one basis when desired than to travel across the Internet executing random code and impairing one's privacy.
Some websites require JavaScript to display images nowadays. What's wrong with <img>? Others require JavaScript to use the correct font. What's wrong with CSS? Still others require JavaScript to show text. What's wrong with HTML? Still others require JavaScript to build links. What's wrong with <a>?
JavaScript is destroying the Web. What was a powerful technology for disseminating formatted text across the world has become a cobbled-together GUI held together with baling wire and twine.
> Some websites require JavaScript to display images nowadays. What's wrong with <img>?
Lazy loading. By default, browsers will load all images as soon as the page loads. If you have a long article with potentially megabytes of images, this behaviour will seriously slow down the initial load, and many readers who leave early will still have downloaded a bunch of images they never saw. That's why many sites only load images as they are about to appear, but to do that, you can't use regular <img> tags.
You're right in that JavaScript shouldn't be a requirement though. A good implementation will provide a regular <img> inside a <noscript> tag.
> Others require JavaScript to use the correct font. What's wrong with CSS?
Avoiding "flashes of invisible text"[1] when using web fonts. Unfortunately, different browsers have very different strategies for loading web fonts. Some of them will wait for the web font to load at any cost rather than showing a fallback font in the meantime. This means that you can be stuck for ages with everything in place except for the text, which is seriously irritating.
I've never implemented FOIT mitigation myself, but I assume that you could (and should) again provide a fallback in a <noscript> tag. Even without it, the text will at least still display without JavaScript, just in the second font in the font stack.
> Some websites require JavaScript to display images nowadays. What's wrong with <img>?
Because the trifecta of retina (hi-dpi) displays, responsive design (using the same code for both mobile and desktop) and miserly bandwidth caps (especially on mobile networks) means that traditional <img> tags won't do it any more.
<picture> is designed to help with this - and I'm pleasantly surprised to find that it has landed in quite a number of browsers (http://caniuse.com/#feat=picture) - but you'll still need a polyfill, which is of course JavaScript.
Pages with a lot of images also use lazy loading to reduce bandwidth usage even further.
Of course a <noscript> should be included, but it's getting harder and harder to make the claim that it is economical to support people without JS.
Quite the opposite actually, it's a much better experience. I avoid many of the annoyances one can encounter on a daily basis.
Most of us are likely not outright disabling, we're white-listing, it's a big difference and IMHO the best way to browse.
It's an approach that basically considers the user experience on all websites to be hostile until you decide to grant them some trust. Given that many websites do implement hostile user experiences, it works out perfectly.
A blog is a collection of documents, and there's no reason to require JavaScript just to view them; the web was specifically designed for displaying documents.
It is just off by default. That doesn't mean we can't enable it.
Most of the time you enable only the domain of the url. If you really need to and you trust the page, you enable everything, including the trackers, which are blocked by other means anyway.
Ignoring the people who use some sort of blocking, everyone effectively has JavaScript disabled until it loads. I don't have JavaScript disabled by default, but I notice this regularly when pages either take a very long time to render or never do, because something (the network, the origin server) has an error.
This site worked great on my android phone. Much better than most responsive sites I've visited. Add to that it loaded quickly. I'd say it's a good design.
Downvote me as much as you want; reality doesn't change. If you use Opera with force reflow and you zoom in, the text of different paragraphs overlaps and you can't read anything.
Bad design as expected when trying to format HTML as if it was PDF. The designer has to adapt to unpredictable user agent settings, not the other way around.
(I'm not saying "poor Microsoft" - it's definitely not poor - and I'm not saying "you should thank people for whatever API they give you" - a lot of programs/features do more harm than good - and I'm not saying someone shouldn't vent having had a lot of time burnt by some ugly API. I'm saying specifically that this here is a dark corner of the OS that the API in question sheds at least some light on and I have a hunch, perhaps an entirely mistaken one, that it was someone's pet initiative and they thought it was definitely better than nothing. "Little did they know." To take a simple example of the other kind of API: sprintf, the version that doesn't take a buffer size, is a really bad API because that's something everyone's gonna use a lot and you just shouldn't give this kind of thing to people (give them snprintf at the very least) and you certainly can't be excused by "one should know what they're doing" in this case because it's between hard and uneconomical to use safely while passing the buffer size is the obvious thing to do, also one seems entitled to care-free string formatting, definitely 100x more so than care-free kernel status monitoring on the grounds of it being something many more people do much more often.)