OpenGL in 2014 (tomdalling.com)
284 points by ingve on Sept 21, 2014 | 115 comments



The multiplicity of APIs demonstrates that the problem is hard. The needs of game developers pull the APIs in a specific direction. And these requirements must be addressed, because the games market is huge and pushes the envelope.

But other users may have different needs. OpenGL is used by games, but not just games. For example, at Taodyne, we use OpenGL for real-time 3D rendering of business information on glasses-free 3D screens. I can tell you that my pet peeves with OpenGL have nothing to do with what's being described in any of the articles.

Some of the top issues I face include 3D font rendering (way too many polygons), multi-view videos (e.g. tiled videos, which push texture size limits, or multi-stream videos, which bring a whole bag of threading issues), and large numbers of large textures without the ability to manually optimise them (e.g. 12G of textures in one use case).

Heck, even the basic shader that combines 5, 8 or 9 views into one multiscopic layout for a lenticular display makes a laptop run hot for a mere HD display, and requires a rather beefy card if you want to have any bandwidth left for something else while driving a 4K display.

Many of these scenarios have to do with limitations on texture sizes, efficient ways to deal with complex shapes and huge polygon counts that you can't easily reduce, very specific problems with aliasing and smoothing when you deal with individual RGB subpixels, etc.

Of course, multiscopic displays are not exactly core business right now, so nobody cares that targeting them efficiently is next to impossible with current APIs.


Sure, the problem is hard. But so are most problems worth solving. That it's hard doesn't mean it's impossible or excuse bad solutions; Direct3D actually does an extremely good job at it, in fact.

If you've not used modern Direct3D, it's hard to concisely explain how much better it solves this problem than OpenGL. It's just a cleaner, more elegant, faster API (of course, I know this wasn't always the case, but we're talking about now).

Many of the "non-game" problems you mention really have nothing to do with the API and should be solved by third party libraries or the OS. Text rendering is not a GPU feature so don't expect a graphics API to treat it as such. The problems game developers find in a 3D API affect everyone, because at its core the 3D API is just our way of talking to the GPU.

The main complaints from "non-game" applications like this are either common to everyone (texture, render target limits), or application specific things that have absolutely nothing to do with the GPU (like text rendering) and have no place in a low level GPU API.

I know it would make your life easier if the API was more like a library with exactly your use cases already implemented, but that's simply not the role of a hardware API.

P.S. You probably shouldn't be using polygons for font rendering. Nobody does it this way in practice (except rare cases perhaps). But this is another discussion; feel free to message me privately and I'd be glad to provide more info on best practices.


It seems no one has mentioned the Longs Peak fiasco yet, which is an important part of understanding OpenGL history and the committee(s) in charge of the standard:

http://en.wikipedia.org/wiki/OpenGL#Longs_Peak_and_OpenGL_3....

TL;DR: This is not the first time people have been pissed at OpenGL. The last time the industry, developers, etc. were sick and tired of it, around 2006-2007, it was decided to do something about the API and an effort was initiated. Once the work was close to finished, those who had caught a glimpse of this yet-to-be-released API were excited and eagerly waiting for the release. Then the OpenGL committee vanished from the scene for a year or so, and when it re-appeared, it released the same old shitty API with a handful of function calls on top of it.


OpenGL might well be the "only truly cross-platform option", but it seems to me that, for games or mobile app development, getting stuff drawn on screen is only part of the problem. The rest is about doing so with the minimum use of cycles - either for better frame rates or better battery life. I can easily imagine that this is a classic 80/20 problem, with the 20% that takes 80% of the time being adequate ("butter smooth") performance.

So, given that the capabilities of the graphics hardware can vary a lot, how closely can a single, unified API like glnext approach optimal use of the hardware? And without the kinds of platform-specific code paths which are necessary under current OpenGL?


The idea isn't to completely eliminate cross platform rewriting (which may sacrifice efficiency), but to make it minimal (and flexible).


As long as there are multiple different GPUs around, there will always be a need for multiple code paths if you want to get good use of those GPUs, whether you use the same API or not.


Having a single, unified, inefficient base code path that you later optimise at hot spots is invaluable for validating the non-performance (i.e. most) parts of your game.


I didn't say otherwise.


All the whining and complaining makes me wonder how anyone was able to write anything with OpenGL at all. This is fascinating, because a great number of people were actually able to write awesome games and applications with this API.

Look at the whole lot of mobile devices. I have no numbers to base this statement on, but I would be bold enough to claim that OpenGL is, thanks to its multiplatform ability, by far the most successful graphics API out there. The set of devices that ships with some form or another of OpenGL support outnumbers every other graphics platform. This alone is a huge accomplishment. Heck, even Minecraft was able to run on PowerPC systems until they bumped the supported Java version[1].

But then I look at the link and have to admit that the criticism is still correct. The API is still pretty rough and could see some improvements. I know this myself; I also played around with OpenGL at some point. There is a lot of boilerplate code that needs to be written before you can start on the real game. This was always the case. This is why we always had an engine, a framework to build on.

But to say that it all is a huge pile of shit is a little bit harsh …

[1] https://help.mojang.com/customer/portal/articles/884921-mine...


Well by that logic, IE is a great browser because a lot of people got some amazing applications to run on it despite the fact that it was a nightmare to develop for...

OpenGL is almost like the IE of graphics development. You usually have to support it because it's so ubiquitous, but it constantly makes you want to tear out your hair because it does so many unexpected things and you have to memorize 5000 little caveats.


What OpenGL does is hard: making things work fast across multiple platforms with backward compatibility. It's not like IE, which was/is stagnant because of big-company sluggishness/exceptionalism; it's an open-source effort trying to do a huge amount with limited resources. There are always higher-level platforms if you want something easier to work with, but you sacrifice a little speed.


> What OpenGL does is hard

If you think what browsers do is easy... :-)

Here's the thing though: most of what OpenGL screws up is the not-hard stuff. Here are the main things wrong with OpenGL right now:

* Everything operates on and modifies implicit global state, especially with the various "bind" patterns (direct state access will help a lot with this). Even a first-year CS student knows global state is bad. It was sort of acceptable-if-ugly back when the fixed-function pipeline was the thing, but that's been deprecated for almost the last 10 years, and with how you program a GPU now, that global state and those bind patterns are ridiculous (the first sketch after this list contrasts the two styles).

* Error handling is unfriendly, confusing, and slow. It's very easy to accidentally send a bad parameter to a function, have OpenGL silently ignore it, only for something to blow up in a completely different module because that error went silently unchecked. This is just bad. Not only that, but glGetError rarely gives you any useful information. Why can't it at least tell me what function failed, and which parameter/value made it fail? A lot of times you'll see an error in a complex function like glTexImage2D that has a ton of possible parameter combinations for "invalid value". Well, which one is invalid? What inputs would be valid? The driver knows, so why can't it tell me?
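
To make the first point concrete, here's a minimal sketch of the two styles side by side (assumes a GL 4.5 / ARB_direct_state_access context; `tex` is a texture object created earlier):

    /* Sketch: the same texture setup, old style vs. GL 4.5 DSA. */
    void setup_texture(GLuint tex)
    {
        /* Classic bind-to-edit: calls act on hidden global state; whatever
           was bound to GL_TEXTURE_2D before this is silently replaced. */
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

        /* Direct state access: the object is named explicitly and no
           binding point is disturbed. */
        glTextureParameteri(tex, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    }

And for the error-handling point, here's roughly the dance (a sketch; the callback path assumes a GL 4.3+ / KHR_debug context, which finally gets you human-readable messages):

    #include <stdio.h>
    /* GL declarations assumed via your loader of choice. */

    /* KHR_debug callback: the driver hands you an actual message. */
    static void GLAPIENTRY on_gl_debug(GLenum source, GLenum type, GLuint id,
                                       GLenum severity, GLsizei length,
                                       const GLchar *message, const void *user)
    {
        fprintf(stderr, "GL says: %s\n", message);
    }

    void install_debug_handling(void)
    {
        /* Classic polling: a bare enum, no hint of which call or
           which argument was at fault. */
        GLenum err;
        while ((err = glGetError()) != GL_NO_ERROR)
            fprintf(stderr, "GL error 0x%04x\n", err);

        /* GL 4.3 / KHR_debug: register a callback instead. */
        glEnable(GL_DEBUG_OUTPUT);
        glDebugMessageCallback(on_gl_debug, NULL);
    }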

> It's not like IE, which was/is stagnant because of big-company sluggishness/exceptionalism; it's an open-source effort trying to do a huge amount with limited resources.

Have you looked at the companies that comprise the khronos group? They're not exactly poor. The problem isn't resources, it's bad design by committee.


> with the various "bind" patterns

Nvidia have been pushing for bindless graphics.

http://developer.download.nvidia.com/opengl/tutorials/bindle...


Wasn't Valve working on the error messaging and debugging problem?


Wasn't IE the first browser to introduce Ajax?


Yes. So what?


>The API is still pretty rough and could see some improvements. I know this myself, I also played around with OpenGL at some point. There is a lot of boilerplate code that needs to be written before you can start yourself with the real game.

OpenGL is a graphics library (and a rather low-level one too), so it has very little to do with games. It's only a specification for a wrapper above the GPU hardware.


Like javascript is an awesome programming language because it is so widely used? Oh, wait...


Except other APIs offer what in OpenGL requires an engine/framework.

- Font loading

- Texture loading

- Shader loading

- Math library

- Geometry loading

- Meshes

- Integration with the OS UI

- Debugging capabilities


Which other APIs would those be that offer this functionality? In D3D this used to be offloaded into a separate D3DX API, but AFAIK this is no longer supported in Windows 8 (at least in the WinRT API) and has been moved into separate, mostly personal projects (e.g. http://directxtex.codeplex.com/). A 3D rendering API should only be concerned with efficient 3D rendering and not burden itself with specific resource file formats. For OpenGL there are plenty of libraries to choose from, for instance:

- glm for math (http://glm.g-truc.net/0.9.5/index.html)

- gli for texture loading (http://gli.g-truc.net/0.5.1/index.html)

- assimp for general asset loading: http://assimp.sourceforge.net/index.html

- the STB headers (not OpenGL specific): https://github.com/nothings/stb

- GLFW as window system glue and input wrapper

- ...and more which I am not aware of or forgot to list

GPU vendors also have SDKs and especially debugging tools (e.g. NVIDIA nSight, which integrates into VStudio). It's not in one place like in DirectX, but on the other hand, the OpenGL world has a lot more platforms and usage scenarios to cover than D3D or the various new-style APIs like Metal or Mantle (these are the actual motivation for OpenGL-Next: reducing overhead, even when this means a lower-level API which is even more focused and harder to use than before).
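
To give a feel for how little glue is needed, here's a minimal sketch combining two of the libraries above (GLFW for window/context, stb_image for decoding; error handling omitted and "texture.png" is a placeholder path):

    #define STB_IMAGE_IMPLEMENTATION
    #include "stb_image.h"
    #include <GLFW/glfw3.h>

    int main(void)
    {
        glfwInit();
        GLFWwindow *win = glfwCreateWindow(640, 480, "demo", NULL, NULL);
        glfwMakeContextCurrent(win);

        /* Decode an image file into raw RGBA pixels... */
        int w, h, comp;
        unsigned char *pixels = stbi_load("texture.png", &w, &h, &comp, 4);

        /* ...and hand them to GL. That's the whole "texture loading" story. */
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        stbi_image_free(pixels);

        while (!glfwWindowShouldClose(win)) {
            /* ...draw with tex... */
            glfwSwapBuffers(win);
            glfwPollEvents();
        }
        glfwTerminate();
        return 0;
    }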


> What other APIs would that be which offer this functionality?

Game console APIs.

> In D3D this used to be offloaded into a separate D3DX API, but AFAIK this is no longer supported in Windows8

DirectX is now integrated into the Windows SDK and makes use of new APIs:

Math is provided via DirectXMath.

Shaders can still be combined via Effects.

Interactions with the windowing system are done via DXGI.

Image loading is done via Windows Imaging Component.

> For OpenGL there's plenty of libraries to choose from, for instance:

Yes, I've had to go library hunting a few times already. It gets tiring trying to figure out which are the current ones, for people like myself who only occasionally do graphics stuff.

For example, a few years ago DevIL was the go-to library for image loading.


Game console APIs have it easy though, since they only support a single GPU and a single texture compression scheme. In the OpenGL world it is trickier, especially on mobile, since each GPU vendor used to implement only their own patent-infested compressed texture formats (this useless GPU-vendor infighting held OpenGL back more than anything else). And most of the Windows APIs you listed are not part of D3D, but either part of (what used to be) DirectX, or some completely unrelated part of the Windows API, and with this you've locked yourself into the Windows world. From your requirements it sounds like you're best served with SDL2 or a similar medium-level wrapper API, at least if you want to support platforms outside of Windows.
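
For reference, the SDL2 route really is just a handful of calls (a bare sketch, minus error checks):

    #include <SDL2/SDL.h>

    int main(int argc, char *argv[])
    {
        SDL_Init(SDL_INIT_VIDEO);
        SDL_Window *win = SDL_CreateWindow("demo",
            SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
            640, 480, SDL_WINDOW_OPENGL);
        SDL_GLContext ctx = SDL_GL_CreateContext(win);

        /* ...GL calls as usual; SDL also wraps input, audio, timers... */
        SDL_GL_SwapWindow(win);

        SDL_GL_DeleteContext(ctx);
        SDL_DestroyWindow(win);
        SDL_Quit();
        return 0;
    }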


I'd agree with this. He doesn't want a low-level graphics API, which is what OpenGL is. He wants everything else around it. For what it's worth, I totally want that too. I'm just not blaming OpenGL for being a leg when I really want a tail.

SDL2 is cool, and so is @icculus. It isn't everything he wants, by a long shot, but it's good at what it is. I used SDL2 for a while when writing my game engine in C++, and didn't port to Mono for any reason related to it (just a burning desire to have a scripting layer that wouldn't make me put my head in an oven).


Now they just have to create ONE single API, instead of forcing everyone to write multiple code paths to target the various flavours, extensions and driver workarounds.

Specific graphics APIs only matter when graphics middleware is not an option.

Which OpenGL always requires. Since the standard leaves out how image/shader/texture/fonts/GUI/math are handled.

I think the commoditization of engines will be the second coming of the OpenGL 2.0 - 3.0 stagnation, if they don't improve on these areas.


OpenGL has support for images/shaders/textures. The standard is quite clear about how these are handled. Anyway, the trend (and the demand from games programmers) is away from more features and towards a simplified, lower-level API.


So in which section are APIs for loading defined?

I am having some problems locating the page.


That would be an API for asset management. OpenGL doesn't do that. It does do textures and shaders though.


Which was my point, other graphics APIs do offer such support.


That's not the job of a graphics API. It's a convenience utility that is not really feasible for OpenGL because it runs on so many platforms. You're asking for platform specific features on a cross platform library.


What is platform specific about loading assets?

OpenGL is specified as a C API.

Such an asset-loading API could be defined just on top of ANSI C IO functions.

What about forcing every single OpenGL developer to write the same set of shader loading/compiling/linking code?


Compared to what OpenGL has to accomplish, asset loading is orthogonal. There are a number of texture formats that most graphics cards support natively; the OpenGL API already exposes more than those, and the driver converts them on the fly. It is trivial to implement a BMP loader and slightly harder to load something like textures compressed with DXT, and there are obviously third-party libraries that do that for you. There are simply too many potential texture sources to integrate that into an API. The same goes for 3D model loading: there are simply too many model formats, and how you organize your vertex data is application-specific anyway.

The reason the shader loading/compiling/linking code has to be written by the developer is also fairly obvious: OpenGL does not want to support cross-platform file IO, and in real-world projects hundreds of shaders are actually dynamically linked in different combinations into render pipelines to create the desired effects; the effects themselves are often created in some kind of flow-based programming environment that is translated into shaders behind the scenes. Basic shader loading/compiling/linking, on the other hand, is trivial to implement.
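
Since this claim comes up a lot, here's roughly what "trivial" means, on top of plain ANSI C IO (a bare-bones sketch; status checks trimmed to comments, GL declarations assumed):

    #include <stdio.h>
    #include <stdlib.h>

    /* Read a whole file with plain stdio -- no GL involvement needed. */
    static char *slurp(const char *path)
    {
        FILE *f = fopen(path, "rb");
        fseek(f, 0, SEEK_END);
        long n = ftell(f);
        rewind(f);
        char *buf = malloc(n + 1);
        fread(buf, 1, n, f);
        buf[n] = '\0';
        fclose(f);
        return buf;
    }

    static GLuint compile(GLenum stage, const char *path)
    {
        char *src = slurp(path);
        GLuint sh = glCreateShader(stage);
        glShaderSource(sh, 1, (const GLchar *const *)&src, NULL);
        glCompileShader(sh);
        free(src);
        return sh;   /* check GL_COMPILE_STATUS in real code */
    }

    GLuint load_program(const char *vs_path, const char *fs_path)
    {
        GLuint prog = glCreateProgram();
        glAttachShader(prog, compile(GL_VERTEX_SHADER, vs_path));
        glAttachShader(prog, compile(GL_FRAGMENT_SHADER, fs_path));
        glLinkProgram(prog);   /* check GL_LINK_STATUS in real code */
        return prog;
    }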

Finally Window/Context creation is also platform specific and therefore can't really be specified.


> OpenGL does not want to support cross-platform file IO, and in real-world projects hundreds of shaders are actually dynamically linked in different combinations into render pipelines to create the desired effects; the effects themselves are often created in some kind of flow-based programming environment that is translated into shaders behind the scenes.

Which other APIs do support, with effects, scene description files, ....

> Basic shader loading/compiling/linking on the other hand is trivial to implement.

Hence why every single OpenGL newbie keeps copy-pasting it from somewhere.

> Finally Window/Context creation is also platform specific and therefore can't really be specified.

So why did they go to the trouble of creating EGL?


OpenGL is just for drawing. If you want asset management, you'll probably find it in SDL[0], which is also cross-platform and used heavily by almost everyone that uses OpenGL.

[0] https://www.libsdl.org/


Including myself, and no, SDL doesn't do assets.


We need OpenGL as an alternative. What would Direct3D have been today without competition? But at the same time, GL is such a PITA to use directly that I don't bother without some middleware abstracting it away.


At the same level as the game console APIs.

You don't get OpenGL on those.


The PS3 had OpenGL available, and the Wii and Wii U use OpenGL. I don't know about the PS4.

Note: the Xboxes use a modified version of DirectX which is more like Mantle/DX12 (low overhead).


> The PS3 had OpenGL available

Again this urban legend.

PS3 had PSGL, which was OpenGL ES 1.0 using Cg as shading language.

This is quite different than OpenGL.

The Wii uses GX and the Wii U uses GX2; they are modeled on OpenGL, which is also a bit different from being OpenGL.


Neither the Wii nor the Wii U uses OpenGL, and on the PS3 it may as well not exist, since pretty much no game uses it.


There is no reason not to have OpenGL there besides the jerkiness of console manufacturers. Hopefully Steam Machines can put an end to such a situation.


The PS3 had OpenGL. Nobody used it because it was vastly slower than libgcm. The PS4 has OpenGL. Almost nobody outside of indie-land has even looked at it because there's no reason to when libgnm is right there and you still pay a significant performance vig, in an industry where you do not get the luxury of ignoring performance, for an only marginally better API. The Xbox One, on the other hand, does provide what appears (I don't have access to any of these consoles, I just know a lot of game devs who like to drink) to be a complete DX11 implementation...and it's plagued with perf issues you don't see on a PC because the assumptions of the API don't map to the actual hardware--which, I understand, the lower-level constructs of DX12 will address.

The future of non-indie game development seems very much to be going towards the hardware: Mantle, Metal, libgnm, DX12. If you're on a console, you can afford to actually develop for the console. If you're not, you don't matter. (Indies don't care, but few people care about them, so that tail wouldn't wag the dog. Nobody cares about Steam Machines, so I don't know why you'd bring them up.)


> The PS3 had OpenGL

No. It had OpenGL ES 1.0 with Cg for shaders.

> The PS4 has OpenGL

Since when? Only if a third party company is offering it.

http://develop.scee.net/files/presentations/gceurope2013/Par...


PSGL and GLES are close enough for the discussion at hand; if Cg vs. GLSL is really going to grind your gears you've already lost. You're right about the latter, though--I misunderstood a prior discussion with a friend of mine (he was speaking in hypotheticals, I thought he was speaking in specifics). Mea culpa.

It doesn't change the abject silliness of the sibling thread though.


Yeah, but by not being the same thing, in the end the result is the same. Multiple code paths, or backends to handle the differences.

> It doesn't change the abject silliness of the sibling thread though.

Agreed.

I do boring IT stuff. But once upon a time I had some opportunities in gaming (a long, long time ago), which I blew by being more focused on the OSS aspects of the tooling than on what really mattered: having a game.

I really learned the hard way how different the game development culture is from the FOSS one, which I should have known given its similarities with the demoscene.

Even though I am not in the industry, I do follow it regularly.


It depends on where you're coming from though, yeah? Like, PC->PS3, (good) OpenGL/Cg to GLES/Cg wasn't a big step (I use Cg with OpenGL today). Nobody really used it anyway, though, because libgcm was just so vastly superior.


The fact that OpenGL on consoles has poor performance is the fault of the console manufacturers, not of OpenGL itself. Also, their release schedule results in outdated hardware in comparison with regular computers; i.e. developers are forced into a mentality of writing at a very low level to squeeze performance out of a dated spec. More frequent updates could solve this situation, and that's what Steam Machines aim to change as well. There is no valid reason why OpenGL can't be made to perform well on consoles except the lock-in mentality which plagues consoles market.


> There is no valid reason why OpenGL can't be made to perform well on consoles except the lock-in mentality which plagues consoles market.

Oh sure. Lock-in mentality is the reason. Never mind that Apple is doing Metal and AMD is doing Mantle because of defined, demonstrable issues with higher-level abstractions, both GLES and DX11 respectively. Certainly never mind that DX12 is shaping up to provide the same sort of low-level primitives as Mantle/libgnm/Metal. No, they're just hissss lock-in hissss. Can't possibly be because, like, it's a better option, can't have that. Like, do you know how many calls you have to make to accomplish stuff in some basic OpenGL extensions? And how much less you can do without the significant amounts of overhead it causes--overhead you can't remove because it's in the spec and now it never goes away?

Did you ever stop to think, for just one second amidst your "there is no reason" absolutism about something it certainly sounds like you don't know much about past laymanship...maybe they're not laymen and maybe they have a good reason for stripping away abstraction? Not even providing their own incompatible abstraction, but just getting rid of it? I mean, Snidely Whiplash wants to make his platform dev-friendly too, but instead of doing that he gives you this bleeding bare-metal API? Come on. Have you stopped to give these guys an ounce of credit?


And complaining about a "release schedule resulting in outdated hardware" fails to comprehend that "outdated hardware" is every nine months. That is a feature, so that normal people don't have to go buy nVidia GTX9581 Platinum Edition cards every two years. Your position frankly beggars belief. The PC gaming treadmill sucks. The mobile treadmill sucks. And you want to make people jump to it on consoles? Do you know a normal person?


> Never mind that Apple is doing Metal and AMD is doing Mantle because of defined, demonstrable issues with higher-level abstractions

Bad comparison. While AMD proposed to Khronos their Mantle as a base for OpenGL-next and in general aim to make it open, Apple didn't do any such thing. Apple does Metal precisely out of lock-in mentality. Any API that will lock developers to one platform is a dead end.

> "release schedule resulting in outdated hardware" fails to comprehend that that is a feature, so normal

Yeah, a "feature" that degrades games' quality because developers are constantly held back by the requirements of console ports. It's not a "feature" to stagnate hardware for such long periods of time. Consoles are closed platforms with manufacturers in full control. That has some pluses, like more stable expectations, but the minuses outweigh them. Open platforms are the future, and the locked-in ones will have to consider some changes when stronger competition gives them a kick.


> Bad comparison. While AMD proposed to Khronos their Mantle as a base for OpenGL-next and in general aim to make it open, Apple didn't do any such thing. Apple does Metal precisely out of lock-in mentality.

If you don't think Apple has invested a shit-ton of money into GLES to make it habitable I just don't think you're living in a world inhabited by the rest of us. Apple has spent tons of man-hours and money dealing with the OpenGL process. But no, Snidely fuckin' Whiplash is over here curling his moustache and hissing "yessss, yesss, everything we do by stripping out layers of abstraction and making things faster for obvious and comprehensible reasons is eeeeeeevil."

They still support GLES. They still participate in the process. But they offer a frankly better solution too, one on top of which you can build an agnostic API that can better leverage each platform's better options than OpenGL.

> It's not a "feature" to stagnate hardware for such long periods of time.

Cool. Get normal people to shell out for a new console--you know, that thing that lives in a home theater center that you buy and stop thinking about--every two years. Don't worry, everybody else will definitely wait for the market to prove you right, they won't be over here cooking on what actually works.


Upgrades should not depend on where you place your computer. There is absolutely no logical relation between how often you want to upgrade and the type of gaming it's used for. So who said consoles have to be all locked up and controlled by Sony, MS and Co.? There is no reason to, but they do it in order to control developers. It's all about lock-in and reducing choice, not about anything else.

> If you don't think Apple has invested a shit-ton of money into GLES to make it habitable I just don't think you're living in a world inhabited by the rest of us... they offer a frankly better solution too

So, where is their proposal to make Metal into OpenGL-next? Until it surfaces, Metal will remain a lock-in attempt.


> Upgrades should not depend on where you place your computer.

They aren't computers. This is the blinkered view of people way too close to their choice of silicon that ignores what people who actually buy hardware want. Normal people don't want computers. Computers suck and are hard to use. They want predictable, turn-key solutions. I'd point to the smoking ruin that is the commercial HTPC market as evidence that, hey, maybe this actually is a thing that works...but I have this sneaking suspicion you'll just say that's definitely not a true Scotsman, no no no.

> So who said consoles have to be all locked up and controlled by Sony, MS and Co.?

Nobody, so don't put words in my mouth.

> So, where is their proposal to make Metal into OpenGL-next? Until it surfaces, Metal will remain a lock-in attempt.

If you would stop extrapolating general-purpose software to hardware layers for just a tick, the notion might enter your head that Metal is designed around the behaviors of the guts of the A7. It doesn't make sense to generalize it, because the generalization of it into "OpenGL-next" takes away the reason to use it! You can bleat 'till you're blue in the face about how terrible it is that people would trade abstractive consistency for performance, but it's what everybody's done in games--and, when you add enough zeroes to a problem, software in general--forever.

Abstractions inevitably fail. Your ideology can attempt to ignore it. It'll fail, too. And the arrogance that you display in this insistence that people have thought about this much much more than you must be acting dishonestly because they disagree with your ideological bent is simply astounding.


> Nobody, so don't put words in my mouth.

Sony and MS did. "Nobody" now is probably Valve who want to disrupt the sickening locked up consoles scene. Time will tell if they'll succeed.

> It doesn't make sense to generalize it

No, it does make sense to provide a generic cross platform API. We aren't in the dark ages of computing anymore. Forcing developers to use hardware specific code is backwards unless we are talking about drivers and assembly like development.

> Abstractions inevitably fail.

Is that Apple's argument for Metal? Abstractions don't fail. They save tons of effort for developers. Otherwise why don't you propose writing programs in machine code like in the olden days? That for sure can produce the most optimized result in theory.


> We aren't in the dark ages of computing anymore. Forcing developers to use hardware specific code is backwards unless we are talking about drivers and assembly like development.

Do you think game development followed the web off the performance-doesn't-matter cliff and went "welp, we're just going to go write everything in Ruby"? What exactly do you think game development is if not intensely performance-critical low-level software development? Do you somehow think they aren't writing whacks of assembly and adding code paths for specific versions of drivers because of reasons X, Y, and Z?

> Otherwise why don't you propose writing programs in machine code like in the olden days?

You literally must be trolling. What do you think game developers do when they run into hot spots in their code? Bunches of AAA games (and every engine vendor, thanks much) have one or more gurus around who eat and sleep the assembly language for every deployable platform. I know a guy--socially, we're not friends, but we've talked about this--whose entire job is rendering optimization for iOS. He owns a big whack of stiff C where every allocation is carefully considered and which has more than a little assembly in there. Because that's what you do when you need to wring performance out of a system. He digs Metal because it makes him better at his job--at making the hardware optimally do what he very badly needs it to do.

Stop trying to conflate web development on future computers from beyond the moon with game development because what you know does not hold. These games are not being written in Node, they are not being written in environments with garbage collection--hell, a lot of the time they don't even use inheritance in C++ because of the cost and complexity of vtables (this is one of the reasons the standard collection libraries in C++ are a template soup, as it happens). It's not the "olden days," it's today, now. That's why these low-level APIs are coming about: because the abstractions you think don't fail don't cut it.

"Abstractions don't fail" is one of the most unintentionally funny things I've read in a very long time and I can literally think of a dozen places just off the top of my head in my very much non-gamedev job where the abstraction layers other people put in place screwed me hard. Quit while you're very far behind.


> Do you think game development followed the web off the performance-doesn't-matter cliff and went "welp, we're just going to go write everything in Ruby"?

Cross platform APIs can be native in C/C++. Ruby or Node have nothing to do with it.

> It's not the "olden days," it's today, now.

Sure, it's today now on consoles which provide low quality generic APIs because no one cared to provide high quality ones. That's not a valid reason to say "fire! we now need machine code to save the gaming industry at large". That's a reason to say that console manufacturers don't care about developers besides locking them into their platforms. Luckily this is going to change. Of course you are free to use low level code still when it's really needed.

> I can literally think of a dozen places just off the top of my head in my very much non-gamedev job

We are talking about gaming. I can think of cases where low level development is needed too. That's irrelevant to the discussion about using cross platform APIs vs using platform locked ones.


If you are seriously going to attempt to reject the parallel between assembly code and low-level platform-specific APIs, I'm utterly done with you.

(And there's x86 and x64 assembly in UE3. For PCs, right now. But don't let that stop you from telling people how to do their jobs. It's a shame @ShitHNSays got canned.)


If you use Metal or similar, you'll either have to show users of other platforms to the door or support N such APIs. Most would prefer to reduce that number, not multiply it.

All that cheering for Metal is not sincere. Talk to actual developers who are forced to support many APIs because no one cared to make one cross-platform one work well. The bottom line: we need fewer lock-in APIs promoted as the way to develop games.


> Talk to actual developers who are forced to support many APIs...

This is not how the game industry works.

There are studios specialized in porting games for specific platforms.

Usually a studio focuses on one specific platform and outsources the remaining platforms to such porting studios.

This is a common practice since the early days.


In some cases. In other cases engine developers support multiple back ends, and for actual game developers this is simplified by using that engine. But in any case, if a game wants to be inclusive rather than exclusive, the burden of supporting multiple APIs will show up: either for the studio developers themselves, as the expense of hiring contractors who do the porting, or for the engine developers.


Sony asked major studios if they wanted OpenGL ES 2.0 and they didn't care.

http://sandstormgames.ca/blog/tag/libgcm/

> At one point, Sony was asking developers whether they would be interested in having PSGL conform to the OpenGL ES 2.0 specs (link here). This has unfortunately never happened however, as developers seem to have mostly preferred to go with libGCM as their main graphics API of choice on PS3. This has meant that the development environment has started becoming more libGCM-centric over the years with PSGL eventually becoming a second-class citizen – in fact, new features like 3D stereo mode is not even possible unless you are using libGCM directly.

> There is no valid reason why OpenGL can't be made to perform well on consoles except the lock-in mentality which plagues consoles market.

Game studio culture doesn't care about FOSS.

What matters is making the best game on the platforms the publishers are paying in advance for.


The "major studios" bit is probably there because most developers simply don't target PlayStation, i.e. indie developers aren't that interested in it. So I wouldn't take that as an indicator that developers at large don't care about cross-platform APIs. If Sony offered an open platform without barriers to entry, those answers would be very different.


Most developers don't care about the platforms with the most reach and most invested gamers. Sure. Must be why every indie developer I know would sell their eyeteeth to get even a mid level promo package on PSN.

You're taking your ideology and trying to retrofit the world to accommodate it. Doesn't work that way. In the world of what is, rather than the world of what one might like it to be, nobody really cares. Tough.


> You're taking your ideology

You are talking about Apple's spin of Metal (which is lock-in ideology). In practice it's easier for developers to use one toolkit instead of using 20 locked-in incompatible APIs.

If developers use ready engines it becomes easier for them, but that burden is shifted to engine developers then. Someone will have to deal with that major mess. There is clear pragmatic benefit in cross platform APIs for gaming, and claiming that it's just ideology is nonsense.


> In practice it's easier for developers to use one toolkit instead of using 20 locked-in incompatible APIs.

Not when that "one toolkit" is worse.

You are conflating "lock-in" with "optimal suitability for a platform". Hardware matters. Your continual handwaving can't ignore that.


> Not when that "one toolkit" is worse.

Here it's the case of "worse because it wasn't made better", not because it can't be better. That's my point. So difference in approaches with this subject like between AMD and Apple shows who cares about making it better and who cares about locking developers into their platform.


So, your "point", in everything that I have read from you thus far, is that, if the world only worked the way you think it should work, then OpenGL would reign supreme as king. We can just ignore the many ways in which it sucks because, hey, you're talking hypotheticals!

Right. Of course, the real world works nothing like the world that you have described, but ok, good luck with that.


There's a lot of Linux-on-the-desktop-style wishcasting in this thread.


Or rather Apple-style lock-in cheerleading.


Nobody's advocating lock-in. What is being advocated is using the right tool for the job. What has been brought up--by you--is the completely laughable notion that "abstractions don't fail" when the overwhelming majority of people involved with high-performance graphics are pretty sure that the abstractions have failed. Which is why they're going exactly the other way from your ideological wishcasting and building libraries and frameworks that are tailored to the hardware rather than using a driver to overcome the impedance mismatch at the cost of performance.

You don't know what you don't know but it isn't stopping you from talking shit. Stop.


> when the overwhelming majority of people involved with high-performance graphics are pretty sure that the abstractions have failed.

Poor abstractions. I'm not convinced that there can't be a well designed cross platform graphics API that is sufficient for the majority of cases. Prove that it's impossible or stop, because otherwise your claim that you don't advocate lock-in doesn't sound sincere.


>Prove that it's impossible.

The abominable snowman exists. Prove that it doesn't or shut up.


Anything more useful to say than trolling comments? The commenter above claimed that cross-platform graphics APIs aren't the way to go because they failed. I see no proof that it's not a possibility. They didn't even fail - they were quite useful in many cases, but with their current downsides they didn't live up to their real potential. So that can be improved by making better cross-platform APIs, instead of claiming that one has to run to hardware-specific APIs right away and there are no other options.


I'm not trolling; I'm pointing out the fault(s) in your logic. Perhaps it is a "possibility", but it doesn't exist today, and better options are available, so why would I use an inferior implementation? Because it more closely coincides with my world view?

No, I need to get software written that runs well. Now, if my requirement is to run on N different platforms (where N > 1) then I will have to look at OpenGL. If it's not a requirement then I won't waste my time. Your comments reek of ideology and completely lack practicality. You make assumptions about a complex subject which smarter people than you or I have spent years working on, coming to different conclusions.


> Perhaps it is a "possibility", but it doesn't exist today

So? That's not a reason not to make one, or to claim that since it "failed" everyone needs to run to platform-specific APIs. That was the point.


So indie developers are "most" developers? What planet do you live on? Do you know what the term "indie" comes from?


Indie means studios independent of major publishers, i.e. self-publishing ones. Of the recent games, the majority I'm actually interested in are from indie developers, and they are true works of art. The rest are mass-market junk.


That's beside the point. What you are interested in and what the majority of people are interested in do not align. And where there are more consumers, there are more developers.


Did you see some studies that say there are fewer indie studios than publisher-funded ones?


It's actually OpenGL's fault. The main issue is that modern GPUs execute draw calls concurrently. If you've done some programming you might know that you cannot really have global mutable state together with parallel execution, yet global mutable state is the core of OpenGL. So the state has to be copied every time you launch a draw call, which is expensive even with the hardware aids available; definitely more expensive than just launching parallel draws, each with its own state.

OpenGL isn't the only one suffering from this; DirectX has a very similar problem up to DX11, which DX12 is out to address.

There are extensions that let you do the same in OpenGL, but then what would be the point of bringing in all of OpenGL if you are only going to use an extension that is nothing like OpenGL and does the same thing the native API already does?


OpenGL 4.5 addresses some of the concurrency bottlenecks. My point was not to say that OpenGL is perfect as is. But any potential successor can't claim that OpenGL is bad because it aims to be cross-platform, and that better alternatives therefore all need to be platform-specific. That's false logic. A cross-platform API can be made better.


> OpenGL 4.5 addresses some of the concurrency bottlenecks.

How so? It adds DSA, which is just a different way to access global state; global state is still the cornerstone of 4.5. The only way it addresses performance issues is through the "bindless" extensions, which pretty much toss away the entire OpenGL pipeline and leave just a multi-draw-indirect call with everything pushed into shaders. It's too extreme for many developers, even where it's been available on non-NVidia hardware.
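
For context, the end state of that approach looks roughly like this (a sketch of GL 4.3's glMultiDrawElementsIndirect; per-draw data is fetched in the shaders, e.g. keyed off baseInstance or ARB_shader_draw_parameters' gl_DrawIDARB):

    /* The draw parameters live in a GPU buffer instead of in GL calls.
       Struct layout as defined by the GL spec for indirect draws. */
    typedef struct {
        GLuint count;          /* indices per draw                   */
        GLuint instanceCount;
        GLuint firstIndex;
        GLuint baseVertex;
        GLuint baseInstance;   /* can index per-draw data in shaders */
    } DrawElementsIndirectCommand;

    /* Fill an array of these once, upload it to cmd_buf, then replace
       thousands of individual glDraw* calls (and the state churn
       between them) with a single submission: */
    void submit_scene(GLuint cmd_buf, GLsizei draw_count)
    {
        glBindBuffer(GL_DRAW_INDIRECT_BUFFER, cmd_buf);
        glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                                    (const void *)0, /* offset into cmd_buf */
                                    draw_count,
                                    0 /* commands tightly packed */);
    }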

> My point was not to say that OpenGL is perfect as is.

Never mind me then. I only replied to your claim that it's not OpenGL's fault it's slow on consoles.


> How so?

I meant the new flush control support.


That's not the concurrency issue I meant; that's about accumulating commands in userland and transferring them to the kernel-mode driver. The concurrency that affects performance the most is inter-draw-call concurrency, as I said above. Flush control is about GPU-CPU concurrency, which, btw, still has a long way to go to reach the level of control available in native APIs on consoles.


Hopefully OpenGL-next won't be a minor modification of the current limited design but a new design from the ground up. So far they have indicated that that's the goal.


Nobody can forbid you to hope, but JFYI there have already been two major versions of OpenGL (3 and 4) in the age when all available GPUs were parallel. In fact, at the time of OpenGL 2's design there were already parallel GPUs on the market, and everybody involved knew they were the way of the future (companies on the ARB either design their own GPUs or have access to pre-release designs).


I'm not really sure why previous major versions failed to redesign the API from the ground up to fix its major deficiencies. Was it an organizational flaw or a lack of good proposals?

I guess they have now come to a better realization of the situation, or there are now more people who actually want to improve things significantly rather than avoid the issues.


It's not a mystery to me: OpenGL is a scene description API, not a real-time graphics API. It comes from an environment where frame times under 5 seconds (not milliseconds: 12 frames per minute) are considered real-time and compatibility is much more important than performance. Apple was the only ARB member that had been pushing for OpenGL to become game-grade real-time, mainly due to the lack of any other real-time 3D graphics API on its platforms (both Mac and iOS), which is no longer the case as Apple is pushing its own API now.


> It's not a mystery to me: OpenGL is a scene description API, not a real-time graphics API.

Rather, it originally was such an API. I'm saying: why couldn't they draft a new API from the ground up as one of the previous major versions? It looks like only now are they doing exactly that. So why wait so long? That's what I'm unsure of. Probably some organizational problem, or the lack of major pushers for such a change in the past.


Great article, thank you! Any news as to when we will get a WebGLNext?


We're still waiting for WebGL 2.0 to be finished, never mind widely implemented, and that's based on a GLES version released two years ago rather than one that doesn't exist yet. Don't hold your breath.


On Linux you could in principle use the lower level hardware specific command issuing APIs as well. Mesa is not a privileged library.


We all got messed up by the transition to OpenGL 4, and now we are gonna have another OpenGL? I don't see OpenGL getting out of this funk until the language you learn today is still useful tomorrow. Perhaps a new API is a step in the right direction, but things are gonna hurt bad for years to come, especially when OEMs don't support the API.


My current approach is to use Go and target WebGL as the lowest common denominator, but with OpenGL (and/or OpenGL ES) backends as well.

That way graphics code written once can run on OS X, Linux, Windows, browser (including iOS).


How do you target WebGL with Go? Is it possible to cross compile Go to JS?


It seems there is a cross-compiler for Go (https://github.com/gopherjs/gopherjs), and there are also bindings for WebGL (https://github.com/gopherjs/webgl).


As graetzer said, I use GopherJS and WebGL bindings.

For a simple example, see https://github.com/shurcooL/play/commit/cd45204c1b2d89062255....


WebGL

OpenGL is now available to more people than ever, by an exponential amount.

It is supported by all major browsers. From IE, to Firefox, to Chrome, to Android, to iOS, and more.


> ... Android, to iOS ...

On iOS, only starting with iOS 8

On Android, only with Firefox. OEM-modified browsers don't do it, and Chrome only enables it on specific devices.


The saying is that total rewrites are always a bad idea. It'll be interesting to see if this one will be an exception to the rule.


That saying is wrong. Total rewrites for the wrong reasons are a bad idea. Total rewrites for the right reasons and with _the right team_ can be great (i.e. better abstractions and better-understood domain requirements, which can lead to a simpler, better codebase, enable features that were not feasible before, and so on). Succinctly: if the theory of the existing system is still understood well enough, and the team understands the domain requirements and has experience implementing similar systems, then a rewrite can be a great idea (ref. Naur, 'Programming as Theory Building', on what the theory of a system means as a concept and what its implications are). For specific businesses, total rewrites can be startup-level risky and should be done only for specific domain requirements. I think this is usually what is meant by the 'bad idea' meme. Joel Spolsky wrote about this at some point and gave the example of Netscape's rewrite attempt. His advice of "don't do it" was highly context-dependent, though.

When we discuss creating a new industry-standard API, we are not discussing a single rewrite. There are a few typical patterns for how these are done: 'fostering' an existing tech stack under a public institution, or developing the API publicly. Public API development usually consists of two parts: agreeing on the theory of the API, and member institutions implementing one or more reference implementations of said API at the same time.

I'm not sure how to gauge the risk of this public API development. There are costs for member organisations, but usually they are insignificant compared to the size of an institution. There is usually no established product whose feature development and support stagnate because of this; rather, the development is done to facilitate potential future products and features. Of course, API adoption can fail (ref. for example OpenVG), but that fate can befall any new development for any number of reasons.


Very informative, thanks for sharing that.


The way I've heard it is: "Total rewrites are always a bad idea. Sometimes they are necessary."


For an API with 20 years of cruft? Rewriting is the only option.


The article gives age as the only argument for why it carries all that cruft, but age can never be the only factor. If it were, the 23-year-old Linux kernel project would be rotten by now, especially with the 31-year-old GNU toolkit.

Sure I understand that there might be some stuff that should be deprecated (much like with HTML elements, the old stuff can be supported while newer projects use newer and better stuff), but rewriting from the ground up also means that you have to re-educate every developer out there. They might as well go and release for Windows instead since they probably have some experience with DirectX too.

Rewriting something that many people use is a bad idea, especially when there are well-funded competitors trying to get developers onto their own platforms.


Yeah, but Linux is not afraid of removing cruft. Computers (and the x86-i386 arch) have changed very little over the past 20 years, even with the 64-bit stuff.

Graphics cards evolved much more in the last 15 years.

The first Direct3D versions were also bad, but they evolved and removed cruft and broke backwards compatibility.


Well, as the recent article about modernizing less showed, such old projects frequently do show their age.


Unfortunately, it also means GL has 20 years of usage history and a large number of apps using it.

Rewriting those will be much harder. A lot of them are probably not open source.


Why would you want to rewrite them? The old OpenGL will just stagnate, I imagine, not go away.


They'll continue working without a hitch. Various versions of OpenGL can live with each other.


I don't do 3D programming but have benefited, as a happy Steam customer, from the great games on Linux now. Having said that, it's generally a bad idea to rewrite software with such a massive customer base on the basis of age alone; the term "cruft" doesn't justify the notion by itself. Refine and update, sure, but rewriting seems like a major overreaction.


This is a new API, not a new codebase. New APIs happen all the time.


Well, OpenGL exists in the first place as a cleaned-up version of IrisGL (it was originally going to be IrisGL 5).


It's not a rule though. Total rewrites are often not a good idea, but sometimes they are.


Is there any ETA for OpenGL-next?


So basically, "OpenGL in 2015" will be great!


Whoever doesn't force me to use C/C++ or JavaScript.




