Nvidia's new graphics cards are a big deal (theverge.com)
105 points by doener on May 8, 2016 | 43 comments


Earlier discussion about an article that goes much more in depth: https://news.ycombinator.com/item?id=11648110


I'm really skeptical these cards will be cost/perf competitive with AMD's upcoming Polaris.

The 1080 is the only card with GDDR5X, which from what I've read has twice the bandwidth of GDDR5. The 1080 and 1070 both have 256-bit buses, so if the memory clocks are equal, the 1070 (on plain GDDR5) will have half the memory bandwidth of the 1080.

In contrast, the current-gen AMD R9 290, R9 390 and higher have 512-bit GDDR5 buses, and the leaked Polaris cards have 256-bit GDDR5X buses (equivalent bandwidth).
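As a rough sketch of the arithmetic (a Python back-of-the-envelope; the per-pin data rates below are illustrative assumptions, not confirmed specs), peak bandwidth is just bus width times per-pin data rate:

    # Theoretical peak bandwidth = (bus width in bits / 8) * per-pin data rate in Gb/s
    def mem_bandwidth_gb_s(bus_width_bits, data_rate_gbps_per_pin):
        """Peak memory bandwidth in GB/s."""
        return bus_width_bits / 8 * data_rate_gbps_per_pin

    print(mem_bandwidth_gb_s(256, 10))  # 256-bit GDDR5X @ 10 Gb/s -> 320.0 GB/s
    print(mem_bandwidth_gb_s(256, 8))   # 256-bit GDDR5  @  8 Gb/s -> 256.0 GB/s
    print(mem_bandwidth_gb_s(512, 6))   # 512-bit GDDR5  @  6 Gb/s -> 384.0 GB/s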

AMD cards tend to be priced lower than NVidia cards, so we're very much in a wait-and-see mode right now.

(We do datacenter level cryptographic compute and our workloads are very memory bandwidth constrained.)


You do cryptographic compute. You're about the only HPC application that benefits from AMD architecture. Most other scientific compute and deep learning applications have horrible performance on AMD compared with NVIDIA (usually for software reasons that are in principle fixable, but it is what it is).


There are a lot of applications beyond crypto that depend on integer performance, even in graphics, not just scientific/engineering compute.


Sure, but if you're not integer-performance bound, which most graphics work won't be, it doesn't usually matter.


Yet a lot of stuff beyond crypto depends solely on integer performance plus memory bandwidth, graphics included. 2D image processing is better done in integer or fixed point.
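As a minimal illustration (my own sketch, not from the parent comment): a 2D alpha blend can stay entirely in integer arithmetic, e.g. with an 8.8 fixed-point alpha:

    def blend_fixed_point(a, b, alpha_q8):
        """Blend two 8-bit channel values with an 8.8 fixed-point alpha (0..256).
        Computes a*alpha + b*(1 - alpha) using only integer multiplies and a shift."""
        return (a * alpha_q8 + b * (256 - alpha_q8)) >> 8

    print(blend_fixed_point(255, 0, 128))  # 50% blend of white and black -> 127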


In HPC?


HPC included. A whole lot of simulations fit well into fixed point.


For general graphics usage (meaning mainly games) this would not be new. The current R9 300 and Fury cards perform very well against Nvidia's lineup, to the point that they are currently the best pick at every price point for most (non-CUDA) uses (unless one wants the strongest card for 1080p and DX11, which would be the GTX 980 Ti).

This might vary with prices in different markets, but AMD being the better choice in the 200€/$ region, which is currently the sweet spot for pure price/performance, is almost a tradition by now.

It should also be noted that the projected performance gains are qualified as being for VR. Those will still be strong cards for normal graphics, but if they were as enormous a leap there as well, Nvidia would have said so.


I think for many like me, the small marginal performance increase an AMD card might deliver is not worth the significant disadvantages AMD brings in software, heat, and noise.


I honestly do not get that perspective. But please note that this comes from a (well, former) gamer's perspective; how AMD GPUs perform in ML and what the software support is like there is just not my area of expertise.

For games the performance difference is not marginal. The R9 380, and especially the R9 380X, is a lot faster than the GTX 960. The R9 390 is on par with the GTX 970, unless the VRAM of the latter becomes an issue or one plays at higher resolutions. For all of them there is DX12 on the horizon, which so far gives a big boost to all AMD cards and none to Nvidia's.

Software just is not an issue. You find that claim everywhere on the net, but in practice AMD's drivers are not worse. I have used both, had problems with both, and had stable systems with both; there is no meaningful difference there. Professional reviews back that up (note: I'm not talking about how they are to program for; that I don't know). AMD's drivers for Windows have not been bad for many years now, and the Crimson driver update also overhauled the UI in a good way. On Linux the situation is more complicated: while Nvidia has by far the faster driver, it is closed source. The free AMD driver is capable of playing games now; it has made enormous progress in the last year, and it is a pleasure to see that. Having a free driver is a big advantage.

There is a big difference in heat, which comes down to power usage; Nvidia cards are just better there. Noise should be comparable, though: since both shut their fans down at idle and the cooling solutions from Sapphire & Co. are quite good, that is not something where AMD cards are really bad.

I do not want to sound too pro-AMD – Nvidia makes great cards and they definitely have their place – but I do not get the negative attitude against AMD.


I own an AMD 380X and price/performance is why I chose it. However, I think I made the wrong choice, after experiencing how buggy AMD drivers are.


The deciding factor in current-gen cards for me isn't cost/perf (as it's reasonably close) but perf/watt, which is where the 9xx series is clearly better than the AMD cards.

Is AMD tackling this issue in the next generation or are they continuing to be competitive with cheap but hot chips?

Aren't the higher power requirements creating higher costs in datacenters (power, cooling) that make the AMD cards less attractive? The 390 vs 970 is nearly a 2-to-1 power advantage for Nvidia!
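For a rough sense of scale (my own back-of-the-envelope; the TDP figures, electricity price and PUE are assumptions for illustration):

    def annual_power_cost_usd(extra_watts, usd_per_kwh=0.10, pue=1.5):
        """Yearly cost of an extra `extra_watts` of draw running 24/7,
        with PUE folding in cooling/distribution overhead."""
        kwh_per_year = extra_watts / 1000 * 24 * 365
        return kwh_per_year * usd_per_kwh * pue

    # Assuming roughly 275 W (R9 390) vs 145 W (GTX 970) under load:
    print(round(annual_power_cost_usd(275 - 145), 2))  # ~170.82 USD per card per year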


Hmm, good on the NSA for being cost-effective.

:-)


I worked as an intern on some stereoscopic/multiview R&D maybe 12 or 13 years ago. It was similar to the work in this video, except meant for autostereoscopic displays like the 3DS and displays with more viewing angles than 2. ( https://www.youtube.com/watch?v=acIe1vlpd84 )

The multiple projections feature is cool. Aside from saving the double geometry/draw call overhead, they're probably also clipping/transforming/rendering each stereoscopic pair of fragments at the same time to encourage cache coherency and reduce bandwidth requirements on the GPU.

Overall, the overhead of rendering the same thing twice from slightly different projections shouldn't cut your rendering capabilities in half; it's mostly the same stuff. So this feature should make a big difference for VR.
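A toy sketch of why (structure assumed for illustration, not NVIDIA's actual pipeline): the geometry is shared, and only the final per-eye transform doubles, while everything upstream of it happens once per frame:

    import numpy as np

    def eye_projection(offset_x):
        """Toy 4x4 view-projection with a small horizontal eye offset (illustrative only)."""
        m = np.eye(4)
        m[0, 3] = offset_x
        return m

    vertices = np.random.rand(10000, 4)  # shared geometry, prepared once per frame
    left, right = eye_projection(-0.03), eye_projection(0.03)

    # Only this per-eye transform (and the rasterization behind it) doubles;
    # loading, animation, culling and draw-call building are shared between views.
    left_clip = vertices @ left.T
    right_clip = vertices @ right.T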

I was waiting for something like this from the Sony PS3 when they started introducing 3DTV-compatible games, but I guess the market wasn't there at the time for that driver development effort, considering most of the games released in 3D were PS2 ports. Seems like they missed an opportunity to have a killer app for 3DTV: nearly every PS3/PS4 game could have had built-in 3D with only a 25% or so performance hit, which could have been compensated for with downsampling/upscaling.


After adding a 4K display and an Nvidia Shield to my home theater, I was a bit disappointed that the recommended way to max out 4K games was two GTX 980 Tis in SLI. $1200 is out of my budget, but I can't fit 2 cards in my micro-ATX case anyway. The 1080 is definitely going in my build.


Anyone know when the Mobile Fiji, Polaris, Zen, Pascal hardware is coming out?


All but Pascal are only best guesses:

* Pascal will arrive with the GTX 1080 at the end of this month; this is covered in the OP's article

* Polaris is scheduled for summer/autumn, but as with Pascal the flagship models will come later

* Zen will arrive at the beginning of 2017

* Mobile Fiji – no idea, I have read nothing about it

Edit: Just realized that maybe you meant to prepend "mobile" to all of those?


Yeah, I basically want both the CPU and the GPU to be FinFET in a laptop that is VR-ready, i.e. Skylake + Nvidia 970 or better.


Was Zen 2017? The last I saw had it dropping in October of 2016. I actually decided to hold off on an upgrade for it.


I'm not actually sure. It might very well be Q4 2016.


This is cool without a doubt. However, it's fairly well understood that Nvidia has graphics tech about 5 years ahead of the current market level. This allows them to eke out small increments to customers and stay ahead of the curve. It's a smart move, and a model used by many of the big hardware manufacturers (Intel, for example).

However, VR has forced their hand into releasing a significant upgrade. So if VR turns out to be nothing more than a trend we forget about for another 20 years, at least it will have resulted in some amazing graphics cards.

Just not sure there are any PC-focused game developers left to push these cards...


The cost of sitting on that R&D would be huge and ridiculously wasteful. What actually takes a long time is going from a proof of concept in a lab to selling millions of the things for a few hundred dollars.

Often it's the other way around: they can make a few 'super ultra' cards, but nowhere near enough to meet demand. So they price them as 'halo' items and they more or less instantly sell out for months. But in theory, if you can get that card, it is the 'fastest' for a little while.


I've heard these "conspiracy" rumors since high school. They never made sense to me because of competition... I can't believe they have the engineers formerly known as ATI that far in the rear-view mirror, especially since the manufacturing processes everyone in silicon works with are the same. So where does this "fairly well understood" assumption come from?


Here's the deal: now that both Intel and AMD are operating at 14nm/16nm, things are going to rebalance in both the CPU and GPU markets. After much deliberation, I have decided to wait for Zen for all my future builds, because I have this strong feeling AMD is about to be back and stronger than ever before.

Basically, Intel was ahead of AMD(ATI), but not anymore...

I welcome more competition in the market either way.


I love AMD and I'm rooting for them, but I'm not sure we are going to have another Athlon run here.

AMD is still having trouble beating out nVidia and Intel. The Zen architecture could in theory have HBM for the system CPU/APU, but I'm guessing it won't.

Finally, a lot of Intel's performance comes from compiler optimization and design, and last time, with Athlon, it took years for that to roll out. Unless Zen is years ahead of Intel, I don't know if they will be able to edge that far ahead. Intel has been doing CPU-GPU pairs for more than 5 years now, and they are getting better.

I think for AMD to win this round, they would need a power ratio that allowed them to dominate the laptop space with APUs and eliminate the discrete GPU as a component. Right now AMD and Intel are both on 14nm, so I think it would be hard to win there, but we will see.

nVidia is on the 16nm process, so if AMD's chips are equal in performance, they might draw less power. However, for VR, nVidia has been working very closely on the software side with the various companies, and I think they will have a strong software advantage for a while.


For Zen, they don't have to be the best, just good enough. Frankly, for consumers Intel has been good enough since the 2600 days (except on the mobile front). So Zen should be fine in that market.


Good enough, or "we're #2!", might be all that's needed to chip off some server share, which is good for AMD. But for my machine, I'm not sure I'd give up any single-threaded performance for the promise of 8 physical cores with Sandy Bridge-level IPC.


Because AMD cut Linux support for their mid-range GPUs 2 or 3 years after their release, they can royally go fuck themselves.

They would really need to do something groundbreaking for me to even consider them.

Meanwhile I'm on an Nvidia 460GT playing Witcher 3 on high settings (yes, sometimes cutscenes (of all things, huh) stutter at the beginning, but otherwise, for a 2010 GPU, it runs at ~60fps), still getting updates, and Linux "Just Werks(TM)". In a year I'll prolly upgrade...


I'm dealing with this right now - I (foolishly) bought a Radeon 390 instead of an Nvidia card of the same price, and the Linux drivers for it are complete clown shoes. Hell, Ubuntu 16 doesn't even have fglrx. The open-source drivers kinda-sorta work, but they're definitely not good for gaming, and I get a whole bunch of tearing.


I'm sorry to hear that you're having issues with gaming on the open-source driver. May I ask where the issues are specifically? We're still working out the rough edges of OpenGL 4.3 support, so some of the very latest games are going to have issues because of that. Apart from that, I do think we're pretty good at fixing bugs that are actually reported to us, and the (few) places where we're behind in performance are slowly getting better as well.


Tearing is the big one. As soon as a game starts playing, I get a whole bunch of horizontal lines that will continuously shift and flicker as the screen refreshes. It doesn't happen with regular video, but it happens with Kerbal Space Program (not that graphically intensive) and PCSX2 (very graphically intensive).

I don't know nearly enough to say exactly where the problem is, but one of the more frustrating aspects is that there is next to no documentation on troubleshooting such issues. It might be a problem on my end for all I know, but all that I can tell is that before updating to Ubuntu 16, fglrx worked okay, and after updating, playing games is extremely aggravating.

I would like to say that the drivers are a hell of a lot better than they were in 2011, though - last time I had to deal with the open-source drivers, it was "Display minimum functionality so that you have the ability to install the proprietary drivers," and HDMI support was horrifyingly bad. These days, it's "Works for everything except gaming," which is completely serviceable for me.


I agree that the configuration story needs work and documentation.

What you could try is to set the vblank_mode=3 environment variable to force V-Sync. If that helps, it means there's either a stale bad config file or a bug somewhere in the client API implementation...

Make sure you have no ~/.drirc, and you really shouldn't need an xorg.conf either.


I added vblank_mode=3 to my .bashrc, and then removed my xorg.conf (which was configured for fglrx, I assume from when it had the Catalyst drivers). ~/.drirc didn't exist on my system. No change. :(


Yes, the performance gains the free driver has seen over the last 1-2 years are incredible.


My next graphics card for my Linux box will definitely be AMD. I want to support a company that has been as supportive of open source as they have. The strides made by the AMD open source driver lately have been very impressive. nVidia comes nowhere close. Remember Linus famously giving nVidia the finger a year or two back?


I'm all for competition, but if I had a dollar for every time someone on the internet said to wait for AMD/ATI's next revolutionary product, only for them not to deliver... Yeah.


>because I have this strong feeling AMD is about to be back and stronger than ever before.

Sorry, but AMD is not coming back in the foreseeable future. It's all about economies of scale, and Intel has them and AMD does not.


It helps if you think of it as a cash flow. I remember the first Nvidia card I was given (as you needed to test your game on a variety of graphics cards).

This was back in the days of 3DFX and PowerGL, you know, when there was actual competition in the market. The nVidia card blew them all away; they destroyed the competition. So how do you continue to grow revenue after you've destroyed the competition? Innovation, but not too much (what Apple does now, and very well).


Seems like you made some enemies with the truth. Nothing has come out for several years and the prices on the high-end cards haven't moved an inch, because AMD has had nothing to compete with. Nvidia absolutely sits on tech and milks everything, and always has - markets with zero competition always do this; see local cable-company monopolies for great examples.


There is no benefit to NVIDIA holding back on perf/W improvements:

* It would make all the cards more expensive, with better cooling and better VRMs needed.

* It would lock them out of much-needed laptop wins, where they face real pressure (not least from Intel with Iris Pro) to prove they're even needed.

* It would harm their GPGPU ambitions, which are very perf/W sensitive.

There is no benefit to them holding back on die area/perf either, because that directly hurts their own margins and makes chip fabrication riskier.

The only performance I could see them leaving on the table is making a bigger chip, and in the previous generation they covered that with the move from the GTX 980 to the GTX 980 Ti. And that came 9 months later (not 5 years), putting out pretty much the biggest 28nm die they could.


I don't buy it. If they're so far ahead of the game, why aren't they crushing AMD like Intel did with CPUs? AMD is still largely competitive in the GPU space, making money and present in important devices like both major gaming consoles, Apple MacBook Pros, and iMacs.


They are crushing them. CPUs are a different market, and once your general architecture is not competitive you are going to fall off harder, since servers don't have as wide a performance range to cover.

When AMD dropped off in the CPU market, no one was interested in their architecture at all, because it was just inferior from top to bottom of the performance range. In the GPU market, by contrast, they still had some value in their low- to mid-range products.

That said, AMD is behind in the GPU game too. It's not like things are improving: http://www.fudzilla.com/images/amd_versus.JPG



