Nvidia Volta Megathread

Status
Not open for further replies.

varis

Distinguished
Nov 9, 2010
400
0
18,810
Seems Volta is not really discussed much here, besides being mentioned as a Vega competitor. Time to get some info on the board?

https://www.extremetech.com/extreme/249106-nvidia-goes-ai-new-volta-architecture

This article gets me thinking that Volta is not just a GPU; there is an AI and general-computing side as well. It may sound far-fetched, but it could have repercussions for us. If AI is now becoming mainstream in business and specialized applications, we might see consumer applications some five years down the road, especially considering that players like Nvidia are willing to open source their code. Some of you might end up using your Volta GPUs for AI on your rigs, e.g. truly learning AI in games. But I think AI will have broader applications than just gaming, even on the local PC. Not to mention there are already applications that use GPU compute power for special-purpose tasks that are not AI.

Long-term, there is a lot of hype around AI currently, and it certainly looks like self-driving vehicles are just around the corner. It reminds me of the 1960s, when human-level AI was supposed to be just around the corner. But it's quite possible that in the first half of the 21st century we'll see a number of mainstream AI applications, and it's going to be one of the next big technological waves. (We're still waiting for a real biotech wave, too.)
 
Nvidia is definitely going all out with AI, and they are not going to give up that market to ASICs, which is why they put their own Tensor Cores inside their GPU. Many people see Nvidia only as a graphics company, and so they expect that once Nvidia secures its lead in GPU performance, they will take it easy and start charging high prices for very minimal performance improvements, like Intel. But Nvidia really has no time to rest, because they are also competing with companies like Google in the field of AI. They made their architecture more power efficient with Kepler, but that was not enough, because in mobile (where they competed with Tegra) more power-efficient architectures exist. While their venture into smartphones and tablets did not go as planned, they still benefited a lot from the experience of dealing with mobile. That's why, even though Nvidia is already ahead of AMD in GPUs, they still have to keep working hard.
 

varis

Distinguished
Nov 9, 2010
400
0
18,810
AMD and Nvidia have announced their press conferences at Computex 2017: Nvidia's is tomorrow (Tuesday) and AMD's comes Wednesday.

https://techdrake.com/2017/05/nvidia-and-amd-announced-computex-2017-press-conferences/

Overview of what's coming at the conference:

http://www.trustedreviews.com/news/computex-2017-news-launches-nvidia-intel-amd-asus-microsoft-announcements

Some reporters even seem to think Nvidia will launch Volta at Computex. I'd hope for a GTX 20 series at best...

Liveblog:

http://www.anandtech.com/show/11457/asus-computex-2017-press-conference-live-blog
 

varis

Distinguished
Nov 9, 2010
400
0
18,810
Sane thinking. So far Volta has been discussed, but it was all about AI and tensor cores.

Lighter gaming laptops with GeForce are coming, though :)
 
The main star on the gaming side of the presentation was no doubt the new thin gaming laptop. JHH even showed an actual product on stage. That is like showing off that you can get such performance on a laptop right now, whereas there isn't anything similar from the "competitor".
 
I think there are two main takeaways for the "Volta generation" from Nvidia, and they won't be pure "GPU" enhancements. There will be those as well, but my take is NVLink for consumers and hardware schedulers for Nvidia. Those two are really interesting, to me at least, even if "brute performance" doesn't go up two-fold.

Cheers!
 

techbard

Prominent
Jun 17, 2017
8
0
510
So what exactly does AI incorporation into the Volta architecture mean for gaming? I mean, it's badass, but I'm having trouble imagining the impact it will make on the PC gaming scene.
 


Probably none, just like it was with Pascal. Realistically, FP16 can still benefit gaming somehow (this is what AMD is trying to do with Vega). But those tensor cores? Honestly, I don't know. They are pretty much a specific function designed for AI-related tasks. Then again, GV100 is designed to be a compute monster: simultaneous FP32 and INT32 operation, massive FP64 (a 1:2 ratio to FP32), massive FP16, and the new tensor cores that are optimized for AI-specific tasks. At this point we just don't know whether those tensor cores are "mandatory" for all Volta designs or specific to GV100 only.
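For reference, each Volta tensor core performs a fused matrix multiply-accumulate, D = A×B + C, with FP16 inputs and FP32 accumulation. Here's a rough numpy sketch of that numeric behavior (my own illustration of the arithmetic, not Nvidia code; the function name `mma_4x4` is made up):

```python
import numpy as np

# Hypothetical sketch of the tensor-core operation D = A*B + C:
# FP16 input matrices, products and accumulation carried out in FP32.
def mma_4x4(a_fp16, b_fp16, c_fp32):
    # Widen the FP16 inputs so the multiply-accumulate happens in FP32,
    # which is the key to avoiding precision loss during training.
    prod = a_fp16.astype(np.float32) @ b_fp16.astype(np.float32)
    return prod + c_fp32

a = np.ones((4, 4), dtype=np.float16)
b = np.ones((4, 4), dtype=np.float16)
c = np.zeros((4, 4), dtype=np.float32)
d = mma_4x4(a, b, c)  # each element is the FP32 sum of four 1*1 products
```

The point is that the inputs are half precision (small, fast to move), but nothing is ever rounded down to FP16 mid-sum, which is why this unit is useful for neural-network training and not just inference.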

 
In terms of the market, Nvidia has established a generational lead over AMD (the latest AMD gaming cards are only as fast as Nvidia's second-best, two-year-old cards). With the release of Vega, I'm looking for Nvidia to make some Volta news toward the holiday buying season. Not that they will have a new product to sell, but they can insert some doubt into anyone looking to purchase the current generations. I think they will make maintaining that lead an important strategy.
 
Well, if we look at Nvidia's past releases, it is more realistic to expect Volta in Q2 2018, or Q1 next year at the earliest. For sure there will be no gaming Volta this year, since Nvidia already said so during their latest earnings call.
 


Oh, yes. I meant that in the gaming-plebeian context. I don't believe they'll have delays for the "pro" stuff.

Cheers!
 


Actually, I did not expect even the "pro" Volta to be ready before late 2017 or Q1 2018. But the competition in the AI space is heating up; Nvidia needs to make sure the GPU is not outdone by chips built specifically for AI applications, like Google's TPU. Those who pre-ordered the DGX-1 before Nvidia officially announced Tesla V100 will have it upgraded from Tesla P100 to Tesla V100 at no additional charge. It seems Nvidia wants to replace the Tesla P100 completely. That is very fast even by Nvidia's standards; just look at how long Nvidia kept GK110 as their compute flagship before replacing it with P100.
 
Nothing much to discuss on Volta right now, especially on the gaming side of things. But on the compute side there are probably some new enhancements that will make Volta a monster at mining...
 
Uhm... I'll go on a tangent here, because I don't really know the inner workings of AI development, nor is there abundant information on its implementation and design.

From a holistic design perspective, artificial intelligence has two main aspects: Process and Analysis.

For Process, I'm putting in the same bag the "simulation" or adaptation of a thought process into a machine language (Lisp or any language that is actually used), where you can indeed tailor ASIC stuff without much complexity in the process-to-metal translation.

For Analysis, I'm putting in the same bag all the "computation" behind each singular point of decision and intake of the information given (massively parallel is an understatement here). This is a huge bag, even bigger than Process, but straightforward: you can process as much as you can calculate (input vs. output, to an effect), so the effective decision nodes will be a reflection of the amount of data you can process in time (kind of obvious, right?). Simple in concept, a nightmare in actual implementation (those trees!), since coordinating the decisions into a single point of action is quite the challenge, depending on the implementation.

So, what I'm trying to get at here is whether Nvidia can stay competitive in this AI landscape.

Much like you can use general-purpose CPUs to do whatever you want, you can use GPUs for "general purpose" training and design of AIs, with some degree of openness in the approach. Specific hardware design should come after you have chosen a certain way of designing your AI process, I would imagine, and the rest is just growing the capabilities of the analysis you want. Since the computational needs will ultimately be tailored for mass production (specific products with "AI"-fed operations), you would choose specialized silicon for mass deployment, but the interim would always be "general"; or so I would imagine is the feasible way. All of that to say: I do think Nvidia has a chance to grab a lot of the market as long as they improve the analysis side as much as they can. Hence NVLink, and all the I/O improvements they can make to "feed the beast", will become a deciding factor in their success, in my opinion. Some of that can trickle down to consumers, I would imagine, but I don't really see how the general-purpose calculation route will help "push more triangles" for games. In the end, it seems Nvidia is better off making specific AI accelerators like Google does, and if Volta is that first step, I won't expect huge leaps in the gaming arena, TBH.

Cheers!
 
It all depends on what kind of improvements Nvidia comes up with for gaming Volta. Right now I suspect Nvidia will increase Volta's gaming performance using a method similar to how they gained more performance with Maxwell, even though Maxwell was built on the same 28nm process as Kepler: an architecture re-work. While Volta will be built on 12nm, from what I heard that is just a fancy name for a more optimized 16nm process. GV100 is a lot bigger and faster than GP100, but power-consumption-wise both are about the same.
 

manleysteele

Reputable
Jun 21, 2015
286
0
4,810
Even if the full version of GV102 comes in with 4096 cores, I'm more interested in the clock and power numbers. Right now, I'm thinking the GV102 needs to hit 2.5 GHz out of the box and reach 3 GHz when overclocked. It needs to do both things on air, with a similar power budget to GP102. A large shortfall on any one of these targets would be a disappointment to me. They also need GDDR6 to deliver its target performance on day one. Greedy of me, I know. Maybe unrealistic. I hope not.
 


Rather than increasing the clock, I think it is better for Nvidia to improve the IPC of their architecture instead. Increasing the clock further might not increase performance that much; we are already seeing diminishing returns with Pascal. As for what memory they use, I don't care much, as long as they can hit the performance target.
 

manleysteele

Reputable
Jun 21, 2015
286
0
4,810


Diminishing returns from Pascal? You're kidding me, right? Pascal buries Maxwell at the same core count, and that burying is almost all due to clock. Plus, it reaches those higher clocks using less power.
 


IPC-wise, Maxwell and Pascal are identical. True, Pascal can be clocked higher, but we have to remember that architecturally Maxwell and gaming Pascal are almost outright identical. When we increase the core clock, the performance increase will not be linear; there will be a point where a clock increase nets a much smaller performance gain. Heat can also be a problem at high frequencies. But if you think Pascal is amazing, then Volta will probably be even more frightening, lol.
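The diminishing returns from clock alone can be sketched with a toy Amdahl-style model (my own assumption, not Nvidia data: some fixed fraction of frame time is memory-bound and does not scale with core clock):

```python
# Toy model: frame time splits into a compute-bound part that scales with
# core clock and a memory-bound part that does not. The 0.7 compute
# fraction is an arbitrary illustrative assumption.
def speedup(clock_ratio, compute_fraction=0.7):
    # Amdahl-style: only `compute_fraction` of the work speeds up
    # proportionally with clock; the rest stays fixed.
    return 1.0 / ((1.0 - compute_fraction) + compute_fraction / clock_ratio)

# Each step of extra clock buys less: 1.0x -> 1.5x helps more than
# 1.5x -> 2.0x, even though both steps raise the clock by 0.5x.
s15 = speedup(1.5)
s20 = speedup(2.0)
```

Under these made-up numbers, a 50% clock bump gives roughly +30% performance, and the next 50% on top gives only about +18% more, which is the kind of curve being described: past some point, clock alone stops paying off, while IPC improvements attack the whole frame time.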
 

manleysteele

Reputable
Jun 21, 2015
286
0
4,810


Your first sentence confirmed my point nicely. On your second assertion, perhaps, but at what clock level does this occur? We're not there yet. Heat is an ongoing problem, even at present frequencies and densities. I do expect gaming Volta to make some tradeoffs between power, clock, and core count. Still, I also expect a good bump in the full performance metric, both in the middle and at the high end.
 