So, when does NVIDIA Pascal come out?

Status
Not open for further replies.

sterlin22

Honorable
May 17, 2012
Told myself that now that I have a job, I'd snag a GTX 1080 (or equivalent) on the conditions that I keep this job until then, have some financial wiggle room, do well in school, and stay healthy (lots of conditions).

Current GPU is a 970.

Main questions I have are:

1) Do we have a price estimate for the GTX 1080 yet?

2) Do we have any idea how it compares to the GTX 900 series? I know it's supposed to have 4 stacks of HBM2 (compared to the Fury X's 4 stacks of HBM1), and since it's a whole new architecture it's supposed to have a lot more transistors, but is there anything solid on the performance difference? There are leaks suggesting 16 GB and 32 GB VRAM cards are coming, but my assumption is that those would be aimed specifically at supercomputers.

3) Do we know of any rumors or leaks as to when the GPU comes out specifically?

4) Would an i7-4790K at stock speeds (4 GHz quad-core) become a bottleneck when paired with a high-end Nvidia Pascal card for gaming?

5) What resolution would you assume to be the "limit" for "ultra" next-gen gaming at sufficient frame rates with a single high-end Nvidia Pascal card (GTX 1080 or equivalent, specifically)?
 
Solution
Two good rumor websites to check out are:
http://videocardz.com/
http://wccftech.com/

Here's some Pascal news:

Nvidia gets first samples of GP100 from TSMC, begins internal tests
http://www.kitguru.net/components/graphic-cards/anton-shilov/nvidia-receives-first-samples-of-gp100-chips-from-tsmc-begins-to-test-them/
Lol, funny. I was just looking up Pascal right when I saw this post.

Right now, the rumored release date is mid-2016. We have no information about pricing yet. However, if it follows previous generations, an i7 or i5 will not bottleneck top-tier cards.

There won't be any more info on Pascal until next year, unfortunately. I'm personally waiting for Pascal too.

One thing that's pretty much guaranteed, though, is that Pascal will be designed specifically with 4K in mind.
 
As far as I know, Pascal should launch as early as Q4 2016. As for the i7-4790K bottlenecking a GTX 1080 (or even a GTX 1080 Ti, for that matter): not a chance. It doesn't bottleneck two GTX 980 Tis even at stock speed. There's a strong possibility of an 8 GB GTX 1080, and a 12 GB GTX 1080 Ti certainly can't be ruled out. As far as resolution is concerned, 4K will be the high-end standard. For VRAM, the way I look at it:

GTX 1060 = 4 GB
GTX 1070 = 6 GB
GTX 1080 = 8 GB
GTX 1080 Ti = 12 GB
GTX TITAN (next one) = 24 GB
 
Sep 30, 2013
AMD will get access to HBM2 first, though, so I guess their cards will show up before Nvidia's. Or could they use that to delay Nvidia's release if AMD isn't ready?
 

RobCrezz

Expert
Ambassador


Why will AMD get access first?
 
Sep 30, 2013
http://techfrag.com/2015/07/16/report-amd-secures-priority-access-hmb2-memory/

"Both Nvidia and AMD are planning to introduce HBM2 based GPUs and surely, the battle to produce faster and better cards will be intense as always.

However, if a new report is to be believed, AMD is going to have the upper hand when it comes to mass-production of HBM2 based cards. How? Well.. The reports suggests that AMD’s partnership with SK Hynix, gives AMD priority access to HBM2 production capacity."

Maybe not first, then, but rather priority access. I'd assume that's because they participated in development, signed up earlier, paid more for it, got first-generation HBM, or some combination of those.
 

RobCrezz

Expert
Ambassador


"It’s an unconfirmed report and the source isn’t the most reliable around the digital realms, so take the news with a grain of salt."
 
Well, if we look back at some of the rumors we've heard so far... part of them were mixed with AMD fanboys' wishful thinking :lol:

A few things I think are possible: AMD probably tried to strike a deal with SK Hynix so they'd get first priority on HBM, and they also hoped that their early venture and effort in co-developing HBM would somehow give them an edge against Nvidia. I think AMD is probably aware of Nvidia's capabilities by now. After all, Nvidia isn't cash-strapped like AMD and can fund its R&D better. By the time Nvidia ships its second generation of HBM, it will probably surpass AMD's implementation, like it did with GDDR5.

So here we are with the latest rumor. I think many people already took rumors like AMD getting priority access to HBM2 as fact. But the latest rumor suggests AMD may not use HBM2 much; maybe this one comes from the memory makers themselves. I've seen people take this new rumor in their own directions: some say HBM2 will be delayed, and some suggest AMD has a problem sourcing the HBM2 it needs. If you ask me, it's probably related to their financial problems.

I mean, they can make an arrangement with SK Hynix to get first dibs on HBM2, but that doesn't mean it will be free; they still need money to pay for it. SK Hynix most likely wouldn't agree to any arrangement that puts itself at a disadvantage, so if AMD isn't buying, Hynix can sell to any interested buyer. Remember, SK Hynix won't want to get dragged into the Nvidia/AMD rivalry. It doesn't matter that AMD co-developed HBM with Hynix; Hynix as a company still needs to keep operating. Also, Nvidia sells more discrete GPUs than AMD. There's no way Hynix would miss that business opportunity just to satisfy AMD's ego.
 
CPU and bottleneck:

First, it always varies by the game.

Secondly, we're seeing a shift toward DX12, which can:
a) use MORE of your CPU (many games currently max out at the equivalent of about 2.5 cores)
b) run much more efficiently (so a game that used, say, two cores under DX11 might only need 1 to 1.4 cores in a similar DX12 title in the near future)

So basically a good i5/i7 Intel desktop CPU should last long enough for most DX11 games to make the transition to the more CPU efficient DX12 environment.

Thus, I expect a good i5/i7 to last a long time.

(I have an i7-3770K and expect to buy a high-end NVidia Pascal card roughly Q3 2016 with 8GB VRAM)
 
A bit off topic, but I thought this was interesting from Nvidia about DX12 "Do's and Don'ts".

In particular THIS caught my eye:
"Don’t rely on the driver to parallelize any Direct3D12 works in driver threads"

(It's noteworthy that work which normally relies on drivers, sometimes delivered months after a game's release or never, is now done in the game itself.)

and

"Use the DX12 standard checks to find out how many GPUs are in your system
- No need to use vendor specific APIs anymore"

(we'll see how that pans out... )

More info to skim through HERE if you can't sleep:
https://developer.nvidia.com/dx12-dos-and-donts#enginearch
 

JUICEhunter

Honorable
Oct 23, 2013
980/970 replacements first (the full and cut-down little Pascal), then a Titan and Ti (the full and cut-down big Pascal) will come out after that, unless Nvidia wants to start with the high end, but they're pretty consistent about replacing their oldest GPUs first.

I'd personally prefer it if they didn't go into quad digits. I think they should change the naming convention to something entirely new now, something like "Nvidia VZ 99" and "Nvidia VZ 88", etc. :p
 