We already know that the GV100 has tensor cores, which greatly increase performance in deep learning applications.
For the most part it seems to me that the Volta architecture should have some number of tensor cores in every SKU. However, as we already know, the gaming architecture will be called Ampere. What is the possibility that those cards will have some tensor cores?
I can't imagine future smaller Quadro and Tesla cards not having tensor cores. But will nVidia create two completely different architectures, with basically different chips for gaming and compute? That has never happened before: all the compute cards were in fact gaming cards with some features disabled in the BIOS or driver. Having a separate die for each product category seems like quite a big jump.
I am not sure myself. Personally, I am asking because in the past most nVidia cards have been good enough for deep learning/machine learning. This time, however, it seems that deep learning researchers will either get a great performance boost from the tensor cores or need deep pockets to get those gains.
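For anyone wondering whether a given card has tensor cores: NVIDIA introduced them with CUDA compute capability 7.0 (Volta), so a simple check on the capability version works. This is just a sketch with a helper name of my own (frameworks like PyTorch report the capability via `torch.cuda.get_device_capability()`):

```python
# Sketch, assuming tensor cores appear at CUDA compute capability 7.0 (Volta)
# and later. has_tensor_cores() is my own helper name, not an NVIDIA API.

def has_tensor_cores(major: int, minor: int) -> bool:
    """True if a GPU with this CUDA compute capability has tensor cores."""
    return (major, minor) >= (7, 0)

# Examples: GP102 (GTX 1080 Ti) is sm_61, GV100 (Titan V) is sm_70.
print(has_tensor_cores(6, 1))  # Pascal -> False
print(has_tensor_cores(7, 0))  # Volta  -> True
```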
What are your thoughts? I know a lot of people here on Tom's Hardware use nVidia cards for similar work.
Thanks in advance.