 NVIDIA GeForce Community V15 (new era pascal), ALL HAIL NEW PASCAL KING GTX1080 out now

marfccy
post Jul 26 2015, 04:22 PM

Le Ponyland!!!
*******
Senior Member
4,254 posts

Joined: Nov 2011


QUOTE(Moogle Stiltzkin @ Jul 26 2015, 03:20 PM)
okay that makes more sense.
will be interesting to see how things pan out here. worst case scenario, limited stock = higher price = i'm screwed  rclxub.gif
actually there was some chatter on this very subject. like nvidia had to come out with something good rather than rest on its laurels, because amd can come out with something straight after fiji. i think there was mention they were aiming for this, though i'm not fully sure.
http://www.pcper.com/news/Graphics-Cards/R...B-HBM2#comments
i think the problem with fiji was, hbm by itself wasn't going to be a game changer where it mattered most. or the benefits wouldn't be significantly transparent to the user (e.g. huge leaps in fps gains, for example)

if the pascal rumors are true, then they would have not only intro'd hbm, but also added other stuff that would increase performance significantly enough to have the wow factor.  just pray it's true  smile.gif
so size-wise, is it still on track to be similar to the fiji ? here's the pic shown
user posted image
*
it's because, like all new tech, nobody is there to make use of it yet

give it time, and soon devs will start to utilise the high bandwidth capability, making the current cards near obsolete
SheepMekk
post Jul 26 2015, 04:55 PM

Casual
***
Junior Member
314 posts

Joined: Jun 2008
Hey all, any of you guys with a 980 Ti have any idea of your fan speeds? Mine doesn't even turn on at 60C shocking.gif regulating fan speeds with Afterburner in the meantime.
eugene88
post Jul 26 2015, 05:07 PM

Look at all my stars!!
*******
Senior Member
2,176 posts

Joined: Sep 2010


QUOTE(billytong @ Jul 26 2015, 04:14 PM)
last I heard 347.88 is the last good driver. The recent batch has all gone crazy.

Anyway, I just heard some news about the GTX 950 coming out on Aug 17th; the 750/750 Ti is getting a price cut to make room for the 950.

Any idea how long our local retailers would take to reflect the new 750/750 Ti pricing?  hmm.gif might need to get one soon.
*
I found out what caused the problem, it's from Duet Display
SUSHuman10
post Jul 26 2015, 06:10 PM

Look at all my stars!!
*******
Senior Member
6,774 posts

Joined: Nov 2010
QUOTE(SheepMekk @ Jul 26 2015, 04:55 PM)
Hey all, any of you guys with a 980 Ti have any idea of your fan speeds? Mine doesn't even turn on at 60C shocking.gif regulating fan speeds with Afterburner in the meantime.
*
The idle fanless mode has been around on some GPU models from Asus, MSI, EVGA and maybe more since the release of the 970/980.

Nothing much to worry about, it's just that manufacturers are confident in the passive cooling of the heatsink and tweaked the VBIOS accordingly. You can manually flash a custom VBIOS to set your own fan profile. But again, Afterburner is the safer option since it only deals with software instead of the BIOS.
marfccy
post Jul 26 2015, 07:21 PM

Le Ponyland!!!
*******
Senior Member
4,254 posts

Joined: Nov 2011


QUOTE(Human10 @ Jul 26 2015, 06:10 PM)
The idle fanless mode has been around on some GPU models from Asus, MSI, EVGA and maybe more since the release of the 970/980.

Nothing much to worry about, it's just that manufacturers are confident in the passive cooling of the heatsink and tweaked the VBIOS accordingly. You can manually flash a custom VBIOS to set your own fan profile. But again, Afterburner is the safer option since it only deals with software instead of the BIOS.
*
i don't think he needs to do that, right?

normally you can just adjust when the fan starts spinning to your liking (by lowering the temp threshold)
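Lowering the threshold is just moving the bottom point of the fan curve. As a minimal sketch of how a custom curve like Afterburner's maps temperature to fan duty (the points below are made up for illustration, not taken from any vendor VBIOS):

```python
# Hypothetical fan curve: (temperature in C, fan duty in %) points.
# Real curves come from the VBIOS or Afterburner's UI; these are illustrative.
CURVE = [(30, 0), (50, 30), (60, 45), (75, 70), (85, 100)]

def fan_duty(temp_c):
    """Linearly interpolate the fan duty between the curve points."""
    if temp_c <= CURVE[0][0]:
        return CURVE[0][1]
    for (t0, d0), (t1, d1) in zip(CURVE, CURVE[1:]):
        if temp_c <= t1:
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)
    return CURVE[-1][1]  # cap at the top of the curve

print(fan_duty(60))  # 45.0 -- this curve spins the fan up well before 60C
```

Raising the temperature of the first point is what produces the fanless-below-60C behaviour discussed above; lowering it makes the fan spin up earlier.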
TSskylinelover
post Jul 26 2015, 07:46 PM

Future Crypto Player Driver Abamsado
********
All Stars
11,244 posts

Joined: Jul 2005
QUOTE(Moogle Stiltzkin @ Jul 26 2015, 03:20 PM)
so size-wise, is it still on track to be similar to the fiji ? here's the pic shown
user posted image
*
that's the end of the days of using long-ass big-ass cards laugh.gif

guess i'm selling off my huge casing for a new drawer-sized casing then doh.gif
vaizard
post Jul 26 2015, 07:57 PM

Casual
***
Junior Member
304 posts

Joined: Sep 2008


hello all.
Attached Image
anyone encountered this problem before? i keep facing it when i'm gaming. Really appreciate your help. TQIA

This post has been edited by vaizard: Jul 26 2015, 07:58 PM
Moogle Stiltzkin
post Jul 26 2015, 08:10 PM

Look at all my stars!!
*******
Senior Member
4,451 posts

Joined: Jan 2003
QUOTE(skylinelover @ Jul 26 2015, 07:46 PM)
that's the end of the days of using long-ass big-ass cards laugh.gif

guess i'm selling off my huge casing for a new drawer-sized casing then doh.gif
*
there will definitely be more room for the cabling :}

since i use a fancy water cooling radiator, i have to stick with my full ATX aluminium pc casing :}

but it would be interesting to see other people's rigs and how small they can now be while still fitting the new card smile.gif
Moogle Stiltzkin
post Jul 26 2015, 09:03 PM

Look at all my stars!!
*******
Senior Member
4,451 posts

Joined: Jan 2003
noticed that pascal has mixed precision fp16/32/64


so i was reading up on what exactly it has to do with gaming :/

QUOTE
You can deduce the difference between double precision floating point (FP64) and single precision floating point (FP32) from the name. FP64 results are significantly more precise than FP32. This added precision in the results is crucial for scientific research, professional applications and servers. And less so in video games. Even though FP64 is used in games in a very limited subset of functions, the bulk of video game and graphics code relies on FP32. As such this added precision in turn requires more capable hardware which would net higher costs by increasing the size of the chip while simultaneously increasing power consumption.


bottomline
QUOTE
So, since the GTX Titan Black has a peak of 5.1 TFLOPS single precision floating point performance, a 3:1 ratio means that double precision compute goes down to 1.7 TFLOPs. And with AMD’s Hawaii XT which has a peak of 5.6 TFLOPs of FP32 compute performance, a 2:1 ratio means that it will go down to a more respectable 2.8 TFLOPs of FP64 compute performance. This advantage in FP64 compute is why AMD succeeded in capturing the top spot in the Green500 list of the world’s most power efficient supercomputers with its Hawaii XT powered FirePro S9150 server graphics cards.

The FP32 to FP64 ratio in Nvidia’s GM204 and GM206 Maxwell GPUs, powering the GTX 980, 970 and 960 is 32:1. Which means the GPU will be 32 times slower when dealing with FP64 intensive operations compared to FP32. As we’ve discussed above this is mostly OK for video games but downright unacceptable for professional applications.

If Nvidia’s GM200 does end up with a similarly weak double precision compute capability the card will have very limited uses in the professional market. However in theory the reduction of FP64 hardware resources on the chip should make it more power efficient in games and FP32 compute work. Even though I’m not entirely convinced that it’s a worthwhile trade off. Especially for a card that is poised to go into the next generation Quadro flagship compute cards.


http://wccftech.com/nvidia-gm200-gpu-fp64-performance/
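The ratio arithmetic in the quote is just the FP32 peak divided by the FP32:FP64 ratio; a quick sanity check with the numbers quoted above:

```python
def fp64_tflops(fp32_peak_tflops, ratio):
    """Derive FP64 throughput from an FP32 peak and an FP32:FP64 ratio."""
    return fp32_peak_tflops / ratio

print(round(fp64_tflops(5.1, 3), 1))   # 1.7  -- GTX Titan Black at 3:1
print(round(fp64_tflops(5.6, 2), 1))   # 2.8  -- Hawaii XT at 2:1
print(round(fp64_tflops(5.1, 32), 2))  # 0.16 -- a GM204-style 32:1 part
```

The last line shows why a 32:1 chip is a non-starter for FP64-heavy professional work.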



anyway, pascal will have improved fp64 as well as mixed precision.
QUOTE
With the Pascal GPU, NVIDIA will return to the HPC market with new Tesla products. Maxwell, although great in all regards, was deprived of the necessary FP64 hardware and focused only on FP32 performance. This meant that the chip was going to stay away from HPC markets while NVIDIA offered their year old Kepler based cards as the only Tesla options. Pascal will not only improve FP64 performance but also feature mixed precision that allows NVIDIA cards to compute at 16-bit at double the rate of FP32. This means that the cards will enable three tiers of compute at FP16, FP32 and FP64. NVIDIA’s far future Volta GPU will further leverage the compute architecture as it is already planned to be part of the SUMMIT and Sierra supercomputers, which feature over 150 PetaFlops of compute performance and launch in 2017, indicating the launch of Volta just a year after Pascal for the HPC market.

http://wccftech.com/nvidia-pascal-gpu-17-b...rrives-in-2016/
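The "three tiers" map to IEEE half, single and double precision. One quick way to see the precision gap is to round-trip the same value through each storage format, using only the stdlib `struct` codes `'e'` (FP16) and `'f'` (FP32):

```python
import struct

def round_trip(fmt, x):
    """Store a Python float (FP64) in the given IEEE format and read it back."""
    return struct.unpack(fmt, struct.pack(fmt, x))[0]

pi = 3.141592653589793
print(round_trip('e', pi))  # 3.140625           (FP16, ~3 decimal digits)
print(round_trip('f', pi))  # 3.1415927410125732 (FP32, ~7 decimal digits)
print(pi)                   # 3.141592653589793  (FP64, ~16 decimal digits)
```

The appeal of mixed precision is that FP16 halves the storage and can double the arithmetic rate in the places where ~3 significant digits is enough.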

This post has been edited by Moogle Stiltzkin: Jul 26 2015, 09:10 PM
SSJBen
post Jul 26 2015, 09:14 PM

Stars deez nuts.
*******
Senior Member
4,522 posts

Joined: Apr 2006


QUOTE(Moogle Stiltzkin @ Jul 26 2015, 09:03 PM)
noticed that pascal has mixed precision fp16/32/64
so i was reading up on what exactly it has to do with gaming :/
bottomline
http://wccftech.com/nvidia-gm200-gpu-fp64-performance/
anyway, pascal will have improved fp64 as well as mixed precision.

http://wccftech.com/nvidia-pascal-gpu-17-b...rrives-in-2016/
*
One thing bro, don't quote wccftech too much. Their articles are all opinions (yet they present them as facts).
Minecrafter
post Jul 26 2015, 10:05 PM

ROCK N ROLL STAR
*******
Senior Member
5,043 posts

Joined: Aug 2013
From: Putrajaya


QUOTE(skylinelover @ Jul 26 2015, 07:46 PM)
that's the end of the days of using long-ass big-ass cards laugh.gif

guess i'm selling off my huge casing for a new drawer-sized casing then doh.gif
*
TBH, i prefer a long card, like Sapphire's Tri-X/3-fan Vapor-X/Toxic, MSI's Lightning, etc.
SheepMekk
post Jul 26 2015, 11:55 PM

Casual
***
Junior Member
314 posts

Joined: Jun 2008
QUOTE(Human10 @ Jul 26 2015, 06:10 PM)
The idle fanless mode has been around on some GPU models from Asus, MSI, EVGA and maybe more since the release of the 970/980.

Nothing much to worry about, it's just that manufacturers are confident in the passive cooling of the heatsink and tweaked the VBIOS accordingly. You can manually flash a custom VBIOS to set your own fan profile. But again, Afterburner is the safer option since it only deals with software instead of the BIOS.
*
Ahh I see. I'm a little OCD about idling and running temperatures so I'll just make them run laugh.gif even tried underclocking/downclocking

Although there are random spikes (to 100%) causing the driver to malfunction for a few seconds, even on version 353.06 which is said to be stable on reddit.
Moogle Stiltzkin
post Jul 27 2015, 06:05 AM

Look at all my stars!!
*******
Senior Member
4,451 posts

Joined: Jan 2003
QUOTE(SSJBen @ Jul 26 2015, 09:14 PM)
One thing bro, don't quote wccftech too much. Their articles are all opinions (yet they present them as facts).
*
will the new doom game be out by the time pascal arrives hmm.gif

cause that's the game i want to be eye candy pimping on with pascal drool.gif

also there's star citizen as well.
terradrive
post Jul 27 2015, 09:28 AM

RRAAAWWRRRRR
******
Senior Member
1,943 posts

Joined: Apr 2005


QUOTE(SSJBen @ Jul 26 2015, 09:14 PM)
One thing bro, don't quote wccftech too much. Their articles are all opinions (yet they think it's facts).
*
Yes, there was one where they claimed the Fury Nano was benchmarked, but the article itself shows the numbers were calculated doh.gif
SSJBen
post Jul 27 2015, 02:33 PM

Stars deez nuts.
*******
Senior Member
4,522 posts

Joined: Apr 2006


QUOTE(Moogle Stiltzkin @ Jul 27 2015, 06:05 AM)
will the new doom game be out by the time pascal arrives  hmm.gif

cause that's the game i want to be eye candy pimping on with pascal  drool.gif

also there's star citizen as well.
*
I believe it would be a late Q2 or mid-Q3 2016 release, just my guess going by Bethesda's fiscal releases. They have their Q1 covered with Fallout 4 already.

If all goes well and Nvidia stays on track, Pascal should come out by Q3 2016. There are rumors circulating that Nvidia will release big Pascal first, as opposed to what they did with Kepler and Maxwell. Just a rumor though; it depends on the market, as always.
Moogle Stiltzkin
post Jul 27 2015, 03:05 PM

Look at all my stars!!
*******
Senior Member
4,451 posts

Joined: Jan 2003
QUOTE(SSJBen @ Jul 27 2015, 02:33 PM)
I believe it would be a late Q2 or mid-Q3 2016 release, just my guess going by Bethesda's fiscal releases. They have their Q1 covered with Fallout 4 already.

If all goes well and Nvidia stays on track, Pascal should come out by Q3 2016. There are rumors circulating that Nvidia will release big Pascal first, as opposed to what they did with Kepler and Maxwell. Just a rumor though; it depends on the market, as always.
what about this doubling of transistors? i heard most of that is going towards hpc compute or something rather than gaming performance, so their performance estimate for gaming was somewhere between 50-60% over the titan x. any ideas :/ ?



QUOTE(SSJBen @ Jul 27 2015, 02:33 PM)
If all goes well and Nvidia stays on track, Pascal should come out by Q3 2016. There are rumors circulating that Nvidia will release big Pascal first, as oppose to what they did with Kepler and Maxwell. Just a rumor though, depends on the market as always.
do you mean their high-end model will come out first ? that suits me fine. but i'd rather avoid a titan x model and opt for a 980 ti equivalent :/ i'd rather save money when possible xd.

This post has been edited by Moogle Stiltzkin: Jul 27 2015, 03:09 PM
SSJBen
post Jul 27 2015, 03:23 PM

Stars deez nuts.
*******
Senior Member
4,522 posts

Joined: Apr 2006


QUOTE(terradrive @ Jul 27 2015, 09:28 AM)
Yes, got one they claimed Fury Nano benchmarked but the article inside shows it's calculated  doh.gif
*
Yup.
And remember all the claims they made about Kepler before launch... lol, many of which were untrue other than the "state-the-obvious" remarks. doh.gif


QUOTE(Moogle Stiltzkin @ Jul 27 2015, 03:05 PM)
what about this doubling of transistors? i heard most of that is going towards hpc compute or something rather than gaming performance, so their performance estimate for gaming was somewhere between 50-60% over the titan x. any ideas :/ ?
do you mean their high-end model will come out first ? that suits me fine. but i'd rather avoid a titan x model and opt for a 980 ti equivalent :/ i'd rather save money when possible xd.
*
It makes sense, 50-60% is quite similar to the jump to Kepler from Fermi and to Maxwell from Kepler. HBM, while interesting, I don't think we will see most of its potential until 2017, when DX12 and Vulkan are much more mature. NVLink apparently will be focused on supercomputers only; not sure if it will make it to consumer grade cards or not? There's no confirmation on this.

Yeah, there is a rumor circulating around that PK100 will make the scene first, instead of PK104. I honestly... doubt it? laugh.gif
Moogle Stiltzkin
post Jul 27 2015, 04:01 PM

Look at all my stars!!
*******
Senior Member
4,451 posts

Joined: Jan 2003
QUOTE(SSJBen @ Jul 27 2015, 03:23 PM)
Yup.
And remember all the claims they made about Kepler before launch... lol, many of which were untrue other than the "state-the-obvious" remarks. doh.gif
It makes sense, 50-60% is quite similar to the jump to Kepler from Fermi and to Maxwell from Kepler. HBM, while interesting, I don't think we will see most of its potential until 2017, when DX12 and Vulkan are much more mature. NVLink apparently will be focused on supercomputers only; not sure if it will make it to consumer grade cards or not? There's no confirmation on this.

Yeah, there is a rumor circulating around that PK100 will make the scene first, instead of PK104. I honestly... doubt it? laugh.gif
*
ooo i'll google that up then.

so this nvlink ? sounds like not only do i buy a pascal gpu, but i'd also need a new motherboard with nvlink as well ?

they say it would basically look like this
user posted image

QUOTE
Coming to the final pillar then, we have a brand new feature being introduced for Pascal: NVLink. NVLink, in a nutshell, is NVIDIA’s effort to supplant PCI-Express with a faster interconnect bus. From the perspective of NVIDIA, who is looking at what it would take to allow compute workloads to better scale across multiple GPUs, the 16GB/sec made available by PCI-Express 3.0 is hardly adequate. Especially when compared to the 250GB/sec+ of memory bandwidth available within a single card. PCIe 4.0 in turn will eventually bring higher bandwidth yet, but this still is not enough. As such NVIDIA is pursuing their own bus to achieve the kind of bandwidth they desire.

The end result is a bus that looks a whole heck of a lot like PCIe, and is even programmed like PCIe, but operates with tighter requirements and a true point-to-point design. NVLink uses differential signaling (like PCIe), with the smallest unit of connectivity being a ā€œblock.ā€ A block contains 8 lanes, each rated for 20Gbps, for a combined bandwidth of 20GB/sec. In terms of transfers per second this puts NVLink at roughly 20 gigatransfers/second, as compared to an already staggering 8GT/sec for PCIe 3.0, indicating at just how high a frequency this bus is planned to run at.


user posted image

QUOTE
Multiple blocks in turn can be teamed together to provide additional bandwidth between two devices, or those blocks can be used to connect to additional devices, with the number of bricks depending on the SKU. The actual bus is purely point-to-point – no root complex has been discussed – so we’d be looking at processors directly wired to each other instead of going through a discrete PCIe switch or the root complex built into a CPU. This makes NVLink very similar to AMD’s Hypertransport, or Intel’s Quick Path Interconnect (QPI). This includes the NUMA aspects of not necessarily having every processor connected to every other processor.

But the rabbit hole goes deeper. To pull off the kind of transfer rates NVIDIA wants to accomplish, the traditional PCI/PCIe style edge connector is no good; if nothing else the lengths that can be supported by such a fast bus are too short. So NVLink will be ditching the slot in favor of what NVIDIA is labeling a mezzanine connector, the type of connector typically used to sandwich multiple PCBs together (think GTX 295). We haven’t seen the connector yet, but it goes without saying that this requires a major change in motherboard designs for the boards that will support NVLink. The upside of this however is that with this change and the use of a true point-to-point bus, what NVIDIA is proposing is for all practical purposes a socketed GPU, just with the memory and power delivery circuitry on the GPU instead of on the motherboard.



user posted image
Molex's NeoScale: An example of a modern, high bandwidth mezzanine connector


QUOTE
One of the benefits NVIDIA is touting is that the new connector and bus will improve both energy efficiency and energy delivery. When it comes to energy efficiency NVIDIA is telling us that per byte, NVLink will be more efficient than PCIe – this being a legitimate concern when scaling up to many GPUs. At the same time the connector will be designed to provide far more than the 75W PCIe is spec’d for today, allowing the GPU to be directly powered via the connector, as opposed to requiring external PCIe power cables that clutter up designs.

With all of that said, while NVIDIA has grand plans for NVLink, it’s also clear that PCIe isn’t going to be completely replaced anytime soon on a large scale. NVIDIA will still support PCIe – in fact the blocks can talk PCIe or NVLink – and even in NVLink setups there are certain command and control communiques that must be sent through PCIe rather than NVLink. In other words, PCIe will still be supported across NVIDIA's product lines, with NVLink existing as a high performance alternative for the appropriate product lines. The best case scenario for NVLink right now is that it takes hold in servers, while workstations and consumers would continue to use PCIe as they do today.


Too much to quote, the rest is here
http://www.anandtech.com/show/7900/nvidia-...ecture-for-2016
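The bandwidth figures in the quotes work out as simple lane arithmetic. A back-of-envelope check (per-lane rates taken from the article; the 128b/130b encoding overhead for PCIe 3.0 is my assumption, not stated in the quote):

```python
# One NVLink block: 8 lanes at 20 Gbps each, divided by 8 bits per byte.
nvlink_block_gbps = 8 * 20
nvlink_block_gbytes = nvlink_block_gbps / 8
print(nvlink_block_gbytes)          # 20.0 GB/s per block, as quoted

# PCIe 3.0 x16 for comparison: 8 GT/s per lane with 128b/130b encoding.
pcie3_x16_gbytes = 8 * (128 / 130) * 16 / 8
print(round(pcie3_x16_gbytes, 1))   # 15.8 GB/s -- the "16GB/sec" in the quote
```

So a single NVLink block already exceeds a full PCIe 3.0 x16 slot, and blocks can be teamed for more.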



anyway, sounds like an nvlink mobo isn't a prerequisite to use pascal, you can still use pcie. but the question is, would using nvlink for a single gpu be worth it ? or is it only going to help multi gpu setups ? i'm not a fan of multi gpu because of driver support issues doh.gif so just wondering if upgrading to an nvlink mobo is worth it for a single gpu setup. i'd rather wait for cannonlake + before i upgrade sweat.gif


This post has been edited by Moogle Stiltzkin: Jul 27 2015, 04:03 PM
arslow
post Jul 27 2015, 10:14 PM

Look at all my stars!!
*******
Senior Member
3,544 posts

Joined: Sep 2008


QUOTE(cstkl1 @ Jul 25 2015, 08:01 PM)
Gsync.. VRR is not crap dude. You got to try it to understand it

Doubt the pascal flagship can do that. Generally you're gonna need 2x the performance of a Tx, since each gen is about a 40-50% gain.

So we are two gens away. Volta will be it.

Skylake's interesting part is nvme. Only reason to upgrade.
Ddr4 hasn't matured to 8gb density sticks at 4800 c18-c21. No news on dp 1.3.
usb c is currently in its infancy.

Z170 is bound to have more dmi issues with the pch. Z87/z97 already has tons of issues.
*
Ugh, every day I'm getting less and less interested in replacing my 2500k with a skylake platform. I guess what I'm gonna do with my rig overhaul budget (about 4k or so) is just get a new case and go all out on the gpu... maybe get a 1080ti or whatever they decide to name it lol.

My u2412m is only 3 years old now. I've promised myself not to change it till the warranty is over, but it's getting harder and harder to do so, what with the existence of 27" ips 144hz WQHD monitors these days!!!!
cstkl1
post Jul 27 2015, 10:27 PM

Look at all my stars!!
Group Icon
Elite
6,799 posts

Joined: Jan 2003

QUOTE(arslow @ Jul 27 2015, 10:14 PM)
Ugh, every day I'm getting less and less interested in replacing my 2500k with a skylake platform. I guess what I'm gonna do with my rig overhaul budget (about 4k or so) is just get a new case and go all out on the gpu... maybe get a 1080ti or whatever they decide to name it lol.

My u2412m is only 3 years old now. I've promised myself not to change it till the warranty is over, but it's getting harder and harder to do so, what with the existence of 27" ips 144hz WQHD monitors these days!!!!
*
haswell benefits

if u want more:
faster encoding via the huge dmi bandwidth.
sata 3 native ports
usb 3 ports

skylake benefits:
NVME ssd
native usb 3.1 ports (not sure, just assuming at this point)

for skylake i'm still deciding whether i should go nvme. definitely no obvious gain other than watching ssd benchmarks.
zero benefit for gaming to be honest.

usb 3.1, seriously is there any device out there that's taking advantage of its direct access to dmi gen 1 speeds??

nvlink is not something you're gonna see benefit from on daily consumer gaming setups.
it will however open up the possibility of an octa titan pascal. also with such tech i am pretty sure nvidia has solved some of the multi gpu scaling here.
another interesting part about nvlink is the ability of a gpu to access the ram and vram of another gpu directly, if i am not mistaken.
the tech is gonna solve current issues with multi gpu.

This post has been edited by cstkl1: Jul 27 2015, 10:30 PM
