QUOTE(richard912 @ Jul 11 2016, 12:12 PM)
USD4999? Seems like I'll have to live with 1080p for a long time.
Jul 11 2016, 12:43 PM
Senior Member
933 posts Joined: Aug 2010 From: City of Light
QUOTE(richard912 @ Jul 11 2016, 12:12 PM)
USD4999? Seems like I'll have to live with 1080p for a long time.
Jul 11 2016, 12:52 PM
Senior Member
1,821 posts Joined: Jun 2009 From: 3°7'59"N 101°37'49"E
Jul 11 2016, 01:11 PM
Junior Member
267 posts Joined: Oct 2007 From: Kuala Lumpur, Malaysia
QUOTE(richard912 @ Jul 11 2016, 12:12 PM)
4K/120Hz for playing CS:GO or DOTA2. Then for AAA titles you'd have to scale down to 1440p to reach 120 Hz. Plus you'd need GTX 1080 Titan SLI to even come close to using the monitor's full potential.
Jul 11 2016, 01:25 PM
Senior Member
1,667 posts Joined: Jan 2003 From: The Cool Name Place
Ermm... the reason for the large increase in AMD's performance under DX12 is that their DX11 driver has too much overhead (more specifically, the driver doesn't support DX11 deferred contexts). It is not because async compute queues magically increase performance by that much.
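For anyone wondering what "deferred context" support actually means here: in D3D11 an engine can record draw calls on worker threads into command lists and replay them on the immediate context, and the driver reports whether it handles that natively. A rough sketch of the mechanism using standard D3D11 calls (the `device`/`immediateCtx` variables are placeholders and the recording body is elided; this is an illustration, not any particular game's code):

```cpp
#include <d3d11.h>

// Returns true if the driver natively supports multithreaded command lists.
bool DriverSupportsCommandLists(ID3D11Device* device)
{
    D3D11_FEATURE_DATA_THREADING caps = {};
    device->CheckFeatureSupport(D3D11_FEATURE_THREADING, &caps, sizeof(caps));
    return caps.DriverCommandLists == TRUE;
}

// Worker thread: record draw calls into a deferred context, hand back a command list.
ID3D11CommandList* RecordOnWorkerThread(ID3D11Device* device)
{
    ID3D11DeviceContext* deferredCtx = nullptr;
    device->CreateDeferredContext(0, &deferredCtx);

    // ... set state and issue Draw/DrawIndexed calls on deferredCtx here ...

    ID3D11CommandList* cmdList = nullptr;
    deferredCtx->FinishCommandList(FALSE, &cmdList);
    deferredCtx->Release();
    return cmdList;
}

// Main thread: replay the recorded work on the immediate context.
void Submit(ID3D11DeviceContext* immediateCtx, ID3D11CommandList* cmdList)
{
    immediateCtx->ExecuteCommandList(cmdList, FALSE);
    cmdList->Release();
}
```

When `DriverCommandLists` comes back FALSE, the D3D11 runtime emulates command lists in software on the submitting thread, which is the kind of CPU-side submission overhead being described above.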
In the benchmark, you can see that in DX11 NVIDIA GPUs are capable of delivering almost 3x the performance of AMD GPUs, and in DX12 NVIDIA GPUs scale too.

[benchmark charts in spoiler]

In games like Project Cars, the developer mentioned that the poor AMD GPU performance is due to driver overhead. But fanboys obviously point the finger at "Gimpworks" / anything NVIDIA has touched and refuse to see that a bad driver is a bad driver. In Total War: Warhammer, an NVIDIA GTX 970 can even perform similarly to a Fury X at 1080p.

[benchmark charts in spoiler]

Eurogamer also mentioned that if you're using a low-end CPU, it's better to pair it with an NVIDIA GPU, as its performance won't be as CPU-limited as AMD's. Which is kind of ironic, since AMD is often viewed as making the best cards for budget builds.

AMD GPUs have a reasonably good hardware foundation. They have good compute performance compared to their NVIDIA counterparts (e.g. GTX 970 at ~3.7 TFLOPS vs R9 390 at ~5 TFLOPS), which means they are very good at shader work; they just couldn't utilize it. DX12 games that issue a lot of draw calls (especially RTS titles) will see a bigger boost on AMD hardware than on NVIDIA (there's a small draw-call sketch at the end of this post). DX12 games that use a lot of shaders for post-processing (e.g. SSR, SSAO) will excel on AMD hardware (e.g. AOTS, Total War: Warhammer). DX11 games that use heavy shaders and are not drawcall/geometry bottlenecked will excel too (e.g. the new Hitman).

From this, you can see that AMD is moving to close its gap in geometry performance. With the RX 480, AMD chose not to cut down the geometry processors and kept the count at 4 (fun fact: RX 480 geometry processing performance is a bit higher than Fury X's due to higher clocks). If someday AMD decides to put more focus on its DX11 drivers, you'll see the RX 480 outperforming the GTX 970 very frequently, as its raw performance is higher. Heck, I would not be surprised if it performed like a GTX 980. But when will that day come? No one knows.

As for NVIDIA, it keeps doing what it's already been doing: increasing compute power each generation and increasing power efficiency. Currently there is no card from AMD that can beat the GTX 1080 in gaming, because the GTX 1080 has higher compute performance, higher geometry performance and better drivers. It's kind of funny how people sometimes say NVIDIA is brute-forcing its way; it is actually AMD that is brute-forcing its way to gain performance, and brute-forcing and power efficiency don't go hand in hand.
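On the draw-call point, here is a tiny illustrative sketch of why submission style matters (D3D11-style; `Object`, `ctx` and the buffers are hypothetical placeholders, not from any real engine). Per-object draws pay driver overhead on every call, which is exactly the cost DX12's thinner submission path and batching/instancing both attack:

```cpp
#include <d3d11.h>
#include <vector>

struct Object {
    ID3D11Buffer* constants;   // per-object constant buffer (placeholder)
    UINT          indexCount;
};

// Draw-call-heavy pattern: one state change + one draw per object.
// This is where DX11 driver overhead piles up on a single CPU thread.
void DrawPerObject(ID3D11DeviceContext* ctx, const std::vector<Object>& objects)
{
    for (const Object& obj : objects) {
        ctx->VSSetConstantBuffers(0, 1, &obj.constants);
        ctx->DrawIndexed(obj.indexCount, 0, 0);
    }
}

// Batched alternative: identical meshes submitted in one instanced call,
// so the per-call driver cost is paid once instead of N times.
void DrawInstancedBatch(ID3D11DeviceContext* ctx, UINT indexCountPerInstance, UINT instanceCount)
{
    ctx->DrawIndexedInstanced(indexCountPerInstance, instanceCount, 0, 0, 0);
}
```

An RTS pushing thousands of distinct units can't always batch like this, which is why that genre tends to benefit the most from DX12-style submission.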
Jul 11 2016, 03:32 PM
Junior Member
267 posts Joined: Oct 2007 From: Kuala Lumpur, Malaysia
QUOTE(Demonic Wrath @ Jul 11 2016, 01:25 PM)
Ermm... the reason for the large increase in AMD's performance under DX12 is that their DX11 driver has too much overhead (more specifically, the driver doesn't support DX11 deferred contexts). It is not because async compute queues magically increase performance by that much. In the benchmark, you can see that in DX11 NVIDIA GPUs are capable of delivering almost 3x the performance of AMD GPUs, and in DX12 NVIDIA GPUs scale too. [benchmark charts in spoiler]

I agree when it comes to this particular Star Swarm benchmark. But it must also be noted that this particular benchmark was done in Feb 2015 (Full details here) and was meant to preview DX12 capabilities. There is no mention whatsoever of Async Compute, and I don't think Star Swarm uses async compute. An excerpt from that benchmark article:

"As it stands, with the CPU bottleneck swapped out for a GPU bottleneck, Star Swarm starts to favor NVIDIA GPUs right now. Even accounting for performance differences, NVIDIA ends up coming out well ahead here, with the GTX 980 beating the R9 290X by over 50%, and the GTX 680 some 25% ahead of the R9 285, both values well ahead of their average lead in real-world games. With virtually every aspect of this test still being under development – OS, drivers, and Star Swarm – we would advise not reading into this too much right now, but it will be interesting to see if this trend holds with the final release of DirectX 12. Meanwhile it's interesting to note that largely due to their poor DirectX 11 performance in this benchmark, AMD sees the greatest gains from DirectX 12 on a relative basis and comes close to seeing the greatest gains on an absolute basis as well. The GTX 980's performance improves by 150% and 40.1fps when switching APIs; the R9 290X improves by 416% and 34.6fps. As for AMD's Mantle, we'll get back to that in a bit."

So I agree somewhat that in this particular benchmark AMD's huge gains may be attributed to their driver and have nothing to do with Async Compute.

Now in 2016, with games coming out with both DX11 and DX12 Async Compute capabilities, the benchmarks I saw showed that AMD's gains in DX12 are mostly attributed to Async Compute: Hitman DX12 and AoTS, which, interestingly, run slower in DX12 than in DX11 on Maxwell cards, unlike what happened with the Star Swarm bench.

And saying that AMD's older-gen cards gain because the driver works better under DX12 than DX11 does not explain why, in the Rise of the Tomb Raider bench, the AMD Fury performed better in DX11 than DX12.

[benchmark chart] Bench March 2016 (Here)

Keyword: Async Compute. In March, Rise of the Tomb Raider did not have Async Compute even in DX12. The developer announced a game patch two days ago, and among the patch notes: "Adds utilization of DirectX 12 Asynchronous Compute, on AMD GCN 1.1 GPUs and NVIDIA Pascal-based GPUs, for improved GPU performance." News here. Again, the keyword is Async Compute, which only AMD GCN 1.1 and above and Pascal can take advantage of. So I will wait for the new benches to come out and see if Async Compute makes any difference to this game. If AMD cards get better in DX12 than DX11 after the patch, it's most probably because of Async Compute.

[benchmark chart]

And another bench clearly showed the effect of DX12 (Async Compute on/off) and DX11 on the older 28nm AMD and Nvidia GPUs: both Nvidia and AMD gain moving from DX11 to DX12, but only AMD gains when Async Compute is turned on in DX12.
Anyway, I think it's better that we discuss the impact of DX12/DX11 with a separation of context between the old 28nm cards and the newer Pascal/Polaris architectures, rather than generalizing it as just Nvidia vs AMD. But I totally agree that AMD will not have any card to rival the GTX 1080, or even the GTX 1070 for that matter, until Vega comes out next year. That's the reason I sold off my GTX 970 and got a GTX 1080.

This post has been edited by adilz: Jul 11 2016, 04:38 PM
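Since "Async Compute" keeps coming up: at the API level it just means the game submits work to a separate DirectX 12 compute queue alongside the graphics queue, and whether that work actually overlaps is up to the GPU and driver. A minimal sketch of the setup, assuming an already-created `device`; all the variable and function names here are placeholders for illustration, not any engine's real code:

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Create the two queues an "async compute" renderer uses: the normal graphics
// (direct) queue plus an independent compute queue the GPU may run concurrently.
void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;       // graphics + compute + copy
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));

    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute/copy only
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));
}

// Submit independent work to both queues; a fence makes the graphics queue
// wait only where it actually consumes the compute results.
void SubmitFrame(ID3D12CommandQueue* graphicsQueue,
                 ID3D12CommandQueue* computeQueue,
                 ID3D12GraphicsCommandList* gfxList,      // e.g. shadow / G-buffer passes
                 ID3D12GraphicsCommandList* computeList,  // e.g. SSAO / light culling
                 ID3D12Fence* fence, UINT64 fenceValue)
{
    ID3D12CommandList* c[] = { computeList };
    computeQueue->ExecuteCommandLists(1, c);
    computeQueue->Signal(fence, fenceValue);

    ID3D12CommandList* g[] = { gfxList };
    graphicsQueue->ExecuteCommandLists(1, g);

    // Graphics work submitted after this point will not start until the
    // compute queue has reached fenceValue (GPU-side wait, no CPU stall).
    graphicsQueue->Wait(fence, fenceValue);
}
```

Whether the compute list actually overlaps the graphics work is entirely up to the GPU's scheduler, which is exactly the Maxwell-vs-GCN difference being argued about in this thread.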
Jul 11 2016, 04:51 PM
Senior Member
1,667 posts Joined: Jan 2003 From: The Cool Name Place
QUOTE(adilz @ Jul 11 2016, 03:32 PM)
Now in 2016, with games coming out with both DX11 and DX12 Async Compute capabilities, the benchmarks I saw showed that AMD's gains in DX12 are mostly attributed to Async Compute: Hitman DX12 and AoTS, which, interestingly, run slower in DX12 than in DX11 on Maxwell cards, unlike what happened with the Star Swarm bench.

Hitman already runs worse in DX11 on NVIDIA cards. Also, in the Guru3D Hitman benchmark, AMD cards show a slight performance decrease in DX12 too (1080p). The Fury X doesn't show a performance improvement, and the R9 390X is 1 fps behind the Fury X at 4K. Some benchmarks show performance regression (even on AMD cards). AOTS, I suspect, is compute-shader bound; that's why performance and TFLOPS are correlated (a GTX 980, having about the same TFLOPS as an R9 390, performs about the same).

Edit: Another game that exhibits this performance behaviour is Quantum Break, where AMD GPUs perform better. It is not using async compute either.

QUOTE(adilz @ Jul 11 2016, 03:32 PM)
And saying that AMD's older-gen cards gain because the driver works better under DX12 than DX11 does not explain why, in the Rise of the Tomb Raider bench, the AMD Fury performed better in DX11 than DX12. Bench March 2016 (Here)

You do realize that NVIDIA's performance dips (from DX11 to DX12) in that benchmark you linked too, right?

QUOTE(adilz @ Jul 11 2016, 03:32 PM)
Keyword: Async Compute. In March, Rise of the Tomb Raider did not have Async Compute even in DX12. The developer announced a game patch two days ago, and among the patch notes: "Adds utilization of DirectX 12 Asynchronous Compute, on AMD GCN 1.1 GPUs and NVIDIA Pascal-based GPUs, for improved GPU performance." News here. Again, the keyword is Async Compute, which only AMD GCN 1.1 and above and Pascal can take advantage of. So I will wait for the new benches to come out and see if Async Compute makes any difference to this game. If AMD cards get better in DX12 than DX11 after the patch, it's most probably because of Async Compute.

This is the latest comparison between the RX 480 and GTX 970 with the newest patch applied. Looks like the RX 480 is still behind the GTX 970.
https://www.youtube.com/watch?v=CdWM7eQZnNc

Cheers

This post has been edited by Demonic Wrath: Jul 11 2016, 05:34 PM
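For context on the "same TFLOPS" claim, peak FP32 throughput is just shader count x 2 FLOPs per clock (fused multiply-add) x clock speed. A back-of-envelope check using the published reference shader counts and boost clocks follows; real sustained clocks vary, so treat the numbers as rough approximations:

```cpp
#include <cstdio>

int main() {
    // Peak FP32 = shaders * 2 FLOPs (FMA) * clock in GHz -> GFLOPS
    const double gtx980 = 2048 * 2 * 1.216;   // ~4981 GFLOPS (~5.0 TFLOPS at reference boost)
    const double r9_390 = 2560 * 2 * 1.000;   //  5120 GFLOPS (~5.1 TFLOPS)
    const double gtx970 = 1664 * 2 * 1.178;   // ~3920 GFLOPS (~3.9 TFLOPS at reference boost)

    std::printf("GTX 980: %.1f TFLOPS\n", gtx980 / 1000.0);
    std::printf("R9 390 : %.1f TFLOPS\n", r9_390 / 1000.0);
    std::printf("GTX 970: %.1f TFLOPS\n", gtx970 / 1000.0);
    return 0;
}
```

The ~3.7 TFLOPS figure quoted earlier for the GTX 970 corresponds to a clock between its base and boost, which is why quoted TFLOPS numbers for NVIDIA cards vary a bit from site to site.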
Jul 11 2016, 05:15 PM
Senior Member
3,809 posts Joined: Sep 2007 From: Jakarta
Jul 11 2016, 06:03 PM
Junior Member
267 posts Joined: Oct 2007 From: Kuala Lumpur, Malaysia
QUOTE(Demonic Wrath @ Jul 11 2016, 04:51 PM)
Hitman already runs worse in DX11 on NVIDIA cards. Also, in the Guru3D Hitman benchmark, AMD cards show a slight performance decrease in DX12 too (1080p). The Fury X doesn't show a performance improvement, and the R9 390X is 1 fps behind the Fury X at 4K. Some benchmarks show performance regression (even on AMD cards). AOTS, I suspect, is compute-shader bound; that's why performance and TFLOPS are correlated (a GTX 980, having about the same TFLOPS as an R9 390, performs about the same). You do realize that NVIDIA's performance dips (from DX11 to DX12) in that benchmark you linked too, right? This is the latest comparison between the RX 480 and GTX 970 with the newest patch applied. Looks like the RX 480 is still behind the GTX 970. https://www.youtube.com/watch?v=CdWM7eQZnNc Cheers

Thanks bro for pointing out the Guru3D benchmark. Something new to learn. Looks like for 1080p gaming it's better to stick with DX11; at 1440p, DX12 could be beneficial for some AMD cards, but not for Maxwell; and at 4K it's a mixed bag of improvement, status quo and regression from DX11 to DX12.

1080p
- R9 Fury: DX11 86 fps / DX12 85 fps
- R9 390X: DX11 87 fps / DX12 82 fps
- R9 290: DX11 76 fps / DX12 69 fps
- Titan X: DX11 86 fps / DX12 78 fps
- GTX 980 Ti: DX11 85 fps / DX12 76 fps
- GTX 970: DX11 55 fps / DX12 53 fps
All regressed moving from DX11 to DX12 at 1080p.

1440p
- R9 Fury: DX11 65 fps / DX12 65 fps
- R9 390X: DX11 60 fps / DX12 64 fps
- R9 290: DX11 50 fps / DX12 55 fps
- Titan X: DX11 65 fps / DX12 60 fps
- GTX 980 Ti: DX11 63 fps / DX12 58 fps
- GTX 970: DX11 40 fps / DX12 37 fps
1440p is a mixed bag: the R9 390/290 gained moving from DX11 to DX12, the Furys saw either no difference or even a 1 fps regression (HBM issue?), and Maxwell generally regressed. At 4K it's a mixed bag of improvement, status quo and regression from DX11 to DX12.

Interestingly, this also ties in with the Tomb Raider patch. The YouTube video showed that at 1080p the RX 480 regressed going from DX11 to DX12, and the GTX 970 did a lot better than the RX 480. Whereas the OC3D benchmark of the latest Tomb Raider patch, using an R9 Fury X and a GTX 980 Ti (Full article here):

[benchmark chart]

The Fury gained with the latest patch, whereas the 980 Ti regressed except at 1080p, where the patch at least increased its minimum fps. Excerpt from the bench conclusions:

"With the addition of Asynchronous Compute support to Rise of the Tomb Raider the game's performance under DirectX 12 has never been better for GPUs that support it. Even GPUs without support like the GTX 980Ti have seen some large increases in minimum framerates under DirectX 12, making the game perform a lot better under the new API. Sadly Rise of the Tomb Raider is still a mixed bag as far as DirectX 12 is concerned, offering a huge performance boost for AMD's GCN 1.1 or newer GPUs while offering Nvidia's Maxwell or older GPUs a performance decrease when compared to the DirectX 11 version of the game. When testing this game it is a real shame that we do not have two matching GPUs to test on both the AMD and Nvidia sides, or a newer Nvidia Pascal or AMD Polaris GPU to test, but this is something that we are working on addressing in the near future. To summarise this testing this new update for the DirectX 12 version of Rise of the Tomb Raider actually makes DirectX 12 worth using if you own an AMD GPU, with Nvidia users (at least those with Maxwell or older GPUs) being better off using the DirectX 11 version of the game for the best performance."
Jul 11 2016, 06:11 PM
Junior Member
267 posts Joined: Oct 2007 From: Kuala Lumpur, Malaysia
Jul 11 2016, 06:12 PM
Junior Member
500 posts Joined: Oct 2015 From: Penang
QUOTE(Demonic Wrath @ Jul 11 2016, 04:51 PM)
Hitman already runs worse in DX11 on NVIDIA cards. Also, in the Guru3D Hitman benchmark, AMD cards show a slight performance decrease in DX12 too (1080p). The Fury X doesn't show a performance improvement, and the R9 390X is 1 fps behind the Fury X at 4K. Some benchmarks show performance regression (even on AMD cards). AOTS, I suspect, is compute-shader bound; that's why performance and TFLOPS are correlated (a GTX 980, having about the same TFLOPS as an R9 390, performs about the same). Edit: Another game that exhibits this performance behaviour is Quantum Break, where AMD GPUs perform better. It is not using async compute either. You do realize that NVIDIA's performance dips (from DX11 to DX12) in that benchmark you linked too, right? This is the latest comparison between the RX 480 and GTX 970 with the newest patch applied. Looks like the RX 480 is still behind the GTX 970. https://www.youtube.com/watch?v=CdWM7eQZnNc Cheers

It is known that RoTTR is not a well-built DX12 title: it was built from the ground up for DX11, and DX12 was just wrapped around it like an envelope, so it should not be looked at as the definitive DX12 benchmark.

Project Cars simply offloaded all of its PhysX calculations onto the CPU (PhysX does not run on AMD GPUs at all), and that is what caused the performance loss on AMD GPUs. Some people also reported that, despite setting it to CPU-only, their dedicated CUDA/PhysX GPU still showed utilization. Assetto Corsa should be a better manufacturer-neutral benchmark than Project Cars.

You could say that Nvidia chose to stick with DX11 for now, while AMD is looking ahead to DX12. It seems like Nvidia's risk is paying off with better power efficiency, and since nobody knows how soon DX12 will hit full swing, you can argue that by then it will already be time to upgrade from the 1070. For Pascal, Nvidia continues to bet that DX12/async won't be a big thing, so it can improve power efficiency by using pre-emption, compared to AMD, which went the hardware route with ACE schedulers in the die. "Brute forcing" here just means using pre-emption instead of hardware async.

This post has been edited by svfn: Jul 11 2016, 06:13 PM
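To make the "PhysX on the CPU" point concrete: in the PhysX 3.x SDK the game decides where simulation runs when it creates the scene, e.g. by attaching only a CPU dispatcher. A rough sketch of what a CPU-only setup looks like; this is generic PhysX 3.x usage with made-up function names, not Project Cars' actual code:

```cpp
#include <PxPhysicsAPI.h>
using namespace physx;

// Create a PhysX scene that simulates entirely on CPU worker threads.
// (Attaching a GPU dispatcher via a CUDA context manager is what would move
//  eligible work onto an NVIDIA GPU; without CUDA, everything stays on the CPU.)
PxScene* CreateCpuOnlyScene(PxPhysics* physics)
{
    PxSceneDesc sceneDesc(physics->getTolerancesScale());
    sceneDesc.gravity       = PxVec3(0.0f, -9.81f, 0.0f);
    sceneDesc.cpuDispatcher = PxDefaultCpuDispatcherCreate(4);  // 4 worker threads
    sceneDesc.filterShader  = PxDefaultSimulationFilterShader;
    return physics->createScene(sceneDesc);
}

// Per-frame step: advance the simulation and block for results on the CPU.
void StepPhysics(PxScene* scene, float dt)
{
    scene->simulate(dt);
    scene->fetchResults(true);
}
```

Set up this way, the physics cost lands on CPU cores for everyone, which hits neither vendor's GPU; the per-vendor GPU differences in that game come from the rendering side.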
Jul 11 2016, 06:31 PM
Junior Member
500 posts Joined: Oct 2015 From: Penang
QUOTE(Demonic Wrath @ Jul 11 2016, 04:51 PM)
Hitman already runs worse in DX11 on NVIDIA cards. Also, in the Guru3D Hitman benchmark, AMD cards show a slight performance decrease in DX12 too (1080p). The Fury X doesn't show a performance improvement, and the R9 390X is 1 fps behind the Fury X at 4K. Some benchmarks show performance regression (even on AMD cards).

Is that an early-build benchmark? Because in the latest RX 480 bench from HardwareCanucks the performance is different; Guru3D seems to give much higher FPS overall.

1080p
[benchmark charts]

1440p
[benchmark charts in spoiler]

This post has been edited by svfn: Jul 11 2016, 06:39 PM
Jul 11 2016, 06:40 PM
Junior Member
267 posts Joined: Oct 2007 From: Kuala Lumpur, Malaysia
QUOTE(Demonic Wrath @ Jul 11 2016, 04:51 PM)
You do realize that NVIDIA's performance dips (from DX11 to DX12) in that benchmark you linked too, right? Cheers

Yeap, I noticed that too. Pre patch 7, both regressed moving from DX11 to DX12. Now that patch 7 benches are slowly coming out, AMD GCN 1.1-and-above cards generally get a performance bump (minus the RX 480: another nail in the RX 480's coffin?). The bump is not driver related, and the RotTR developer clearly stated that the patch implements DX12 Async Compute.

I'm still searching for a post-patch RotTR bench for Nvidia Pascal. Really hoping that Async Compute bumps up performance for Pascal GPUs too. Good to know as well that the new patch enables Nvidia SLI.

This post has been edited by adilz: Jul 11 2016, 06:43 PM
Jul 11 2016, 06:49 PM
Junior Member
500 posts Joined: Oct 2015 From: Penang
QUOTE(adilz @ Jul 11 2016, 06:40 PM)
I'm still searching for a post-patch RotTR bench for Nvidia Pascal. Really hoping that Async Compute bumps up performance for Pascal GPUs too. Good to know as well that the new patch enables Nvidia SLI.

Can't find a proper patch 7 bench with/without async for Pascal either.

MSI GTX 1070 Armour OC
[benchmark charts in spoiler]

R9 290: https://www.youtube.com/watch?v=BfObYNEQkE8 (only about a 2 FPS gain on the older gen). "This game has no support for dx12 from the ground up. The dx12 support offered now is just an envelope wrapped over dx11 code."

This post has been edited by svfn: Jul 11 2016, 07:04 PM
Jul 11 2016, 07:02 PM
Junior Member
267 posts Joined: Oct 2007 From: Kuala Lumpur, Malaysia
Jul 11 2016, 07:07 PM
Junior Member
500 posts Joined: Oct 2015 From: Penang
The game's DX12 implementation is a mess, and the devs have acknowledged that too.
Jul 11 2016, 08:29 PM
Senior Member
4,476 posts Joined: Jan 2003
Found the benchmark I was interested in.

FPS benchmark
[charts in spoiler]

Temps, noise, power load
[charts in spoiler]

http://hexus.net/tech/reviews/graphics/941...edition/?page=9

Conclusion
[excerpt in spoiler]

PS: sadly Zotac isn't in the comparison, which is a shame since that has the most availability here.

This post has been edited by Moogle Stiltzkin: Jul 11 2016, 08:43 PM
Jul 11 2016, 08:36 PM
Junior Member
255 posts Joined: Sep 2011
[photos in spoiler]

Just got the "ayam" GTX 1070. Firestrike score is only 12,458 because I'm using an i5 4460. Temps: idle under 50°C, load 68-72°C.
Jul 11 2016, 08:39 PM
Junior Member
216 posts Joined: Nov 2009
@ssxcool
How much did you get it for, bro?
Jul 11 2016, 08:40 PM
Junior Member
255 posts Joined: Sep 2011
Jul 11 2016, 08:42 PM
Senior Member
4,476 posts Joined: Jan 2003