
 NVIDIA GeForce Community V15 (new era pascal), ALL HAIL NEW PASCAL KING GTX1080 out now

Unseen83
post Sep 1 2015, 10:18 PM

TooFAT4U 2Handle!
*******
Senior Member
2,337 posts

Joined: Dec 2008
From: KING CANNEL JB


QUOTE(Demonic Wrath @ Sep 1 2015, 08:41 PM)
Funny how NVIDIA actually loses performance when going to the DX12 API in the Ashes of the Singularity benchmark.. maybe they optimized their DX11 drivers too well?

Support async or not, what's important is the actual FPS of the game.. if anyone is quoting the Ashes of the Singularity benchmark saying AMD has a better implementation, check again.. the R9 390X is also performing close to the R9 Fury. (source: http://www.pcgameshardware.de/Ashes-of-the...tX-11-1167997/) It is just that AMD's DX11 implementation is so bad that it makes DX12 look very good.

One thing we know for sure: currently NVIDIA has the market share (82%!). Who knows what will happen with future DX12 games, especially those GameWorks titles.
*
yeah funny indeed... but NVIDIA is pressuring Oxide to disable the async compute/shader feature in the benchmark.

" The rather startling news is that Nvidia's Maxwell architecture, and yeah that would be the entire 900 range does not support it in the way AMD does. I can think of numerous scenarios as to where asynchronous shaders would help."

http://www.guru3d.com/news-story/nvidia-wa...n-settings.html

add-on: oh bro, a correction to your signature: "MSI NVIDIA GeForce GTX970 Gaming 4GB GDDR5" should really be 3.5GB GDDR5 icon_rolleyes.gif

This post has been edited by Unseen83: Sep 1 2015, 10:20 PM
Demonic Wrath
post Sep 1 2015, 11:21 PM

My name so cool
******
Senior Member
1,667 posts

Joined: Jan 2003
From: The Cool Name Place

QUOTE(Unseen83 @ Sep 1 2015, 10:18 PM)
yeah funny indeed... but NVIDIA is pressuring Oxide to disable the async compute/shader feature in the benchmark.

" The rather startling news is that Nvidia's Maxwell architecture, and yeah that would be the entire 900 range does not support it in the way AMD does. I can think of numerous scenarios as to where asynchronous shaders would help."

http://www.guru3d.com/news-story/nvidia-wa...n-settings.html

add-on: oh bro, a correction to your signature: "MSI NVIDIA GeForce GTX970 Gaming 4GB GDDR5" should really be 3.5GB GDDR5 icon_rolleyes.gif
*
Huh? You mean the GTX970 has physically only 3.5GB GDDR5? Ok...thanks for info

This post has been edited by Demonic Wrath: Sep 1 2015, 11:22 PM
Minecrafter
post Sep 1 2015, 11:25 PM

ROCK N ROLL STAR
*******
Senior Member
5,043 posts

Joined: Aug 2013
From: Putrajaya


QUOTE(Unseen83 @ Sep 1 2015, 10:18 PM)
yeah funny indeed... but NVIDIA is pressuring Oxide to disable the async compute/shader feature in the benchmark.

" The rather startling news is that Nvidia's Maxwell architecture, and yeah that would be the entire 900 range does not support it in the way AMD does. I can think of numerous scenarios as to where asynchronous shaders would help."

http://www.guru3d.com/news-story/nvidia-wa...n-settings.html

add-on: oh bro, a correction to your signature: "MSI NVIDIA GeForce GTX970 Gaming 4GB GDDR5" should really be 3.5GB GDDR5 icon_rolleyes.gif
*
I've seen some of these for the past months since the GTX970's "RAMgate"... biggrin.gif

It has 4GB GDDR5, but the 3.5GB is fast and the 0.5GB is slow.

No need to make a big problem out of it. wink.gif
JohnLai
post Sep 1 2015, 11:46 PM

Skeptical Cat
*******
Senior Member
3,669 posts

Joined: Apr 2006
QUOTE(Demonic Wrath @ Sep 1 2015, 11:21 PM)
Huh? You mean the GTX970 has physically only 3.5GB GDDR5? Ok...thanks for info
*
QUOTE(Minecrafter @ Sep 1 2015, 11:25 PM)
I've seen some of these for the past months since the GTX970's "RAMgate"... biggrin.gif

It has 4GB GDDR5, but the 3.5GB is fast and the 0.5GB is slow.

No need to make a big problem out of it. wink.gif
*
Wait until you tell him that the GTX970 has only 1.75MB of L2 cache and 56 ROPs instead of the marketed 2MB and 64 ROPs.

Because of the 970's memory segmentation, the effective GPU VRAM bandwidth is 196GB/s, due to the 224-bit bus width.
It was originally advertised as 224GB/s with a 256-bit bus width.
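Those two bandwidth figures follow directly from the standard peak-bandwidth formula; a quick back-of-envelope check (assuming the GTX970's 7 Gbps effective GDDR5 data rate):

```python
def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps=7.0):
    """Peak memory bandwidth in GB/s: (bus width / 8 bits per byte) * data rate."""
    return bus_width_bits / 8 * data_rate_gbps

advertised = peak_bandwidth_gbs(256)  # full 256-bit bus -> 224.0 GB/s
effective = peak_bandwidth_gbs(224)   # the 3.5GB segment's 224-bit bus -> 196.0 GB/s
print(advertised, effective)  # 224.0 196.0
```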

Unseen83
post Sep 1 2015, 11:51 PM

TooFAT4U 2Handle!
*******
Senior Member
2,337 posts

Joined: Dec 2008
From: KING CANNEL JB


QUOTE(Minecrafter @ Sep 1 2015, 11:25 PM)
I've seen some of these for the past months since the GTX970's "RAMgate"... biggrin.gif

It has 4GB GDDR5,but 3.5GB is fast,0.5GB is slow.

No need to make a big problem out of it. wink.gif
*
oh, so 4GB total but in separate partitions/speeds: 3.5GB at the faster advertised speed, while the 0.5GB runs at a slower speed hmm.gif okay, thanks for that info biggrin.gif

QUOTE(Demonic Wrath @ Sep 1 2015, 11:21 PM)
Huh? You mean the GTX970 has physically only 3.5GB GDDR5? Ok...thanks for info
*
naah, i was wrong.. the gtx 970 does have 4GB GDDR5 in total laugh.gif
Unseen83
post Sep 1 2015, 11:56 PM

TooFAT4U 2Handle!
*******
Senior Member
2,337 posts

Joined: Dec 2008
From: KING CANNEL JB


QUOTE(JohnLai @ Sep 1 2015, 11:46 PM)
Wait until you tell him that the GTX970 has only 1.75MB of L2 cache and 56 ROPs instead of the marketed 2MB and 64 ROPs.

Because of the 970's memory segmentation, the effective GPU VRAM bandwidth is 196GB/s, due to the 224-bit bus width.
It was originally advertised as 224GB/s with a 256-bit bus width.
*
ohmy.gif really.. wow.. i did not know that either.. hmm, imagine, i almost went for GTX970 SLI in exchange for my R9 290X CrossFire
JohnLai
post Sep 2 2015, 12:09 AM

Skeptical Cat
*******
Senior Member
3,669 posts

Joined: Apr 2006
QUOTE(Unseen83 @ Sep 1 2015, 11:56 PM)
ohmy.gif really.. wow.. i did not know that either.. hmm, imagine, i almost went for GTX970 SLI in exchange for my R9 290X CrossFire
*
Lucky for you. Do you know about the weird frame time issue when GTX970s are SLI-ed?

http://www.pcper.com/reviews/Graphics-Card...sues-Tested-SLI

While the GTX970 can address the 0.5GB segment (32-bit) just fine, using it blocks access to the faster 3.5GB segment (224-bit).
If a program reads from the 3.5GB segment (the default driver behavior is to prioritize this 224-bit segment), it can't read from the 0.5GB segment at the same time, and the same applies to write operations.

It can only access both partitions simultaneously if one partition is reading while the other is writing.
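That access rule can be written down as a tiny toy model (my own sketch of the behavior described above, not NVIDIA's actual memory controller): two operations may overlap only when they hit different segments and one is a read while the other is a write.

```python
def can_overlap(op_a, op_b):
    """Toy model of the GTX970's segmented VRAM rule described above.

    Each op is a (segment, kind) tuple, e.g. ("fast_3.5GB", "read").
    """
    seg_a, kind_a = op_a
    seg_b, kind_b = op_b
    if seg_a == seg_b:       # same segment: accesses are strictly serialized
        return False
    return kind_a != kind_b  # different segments: only a read + a write may overlap

print(can_overlap(("fast_3.5GB", "read"), ("slow_0.5GB", "write")))  # True
print(can_overlap(("fast_3.5GB", "read"), ("slow_0.5GB", "read")))   # False
```

This is why touching the slow 0.5GB segment hurts more than its size suggests: every read in it stalls reads from the fast segment.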

SUScrash123
post Sep 2 2015, 12:35 AM

Getting Started
**
Junior Member
271 posts

Joined: Aug 2011
QUOTE(Ronzph @ Sep 1 2015, 05:24 PM)
This --> MSI 980TI Lightning

Any1 Getting this ???
*
Too expensive, and GM200 is not the best overclocking chip. A reference GTX 980 Ti + G10 + H55 is enough if you don't like aftermarket coolers. You can get 55-60 Celsius max at OC load.

This post has been edited by crash123: Sep 2 2015, 12:36 AM
Moogle Stiltzkin
post Sep 2 2015, 03:22 AM

Look at all my stars!!
*******
Senior Member
4,454 posts

Joined: Jan 2003
QUOTE(ngkhanmein @ Sep 1 2015, 05:55 PM)
even though i'm an NV fanboy, i can't deny DX12 is going to favor the red side. previously the NV driver was superior, but now it's a different story. this time we owned DX11, but DX12, maybe next time lah.. sweat.gif
*


looking into this async compute issue.... how nvidia cards will perform in dx12 games has got me thinking twice about pascal.

i'm just gonna have to wait and see what the reviewers say on that matter.

when it comes out and they test it on Ashes of the Singularity or another, newer dx12 game, then we can know whether pascal will last me a few years (with being able to play dx12 games well in mind) or not hmm.gif

if not, then the choice would be to go for amd (which apparently was not a total flop, because they actually did async compute the right way, despite being a power guzzler), whereas nvidia probably used some emulation method that could do it, albeit poorly hmm.gif

nvidia telling Oxide to disable async compute ... does not inspire much confidence sad.gif

This post has been edited by Moogle Stiltzkin: Sep 2 2015, 03:23 AM
TSskylinelover
post Sep 2 2015, 06:10 AM

Future Crypto Player😄👊Driver Abamsado😎😎
********
All Stars
11,244 posts

Joined: Jul 2005
QUOTE(Moogle Stiltzkin @ Sep 2 2015, 03:22 AM)


looking into this async compute issue.... how nvidia cards will perform in dx12 games has got me thinking twice about pascal.

i'm just gonna have to wait and see what the reviewers say on that matter.

when it comes out and they test it on Ashes of the Singularity or another, newer dx12 game, then we can know whether pascal will last me a few years (with being able to play dx12 games well in mind) or not hmm.gif

if not, then the choice would be to go for amd (which apparently was not a total flop, because they actually did async compute the right way, despite being a power guzzler), whereas nvidia probably used some emulation method that could do it, albeit poorly hmm.gif

nvidia telling Oxide to disable async compute ... does not inspire much confidence sad.gif
*
dammit, i'm buying a console after this laugh.gif doh.gif no DX12 racing games will be out anytime soon hahahaha
Demonic Wrath
post Sep 2 2015, 08:50 AM

My name so cool
******
Senior Member
1,667 posts

Joined: Jan 2003
From: The Cool Name Place

Since no graphics hardware vendor currently has a card that fully supports DX12, just wait till 2016 or 2017 to get a card with full DX12 feature support.. but then again, maybe by then there will be DX12.1 features?

Edit: For those who really need full DX12 support, just wait till DX12 games are released first, then look at the benchmarks..

This post has been edited by Demonic Wrath: Sep 2 2015, 09:35 AM
Unseen83
post Sep 2 2015, 10:45 AM

TooFAT4U 2Handle!
*******
Senior Member
2,337 posts

Joined: Dec 2008
From: KING CANNEL JB


QUOTE(Demonic Wrath @ Sep 2 2015, 08:50 AM)
Since both all graphics hardware vendor doesn't have a card that can fully support DX12, just wait till 2016 or 2017 to get a full DX12 feature support card.. but then again, maybe that time got DX12.1 feature?

Edit: For those who really needs full DX12 support, just wait till DX12 games are released first then only see benchmarks..
*
but but.. AMD never claimed to fully support DX12, meanwhile NVIDIA proudly claims its Maxwell "FULLY supports DX12".. but like you said, Nvidia owns 82% of the market, so they have the money to maybe pay off game devs to delay using dx12 while they conjure up pascal. in the end, it's about one dishonest company fooling its customers/fans... sad.gif
Moogle Stiltzkin
post Sep 2 2015, 11:03 AM

Look at all my stars!!
*******
Senior Member
4,454 posts

Joined: Jan 2003
QUOTE
Well its not as simple as just wait for a comment.

Here is the truth. Nvidia claimed Async Compute, technically they could argue since they have 32 compute engines they could be considered async however they do not actually fully work the same way people refer to on AMD's Async compute when we talk with async shaders.

Its more like Nvidia was misleading people like they did with the 4gb vram rather than straight up lie. Saying async but having only serial compute is kinda wrong to claim.

https://www.reddit.com/r/nvidia/comments/3j...nvidia_cant_do/


:/

dx11 nvidia clearly wins, but come dx12, with all the performance tricks you can do, this may change doh.gif needs a closer look when benchmarks are out. but judging by all the technical mumbo jumbo, it's not looking good for the 980ti doh.gif

sure, hardly any dx12 games are out yet, but most people buy a card to last 4, maybe 5 years. so you wouldn't want to swap out your card sooner than that just to play a dx12 game as well as it should be played hmm.gif

maybe the performance penalty isn't too bad, or better yet, pascal fixes this? so have to wait for benchmarks doh.gif
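The serial-versus-async distinction being argued over here can be illustrated with a toy timing model (my own sketch, not actual GPU scheduling): if graphics and compute work can run concurrently, a frame's time approaches the longer of the two workloads instead of their sum.

```python
def frame_time_serial(gfx_ms, compute_ms):
    """Serial execution: compute work queues behind graphics work."""
    return gfx_ms + compute_ms

def frame_time_async(gfx_ms, compute_ms):
    """Idealized async compute: the two workloads fully overlap (no contention)."""
    return max(gfx_ms, compute_ms)

gfx, compute = 10.0, 4.0  # hypothetical per-frame workloads in milliseconds
print(frame_time_serial(gfx, compute))  # 14.0 ms
print(frame_time_async(gfx, compute))   # 10.0 ms
```

In this hypothetical case async compute shaves the frame from 14 ms to 10 ms; the real-world gain depends on how much compute work the engine actually issues and how well the hardware overlaps it.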

This post has been edited by Moogle Stiltzkin: Sep 2 2015, 11:07 AM
Rei7
post Sep 2 2015, 11:09 AM

Game, anime and headphones ❤️
******
Senior Member
1,669 posts

Joined: Apr 2011



QUOTE(Moogle Stiltzkin @ Sep 2 2015, 11:03 AM)
https://www.reddit.com/r/nvidia/comments/3j...nvidia_cant_do/
:/

dx11 nvidia clearly wins, but come dx12, with all the performance tricks you can do, this may change doh.gif needs a closer look when benchmarks are out. but judging by all the technical mumbo jumbo, it's not looking good for the 980ti doh.gif
*
980ti only? hmm.gif
eatsleepnDIE
post Sep 2 2015, 11:29 AM

Getting Started
**
Junior Member
168 posts

Joined: Nov 2007


QUOTE(Rei7 @ Sep 2 2015, 11:09 AM)
980ti only?  hmm.gif
*
all 9 series i guess.
SUSngkhanmein
post Sep 2 2015, 11:35 AM

カラカラ Karakara
*******
Senior Member
7,727 posts

Joined: Jan 2010
From: Ara Damansara, Petaling Jaya & Batu Pahat, Johor.


QUOTE(Unseen83 @ Sep 2 2015, 10:45 AM)
but but.. AMD never Claim to fully Support DX12, meanwhile Nvidio proudly claim it's Maxwell FULLY DX12 support" but like you said.. Nvidia  own 82% market so got all money so maybe they can pay off game dev to delay using dx12.. while they conjure up pascal, but at the end is about one dishonest company fool its customer/fan...  sad.gif
*
i've seen your posts so many times; this time i fully agree with your statement. NV purposely pays them to delay. i believe Pascal will fully support DX12, but for Maxwell users, maybe next time.

maxwell only partially supports DX12, for testing purposes. this is a game releasing soon which favors AMD, but look at NV's performance, a real disaster.

i'm curious, can the 980Ti hit 6GB of vram at peak??? hmm.gif

i remember NV's Tom claimed the 970 and above fully support DX12
Moogle Stiltzkin
post Sep 2 2015, 12:23 PM

Look at all my stars!!
*******
Senior Member
4,454 posts

Joined: Jan 2003
QUOTE(Rei7 @ Sep 2 2015, 11:09 AM)
980ti only?  hmm.gif
*
not sure about the others. i only care about the best single gpu nvidia has atm doh.gif

even with this dx12 concern over async compute, an rts game that spawns many units, like AotS, demonstrates how the genre could gain efficiency/performance via dx12. the gains from async compute, if the developer uses it, seem quite good on paper :]

it will come down to fps and latency, and whether the difference is big enough to warrant switching camps


QUOTE
Results, Heavy
This set of benchmarks uses only the frame times and averages from the “heavy” third of the benchmark scenes. In theory, this should put more emphasis on the DX12 and CPU implementation for each combination of hardware.


http://www.pcper.com/reviews/Graphics-Card...abs_block_tab=1


the benchmark clearly shows the amd cards have huge jumps in performance in dx12 compared to dx11. so although the difference in the end result between the two cards is a small margin, it raises the question: could nvidia's performance have been much better had they used proper async compute like amd? nvidia's async compute is now being scrutinized for being inferior to amd's, judging by this result.

it might be that in pascal the dx12 leveraging will be the same as maxwell, and this is what people are waiting to have answered :/

but if the results are more or less the same, dx11 will be better than amd, and dx12 only slightly worse than amd, so nvidia may still be the better choice, albeit disappointing that they didn't fully leverage dx12 async compute (because the benchmark shows it does make a difference if a game decides to use it, e.g. rts especially).

for VR, people may prefer amd for the lower latency compared to nvidia. not sure the latency difference will be big enough to be a problem for non-VR games in regards to async compute hmm.gif



Demonic Wrath
post Sep 2 2015, 12:34 PM

My name so cool
******
Senior Member
1,667 posts

Joined: Jan 2003
From: The Cool Name Place

From the AotS benchmark, it seems that NVIDIA's DX11 without async support performs faster than, or similarly to, AMD's DX12 with async compute. (source) edit: see frame rate by batch type.

Secondly, Anandtech (source) previously showed that NVIDIA benefits from a DX12 implementation too. Star Swarm is also developed by Oxide.

This post has been edited by Demonic Wrath: Sep 2 2015, 12:35 PM
Unseen83
post Sep 4 2015, 05:45 PM

TooFAT4U 2Handle!
*******
Senior Member
2,337 posts

Joined: Dec 2008
From: KING CANNEL JB


QUOTE(ngkhanmein @ Sep 2 2015, 11:35 AM)
i've seen your posts so many times; this time i fully agree with your statement. NV purposely pays them to delay. i believe Pascal will fully support DX12, but for Maxwell users, maybe next time.

maxwell only partially supports DX12, for testing purposes. this is a game releasing soon which favors AMD, but look at NV's performance, a real disaster.

i'm curious, can the 980Ti hit 6GB of vram at peak??? hmm.gif

i remember NV's Tom claimed the 970 and above fully support DX12
*
or it could be a lie from Oxide and AMD working together to bring down Nvidia... like how Nvidia tells game devs to add GameWorks features that cripple gpu performance hmm.gif



https://www.youtube.com/results?search_query=FaWbDpEuuk

This post has been edited by Unseen83: Sep 4 2015, 05:47 PM
SUSngkhanmein
post Sep 4 2015, 06:36 PM

カラカラ Karakara
*******
Senior Member
7,727 posts

Joined: Jan 2010
From: Ara Damansara, Petaling Jaya & Batu Pahat, Johor.




nvm, i still enjoy playing DX11 games. too many i can't even finish..
