
 NVIDIA GeForce Community V16 (welcum pascal), ALL HAIL NEW PASCAL KING GTX1080 out now

davidletterboyz
post Jul 11 2016, 12:39 AM

Der Kaiser
*******
Senior Member
4,672 posts

Joined: Jan 2003


QUOTE(goldfries @ Jul 10 2016, 09:38 PM)
but hor, actually GTX 970 performs around the GTX 970, far from being anywhere near the GTX 980.

That aside, what you said is correct - around GTX 980 performance, but that doesn't mean it beats it. More like in between the 970 and 980.

btw, to beat the RX 480 it doesn't have to match the GTX 980. It just has to give the same or better price-to-performance ratio.
*
You mean the RX 480 performs around the GTX 970?
Yes, it doesn't necessarily need to match the GTX 980, as long as it beats the RX 480 lol.
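The price-to-performance test goldfries mentions is just fps per ringgit. A quick sketch with placeholder numbers (neither the prices nor the fps here are real figures):

CODE
// Compare two cards on fps per ringgit; made-up inputs.
#include <cstdio>

int main() {
    // Hypothetical average fps across a test suite, and street price in RM.
    double fps480  = 70.0, price480  = 1100.0;   // RX 480 (placeholder)
    double fps1060 = 75.0, price1060 = 1299.0;   // GTX 1060 (placeholder)

    double ppr480  = fps480  / price480;   // fps per ringgit
    double ppr1060 = fps1060 / price1060;

    std::printf("RX 480  : %.4f fps/RM\n", ppr480);
    std::printf("GTX 1060: %.4f fps/RM\n", ppr1060);
    std::printf("%s wins on price/performance\n",
                ppr1060 > ppr480 ? "GTX 1060" : "RX 480");
    return 0;
}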
XeactorZ
post Jul 11 2016, 12:40 AM

♥ PandaDog ♥
*********
All Stars
31,612 posts

Joined: Aug 2010
QUOTE(babykids @ Jul 11 2016, 12:38 AM)
my bad  doh.gif  sorry sorry
*
haha it's okay, waiting for AMD to release the Vega series,
else it can't compete with the 1070 and 1080 laugh.gif
babykids
post Jul 11 2016, 12:42 AM

Getting Started
**
Junior Member
168 posts

Joined: Aug 2007
QUOTE(goldfries @ Jul 10 2016, 10:42 PM)
Oh I will be doing live session for the GTX 1060. I just don't have the date yet, maybe this week. Hopefully Wed / Thu.
*
What are the terms and conditions for a reviewer card? Is there a standard time before you can reveal the info? Do they provide a new driver for testing?

sorry for asking. biggrin.gif
goldfries
post Jul 11 2016, 01:06 AM

40K Club
Group Icon
Forum Admin
44,415 posts

Joined: Jan 2003




QUOTE(davidletterboyz @ Jul 11 2016, 12:39 AM)
You mean the RX 480 performs around the GTX 970?
Yes, it doesn't necessarily need to match the GTX 980, as long as it beats the RX 480 lol.
*
Sorry typo there. Yes RX 480 ~ GTX 970.
goldfries
post Jul 11 2016, 01:07 AM

40K Club
Group Icon
Forum Admin
44,415 posts

Joined: Jan 2003




QUOTE(babykids @ Jul 11 2016, 12:42 AM)
What are the terms and conditions for a reviewer card? Is there a standard time before you can reveal the info? Do they provide a new driver for testing?

sorry for asking.  biggrin.gif
*
Case by case. If it's not yet in the market then we have to hush-hush.

Yes, a new driver is provided IF it's required, like with the RX 480. We were given the driver before it was available to the public.

babykids
post Jul 11 2016, 01:08 AM

Getting Started
**
Junior Member
168 posts

Joined: Aug 2007
QUOTE(goldfries @ Jul 11 2016, 01:07 AM)
Case by case. If it's not yet in the market then we have to hush-hush.

Yes, a new driver is provided IF it's required, like with the RX 480. We were given the driver before it was available to the public.
*
so you know how the GTX 1060 is doing  brows.gif
khelben
post Jul 11 2016, 01:17 AM

I love my mum & dad
*******
Senior Member
6,056 posts

Joined: Jan 2003
From: Suldanessellar



QUOTE(babykids @ Jul 10 2016, 11:37 PM)
I am not feeling good about AMD's chip architecture this gen. Don't hope too much for Vega either.
*
AMD's current philosophy is, in my opinion, a little "neither here nor there" for PC gaming. Their GCN architecture is very compute-optimized, good for parallel workloads (probably why they make good mining cards?), but most PC games are not designed that way. nVidia's architecture matches how PC games are actually designed, hence their cards are a lot more efficient, especially since AMD needs more transistors to get the same output. Perhaps that is why AMD's power efficiency is nowhere near nVidia's.

But console game developers do use async compute to make their games run more efficiently (all consoles have AMD GPUs), so AMD makes chips that support it. And I am guessing that AMD wants to save cost by not redesigning a whole new architecture for the PC gaming market, instead adopting a lot of the stuff from their console chips. AMD, after all, has far less R&D money than nVidia.

DX11, as far as I know, doesn't have asynchronous compute support, and AMD had been pushing Microsoft to come up with an updated API that does. That took a bit too long, so AMD made Mantle (later given to Khronos, where it became the basis of Vulkan). And then Microsoft went "okay okay we have DX12 now" laugh.gif

So when game developers actually use async compute (DX12) in their games, AMD cards perform really well. Like Hitman, Quantum Break, Total War: Warhammer, and Ashes of the Singularity. In guru3d's review, the RX 480 actually runs faster than the GTX 980 in those games. Rise of the Tomb Raider is an exception; I read somewhere that it only applies pseudo-DX12 stuff, gotta read up on that more.

So yeah, it really depends on game developers, but for the current market and probably at least the next 2 years, nVidia > AMD.

It'll be interesting to see how the new cards from both camps run on Vulkan.
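On DX12, async compute in practice means feeding the GPU from a second, COMPUTE-type queue alongside the usual graphics queue, so compute work can overlap with draws. A minimal sketch of that hook (assumes an already-created ID3D12Device, error handling omitted; not from any shipping engine):

CODE
// Two DX12 command queues: the classic graphics queue plus an
// async-compute queue the GPU may service in parallel.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& gfxQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    // The classic queue: draws, copies, everything (DX11 only ever
    // had the implicit equivalent of this one).
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // The async-compute queue: dispatches submitted here may overlap
    // with graphics work; this is the kind of parallel feed GCN's
    // ACEs are designed to chew through.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));
}

Whether that second queue actually helps is exactly the AMD vs nVidia story above: the API only exposes the queues; the hardware decides how well it overlaps them.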

goldfries
post Jul 11 2016, 01:20 AM

40K Club
Group Icon
Forum Admin
44,415 posts

Joined: Jan 2003




QUOTE(babykids @ Jul 11 2016, 01:08 AM)
so you know how the GTX 1060 is doing  brows.gif
It hasn't arrived at my lab yet.

svfn
post Jul 11 2016, 01:21 AM

On my way
****
Junior Member
500 posts

Joined: Oct 2015
From: Penang
Rise of the Tomb Raider isn't built from the ground up for DX12, so maybe that's why async only improves it a bit.

BF1 will be the next DX12 game, not sure if it has async or not, and maybe Mankind Divided too.
khelben
post Jul 11 2016, 01:21 AM

I love my mum & dad
*******
Senior Member
6,056 posts

Joined: Jan 2003
From: Suldanessellar



QUOTE(letze @ Jul 10 2016, 11:38 PM)
In real life I actually meet many people who think "AMD = Gaming". They believe any AMD product is related to gaming and Intel is for work/office.  doh.gif

On another note, the 1060 is a steal if priced at RM1299, especially if you can apply a voucher when buying online  icon_idea.gif
*
Are they old? Because AMD = Gaming was actually quite true in the early to mid 2000s laugh.gif K7 to K10 performed better in games than their Pentium 4 counterparts, all the way through the Athlon 64 X2 vs Prescott days.

The tables turned when Intel unveiled Conroe.
adilz
post Jul 11 2016, 01:22 AM

Getting Started
**
Junior Member
267 posts

Joined: Oct 2007
From: Kuala Lumpur, Malaysia


QUOTE(stringfellow @ Jul 10 2016, 01:14 AM)
Therein lies the caveat: "games that scale on CFX/SLI". You're banking on the hope that CFX/SLI works; when it doesn't, you're back to one card and, worse still, your card is midrange. That is why one single powerful card > two midrange ones. And the frame-pacing problem still remains. "It's not about how fast you go, it's how well you go fast!" You can go fast by adding two engines instead of one, but if the ride is bumpy as hell, you're not gonna enjoy it compared to a more powerful single engine. wink.gif
*
I agree. SLI/CFX is entirely a hit-and-miss thing. In the end, it all boils down to what games you play. I'm just looking at this from the viewpoint of a person on a budget who can only afford to fork out for an RX 480 or the upcoming GTX 1060, and may be able to save a bit within 6 months to a year for an upgrade. Since the GTX 1060 is not going to have SLI, I'll take the RX 480 as an example.

[Benchmark charts: two games where RX 480 CFX performs decently, on par with a GTX 1080 and an overclocked GTX 1070]

[Benchmark charts: two games where you might as well take the second RX 480 and burn it, because it's totally useless]

So if that person plays the 2 games that do well in CFX, they have the option to get another RX 480 to CrossFire, or they can sell the RX 480 and get a better single GPU. If they play the 2 games that are crap at CFX, the only upgrade option is to sell it off and get a better single card. It's about options.

In the case of the GTX 1060, I firmly believe performance will be better than the GTX 970, and maybe even on par with the GTX 980. And if it had SLI capability, I believe it would do even better than RX 480 CFX. But that option is not going to be available, because it would jeopardize GTX 1070 and 1080 sales. Profit Margin 1 - 0 Budget Consumer.

And same thing with the frame-pacing issues. It's about managing the tolerance between the fps increase from SLI and the frame-pacing problem. If the frame-pacing problem were inherent and so bad in all games that run SLI, I think nvidia would also forgo SLI for the GTX 1070 and GTX 1080. But some games have already shown a marked improvement in frame pacing with the new Pascal arch.

[Frame-time chart: Maxwell SLI's earthquake.]

[Frame-time chart: Pascal GTX 1070 SLI.]

I believe Nvidia will continue to improve their SLI implementation so that the benefit of increased fps (both avg and min) far outweighs the frame-pacing issues. I heard rumors that the GTX 1080 Titan will be 50% faster than the GTX 1080. Based on current SLI benchmarks, I'll probably get another GTX 1080 to SLI rather than sell my current GTX 1080 for a single GTX 1080 Titan.
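To put the "earthquake" in numbers: two setups can post nearly the same average fps while one of them micro-stutters badly, which is why reviewers plot frame times rather than just fps. A toy sketch with made-up frame times (illustration only, not benchmark data):

CODE
// Same ~60 fps average, very different frame pacing.
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    // Hypothetical frame times in milliseconds.
    std::vector<double> smooth = {16, 17, 16, 17, 16, 17, 16, 17};
    std::vector<double> bumpy  = { 8, 25,  8, 26,  8, 25,  9, 24}; // micro-stutter

    auto report = [](const char* name, std::vector<double> ft) {
        double sum = 0;
        for (double t : ft) sum += t;
        double avgFps = 1000.0 * ft.size() / sum;
        std::sort(ft.begin(), ft.end());
        double worstMs = ft.back();            // worst single frame
        double minFps  = 1000.0 / worstMs;     // the "min fps" reviews quote
        std::printf("%s: avg %.1f fps, min %.1f fps, worst frame %.0f ms\n",
                    name, avgFps, minFps, worstMs);
    };
    report("single card    ", smooth);
    report("badly paced SLI", bumpy);
    return 0;
}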




babykids
post Jul 11 2016, 01:27 AM

Getting Started
**
Junior Member
168 posts

Joined: Aug 2007
QUOTE(goldfries @ Jul 11 2016, 01:20 AM)
It hasn't arrived at my lab yet.
*
your Facebook link please  rclxms.gif  will follow you
adilz
post Jul 11 2016, 01:28 AM

Getting Started
**
Junior Member
267 posts

Joined: Oct 2007
From: Kuala Lumpur, Malaysia


QUOTE(Demonic Wrath @ Jul 10 2016, 01:26 AM)
You sold your GTX970 SLI for RM 1,200 (RM 600 each)?

*
Sorry if I put it wrongly. I meant an RM 1,200 discount from my purchase price. Sold both for around RM 2,400, which is pretty good since suppliers are currently fire-selling the available GTX 970 stock. Got myself a Gigabyte GTX 1080 G1 Gaming. Hoping for the GTX 1080 Titan to come out soon, so that I can buy another GTX 1080 to SLI. I know one person who might sell his GTX 1080 once the GTX 1080 Titan comes out tongue.gif tongue.gif tongue.gif
babykids
post Jul 11 2016, 01:32 AM

Getting Started
**
Junior Member
168 posts

Joined: Aug 2007
QUOTE(adilz @ Jul 11 2016, 01:28 AM)
Sorry if I put it wrongly. I meant an RM 1,200 discount from my purchase price. Sold both for around RM 2,400, which is pretty good since suppliers are currently fire-selling the available GTX 970 stock. Got myself a Gigabyte GTX 1080 G1 Gaming. Hoping for the GTX 1080 Titan to come out soon, so that I can buy another GTX 1080 to SLI. I know one person who might sell his GTX 1080 once the GTX 1080 Titan comes out  tongue.gif  tongue.gif  tongue.gif
*
The GTX 1080 Titan would really be a 4K card thumbup.gif
stringfellow
post Jul 11 2016, 01:39 AM

Ultrawide | 4K | VR
********
Senior Member
11,305 posts

Joined: Jan 2003
QUOTE(khelben @ Jul 11 2016, 01:17 AM)
AMD's current philosophy is, in my opinion, a little "neither here nor there" for PC gaming. [snip]
*
It's as if AMD is encouraging "console ports" to come to PC, when PC gamers absolutely DO NOT want console ports. And AMD is not willing to gamble on a "big chip" card now because not many games support DX12; if DX12 take-up fails and more devs go OpenGL or Vulkan, their "big chip" cards would be branded a total disaster and a waste of money for the kind of performance they give out.

QUOTE(adilz @ Jul 11 2016, 01:22 AM)
[snip]

*
Even if the SLI/CFX benefit is there, the earthquake-like frame pacing isn't worth it for midrange cards, in both cases, SLI or CFX. SLI/CFX counts for a very niche few, less than 1% based on the reports posted. That is why most people who SLI or CFX do it with higher-end cards like the 290X/390X/Fury X or 980/Ti/Titan X, because if SLI/CFX doesn't work, they still have relatively powerful single-card performance to live with.

Explicit Multi-Adapter is also one of the touted improvements in DX12, but again, given the current rate at which AMD cards are being released, it is holding things back altogether. And it's not a "1+1 = 2"; there's no 100% scaling there.
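That scaling point is easy to make concrete: efficiency is dual-card fps divided by twice the single-card fps, and 100% would be the "1+1 = 2" ideal. A back-of-envelope sketch with hypothetical fps figures (not from any review):

CODE
// Multi-GPU scaling efficiency for two hypothetical games.
#include <cstdio>

int main() {
    struct Game { const char* name; double singleFps; double dualFps; };
    // Made-up numbers, purely illustrative.
    Game games[] = {
        {"scales well ", 60.0, 108.0},  // ~90% scaling
        {"scales badly", 60.0,  62.0},  // second card nearly wasted
    };
    for (const Game& g : games) {
        double efficiency = g.dualFps / (2.0 * g.singleFps);
        std::printf("%s: %.0f -> %.0f fps, scaling %.0f%%\n",
                    g.name, g.singleFps, g.dualFps, 100.0 * efficiency);
    }
    return 0;
}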


goldfries
post Jul 11 2016, 01:40 AM

40K Club
Group Icon
Forum Admin
44,415 posts

Joined: Jan 2003




QUOTE(babykids @ Jul 11 2016, 01:27 AM)
your Facebook link please  rclxms.gif  will follow you
*
I do live video on my page https://www.facebook.com/goldfries.fanpage/

Hopefully I can do the GTX 1060 this week.

I'm not on Nvidia's priority list, so there's nothing I can do. I count on AIB partners to work with me.
davidletterboyz
post Jul 11 2016, 01:55 AM

Der Kaiser
*******
Senior Member
4,672 posts

Joined: Jan 2003


QUOTE(khelben @ Jul 11 2016, 01:17 AM)
AMD's current philosophy is, in my opinion, a little "neither here nor there" for PC gaming. [snip]
*
Are you sure? Everywhere I read, it's CUDA efficiency > OpenCL for the same workload. I'm not sure what kind of implementation Bitcoin mining programs use. Are AMD GPUs faster there?

Basically a GPU is still a vector processor, and the workload is definitely parallel in nature. The efficiency of the two designs is much more complicated than that.
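For what it's worth, the programming model both camps expose is the same data-parallel one: a tiny kernel executed once per element, which is what makes a GPU behave like a wide vector processor. A minimal vendor-neutral OpenCL sketch (error checks omitted; assumes an OpenCL 1.x GPU driver is installed). Whether CUDA runs the equivalent kernel more efficiently comes down to drivers and architecture, not the model itself:

CODE
// Vector add: one work-item ("lane") per element.
#include <CL/cl.h>
#include <cstdio>

static const char* kSrc =
    "__kernel void vadd(__global const float* a,\n"
    "                   __global const float* b,\n"
    "                   __global float* c) {\n"
    "    size_t i = get_global_id(0);\n"      // each lane grabs its own index
    "    c[i] = a[i] + b[i];\n"
    "}\n";

int main() {
    enum { N = 1024 };
    static float a[N], b[N], c[N];
    for (int i = 0; i < N; ++i) { a[i] = float(i); b[i] = 2.0f * i; }

    cl_platform_id plat;  clGetPlatformIDs(1, &plat, nullptr);
    cl_device_id   dev;   clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, nullptr);
    cl_context       ctx = clCreateContext(nullptr, 1, &dev, nullptr, nullptr, nullptr);
    cl_command_queue q   = clCreateCommandQueue(ctx, dev, 0, nullptr);

    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof a, a, nullptr);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof b, b, nullptr);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, nullptr, nullptr);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, nullptr, nullptr);
    clBuildProgram(prog, 1, &dev, nullptr, nullptr, nullptr);
    cl_kernel k = clCreateKernel(prog, "vadd", nullptr);
    clSetKernelArg(k, 0, sizeof da, &da);
    clSetKernelArg(k, 1, sizeof db, &db);
    clSetKernelArg(k, 2, sizeof dc, &dc);

    size_t global = N;  // N lanes in flight across the GPU
    clEnqueueNDRangeKernel(q, k, 1, nullptr, &global, nullptr, 0, nullptr, nullptr);
    clEnqueueReadBuffer(q, dc, CL_TRUE, 0, sizeof c, c, 0, nullptr, nullptr);

    std::printf("c[10] = %.0f (expect 30)\n", c[10]);
    return 0;
}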
adilz
post Jul 11 2016, 02:05 AM

Getting Started
**
Junior Member
267 posts

Joined: Oct 2007
From: Kuala Lumpur, Malaysia


QUOTE(khelben @ Jul 11 2016, 01:17 AM)
So yeah, it really depends on game developers, but for the current market and probably at least the next 2 years, nVidia > AMD.
*
Yeap, probably 2 years or even longer, depending on how fast Windows 10 replaces all the Win 8 and Win 7 computers. As long as there's a significant number of Win 7/8 gaming PCs in use, game developers will have to create games for both DX11 and DX12.

My conspiracy theory is that Microsoft wants DX12 to be ubiquitous, partly for its console business. With DX12 as the underlying API for both their Xbox One and PC games, they can entice game developers to jump onto their UWP platform. Develop a game on a single platform, sell it to both Xbox and PC gamers. For users, maybe something like: you have one Microsoft account, and you can download a game and play it on your Xbox console or your PC. Microsoft has already started trying to give PC gamers a taste of console games with Forza Apex and Killer Instinct, which are exclusive to Windows 10 (DX12?). I read rumors around 2 months back that Battlefield 5 could be Windows 10/DX12 exclusive, but I don't think it will be. That would be stupid, as there are still a lot of Windows 7/8 DX11 PCs (Steam statistics early this year showed 41% of users on Windows 10 and still ~37% on Windows 7). So Battlefield 5 will have both DX11 and DX12 support.

As for game developers, they can develop a game on a single or almost-identical API platform (and a low-level one at that, unlike DX11), so they don't have to spend time porting it to different platforms (not like right now: DX11, Xbox, Sony, PC). The future could be DX12 (both Xbox and PC) vs Sony PlayStation.
stringfellow
post Jul 11 2016, 02:12 AM

Ultrawide | 4K | VR
********
Senior Member
11,305 posts

Joined: Jan 2003
But see how much hate Microsoft gets from folks who refuse to update to Windows 10, not to mention how people hate the Windows Store and its UWP component. I keep mentioning playing Killer Instinct, but finding a local player who plays this game is like finding a diamond the size of the one on Losmah's ring finger.
svfn
post Jul 11 2016, 02:15 AM

On my way
****
Junior Member
500 posts

Joined: Oct 2015
From: Penang
You can see the popular GPUs and platforms currently in use on Steam here:
http://store.steampowered.com/hwsurvey

Windows 10 64-bit, according to the graph, seems to be on the rise.

A Frostbite dev did mention it long ago, but adoption is unfortunately still slow. We'll see with BF1 coming soon on the Frostbite Engine; it's likely to have DX11 support at least, with a DX12 toggle.
QUOTE
Would like to require Win10 & DX12/WDDM2.0 as a minspec for our holiday 2016 games on Frostbite, likely a bit aggressive but major benefits
https://twitter.com/repi/status/585556554667163648

This post has been edited by svfn: Jul 11 2016, 02:27 AM
