nvidia says this game isn't mature enough to be considered a proper dx12 benchmark... so for now, just take the results with a grain of salt.
This post has been edited by Moogle Stiltzkin: Aug 28 2015, 12:23 PM
As expected. Just wondering what DX12 can do with SLI. There don't seem to be many videos and such on that yet.
from what i heard, dx12 makes multi-gpu setups much more compatible.
DirectX 12 Will Allegedly Allow Multi-GPU use Between Nvidia and AMD Cards
QUOTE
According to the source, the API will be able to combine different graphics resources and pool all those resources together. Rather than having multiple GPUs rendering an Alternate Frame (AFR) each, there is a new method called Split Frame Rendering (SFR) that is being introduced. With this feature, developers will be able to automatically, or manually, divide texture and geometry data between GPUs that will be able to work together on each frame and be designated a separate portion of the screen for each GPU.
Unlike AFR, which requires both cards to hold all of the data in their frame buffers, leaving the user with an effective 4GB frame buffer even though there are two cards with 4GB of memory each.
This will, says the source, significantly reduce latency.
Yet SFR isn’t new, as AMD’s Mantle API supports it, as do some applications out there (see how Mantle performs against DirectX 11). What is surprising is that the source went on to say that DirectX 12 will support all of this across different GPU architectures, allowing AMD Radeon and Nvidia GeForce GPUs to work together to render the same game.
However, while this sounds great, it will still be up to developers to make use of Explicit Asynchronous Multi-GPU Capabilities in their games and software.
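The AFR-vs-SFR memory point in the quote can be reduced to a toy calculation (illustrative Python only, not real D3D12 code; the function names are made up for this sketch):

```python
# Toy model of the memory difference described above (not real API code).
# AFR: every GPU mirrors the full working set, so the usable pool is
# capped by a single card. SFR: each GPU holds only its slice of the
# frame, so capacities can (roughly) pool.

def effective_vram_afr(cards_gb):
    # Each card needs a full copy, so the smallest card is the limit.
    return min(cards_gb)

def effective_vram_sfr(cards_gb):
    # Each card holds only part of the frame, so capacities add up.
    return sum(cards_gb)

two_cards = [4, 4]  # two 4GB cards, as in the quote
print(effective_vram_afr(two_cards))  # 4 -> still a "4GB frame buffer"
print(effective_vram_sfr(two_cards))  # 8 -> pooled under SFR
```

real pooling is of course messier than a `sum()`, but this is the gist of why SFR-style explicit multi-GPU is attractive.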
yeah, multi-gpu is more bang for your buck, but i still wouldn't go multi-gpu :} expensive. not to mention i don't think this will change multi-gpu setups being more dependent on driver updates compared to single-gpu solutions.
I don't think it will work with nvidia. Nvidia tends to do a lot of things in firmware and architecture, hence the lower cpu overhead from their drivers before.
So that's why you see unified memory/virtual memory etc. on their roadmap.
The dx12 feature set was finalized really quickly, and i'm pretty sure it threw some wrenches into nvidia's roadmap plans, since nvidia tends to do r&d two generations ahead.
oh that sucks :{
so... if it won't work with amd gpus... then what about other nvidia gpus?
when i attend this event, i'll see if they mention anything about that.
well, it was speculated that nvidia does have async compute, but that it's most likely emulated. considering this recent issue raised by oxide, that could explain their current results.
but that is still speculation, though a rather logical one judging by the results so far.
even though i'm an NV fanboy, i can't deny dx12 is going to favor the red side. previously the NV driver was superior, but now it's a different story. this time in dx11 we owned, but dx12 is another story lah..
looking into this async compute issue... how nvidia cards would perform in dx12 games has got me thinking twice about pascal.
i'm just gonna have to wait and see what the reviewers say on that matter.
when it comes out and they test it on ashes of the singularity or another newer dx12 game, then i can know whether pascal will last me a few years (with playing dx12 games well in mind) or not.
if not, then the choice would be to go for amd (which apparently was not a total flop, because they actually did async compute the right way despite being a power guzzler), whereas nvidia probably used some emulation method that could do it, albeit poorly.
nvidia telling the ashes devs to disable async compute... does not inspire much confidence.
This post has been edited by Moogle Stiltzkin: Sep 2 2015, 03:23 AM
Well, it's not as simple as just waiting for a comment.
Here is the truth: Nvidia claimed async compute. Technically they could argue that, since they have 32 compute engines, they could be considered async; however, those do not actually work the same way people mean by AMD's async compute when we talk about async shaders.
It's more like Nvidia was misleading people, like they did with the 4GB VRAM, rather than a straight-up lie. Claiming async while having only serial compute is kind of a wrong claim to make.
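the serial-vs-async distinction being argued here boils down to simple timing arithmetic. a toy sketch (hypothetical millisecond numbers, idealized overlap with no shader resource contention):

```python
# Serial compute: graphics and compute work run back to back.
# Async compute: the two overlap, so the longer workload dominates.

def frame_time_serial(graphics_ms, compute_ms):
    # Work is queued one after the other, so times simply add.
    return graphics_ms + compute_ms

def frame_time_async(graphics_ms, compute_ms):
    # Idealized: perfect overlap, so only the longer job matters.
    return max(graphics_ms, compute_ms)

g, c = 12.0, 4.0  # made-up per-frame workloads in milliseconds
print(frame_time_serial(g, c))  # 16.0 ms/frame -> "serial compute"
print(frame_time_async(g, c))   # 12.0 ms/frame -> true async overlap
```

real hardware never overlaps perfectly, but this is why a card that only serializes compute can claim the feature yet show no gain from it.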
in dx11 nvidia clearly wins, but come dx12, with all sorts of performance tricks you can do, this may change. it needs a closer look when benchmarks are out, but judging by all the technical mumbo jumbo, it's not looking good for the 980ti.
sure, hardly any dx12 games are out yet, but most people buy a card to last 4, maybe 5 years. so you wouldn't want to swap out your card sooner than that just to play a dx12 game as well as it should be played.
maybe the performance penalty isn't too bad, or better yet, pascal fixes this? have to wait for benchmarks.
This post has been edited by Moogle Stiltzkin: Sep 2 2015, 11:07 AM
not sure about others. i only care about the best single gpu nvidia has atm.
even with this dx12 concern over async compute, a game like AOTS (an rts that spawns many units) was demonstrating what the genre could be like when efficiency/performance is increased via dx12. the gains from async compute, if the developer uses it, seem quite good on paper :]
it will come down to fps and latency, and whether the difference is big enough to warrant switching camps.
QUOTE
Results, Heavy This set of benchmarks uses only the frame times and averages from the “heavy” third of the benchmark scenes. In theory, this should put more emphasis on the DX12 and CPU implementation for each combination of hardware.
the benchmark clearly shows the amd cards have huge jumps in performance in dx12 compared to dx11. so although the difference in the end result between the two cards is a small margin, it raises the question: could nvidia's performance have been much better had they done async compute properly like amd? nvidia's async compute is now being scrutinized for being inferior to amd's, judging by this result.
it might be that pascal's dx12 leveraging will be the same as maxwell's, and this is what people are waiting to have answered :/
but if the results are more or less the same, dx11 will be better than amd, and dx12 only slightly worse than amd, so nvidia may still be the better choice, albeit disappointing that they didn't fully leverage dx12 async compute (because the benchmark shows it does make a difference if a game decides to use it, rts games especially).
for VR, people may prefer amd for the lower latency compared to nvidia. i'm not sure the latency difference will be big enough to be a problem for non-VR games where async compute is concerned.
Please recommend me the lowest-priced nvidia card that's still enough to play all games/apps today?
the real question is what gpu can play the games you want at what quality settings.
if you don't mind medium or low graphics settings, you can get the cheapest gpu on the latest architecture, e.g. maxwell etc.
i think the cheapest is a gtx 950?
but considering dx12 async compute, amd might be a safer bet for better dx12 scaling performance... this is not conclusive just yet, but it seems that way so far.
If that is true, let's wait another 4 years for a volta Ti
how much fps performance can i expect going from a 680 to a pascal? playing Dragon Age Inquisition, my 680 at max settings cannot even handle 1080p resolution on a 24'' lcd..... time for an upgrade....
QUOTE(skylinelover @ Feb 7 2016, 11:53 AM)
If that is true, let's wait another 4 years for a volta Ti
if i'm not mistaken, i heard the timeframe from pascal to volta may be somewhere between 1-2 years.
QUOTE
Meanwhile Volta has been pushed back and stripped of its marquee feature. Its on-package DRAM has been promoted to the GPU before Volta, and while Volta still exists, publicly it is a blank slate. We do not know anything else about Volta beyond the fact that it will come after the 2016 GPU.
Which brings us to Pascal, the 2016 GPU. Pascal is NVIDIA’s latest GPU architecture and is being introduced in between Maxwell and Volta. In the process it has absorbed old Maxwell’s unified virtual memory support and old Volta’s on-package DRAM, integrating those feature additions into a single new product.
QUOTE
With today’s announcement comes a small degree of additional detail on NVIDIA’s on-package memory plans. The bulk of what we wrote for Volta last year remains true: NVIDIA uses on-package stacked DRAM, allowed by the use of TSVs. What’s new is that NVIDIA has confirmed they will be using JEDEC’s High Bandwidth Memory (HBM) standard, and the test vehicle Pascal card we have seen uses entirely on-package memory, so there isn’t a split memory design. Though we’d also point out that unlike the old Volta announcement, NVIDIA isn’t listing any solid bandwidth goals like the 1TB/sec number we had last time. From what NVIDIA has said, this likely comes down to a cost issue: how much memory bandwidth are customers willing to pay for, given the cutting edge nature of this technology?
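the bandwidth figures being discussed fall out of simple arithmetic: bandwidth ≈ bus width × per-pin data rate. a quick sketch (the 4096-bit/2 Gbps HBM combination is an assumed split that reproduces the old 1TB/s figure, not something stated in the article):

```python
def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    # GB/s = (bus width in bits * per-pin rate in Gbit/s) / 8 bits-per-byte
    return bus_width_bits * data_rate_gbps / 8

# GTX 1080: 256-bit bus of 10 Gbps GDDR5X
print(bandwidth_gb_s(256, 10))   # 320.0 GB/s
# A 4096-bit HBM interface at 2 Gbps per pin (assumed numbers)
print(bandwidth_gb_s(4096, 2))   # 1024.0 GB/s, i.e. the ~1TB/s figure
```

this also shows why wide on-package HBM is the big lever: even at a fifth of GDDR5X's per-pin rate, the 16x wider bus more than triples throughput.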
QUOTE
Finally, NVIDIA has already worked out some feature goals for what they want to do with NVLink 2.0, which would come on the GPU after Pascal (which by NV’s other statements should be Volta). NVLink 2.0 would introduce cache coherency to the interface and processors on it, which would allow for further performance improvements and the ability to more readily execute programs in a heterogeneous manner, as cache coherency is a precursor to tightly shared memory.
Wrapping things up, with an attached date for Pascal and numerous features now billed for that product, NVIDIA looks to have set the wheels in motion for developing the GPU they’d like to have in 2016. The roadmap alteration we’ve seen today is unexpected to say the least, but Pascal is on much more solid footing than old Volta was in 2013. In the meantime we’re still waiting to see what Maxwell will bring NVIDIA’s professional products, and it looks like we’ll be waiting a bit longer to get the answer to that question.
If you have any ti, just wait for a newer ti version. Make life simple
problem with this though, how much would the 1080ti be? 750usd?
even if it was more reasonably priced, i doubt it will be offered very soon after 1080..... also by the time it comes out, volta will be just over the horizon.....
and even after all this, it's just speculation that there will be a 1080ti o_o; we don't know for sure.
also, architecturally pascal seems to be maxwell on steroids. i suspect volta will be the biggest step, when they finally go hbm2. there was never originally supposed to be a pascal.
that said the pascal on overview seems to solve most of my issues
1. 10-bit support
2. HEVC decode/encode
3. async compute hardware for dx12 games
4. better performance than a 980ti (20%-ish difference?)
i'd probably get a pascal, but most likely an AIB model, because i'm not gonna pay 100usd more for founders.... especially since i'll most likely go water cooling. founders is pretty much paying for the fan cooler, which is well made, but which i most likely won't be using. not to mention AIB customs will be clocked higher, have more power connectors for overclocking (people speculated), AND be 100usd cheaper...
This post has been edited by Moogle Stiltzkin: May 18 2016, 07:36 AM
Well, unless you really want to squeeze out the remaining 30%, then go ahead. But the 980ti would suffice until some new ti editions are announced. Unless you really cannot live without 120fps..... And I'm pretty sure those who can get the 980ti wouldn't have any budget issues.
looking at the recent performance charts, it seems a 1080ti would mostly benefit 4k fps performance at higher settings.
but if i'm not mistaken, the 1080 can also do 30-40fps+ at 4k with high graphics settings. still, if you can hit 60fps+, that always makes a difference.
you can also do 4k with lower settings or with aa disabled (which most people do at 4k anyway). 4k may be doable as such on a 1080 without needing to wait for a 1080ti.
my point is, a 1080ti will mostly benefit 4k gaming. as far as resolutions below 4k are concerned, the 1080 seems to get the job done just fine. so you'd be waiting on a 1080ti, if it even shows up, and presumably paying much more than a 980ti, considering the 1080 was priced higher than the 980 at launch, wasn't it?
speaking from the viewpoint of someone who buys their cards to last 4-6 years before the next gpu upgrade. also, i'm using a gtx 680, which was released in 2012, so i'm overdue for an upgrade.
PS: i think pcper pretty much summarized pascal as being not too far off from the maxwell architecture, especially that, performance-wise, it's a 1:1 ratio if you had the same number of units; they increase performance by adding more units on top of the other tweaks (in the video, scroll to 1:57, he explains it better).
as such, one could speculate that the bigger real change will come with volta, perhaps? in terms of fps performance boost, plus more bandwidth with HBM2, which 4k will most likely need (right now the 1080 uses compression to deal with the narrower bus, plus GDDR5X for the slightly higher bandwidth needed).
but even if that's the case, i already pointed out that for ultra 1080p and other resolutions below 4k, the 1080 and even the 980ti will be good enough. also don't forget 8gb vram is now the new standard with the intro of the 1080, and this is good enough even for the highest-end games. the 1080's edge over the 980ti is that the former may have gotten async compute done right this time around. as for image quality, the 1080 also has 10-bit HEVC encode, which will work well with my Dell U2413 10-bit monitor to reduce banding when watching my 10-bit videos.
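on the 10-bit banding point, the benefit is just level-count arithmetic: each extra bit doubles the shades per channel, so gradient steps get 4x finer going from 8-bit to 10-bit.

```python
def shades_per_channel(bits):
    # An n-bit channel can encode 2**n distinct intensity levels.
    return 2 ** bits

# Fewer levels means coarser steps across a gradient, which the eye
# sees as banding; more levels means smoother ramps.
print(shades_per_channel(8))   # 256 levels per channel
print(shades_per_channel(10))  # 1024 levels per channel, 4x finer steps
```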
my 50cents
This post has been edited by Moogle Stiltzkin: May 18 2016, 09:47 AM
Then the 1080 seems to be YOUR card of choice then. JUST DO IT! I just got my 980ti last month and I can say that for now I am gonna wait for another 4-6 years before getting another ti version. 1080p gaming more than enough lah.
i'm still waiting on reviews, to see if async compute was done right. once that's settled, i'll wait for the founders edition to clear off before getting the cheaper aib models.
early reviews seem good so far for my specific requirements, though.
This post has been edited by Moogle Stiltzkin: May 18 2016, 06:35 PM
tom clancy's the division at ultra settings, 4k resolution, 42fps... playable? yes, but it's not 60fps, let's put it that way. still, ultra settings at that reso o_o; hm, nice.
the 1080 is roughly 25% faster than a 980ti, if we round it up in terms of performance.
This post has been edited by Moogle Stiltzkin: May 18 2016, 07:46 PM
another reason to wait for AIBs. or at least wait for reviewers to compare them.
QUOTE
Capacitors are located directly underneath the GPU to smooth out any spikes. The PWM controller is on this side of the board as well (it was on the front previously). Again, that gives Nvidia's partners some freedom to add their own power boards with different PWM controllers.
Let’s get back to the voltage regulator's PWM controller, though. Nvidia’s GPU Boost 3.0 has a new set of voltage regulation requirements, resulting in significant changes. We would have expected a controller like International Rectifier's IR3536A to be paired with a pure 5+1-phase design. But that's not the case. A µP9511P is used instead, which is bad news for anyone who likes to overclock, since the interface and protocol used by tools such as MSI Afterburner and Gigabyte’s OC Guru won't work with this card. The change to a new controller, which isn’t very well documented at this point, is probably due to technical considerations.
It’ll be interesting to see how Nvidia’s GPU Boost 3.0 technology and the voltage regulation’s modified layout affect power consumption. You can bet we'll be measuring all of that shortly.
but you need to remember, that is the FOUNDERS edition, which carries the 100usd early-adopter markup.
Or maybe it's because the founders edition comes with a good fan that it costs more. But for the AIB customs that come slightly later, the cooling shouldn't be much worse than that, would it? Also, they tend to do stuff like overclocking the cards for slightly more performance too.
also, since i personally will most likely go down the water cooling route, it doesn't make sense to pay more for a stock fan i won't be using anyway.
anyhow, of interest to me was this analysis of ashes of the singularity (which is one of the earliest dx12 games to come out).
QUOTE
GeForce GTX 1080 holds onto its lead at 3840x2160, while both Fiji-based boards trounce Nvidia’s previous-gen hardware. The 980 Ti’s frame-to-frame variance issues are still clear to see, though we also see the Titan X popping up behind those yellow spikes as well. Meanwhile, the GTX 1080 exhibits very little of the judder that’d indicate inconsistent frame delivery.
so in comparison, did the amd cards have any judder at all, or is this an nvidia-specific issue only? or is it that nvidia's async compute didn't really solve the issues fully?
4k gaming at ultra settings. can it do it yet?
QUOTE
There’s not much else AMD or Nvidia can do to optimize their drivers for Battlefield 4—even at 4K, all of these cards facilitate smooth frame delivery. And while we’re not in love with average frame rates in the 30s or 40s, you could conceivably play at 3840x2160 on a GeForce GTX 980 Ti or Radeon R9 Fury X.
The thing is, both companies really wanted those previous-gen cards to be the first single-GPU solutions for 4K. And if you turned down certain detail settings, they were. But enthusiasts want to enjoy PC games with their quality sliders cranked up. By beating its GeForce GTX 980 Ti by almost 33%, Nvidia does make Battlefield 4 enjoyable at 4K using the Ultra preset. But does that generalization apply to newer, more taxing titles? Or are you still looking at a dual-GPU setup for fast-enough frame rates?
Founders Edition = the reference card, as it was known in previous generations of Nvidia cards.
If you want to go for watercooling, best to choose those with the reference PCB; EK has produced a waterblock for it, and it will be on the market together with the GTX 1080.
what is the difference between nvidia's founders edition and the aib reference cards?
would the aib reference have as good an oc potential as the aib custom designs? to my understanding, waterblocks will be less likely to be compatible with the non-reference models.
any tips
PS: fan of ekb
This post has been edited by Moogle Stiltzkin: May 19 2016, 08:13 PM