NVIDIA GeForce Community V15 (new era pascal), ALL HAIL NEW PASCAL KING GTX1080 out now
May 20 2016, 01:25 AM
All Stars
10,476 posts Joined: Jan 2003 From: Sarawak
Stock will only arrive mid/end June
May 20 2016, 05:44 AM
#1542
Senior Member
4,473 posts Joined: Jan 2003
QUOTE(llk @ May 19 2016, 08:21 PM) As confirmed by many reviewers, the 'Founders Edition' is actually what was previously known as the 'reference card'. This is probably one of the AIB cards that uses the reference PCB (same as the Founders Edition), which may be priced at USD599. Other more advanced custom cards (e.g. MSI Gaming, Asus Strix, Gigabyte Extreme etc.) I believe will be priced higher than USD599. My watercooling seller source told me he may have stock for the 1080 with a custom waterblock soon.
ty for the info. my only issue is.... if i do get the reference design, i'd rather avoid the Founders Edition and instead get an AIB reference design that will most likely cost 100usd less. for me, my budget is probably a non-Founders reference AIB with a water cooling block. looking at the performance charts, 1080p (or for me 1920x1200) performance at ultra settings will be fine in all games, and even 8gb vram is ample for my needs (with 2gb vram i keep getting crashes in TOS every now and then citing gpu vram as a likely source >_>; maybe i run too much stuff in the background, or my gpu is spoilt). 4k seems playable but it's not a steady 60 fps, more like 30-50fps from what i see in the charts (these are fps at 4k with ultra settings, by the way). probably playable, but the dream is 60 fps or more at 4k. me personally, as long as i can do 1920x1200 at ultra, i'm perfectly fine. also as a bonus, pascal comes with 10bit HEVC support (i'm assuming it does 10bit h264 as well?), which will be great for anime viewing via madVR/MPC-HC/LAV Filters for reduced banding, plus better 10bit colour, which my monitor is capable of. for a 980ti user i doubt going to a 1080 is really worth it, but for me, coming from a 2012 GTX 680, it's definitely worth it. probably even better if i could wait 2 more years for volta when HBM2 comes out, but i'm not too worried about that.
the only improvement i see for volta is mostly making 4k ultra settings playable at more consistent frame rates. other than that, pascal seems to do what i need it to do (assuming they didn't screw up async compute for dx12 games).
QUOTE As the first high-end card of this generation to launch, NVIDIA gets to set the pace for the market. At the risk of being redundant, the GTX 1080 is now the fastest single-GPU card on the market, and even at 4K it wins at every single gaming benchmark, typically by a good margin. In practice we're looking at a 31% performance lead over GTX 980 Ti – the card the GTX 1080 essentially replaces – with a similar 32% lead over AMD's Radeon R9 Fury X. Meanwhile against the slightly older GTX 980, that gap is 70%. On a generational basis this ends up being very close to the 74% jump in 4K performance going from the GTX 680 to GTX 980. And although the pricing comparison is not especially flattering for NVIDIA here, it should be evident that NVIDIA isn't just looking to sell GTX 1080 as an upgrade for high-end Kepler cards, but as an upgrade for GTX 980 as well, just 20 months after it launched.
QUOTE The lack of competition at the high-end means that for the time being NVIDIA can price the GTX 1080 at what the market will bear, and this is more or less what we're looking at for NVIDIA's new card. While the formal MSRP on the GTX 1080 is $599 – $50 over what the GTX 980 launched at – that price is the starting price for custom cards from NVIDIA's partners. The reference card as we've previewed it today – what NVIDIA is calling the Founders Edition card – carries a $100 premium over that, pushing it to $699.
http://www.anandtech.com/show/10326/the-nv...-1080-preview/2
This post has been edited by Moogle Stiltzkin: May 20 2016, 06:47 AM
May 20 2016, 07:25 AM
#1543
Senior Member
4,473 posts Joined: Jan 2003
by the way, when do the AIB reference cards come out? is it in june?
May 20 2016, 12:33 PM
Junior Member
400 posts Joined: Dec 2015
hi guys.. is it worth getting a GTX 750 Ti for RM420, brand new?
planning on playing Doom 2016 at medium settings on full HD (dunno if it can support that or not), since this card is already older.. from 2014..
May 20 2016, 01:15 PM
Senior Member
2,649 posts Joined: Nov 2010
QUOTE(skylinelover @ May 20 2016, 12:22 PM)
Jz graduated 2 years ago.. been using laptop ever since.. from my siggy.. After that bought laptop again, paired with the 2k display.. dayum the quality... I regret liao.. so I'm going to full desktop this year.. with dual 2k monitor setup
May 20 2016, 03:52 PM
#1550
Senior Member
4,473 posts Joined: Jan 2003
in the podcast discussing the 1080, they cover whether pascal is true async compute or not. it isn't, but they do bring up a good point: at the end of the day, in a game with async compute enabled, is there a situation where the 1080 would underperform, letting a weaker amd card beat it because of this? the answer seems to be no. whether they are simply pandering to nvidia as fanboys i'm not sure, hence why i would rather rely on multiple sources on this subject, but that's their opinion. to my understanding, they added two things for nvidia's way of handling async:
QUOTE Pascal improves the story dramatically for NVIDIA, though there will still be debate as to how its integration to support asynchronous compute compares to AMD's GCN designs. NVIDIA sees asynchronous computing as creating two distinct scenarios: overlapping workloads and time critical workloads. Overlapping workloads are used when a GPU does not fill its processing capability with a single workload alone, leaving gaps or bubbles in the compute pipeline that degrade efficiency and slow down the combined performance of the system. This could be PhysX processing for GeForce GPUs or it might be a post-processing step that a game engine uses to filter the image as a final step. In Maxwell, this load balancing had to work with a fixed partitioning model. Essentially, the software had to say upfront how much time of the GPU it wanted divided between the workloads in contention. If the workloads stay in balance, this can be an efficient model, but any shift in the workloads would mean either unwanted idle time or jobs not completing in the desired time frame. Pascal addresses this by enabling dynamic load balancing that monitors the GPU for when work is being added, allowing the secondary workload to use the bubbles in the system for compute.
QUOTE Does this mean that NVIDIA is on par or ahead of AMD in terms of asynchronous compute? It's hard to say, as the implementations are very different between the two architectures. AMD GCN still has the Asynchronous Compute Engines that use asynchronous shaders, allowing multiple kernels to execute on the GPU concurrently without preemption. AMD also recently introduced Quick Response Queues in the second generation GCN products that allow developers to specify higher priority async shaders that are time sensitive, like the Rift ATW. At the end of the day it comes down to the resulting performance of each product. We are working on a couple of interesting ways to test asynchronous compute capability of GPUs directly to see how things stack up from a scientific viewpoint, but when the rubber hits the road, which GPU gets you the highest frame rate and lowest latency? That we can test today.
Source: https://www.pcper.com/reviews/Graphics-Card...al-Gamers/Async
This post has been edited by Moogle Stiltzkin: May 20 2016, 04:14 PM
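The fixed-partitioning vs dynamic-load-balancing distinction in the PCPer quote can be sketched as a toy scheduler model. This is my own illustration, not NVIDIA's actual scheduler: the timeline length, occupancy rate, and workload sizes are made up, and the point is only to show why a static split wastes slices that dynamic balancing recovers.

```python
# Toy model of the two scheduling schemes described in the quote above:
# a GPU timeline of 100 time slices, where the graphics workload only
# occupies ~70% of them, leaving idle "bubbles" a compute job could use.
import random

random.seed(42)
SLICES = 100
# True = slice busy with graphics work, False = idle bubble
graphics = [random.random() < 0.7 for _ in range(SLICES)]
compute_needed = 40  # slices of compute work waiting in the queue

def fixed_partition(reserved_fraction):
    """Maxwell-style static split: a fixed share of the timeline is handed
    to compute up front, whether or not graphics actually needed it."""
    reserved = int(SLICES * reserved_fraction)
    compute_done = min(reserved, compute_needed)
    # graphics loses any busy slices that fell inside the reserved share
    graphics_done = sum(graphics[reserved:])
    return graphics_done, compute_done

def dynamic_balance():
    """Pascal-style dynamic balancing: compute only takes the bubbles,
    so graphics keeps every slice it actually uses."""
    bubbles = graphics.count(False)
    compute_done = min(bubbles, compute_needed)
    graphics_done = sum(graphics)
    return graphics_done, compute_done

print("fixed 30% reserve:", fixed_partition(0.30))
print("dynamic balancing:", dynamic_balance())
```

Under this model the dynamic scheme never costs graphics a slice, while the static reserve does whenever graphics work lands inside the reserved window — which is the "unwanted idle time or jobs not completing" trade-off the quote describes.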
May 20 2016, 04:01 PM
Senior Member
4,523 posts Joined: Apr 2006
QUOTE(xcxa23 @ May 20 2016, 12:17 PM) Me too.. I'm planning for 2k resolution. 2k to 4k seems like no difference to me.. lol.. I assume 2016 year end/early Jan 2017?
These days I've been meddling with Ultrawide gaming. 3440x1440 is just too immersive to give up, 4k can rot for now. Add to the fact that G-Sync works at this res, it's just pure gaming bliss.
May 20 2016, 04:10 PM
#1552
Senior Member
4,473 posts Joined: Jan 2003
QUOTE(SSJBen @ May 20 2016, 04:01 PM) These days I've been meddling with Ultrawide gaming. 3440x1440 is just too immersive to give up, 4k can rot for now. Add to the fact that gsync works at this res, it's just pure gaming bliss.
but even people whose monitors don't support 4k can use the mode that renders at 4k and converts it down to their monitor's native resolution. the benefit being an effect similar to AA? is it worth using 4k that way, or better to just use SMAA at 1080p without doing that?
May 20 2016, 04:16 PM
Senior Member
1,667 posts Joined: Jan 2003 From: The Cool Name Place
QUOTE(Moogle Stiltzkin @ May 20 2016, 04:10 PM) but even people whose monitors don't support 4k can use the mode that renders at 4k and converts it down to their monitor's native resolution. the benefit being an effect similar to AA? is it worth using 4k that way, or better to just use SMAA at 1080p without doing that?
If AA is available, it is better to use AA rather than brute-forcing it. When DSR is enabled, I personally find it rather blurry (at 33% smoothness in NVCP). So it ends up looking like FXAA, except the texture is sharper. But the performance cost is not worth it.
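What downsampling actually does to edges can be shown with a minimal sketch: render at a higher resolution, then average blocks of pixels down to the native grid. The averaging is exactly why edges look smoother, and also why the result can look slightly soft. This is a pure-Python toy with a plain box filter, not NVIDIA's actual DSR filter (which applies an adjustable Gaussian smoothness pass on top).

```python
# Downsample a "rendered at 2x" grayscale image to native resolution by
# averaging each 2x2 block into one pixel (box filter).
def downsample_2x(img):
    """Average each 2x2 block of a 2H x 2W grid into one output pixel."""
    h, w = len(img) // 2, len(img[0]) // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            block = (img[2*y][2*x] + img[2*y][2*x+1] +
                     img[2*y+1][2*x] + img[2*y+1][2*x+1])
            out[y][x] = block / 4.0
    return out

# A hard diagonal edge rendered at the higher resolution (8x8)...
hi_res = [[1.0 if x > y else 0.0 for x in range(8)] for y in range(8)]
# ...becomes a softened edge at native resolution (4x4): the blocks that
# straddle the edge get intermediate values instead of pure 0 or 1.
lo_res = downsample_2x(hi_res)
for row in lo_res:
    print(row)
```

The intermediate values along the diagonal are the anti-aliasing effect; a dedicated edge-aware method like SMAA gets a similar result without paying for the extra rendered pixels, which is the trade-off discussed above.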
May 20 2016, 04:37 PM
Junior Member
241 posts Joined: Jan 2003 From: Miri, Sarawak
QUOTE(defaultname365 @ May 19 2016, 11:33 AM) So... doing the math: $699 Founders Edition price tag = RM2,850, selling for RM3,300, a total increase of ~RM450 (about 16% more). Which could mean that for a GTX 1070 Founders Edition, $449 = RM1,830, which could sell for an increase of 16% (~RM290 more), putting it at a selling price of ~RM2,120. Since the GTX 1070 is poised to be on par with or slightly better than the Titan X, in terms of value/performance it will be the top choice.
Could be that the markup price is for early adopters, like with smartphones? If that's the case, the price when stock is available will be close to the announced MSRP.
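The markup arithmetic in the quoted post can be checked directly. The exchange rate is implied by the poster's own RM2,850 figure for $699, and the RM3,300 street price is the poster's number, not official pricing; the GTX 1070 street estimate below is likewise just the same markup percentage applied forward.

```python
# Check the RM markup math from the post above (all RM figures are the
# poster's, not official pricing).
usd_to_myr = 2850 / 699            # implied rate, ~4.08 MYR/USD
fe_msrp_myr = 699 * usd_to_myr     # = RM2,850 by construction
street_myr = 3300

markup = street_myr - fe_msrp_myr          # ~RM450
markup_pct = markup / fe_msrp_myr * 100    # ~15.8%

# Same markup applied to a hypothetical GTX 1070 FE at $449:
gtx1070_myr = 449 * usd_to_myr                              # ~RM1,831
gtx1070_street = gtx1070_myr * (1 + markup / fe_msrp_myr)   # ~RM2,120

print(round(markup), round(markup_pct, 1), round(gtx1070_street))
```

So the increase works out closer to RM450 (~16%) than the "~RM500 / 17%" in the original post, and the projected 1070 street price lands around RM2,120 rather than RM2,150.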
May 20 2016, 04:39 PM
Junior Member
267 posts Joined: Oct 2007 From: Kuala Lumpur, Malaysia
QUOTE(Moogle Stiltzkin @ May 20 2016, 03:52 PM) in the podcast discussing the 1080, they cover whether pascal is true async compute or not. [...]
Full details: http://wccftech.com/nvidia-gtx-1080-async-compute-detailed/
May 20 2016, 05:19 PM
#1556
Senior Member
4,473 posts Joined: Jan 2003
QUOTE(Demonic Wrath @ May 20 2016, 04:16 PM) If AA is available, it is better to use AA rather than brute-forcing it. When DSR is enabled, I personally find it rather blurry (33% smoothness in NVCP). So it seems like FXAA, except the texture is sharper. But the performance cost is not worth it.
oo, then it sounds like 4k is not relevant to me, since my monitor doesn't natively do 4k anyway.
personally i haven't experienced 4k yet... but wasn't 4k supposed to benefit big-screen TVs in the living room? whereas for a PC monitor, where you sit closer, at sizes like 24'' to maybe 36'', 1080p was good enough? i thought only the bigger screens would really benefit from 4k?
May 20 2016, 06:01 PM
Senior Member
4,523 posts Joined: Apr 2006
QUOTE(Moogle Stiltzkin @ May 20 2016, 04:10 PM) but even people whose monitors don't support 4k can use the mode that renders at 4k and converts it down to their monitor's native resolution. the benefit being an effect similar to AA? is it worth using 4k that way, or better to just use SMAA at 1080p?
Depends on the game, in my experience. There are still games which don't have proper DPI scaling above 1080p, so downsampling from 4k will make text way too small or the UI too puny.
QUOTE(Moogle Stiltzkin @ May 20 2016, 05:19 PM) oo, then it sounds like 4k is not relevant to me, since my monitor doesn't natively do 4k anyway. but wasn't 4k supposed to benefit big-screen TVs in the living room, while 1080p was good enough for PC monitors up to maybe 36''?
Not really. 1080p at 36" with a viewing distance of 2-3 feet is simply not feasible to me. It's blurry, and it's too easy to see the pixel grid on screen.
This post has been edited by SSJBen: May 20 2016, 06:04 PM
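The viewing-distance point can be put in rough numbers using pixel density and the common ~60 pixels-per-degree rule of thumb (about one pixel per arcminute, roughly where individual pixels stop being distinguishable). These are my own back-of-envelope figures, not the poster's; the 30-inch viewing distance is an assumption in the quoted 2-3 foot range.

```python
# Pixel density (PPI) and pixels per degree (PPD) for a 36" screen at
# 1080p vs 4K, viewed from ~30 inches away.
import math

def ppi(h_px, v_px, diagonal_in):
    """Pixels per inch from resolution and diagonal size."""
    return math.hypot(h_px, v_px) / diagonal_in

def ppd(ppi_val, distance_in):
    """Pixels per degree of visual angle (small-angle approximation)."""
    return ppi_val * distance_in * math.pi / 180

for name, (w, h) in {"1080p": (1920, 1080), "4K": (3840, 2160)}.items():
    density = ppi(w, h, 36)
    print(name, round(density), "PPI,", round(ppd(density, 30)), "PPD at 30 in")
```

1080p at 36" works out to roughly 61 PPI and only ~32 PPD at 30 inches, well under the ~60 PPD threshold, so the pixel grid is visible up close; 4K at the same size lands right around 60 PPD, which is consistent with the point made above.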
May 20 2016, 06:42 PM
All Stars
11,256 posts Joined: Jul 2005
QUOTE(xcxa23 @ May 20 2016, 01:15 PM) Jz graduated 2 years ago.. been using laptop ever since.. from my siggy.. After that bought laptop again, paired with the 2k display.. dayum the quality... I regret liao.. so I'm going to full desktop this year.. with dual 2k monitor setup
Haha, oh. Good luck in getting them
May 20 2016, 06:54 PM
#1559
Senior Member
4,473 posts Joined: Jan 2003
QUOTE(adilz @ May 20 2016, 04:39 PM) Pascal will never have true async compute as it's not built into the hardware architecture. Nvidia has to do it on the software side via load balancing and pre-emption. Though the GTX 1080 does way better than the GTX 980 Ti when it comes to async compute, it still lags behind the AMD Fury X. And Fury is the previous-gen AMD architecture. The gap could be even bigger when AMD comes out with 14 nm FinFET Vega GPUs (touted as the successor to the AMD Fury series). Future games will run on DX12; there are already rumours that some games like Battlefield 5 could be exclusively DX12/Windows 10 only. Guess I'll wait to see how things pan out before deciding on my GTX 970 SLI upgrade.
Full details: http://wccftech.com/nvidia-gtx-1080-async-compute-detailed/
wow, just saw this. judging by this...
QUOTE In summary, AMD's R9 Fury X saw much greater benefit at all resolutions from the DirectX 12 API itself and from Async Compute compared to the GTX 1080 and 980 Ti. The 1080 showed modest gains with Async Compute and DirectX 12 but did not exhibit any performance regression like the GTX 980 Ti. So the improvements introduced with Pascal definitely helped but were not quite enough to close the gap that exists between Nvidia's and AMD's hardware here. In fact, it's quite eye opening to see the GTX 1080 – which is 30% faster than the R9 Fury X at 4K – only managing to squeeze past the Fury X by 9% in DirectX 12.
pascal is pretty much maxwell on steroids, refined. async compute, though improved, is still brute force and not the hardware solution like amd's that we were hoping for. compared to amd, which shows clear performance gains with async compute and dx12, pascal's result wasn't that impressive. but at the end of the day, i'm sure all that matters is the FPS, regardless of the inefficiencies present.
so i feel people's gripe with the 1080's async implementation is that, had nvidia followed a route similar to amd's, they would have made pascal even more efficient, living up to the dx12 dream of reduced overheads = increased fps. maybe volta will have those architectural changes to fix this issue once and for all, but there's no info on whether that is the case.
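The "30% faster at 4K but only 9% ahead in DX12" line implies how much of the gap the Fury X closed across the API change. This is just arithmetic on the article's quoted percentages, normalising the Fury X to 1.0 in each API; it assumes both percentages refer to the same benchmark, which the quote doesn't spell out.

```python
# Working through the quoted percentages: relative to a Fury X = 1.0
# baseline in each API, how much ground did the Fury X gain on the 1080
# going from the 30%-faster case to the 9%-faster DX12 case?
fury_baseline = 1.00
gtx1080_lead_4k = 1.30     # "30% faster than the R9 Fury X at 4K"
gtx1080_lead_dx12 = 1.09   # "only managing to squeeze past ... by 9%"

# Relative gain the Fury X picked up versus the 1080 across the change:
relative_gain = gtx1080_lead_4k / gtx1080_lead_dx12 - 1
print(f"Fury X closed the gap by about {relative_gain:.1%}")
```

In other words, the Fury X effectively gained roughly 19% on the 1080 from DX12/async, which is why the quote calls the result "eye opening" despite the 1080 still finishing ahead.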
May 20 2016, 08:36 PM
Senior Member
1,667 posts Joined: Jan 2003 From: The Cool Name Place
QUOTE(Moogle Stiltzkin @ May 20 2016, 06:54 PM) [...] async compute, though improved, is still brute force and not the hardware solution like amd's that we were hoping for. [...]
I don't understand why it is regarded as "brute force" when it is performing better using fewer resources. I'm sure they're actually very efficient, since they can perform better than AMD with a lower core count and power consumption. You wouldn't say an Intel CPU (having fewer cores) is inefficient compared to AMD's CPU, right?
Topic Closed