 NVIDIA GeForce Community V15 (new era pascal), ALL HAIL NEW PASCAL KING GTX1080 out now

Moogle Stiltzkin
post May 20 2016, 05:44 AM

Look at all my stars!!
*******
Senior Member
4,454 posts

Joined: Jan 2003
QUOTE(llk @ May 19 2016, 08:21 PM)
As confirmed by many reviewers, the 'Founders Edition' is actually what was previously known as the 'reference card'.

This is probably one of the AIB cards that uses the reference PCB (same as the Founders Edition), which may be priced at USD 599.

Other, more advanced custom cards (e.g. MSI Gaming, Asus Strix, Gigabyte Extreme) will, I believe, be priced higher than USD 599.
*
ty for the info.

my watercooling seller source told me he may soon have stock of 1080s bundled with custom waterblocks.




my only issue is... if i do get the reference design, i'd rather avoid the founders edition and instead get an AIB reference design, which will most likely cost USD 100 less.

my budget pick is probably a non-founders reference AIB card with a water cooling block.



looking at the performance charts, 1080p (or 1920x1200 in my case) performance at ultra settings will be fine in all games, and even 8gb of vram is ample for my needs (with 2gb of vram i keep getting crashes in ToS every now and then citing gpu vram as a likely cause >_> maybe i run too much stuff in the background, or my gpu is faulty).

4k seems playable, but it's not a steady 60 fps, more like 30-50 fps from what i see in the charts (and these are fps at 4k with ultra settings, by the way). probably playable, but the dream is 60 fps or more at 4k.

me personally, as long as i can run 1920x1200 at ultra, i'm perfectly fine. as a bonus, pascal comes with 10-bit HEVC support (i'm assuming it does 10-bit h264 as well?), which will be great for anime viewing via madVR/MPC-HC/LAV Filters for reduced banding, plus better 10-bit color, which my monitor is capable of.

for a 980ti user i doubt going to a 1080 is really worth it, but for me, coming from a 2012 gtx 680, it's definitely worth it. it would probably be even better if i could wait 2 more years for volta, when HBM2 comes out, but i'm not too worried about that. the only improvement i'd expect from volta is mostly making 4k at ultra settings playable at more consistent frame rates. other than that, pascal seems to do what i need it to do (assuming they didn't screw up async compute for dx12 games).



QUOTE
As the first high-end card of this generation to launch, NVIDIA gets to set the pace for the market. At the risk of being redundant the GTX 1080 is now the fastest single-GPU card on the market, and even at 4K it wins at every single gaming benchmark, typically by a good margin. In practice we’re looking at a 31% performance lead over GTX 980 Ti – the card the GTX 1080 essentially replaces – with a similar 32% lead over AMD’s Radeon R9 Fury X. Meanwhile against the slightly older GTX 980, that gap is 70%.

On a generational basis this ends up being very close to the 74% jump in 4K performance going from the GTX 680 to GTX 980. And although the pricing comparison is not especially flattering for NVIDIA here, it should be evident that NVIDIA isn’t just looking to sell GTX 1080 as an upgrade for high-end Kepler cards, but as an upgrade for GTX 980 as well, just 20 months after it launched.


QUOTE
The lack of competition at the high-end means that for the time being NVIDIA can price the GTX 1080 at what the market will bear, and this is more or less what we’re looking at for NVIDIA’s new card. While the formal MSRP on the GTX 1080 is $599 – $50 over what the GTX 980 launched at – that price is the starting price for custom cards from NVIDIA’s partners. The reference card as we’ve previewed it today – what NVIDIA is calling the Founders Edition card – carries a $100 premium over that, pushing it to $699.


http://www.anandtech.com/show/10326/the-nv...-1080-preview/2

This post has been edited by Moogle Stiltzkin: May 20 2016, 06:47 AM
Moogle Stiltzkin
post May 20 2016, 07:25 AM

by the way, when do the AIB reference cards come out? is it in june?
Moogle Stiltzkin
post May 20 2016, 03:52 PM



in the podcast discussing the 1080 they cover whether pascal has true async compute or not. it doesn't, but they do bring up a good point.

at the end of the day, in a game with async compute enabled, is there a situation where the 1080 would underperform, letting a weaker amd card beat it because of this? the answer seems to be no.

whether they are simply pandering to nvidia as fanboys i'm not sure, which is why i'd rather rely on multiple sources on this subject, but that's their opinion.

but to my understanding, nvidia added 2 things to their way of handling async.



QUOTE
Pascal improves the story dramatically for NVIDIA, though there will still be debate as to how its integration to support asynchronous compute compares to AMD’s GCN designs. NVIDIA sees asynchronous computing as creating two distinct scenarios: overlapping workloads and time critical workloads.

Overlapping workloads are used when a GPU does not fill its processing capability with a single workload alone, leaving gaps or bubbles in the compute pipeline that degrade efficiency and slow down the combined performance of the system. This could be PhysX processing for GeForce GPUs, or it might be a post-processing step that a game engine uses to filter the image as a final step. In Maxwell, this load balancing had to work with a fixed partitioning model. Essentially, the software had to say upfront how much of the GPU's time it wanted divided between the workloads in contention. If the workloads stay in balance, this can be an efficient model, but any shift in the workloads would mean either unwanted idle time or jobs not completing in the desired time frame. Pascal addresses this by enabling dynamic load balancing that monitors the GPU as work is added, allowing the secondary workload to fill the bubbles in the system with compute.

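the fixed-partition vs dynamic-balancing idea in that quote can be sketched with a toy slice-counting model. this is my own simplification, not NVIDIA's actual scheduler; the function names and numbers are made up for illustration:

```python
# Toy model of GPU time-slice allocation: a static partition reserves a
# fixed share for each workload, while dynamic balancing lets the compute
# job fill any idle "bubbles" the graphics job leaves behind.

def static_partition(gfx_work, compute_work, gfx_share, total_slices):
    """Each queue may only run inside its reserved share of the frame."""
    gfx_slices = int(total_slices * gfx_share)
    compute_slices = total_slices - gfx_slices
    done_gfx = min(gfx_work, gfx_slices)
    done_compute = min(compute_work, compute_slices)
    idle = total_slices - done_gfx - done_compute  # wasted bubbles
    return done_gfx + done_compute, idle

def dynamic_balancing(gfx_work, compute_work, total_slices):
    """Compute work may occupy any slice the graphics work leaves empty."""
    done_gfx = min(gfx_work, total_slices)
    done_compute = min(compute_work, total_slices - done_gfx)
    idle = total_slices - done_gfx - done_compute
    return done_gfx + done_compute, idle

# Graphics finishes early (6 of 10 slices); compute has 4 slices of work.
# A 70/30 static split strands a bubble; dynamic balancing fills it.
print(static_partition(6, 4, 0.7, 10))   # (9, 1) -> one idle slice
print(dynamic_balancing(6, 4, 10))       # (10, 0) -> fully utilized
```

the point of the sketch: the static split only wastes time when the workload balance shifts away from the chosen partition, which is exactly the quote's caveat.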

QUOTE
Does this mean that NVIDIA is on par with or ahead of AMD in terms of asynchronous compute? It's hard to say, as the implementations are very different between the two architectures. AMD GCN still has the Asynchronous Compute Engines that use asynchronous shaders, allowing multiple kernels to execute on the GPU concurrently without preemption. AMD also recently introduced Quick Response Queues in second-generation GCN products, which allow developers to mark higher-priority async shaders that are time sensitive, like the Rift ATW.

At the end of the day it comes down to the resulting performance of each product. We are working on a couple of interesting ways to test asynchronous compute capability of GPUs directly to see how things stack up from a scientific viewpoint, but when the rubber hits the road, which GPU gets you the highest frame rate and lowest latency? That we can test today.
Source:
https://www.pcper.com/reviews/Graphics-Card...al-Gamers/Async

This post has been edited by Moogle Stiltzkin: May 20 2016, 04:14 PM
Moogle Stiltzkin
post May 20 2016, 04:10 PM

QUOTE(SSJBen @ May 20 2016, 04:01 PM)
These days I've been meddling with Ultrawide gaming. 3440x1440 is just too immersive to give up, 4k can rot for now. Add to the fact that gsync works at this res, it's just pure gaming bliss.
*
but even people whose monitors don't support 4k resolution can use the mode that renders at 4k and scales it down to their monitor's native resolution, the benefit being an effect similar to AA? is it worth using 4k in that way, or better to just use SMAA at 1080p without it?
Moogle Stiltzkin
post May 20 2016, 05:19 PM

QUOTE(Demonic Wrath @ May 20 2016, 04:16 PM)
If AA is available, it is better to use AA rather than bruteforcing it.

When DSR is enabled, I personally think that it is rather blurry (33% smoothness in NVCP). So, it seems like FXAA, except the texture is sharper. But the performance cost is not worth it.
*
oh, then it sounds like 4k is not relevant for me, since my monitor doesn't natively do 4k anyway.

personally i haven't experienced 4k yet... but wasn't 4k supposed to benefit big-screen tvs in the living room? whereas for a pc monitor, where you sit closer, at sizes from 24'' to maybe 36'', 1080p was good enough?

i thought only the bigger screens, with their larger pixels, would really benefit from 4k?
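the "AA-like" effect of rendering high and scaling down (what DSR does) can be shown with a tiny downsampling sketch. this is my own toy example with a 2x factor, not how the driver actually filters:

```python
import numpy as np

# Rendering at a higher resolution and box-filtering down to the panel's
# native resolution averages several samples per pixel, which softens
# hard edges much like anti-aliasing (this is the supersampling idea).

def downsample_2x(img):
    """Average each 2x2 block of a (2H, 2W) image down to one (H, W) pixel."""
    h, w = img.shape[0] // 2, img.shape[1] // 2
    return img.reshape(h, 2, w, 2).mean(axis=(1, 3))

# A hard edge at "high res": three black columns, then one white column.
hi_res = np.array([[0.0, 0.0, 0.0, 1.0]] * 4)
lo_res = downsample_2x(hi_res)
print(lo_res)
# The native pixel straddling the edge lands at 0.5, between black and
# white, which is the smoothed, AA-like result.
```

whether that smoothing beats plain SMAA at native resolution is exactly the performance-vs-quality trade-off discussed above.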
Moogle Stiltzkin
post May 20 2016, 06:54 PM

QUOTE(adilz @ May 20 2016, 04:39 PM)
Pascal will never have true async compute, as it's not built into the hardware architecture; Nvidia has to do it on the software side with load balancing and preemption. Though the GTX 1080 does way better than the GTX 980 Ti when it comes to async compute, it still lags behind the AMD Fury X. And Fury is the previous-gen AMD architecture. The gap could be even bigger when AMD comes out with 14 nm FinFET Vega GPUs (touted as the successor to the AMD Fury series). Future games will run on DX12; there are already rumours that some games, like Battlefield 5, could be exclusively DX12/Windows 10 only. Guess I'll wait to see how things pan out before deciding on my GTX 970 SLI upgrade

user posted image

Full details: http://wccftech.com/nvidia-gtx-1080-async-compute-detailed/
wow just saw this




judging by this ...
QUOTE
In summary AMD’s R9 Fury X saw much greater benefit at all resolutions from the DirectX 12 API itself and from Async Compute compared to the GTX 1080 and 980 Ti. The 1080 showed modest gains with Async Compute and DirectX 12 but did not exhibit any performance regression like the GTX 980 Ti. So the improvements introduced with Pascal definitely helped but were not quite enough to close the gap that exists between Nvidia’s and AMD’s hardware here. In fact, it’s quite eye opening to see the GTX 1080 – which is 30% faster than the R9 Fury X at 4K – only managing to squeeze past the Fury X by 9% in DirectX 12.

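a quick sanity check on those numbers, just ratio arithmetic on the figures quoted above:

```python
# If the GTX 1080 leads the Fury X by 30% at 4K generally, but only by 9%
# under DirectX 12, the Fury X must gain roughly 19% more from the
# DX12 + async path than the 1080 does.

dx11_lead = 1.30   # GTX 1080 / Fury X frame-rate ratio (quoted 30% lead)
dx12_lead = 1.09   # same ratio under DirectX 12 (quoted 9% lead)

relative_gain = dx11_lead / dx12_lead - 1
print(f"Fury X's extra gain from DX12/async: {relative_gain:.1%}")
```

so "modest gains" vs "much greater benefit" in the quote works out to about a 19% swing between the two cards.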

pascal is pretty much maxwell on steroids, refined. except that async compute, though improved, is still brute force and not the hardware solution like amd's that we were hoping for. compared to amd, which shows clear performance gains with async compute and dx12, pascal's gains weren't that impressive.

but at the end of the day i'm sure all that matters is the fps, regardless of any inefficiencies present. so i feel people's gripe with the 1080's async implementation is that had nvidia followed a route similar to amd's, they would have made pascal even more efficient, living up to the dx12 dream of reduced overhead = increased fps.

maybe volta will indeed have those architectural changes to fix this issue once and for all, but there's no info on whether that is the case or not.


Moogle Stiltzkin
post May 20 2016, 09:40 PM

QUOTE(Demonic Wrath @ May 20 2016, 08:36 PM)
I don't understand why it is regarded as "brute force" when it is performing better using fewer resources. I'm sure they're actually very efficient, since they perform better than AMD with a lower core count and power consumption. You wouldn't say an Intel CPU (having fewer cores) is inefficient compared to AMD's CPUs, right?
*
i did read and they did explain the differences. you'll just have to google the sources like i did.

but the bottom line is the async on/off performance result. this picture pretty much sums up why it matters
user posted image


but at the end of the day, if graphics card a's fps is still higher, despite being less efficient than product b's better async compute method, guess who wins?

but... if amd can achieve fps results quite close to pascal's, which costs a lot more... then people may go for team red, since you'd get more bang for your buck (assuming they aren't already locked in with a gsync monitor). though they're gonna have to wait for amd's new cards to come out, so nvidia has the better timing


*update

found someone who explained some of the more techie stuff, if you're interested

QUOTE
May 17, 2016 | 02:43 PM - Posted by Anonymous (not verified)
"Similarly for compute tasks, Pascal integrates thread level preemption. If you happen to be running CUDA code, Pascal can support preemption down the instruction level!"

So what they may be saying is that it's improved, but not fully hardware based, and that instruction-level preemption needs CUDA code to be of any help for debugging at the single-instruction level (i.e. single-stepping through code in debugging mode). Most certainly Nvidia has improved some thread-level graphics/compute scheduling partially in hardware, and that will result in better utilization of GPU execution resources than in previous Nvidia generations.

I do not like the sound of that “happen to be running CUDA code”, as it smacks of a vendor-specific proprietary solution that forces others into the CUDA ecosystem in order to look at things at the instruction level. How is this going to play out for Vulkan/other API debugging, as well as OpenCL or other cross-platform open code/graphics APIs that may not be using CUDA?

There is going to have to be a serious comparison and contrast of the in hardware async-compute features of both Polaris/Vega, and Pascal/Volta and it cannot wait for the Hot Chips Symposium white papers and other professional trade events.

Any GPU thread scheduling/dispatch done in software is just not going to be as responsive to sudden asynchronous events at the hardware/instruction level as scheduling done fully in hardware by a dedicated thread/instruction scheduler, dispatcher, and context-switching unit. No amount of trying to hide latencies for asynchronous events in software can respond as efficiently and as rapidly to an asynchronous GPU thread event as logic fully implemented in the GPU's (or any processor's) hardware! Without fully hardware-based async-compute thread scheduling/dispatch and context switching there will be idle execution resources, even with work backed up in the scheduler queues. Software-based scheduling, lacking fully hardware-based units, has an intrinsic deficiency: it cannot respond at the sub-instruction level to changing events in a GPU's execution pipelines (FP, INT, and others) the way fully hardware-based async-compute units can.

Read up on Intel's version of SMT (Hyper-Threading) to see how async compute is done fully in hardware. Async compute done fully in a GPU's thread dispatch/scheduling/context-switching units has a large advantage over any fully or partially software-based dispatch/scheduling/context switching: the fully hardware-based approach has the fastest response to asynchronous events, and the best utilization of processor execution resources possible!



QUOTE
P.S. true hardware-based asynchronous compute is fully transparent to any software (except ring 0 of the OS kernel, mostly for paging/page-fault events and other preemptive-multitasking context-switching/hardware-interrupt handling) and is fully implemented in the processor's hardware for CPU/GPU thread scheduling/dispatch/context switching!

For a discrete GPU, the OS is (mostly) in the card's firmware and GPU drivers, and runs under the control of the system's main OS/driver/OS driver API (WDDM for Windows, kernel drivers for Linux) software stack.


source:
https://www.pcper.com/reviews/Graphics-Card...al-Gamers/GPU-B



so in layman terms what does this all mean?

1. amd's hardware async compute is the way to go, and is the right way to do it.
2. the performance chart clearly shows the difference between async compute enabled and disabled. see which card has the better gains when the feature is enabled? that itself goes to show amd did it right.
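the preemption point in those quotes (software scheduling reacts more slowly to time-critical work than hardware scheduling) can be sketched as a toy latency model. the function, the numbers, and the 0.1 ms switch cost are all made up for illustration:

```python
# A long background kernel is running when a time-critical job (think VR
# asynchronous timewarp) arrives. Without preemption the job must wait
# for the kernel to finish; with fine-grained preemption the kernel is
# suspended and the job starts after only a small context-switch cost.

def critical_job_latency(kernel_remaining_ms, switch_cost_ms, preemptive):
    """Time the critical job waits before it starts executing."""
    if preemptive:
        return switch_cost_ms      # suspend the kernel: small fixed cost
    return kernel_remaining_ms     # run-to-completion: wait it out

# 5 ms of kernel left when the request arrives, 0.1 ms switch cost:
print(critical_job_latency(5.0, 0.1, preemptive=False))  # 5.0
print(critical_job_latency(5.0, 0.1, preemptive=True))   # 0.1
```

for something like the Rift ATW, which must land before the next refresh, that difference in worst-case latency is the whole argument for quick-response queues and fine-grained preemption.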


that said, it still seems that even with its inefficiencies, the 1080 still does more fps regardless. so i'd still get a pascal when upgrading from my 680. but the question here is: if amd's new card comes out, and if it's cheaper, will it be able to outperform, or get close to, the fps of a 1080 at a cheaper price point? that's the million dollar question. nvidia would lose the bang-for-buck title if that were the case.

if you have a 980 or 980ti, it's probably better to skip pascal and just wait for volta imho. but for me, with a 680, upgrading now is fine.

This post has been edited by Moogle Stiltzkin: May 20 2016, 11:21 PM
Moogle Stiltzkin
post May 21 2016, 06:45 AM

this is exactly what i was worried about

a premium (founders edition) on top of a premium (for the waterblock)
http://www.techpowerup.com/222608/colorful...-1080-fe-bundle


it's better i wait for the cheaper aib version with support for a third party water block. i'd pay a premium for the waterblock (water cooling pc rigs is expensive, wut to do), but not also for the stock fan cooler i won't be using. fuk nvidia's early adopter tax


QUOTE(stringfellow @ May 21 2016, 05:54 AM)
Is there a text version of that video review? I'm too "distracted" by the dialect to take him seriously. Plus, he's collating from reviews and making his own review out of those reviews? Reviewtiful!
*
well, i've heard thicker accents than his in scotland; to me he's understandable. some scottish people have accents so thick it's hard to follow at times. anyway, the bottom line is whether you understood what he was saying despite the accent. after all, if you did understand, and he told you you were about to drive off a cliff, yet you ignored the warning and died in the crash, guess who pays the price for ignoring the facts? i'm not saying you have to believe every word he says, but if the whole basis for dismissing everything he said is little inconsequential stuff, then that's a bit silly, to put it politely

yeah, he was relying on other reviewers' stats, but if they are credible sources, i see nothing wrong with that, as long as he reported their stats correctly (it's up to the reader to verify his sources if they feel something is incorrect or suspicious). he also referenced multiple sources with slightly different results, to get a better idea of the performance... so i didn't see anything wrong there. personally i try to read multiple reviews to gauge the general consensus where possible.


i'll give it a week or a month, to see what other consumers think about it first

even hardocp came down hard criticizing nvidia, so got to wait
http://hardocp.com/article/2016/05/18/gefo...per_launch_ever



This post has been edited by Moogle Stiltzkin: May 21 2016, 07:02 AM
Moogle Stiltzkin
post May 21 2016, 01:01 PM

QUOTE(stringfellow @ May 21 2016, 07:13 AM)
I see his gripe as being more about the fact that the 1080 isn't THAT much more powerful than a heavily overclocked 980Ti, and that when overclocked, the 1080 throttles heavily, to the point that those who have a heavily overclocked 980Ti should keep it because the difference is minimal.

I fall into that category, and yet I'm still excited about the 1080 because of what it can do at stock versus my mildly overclocked 980Ti. But my case is niche; a lot of the excitement is from those who held back from upgrading 2-3 generations ago and are now primed for a replacement. For someone who has been stuck with a 680 or 780/Ti, the 1080 is a valid enough purchase DESPITE AdoredTV's claim that its overclocking capability is borked. The price/performance compared to high-end Maxwell options cannot be overlooked.

At the risk of making me sound like an Nvidia shill, the 1080 is targeted for those who had held back from upgrading. That's why the performance delta is compared against a 980, NOT a 980Ti. AdoredTV's case is argued using the point of comparing the 1080 against the 980Ti, and a heavily overclocked one at that, which at that point isn't a favorable or sensible upgrade decision after all.


don't get me wrong, it definitely still beats a 980ti, but the gains aren't that earth shattering. if i had a 980ti i would not waste money on the upgrade. if i had money to waste, why not; when you're rich you don't give a shit about the cost, kek.


for me, coming from a gtx 680, getting a 1080 isn't too big a deal. EXCEPT when it comes to this FE, which i feel is most likely a rip-off, considering they marked it up USD 100 above their own msrp.



QUOTE
In short, AdoredTV is disappointed because the performance increase isn't substantial enough (when compared with the 980Ti) for his liking. The performance delta IS substantial enough for those who held back from upgrading and were waiting for a good enough price point to jump to 1440p or 4K. That was previously the province of the 980Ti and Titan X, which were not accessible to these potential 1080 buyers because of the (to them) exorbitant price tag. The 1080 is targeted at 980-and-below users, not the 980Ti. AdoredTV should've waited for the 1080Ti or the Titan Pascal if he wants that "significant enough" performance delta he craves, but.......from the way he worded himself (as much as I am amused by his dialect), he's also looking for affordability. He did say there are no free lunches in this world when he was talking about power consumption during the 1080 overclocking in his review; I now say the same thing to him. You gotta pay to get that performance margin. There are no free lunches.


but the thing is, the pricing is too close to that of a 980ti... so even if spec-wise we can see it was positioned as a direct replacement for the 980, it's still going to be compared to the 980ti, if only because it costs nearly as much. just imagine how much a 1080ti would cost. that's why i'm not going to hold my breath waiting on that one.



QUOTE
The argument that "you will get better options and cheaper prices with better performance if you wait" is ETERNAL. Why wait a few more weeks for the AIB non-reference/non-FE version when you can wait a few months more for the 1080Ti? Why wait for that when you can wait even longer for Vega? Why wait for Vega when you can wait a little longer for Volta and HBM2? Some lines were drawn where most of those who waited with their 980-and-below cards are ready to jump in. To them, getting Titan X-like performance for a non-Titan-X price is where they draw the line. Sure, they can always wait for a cheaper, better-performing card on the horizon, but many have fallen into that trap and continued waiting, to the point where they never upgraded and played Dota and Counter-Strike all their waiting life.
yes, i agree with you: the eternal wait for something better is a never ending quest. but if the wait is only 1-2 months to get a more reasonable deal on a better long term investment, for me that's worth it (obviously i can't speak for others who just want the card now; go for it). but i'm not gonna wait 5+ months, because that would be too late: by then there will be a 1080ti, or worse, it will be too close to volta, which presumably may be the true new architecture for 60fps gaming at ultra settings at 4k resolution (and maybe even finally true async compute), and which will most definitely have HBM2, with 8gb of vram at minimum.


Moogle Stiltzkin
post May 21 2016, 04:25 PM

QUOTE(skylinelover @ May 21 2016, 04:22 PM)
haha, just get what you can afford now, because we YOLO and we got no time to wait any longer. get it now and forget the rest; that is my principle in upgrading. and i'm still on kepler after going through my longest wait yet: normally i upgrade every 2 years, but now i need to wait 3 years to upgrade, thanks to the Ti line.
*
i'm also still on kepler. planning on going pascal after a short wait for things to cool down after launch.

are you planning on going pascal too? or are you gonna wait for the 1080ti or volta? just wondering.
Moogle Stiltzkin
post May 21 2016, 05:11 PM

fair enuff.

yeh, as a budget minded gamer, i tend to skip sli. heck, even nvidia is pushing users to settle for 2 cards, cauz gaming performance doesn't scale well with 3-4 sli cards; only 2 has reasonable gains, and even then... a single gpu has the best value :]
Moogle Stiltzkin
post May 21 2016, 11:06 PM

someone asked me earlier what async is all about. found a video that explains the situation




bottomline
- nvidia's async is inefficient compared to amd's
- but at the end of the day, if the card's fps performance still wins regardless of any inefficiencies, then this shouldn't be too big an issue. tech-minded people just wish it had proper hardware async compute similar to amd's, so we could get good performance and latencies using that feature.

Moogle Stiltzkin
post Jun 6 2016, 02:11 AM

which is the cheapest/best-deal 1080 that has water cooling? preferably a third party water cooling block rather than AIO water cooling.


The Asus Strix 1080 was said by EK to be supported by one of their water blocks, and this is the performance review for the card. pricing for the strix is also supposedly USD 100 cheaper than the founders edition, and cheaper compared to the other brands too.
http://videocardz.com/60631/asus-rog-strix...or-overclocking

http://www.tomshardware.com/news/asus-stri...aled,31905.html


there are 2 versions of this asus strix, with the OC edition clocked higher:
-ROG Strix GeForce GTX 1080 OC Edition
-ROG Strix GeForce


somebody however pointed out that the EVGA 1080 non-founders reference card could be even cheaper. but i have no idea if it has a compatible water block.
http://www.evga.com/Products/Product.aspx?pn=08G-P4-5180-KR


QUOTE(stringfellow @ Jun 4 2016, 06:53 PM)
Do all these physical mods to your GTX 1080 at your own risks, warranties are definitely voided. The reality is, the card has hard physical limitations on voltage it takes on ALL boards, FE or AIB.

https://xdevs.com/guide/pascal_oc/#voltsc

Namely this:
Now that the truth is out, join the discussion here. Lots of them are questioning the need to go for AIB boards, since they are as hard-limited as the FE board itself. You get slightly better cooling and a slightly better stock overclock out of the box, both of which can be achieved on the FE card by the end user himself; hell, even the stock overclock of a mere 100MHz can be done easily on the FE and saved as a profile in MSI Afterburner/EVGA Precision.

https://www.reddit.com/r/nvidia/comments/4m...it_explanation/

Consensus: if you really hate the FE that much, get the cheapest, coolest AIB board you can afford. Me? I don't want the internals of my case so hot you could cook an egg inside; I've faced that with my previous GTX 690, and that was in a larger case than my current RVZ01. A blower-style cooler pushes all that hot-air nastiness out of the case. At this point, to me, liquid cooling is useless, because of that voltage hard limit, and because the smaller process is sensitive to voltage delivery/instability.
*
i too am beginning to wonder whether the cheapest reference AIB (non-founders, preferably...) is the way to go. i'd still do water cooling, because i leave my pc on for long hours. also, water cooling doesn't only keep temps low, it also reduces fan noise. personally i still find water cooling good for me.

This post has been edited by Moogle Stiltzkin: Jun 6 2016, 05:26 AM
Moogle Stiltzkin
post Jun 7 2016, 12:24 AM

QUOTE(Skylinestar @ Jun 6 2016, 07:30 AM)
I've chatted with a local seller who said that only the ACX3 SC and FTW will be brought in.
*
that sucks hard


QUOTE(GOPI56 @ Jun 6 2016, 07:47 PM)
A GTX 1080 Ti is in the works at Nvidia, according to a rumour. It would be 50 percent more powerful than the GTX 1080.
*
yes, this is likely the case. but considering pricing, the time until a 1080ti releases, and that volta arrives soon after that, i'm honestly not bothered waiting on the ti. at most what people can expect from a 1080ti is 4k at 60+ fps on average with ultra quality settings; other than that, the 1080 is very capable. even at 4k it can do ultra at 30-45-ish fps, which isn't too bad, if not perfect.

they're estimating a pascal titan at USD 999 minimum... with roughly 40 to 50% better performance than GP104.
http://wccftech.com/nvidia-pascal-gp102-gp...-graphics-card/



anyway, my sources tell me there will be stock of the asus strix in 2-3 weeks, so i'm gonna wait on that one

This post has been edited by Moogle Stiltzkin: Jun 7 2016, 04:02 AM
Moogle Stiltzkin
post Jun 7 2016, 04:03 AM

is palit a good brand? i'm aware that in the past they made some aibs with lots of vram on them, but other than that... hm....

is there a chart yet comparing all the aibs out there?



some more info about the asus strix 1080

QUOTE
ASUS has opted for a custom PCB for its 1080 Strix, and also – again, like MSI and other vendors – mounts an additional 6-pin power header to the board. That grants fully another 75W for the GPU, but still doesn't resolve VBIOS / voltage limitations. The GTX 1080 Strix will use a custom VBIOS, but we were not told at interview time if this would allow noteworthy overvolting.


so there's something about the pascal vbios capping voltage. i wonder if the asus custom vbios lifts this limitation? otherwise, what's the point of going with the dual (8+6-pin) power connectors?

QUOTE
The card uses ASUS' DirectCU III cooler and the “wing-blade” fans. We're not sure on the thermal performance over reference. Sort of unique, ASUS also has PWM fan ports on the right side of the card, which allows the GPU to control the fans (rather than the CPU) based on need.


i heard this could be useful for watercooled systems, as the card won't get throttled if you configure it to use these pwm fan ports, or some such?

there will be 2 models of this, one the OC edition and one not. i'll get the OC version, since i'd rather not overclock it myself.





asus strix 1080 reviews and other sources
http://videocardz.com/60631/asus-rog-strix...or-overclocking

http://www.computerbase.de/2016-05/asus-ge...-strix-oc-test/

http://www.pureoverclock.com/2016/05/asus-...aders-say-what/

http://www.tomshardware.com/news/asus-stri...aled,31905.html

This post has been edited by Moogle Stiltzkin: Jun 7 2016, 04:27 AM
Moogle Stiltzkin
post Jun 7 2016, 04:19 AM

Look at all my stars!!
*******
Senior Member
4,454 posts

Joined: Jan 2003
QUOTE(goldfries @ Jun 7 2016, 04:07 AM)
I'm not particular with AiB models.

To me card selection goes based on:
- price
- appearance

Warranty wise, most of them already have some 3 - 5 years. No issue.

In the end they all work the same; clock variation doesn't result in much speed difference.
*
from the info i gathered


the ZOTAC GTX 1080 AMP! Extreme is the best looking in my personal opinion (if you're interested in an air-cooled gpu)


best performing aib, not sure yet......

best warranty/upgrade program/customer service is the EVGA 1080. in comparison i did not hear anything good about Asus in this area.

cheapest AIB cards: the Asus ROG Strix (custom) and the EVGA reference (non-founders). maybe there are cheaper ones, but i'm not aware of them.


the best bang-for-buck award goes to the 1070, but for those of us who want the best available there's the 1080. at this rate though, founder edition = sucker edition, so lots of us are waiting on the cheaper AIB models. that said, the founders edition uses a blower-type cooler, so some people may still opt for it if that suits their rigs. but if going water cooling, may as well get a cheaper AIB model, since the provided cooler won't be used anyway.

For the Asus Strix at least, i know of 2 water-cooling brands that are going to provide water blocks for it.


here is the full list of all 1080 variants available
https://www.techpowerup.com/gpudb/2839/geforce-gtx-1080



This post has been edited by Moogle Stiltzkin: Jun 7 2016, 04:21 AM
Moogle Stiltzkin
post Jun 7 2016, 03:05 PM

Look at all my stars!!
*******
Senior Member
4,454 posts

Joined: Jan 2003
QUOTE(CPURanger @ Jun 7 2016, 11:24 AM)
With all this discussion about 1070 & 1080. I wonder how many of you are going to upgrade to 1440p or 4K monitors ?

Or stick with 1080p monitors.  tongue.gif

I am still using a non-1080p monitor (22" 1680x1050). Upgrading to 1080p doesn't seem like much of a difference, whereas 1440p or 4K is quite expensive (> RM2.5K). If I buy a 1440p monitor + GTX 1080, it will cost me around RM6K  sweat.gif
*
for 4k, i don't think there's enough video content yet for it to be worth much. even for smaller screens around 24'', 1080p works just fine.

maybe bigger screens will justify 1440p and eventually 4k..... the thing is, can your table top fit a bigger monitor? the bigger it is, the further back you may need to sit, not to mention the pricier the monitor gets.... also the gpu needs to be pretty good to get decent fps at those higher resolutions.... so can pascal average 60fps at ultra settings for gaming? hmm.gif

Another consideration: the videos i watch are mostly 1080p, so on a higher-res screen the quality degrades because you're no longer watching at the native resolution.....

but for gaming, some newer titles support 4k output.... so for gaming there is an option, but for tv series/movies most of what i've seen is still 1080p hmm.gif

bottom line: 1440p gaming on a GTX 1080 at max settings is definitely playable with good fps; 4k ultra is still questionable hmm.gif going above 1080p also has other considerations, whether it's the content you watch or the price tag of a bigger screen.


PS: if you do get new monitor, those acer predator gsyncs look damn good. ah-ips with high refresh rates, low input latencies, low ghosting etc.... nod.gif

good monitor review site
http://www.tftcentral.co.uk/



*update

just saw the review for this 4k monitor it says
QUOTE
3840 x 2160 resolution can support full 1080 HD resolution content and also Ultra HD "4k" content natively.


does that mean you can watch 1080p content on the 4k monitor without image quality reduction? hmm.gif
http://www.tftcentral.co.uk/reviews/acer_xb271hk.htm




This post has been edited by Moogle Stiltzkin: Jun 7 2016, 03:13 PM
Moogle Stiltzkin
post Jun 7 2016, 06:29 PM

Look at all my stars!!
*******
Senior Member
4,454 posts

Joined: Jan 2003
QUOTE(adilz @ Jun 7 2016, 03:28 PM)
Assume you own one 27" 1080p monitor and another 27" 4K monitor; 1080p video will look the same on both. But 4K content will look sharper on the 4K monitor than on the 1080p monitor. Sharpness depends on monitor size and resolution, and of course on the resolution of the content.

My 5.5 inch phone screen at 1920x1080 looks sharper than my Samsung 28" 3840x2160 monitor. Why? Because the phone crams in about 400 ppi whereas the monitor only manages about 157 ppi. A 28" FHD screen would only have 78.68 ppi. You can calculate the ppi for any given screen size and resolution at this website https://www.sven.de/dpi/
*
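the ppi figures quoted above can be sanity-checked with the standard diagonal-pixels formula (same one the sven.de calculator uses); this little sketch is mine, not from the linked site:

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch: diagonal pixel count divided by diagonal size in inches."""
    return math.hypot(width_px, height_px) / diagonal_in

# Figures from the post above:
print(round(ppi(1920, 1080, 5.5), 1))   # 5.5" FHD phone  -> ~400.5 ppi
print(round(ppi(3840, 2160, 28), 2))    # 28" UHD monitor -> ~157.35 ppi
print(round(ppi(1920, 1080, 28), 2))    # 28" FHD monitor -> ~78.68 ppi
```

so the phone really does pack roughly 2.5x the pixel density of the 28" UHD monitor, which matches what he's seeing.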
the only 4k i've seen was on a 51'' HDTV, because i was considering whether to get a 4k tv and whether there's any point for astro content hmm.gif it looked nice, but i'm sure the demos were running native 4k content.

so watching 1080p content on a 1440p screen won't add more black bars, will it? hmm.gif

i'm using a dell u2413, a 24'' wide-gamut monitor. colors look great, though for gaming it lacks gsync, so i'm stuck at 60hz with triple-buffered vsync....

also my table top can't fit a bigger monitor, and the existing 24'' is plenty big for my vision, so i don't particularly feel the need to go up to 27'' and above.... the other reason is that 1440p requires a stronger card to run ultra settings at native resolution, but with pascal that's no longer an issue. even a 980ti was pretty acceptable, but i did not hop on that bandwagon.


QUOTE(SSJBen @ Jun 7 2016, 04:45 PM)
3840x2160 is an EXACT 4 times increase in resolution over 1920x1080. All a monitor or TV needs to do is quadruple the FHD image into UHD without any further calculations. This is different from when 480p was upscaled to 1080p, or 720p to 1080p. Neither 720p nor 480p is an integer multiple of 1080p, which is why 480p often looks like horseshit in FHD (even with the best post-processing scaler).


1080p looks like 1080p on a "4k" screen because it's an integer increase, not because your "screen looks smaller". You can play a 1080p image on a 120" screen through a 4k projector and you'll see next to no difference between that and a 1080p projector on a 120" screen, assuming all post-processing is disabled. Of course, the 4k projector would have much more advanced technology to further boost a 1080p image to look even better than it originally did. The same goes for most of the high-end "4k" (UHD) TVs on the market nowadays.


did not know this hmm.gif but does resolve some of my concerns going for higher reso...
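the integer-scaling point can be sketched in a few lines (function names are mine, just for illustration): 1080p→4k is an exact 2x in each axis, so every source pixel maps to a clean 2x2 block, while 720p→1080p needs interpolation:

```python
def scale_factor(src, dst):
    """Per-axis scale factor from a source to a destination resolution."""
    return (dst[0] / src[0], dst[1] / src[1])

FHD, UHD = (1920, 1080), (3840, 2160)
HD720, SD480 = (1280, 720), (720, 480)

print(scale_factor(FHD, UHD))    # (2.0, 2.0)  -> exact pixel doubling, no blur needed
print(scale_factor(HD720, FHD))  # (1.5, 1.5)  -> non-integer, interpolation required
print(scale_factor(SD480, FHD))  # (~2.67, 2.25) -> non-integer and uneven per axis
```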


QUOTE(SSJBen @ Jun 7 2016, 04:45 PM)
All a smaller screen does is have a much higher ppi/dpi, which helps mask aliasing issues, giving the perception that things look sharper and better. The latter is actually down to panel quality, as small panels are much easier to produce in high quality than big panels.

this i did know, and it's why i stuck to a resolution close to 1080p.


QUOTE(SSJBen @ Jun 7 2016, 04:45 PM)
As a side note, instead of aiming to go for 4k60p, I heavily recommend people to go for a 1440p (QHD) at much higher refresh rates if gaming on PC is the main target. 1440p at 120hz (or 144hz) is a whole new level of improvement IMO. Even better if one can have Gsync, that's godlike upgrade there.

I'll keep drilling this point: 1440p @ 120hz (or 144hz) with Gsync is the best sweet spot for PC gaming now. 4k60p is just 1080p60 with sharper visuals, which does almost nothing to improve gameplay.
yeah, i forgot this point. as far as i know 4k monitors are capped at 60hz (a DisplayPort 1.2 bandwidth limit), compared to 1440p monitors that reach up to 144hz... if not mistaken hmm.gif

for a gamer they may prefer faster refresh.... i would anyway sweat.gif
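the refresh-rate difference is easiest to see as a frame-time budget, i.e. how long the gpu has to finish each frame (a quick sketch, names are mine):

```python
def frame_budget_ms(hz: float) -> float:
    """Time available to render one frame at a given refresh rate."""
    return 1000.0 / hz

for hz in (60, 120, 144):
    print(f"{hz:>3} Hz -> {frame_budget_ms(hz):.2f} ms per frame")
# 60 Hz -> 16.67 ms, 120 Hz -> 8.33 ms, 144 Hz -> 6.94 ms
```

so a 144hz panel shows a new frame every ~7ms versus ~17ms at 60hz, which is where the smoothness and reduced input lag come from.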




Moogle Stiltzkin
post Jun 7 2016, 06:53 PM

Look at all my stars!!
*******
Senior Member
4,454 posts

Joined: Jan 2003
here is a review of the 1070 that explores the question: should the bang-for-buck user skip the 1080 and go for the 1070?

read here
http://www.hardocp.com/article/2016/06/06/..._edition_review

QUOTE
The Bottom Line

The most important aspect to take away from the launch of the NVIDIA GeForce GTX 1070 Founders Edition is the fact that NVIDIA has taken what used to be $649 video card performance (GTX 980 Ti/R9 Fury X) and brought it down to the $379-$449 price point. That is at least a $200 reduction, for performance that just a month ago was the fastest you could buy.

Bringing that level of performance down to a more affordable price for everyone allows more people to jump on the high-performance bandwagon. For the new games coming out this year, that gameplay experience improvement will be welcomed.

AMD doesn't yet have a response to the GeForce GTX 1070 Founders Edition at this price segment. Sometimes we give the opinion that you should wait and see what the competition brings. However, with AMD focusing on the $200 price segment with the Radeon RX 480 at the end of the month, it leaves a wide gap open right now and NVIDIA on the top side of gaming performance.

If you are coming from an older GPU, the GeForce GTX 1070 Founders Edition or add-in-board partner video cards might be right up your alley for a video card that performs, but won't break the bank at the same time. The only thing we have to do now is wait and see on June 10th if availability and price gouging are an issue like we have seen with the GTX 1080.


This post has been edited by Moogle Stiltzkin: Jun 7 2016, 06:55 PM
