AMD Radeon™ Discussion V4, Cut Cut Cut!~ Go Red !
tech3910 | Oct 22 2010, 09:41 PM
Those reviews are kind of confusing, actually.
In some reviews, the 6870 is on par with, if not slightly better than, the GTX 470.
In other reviews, the 6870 is just barely better than the GTX 460 and loses out in a lot of cases.
tech3910 | Oct 22 2010, 09:53 PM
QUOTE(8tvt @ Oct 22 2010, 09:48 PM) yup, different games all vary too.. don't know what actually affected those.. driver, CPU power, game patches.. Added on October 22, 2010, 9:49 pm: or biased reviewers.. lol
Guru3D clearly shows the 6870 somehow being a little faster than the GTX 470, while the bit-tech review shows the 6870 barely on par with the GTX 460 1GB.
tech3910 | Oct 27 2010, 08:17 PM
The hotfix isn't meant to fix any major problem; it's just performance improvements in games & applications.
So what are you guys complaining about?
tech3910 | Oct 29 2010, 05:46 PM
QUOTE(Jet23sky @ Oct 29 2010, 09:23 AM) Try not to compare the ATI 6000 series with the 400 series; don't forget, the Nvidia 500 series is launching in December. It is not too far from what we have seen now. But of course, for now Nvidia's only bullet is the 400 series. XD I hope there is more competition in December, especially if... and only if Nvidia releases a 560. The current high-end lineup is 6850 < 5850 < 6870 < 5870 < 6950 < 6970 < 5970 < 6990. The news stated the 6970's performance will be very similar to or a bit higher than the 480, and the 6950 will sit between the 5870 and the 6970. Those who own a 5970 can really skip this generation, as it still beats almost the entire 6000 series (except the 6990).
Please... I don't buy the story that the GTX 580 is going to launch in December. Even if it launches, it will definitely be a paper launch, with real availability only next year. There is no way Nvidia could get the 580 into production so soon; it has to be at least late Q1 2011. As for the whole leak on Nvidia's website, I said they did it on purpose to take a little of the spotlight away from the Barts launch.
tech3910 | Oct 29 2010, 09:02 PM
I think it should be able to handle a 6850 without problems.
Just a piece of advice: overclock your processor to 3GHz, which is easily done even with the stock cooler. A 3.0GHz quad is the gaming sweet spot.
tech3910 | Oct 31 2010, 10:54 PM
QUOTE(magnumrolez @ Oct 31 2010, 10:41 PM) I wonder why ATI released the 6000 series to kill the 5850 and 5870; they are selling much cheaper and forced the 5000 series to drop in price massively. Are they ignoring the consequences just to compete against the GTX 460? Now our 5000 series cards' resale value is like a piece of shit
It's fair enough given that 5000 series pricing held solid for a year. In fact, it went up.
tech3910 | Nov 1 2010, 07:08 AM
You guys make me sad...... owning a 5850 and still butt-hurt complaining...... 4870 users like me should just give up....
tech3910 | Nov 2 2010, 11:35 AM
 QUOTE Following up on previously leaked specifications, Chinese website eNet has filled in some missing information - notably TDP and TMU count. The TDP of GeForce GTX 580 is at 244W, slightly lower than the GeForce GTX 480. The texture fillrate had been viewed by many as one of the bottlenecks for GF100, and eNet reports that GF110 effectively doubles the TMU count to 128 TMUs. Apart from this substantial improvement in TMU, the GF110 is a "full revision" and fixed version of GF100.
The net result is a performance improvement of between 15%-20% over the GTX 480, while using less power. It must be noted that the GTX 480's 250W TDP figure was controversial, as it reflected typical load power for gaming applications, whereas stress benchmarks like Furmark drew close to 300W, which is the reference AMD uses for its TDP ratings. It is unclear whether the 244W figure refers to gaming load or absolute TDP in stress applications.
In addition to eNet's report, purported benchmarks for the GTX 580 have leaked within the Chinese enthusiast community.
The benchmarks show an average improvement of ~17% for the GTX 580 over the GTX 480. The maximum increase is in 3D Mark Vantage - ~35%, while the minimum increase is in Resident Evil 5 ~5%. Against the Radeon HD 5870, the GTX 580 wins comfortably by an average of ~45%. The gaps are massive (~2x) in tessellation oriented benchmarks, building on GTX 480's strengths, while the DX10 benchmarks narrow the gap considerably. Unfortunately, according to these benchmarks, the GTX 580 will end up slower than AMD's previous-gen Radeon HD 5970, on average, let alone AMD's upcoming flagship - Antilles / Radeon HD 6990. GTX 580 against Cayman / Radeon HD 6950/70 is the real battle here. Naturally, we would advise you to take any such leaks with a grain of salt.
The GTX 580 is set to release in one week's time, on November 8th/9th (depending on where you live), but the jury is still out on the extent of availability on release. While strong rumours suggest virtually no availability on launch, eNet insists that AIC partners will have GTX 580s on sale on November 9th. However, the quantity of cards for sale on November 9th is not mentioned. http://vr-zone.com/articles/report-nvidia-...aked/10202.html
tech3910 | Nov 2 2010, 11:57 AM
StoneGiant & Heaven are DX11 benchmark tools. Fermi is better at tessellation compared to Evergreen.
Metro 2033 is a DX11 game that is heavy on tessellation. Lost Planet 2, well.... that's by Capcom, and it uses an engine that always runs better on Nvidia cards. But no worries, because the LP2 engine is nowhere near capable of producing frame rates well above acceptable even on a 5770 at max settings in DX11 mode at full HD.
tech3910 | Nov 2 2010, 07:26 PM
This is about yields. In the chip world, every chip has at least one "younger brother"; it's there to improve yields.
The "younger brother" chip is basically a crippled version of the original chip.
Put it this way: the 6870 has 1120 SPs, the 6850 has 960. When ATI orders this chip from TSMC, it's impossible for TSMC to produce 100% perfect chips. The perfect chips get made into 6870s, while for those with some leakage or defects, the problematic SPs are disabled and the chip is made into a 6850.
Only chips with fewer than 960 working SPs get thrown away. This way, fewer chips are wasted.
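Here's a rough sketch of the binning idea in Python, purely illustrative: the 1120/960 SP counts are the real Barts numbers, but the bin_die helper and the example dies are made up.
CODE
# Illustrative only: which product a Barts die becomes, given how many of its
# 1120 stream processors (SPs) came out of the fab working.
FULL_SPS = 1120     # all SPs working -> HD 6870
SALVAGE_SPS = 960   # enough SPs left -> defective ones disabled, sold as HD 6850

def bin_die(working_sps: int) -> str:
    """Decide which bin a die falls into based on its working SP count."""
    if working_sps >= FULL_SPS:
        return "HD 6870"
    if working_sps >= SALVAGE_SPS:
        return "HD 6850"   # salvage: disable the bad SPs, keep the die
    return "scrap"         # below the salvage threshold, the die is wasted

# Hypothetical dies off one wafer (SP counts made up for illustration):
for sps in (1120, 1120, 1096, 1040, 980, 900):
    print(sps, "->", bin_die(sps))
# Without the 6850 salvage bin, four of these six dies would be scrap;
# with it, only one is.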
tech3910 | Nov 2 2010, 08:23 PM
It's a win for consumers too, because without this practice yields would be much lower, so the cost per chip would be higher and we consumers would have to pay more. You just have to identify which salvage part is the good one and which is the bad one: in the case of Cypress, the 5850 is good, but the 5830 is just total rubbish.
And speaking of Sapphire, they are readying their own OC software named TriXXX.
tech3910 | Nov 3 2010, 09:20 PM
If you want to check out EyeSpeed, download the demo and see it for yourself: http://www.amd.com/us/products/technologie...ges/gaming.aspx
Warning: I'm not in a good mood today and will report any troll/spammer.
Edit: in gaming, apart from physics calculations, it can also do AI computation. So I guess if games were to utilize this, it would result in smarter, more responsive AI. At the same time, the game could throw more smart AI at the player in a single map at any given time.. theoretically speaking, of course.
tech3910 | Nov 3 2010, 09:43 PM
QUOTE(8tvt @ Nov 3 2010, 09:32 PM) I tried to get info from outside the AMD sphere.. but still not many care to explain it.. they just put up the demo.. but does it really benefit current games?
Currently, I'd guess it isn't beneficial at all. Hardware technology is always ahead of software; what can you do about it? Even if it's not really beneficial at the moment, there is a reason AMD put these new features in. Otherwise it wouldn't be a new generation, just a rebranded card......*cough*....GTX 580....*cough*
But this is a glimpse of the future. I'm still waiting for physics from AMD.........
tech3910 | Nov 3 2010, 09:47 PM
QUOTE(speedhunter @ Nov 3 2010, 09:45 PM) My post got deleted? Asking a question related to pricing and pointing to where I found it counts as advertising too? Come on, at least let me know if I'm wrong
It was pointed out a couple of pages back, and also in the Nvidia thread.
tech3910 | Nov 4 2010, 01:09 PM
QUOTE(don^don @ Nov 4 2010, 12:29 AM) Guys, how do you adjust the RPM of the GPU's fan? Using MSI Afterburner?
The simple way is to just use CCC: on the overclocking page you can manually set the fan speed while you're gaming. Or use software like MSI Afterburner or ATT to create a profile; you can reshape the fan speed vs. temperature graph and save it as a profile. A rough sketch of the idea is below.
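Just to show what a fan-speed-vs-temperature profile is doing under the hood (this isn't Afterburner's or ATT's actual code, just a hypothetical curve with made-up points):
CODE
# Hypothetical fan curve: (temperature in C, fan speed in %) points, like the
# dots you drag around on Afterburner's graph. The values here are made up.
CURVE = [(40, 30), (60, 45), (75, 70), (85, 100)]

def fan_speed(temp_c: float) -> float:
    """Linearly interpolate the fan duty cycle for the current GPU temperature."""
    if temp_c <= CURVE[0][0]:
        return CURVE[0][1]
    if temp_c >= CURVE[-1][0]:
        return CURVE[-1][1]
    for (t0, s0), (t1, s1) in zip(CURVE, CURVE[1:]):
        if t0 <= temp_c <= t1:
            return s0 + (s1 - s0) * (temp_c - t0) / (t1 - t0)

for t in (35, 55, 70, 80, 90):
    print(f"{t}C -> {fan_speed(t):.0f}% fan")

The idle end of the curve is kept low for noise, and the hot end ramps to 100% so the card doesn't cook itself under load.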
tech3910 | Nov 4 2010, 09:30 PM
QUOTE(moshpit21 @ Nov 4 2010, 09:27 PM) Are you kidding me? Bro, do your research more carefully: at stock clocks the 6870 is still slower than a stock 5850. The 6870 only becomes SLIGHTLY better than a stock 5850 after it is overclocked (yeah, the 6870 is not overclock friendly). But if you want to compare an OC 6870 vs an OC 5850, man, the 5850 wins by far too much; remember, a 5850 is overclockable to 5870 performance.
Stock vs stock
tech3910 | Nov 5 2010, 02:19 PM
QUOTE(moshpit21 @ Nov 5 2010, 07:30 AM) Thanks for the heads up, but what is the source of this chart? What is your GC? Post your score please
That chart is from TechPowerUp. I love that TechPowerUp shows results for each resolution because, frankly speaking, if you're buying a 6870 or a 5850, you already own a monitor with a resolution above 1680x1050. My graphics card is a 4870..... it scores a wee bit over 10k in Vantage, P preset.
tech3910 | Nov 5 2010, 07:13 PM
QUOTE Antilles dual card packs two Caymans
Still scheduled for Dec 2010
Good news for AMD fans: the Antilles dual-chip card will end up with two Cayman chips, which promises that the card can end up significantly faster than the dual-chip Radeon HD 5970.
Two Barts (Radeon HD 6850 or 6870) class chips would not be enough to provide a sufficient performance lead for the new Radeon HD 6990 X2 card. This is why ATI is using two Cayman chips and we also learned that the chip looks more like the Radeon HD 6950.
The funny part is that we still don’t exactly know what is behind Cayman, but we are doing our best to find out before November 22nd, the alleged launch date.
We expect TDPs around 300W, but this is something that can only be confirmed at a later date. It will be very fast, that is certain, but we still don't know who will win this winter graphics update showdown. http://fudzilla.com/graphics/item/20751-an...cks-two-caymans
If you think Barts scales well in CrossFire, Antilles with 2x Cayman (not Barts) is going to blow you away.
tech3910 | Nov 5 2010, 07:29 PM
QUOTE(jeopardise @ Nov 5 2010, 07:22 PM) Cayman HD 6970 has 2GB of 256-bit RAM. More info: http://www.fudzilla.com/graphics/item/2074...970-has-2gb-ram
Antilles (dual Cayman) will be 4GB? Not sure, because the 6990 might end up being 2x 6950.
tech3910 | Nov 7 2010, 01:09 PM
QUOTE(law1777 @ Nov 7 2010, 01:01 PM) I'm using an i5 750 with a 5870.. just overclock your 750/760 to 3.5-4.0GHz and I'm sure you won't have any problem playing any type of game
The sweet spot for the i5 750/760 is 3.0GHz; anything higher won't yield much improvement. http://www.tomshardware.com/reviews/game-p...eneck,2737.html