 AMD Radeon™ Discussion V12, Latest - 14.12 | WHQL - 14.12

empire23
post May 3 2015, 04:56 AM

Team Island Hopper
Staff
9,417 posts

Joined: Jan 2003
From: Bladin Point, Northern Territory
QUOTE(skylinelover @ May 3 2015, 03:44 AM)
nice story bro laugh.gif rclxms.gif

retro review yoh cool2.gif  cool.gif


*
I still remember the 9800 SE; I softmodded mine to the full 8 pipelines. It was awesome. Then it was replaced with a GeForce 6800.
empire23
post May 4 2015, 04:44 AM

QUOTE(S4PH @ May 3 2015, 03:56 PM)
I used to own a 9600 Pro; that card was so good I never replaced it till it got spoilt lol thumbup.gif. I remember a friend of mine had a 9800 SE, I'm not sure what was cut down from the normal 9800 hmm.gif
*
I think it had 4 pipelines instead of 8, so you got a near 100 percent boost when you softmodded the card. Dem nais.

QUOTE(Najmods @ May 3 2015, 05:06 PM)
This was my pride and joy back then, the X1800 GTO. Cost me RM1000 at the time. I just love reference-design PCBs, there is some sort of elegance to them. Back then I was foolish; I overclocked it so hard it sat at over 100°C for quite some time, and that killed it laugh.gif

[attachmentid=4434531]
*
I had an X1800 XT, bought it when it first came out for about RM1300. Dem expensif, but very song play games. But then like yours, mine overheated, artifacted and then mati katak.
empire23
post May 8 2015, 08:17 AM

QUOTE(Unseen83 @ May 7 2015, 04:49 PM)
HBM!! Boom! First GPU with HBM... (fingers crossed it's not a flop)

[attachmentid=4439404]
*
HBM is interesting. But how they dealt with the challenges of DRAM stacking is even more interesting.

But here are my predictions:

- Memory overclocking will be quite shit. Stacking DRAM in multiple layers makes routing and vias a huge issue, and signal integrity and timing margins will be tight. But that's the trade-off for shorter delays.

- Thermal management. Sharing the same substrate is an issue as thermal density increases, and proximity to the core, which has higher leakage current than the memory, poses further thermal management issues.

- Price. Yields will be crap; I do not expect prices to go down until a process improvement takes place, and that will take longer than the usual cycle.

Waiting for a Beyond3D or Ars Technica article, but not holding my breath haha!
empire23
post May 9 2015, 04:37 AM

QUOTE(Unseen83 @ May 8 2015, 05:24 PM)
hmm.gif If your predictions come true, then it becomes a flop... sad.gif I really hope that's not going to happen, but maybe that's just denial. I'll just keep my fingers crossed and hope the downsides are minimized. We end users really need a competitor to Nvidia, or we'll be paying USD 1,000-3,000 for GPUs by the end of this year..

*
Not necessarily. Out-of-the-box performance should be pretty good; it's just that OCing would be an issue. I personally don't overclock most of the time, as I don't see the point.

Above all, the multiple-die, single-package design would save a lot of board space and increase density, which means interesting prospects for SFF systems.
empire23
post May 9 2015, 05:28 AM

QUOTE(Unseen83 @ May 9 2015, 04:53 AM)
True too, but most end users who spend RM3K+ on a GPU want to test its limits and max capability, and if the GPU does badly when pushed to OC, it may lose points in reviews, with the public viewing the AMD flagship as a GPU maxed out at factory defaults just to outperform its competitor. But for now that's all speculation and assumption, so better to wait. Take note this is early, first-gen HBM, and AMD got it first! And it's COMING!! biggrin.gif Expectations for HBM's capacity and capability are no doubt high, and even the competitor is planning to use second-gen HBM in its next GPU...

Hm, naah, if you don't like to OC then a Porsche is not for you... laugh.gif
*
I never have and probably never will factor OC performance into any review. A card with a lot of OC potential means production quality control isn't tight (hence the wide spread), and thus reliability is more variable. If manufacturers could reliably clock their chips higher, they'd bin and sell the higher-performing parts themselves, because that means higher profits.

When the original GTX reference cards came out and people found they had significant headroom, Nvidia started allowing most AIB partners to ship stock-overclocked cards as yields and quality tightened up. Now practically all cards come overclocked out of the box compared to the original reference speed.

I feel no need to OC; if I wanted a faster Porsche, I'd just buy the next model up. Much like if I felt my M235i was too slow, I'd order an M4 laugh.gif
empire23
post Jun 7 2015, 06:55 AM

QUOTE(JohnLai @ Jun 7 2015, 12:58 AM)
Can somebody help me to explain the stuff to kizwan?

EDIT: Formatting.
*
It's not black and white. Depending on the inherent efficiency of the rendering interface and any additional abstraction steps the driver may or may not have to make, the choice of Mantle or DX can affect things.

Graphics is a lot more complex than a simple SIP pipeline. For example, a driver might find that texture calls and transfers are its most cycle-heavy work, so a CTM (close-to-metal) API like Mantle, where you can actively manage the memory hierarchy, can reduce CPU usage as a whole, even on the driver side of things. This is dependency in action.
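
To make that concrete, here's a toy C++ sketch of the idea (all names and cycle costs are invented; this is not the real Mantle or DX API): a high-level API pays driver validation on every draw call every frame, while a CTM-style API pays it once when the command buffer is recorded and then replays it cheaply.

CODE
// Toy model, NOT a real graphics API: cycle costs are invented purely to
// show where the CPU time goes under each abstraction style.
#include <cstdio>
#include <vector>

struct DrawCmd { int mesh; int texture; };

constexpr long kValidateCost = 500; // hypothetical per-draw driver validation
constexpr long kSubmitCost   = 50;  // hypothetical per-draw submission

int main() {
    const long draws = 1000, frames = 100;

    // High-level API: the driver re-validates every draw, every frame.
    long highLevel = frames * draws * (kValidateCost + kSubmitCost);

    // CTM style: validation is paid once at record time, replay only submits.
    std::vector<DrawCmd> cmdBuf(draws, DrawCmd{0, 0});
    long ctm = (long)cmdBuf.size() * kValidateCost
             + frames * (long)cmdBuf.size() * kSubmitCost;

    std::printf("high-level: %ld cycles, CTM: %ld cycles\n", highLevel, ctm);
}

Same draws, same frames, but the validation term is amortized across frames instead of multiplied by them; that's the whole CTM pitch in one line of arithmetic.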
empire23
post Jun 7 2015, 10:46 AM

QUOTE(JohnLai @ Jun 7 2015, 10:34 AM)

Still, the driver needs to expose the low-level interface first.

*
True. But at the end of the day, if reducing abstraction and increasing efficiency with, say, Mantle avoids flow-on work that the driver doesn't handle efficiently, then yes, there are effects.

Quoting Mike Houston (now a chief engineer at Nvidia), commenting on API performance overheads:
QUOTE
The OS and the driver architecture can still sometimes get in the way. For example, if you look at the "small batch problem", this doesn't exist on the console to nearly the same degree as the PC despite the underlying HW being extremely similar. Multi-GPU support doesn't exist in very useful ways yet in either GL or DX. The tension is really how to maintain portability and system stability while still getting out of the way.

His comment on the small batch problem generally stems from SIMD vector widths, I believe, which an API mostly abstracts away but a CTM API can optimize for.

TL;DR, it's all related at the end of the day. You can change something high up the chain like a compiler, but the effects will be felt everywhere.
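
As a back-of-envelope illustration of the small batch problem from that quote (every number below is invented for the sake of the arithmetic): when the fixed per-draw CPU cost dwarfs the GPU work per object, merging many small draws into one batched call is what moves the needle.

CODE
// Toy arithmetic, not a benchmark: invented costs showing why many tiny
// draw calls become CPU-bound and why batching them helps.
#include <cstdio>

int main() {
    const long perDrawCpuCost   = 10000; // hypothetical CPU cycles per draw call
    const long perTriangleCost  = 10;    // hypothetical GPU cycles per triangle
    const long trianglesPerMesh = 50;    // "small batch": tiny meshes
    const long objects          = 5000;

    // Naive: one draw call per object, so the fixed cost is paid 5000 times.
    long naive = objects * (perDrawCpuCost + trianglesPerMesh * perTriangleCost);

    // Batched/instanced: one call covers all objects, fixed cost paid once.
    long batched = perDrawCpuCost + objects * trianglesPerMesh * perTriangleCost;

    std::printf("naive: %ld cycles, batched: %ld cycles\n", naive, batched);
}

On a console the fixed cost per call is far smaller, which is why, as Houston notes, the problem barely exists there despite near-identical hardware.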
empire23
post Jun 7 2015, 02:30 PM

Dunno why ppl until dai dai wanna argue driver overhead on CPU, just go buy 8 core Xeon, kaotim laugh.gif
empire23
post Jun 8 2015, 05:20 AM

QUOTE(Minecrafter @ Jun 7 2015, 03:19 PM)
Or the 18-core, 36-thread Xeon E5-2699.. laugh.gif
*
Hmmmm, might be a good purchase lol

QUOTE(kizwan @ Jun 7 2015, 03:43 PM)
Well, if you can afford it, why not. laugh.gif But it will be an expensive solution. laugh.gif I don't mind if anyone wants to buy a high-end CPU for their gaming rig; I'd actually encourage it. No need for a Xeon IMO, because it's too expensive for most people; Haswell-E CPUs are powerful enough for this.

Well, in my opinion, if you can afford a pair of high-end cards, you should be able to afford at least a 1440p monitor. A pair of 290/290Xs is too powerful for 1080p anyway.

I just want to point out that people should be able to differentiate between a driver overhead issue and a driver optimization issue. Even if the API is properly coded and the driver properly coded to utilize the API, you may still not get good performance if the driver is not optimized for a particular game. This is the problem with AMD drivers, especially with Nvidia-optimized games.
Well, I'm not arguing, just discussing, if that was directed at me, though it probably wasn't. I always keep in mind that forums can't convey tone.
*
I always use FMUL versus ADD as a good analogy for how optimization can have a flow-on effect from drivers to final processing.

For example, say an instruction stream wants the answer to 4x4, and processor A completes an FMUL in 8 cycles but an ADD in 1 cycle. Instead of a single MUL, you can issue 4 ADDs and get the same answer in half the cycles (4+4+4+4 = 16, wat).

But what if issuing that many adds saturated the memory bus, or overtaxed the instruction queue? That's why I believe the discussion about driver/API overhead is slightly moot without an understanding of the underlying principles of computer engineering.
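
The same trick, written out as compilable C++ (the latencies above are hypothetical; whether the substitution actually wins depends on the real pipeline). This is the strength reduction a compiler, or a driver's shader compiler, might do on your behalf:

CODE
// Three ways to compute x*4; all give the same answer, but they map to
// different instruction mixes with different latencies and port pressure.
#include <cassert>

int mulByFour_mul(int x)   { return x * 4; }   // one multiply
int mulByFour_adds(int x) {                    // repeated addition
    int s = x + x;                             // 4+4
    return s + s;                              // (4+4)+(4+4) = 16
}
int mulByFour_shift(int x) { return x << 2; }  // single shift

int main() {
    assert(mulByFour_mul(4)   == 16);
    assert(mulByFour_adds(4)  == 16);
    assert(mulByFour_shift(4) == 16);
    return 0;
}

A compiler will usually pick the shift on its own, which is exactly the point: a change that high up the chain is felt all the way down.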

QUOTE(cstkl1 @ Jun 7 2015, 04:17 PM)
empire23
Personally, what I fear is that the future will be one-sided.

Maxwell is akin to the Core 2 Duo era for Intel. Everybody thought AMD would challenge soon.

AMD better not sit on their laurels. They need to make more efficient GPUs.
They have lost the desktop CPU, then the notebook CPU and iGPU, and are nowhere to be found in small-form-factor PCs/tablets/ultrabooks etc.

The 290X was OK because a lot of people ignored its power requirements.

If they don't challenge Nvidia at all levels this time, I fear what's going to happen with Pascal. Maxwell is a truly incredible jump from Kepler.

The 390X and Fury X, or whatever the naming scheme, better rise up. Nobody cares or even bothers about AMD CPU refreshes and updates, but everyone knows Intel's roadmap. Let's pray this does not happen with discrete GPUs.
*
I do not believe that all is lost. Carrizo has interested me greatly; I see great potential in it for optimization and class-beating integrated graphics.

What I think they're doing is "buying time", much like how Intel bought time before the Core 2 arrived by bumping up the raw speed of the Prescott Pentium 4s until the new architecture was polished and well built enough for the desktop environment.

AMD know where they want to go with GCN, they just need more time. That's why it's an R9 390 and not an R10 wink.gif

empire23
post Jun 17 2015, 07:29 AM

QUOTE(shikimori @ Jun 17 2015, 06:53 AM)
Can anyone answer me in simple layman's terms: will 4GB of HBM be enough for 4K gaming, or for games that use a lot of VRAM like GTA V, without stuttering or a performance hit?
*
Quick and dirty answer: no idea. It depends on a lot of things, BUT more RAM is always nicer.

The most likely answer is that AMD has calculated that the high-performance nature of HBM, plus the higher bandwidth offered by processor-mediated DMA over PCI-E 3.0, should be enough to cover things.

Technical explanation.
QUOTE
If I were an AMD engineer, I'd just tier it for the high-performance cards. That's essentially engineer-speak for augmenting HBM with a pool of traditional, slower, non-HBM memory that stores less-used textures.

Also, given that DirectX 12 is around the corner and developers get access to memory page locations and more granularity, it might be a calculated gamble by AMD, banking on more efficient programming and a strong preference for cache-like memory rather than brute storage capacity.
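
A minimal C++ sketch of what such tiering could look like (purely hypothetical, not AMD's actual memory manager): treat the 4GB of HBM as a small fast pool, keep the hottest textures there under an LRU policy, and evict cold ones to the bigger, slower pool.

CODE
// Hypothetical texture pager: HBM as a small fast tier with LRU eviction
// to a larger slow tier (system RAM reached over PCI-E).
#include <cstdio>
#include <initializer_list>
#include <list>
#include <unordered_map>

struct TexturePager {
    size_t fastSlots;                      // how many textures fit in the fast pool
    std::list<int> lru;                    // front = most recently used texture id
    std::unordered_map<int, std::list<int>::iterator> resident;

    explicit TexturePager(size_t slots) : fastSlots(slots) {}

    // Use a texture; returns true on a fast-pool hit, false if paged in.
    bool touch(int tex) {
        auto it = resident.find(tex);
        if (it != resident.end()) {        // hit: mark as most recently used
            lru.splice(lru.begin(), lru, it->second);
            return true;
        }
        if (lru.size() == fastSlots) {     // full: evict the coldest texture
            std::printf("evict texture %d to slow pool\n", lru.back());
            resident.erase(lru.back());
            lru.pop_back();
        }
        lru.push_front(tex);               // page in over PCI-E (the slow path)
        resident[tex] = lru.begin();
        return false;
    }
};

int main() {
    TexturePager pager(2);                 // comically small fast pool for demo
    for (int tex : {1, 2, 1, 3, 2}) pager.touch(tex);
}

The shape of the policy is the point: HBM behaves like a big cache, so how well the hot working set fits matters more than raw capacity.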

empire23
post Jun 17 2015, 04:04 PM

Hmmm, everything I see is a card that is very much a gamble on DX12's CTM (close-to-metal) philosophy, which I feel is a natural progression of graphics card architecture, much as Mike Houston predicted a few years back when he was still at AMD.

What is going to be interesting is their ability to pull it off, given that DX12 games are still in the pipeline and optimization always takes time. ATI was first to market with built-in tessellation back in the R5xx era, but it didn't do jack shit for performance simply because devs needed another generation to catch up via optimization.

My bet, though, is that because most game devs are quite in tune with optimizing for GCN, thanks to AMD's omnipresence in the console market, AMD is betting that performance gains will come fast and scale quickly, coupled with the increased memory efficiency of GCN 1.2. It's a bold gamble; not sure it'll work though.

You guis tell me if you get any review samples! laugh.gif
empire23
post Jun 17 2015, 04:25 PM

QUOTE(cstkl1 @ Jun 17 2015, 04:10 PM)
Agreed. Heck, go back before that to DX10; same story with the pathetic 2900. Sometimes it's frustrating, but Nvidia's philosophy of building the GPU for the now seems to be the better gamble.

None, dude. Normally I can get one. This time I even asked a few others; nobody has it. Even the hwlabs dude isn't saying anything. I think AMD is trying to control the initial reviews. The Titan X and 980 Ti launch reviews had so much variance, with a lot of people chasing website hits trying to be first.
*
ATI has always been pretty innovative with their architecture enhancements. They were first with AVIVO, where the graphics card took over video processing and started that trend, though for over a year only one piece of software supported it; they were first with a non-linear pipeline architecture, and with a lot of other things both companies now treat as the norm. Nvidia has always bided its time on these things, preferring market maturity. That's why Maxwell debuted in the GTX 750 Ti for more than a year before being scaled up.

It's like how AMD foresaw the need for even more compute performance and spent a significant amount of die space on it, while Nvidia cut it down in Maxwell. Compute performance is the way of the future if you ask me, but AMD is jumping the gun a bit early for the mass market.

AMD has always been crap with samples, and if you gave them a shit review, you'd be magically bumped to the bottom of the sample list next time around. They sure knew how to make enemies back then. But that was like 8 years ago; maybe their marketing dept is a little smarter now. laugh.gif
empire23
post Jun 17 2015, 04:29 PM

QUOTE(khelben @ Jun 17 2015, 04:15 PM)
Dem, where are the capacitors?
*
Looks like tantalum polymer capacitors near the power stages (e.g. Panasonic/Sanyo SMD POSCAP). Might be wrong.

Also, the increased compactness of the board and a superior VRM might negate the need for bulk capacitance.
empire23
post Jun 17 2015, 04:35 PM

QUOTE(Unseen83 @ Jun 17 2015, 04:30 PM)
The AMD R9 Fury, such a sexy and superior GPU tongue.gif  wub.gif
*
That remains to be seen. wink.gif

QUOTE(Minecrafter @ Jun 17 2015, 04:32 PM)
Is that true? laugh.gif LOL!
*
It was true when I did it for money. Nowadays computers and gaming are just side hobbies; I like discussing them more than reviewing them, and I prefer spending my money on cars and guns.
empire23
post Jun 17 2015, 04:40 PM

QUOTE(khelben @ Jun 17 2015, 04:36 PM)
Yeap. But there's one big thing nVidia brought to the market first: multiple cards (SLI, CrossFire). ATi's knee-jerk reaction was a bit funny, that hideous DVI cable connecting the two cards externally biggrin.gif
Ah, I'm so not up to date. I was looking at the new cards and thought, "so, where do I connect my DVI port?"
*
Naw brah, 3dfx brought that in with the Voodoo 2's SLI, so when they were bought out by Nvidia, NV had access to all their expertise. But yeah, ATi's solution was effin ridiculous.

I oso not up to date, once in a while when something new comes out, I come out of my tech cave and find out that I'm generations behind in hardware. My last PC lasted me from 2008 to 2014 before that sial DFI mobo meletop laugh.gif
empire23
post Jun 18 2015, 07:35 AM

QUOTE(Unseen83 @ Jun 18 2015, 06:11 AM)
We're not blaming the shop, but the shop needs to be made an example of; we electronics buyers shouldn't be taken advantage of. No problem paying 6% GST + a 10-20% service charge to the shop/seller; the issue is sellers/shops/distributors charging an excessive 70-80% on top of GST... What is this, a Merc-Benz? With excessive charges? smile.gif

So please make a complaint, send an email icon_rolleyes.gif. I already messaged an officer working at KPDNKK; he says he'll check it out next week, as they are currently doing a tour checking for over-profiting shops..

http://eaduan.kpdnkk.gov.my/eaduan/main.php?lang=2
*
It's not a controlled item and thus they can set any price they want ler.
empire23
post Jun 18 2015, 08:41 AM

QUOTE(Unseen83 @ Jun 18 2015, 07:53 AM)
hmm.gif Well, I asked a buddy working there... and I know the government needs extra income to boost the country's economy, so I'm going to try using the Price Control and Anti-Profiteering Act 2011. What is the application of the PCAP? "Under the PCAP, it is an offence for a person/retailer to make an unreasonably high profit ("profiteering") from the sale or supply of goods or services in the course of a trade or a business." icon_rolleyes.gif
*
I know the law, but the law also requires that the defined "price controller", in this case a ministry officer, make a full review of the "goods" in question; it can only apply to one item at a time, and the finding can be contested because computer items are not gazetted.

By the time this process is done, the R9 400 series will be out already haha! laugh.gif

You're better off waiting the 3 months it takes for prices to normalize than the year-plus it'll take for a review.
empire23
post Jun 19 2015, 02:21 PM

QUOTE(cstkl1 @ Jun 19 2015, 01:42 PM)
Time to choose: epeen or efficiency. Part of my brain says go epeen.

But the CPU combos ATM are all boring, and so is Skylake; DDR4 is so immature.

DP 1.3, why can't AMD put this in?
*
When it comes to buying graphics cards, I only have one rule:

1) If it can run the games you want at the speed you want, it's all good.

Worst comes to worst, can just jual biggrin.gif
