AMD Radeon™ Discussion V12, Latest - 14.12 | WHQL - 14.12
May 3 2015, 04:56 AM
#1
Staff · 9,417 posts · Joined: Jan 2003 · From: Bladin Point, Northern Territory

QUOTE(skylinelover @ May 3 2015, 03:44 AM) I still remember the 9800SE version. I softmodded mine to a total of 16 pipelines. It was awesome. Then it was replaced with a GeForce 6800.
May 4 2015, 04:44 AM
#2
QUOTE(S4PH @ May 3 2015, 03:56 PM) I used to own a 9600 Pro. That card was so good I never replaced it until it got spoilt lol

I think it had 8 pipelines instead of 12, so you got a near 50 percent boost when you softmodded the card. Damn nice.

QUOTE(Najmods @ May 3 2015, 05:06 PM) This was my pride and joy back then, the X1800GTO. Cost me RM1000 back then. I just love reference-design PCBs, there is some sort of elegance to them. Back then I was foolish, I overclocked it way too hard and it sat at over 100°C for quite some time, which killed it

I had an X1800XT, bought it when it first came out; it cost about 1300 ringgit. Damn expensive, but games ran beautifully on it. But then, like yours, mine overheated, artifacted and then died.

[attachmentid=4434531]
May 8 2015, 08:17 AM
#3
QUOTE(Unseen83 @ May 7 2015, 04:49 PM)

HBM is interesting, but knowing how they dealt with the challenges of DRAM stacking is even more interesting. Here are my predictions:

- Memory overclocking will be quite poor. Multi-layering DRAM makes routing and vias a huge issue, and signal integrity and timing margins will be tight. But that is the trade-off for shorter delays.
- Thermal management. Sharing the same substrate is an issue as thermal density increases, and proximity to the core, which has higher leakage current than the memory, poses further thermal management issues.
- Price. Yields will be poor, and I do not expect prices to go down until a process improvement takes place, which will take longer than the usual cycle.

Waiting for a Beyond3D or Ars Technica article, but not holding my breath haha!
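For a sense of why the stacking headaches are worth it, here is a rough back-of-envelope bandwidth comparison. The figures are the commonly quoted ones for first-gen HBM (four 1024-bit stacks at 500 MHz DDR) against a wide GDDR5 setup (512-bit bus at 5 Gbps effective, as on the 290X); treat them as illustrative numbers, not official specs.

```python
# Rough peak-bandwidth comparison: first-gen HBM vs. a wide GDDR5 bus.
# Figures are the commonly quoted ones (4 x 1024-bit HBM stacks @ 500 MHz DDR,
# 512-bit GDDR5 @ 5 Gbit/s effective per pin) -- illustrative, not official specs.

def peak_bandwidth_gb_per_s(bus_width_bits: int, rate_gbit_per_pin: float) -> float:
    """Peak bandwidth in GB/s = bus width (bits) * per-pin rate (Gbit/s) / 8."""
    return bus_width_bits * rate_gbit_per_pin / 8

# HBM1: 4 stacks x 1024 bits, 500 MHz double data rate -> 1 Gbit/s per pin
hbm = peak_bandwidth_gb_per_s(4 * 1024, 1.0)   # ~512 GB/s from a slow, very wide bus
# GDDR5 on a 512-bit bus at 5 Gbit/s effective per pin
gddr5 = peak_bandwidth_gb_per_s(512, 5.0)      # ~320 GB/s from a fast, narrower bus

print(f"HBM1  : {hbm:.0f} GB/s")
print(f"GDDR5 : {gddr5:.0f} GB/s")
```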
May 9 2015, 04:37 AM
#4
QUOTE(Unseen83 @ May 8 2015, 05:24 PM)

Above all, the multiple-die, single-package approach would save a lot of board space and increase density, which means interesting prospects for SFF systems.
May 9 2015, 05:28 AM
#5
QUOTE(Unseen83 @ May 9 2015, 04:53 AM) True too.. but most end users who spend RM3K+ on a GPU would like to test its limits and max capability, and if the GPU does badly when pushing the OC it may lose points in reviews.. the public view of an AMD flagship could become "a GPU maxed out at factory defaults just to outperform its competitor".. but for now it's just speculation and assumption, so better to wait. Take note this is early, first-gen HBM, and AMD got it first! And it's COMING!!

I never have and probably never will take OC performance into account when it comes to any review. A card that has a lot of OC potential means that production quality control isn't tight (hence the wide spread), and thus reliability is more variable. If manufacturers could reliably clock their chips higher and bin the higher-performance parts, they would, because it means higher profits.

QUOTE hm naah, if you don't like to OC then a Porsche is not for you...

When the original GTX reference cards came out and people found that they had significant headroom, Nvidia started allowing most AIB partners to ship stock-overclocked cards as yields and quality tightened up. Now nearly all cards are overclocked out of the box compared to the original reference speed. I feel no need to OC; if I wanted a faster Porsche, I'd just buy the next model up. Much like if I felt my M235i were too slow, I'd order an M4.
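To put a number on that binning argument, here is a minimal toy model (every clock, spread and yield target below is invented): a vendor picks the highest shipping clock that, say, 95% of dies can sustain. A wide spread of maximum stable clocks forces that shipping clock well below the average die's limit, which users experience as OC headroom; a tight spread lets the vendor ship much closer to the edge.

```python
# Toy binning model: with a wide spread of maximum stable clocks (loose QC),
# the clock that 95% of dies can hit sits far below the average die's limit,
# so buyers see lots of "OC headroom". Tighten the spread and the vendor can
# ship much closer to the limit. All numbers are invented for illustration.
import random

random.seed(0)

def sample_max_clocks(mean_mhz: float, spread_mhz: float, n: int = 10_000):
    """Hypothetical distribution of each die's maximum stable clock."""
    return [random.gauss(mean_mhz, spread_mhz) for _ in range(n)]

def clock_for_target_yield(max_clocks, target_yield: float) -> float:
    """Highest shipping clock such that at least target_yield of dies can hit it."""
    ranked = sorted(max_clocks, reverse=True)
    return ranked[int(target_yield * len(ranked)) - 1]

for spread in (120, 30):  # wide spread (loose QC) vs tight spread (tight QC)
    clocks = sample_max_clocks(mean_mhz=1150, spread_mhz=spread)
    ship = clock_for_target_yield(clocks, 0.95)
    passers = [c for c in clocks if c >= ship]
    headroom = sum(c - ship for c in passers) / len(passers)
    print(f"spread {spread:3d} MHz -> ship at {ship:.0f} MHz, average headroom ~{headroom:.0f} MHz")
```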
Jun 7 2015, 06:55 AM
#6
QUOTE(JohnLai @ Jun 7 2015, 12:58 AM)

It's not black and white. Depending on the inherent efficiency of the rendering interface and any additional abstraction steps the driver may or may not have to take, a choice of Mantle or DX can affect things. Graphics is a lot more complex than just a simple SIP pipeline. For example, a driver might have issues with texture calls and transfers being the most cycle-heavy part, so using a CTM (close to metal) API like Mantle, where you can actively manage the hierarchy, can reduce CPU usage as a whole even on the driver side of things. This is dependency in action.
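A minimal sketch of that idea, with made-up per-call costs rather than anything measured from a real driver: a "thick" API path revalidates and rebinds state on every draw, while a close-to-metal path lets the application record that work once into a command buffer and just replay it.

```python
# Toy model of API abstraction cost: a "thick" driver revalidates and re-binds
# state on every draw call, while a close-to-metal path lets the application
# pre-build a command buffer once and replay it. Purely illustrative -- the
# cycle costs are invented, not measurements of any real driver.

VALIDATE_COST = 40   # hypothetical CPU cycles to validate state per call
BIND_COST     = 25   # hypothetical cycles to translate/bind resources per call
SUBMIT_COST   = 5    # hypothetical cycles to push one command to the GPU queue

def thick_api_cost(num_draws: int) -> int:
    # High-level API: the driver validates and rebinds on every draw.
    return num_draws * (VALIDATE_COST + BIND_COST + SUBMIT_COST)

def ctm_api_cost(num_draws: int) -> int:
    # Close-to-metal: validation/binding is paid once when the command buffer
    # is recorded; replaying it costs only the submit.
    record_once = VALIDATE_COST + BIND_COST
    return record_once + num_draws * SUBMIT_COST

for draws in (100, 1_000, 10_000):
    print(f"{draws:>6} draws: thick {thick_api_cost(draws):>8} cycles, CTM {ctm_api_cost(draws):>7} cycles")
```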
Jun 7 2015, 10:46 AM
#7
QUOTE(JohnLai @ Jun 7 2015, 10:34 AM)

True. But at the end of the day, if reducing abstraction and increasing efficiency with, say, Mantle removes certain flow-on effects that the driver might not handle efficiently, then yes, there are effects. Quoting Mike Houston (now a chief engineer at NVidia) commenting on API performance overheads:

QUOTE The OS and the driver architecture can still sometimes get in the way. For example, if you look at the "small batch problem", this doesn't exist on the console to nearly the same degree as the PC despite the underlying HW being extremely similar. Multi-GPU support doesn't exist in very useful ways yet in either GL or DX. The tension is really how to maintain portability and system stability while still getting out of the way.

His comment on the small batch problem generally stems from SIMD vector widths I believe, which an API mostly abstracts but which a CTM API can optimize for. TL;DR, it's all related at the end of the day. You can change something high up the chain, like a compiler, and the effects will be felt everywhere.
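A toy illustration of the small batch problem itself: every draw call carries a fixed CPU-side cost, so pushing the same triangle count through many tiny batches burns far more CPU time than a few big ones. The per-draw and per-triangle costs below are invented purely to show the shape of the curve, not measured from any driver.

```python
# Toy illustration of the "small batch problem": a fixed CPU cost per draw call
# dominates when batches are small. Numbers are invented for illustration.

PER_DRAW_OVERHEAD_US = 30.0    # hypothetical fixed CPU cost per draw call, in microseconds
PER_TRIANGLE_COST_US = 0.01    # hypothetical CPU cost per triangle submitted

def cpu_time_us(total_triangles: int, triangles_per_batch: int) -> float:
    batches = -(-total_triangles // triangles_per_batch)  # ceiling division
    return batches * PER_DRAW_OVERHEAD_US + total_triangles * PER_TRIANGLE_COST_US

TOTAL = 1_000_000  # one million triangles for the frame
for batch_size in (10, 100, 10_000, 100_000):
    print(f"{batch_size:>7} tris/batch -> {cpu_time_us(TOTAL, batch_size)/1000:.1f} ms of CPU time")
```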
Jun 7 2015, 02:30 PM
#8
Dunno why people insist on arguing about driver overhead on the CPU. Just go buy an 8-core Xeon, problem solved.
Jun 8 2015, 05:20 AM
#9
QUOTE(Minecrafter @ Jun 7 2015, 03:19 PM) Hmmmm, might be a good purchase lol

QUOTE(kizwan @ Jun 7 2015, 03:43 PM) Well if you can afford it, why not. Well in my opinion, if you can afford a pair of high-end cards, you should be able to afford at least a 1440p monitor. A pair of 290/290X is too powerful for 1080p anyway. I just want to point out that people should be able to differentiate between a driver overhead issue and a driver optimization issue. Even if the API is properly coded and the driver itself is coded properly to utilize the API, you may still not get good performance in games if the driver itself is not optimized for those games. This is the problem with AMD drivers, especially with Nvidia-optimized games. Well, I'm not arguing but just discussing, if you're referring to me, but probably not. I always keep in mind that a forum can't convey tone.

I always use FMUL versus repeated ADDs as a good analogy for how optimization can have a flow-on effect from drivers to final processing. For example, say you want the answer to 4x4, and processor A completes an FMUL in 4 cycles but an ADD in 1 cycle. Instead of issuing a single multiply, you can issue three adds (4+4 = 8, 8+4 = 12, 12+4 = 16) and get the same answer a cycle sooner. But what if issuing all those extra adds saturated the memory bus, or overtaxed the issue queue? That's why I believe the discussion about driver/API overhead is slightly moot without an understanding of the underlying principles of computer engineering.

QUOTE(cstkl1 @ Jun 7 2015, 04:17 PM) empire23, personally what I fear is that the future will be one-sided. Maxwell is akin to the Core 2 Duo era for Intel. Everybody thought AMD would challenge soon. AMD had better not sit on their laurels. They need to make a more efficient GPU. They have lost the desktop CPU, then the notebook CPU and iGPU, and are nowhere to be found in small form factor PCs/tablets/ultrabooks etc. The 290X was OK because a lot of people ignored the power it required. If they don't challenge Nvidia at all levels this time, I fear what's going to happen with Pascal. Maxwell is a truly incredible jump from Kepler. The 390X and Fury X, or whatever the naming scheme, had better rise up. Nobody even cares or bothers about AMD CPU refreshes and updates, but everyone knows Intel's roadmap. Let's pray this does not happen with discrete GPUs.

I do not believe that all is lost; Carrizo has interested me greatly, and I see that it has great potential for optimization and class-beating integrated graphics. What I think they're doing is "buying time", much like how Intel bought time before Merom/Conroe arrived by bumping up the raw speed of the Prescott Pentium 4s until the new architecture was ready for the desktop. AMD know where they want to go with GCN, they just need more time. That's why it's an R9 390 and not an R10.
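Here is that analogy as a minimal sketch, with the cycle costs invented to match the example above rather than taken from any real processor: strength-reduce one multiply into repeated adds, then note the catch.

```python
# Toy strength-reduction example: replace one "expensive" multiply with several
# "cheap" adds, then compare cycle counts. Cycle costs are invented to match
# the analogy above, not taken from any real processor.

FMUL_CYCLES = 4   # hypothetical latency of one floating-point multiply
ADD_CYCLES  = 1   # hypothetical latency of one add

def multiply_cost() -> int:
    return FMUL_CYCLES                       # 4 x 4 via a single FMUL

def repeated_add_cost(multiplier: int) -> int:
    """Cost of computing x * multiplier as (multiplier - 1) chained adds."""
    return (multiplier - 1) * ADD_CYCLES     # 4+4=8, 8+4=12, 12+4=16 -> three adds

print("FMUL        :", multiply_cost(), "cycles")       # 4 cycles
print("repeated ADD:", repeated_add_cost(4), "cycles")  # 3 cycles
# The catch: those extra instructions still occupy issue slots and queue
# entries, so per-instruction "cheapness" can saturate other resources.
```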
Jun 17 2015, 07:29 AM
#10
QUOTE(shikimori @ Jun 17 2015, 06:53 AM) Can anyone answer me in simple layman's terms, will 4GB of HBM be enough for 4K gaming, or for games that use a lot of VRAM like GTA V, without stuttering or a performance hit?

Quick and dirty answer: no idea. It depends on a lot of things, BUT more RAM is always nicer. The most likely answer is that AMD have calculated that the high-performance nature of HBM, plus the higher bandwidth offered by processor-mediated DMA over PCI-E 3.0, should be enough to cover for things.

Technical explanation:

QUOTE If I were an AMD engineer, I'd just tier it for the high-performance cards.

That's essentially engineer-speak for adding traditional, slower non-HBM memory to store less-used textures and augment the HBM. Also, given that DirectX 12 is around the corner and developers get access to memory page locations and more granularity, it might be a calculated gamble by AMD, banking on more efficient programming and a strong preference for cache-like memory rather than just brute storage capacity.
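A rough feel for why relying on spill-over past the 4 GB is a gamble: anything that misses the local HBM has to come across PCI-E, which is more than an order of magnitude slower. The peak figures below are the commonly quoted ones (~512 GB/s for first-gen HBM, ~15.75 GB/s for PCI-E 3.0 x16), and the hit-rate model is deliberately naive, ignoring latency and any overlap.

```python
# Back-of-envelope penalty for spilling past local VRAM: misses go over PCIe,
# which is far slower than HBM. Peak figures are the commonly quoted ones and
# the mixing model is deliberately naive (no latency, no overlap).

HBM_GB_PER_S       = 512.0    # peak local VRAM bandwidth (first-gen HBM)
PCIE3_X16_GB_PER_S = 15.75    # peak host-to-device bandwidth over PCIe 3.0 x16

def effective_bandwidth(local_hit_fraction: float) -> float:
    """Naive average bandwidth when a fraction of accesses hit local HBM and
    the rest are fetched across PCIe: weight the per-byte transfer times."""
    miss = 1.0 - local_hit_fraction
    time_per_gb = local_hit_fraction / HBM_GB_PER_S + miss / PCIE3_X16_GB_PER_S
    return 1.0 / time_per_gb

for hit in (1.00, 0.99, 0.95, 0.90):
    print(f"{hit:.0%} of traffic in HBM -> ~{effective_bandwidth(hit):.0f} GB/s effective")
```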
Jun 17 2015, 04:04 PM
#11
Hmmm, everything I see is a card that is very much a gamble on DX12's CTM (close to metal) philosophy, which I feel is a natural progression of graphics card architecture, much like Mike Houston predicted a few years back when he was still at AMD.

But what is going to be interesting is their ability to pull it off, given that DX12 games are still in the pipeline and optimization always takes time. ATI was first to market with built-in tessellation in the R5xx series, but it didn't do jack for performance simply because devs needed another generation to catch up via optimization. My bet, though, is that because most game devs are quite in tune with optimizing for GCN due to AMD's omnipresence in the console market, AMD is betting that any performance gains will come fast and scale quickly, coupled with the increased memory efficiency gained from GCN 1.2. It's a bold gamble; not sure it'll work though. You guys tell me if you get any review samples!
Jun 17 2015, 04:25 PM
#12
QUOTE(cstkl1 @ Jun 17 2015, 04:10 PM) Agreed. Heck, go before that with DX10. Same story with the pathetic 2900. Sometimes it's frustrating, but Nvidia's philosophy of building the GPU for the now seems to be the better gamble.

ATI has always been pretty innovative with a lot of their architectural enhancements. They were first with AVIVO, where the graphics card took over video processing and started the trend, but for a year plus only one piece of software supported it; they were first with a non-linear pipeline architecture and a lot of other things both companies use as the norm these days. NVidia has always bided its time on these things, preferring market maturity. That's why Maxwell debuted in the GTX 750 Ti for more than a year before being scaled up. It's like how AMD foresaw the need for even more compute performance and wasted a significant amount of die space on it while NVidia cut it down in Maxwell. Compute performance is the way of the future if you ask me, but AMD is jumping the gun a bit early for the mass market.

QUOTE(cstkl1) None dude. Normally can get. This time I even asked a few others. Nobody has it. Even the hwlabs dude isn't saying anything. I think AMD is trying to control the initial reviews. Titan X and 980 Ti had so much variance. A lot of people chasing website hits, trying to be the first.

AMD has always been crap with samples, and if you gave them a bad review, you'd be magically bumped to the bottom of the sample list next time around. They sure knew how to make enemies back then. But that was like 8 years ago. Maybe their marketing dept is a little smarter now.
Jun 17 2015, 04:29 PM
#13
Jun 17 2015, 04:35 PM
#14
QUOTE(Unseen83 @ Jun 17 2015, 04:30 PM)

That remains to be seen.

QUOTE(Minecrafter @ Jun 17 2015, 04:32 PM)

It was true when I did it for money. Nowadays computers and gaming are just one of my side hobbies; I like discussing them more than I like reviewing them, and I prefer spending my money on cars and guns.
Jun 17 2015, 04:40 PM
#15
QUOTE(khelben @ Jun 17 2015, 04:36 PM) Yeap. But there's one big thing that nVidia brought to the market first: multiple cards (SLI, CrossFire). ATi's knee-jerk reaction was a bit funny, the hideous DVI cable connecting both cards on the outside

Naw brah, 3dfx brought that in with the Voodoo 2, so when they were bought over by NVidia, NV got access to all their expertise. But yeah, ATi's solution was ridiculous.

QUOTE Ah. I'm so not up-to-date. I was looking at the new cards and thought, "so, where do I connect my DVI port?"

I'm also not up to date; once in a while when something new comes out, I come out of my tech cave and find out that I'm generations behind in hardware. My last PC lasted me from 2008 to 2014, before that damn DFI mobo blew up.
Jun 18 2015, 07:35 AM
#16
QUOTE(Unseen83 @ Jun 18 2015, 06:11 AM) We're not blaming the shop, but that shop needs to be made an example of.. we electronics buyers shouldn't be taken advantage of. No problem paying 6% GST plus a 10-20% service charge for the shop/seller; I just have issues with a seller/shop/distributor charging an excessive 70-80% on top of GST... What is this, a Merc-Benz? With excessive charges? So please make a complaint, send an email: http://eaduan.kpdnkk.gov.my/eaduan/main.php?lang=2

It's not a controlled item, so they can set any price they want.
Jun 18 2015, 08:41 AM
#17
QUOTE(Unseen83 @ Jun 18 2015, 07:53 AM)

By the time that process is done, the R9 400 series will be out already haha! You're better off waiting 3 months for the price to normalize than the year-plus it'll take for the complaint to be reviewed.
Jun 19 2015, 02:21 PM
#18
QUOTE(cstkl1 @ Jun 19 2015, 01:42 PM) Time to choose. Epeen or efficiency. Part of my brain says go epeen. But the CPU combos atm are all boring, and so is Skylake; DDR4 is so immature. DP 1.3, why can't AMD put this in?

When it comes to buying graphics cards, I only have one rule:

1) If it can run the games you want at the speed you want, it's all good. Worst comes to worst, you can just sell it.
Topic Closed