 AMD Bulldozer & Bobcat

dma0991
post Oct 18 2011, 11:57 AM

QUOTE(ben_panced @ Oct 18 2011, 09:21 AM)
the problem with bd is
although the front end got like 4 instruction decoder, but it is shared by two cores, and the decoder can only service 1 core at a time, contributing to its slowness in ST..
furthermore, each core only has 2 integer ALU compared to 4 in intel cpus starting fro C2D.. so you get the point why this BD performance is very underwhelming..
couple it with a long instruction pipeline and a very high latency cache..
and you got your self a very slow CPU
*
AMD claims that the third integer ALU in Phenom II was rarely used, which is why it was removed in BD. How much that actually affects performance is hard to say, but from the benchmarks it is clear that something is lacking when BD loses to its own predecessor and to Intel. The long instruction pipeline is meant to let BD reach higher clock speeds at the cost of IPC; why AMD chose this route I wouldn't know, since Netburst already proved the approach doesn't pay off. AMD's L3 is a single 8MB chunk shared by all 8 cores, so high latency is to be expected. Intel's approach is four 2MB LLC slices tied together by a ring bus, so latency doesn't have to suffer even with less cache per core.

QUOTE(bai1101 @ Oct 18 2011, 09:32 AM)
Confuse with all the technical talk.

For the first time I see a product that create so many discuss.
*
That's what we're here for. rolleyes.gif
dma0991
post Oct 18 2011, 09:08 PM

QUOTE(fcuk90 @ Oct 18 2011, 09:07 PM)
saw the fx4100 with asrock 990fx at c3x today , fx4100 worth ?
*
How much is CEX selling that bundle for?
dma0991
post Oct 25 2011, 04:11 PM

QUOTE(Kizarh @ Oct 25 2011, 03:44 PM)
Cuz they don't do enough market survey and expect the Software Devs to program their software to run multithreaded(why bother change the code to multithreaded? when the CPU single Threaded performance is strong enough anyway? lolz), what a fool doh.gif
*
The market is heading towards multicore, and single-threaded performance alone no longer defines a CPU by current standards. That is not to say single-threaded performance doesn't matter; it definitely contributes to stronger multithreaded performance. My point is that a single core is no longer feasible. Imagine SB with the same die size at 32nm but built as one huge core instead of four: it would not perform as well as the current SB does in the majority of tests. There are diminishing returns for adding more cores, and there are also diminishing returns for increasing the complexity of a single core/architecture.
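To put a number on that diminishing-returns point, here's a quick sketch using Amdahl's law; the 0.9 parallel fraction is just an assumed figure for illustration, not a measurement of any CPU discussed here.

CODE
/* amdahl.c - diminishing returns from adding cores (illustrative only) */
#include <stdio.h>

/* Amdahl's law: speedup = 1 / ((1 - p) + p / n)
   p = fraction of the work that can run in parallel, n = core count */
static double amdahl(double p, int n)
{
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void)
{
    const double p = 0.9;               /* assumed parallel fraction */
    int cores[] = { 1, 2, 4, 8, 16 };
    for (int i = 0; i < 5; i++)
        printf("%2d cores -> %.2fx speedup\n", cores[i], amdahl(p, cores[i]));
    return 0;                           /* 8 cores ~4.7x, 16 cores only ~6.4x */
}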

Like I said before, BD is a server CPU, where multithreaded applications are custom made to take full advantage of the hardware, but it will definitely not be good for normal consumers. Even as a server CPU it falls short, though, as I think the 16C Interlagos will only barely beat, or merely match, the 12C Magny Cours.

QUOTE(zamx @ Oct 25 2011, 03:50 PM)
so its better to choose phenom II X6 1090 than a amd fx 6100?is it SSE 4.1,SSE 4.2 in amd fx more benefit other phenom II?i'm in dilemma to choose...coz i have bought asus m5a97...so sad that amd fx not perform well
*
The Phenom II X6 will be a better choice, no doubt. You could probably get a decent overclock as well with a proper heatsink. Don't worry too much about the new instruction sets; current programs won't benefit from them unless they are recompiled.
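To see why a recompile matters, here's a rough sketch: the SSE4.1 path below only ends up in the binary when the code is rebuilt with the right flag (e.g. gcc -msse4.1); an old binary keeps the plain fallback. It's a generic illustration, not taken from any real program.

CODE
/* recompile.c - why old binaries don't use new instruction sets (sketch) */
#include <stdio.h>
#ifdef __SSE4_1__                 /* defined only when built with -msse4.1 */
#include <smmintrin.h>
#endif

int round_to_int(float x)
{
#ifdef __SSE4_1__
    /* SSE4.1 path: uses the ROUNDSS instruction via intrinsics */
    __m128 v = _mm_set_ss(x);
    v = _mm_round_ss(v, v, _MM_FROUND_TO_NEAREST_INT | _MM_FROUND_NO_EXC);
    return (int)_mm_cvtss_f32(v);
#else
    /* fallback compiled into binaries built before/without SSE4.1 */
    return (int)(x + (x >= 0 ? 0.5f : -0.5f));
#endif
}

int main(void)
{
    printf("round(2.6) = %d\n", round_to_int(2.6f));
    return 0;
}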
dma0991
post Oct 25 2011, 05:17 PM

QUOTE(Kizarh @ Oct 25 2011, 05:02 PM)
Too many thread will actually make the software slow too, to me 20(Intel just need 10 cores for this to work lolz) threads good enough for most multithreaded applications including servers, anything more than that is not necessary unless you're dealing with super Floating Point stuffs which better leave it to FPGA and GPUs.
*
As long as the software can make full use of the parallelism, you can have as many cores as you want; cluster computing and HPC do exactly that, with thousands of similar cores working in tandem. Most clusters now use GPUs for parallel compute, but the K computer has no GPUs and relies entirely on CPU power. The main reason GPUs are used at all is that they reach a given GFLOPS figure with far less power than CPUs alone. The K computer is currently at the top of the list, but its power consumption is quite bad compared to CPU + GPU clusters.
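To make the performance-per-watt argument concrete, here's a tiny sketch of the metric itself; the two systems and their numbers are hypothetical placeholders, not real figures for the K computer or any actual cluster.

CODE
/* perf_per_watt.c - compare two systems on GFLOPS per watt (hypothetical numbers) */
#include <stdio.h>

static double gflops_per_watt(double gflops, double watts)
{
    return gflops / watts;
}

int main(void)
{
    /* placeholder figures, not real measurements */
    double cpu_only_gflops = 8.0e6, cpu_only_watts = 1.0e7;   /* CPU-only cluster  */
    double cpu_gpu_gflops  = 2.0e6, cpu_gpu_watts  = 1.5e6;   /* CPU + GPU cluster */

    printf("CPU-only : %.2f GFLOPS/W\n",
           gflops_per_watt(cpu_only_gflops, cpu_only_watts)); /* 0.80 */
    printf("CPU + GPU: %.2f GFLOPS/W\n",
           gflops_per_watt(cpu_gpu_gflops, cpu_gpu_watts));   /* 1.33 */
    return 0;
}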
dma0991
post Oct 25 2011, 10:20 PM

QUOTE(Kizarh @ Oct 25 2011, 05:33 PM)
And most of these HPC computers are built for Floating Point(simple) stuffs, and like I said b4, you still have to use CPU to do super complex job like single AI and simulations with too many thread will slow them lolz


Added on October 25, 2011, 5:37 pmFor eample, in a Corperation(company), you don't wanna hire labourer(they obviusly can do simpler job faster cuz they have bigger muscle but they don't train to do something than require some thinking) to do marketing that why we have trained marketing experts lolz
*
Simulation does benefit from multithreading; it depends more on the software side whether it can make use of the extra threads. Depending on the job, most employers wouldn't pay 4x to a single person when they can pay the same total to four people who finish the work much faster, unless the job specifically requires coming up with new ideas. In real work there is more need for laborers than for creative workers; that is just the normal hierarchy, and somebody has to execute what the creative worker thought up.

QUOTE(xcen @ Oct 25 2011, 10:00 PM)
Currently there are no thin and light laptops with Llano inside, don't know if that will be the case with Trinity. I think AMD should really capitalize on the thin and light market, not just the netbook market.

A lot of laptops nowadays are going towards lighter and thinner. Just look at XPS 14z, Series 7 etc.
*
Laptops are getting thinner and lighter, but the Ultrabook is still somewhat of a gimmick and its success is unknown since it is priced quite high. If I were looking for an ultra-light notebook like that, I would rather get the MacBook Air, which comes to about the same price anyway. Also, the thin and light form factor is not what makes tablets and smartphones popular; it is the software capabilities and the way you interact with the device, which is totally different from what a traditional notebook provides.
dma0991
post Oct 26 2011, 02:24 AM

Analyzing Bulldozer: Why AMD’s chip is so disappointing
QUOTE
Statistically, we’d need to push the FX-8150 to around 5.5GHz to match Sandy Bridge’s 3.4GHz performance in this test. (Single-threaded performance)
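The arithmetic behind a claim like that is simple: if per-clock throughput is lower, clock speed has to rise by the same ratio to compensate. A quick sketch; the relative IPC figure is just back-calculated from the quoted numbers, not measured by me.

CODE
/* match_clock.c - clock needed to match a rival at lower IPC (sketch) */
#include <stdio.h>

int main(void)
{
    double rival_clock_ghz = 3.4;   /* Sandy Bridge clock in the quoted test       */
    double relative_ipc    = 0.62;  /* assumed: BD single-thread IPC vs SB, ~0.62  */

    /* required_clock * relative_ipc = rival_clock * 1.0 */
    double required_clock = rival_clock_ghz / relative_ipc;
    printf("Clock needed to match: %.1f GHz\n", required_clock);  /* ~5.5 GHz */
    return 0;
}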


dma0991
post Oct 27 2011, 07:58 PM

QUOTE(Boldnut @ Oct 27 2011, 09:52 AM)
Well as long as laptop doesnt have battery life of a phone/or a day, then it isnt revolutionary. it is sad that battery technology arent progressing much.  doh.gif

I am still waiting for a light weight laptop that can last for a full day on a single charge without me carrying additional battery to get that. Even the so call 8-10hours battery life netbook these days claimed by manufacturer are base on the dimmest LCD brightness + everything @ complete idle. marketing gimmick ftw.  doh.gif

When u take a those and use as ur everyday task like standard brightness, web browsing/word processing, batterylife goes back down to 5hours.  doh.gif
*
Actually phones don't fare much better than laptops, or at least that is true for smartphones rather than simple feature phones built for long standby times. Phones suffer a similar problem: performance keeps increasing but battery capacity doesn't. From what I know the iPhone 4 has longer battery life than the iPhone 4S because of the addition of the dual-core A5; more performance = more juice. Battery technology is progressing, but very slowly, with carbon nanotubes being explored to increase capacity, and what we have now is only a proof of concept, not a finished, viable product. Manufacturers also won't build it unless it is cheap enough to produce for the mass market. In other words we might not see much improvement for at least another decade.

Laptops did improve quite a lot over the years in terms of battery life. Take my 65nm C2D laptop for example: it lasts at best 2 hours, or 1.5 hours now that the battery has degraded a little. Newer laptops run far more efficiently and easily average 5-8 hours for simple tasks. You may not have noticed the change because you're used to comparisons between up-to-date laptops, but with something like mine as the baseline the improvement in battery life over the years is obvious, and it will keep getting better with IB and with Intel pushing for more efficient displays like LG's panel self refresh, which should bring a small additional gain.

QUOTE(Thrust @ Oct 27 2011, 06:33 PM)
DamN!! I accidently bent 3 pins on my Phenom II 1055T. Now I can't even OC sad.gif Normal boot up works fine but whenever I increase the core clock as well as DRAM frequency, the system won't even boot up.. sad.gif

My next system will be Intel already. Damn AMD should get rid of those useless pin... Gggrrr...
*
AMD should have made FMx an LGA socket already. They have Socket C32 in their server range, so they should make LGA sockets for the desktop as well.
dma0991
post Oct 27 2011, 10:43 PM

QUOTE(Boldnut @ Oct 27 2011, 08:43 PM)
6hours tops for texting thats for netbook. about 4-5hours for web surfing that is non-youtube.

The 8 hours are claimed by manufacturer is NOT realistic because the LCD brightness are dimmed so low that you would normally never use that kind of brightness, then it is assume that the laptop are left idle + with most other devices set disabled such as wifi. Check this article, you'll get the idea why ur C2D only last 2 hours top. When it is selling that time manufacturer claim to last 3-4hours. I have my Pentium M where manufacturer claim to last 5hours, when it is barely over 2hours when I use for web surfing with flash only.

http://www.xbitlabs.com/articles/mobile/di...om_8.html#sect0

For laptop to reach complete mobility we need to reach at least a day under normal brightness, web surfing with Wi-Fi. This assuming we human go to sleep and left our laptop charging for tomorrow usage.

I sometimes wonder, AMD bobcat may sound a new approach but despite its low TDP, it seems its power throttling % are not as aggressive as Sandy bridge ones.
*
When I bought my C2D, laptop manufacturers did not market laptops on how many hours they would last; if I recall correctly, the selling point back then was weight, and it is only the current generation where the emphasis has shifted to battery life. It is very unlikely that you would ever hit the manufacturer's 8 hours, because their tests are done under controlled conditions and don't account for parts degrading over time. It is normal for a marketing team to exaggerate something ordinary into something beyond what the product can achieve; it is similar to what happened with Bulldozer, where the marketing was too aggressive and made the product sound better than it was, and when the actual product came out it fell flat.

My C2D can only push out 2 hours, and there was no marketing claiming it would last 5. Anything that has to do with lighting up a display eats battery, no doubt; that is why an eReader with an eInk display easily outlasts a tablet with a conventional backlit panel. A 5-hour average is quite good, I'd say, and more would definitely be better, but higher capacity means bigger batteries and therefore bigger laptops, which is not what Intel wants with the Ultrabook. An Ultrabook's battery life will probably end up about the same as current notebooks, because despite the more efficient processor, the battery has to shrink to fit the slim form factor.
dma0991
post Nov 21 2011, 10:14 PM

QUOTE(Boldnut @ Nov 21 2011, 11:57 AM)
After listen some discussion online among my friend's chatting, surprisingly AMD 8 core @ 3.6Ghz actually look "better" in common customer eyes because 8 core @ 3.6GHz SELL.  lol? doh.gif

Are we back into P4 era again?
*
It is pretty normal, actually, for the general public who are less aware of what's going on. They don't rely on review sites and aren't too bothered about making an informed decision when buying a PC; they just apply the more/bigger-is-better rule, which is almost always true, but not in the case of BD. It is the same with those here who buy a Core i7 2600K/2700K for a gaming/office/multimedia rig. Compared to the Core i5 2500K, the Core i7 2600K has 4 more threads, and applying the more-is-better rule they conclude that the Core i7 2600K is superior. It is without a doubt better if they can utilize 100% of it, but they're better off with a lower-end model that suits their purpose and saves some money.

We'll never go back to the P4 era because Intel controls the direction of the x86 CPU market. BD is actually quite forward-thinking in terms of feature sets, some of which even SB does not have, but if Intel's future CPUs skip a particular feature and opt for something else, AMD is at the mercy of the larger majority of the market.
dma0991
post Nov 23 2011, 03:12 AM

AMD cancels 28nm APUs, starts from scratch at TSMC

Not good news for Bobcat, which is still quite successful compared to other AMD products. They're cancelling the 28nm GloFo parts in favor of TSMC's 28nm, which is what they should have done from the beginning, since the 40nm Bobcat was originally manufactured by TSMC. A lot of BD's problems with poor performance, low yields and high power consumption can be traced back to how well GloFo executes; if BD had been made at TSMC instead, for example, the results could have been quite different. If AMD had continued with GloFo for a 28nm Krishna/Wichita, Bobcat could have suffered the same fate as BD. Luckily all of AMD's 28nm GPUs will still be manufactured on TSMC's 28nm process. Whether AMD can improve performance now rests largely on GloFo improving their process node.
dma0991
post Nov 30 2011, 09:18 PM

QUOTE(tech3910 @ Nov 30 2011, 08:50 PM)
dun a fanboy.

1) bulldozer is a fail architecture. it is comparable to nvidia 1st gen fermi, 400 series.

2) bulldozer even loses badly out in server market.

3) it's not a little down, it's a lot down on gaming front.
*
1. It is bad right now, but Fermi got noticeably better over time from minor tweaks that gave substantial improvements. There is a possibility BD can do the same.
2. Unless there are new statistics for the server market, that is a very bold statement you're making. It remains unproven until shown otherwise.
3. I already clarified, even before the BD launch, that it is not meant for a gaming rig. I said BD will do much better when programs can utilize all 8 threads (see the sketch below).
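For a picture of what "utilizing all 8 threads" means, here's a minimal pthreads sketch that splits one embarrassingly parallel job across 8 workers; it's a generic illustration, not code from any benchmark in this thread.

CODE
/* eight_threads.c - splitting work across 8 threads (build with: gcc ... -pthread) */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 8
#define N        80000000UL

typedef struct { unsigned long start, end; double partial; } job_t;

static void *worker(void *arg)
{
    job_t *job = (job_t *)arg;
    double sum = 0.0;
    for (unsigned long i = job->start; i < job->end; i++)
        sum += 1.0 / (double)(i + 1);          /* arbitrary per-element work */
    job->partial = sum;
    return NULL;
}

int main(void)
{
    pthread_t threads[NTHREADS];
    job_t jobs[NTHREADS];
    unsigned long chunk = N / NTHREADS;

    for (int t = 0; t < NTHREADS; t++) {
        jobs[t].start = t * chunk;
        jobs[t].end   = (t == NTHREADS - 1) ? N : (t + 1) * chunk;
        pthread_create(&threads[t], NULL, worker, &jobs[t]);
    }

    double total = 0.0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(threads[t], NULL);
        total += jobs[t].partial;
    }
    printf("harmonic sum of %lu terms = %f\n", N, total);
    return 0;
}

A chip like the FX-8150 only looks good when the workload splits cleanly like this; a game with one or two heavy threads leaves most of the modules idle.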
dma0991
post Nov 30 2011, 09:28 PM

QUOTE(ALeUNe @ Nov 30 2011, 09:24 PM)
Server market?
Based on these articles, 60% of servers are x86, which are either Intel or AMD.
Intel has more than 90% of server market share.
http://www.fool.com/investing/general/2011...t-stronger.aspx
http://www.itproportal.com/2010/08/19/inte...s-amds-expense/
*
That is based on old statistics from when only Magny Cours was in the server market; it does not reflect whatever impact Interlagos has. I'd want newer statistics before drawing conclusions.
dma0991
post Nov 30 2011, 11:03 PM

QUOTE(tech3910 @ Nov 30 2011, 10:44 PM)
even if the statistic is slightly old, it is impossible that say AMD grab 10% market share from intel.

hav u actually seen bulldozer server chip review?
poor on the performance per watt.
as u we all know, power usage is big deal in server market.

saying BD is slightly behind in gaming is just plain ignorance & denial.
coz fact is that BD is far off in gaming.

even in heavily threaded app, BD only win SOMETIMES.
& that SOMETIMES is not by much also. fact.
*
It would be very unexpected for AMD to make a huge comeback in such a short time. I suppose AMD might gain some share in the niches they are aiming for (virtualization and HPC). The Interlagos review done by AT is reasonably accurate to the extent that it standardizes tests across multiple platforms; in real-world deployments where core density matters, it might prove better. Even in the Facebook Open Compute Project, not all servers are made equal: some tasks go to Intel and some go to AMD, for example as memcache nodes.

It is undeniable that they have poor performance/watt, which is even more unforgiving in the server space, but we'll see what the next iteration brings. For now it is up to AMD to sort things out with GloFo, because the root of the power consumption problem lies with GloFo's 32nm node. I never used the words 'slightly' or 'far off' in my statements about BD and games. I know for a fact that BD was not meant for gaming; one must first understand that current games can't get the most out of 8 cores, so 4 strong cores are better suited for gaming. Never have I said that BD will pwn SB in games.

BD being 2 billion transistors is one thing, but a single BD module is still about the same size as a single SB core. Most of the transistor budget goes to the L2 & L3 cache, which is much denser than regular logic transistors, so AMD is not at a die-size disadvantage yet. If AMD gets similar or worse results than a Core i7 2600K in some multithreaded tests, that is your answer. As to why AMD went with more, slower cache rather than less, faster cache, that is their design choice, and in this case it backfired.
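A rough sanity check on that transistor-budget point: a standard SRAM cell is 6 transistors per bit, so the cache data arrays alone account for a big slice of the total. This ignores tags, ECC and peripheral logic, so treat it as a ballpark only.

CODE
/* cache_transistors.c - rough 6T-SRAM transistor count for BD's caches (approximation) */
#include <stdio.h>

int main(void)
{
    double mib        = 1024.0 * 1024.0;   /* bytes per MiB          */
    double l2_bytes   = 4 * 2 * mib;       /* 4 modules x 2MB L2     */
    double l3_bytes   = 8 * mib;           /* 8MB shared L3          */
    double bits       = (l2_bytes + l3_bytes) * 8.0;
    double sram_trans = bits * 6.0;        /* 6 transistors per cell */

    printf("~%.2f billion transistors just for L2+L3 data cells\n",
           sram_trans / 1e9);              /* ~0.8 billion           */
    return 0;
}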

QUOTE(ALeUNe @ Nov 30 2011, 10:56 PM)
Not to forget, the Xeon processors (that Interlagos head-to-head with) are the Nehalem based processors.
Intel Sandy Bridge Xeon was not used.
*
Intel currently has Westmere in their server line, which is Nehalem die-shrunk to 32nm. So with Westmere and SB-EP on the same manufacturing process we will probably see only a small difference at best. Think of it like X58 vs X79, where X79 definitely brings some improvement, but nothing huge.

This post has been edited by dma0991: Nov 30 2011, 11:12 PM
dma0991
post Nov 30 2011, 11:23 PM

QUOTE(tech3910 @ Nov 30 2011, 11:13 PM)
architecture design start years bck & impossible to to som major chances not wasting few more years.
now, AMD is following it's GPU division strategy. thats y they r going to focus on mobile market a lot more.

unless BD is comparable to intel in clock for clock performance, it will nvr win.
*
We'll never know how things might turn out; some minor architectural tweaks could still improve performance. AMD was betting on Fusion, with the CPU cores handling integer workloads while the on-die GPU handles floating point. It is definitely a gamble, and I'm pretty sure it won't end well, because a dedicated GPU array with its own TDP budget will always have better floating point performance. The mobile market strategy is nothing surprising; it is well known that most of the money now is in servers and mobile. Even if they were to give up on the desktop completely, they could focus on the server and mobile markets and, once they have improved, re-enter the desktop market.

If we measure IPC relative to the size of the core (an imprecise measure), BD will definitely be behind in IPC. That is why many questioned whether IPC would go up or down; with enough logic most concluded it would go down, yet AMD reps said IPC would increase and claimed 50% more throughput from a 33% increase in core count. That is obviously marketing speak, and even if BD really is bad, it's their job to polish a turd. AMD is betting on more cores instead of more IPC, so single-threaded performance suffers in exchange for some multithreaded performance.
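Unpacking that marketing claim with some simple arithmetic, assuming "throughput" just means per-core throughput times core count and that the comparison is against the 6-core predecessor the 33% figure implies:

CODE
/* marketing_math.c - what "+50% throughput from +33% cores" implies per core */
#include <stdio.h>

int main(void)
{
    double throughput_gain = 1.50;        /* claimed: 8-core BD vs 6-core predecessor */
    double core_count_gain = 8.0 / 6.0;   /* +33% cores */

    /* implied per-core throughput change (could come from clock, IPC, or both) */
    double per_core = throughput_gain / core_count_gain;
    printf("implied per-core gain: %.1f%%\n", (per_core - 1.0) * 100.0);  /* ~12.5 */
    return 0;
}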

QUOTE(DrBlueBox @ Nov 30 2011, 11:18 PM)
And thus WHY AMD says, don't make it AMD vs Intel anymore. Both have different priorities now. As I said earlier, AMD's choice of going towards mobile might be the best choice in the long term as more and more people choose to get smaller, more mobile devices that can be brought everywhere
*
I'd have to search for the article again, but it showed that sales of mobile processors are much more profitable than desktop processors, so it is not the end of the world if AMD abandons the desktop market altogether. As I already mentioned, though, AMD is shifting its focus towards mobile, not quitting the x86 market. I never really liked Softpedia; no credibility. link

This post has been edited by dma0991: Nov 30 2011, 11:27 PM
dma0991
post Dec 2 2011, 07:03 PM

AMD Revises Bulldozer Transistor Count: 1.2B, not 2B

-----------------------------------------------------------------------------------------------------------------------------------------------------------
QUOTE(tech3910 @ Dec 2 2011, 12:21 PM)
it's ironic that how AMD graphic division mock nvidia power consumption while their CPU division is f***ing power hungry.
*
BD was made on GloFo's 32nm while AMD's GPUs are made on TSMC's 40nm and 28nm, so it is quite possible GloFo is still not very good at what they do. Bobcat was also made on TSMC's 40nm and it is not power hungry.

dma0991
post Dec 2 2011, 07:58 PM

QUOTE(chenwah88 @ Dec 2 2011, 07:54 PM)
tsmc is the "father" of foundry (taiwan)
according from my previous intern, a engineer told me quality of tsmc is better than such as other fab (not intel, arm, and nvidia)
*
It depends, but in a direct comparison between TSMC and GF, I'm pretty sure TSMC is much better. That is also why Krishna and Wichita, the replacements for Brazos, were scrapped on GF's 28nm in favor of TSMC's 28nm. Intel's fabs are probably among the best at what they do.
dma0991
post Jun 5 2012, 04:11 PM

QUOTE(+Newbie+ @ Jun 5 2012, 02:59 PM)
After reading that Anandtech article, do you think the weaknesses highlighted in the article will be fixed by AMD? It's definitely not in Piledriver. Some, like increasing clockspeed might happen, but others, like implementing a micro op cache I think will require too much re-engineering, right?

I only see AMD planning to continue their focus on parallel processing and all that heterogenous computing with shared memory addresses between cpu and gpu and all that funky stuff.
*
Let's just say BD's problems run very deep and a simple tweak isn't going to fix them. It essentially needs to be scrapped, and AMD needs to rethink its market strategy. As it stands, BD is aimed at the server space, where the many-core approach has real benefits because there are many concurrent processes. It doesn't work well for the mainstream because most consumer-level programs prefer fewer threads and more single-threaded strength.

What AMD did was build the platform first and hope that others would adopt it, when really it should have been the other way around.
dma0991
post Jun 17 2012, 03:09 PM

QUOTE(djlah @ Jun 17 2012, 02:55 PM)
the review show A10 better than FX-8150, if overall are true, meant confirm beat intel i5 quad core. then I can jump ship once both A10 and FM2 A85 mb available here.
*
Nope. Piledriver cores (A10) are better than Bulldozer cores (FX-8150), but that only means they roughly catch up to Stars-level performance per core, and Stars isn't exactly competitive with Intel's Sandy Bridge or Ivy Bridge either. Simply put, in terms of single-threaded performance:

Sandy Bridge/Ivy Bridge > Stars >= Piledriver > Bulldozer
dma0991
post Oct 28 2012, 02:46 AM

QUOTE(tanalvis @ Oct 28 2012, 01:53 AM)
usage just light/mid gaming(low/med settings and gonna try all the new games 4-5 years) but want try 1080 resolution perhaps
*
The better option would be the Core i3 3220, but since you don't upgrade very often from what I can tell, it's better to get a quad core. It doesn't have to be the most expensive Core i5 3570K; the Core i5 3470 should be sufficient. Of course this requires about a RM200 bump in the budget, but in the long term it's well worth it.
