
 AMD Bulldozer & Bobcat

dma0991
post Jul 28 2011, 04:01 PM

nyoron~
*******
Senior Member
3,333 posts

Joined: Dec 2009


QUOTE(ALeUNe @ Jul 28 2011, 12:16 PM)
IMO, it is not only about the utilization of cores.
It is about the efficiency of cores too.

More cores but inefficient implementation would give you bad performance nevertheless.
Highly efficient 4-cores might run faster than inefficient 8-cores.
That's my point.

By the way, APU = CPU + GPU.
Fusion optimized?
You mean AMD CPU + Radeon GPU optimized software?
Software developers have been doing it. I think it's nothing new.
i.e. CUDA vs AVIVO, it's already there.
*
I don't know how you would define AMD's cores as inefficient compared to Intel's; there has never been a direct efficiency comparison. A single BD core does not take up as much die space as a single SB core. If you compare die space, which is a rough proxy for transistor count, a single SB core is about as big as a whole BD module (2 cores) minus the L3 cache. So I can safely say that if a process can only use a single thread, SB is the clear winner, but if it can use two, BD has the advantage.

Of course, some might claim it is just a matter of splitting transistors to make two cores, but it is never that simple. There are definite downsides to having more cores, but I'd rather get a SB with 4 cores than spend the same transistor budget on one huge SB core. There are diminishing returns in making a processor more complex. Designing a smaller core and copy-pasting it 4 times is also much more economical than designing a single huge monolithic core.

What AMD wants with BD is something like Intel's HT, but with HT you're looking at a <20% gain in the best-case scenario. There is no replacement for physical cores, and AMD's approach is to have two cores that each perform about 80% as well as a full core and have them work in parallel. So in a two-threaded workload, a module should be better than a single SB core with HT. I won't say AMD's approach is good or bad, but there will be markets where parallel workloads are important, mostly servers. The statement below is not written by me, but it should give you a clear idea of what I mean.
» Click to show Spoiler - click again to hide... «
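The 80%-per-core versus ~20%-from-HT argument above can be put into rough numbers. This is a back-of-the-envelope sketch; all figures are illustrative assumptions taken from the post, not measurements:

```python
# Back-of-the-envelope throughput for the module-vs-HT argument.
# All numbers are illustrative assumptions, not benchmark results.

FULL_CORE = 1.0            # throughput of one full core running a single thread
SMT_GAIN = 0.20            # best-case ~20% gain from Hyper-Threading
MODULE_CORE_FACTOR = 0.80  # each core in a BD module at ~80% of a full core

# Two threads on one SB core with HT vs. two threads on one BD module:
ht_core = FULL_CORE * (1 + SMT_GAIN)  # ~1.2x a single core
bd_module = 2 * MODULE_CORE_FACTOR    # ~1.6x a single core

print(f"HT core: {ht_core:.2f}, BD module: {bd_module:.2f}")
```

On these assumptions a module comes out ahead with two threads (1.6 vs 1.2), while a lone thread still favors the bigger SB core (1.0 vs 0.8), which is exactly the tradeoff described above.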

Not Fusion optimized. Avivo deals only with video, while with OpenCL, or what AMD calls AMD APP, you can use the GPU for just about any parallel workload; it doesn't have to be video only.


dma0991
post Jul 28 2011, 05:00 PM



QUOTE
AMD used to charge USD$1000 for the FX-60 years ago. That is when they were highly profitable. wink.gif
They have fallen behind quite a bit, but we can't compare with what most consumers were once willing to pay, since most have become quite thrifty after the recession.

QUOTE
Based on the leaks which I would consider viable, I don't think so even in heavily threaded applications. I'll have to reserve judgement until some real benchmarks and reviews are published. Many things could have happen after that (e.g. bug fixes, clock speed fixes, etc)
I don't have much info about compatibility of IB with the current Panther Point, but most sources suggest there is compatibility. I'll wait till Intel gives the green light.

QUOTE
Not true. Some compilers and programming APIs already support multi-core (including ICC). Intel is also trying to introduce MIC (Many Integrated Cores) to simplify porting of applications (easily by adding a few lines of pragmas and directives). However if you try to port applications to OpenCL or CUDA, that would require a major re-write. That is why the number of GPGPU applications for normal tasks are so few, while you will find most of them in specific HPC applications (mostly in CUDA). 
The C language is not designed for multithreading, so a rewrite in OpenCL is definitely necessary. Such applications are few now, but I'm sure they will become more popular as the platform gains momentum. I'll see how C++0x deals with parallelism.

QUOTE
Being highly parallel intensive computation and multithreaded is not the same thing. Similar to "C"? Not really, more of an "API". Have you seen how the codes look like?  rclxub.gif tongue.gif
I wish I had the extra time to read through Nvidia's 61 pages and AMD's 142 pages of OpenCL programming guides.

QUOTE
As mentioned before, porting applications to GPGPU usually requires major re-writes, re-coding and re-compiling. That is why there are so few of them. Check True Fusion: AMD A8-3800 APU Review, page 20: GPGPU Applications (seems to be the only review that tests the very few GPGPU applications around).
Give it some time. AMD is a small player, and when Intel adopts OpenCL, we'll see a lot of OpenCL programs. In fact, Intel already has its own OpenCL SDK.

QUOTE
Not all programs can be GPGPU accelerated, because GPGPU hardware still has many limitations since they work on fixed sets of data but not with dynamic data (those with lots of data dependency for example, such as ray-tracing which is still until today best done with CPUs).  hmm.gif
I agree that not everything can be GPGPU accelerated, but some workloads can be done by the GPU. Scaling of CPU performance with the number of cores is pretty bad, especially if the code is not optimized for multicore.

QUOTE
CUDA is already highly popular in HPC, you can see that from the HPC hardware (majority HPC machines use Tesla, including the latest Cray supercomputer). Unfortunately OpenCL is still not that popular (probably due to having some platform specific APIs for different GPGPU architecture from each manufacturer, AMD have their own coding structure while NVIDIA have also their own coding when it comes to optimizing for fastest compute tasks). Then there's also DirectCompute, another recent competing API for GPGPU from Microsoft. Thus often programmers usually tend to stick to one already familiar API (such as CUDA). It also does seem NVIDIA Tesla hardware is more popular because it perform specific tasks better than AMD Stream (e.g. Folding@Home). hmm.gif
Nvidia is the current leader for such standards, no doubt about that. AMD is just getting started with the HD 7000 series, whose iteration of their GPU architecture is more compute-oriented.

QUOTE
OBR the joker? IMHO take his results with lots of salt or ignore them altogether (after the stunts he pulled). doh.gif
laugh.gif

This post has been edited by dma0991: Jul 28 2011, 05:01 PM
dma0991
post Jul 28 2011, 06:40 PM



QUOTE(lex @ Jul 28 2011, 05:49 PM)
Like I've mentioned before, some compilers are already optimized for multi-threaded (even simplified). Watch SGI Picks the Intel MIC Swim Lane for Exascale by 2018 from 5:32 onwards (this is ICC). wink.gif

A quick look at OpenCL on Wikipedia, you can see what I meant. wink.gif

Currently, Intel's implementation will be based more on AVX and future hardware (possibly with LRBNi extensions). icon_idea.gif

Scaling of CPU performance with increasing core count will depend on the hardware itself. For example, a 2-socket system scales better than a 4-socket one (this is well known, which is why most supercomputers are built from stacks of 2-socket systems). Then there's MIC, which scales very well because of its tight integration of simple x86 cores (no need for QPI or HyperTransport links).

Finding AMD Stream in HPC field is very rare as it is mostly dominated by NVIDIA Tesla. IMHO, Intel's MIC will be the next big player in this field. icon_rolleyes.gif
*
I won't know for certain how well Intel MIC will perform in the real world yet, but from my understanding, if a portion of the code is not parallel enough, it will be subject to Amdahl's Law. If the code is not as parallel as the hardware, performance stops scaling beyond a certain number of cores and there is no gain afterwards. I am not sure how parallel Intel's implementation with MIC is, but I think a programming model made for parallelism from the ground up should benefit the most. This is my opinion only, though, and might not reflect how well MIC will actually perform.
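A minimal sketch of Amdahl's Law makes the point concrete. The 90% parallel fraction below is an arbitrary example, not a MIC figure:

```python
def amdahl_speedup(parallel_fraction: float, n_cores: int) -> float:
    """Best-case speedup on n_cores when only a fraction of the code is parallel."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# Even with 90% parallel code, the serial 10% caps the speedup at 10x,
# so piling on more cores quickly stops paying off:
for n in (4, 16, 64, 1024):
    print(f"{n:5d} cores -> {amdahl_speedup(0.90, n):.2f}x")
```

Even at 1024 cores the speedup stays below 10x, which is why a many-core part only shines when the code itself is almost entirely parallel.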

AVX lol, another 500+800 pages from Intel that I have not read. AVX is also implemented in BD. I do not know much about Larrabee, but from what others have said, it is not all that impressive. Like I said, market dominance can change with time, and since AMD is heading in that direction, Nvidia is going to have a bit more competition than they used to. Intel MIC, maybe.
dma0991
post Aug 15 2011, 09:16 PM



QUOTE(AlamakLor @ Aug 15 2011, 08:43 PM)
You know what AMD really needs to do? STFU with the news, rams, and etc and get the CPUs out the door ASAP.  doh.gif AMD is almost like Najib, I'm getting sick of it TBH.
*
September 19th. Jot that down on your calendar, or you could pre-order now and be the first to get one when it comes out.
dma0991
post Sep 4 2011, 01:44 AM



AMD Bulldozer Enhanced (FX-Komodo) Processors Will Work on AM3+
QUOTE
user posted image

dma0991
post Sep 4 2011, 02:00 PM



QUOTE(adie82 @ Sep 4 2011, 12:13 PM)
or is it FM2 backward compatible with FM1?
*
That should be the case if FM2 has the same pin count and pin layout as FM1. If true, there should be backwards compatibility, and I don't think AMD would change sockets and break backwards compatibility with every processor generation they produce.
dma0991
post Sep 5 2011, 12:58 AM



QUOTE(waie @ Sep 5 2011, 12:32 AM)
I'm still wondering whether I should upgrade my system. If upgrade, I have two option ;

1. Sell off my mobo, ram and proc and buy SandyBridge after pricedrop when BullDozer launches
2. Change my motherboard now to am3+, then after bulldozer out, I'll change my processor
*
1. Not very likely that SB will see a price drop, although one is speculated. Intel could easily just sell a refresh with higher stock clock speeds, and BD is not going to be for every user anyway, so Intel is under no pressure to cut prices.
2. Stick with full-sized AM3+ boards that have the 970 or 990 chipsets. From the CPU support lists I have seen for Gigabyte's range, lower-end boards will only support 95W TDP Bulldozers, while higher-tier boards will support them all. The same could apply to other brands as well.
dma0991
post Sep 5 2011, 01:16 AM



QUOTE(waie @ Sep 5 2011, 01:06 AM)
1. i do not know why I'm eager to change to intel but yeah, if the price will drop, it'll be good already. like i7 2600k.. now few sellers selling it at sub rm800 right?

2. So you adviced me to buy higher end mobo/a good mobo if i want to but the mobo now?

btw, when is bulldozer predicted to launch? next week?
*
1. Supposedly there are rumors of a SB refresh to counter BD at launch, but it is still only a rumor. SB is the next best thing to BD, I think, as there are rumors of Ivy Bridge being delayed as well. Pricing depends, but it should be around RM900 or so.
2. Not necessarily high end. Some boards can't support a 125W TDP processor, so just double-check the CPU support list of the board you're buying if you intend to get the top-end FX-8150.

I have no idea when, but they should release it soon. Rumored to be 19th September, but that is starting to sound less and less likely. We'll see what comes by then.


dma0991
post Sep 7 2011, 10:39 AM



AMD Reschedules Launch of FX-Series Chips to October, Changes Launch Lineup
QUOTE
user posted image

It is mid October or Q4 2011 this time, compared to 19th September. The FX-4170 also does not look promising: a chip with a 125W TDP at 4GHz+ might indicate there is not much overclocking headroom compared to the Core i5 2500K.

user posted image
dma0991
post Sep 8 2011, 07:34 PM



The Start of a New Era
QUOTE
Interlagos is a server part, there is no desktop variant. The client version of Bulldozer is Zambezi and it will launch in Q4.

It is confirmed that nothing is coming this 19th September. Q4 means October onwards.
dma0991
post Sep 12 2011, 08:17 PM



Sadly, BD might not be the holy grail for the ultimate gaming rig everyone is looking for. BD is more optimized for server-like workloads; in a server, VMs and heavily threaded applications scale very well with core count, but the regular consumer's PC does not. Strong single-threaded performance for BD is out of the question, and higher clock speeds don't mean much when its IPC is expected to be much lower than SB's and much closer to Nehalem's.

Most likely AMD is moving away from the enthusiast consumer, as that market seems to be shrinking lately, with the majority of users wanting a handheld device. They have Llano to attack the mobile space and BD to attack the server space, which will be providing services like cloud computing to those mobile devices. The direction is still uncertain, as AMD was under lackluster management by Hector Ruiz and Dirk Meyer. Hopefully under Rory Read, things may change for the better in terms of competitiveness.
dma0991
post Sep 12 2011, 08:38 PM



QUOTE(shajack @ Sep 12 2011, 08:25 PM)
if the enthusiast market is shrinking, kinda wondering y intel is pushing its processor(beside EE) every other years, which is breaking benchmark every now n then (certainly not everyone look at benchmark,but lots do)...tho intel is certainly losing the handheld segment
*
I can't say for sure that the enthusiast market is shrinking, but market analysts say the forecasted growth percentage has dropped, which is a sign that it is. As much as I hate to say it, the enthusiast market is actually quite small compared to the mainstream. People who are computer-illiterate outnumber people who are very technical with their PCs by a huge margin.

Some say that SB is the sweet spot for an enthusiast, and IB might not promise much either, as it is geared toward the mobile space. All the 3D transistor talk you've heard brings more benefits to mobile, and Intel is not sitting still in that space either when they can dump $300 million into making sure the Ultrabook is a success. Desktops will probably regain some popularity once the handheld device bubble pops. There are only so many cores you can put into a handheld device, and scaling is an issue when a quad-core ARM processor is already in the making this year, all while battery capacity has not improved at all.
dma0991
post Sep 13 2011, 01:17 PM



QUOTE(billytong @ Sep 13 2011, 12:58 PM)
Not until someone put voice recognition as window default User interaction, Some AI within windows to think and help u as assistant. Microsoft hasnt been agresively pushing Kinect to Desktop

I still have to use the same 30years old keyboard/mouse combo to navigate my computer UI. I cant tell my PC to search something for me give instructions to do certain task using voice. 

Until these actually come, the mass user will most likely go towards tablet, laptop.
*
You do realize that voice recognition is not exactly a strong point for a computer. Computer language is rigid while human language is not. We can mix and match what we say and still be understood by another person, but not by a computer. To actually push voice recognition forward you need a large database of voices, and Google has done some of that work, but it is definitely proprietary and they are not willing to sell it. Even so, I don't think you can do speech recognition on the phone alone.

I tried speech recognition on a Galaxy S before, and I don't think it works offline, which leads me to think your voice is relayed to Google's servers before the transcription comes back. Even speech recognition is rigid in the sense that some systems need a trigger word before a command is given. Tablets and the like are entertainment devices, and hardly any serious work can be done on them. A real keyboard is here to stay for a very long time. I'd like to see someone actually try to type without looking at their touch screen; I'm sure some could, but the absence of tactile feedback makes it much harder.
dma0991
post Sep 13 2011, 02:08 PM



QUOTE(billytong @ Sep 13 2011, 01:44 PM)
» Click to show Spoiler - click again to hide... «

*
No, I'm not saying that speech recognition wouldn't benefit a computer; it definitely would be an added advantage. I'm saying that, based on how a computer works, it can't really recognize speech directly the way humans can. It is not that speech recognition has gone unresearched; it is very much a work in progress. I had friends attend an event hosted by Google for speech: they were given a Google Nexus and a piece of paper with certain words to say while the Nexus recorded their voices. That is part of how speech recognition works, and you definitely need a database of speech samples, as there are infinite variations in voices. The sad part is that I did not get to join the event; my friend got a nice Google t-shirt which I want.

We do have enough hardware to process speech, just not in the smartphone or handheld itself. We still have to rely on a large database and the processing power of a server to transcribe fast enough. The algorithms for speech recognition are relational in one way or another, somewhat like IBM's Watson: it can't recognize what it doesn't have in its database. You need a large enough database of the required data and fast enough processing to make sure an answer is given instantly.
dma0991
post Sep 13 2011, 07:49 PM



QUOTE(everling @ Sep 13 2011, 07:02 PM)
I think I can live without hearing my neighbours browse their porn folders via speech recognition. laugh.gif
*
Nice one. laugh.gif
dma0991
post Sep 13 2011, 10:33 PM



QUOTE(HiT-AbLe @ Sep 13 2011, 10:30 PM)
More news, apparently most of the BD can do 5Ghz on air, 5.5Ghz is the highest on air so far. BD pretty much won't beat SB on IPC, so the question is can higher clock BD able to match SB?
*
It was designed from the ground up for high clock speeds, as AMD knows they don't have the IPC to match SB. Nevertheless, I've heard that overvolting up to 1.5V is still quite a safe range for BD.
dma0991
post Sep 19 2011, 11:32 AM



QUOTE(billytong @ Sep 19 2011, 09:03 AM)
If they got something good to show, they probably show it by now. Multiple delays + no official performance showcase = more likely the product is a failure. It make no sense for not showing a great performance of bulldozer(if it is true) to cannibalize the sales of SB.
*
Regardless of how BD will actually perform, it is better for AMD not to show any results prior to launch. They are small compared to Intel, and showing your trump card early in the game would let the competition know exactly how well your products perform. No matter how well BD performs, there is no way AMD will take Intel's share of the market, period; you wouldn't see AMD become the market leader and Intel the underdog in a short period of time. Also, GloFo does not have the production capacity or R&D budget to be as competitive as Intel.
dma0991
post Sep 20 2011, 08:23 PM



QUOTE(billytong @ Sep 20 2011, 01:33 PM)
I have somewhat agree with u at some points u said, trump card is best not to show too early, but the situation is diff now. SB is selling like hotcakes, if u are AMD if u got something good, u could showed it by now.(hey wait we got something better dont buy SB yet). It is already within +/- 1-2 months from release, there is no way Intel can come up something better this quick. So IMO it is actually better for AMD to show some real performance figures than hiding everything. Remember they are showing their bobcat openly prior of launch? Why not bulldozer?
*

You shouldn't reveal a product too long before launch; read about the Osborne effect.
QUOTE(billytong @ Sep 20 2011, 01:33 PM)
Exactly, we not even count that the 1yr old SB 32nm can go up to 3.8GHz easily with stock volt. Intel could easily launch a 3.8GHz SB if they wanted to, Ivy bridge is around the corner, it could easily reach 4GHz+ if Intel are really pushing it.

If Bulldozer want to win this competition they have to have clock for clock performance and also able to scale up to 4GHz+ together. Otherwise they should just rename it to Phenom II X8
*

3.8GHz at your stock voltage, not the average stock voltage across all processors, which varies depending on whether you got a lemon or a golden chip. If Intel were to release SB at a higher clock speed, yields would drop as the bar for standard quality is raised; it has to be in the right range based on their internal batch testing.

You can't compare Intel and AMD directly on IPC, as both use different methods to achieve the same goal. SB has higher IPC but likely a lower clock speed; BD has higher clock speeds to make up for its lower IPC. That is why BD can break the speed record: it is like NetBurst, the high-clock, low-IPC approach that Intel is no longer pursuing.
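The tradeoff above is easy to model crudely: single-thread performance is roughly IPC times clock. The figures below are made-up illustrations, not benchmark results:

```python
def relative_perf(ipc: float, clock_ghz: float) -> float:
    # Crude single-thread model: performance ~ instructions/cycle * cycles/second.
    return ipc * clock_ghz

sb = relative_perf(ipc=1.0, clock_ghz=3.4)  # assumed: higher IPC, lower clock
bd = relative_perf(ipc=0.8, clock_ghz=4.2)  # assumed: lower IPC, higher clock
print(f"SB: {sb:.2f}, BD: {bd:.2f}")
```

With these made-up numbers, an 800MHz clock advantage does not quite cover a 20% IPC deficit, which is the shape of the tradeoff being argued about.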


QUOTE(yimingwuzere @ Sep 20 2011, 06:55 PM)
http://www.pcgameshardware.de/aid,837552/B...ie/?iid=1555077

Not sure if serious benchmarks, as OBR leaked a batch of fake benchmarks to Donanimhaber before.
*

Anything OBR says, writes, or does has to be taken with a large chunk of rock salt; a grain wouldn't be enough.
dma0991
post Sep 21 2011, 11:44 AM



QUOTE(billytong @ Sep 21 2011, 10:11 AM)
3.8Ghz is actually a very conservative number. Anyone with a SB would know that a SB can go 4.2GHz @ stock volt, thats with IF you are talking lemon/golden chip. You can refer to Lex's comment an official 3.6GHz is already here.

How much higher? SB is already reaching 3.6GHz, Ivy bridge could raise the bar to 4GHz. If the IPC is as low as what the leaked benchmark said(I hope it is not the truth),  AMD would need to have a lot higher frequency to cover up the ground to get on par with Ivy performance.
*
Not too sure about your SB, but on average most wouldn't go over 4.5GHz for a stable 24/7 OC, which already requires voltages quite near Intel's recommended 1.375V maximum for 32nm. 4.2GHz at stock voltage happens on some processors, but the majority cannot do it. Let's just say I am wrong and Intel really could release a chip at 3.8GHz with no problems whatsoever. Would they? I don't think so, because taking 4.5GHz as a baseline, the current parts offer a 1.1-1.2GHz overclock from stock, which is a huge gain compared to the 0.3-0.7GHz left over if stock were 3.8-4.2GHz; the wow factor isn't as great if Intel raises the stock clock out of the factory. It could be that Intel has intentionally left a lot of headroom just to give users the perception that SB is a great overclocker.

Again, IB might not be 4GHz but within the 3-4GHz range, though much closer to 4GHz than SB. IPC for BD is low because of how it was designed, not because it sucks clock for clock. We could well see 5GHz+ on average at 1.5V on BD, and 1.5V is still a conservative range given its claimed robustness. It could possibly go as far as 6GHz with proper custom watercooling, 24/7 stable. Although a SB could hit 5.7GHz at most, nobody would run that 24/7, as the required voltage would slowly kill it. What BD lacks in IPC it makes up for in clock speed. IB performance is still unknown, and most probably we'll be comparing Piledriver with IB by then.
dma0991
post Sep 21 2011, 01:00 PM



QUOTE(billytong @ Sep 21 2011, 12:00 PM)
the question remains is will there be a 5GHz official BD from AMD to deal with 2600K/2700K or Ivy bridge. 

SB is already 3.6Ghz, a rough +/-10% extra clock rate is about 4GHz. So I would not call it impossible for Ivy, it still up to Intel if they wanted to up their competitive bar or not. With the OC headroom SB has, Intel is still saving from headroom for BD. if it is performing much better than Intel expecting.
*
Most likely there will not be a 5GHz stock BD in the future. It is more likely that minor improvements will be made to BD's architecture to improve its performance. AMD's priority now is not the desktop space; not much effort will go into a market they can't compete in very well and which is not that big anymore. AMD's priority is the server space, which is why the server BD, Interlagos, was released much earlier.

I never said it is impossible for IB to hit 4GHz, I just said it is more likely that Intel won't do it. Why would they release the first iteration at 3.8GHz at the current price when they can later release a processor with a 200MHz speed bump over the original and call it an upgrade at a higher price? There is no need for Intel to push the limits of their processors and raise the competitive bar when they are not facing much competition from AMD.
