

 86 Mac Plus Vs 07 AMD DualCore! Unbelievable!

joe_star
post Jun 3 2007, 11:27 AM

Serving the Servants
Senior Member
1,810 posts

Joined: Mar 2007
QUOTE(kmarc @ Jun 3 2007, 10:16 AM)
If the author really wants a comparison, somebody could just write a simple piece of software for the dual-core that does all the functions he compared. It would probably just need a few lines of code anyway... Then we can assess whether it is slower. Anyway, what's the point? The difference is too insignificant!
*
Which I believe is the point of the article anyway. Programmers should go back to basics and produce more efficient, simpler code rather than making it more complicated. Any expert in this field, please correct me if I'm wrong, as I'm not really into this field, but that's the conclusion I can come to with my (near-zero) knowledge anyway.
a1098113
post Jun 3 2007, 11:34 AM

~Retired~
Senior Member
3,119 posts

Joined: May 2007
From: Home


QUOTE(joe_star @ Jun 3 2007, 11:27 AM)
Which I believe is the point of the article anyway. Programmers should go back to basics and produce more efficient, simpler code rather than making it more complicated. Any expert in this field, please correct me if I'm wrong, as I'm not really into this field, but that's the conclusion I can come to with my (near-zero) knowledge anyway.
*
Well, it's all about keeping the cash flow coming in though... if Bill Gates had stopped at XP, then the companies that make computer software and hardware would have been stifled by it. There would be no 8800 GTS and no dual cores. I'm sure there are programmers who could code such systems, but it's a big money-making business, so all of us consumers have to live by it, and the programmers follow where the money talks.
kmarc
post Jun 3 2007, 11:52 AM

The future is here - Cryptocurrencies!
Elite
14,576 posts

Joined: May 2006
From: Sarawak



QUOTE(joe_star @ Jun 3 2007, 11:27 AM)
Which I believe is the point of the article anyway. Programmers should go back to basics and produce more efficient, simpler code rather than making it more complicated. Any expert in this field, please correct me if I'm wrong, as I'm not really into this field, but that's the conclusion I can come to with my (near-zero) knowledge anyway.
*
That would be ideal but not in this day and age.

Software companies are producing apps that cater to everybody. They can't be making software that only caters to a certain segment of the market.

Take Microsoft Word, for instance. Frankly speaking, I only use it for typing notes, spell checking, the thesaurus, and some tables and pictures. That's all. However, there are other functions that I don't use but that are used by other people, like simple drawing, formulas, review, web page design, etc. And sometimes there are those rare functions that I need that happen to be available in Word.

Take the generic Paint program in Windows as another example. It is so simple and easy to use that I usually use it to edit simple pictures to paste in this forum. However, when I want to do something more advanced, the program doesn't have that function. So I have to resort to big, bulky, full-of-functions apps like Photoshop to edit the pictures.

Then there are people who start complaining that a certain app is no good because it does not have this function or that function. Imagine if Microsoft Word did not have the automatic spell-checker...


charge-n-go
post Jun 3 2007, 01:43 PM

Look at all my stars!!
Senior Member
4,060 posts

Joined: Jan 2003
From: Penang / PJ

QUOTE(ikanayam @ Jun 3 2007, 11:09 AM)
4. I don't think memory bandwidth is a limitation in the vast majority of applications. L1 cache hit rates are >90% (>95% even, I think) in modern CPUs. L2 will catch much of the rest. Only in certain massively streaming parallel processing applications will you be memory bandwidth limited, but those tend to be processing-heavy as well, so you are probably processing-power limited before that.
*
Yes, it's definitely not a memory bandwidth limitation. Smart memory prefetching has eliminated a lot of the memory transactions on the FSB.

Actually, there are just too many branches and too much serialization in modern software, where SMT is completely useless. Other factors like cache and page-table coherency will also drag down multithreaded performance more and more as we put more and more cores in the system.


Of course, these days it is practically impossible to write programs in only assembly or C. So the optimization won't be as good as in the olden days (like the Mac Plus). Heck, even C code is a lot slower than hand-written ASM! (from my observation of internal test suites).
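To make the serialization point concrete, here is a minimal C sketch (invented for illustration, not from the original post) of Amdahl's law: whatever fraction of the work stays serial puts a hard cap on how much extra cores can help.

CODE
/* Minimal sketch of Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n),
 * where p is the fraction of the work that actually runs in parallel.
 * The serial remainder (1 - p) caps the speedup no matter how many cores. */
#include <stdio.h>

static double amdahl_speedup(double p, int cores)
{
    return 1.0 / ((1.0 - p) + p / cores);
}

int main(void)
{
    const double p = 0.90;                  /* assume 90% of the work parallelizes */
    const int cores[] = { 1, 2, 4, 8, 16 };

    for (int i = 0; i < 5; i++)
        printf("%2d cores -> %.2fx speedup\n", cores[i], amdahl_speedup(p, cores[i]));

    return 0;   /* with p = 0.90 the speedup can never exceed 10x, however many cores */
}

With 90% of the work parallel, 16 cores only buy you about a 6.4x speedup, which is exactly the kind of flattening described above.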

This post has been edited by charge-n-go: Jun 3 2007, 01:48 PM
ikanayam
post Jun 3 2007, 01:57 PM

there are no pacts between fish and men
Senior Member
10,544 posts

Joined: Jan 2003
From: GMT +8:00

QUOTE(charge-n-go @ Jun 3 2007, 12:43 AM)
Yes, it's definitely not a memory bandwidth limitation. Smart memory prefetching has eliminated a lot of the memory transactions on the FSB.

Actually, there are just too many branches and too much serialization in modern software, where OOO or SMT cannot be used. These days it is practically impossible to write programs in only assembly or C. So the optimization won't be as good as in the olden days. Heck, even C code is a lot slower than hand-written ASM! (from my observation of internal test suites).
*
Prefetching tries to hide latency; it can't hide a lack of memory bandwidth. If anything, it increases memory transactions, because you don't always prefetch the right data.

Programming-wise, it's a matter of where you spend your time optimizing, of course. Good programming practices coupled with a good compiler can do very well. Of course, for your super-critical stuff you will want to go down to ASM, but otherwise you can probably get bigger improvements by simply spending your time improving your algorithm rather than hacking at ASM code. Same thing with hardware design: the biggest gains come from improvements in algorithms, and the gain typically gets smaller as you move down to a lower level.
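For a rough sense of scale, here is a small C sketch (made up for illustration, not from the original post) of why an algorithmic change dwarfs low-level tuning: searching a sorted array with a linear scan versus binary search.

CODE
/* Two ways to find a key in a sorted array of n ints.
 * The linear scan is O(n); binary search is O(log n).
 * No amount of hand-written ASM closes that gap for large n. */

/* O(n): check every element until the key turns up */
static int linear_search(const int *a, int n, int key)
{
    for (int i = 0; i < n; i++)
        if (a[i] == key)
            return i;
    return -1;
}

/* O(log n): halve the remaining range on every step */
static int binary_search(const int *a, int n, int key)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   /* written this way to avoid (lo + hi) overflow */
        if (a[mid] == key)
            return mid;
        if (a[mid] < key)
            lo = mid + 1;
        else
            hi = mid - 1;
    }
    return -1;
}

For a million-element array that is on the order of a million comparisons per lookup versus about twenty, which no compiler flag or hand-tuned inner loop will ever recover.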
kmarc
post Jun 3 2007, 01:59 PM

The future is here - Cryptocurrencies!
Elite
14,576 posts

Joined: May 2006
From: Sarawak



QUOTE(charge-n-go @ Jun 3 2007, 01:43 PM)
Of course, these days it is practically impossible to write programs in only assembly or C. So the optimization won't be as good as in the olden days (like the Mac Plus). Heck, even C code is a lot slower than hand-written ASM! (from my observation of internal test suites).
*
Yup. I still remember the days when people talked about code optimization. Assembly (ASM) language was the best!! However, it was also a pain in the neck to program in, and it's virtually impossible to write today's programs in it, since they run to millions and millions of lines of code...
ikanayam
post Jun 3 2007, 02:09 PM

there are no pacts between fish and men
Senior Member
10,544 posts

Joined: Jan 2003
From: GMT +8:00

QUOTE(kmarc @ Jun 3 2007, 12:59 AM)
Yup. I still remember the days when people talked about code optimization. Assembly (ASM) language was the best!! However, it was also a pain in the neck to program in, and it's virtually impossible to write today's programs in it, since they run to millions and millions of lines of code...
*
It is in software with millions of lines of code (such as an OS) that you typically see a lot of hand-tuned ASM code. You don't have to do it all in ASM from scratch; you can compile it and then hand-optimize the ASM code for certain critical stuff. Obviously it makes sense to do this for a function you call a billion times, or for an important inner loop.
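One common way to do that in C (a sketch only; the helper below is invented for the example) is GCC-style inline assembly, so just the critical helper is written in ASM while everything around it stays ordinary C. The other route is to let the compiler emit the assembly for you (e.g. gcc -O2 -S file.c) and then hand-tune the resulting .s file for the hot loop.

CODE
#include <stdint.h>

/* Hypothetical hot helper dropped into x86 assembly with GCC extended asm;
 * everything else in the program stays plain C. Purely illustrative. */
static inline uint32_t byte_swap32(uint32_t x)
{
    __asm__ ("bswap %0" : "+r"(x));   /* reverse the byte order in a register */
    return x;
}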
charge-n-go
post Jun 3 2007, 02:16 PM

Look at all my stars!!
Senior Member
4,060 posts

Joined: Jan 2003
From: Penang / PJ

QUOTE(ikanayam @ Jun 3 2007, 01:57 PM)
Prefetching tries to hide latency; it can't hide a lack of memory bandwidth. If anything, it increases memory transactions, because you don't always prefetch the right data.
*
LoL, my mistake. What was I thinking just now?

QUOTE(ikanayam @ Jun 3 2007, 02:09 PM)
Programming-wise, it's a matter of where you spend your time optimizing, of course. Good programming practices coupled with a good compiler can do very well. Of course, for your super-critical stuff you will want to go down to ASM, but otherwise you can probably get bigger improvements by simply spending your time improving your algorithm rather than hacking at ASM code. Same thing with hardware design: the biggest gains come from improvements in algorithms, and the gain typically gets smaller as you move down to a lower level.

It is in software with millions of lines of code (such as an OS) that you typically see a lot of hand-tuned ASM code. You don't have to do it all in ASM from scratch; you can compile it and then hand-optimize the ASM code for certain critical stuff. Obviously it makes sense to do this for a function you call a billion times, or for an important inner loop.
*
Well said; the algorithm is very important. The C/C++ compilers shouldn't have too much of a problem, since Intel actually provides them to customers, but I don't know how efficient other high-level languages will be. From my experience with test coding, the most important part is the algorithm inside a for loop. If one iteration is a bit slower, the end result will be painfully slow, considering there are millions of iterations (streaming, for example). Actually, some critical functions which access hardware directly (usually APIs) will be written in ASM or C.
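A tiny C example of the kind of per-iteration cost that adds up (the functions are invented for illustration): leaving a length check inside the loop condition redoes the work on every pass, so over millions of iterations it dominates everything.

CODE
#include <string.h>

/* Slow: strlen() rescans the whole string on every iteration -> O(n^2) overall */
void to_upper_slow(char *s)
{
    for (size_t i = 0; i < strlen(s); i++)
        if (s[i] >= 'a' && s[i] <= 'z')
            s[i] -= 'a' - 'A';
}

/* Fast: the length never changes here, so hoist it out of the loop -> O(n) */
void to_upper_fast(char *s)
{
    size_t len = strlen(s);
    for (size_t i = 0; i < len; i++)
        if (s[i] >= 'a' && s[i] <= 'z')
            s[i] -= 'a' - 'A';
}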
kmarc
post Jun 3 2007, 02:43 PM

The future is here - Cryptocurrencies!
Elite
14,576 posts

Joined: May 2006
From: Sarawak



QUOTE(ikanayam @ Jun 3 2007, 02:09 PM)
It is in software with millions of lines of code (such as an OS) that you typically see a lot of hand-tuned ASM code. You don't have to do it all in ASM from scratch; you can compile it and then hand-optimize the ASM code for certain critical stuff. Obviously it makes sense to do this for a function you call a billion times, or for an important inner loop.
*
QUOTE(charge-n-go @ Jun 3 2007, 02:16 PM)
LoL, my mistake. What was I thinking just now?
Well said; the algorithm is very important. The C/C++ compilers shouldn't have too much of a problem, since Intel actually provides them to customers, but I don't know how efficient other high-level languages will be. From my experience with test coding, the most important part is the algorithm inside a for loop. If one iteration is a bit slower, the end result will be painfully slow, considering there are millions of iterations (streaming, for example). Actually, some critical functions which access hardware directly (usually APIs) will be written in ASM or C.
*
Wow! That's deep but informative. Not for average users like me!

Thanks for the info!

Just curious, do average programmers do ASM these days?

This post has been edited by kmarc: Jun 3 2007, 02:44 PM
ikanayam
post Jun 3 2007, 02:53 PM

there are no pacts between fish and men
Senior Member
10,544 posts

Joined: Jan 2003
From: GMT +8:00

QUOTE(charge-n-go @ Jun 3 2007, 01:16 AM)
LoL, my mistake. What was I thinking just now?
Well said; the algorithm is very important. The C/C++ compilers shouldn't have too much of a problem, since Intel actually provides them to customers, but I don't know how efficient other high-level languages will be. From my experience with test coding, the most important part is the algorithm inside a for loop. If one iteration is a bit slower, the end result will be painfully slow, considering there are millions of iterations (streaming, for example). Actually, some critical functions which access hardware directly (usually APIs) will be written in ASM or C.
*
Anyone who is serious about performance would not use other, less efficient languages.

QUOTE(kmarc @ Jun 3 2007, 01:43 AM)
Just curious, do average programmers do ASM these days?
*
No, you only use ASM to optimize some really critical stuff. You would have to be at least "pretty good" and understand the hardware you are optimizing for before you can do that. Otherwise you may end up doing a worse job than the compiler.
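As a small illustration of how easily the compiler wins (an invented example, not from the post): a "clever" hand replacement of a division by a shift gains nothing, because the optimizer already does exactly that.

CODE
#include <stdint.h>

/* The obvious version: gcc/clang at -O2 already turn this unsigned division
 * by 2 into a single shift, so writing the shift (or the ASM) by hand buys
 * nothing here and just makes the code harder to read. */
uint32_t halve(uint32_t x)
{
    return x / 2;        /* compiles to a shr instruction anyway */
}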
a1098113
post Jun 3 2007, 03:16 PM

~Retired~
Senior Member
3,119 posts

Joined: May 2007
From: Home


Interesting machine language and C stuff, hehe. I'd rather see some simpler stuff so I can still run my old P4 machine with today's software. I don't like change to be so rapid; it's an exponential rise as far as tech is concerned. I hope there will be somewhat of a rest, so people like me can catch up and equip ourselves.
cruel_boy
post Jun 3 2007, 04:41 PM

BeyONd cRUelIty LieS sIVa
Senior Member
913 posts

Joined: Feb 2005
From: Ampang, Kuala Lumpur, Seremban...



I can just say one thing: if anyone thinks that older systems perform better than current ones for their normal tasks... why don't you just switch to an old system to accommodate your needs?

There are thousands, and perhaps millions, of old systems dumped in basements and storerooms, waiting, never to be used again. You want the latest technology, and yet you want it to be simple. Let me ask one thing... can a Ferrari go from 0 to 100 km/h in 3.3 seconds if its engine is as simple as a Proton's?

If you have newer tech, you need a new system, a new UI and new software. If not, there is no point in introducing the new technologies.
joe_star
post Jun 3 2007, 06:28 PM

Serving the Servants
Senior Member
1,810 posts

Joined: Mar 2007
One might ask... do we need Ferraris to go to work or college? But that's not the point of this topic. No one here is saying that we do not want to embrace new tech, nor that old tech is better than new. But the implementation could possibly be improved. I for one agree with that.

 
