 Intel 13th/14th gen cpus crashing, degrading

chocobo7779
post May 11 2024, 04:19 PM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
QUOTE(babylon52281 @ May 11 2024, 04:10 PM)
Fully agree with you. Without being tied to x86 legacy, Apple could wipe the slate clean with a new CPU design, and they clearly showed what a truly modern CPU uarch could do on the current manufacturing process. Both Intel and AMD x86 will need some newfangled, complex and expensive SoC layout or exotic materials to push speeds higher, and all that just to match even the current M3/M4 made on the same matured process.

I don't fully agree on the reason why Apple charges that much. While yes, their CPU SoC is much larger, the cost to make it doesn't scale up the way the price does. Apple charges sucker prices simply because it's an Apple. And given the sheer volume, their per-unit CPU cost isn't all that different, as these chips go into iPhones too.
QUOTE
without being tied to X86 legacy

Again, the ISA doesn't really matter, especially when you consider how complex modern CPUs are. The x86 bloat is mostly vestigial at this point and doesn't really affect the ability to make highly efficient chips.

QUOTE
Both Intel/AMD X86 will need some newfangled complex & expensive SOC layout to push speeds higher

Well, they already do; see the AMD APUs in the PS5/Series X consoles for an example (not directly comparable to Apple Silicon, as AMD has to design them with a budget in mind; after all, those consoles cost around USD 500, and both Sony and Microsoft still sell them at a loss and recoup it through game sales and subscriptions).

This post has been edited by chocobo7779: May 11 2024, 04:25 PM
chocobo7779
post May 11 2024, 04:30 PM

QUOTE(babylon52281 @ May 11 2024, 04:22 PM)
I'm advocating more for fully realising the ARM uarch in a desktop equivalent, or else a new CPU uarch from scratch without the inefficient legacy (hello RISC-V?)
Again, no. According to Jim Keller, who worked at AMD/Apple/DEC/PA Semi (PA Semi being probably one of the best acquisitions Apple ever made) and was responsible for CPUs like the AMD K8:

QUOTE
JK: [Arguing about instruction sets] is a very sad story. It's not even a couple of dozen [op-codes] - 80% of core execution is only six instructions - you know, load, store, add, subtract, compare and branch. With those you have pretty much covered it. If you're writing in Perl or something, maybe call and return are more important than compare and branch. But instruction sets only matter a little bit - you can lose 10%, or 20%, [of performance] because you're missing instructions.
QUOTE
JK: I care a little. Here's what happened - so when x86 first came out, it was super simple and clean, right? Then at the time, there were multiple 8-bit architectures: x86, the 6800, the 6502. I programmed probably all of them way back in the day. Then x86, oddly enough, was the open version. They licensed that to seven different companies. Then that gave people opportunity, but Intel surprisingly licensed it. Then they went to 16 bits and 32 bits, and then they added virtual memory, virtualization, security, then 64 bits and more features. So what happens to an architecture as you add stuff, you keep the old stuff so it's compatible.

So when Arm first came out, it was a clean 32-bit computer. Compared to x86, it just looked way simpler and easier to build. Then they added a 16-bit mode and the IT (if then) instruction, which is awful. Then [they added] a weird floating-point vector extension set with overlays in a register file, and then 64-bit, which partly cleaned it up. There was some special stuff for security and booting, and so it has only got more complicated.

Now RISC-V shows up and it's the shiny new cousin, right? Because there's no legacy. It's actually an open instruction set architecture, and people build it in universities where they don’t have time or interest to add too much junk, like some architectures have. So relatively speaking, just because of its pedigree, and age, it's early in the life cycle of complexity. It's a pretty good instruction set, they did a fine job. So if I was just going to say if I want to build a computer really fast today, and I want it to go fast, RISC-V is the easiest one to choose. It’s the simplest one, it has got all the right features, it has got the right top eight instructions that you actually need to optimize for, and it doesn't have too much junk.


https://www.anandtech.com/show/16762/an-ana...person-at-tesla

...and it's not like ARM is a 'bloat free' ISA either.

RISC-V? Probably, but unless there is a commercially available chip with a large enough software library, there's little reason for that ISA to take off. Note that the success of an ISA goes way beyond performance/efficiency; that's how, back in the 1990s, Intel was able to see off a host of alternative ISAs (such as Alpha/MIPS/SPARC, and later Itanium), even though several of them were technically superior to x86, thanks to its strong install base and the massive economies of scale it offers.
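Keller's claim that ~80% of core execution boils down to about six instructions can be made concrete with a toy interpreter. This is a purely illustrative sketch (the six-op machine, register names, and program encoding are invented for the example and don't correspond to any real ISA): it computes 1+2+...+10 using nothing but load, store, add, sub, compare and branch.

```python
# A toy machine whose entire ISA is Keller's six workhorse ops:
# load, store, add, sub, cmp (compare), br (conditional branch).

def run(program, memory):
    regs = {}
    flag = False  # result of the most recent cmp
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "load":      # load rd, addr  : rd = memory[addr]
            rd, addr = args; regs[rd] = memory[addr]
        elif op == "store":   # store rs, addr : memory[addr] = rs
            rs, addr = args; memory[addr] = regs[rs]
        elif op == "add":     # add rd, a, b   : rd = a + b
            rd, a, b = args; regs[rd] = regs[a] + regs[b]
        elif op == "sub":     # sub rd, a, b   : rd = a - b
            rd, a, b = args; regs[rd] = regs[a] - regs[b]
        elif op == "cmp":     # cmp a, b       : flag = (a != b)
            a, b = args; flag = regs[a] != regs[b]
        elif op == "br":      # br target      : jump if flag set
            (target,) = args
            if flag:
                pc = target
                continue
        pc += 1
    return memory

# memory layout: [0]=i, [1]=acc, [2]=limit, [3]=constant 1
mem = {0: 0, 1: 0, 2: 10, 3: 1}
prog = [
    ("load", "i",   0),           # 0
    ("load", "acc", 1),           # 1
    ("load", "lim", 2),           # 2
    ("load", "one", 3),           # 3
    ("add", "i", "i", "one"),     # 4: i += 1
    ("add", "acc", "acc", "i"),   # 5: acc += i
    ("cmp", "i", "lim"),          # 6: flag = (i != 10)
    ("br", 4),                    # 7: loop while i != 10
    ("store", "acc", 1),          # 8: write result back
]
print(run(prog, mem)[1])  # 55
```

The point of the sketch is Keller's: once you have these six, adding fancier instructions to a real ISA mostly buys incremental gains, not a different class of machine.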

This post has been edited by chocobo7779: May 11 2024, 04:39 PM
chocobo7779
post May 11 2024, 11:51 PM

QUOTE(hashtag2016 @ May 11 2024, 05:03 PM)
x86 is an important legacy, and I would rather it stay than go. If x86 is no more, very likely PC DIY will be no more too.
Socketed ARM chips do exist, but yeah, you don't really have to worry about the state of x86 right now.

One of the reasons x86 couldn't compete with ARM chips on efficiency is that there was not much innovation or progress in the x86 realm (especially in power efficiency) for over a decade, due to a severe lack of competition. AMD's Phenom series was lukewarm at best from a performance/price standpoint, and Bulldozer was a huge disaster, to the point of driving AMD toward near bankruptcy and probably leading Intel to make only small iterations on its CPUs. It wasn't until 2017 that Ryzen arrived on the scene, and even then AMD didn't return to proper competition across the board until Zen 2.

On the other hand, Apple has been designing low-powered ARM chips for iPhones and iPads for over a decade now (they have been dabbling in semiconductors since the early 1980s), and their offerings have significantly outperformed many SoCs from Android competitors for years, even today. In doing so they have gathered a great deal of know-how on designing high-performance, low-powered SoCs; coupled with the stagnant x86 market, it's not hard to see why they switched to their in-house silicon (note that the M series chips are not just scaled-up A series chips).

The x86 incumbents, by comparison, haven't really focused on efficiency-minded chips, their design targets being mostly performance and area efficiency. Power efficiency for x86 was something of a niche concern back then outside of netbooks/subnotebooks (remember those?).

That being said, it seems both AMD and Intel are starting to focus on power efficiency in mobile chips (see Phoenix and the upcoming Strix Point APUs, along with Intel's new Meteor Lake chips). It certainly isn't as groundbreaking as Apple Silicon, but it is a good stepping stone after many years of stagnancy (note that chip design and manufacturing can be incredibly time-intensive).

This post has been edited by chocobo7779: May 11 2024, 11:51 PM
chocobo7779
post Aug 7 2024, 09:22 PM

QUOTE(kingkingyyk @ Aug 7 2024, 03:06 PM)
Not really, again. x86 can scale up to that level too, but it would be large and too costly to produce.

So it is not that x86 is bad at efficiency and ARM is the savior; it comes down to the company making the engineering decisions. Recall that we had similar designs with Snapdragon vs Dimensity, one made by Samsung and the other by MediaTek, yet the efficiency curves are completely different.
Even the SD X Elite isn't that much of a competitor to modern x86 chips, despite being designed by a team of ex-Apple Silicon engineers.

 
