Intel 13th/14th gen CPUs crashing, degrading

babylon52281
post May 11 2024, 03:36 PM

QUOTE(hashtag2016 @ May 10 2024, 11:49 PM)
You can have your own opinion on the matter or the product (that's what forums exist for), but casually shouting 'fanbois this, fanbois that' is not nice...

It seems Intel will publish an official announcement this month, so they do know how serious the issue is.

p/s: I only mentioned the X3D burning issue after somebody else brought it up, even though it's unrelated to this topic.
*
Lol, fanbois and haters got burned by the truth and you folks don't like it; well, tough. Yes, you can voice your opinion, but doing that while conveniently ignoring that others have opinions too just means you're a fanboi.

Intel will release an official reply, just as AMD did with their own CPU burn case; that is a given.

Stop with the pissing posts and then maybe people will have some respect for what you say.
babylon52281
post May 11 2024, 03:40 PM

QUOTE(stella_purple @ May 11 2024, 02:42 AM)
The AMD one is worse; it may turn into a fire hazard

*
Both sides' CPUs have issues when pushed too far. This is what happens when efficiency goes out the window and both sides ramp up the power game, then make it worse by letting users push even further.

ARM & RISC-V FTW!

This post has been edited by babylon52281: May 11 2024, 03:41 PM
babylon52281
post May 11 2024, 03:52 PM

QUOTE(chocobo7779 @ May 11 2024, 08:22 AM)
No PCIe 4? Not that much of a problem for most GPUs, unless you are talking about something like the RX 6400/6500 XT with their nerfed PCIe x4 interface. Even the mighty 4090 only loses about 2% performance on PCIe 3.0 x16.

On the SSD side of things, there's not much difference between PCIe 3.0 and 4.0 for gaming either.

USB 3.2? That belongs more to 'nice to have' territory than must-have, and even that can be added with a PCIe card if you really need it.
*
Nah, not for the GPU; maybe only the 4090 can fully utilise x16 at Gen4 bandwidth. It's for M.2 SSDs, and even if you don't see much benefit today, DirectStorage will let faster drives load world detail with less latency, meaning a less laggy gaming experience.

It's like how a few years ago people might not have seen the need for >8GB of VRAM on a GPU, but oh boy, isn't that an issue in more and more of today's games. As game worlds become bigger and more detailed, they will sooner or later need to stream directly from the SSD at a faster pace.
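To put some rough numbers on that (a back-of-the-envelope sketch only; the 8 GB asset burst is a made-up example, and the GB/s figures are theoretical link ceilings before controller and filesystem overhead):

CODE
# Back-of-the-envelope: time to stream a fixed asset budget at different link speeds.
# The asset size is a hypothetical example; the GB/s values are theoretical
# sequential ceilings, and real drives land somewhat below them.

LINKS_GBPS = {
    "SATA III SSD":     0.55,  # ~550 MB/s practical sequential read
    "PCIe 3.0 x4 NVMe": 3.9,   # ~3.94 GB/s link ceiling
    "PCIe 4.0 x4 NVMe": 7.9,   # ~7.88 GB/s link ceiling
}

ASSET_BURST_GB = 8  # hypothetical open-world streaming burst

for name, gbps in LINKS_GBPS.items():
    print(f"{name:17s}: {ASSET_BURST_GB / gbps:5.2f} s to stream {ASSET_BURST_GB} GB")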
chocobo7779
post May 11 2024, 03:53 PM

QUOTE(babylon52281 @ May 11 2024, 03:40 PM)
Both sides' CPUs have issues when pushed too far. This is what happens when efficiency goes out the window and both sides ramp up the power game, then make it worse by letting users push even further.

ARM & RISC-V FTW!
*
You can also have inefficient ARM cores if you clock them to the moon; note that dynamic power scales roughly linearly with frequency and with the square of voltage.
This is why high clock speeds can be a bad thing if the process node or the architecture isn't built for them (even Intel admits it in their slide deck; see the flattening curves on the power scaling chart):

https://download.intel.com/newsroom/2022/cl...nalyst-deck.pdf
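As a rough illustration of that relationship (a minimal sketch using the usual dynamic-power approximation P ≈ C·V²·f; the voltage/frequency points are made-up examples, not measurements of any real chip):

CODE
# Dynamic CPU power scales roughly as P ~ C * V^2 * f.
# The effective capacitance and the (V, f) operating points are illustrative only.

def dynamic_power(c_eff, volts, freq_ghz):
    """Relative dynamic power for a given effective capacitance, voltage and clock."""
    return c_eff * volts**2 * freq_ghz

base  = dynamic_power(1.0, 1.00, 4.0)   # hypothetical 4.0 GHz at 1.00 V
boost = dynamic_power(1.0, 1.25, 5.5)   # hypothetical 5.5 GHz at 1.25 V

print(f"{boost / base:.2f}x the power for {5.5 / 4.0:.2f}x the clock")
# -> ~2.15x the power for 1.38x the clock: the last few hundred MHz are very expensive.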

But then, despite the huge advances in x86 efficiency, we still have ridiculously inefficient chips, because both Intel and AMD have practically clocked them up to near-unsustainable levels (I mean, why do 5GHz+ ULV mobile chips and 6GHz+ desktop chips exist?)

This post has been edited by chocobo7779: May 11 2024, 04:10 PM
chocobo7779
post May 11 2024, 03:56 PM

QUOTE(babylon52281 @ May 11 2024, 03:52 PM)
Nah, not for the GPU; maybe only the 4090 can fully utilise x16 at Gen4 bandwidth. It's for M.2 SSDs, and even if you don't see much benefit today, DirectStorage will let faster drives load world detail with less latency, meaning a less laggy gaming experience.

It's like how a few years ago people might not have seen the need for >8GB of VRAM on a GPU, but oh boy, isn't that an issue in more and more of today's games. As game worlds become bigger and more detailed, they will sooner or later need to stream directly from the SSD at a faster pace.
*
Find one game that actually makes use of high-speed I/O, then. Even games like Rift Apart (which claims to lean heavily on disk I/O) do just fine on a SATA SSD, and considering the very long dev times of AAA titles, I think your machine will be long out of date before you need a faster SSD to run the game.
People here are vastly overestimating the disk I/O requirements of modern titles.

This post has been edited by chocobo7779: May 11 2024, 04:14 PM
chocobo7779
post May 11 2024, 04:07 PM

Anyway, let's get back on topic; we're getting a bit derailed here. The original topic is Intel's 14th gen stability issues, not a debate about ISAs and power efficiency.
babylon52281
post May 11 2024, 04:10 PM

QUOTE(lolzcalvin @ May 11 2024, 02:26 PM)
And that is what Apple is doing extremely right: at times their low-power performance rivals AMD/Intel high-end CPUs that use 4-5x more power for the same work. Their M4 was just released, with MT performance closing in on a 13700K and ST performance obliterating many, if not all, modern x86 CPUs. A BASE M4 is doing that, at a fifth of the power? Node advantage and SME aside, you cannot dismiss what Apple has been doing, and they're definitely putting more pressure on AMD/Intel. ESPECIALLY INTEL.

The M2/M3 era already saw those chips outperform their x86 counterparts in a number of applications such as Adobe apps, DaVinci Resolve and HandBrake. The new M4 era will be another eye-opener, similar to M2.

With the M4 released this early, Qualcomm must be sweating too. I mentioned previously that I had faith in the X Elite, but things change fast within a month; the X Elite has been due for over a year now. After their shoddy X Plus reveal a few weeks ago, rumours suggest they're in a very messy situation right now. We'll see how Qualcomm handles this.
Hence x86 lives on for backwards compatibility, to cater to legacy systems; it's been a long time since the 8086 era.

However, the 'slowness' really isn't x86's fault, because it's just an ISA; a good uarch (microarchitecture, i.e. the CPU design) yields great results. Apple has greatly improved their uarch to reach higher frequencies, as well as adding ARM SME, and even with a small IPC gain the M4 still yields ~25% improvement over the M3.
*
Fully agree with you. Without being tied to x86 legacy, Apple could wipe the slate clean with a new CPU design, and they clearly showed what a truly modern CPU uarch can do on the current manufacturing process. Intel/AMD x86 will need some newfangled, complex and expensive SoC layout or exotic materials to push speeds higher, and all that just to match even the current M3/M4 built on the same mature process.

chocobo7779
I don't fully agree on why Apple charges that much. Yes, their CPU SoC is much larger, but the cost to make it doesn't scale up anywhere near what you pay. Apple charges rip-off prices simply because it's Apple, and at their sheer volume the per-unit cost of their chips isn't all that different, since these also go into iPhones.
babylon52281
post May 11 2024, 04:14 PM

QUOTE(chocobo7779 @ May 11 2024, 03:53 PM)
You can also have inefficient ARM cores if you clock them to the moon; note that dynamic power scales roughly linearly with frequency and with the square of voltage.
This is why high clock speeds can be a bad thing if the process node or the architecture isn't built for them (even Intel admits it in their slide deck; see the flattening curves on the power scaling chart):

https://download.intel.com/newsroom/2022/cl...nalyst-deck.pdf

But then, despite the huge advances in x86 efficiency, we still have ridiculously inefficient chips, because both Intel and AMD have practically clocked them up to near-unsustainable levels (I mean, why do 5GHz+ ULV mobile chips and 6GHz+ desktop chips exist?)
*
Well, that's the thing. ARM's shtick is not the power game but power efficiency. Like NetBurst vs Core back then: do you want a high-clocked, inefficient chip, or a cooler, lower-clocked but technically 'faster' chip? The death of NetBurst and the rise of Core clearly showed what the market wants.
chocobo7779
post May 11 2024, 04:18 PM

QUOTE(babylon52281 @ May 11 2024, 04:14 PM)
Well, that's the thing. ARM's shtick is not the power game but power efficiency. Like NetBurst vs Core back then: do you want a high-clocked, inefficient chip, or a cooler, lower-clocked but technically 'faster' chip? The death of NetBurst and the rise of Core clearly showed what the market wants.
*
Yeah, but that'll require wider, larger core designs, which are not area efficient and will need larger dies, unless you want AMD/Intel to cannibalize their far more profitable server/HPC business, that is.
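Roughly why bigger dies hurt the economics (a sketch using the standard dies-per-wafer approximation; the die areas and wafer price are made-up illustrative numbers, not Intel/AMD figures):

CODE
# Dies per 300 mm wafer via the common approximation:
#   N ~ pi*(d/2)^2 / A  -  pi*d / sqrt(2*A)
# Die areas and the wafer price below are illustrative, not real product data.

import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

WAFER_PRICE_USD = 15000  # hypothetical leading-edge wafer price

for area in (100, 200):  # "narrow" vs "wide" core-complex die, in mm^2
    n = dies_per_wafer(area)
    print(f"{area} mm^2 die: ~{n} candidates per wafer, ~${WAFER_PRICE_USD / n:.0f} each before yield loss")

Doubling the die area here cuts candidates per wafer by more than half, so the per-die cost more than doubles even before yield losses are counted.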
babylon52281
post May 11 2024, 04:18 PM

QUOTE(chocobo7779 @ May 11 2024, 04:07 PM)
Anyway, let's get back on topic; we're getting a bit derailed here. The original topic is Intel's 14th gen stability issues, not a debate about ISAs and power efficiency.
*
More like the original topic was a pissing match between AMD fanbois and Intel fanbois. Both sides are the real losers when each has its own hardware issues and better CPU uarchs exist.
chocobo7779
post May 11 2024, 04:19 PM

QUOTE(babylon52281 @ May 11 2024, 04:10 PM)
Fully agree with you. Without being tied to x86 legacy, Apple could wipe the slate clean with a new CPU design, and they clearly showed what a truly modern CPU uarch can do on the current manufacturing process. Intel/AMD x86 will need some newfangled, complex and expensive SoC layout or exotic materials to push speeds higher, and all that just to match even the current M3/M4 built on the same mature process.

chocobo7779
I don't fully agree on why Apple charges that much. Yes, their CPU SoC is much larger, but the cost to make it doesn't scale up anywhere near what you pay. Apple charges rip-off prices simply because it's Apple, and at their sheer volume the per-unit cost of their chips isn't all that different, since these also go into iPhones.
*
QUOTE
without being tied to x86 legacy

Again, the ISA doesn't really matter, especially when you consider how complex modern CPUs are. The x86 bloat is mostly vestigial at this point and doesn't really affect the ability to make highly efficient chips.

QUOTE
Intel/AMD x86 will need some newfangled, complex and expensive SoC layout to push speeds higher

Well, they already do; see the AMD APUs in the PS5/Series X consoles for an example (not directly comparable to Apple Silicon, as AMD has to design them with a budget in mind; after all, those consoles cost around USD 500 and both Sony/Microsoft still sell them at a loss and recoup it through game sales and subscriptions).

This post has been edited by chocobo7779: May 11 2024, 04:25 PM
babylon52281
post May 11 2024, 04:22 PM

QUOTE(chocobo7779 @ May 11 2024, 04:18 PM)
Yeah, but that'll require wider, larger core designs, which are not area efficient and will need larger dies, unless you want AMD/Intel to cannibalize their far more profitable server/HPC business, that is.
*
I'm advocating more for fully realising an ARM uarch as a desktop equivalent, or else a new CPU uarch built from scratch without the inefficient legacy (hello, RISC-V?).
chocobo7779
post May 11 2024, 04:30 PM

QUOTE(babylon52281 @ May 11 2024, 04:22 PM)
I'm advocating more for fully realising an ARM uarch as a desktop equivalent, or else a new CPU uarch built from scratch without the inefficient legacy (hello, RISC-V?).
*
Again, no. According to Jim Keller, who worked at AMD/Apple/DEC/PA Semi (probably one of the best acquisitions Apple made) and is responsible for CPUs like the AMD K8:

QUOTE
JK: [Arguing about instruction sets] is a very sad story. It's not even a couple of dozen [op-codes] - 80% of core execution is only six instructions - you know, load, store, add, subtract, compare and branch. With those you have pretty much covered it. If you're writing in Perl or something, maybe call and return are more important than compare and branch. But instruction sets only matter a little bit - you can lose 10%, or 20%, [of performance] because you're missing instructions.
QUOTE
JK: I care a little. Here's what happened - so when x86 first came out, it was super simple and clean, right? Then at the time, there were multiple 8-bit architectures: x86, the 6800, the 6502. I programmed probably all of them way back in the day. Then x86, oddly enough, was the open version. They licensed that to seven different companies. Then that gave people opportunity, but Intel surprisingly licensed it. Then they went to 16 bits and 32 bits, and then they added virtual memory, virtualization, security, then 64 bits and more features. So what happens to an architecture as you add stuff, you keep the old stuff so it's compatible.

So when Arm first came out, it was a clean 32-bit computer. Compared to x86, it just looked way simpler and easier to build. Then they added a 16-bit mode and the IT (if then) instruction, which is awful. Then [they added] a weird floating-point vector extension set with overlays in a register file, and then 64-bit, which partly cleaned it up. There was some special stuff for security and booting, and so it has only got more complicated.

Now RISC-V shows up and it's the shiny new cousin, right? Because there's no legacy. It's actually an open instruction set architecture, and people build it in universities where they don’t have time or interest to add too much junk, like some architectures have. So relatively speaking, just because of its pedigree, and age, it's early in the life cycle of complexity. It's a pretty good instruction set, they did a fine job. So if I was just going to say if I want to build a computer really fast today, and I want it to go fast, RISC-V is the easiest one to choose. It’s the simplest one, it has got all the right features, it has got the right top eight instructions that you actually need to optimize for, and it doesn't have too much junk.


https://www.anandtech.com/show/16762/an-ana...person-at-tesla

...and it's not like ARM is a 'bloat-free' ISA either.

RISC-V? Probably, but unless there is a commercially available chip with a large enough software library, there's little reason for that ISA to take off. Note that the success of an ISA goes way beyond performance/efficiency; that's why back in the 1990s Intel was able to see off a lot of alternative ISAs (such as Alpha/MIPS/Itanium/SPARC) despite those ISAs being far superior to x86, thanks to its strong installed base and the massive economies of scale it offers.
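For a feel of what Keller means above about a handful of op types dominating, here's a toy tally (illustrative only; the 'instruction' categories are hand-assigned for this sketch, not produced by a real disassembler or tied to any real ISA) of a simple array-sum loop:

CODE
# Toy illustration: a hot loop spends nearly all its time in loads, adds,
# compares and branches.

from collections import Counter

def sum_array(values):
    counts = Counter()
    total, i = 0, 0
    while True:
        counts["compare"] += 1        # i < len(values) ?
        counts["branch"] += 1         # loop back or fall through
        if i >= len(values):
            break
        counts["load"] += 1           # read values[i]
        counts["add"] += 1            # total += values[i]
        total += values[i]
        counts["add"] += 1            # i += 1
        i += 1
    counts["store"] += 1              # write back the result
    return total, counts

total, counts = sum_array(list(range(1000)))
print(total, counts.most_common())    # adds, loads, compares and branches dominate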

This post has been edited by chocobo7779: May 11 2024, 04:39 PM
babylon52281
post May 11 2024, 04:35 PM

QUOTE(chocobo7779 @ May 11 2024, 04:19 PM)
Again, the ISA doesn't really matter, especially when you consider how complex modern CPUs are. The x86 bloat is mostly vestigial at this point and doesn't really affect the ability to make highly efficient chips.
Well, they already do; see the AMD APUs in the PS5/Series X consoles for an example (not directly comparable to Apple Silicon, as AMD has to design them with a budget in mind; after all, those consoles cost around USD 500 and both Sony/Microsoft still sell them at a loss and recoup it through game sales and subscriptions).
*
TSMC don't actually play that game. Why? Because they lead the node-shrink pack, so without cutting prices they already have a queue of customers willing to throw money at each new node.

What they do is preferential batch allocation: the first batches on a new node are sold at the highest markup (invariably Apple gets first pick), and the markup comes down as subsequent batches are fulfilled. Those playing catch-up pay less, but it's the release of these new nodes that matters, because many companies tie their product launches to a CPU/SoC built on each cutting-edge node. Nobody will launch a flagship phone with a CPU on an older node; good luck to them if they do, because it's simply bad marketing.
hashtag2016
post May 11 2024, 04:56 PM

QUOTE(babylon52281 @ May 11 2024, 03:36 PM)
Lol, fanbois and haters got burned by the truth and you folks don't like it; well, tough. Yes, you can voice your opinion, but doing that while conveniently ignoring that others have opinions too just means you're a fanboi.

Intel will release an official reply, just as AMD did with their own CPU burn case; that is a given.

Stop with the pissing posts and then maybe people will have some respect for what you say.
*
So what is your problem anyway? Which posts are pissing you off? Mind explaining?

p/s: Intel screwing up and losing face is not our problem; we are only consumers...

This post has been edited by hashtag2016: May 11 2024, 04:58 PM
hashtag2016
post May 11 2024, 05:03 PM

QUOTE(babylon52281 @ May 11 2024, 04:10 PM)
Fully agree with you. Without being tied to x86 legacy, Apple could wipe the slate clean with a new CPU design, and they clearly showed what a truly modern CPU uarch can do on the current manufacturing process. Intel/AMD x86 will need some newfangled, complex and expensive SoC layout or exotic materials to push speeds higher, and all that just to match even the current M3/M4 built on the same mature process.

chocobo7779
I don't fully agree on why Apple charges that much. Yes, their CPU SoC is much larger, but the cost to make it doesn't scale up anywhere near what you pay. Apple charges rip-off prices simply because it's Apple, and at their sheer volume the per-unit cost of their chips isn't all that different, since these also go into iPhones.
*
x86 is an important legacy, and I would rather it stay than go.
If x86 is no more, very likely PC DIY is no more either.
babylon52281
post May 11 2024, 05:30 PM

QUOTE(hashtag2016 @ May 11 2024, 04:56 PM)
So what is your problem anyway? Which posts are pissing you off? Mind explaining?

p/s: Intel screwing up and losing face is not our problem; we are only consumers...
*
Look back at your own posts and you'll know. You've made your opinion known, okay, so just leave it at that, or else you're just trolling here.
hashtag2016
post May 11 2024, 05:33 PM

QUOTE(babylon52281 @ May 11 2024, 05:30 PM)
Look back at your own posts and you'll know. You've made your opinion known, okay, so just leave it at that, or else you're just trolling here.
*
What post? I have many posts recently... nothing special so far.
chocobo7779
post May 11 2024, 11:51 PM

QUOTE(hashtag2016 @ May 11 2024, 05:03 PM)
x86 is an important legacy, and I would rather it stay than go.
If x86 is no more, very likely PC DIY is no more either.
*
Socketed ARM chips do exist, but yeah, you don't really have to worry about the state of x86 right now.

One of the reasons x86 couldn't compete with ARM chips on efficiency is that there was really not much innovation or progress in the x86 realm (especially in power efficiency) for over a decade, due to a severe lack of competition. AMD's Phenom series was lukewarm at best from a performance/price standpoint, and Bulldozer was a huge disaster, to the point of driving AMD towards near bankruptcy and probably leading Intel to make only small iterations on its CPUs. It wasn't until 2017 that Ryzen arrived on the scene, and even then AMD didn't return to proper competition across the board until Zen 2.

On the other hand, Apple has been designing low-power ARM chips for iPhones and iPads for over a decade now (they have been dabbling in semiconductors since the early 1980s), and their offerings have significantly outperformed many Android competitors' SoCs for years, even today. In doing so they have gathered a lot of know-how on designing high-performance, low-power SoCs; coupled with the stagnant x86 market, it's not hard to see why they switched to their in-house silicon (note that the M-series chips are not just scaled-up A-series chips).

The x86 incumbents, by comparison, haven't really focused on efficiency-minded chips, with their design targets being mostly performance and area efficiency. Power efficiency for x86 was something of a niche concern back then, outside of netbooks/subnotebooks (remember those?).

That being said, it seems both AMD and Intel are starting to focus on power efficiency in mobile chips (see Phoenix and the upcoming Strix Point APUs, along with Intel's new Meteor Lake chips). It certainly isn't as groundbreaking as Apple Silicon, but it's a good stepping stone after many years of stagnation (note that chip design and manufacturing can be incredibly time-intensive).

This post has been edited by chocobo7779: May 11 2024, 11:51 PM
1024kbps
post May 12 2024, 01:47 PM

QUOTE(stella_purple @ May 11 2024, 02:42 AM)
The AMD one is worse; it may turn into a fire hazard


*
The AMD CPUs burned because of overclocking; the Intel ones fail at stock settings.

Don't you see the difference?

