
Intel 13th/14th gen CPUs crashing, degrading

TSimbibug
post Apr 15 2024, 03:01 PM, updated 2y ago

Regular
******
Senior Member
1,697 posts

Joined: Jan 2013


Around the 10th gen or so, Intel seems to have been juicing up their CPUs in order to look competitive with Ryzen CPUs.
And Intel has done this sort of thing in the past, just not to the extent of what is going on now.

This problem has only gotten worse since. The current 13th/14th gen CPUs will be degraded at stock settings because Intel ships ridiculously high power limits. The default limits are already very high, and the 'extreme' ICCmax of 400A for 150W TDP 13th/14th gen CPUs is clearly crazy high. The average PC builder or gamer is not going to know that leaving the settings at default will permanently degrade their CPU in a few short months.
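Rough back-of-the-envelope on why 400A reads as crazy: it's just P = V x I, and the load voltages below are my own guesses, not Intel specs:

# Back-of-the-envelope: the power ceiling a 400 A ICCmax implies.
# The Vcore values below are illustrative assumptions, not Intel specifications.
iccmax_amps = 400
for vcore in (1.10, 1.25, 1.40):  # plausible full-load voltages (assumed)
    watts = vcore * iccmax_amps   # P = V * I
    print(f"{vcore:.2f} V x {iccmax_amps} A = {watts:.0f} W theoretical ceiling")

Even at a modest 1.25V that works out to around 500W of headroom before the current limit kicks in.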

Recently Nvidia pushed back by telling users with Raptor Lake CPUs to contact Intel after getting "out of video memory" errors.
https://www.tomshardware.com/pc-components/...t-intel-support
And even worse, it looks like the Raptor Lake CPUs have degraded within a few months to the point where crashing occurs.
https://www.lowyat.net/2024/320284/gamers-r...-cpus-en-masse/

Sad to see how Intel has sunk to the level of blaming mobo manufacturers for not enforcing limits LOWER than Intel's own specs for long-term reliability.
awol
post Apr 15 2024, 03:05 PM

Enthusiast
*****
Junior Member
910 posts

Joined: Jun 2007
From: Selangor
intel is history.
Zen is da future.
Duckies
post Apr 15 2024, 03:11 PM

Rubber Ducky
*******
Senior Member
9,789 posts

Joined: Jun 2008
From: Rubber Duck Pond


Yes...and I have to undervolt + limit power on my 14700K to prevent high temperatures... dammit

In fact AMD CPUs and GPUs are now the best performance/price. If not for Intel's good marketing...

SUSifourtos
post Apr 15 2024, 03:25 PM

Look at all my stars!!
*******
Senior Member
2,256 posts

Joined: Feb 2012



If you live long enough....

Crazy power consumption, high temperature, slow = AMD.
Bulldozer... FX-8350...

AMD is the New Intel.
Intel now is the Old AMD.

Decided to off-load Intel....
( shorting NVIDIA moment is coming )
pandah
post Apr 15 2024, 03:28 PM

Enthusiast
*****
Senior Member
719 posts

Joined: Jul 2011

QUOTE(Duckies @ Apr 15 2024, 03:11 PM)
Yes...and I have to undervolt + limit power on my 14700K to prevent high temperatures... dammit

In fact AMD CPUs and GPUs are now the best performance/price. If not for Intel's good marketing...
*
big reduction in performance? Or generally not noticeable?
TSimbibug
post Apr 15 2024, 03:29 PM

Regular
******
Senior Member
1,697 posts

Joined: Jan 2013


QUOTE(Duckies @ Apr 15 2024, 03:11 PM)
Yes...and I have to undervolt + limit power on my 14700K to prevent high temperatures... dammit

In fact AMD CPUs and GPUs are now the best performance/price. If not for Intel's good marketing...
*
The 14700 is not a bad CPU, it's just that Intel wants to win and be no. 1 at everything at any cost, it seems. Lowering the PL1/PL2/Icc is not going to destroy performance; it will still be good and generate a lot less heat.
Duckies
post Apr 15 2024, 03:31 PM

Rubber Ducky
*******
Senior Member
9,789 posts

Joined: Jun 2008
From: Rubber Duck Pond


QUOTE(pandah @ Apr 15 2024, 03:28 PM)
big reduction in performance? Or generally not noticeable?
*
Not noticeable when I did benchmarking, but the temperature reduction is a lot: from going over 100°C and getting thermally throttled before the manual tuning, to around 90°C after tuning. And this is with an AIO, no less.
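If anyone wants to compare before/after the same way, a minimal Python sketch with psutil works (Linux; the "coretemp" sensor name is typical for Intel there and is an assumption, adjust for your system):

# Log the hottest CPU temperature sensor once a second for about a minute.
import time
import psutil

for _ in range(60):
    readings = psutil.sensors_temperatures().get("coretemp", [])
    if readings:
        print(f"hottest sensor: {max(t.current for t in readings):.0f} C")
    time.sleep(1)

Run it once at stock and once after the undervolt/power limit, then compare the peaks.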

QUOTE(imbibug @ Apr 15 2024, 03:29 PM)
The 14700 is not a bad CPU, it's just that Intel wants to win and be no. 1 at everything at any cost, it seems. Lowering the PL1/PL2/Icc is not going to destroy performance; it will still be good and generate a lot less heat.
*
Exactly, but the problem is the general consumer doesn't know, or won't know, about these things... they just want to use it out of the box, and seeing this kind of temperature will scare the shit out of them. In fact it scared the shit out of me too, the first time I saw the temperature.

BL98
post Apr 15 2024, 03:31 PM

Enthusiast
*****
Junior Member
742 posts

Joined: Sep 2020


QUOTE(ifourtos @ Apr 15 2024, 03:25 PM)
If you live long enough....

Crazy power consumption, high temperature, slow = AMD.
Bulldozer... FX-8350...

AMD is the New Intel.
Intel now is the Old AMD.
Decided to off-load Intel....
( shorting NVIDIA moment is coming )
*
Time to buy AMD stock or better to top up NVDA?
babylon52281
post Apr 15 2024, 04:51 PM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(imbibug @ Apr 15 2024, 03:01 PM)
Around the 10th gen or so, Intel seems to have been juicing up their CPUs in order to look competitive with Ryzen CPUs.
And Intel has done this sort of thing in the past, just not to the extent of what is going on now.

This problem has only gotten worse since. The current 13th/14th gen CPUs will be degraded at stock settings because Intel ships ridiculously high power limits. The default limits are already very high, and the 'extreme' ICCmax of 400A for 150W TDP 13th/14th gen CPUs is clearly crazy high. The average PC builder or gamer is not going to know that leaving the settings at default will permanently degrade their CPU in a few short months.

Recently Nvidia pushed back by telling users with Raptor Lake CPUs to contact Intel after getting "out of video memory" errors.
https://www.tomshardware.com/pc-components/...t-intel-support
And even worse, it looks like the Raptor Lake CPUs have degraded within a few months to the point where crashing occurs.
https://www.lowyat.net/2024/320284/gamers-r...-cpus-en-masse/

Sad to see how Intel has sunk to the level of blaming mobo manufacturers for not enforcing limits LOWER than Intel's own specs for long-term reliability.
*
Whoever buys an i9 purely for games really has more money than sense. It's more of a HEDT CPU; if you want a proper gaming CPU, go for the i7, which doesn't seem to be affected.

Anyway, neither team is being honest about power draw. Of course Intel is much worse, simply because their silicon quality lets K SKUs draw more power than spec, but it's also the mobo makers' fault for pushing the CPU beyond reasonable limits when removing power limiters.

Basically, a K-SKU CPU is a car without brakes, and then certain mobo makers remove the RPM limiter too. What happens? The car will go as fast as it physically can until it crashes, and the same goes for an unlimited CPU: it figuratively crashes.

Will it degrade the CPU? Yes, eventually, but unless you're constantly running 24/7 at the max turbo limit it shouldn't die so soon. Games typically don't even push the CPU to its max capacity unless there's some weird mismatched combo that hits a hard CPU bottleneck.
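That last claim is easy to check on your own rig by logging per-core load while a game runs. A minimal psutil sketch (nothing here is specific to any particular game or CPU):

# Sample per-core CPU load once a second; games rarely pin every core.
import psutil

for _ in range(30):
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)
    print(f"busiest core: {max(per_core):3.0f}%  average: {sum(per_core) / len(per_core):3.0f}%")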
babylon52281
post Apr 15 2024, 05:04 PM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(Duckies @ Apr 15 2024, 03:11 PM)
Yes...and I have to undervolt + limit power on my 14700K to prevent high temperatures... dammit

In fact AMD CPUs and GPUs are now the best performance/price. If not for Intel's good marketing...
*
If you use an O11 fish-tank case, sure it will easily overheat, because these cases have piss-poor airflow. Get a Lancool 3 with the same parts inside and the temps will be much different.

babylon52281
post Apr 15 2024, 05:06 PM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(BL98 @ Apr 15 2024, 03:31 PM)
Time to buy AMD stock or better to top up NVDA?
*
Buy ARM or Qualcomm stocks. Their CPUs destroy both x86 players in terms of power efficiency. ARM is the future.
Duckies
post Apr 15 2024, 05:08 PM

Rubber Ducky
*******
Senior Member
9,789 posts

Joined: Jun 2008
From: Rubber Duck Pond


QUOTE(babylon52281 @ Apr 15 2024, 05:04 PM)
If you use an O11 fish-tank case, sure it will easily overheat, because these cases have piss-poor airflow. Get a Lancool 3 with the same parts inside and the temps will be much different.
*
Fish-tank cases are prettier. At first I even wanted to go for the O11 Vision... that one's airflow is even worse.


babylon52281
post Apr 15 2024, 05:52 PM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(Duckies @ Apr 15 2024, 05:08 PM)
Fish-tank cases are prettier. At first I even wanted to go for the O11 Vision... that one's airflow is even worse.
*
You traded performance for looks, so don't complain.

In fish tanks you cannot push to the max, so it's really a balance of keeping temps in check just for the looks. For such usage I would prefer a non-K CPU: fewer temp headaches.

TristanX
post Apr 15 2024, 07:00 PM

Where is my stars?
Elite
24,334 posts

Joined: Nov 2004
From: Setapak, Kuala Lumpur


QUOTE(imbibug @ Apr 15 2024, 03:01 PM)
Around the 10th gen or so, Intel seems to be juicing up their cpus in order to look competitive with Ryzen cpus.
user posted image
And Intel has done this sort of thing in the past, just not to the extent of what is going on now.

This problem has just gotten way worse since. The current 13th/14th gen cpus will be degraded at stock settings because Intel has ridiculously high power limits. The default limits are already very high and the 'extreme' ICCmax for 150W TDP 13th/14th gen cpus is 400A is clearly crazy high. The average pc builder or gamer is not going to know that leaving the settings at default will degrade their cpus permanently in a few short months.

Recently Nvidia pushed back by telling users with Raptor Lake cpus to contact Intel after getting "out of video memory" errors.
https://www.tomshardware.com/pc-components/...t-intel-support
And even worse, it looks like the Raptor Lake cpus have degraded in a few months to a point where crashing occurs.
https://www.lowyat.net/2024/320284/gamers-r...-cpus-en-masse/

Sad to see how Intel sunk to to the level where they are putting the blame on mobo manufacturers for not enforcing limits LOWER than their own specs for long term reliability.
*
Read the Tom's Hardware news properly. 4096W is not Intel's official power limit; 253W is the limit for the 13900K and 14900K. Motherboard vendors are pushing too much performance on already "overclocked" chips.

You don't need the power limit removed to get the most out of these chips either.

https://www.techpowerup.com/review/intel-co...-14900k/18.html

It can be tuned to be very efficient too.
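To put "tuned to be very efficient" in concrete terms, here is a toy points-per-watt comparison; the scores and package power figures are placeholders, not TechPowerUp's measurements:

# Crude efficiency metric: benchmark score divided by package power.
runs = {
    "stock 253 W limit": {"score": 40000, "package_watts": 253},
    "tuned 125 W limit": {"score": 34000, "package_watts": 125},
}
for name, run in runs.items():
    print(f"{name}: {run['score'] / run['package_watts']:.0f} points/W")

If a tuned run keeps ~85% of the score at half the power, the points-per-watt figure nearly doubles.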


Duckies
post Apr 15 2024, 09:49 PM

Rubber Ducky
*******
Senior Member
9,789 posts

Joined: Jun 2008
From: Rubber Duck Pond


QUOTE(babylon52281 @ Apr 15 2024, 05:52 PM)
You traded performance for looks, so don't complain.

In fish tanks you cannot push to the max, so it's really a balance of keeping temps in check just for the looks. For such usage I would prefer a non-K CPU: fewer temp headaches.
*
Haha, it's not the best but certainly not the worst. In fact I can still fit 7 case fans + 3 fans for the AIO.

Also, I don't really need to push the CPU to the max, since games nowadays lean heavily on the GPU itself... so I can keep a balance between temperature and performance for the CPU, which is good enough.

lolzcalvin
post Apr 15 2024, 11:25 PM

shibe in predicament
******
Senior Member
1,586 posts

Joined: Mar 2014
From: 75°26'11.6"S, 136°16'16.0"E


QUOTE(Duckies @ Apr 15 2024, 03:11 PM)
Yes...and I have to undervolt + limit power on my 14700K to prevent high temperatures... dammit

In fact AMD CPUs and GPUs are now the best performance/price. If not for Intel's good marketing...
*
Actually AMD GPUs are the ones with the best price/performance ratio, but CPU-wise it's a tie with Intel. Intel still slightly edges out in the lower-tier ones.
I went Intel because the 7950X and 7950X3D were more expensive than the 13900K...and all the X670 mobos cost both arms and a left nut.
TSimbibug
post Apr 16 2024, 09:44 PM

Regular
******
Senior Member
1,697 posts

Joined: Jan 2013


QUOTE(babylon52281 @ Apr 15 2024, 04:51 PM)
Whoever buys an i9 purely for games really has more money than sense. It's more of a HEDT CPU; if you want a proper gaming CPU, go for the i7, which doesn't seem to be affected.

Anyway, neither team is being honest about power draw. Of course Intel is much worse, simply because their silicon quality lets K SKUs draw more power than spec, but it's also the mobo makers' fault for pushing the CPU beyond reasonable limits when removing power limiters.

Basically, a K-SKU CPU is a car without brakes, and then certain mobo makers remove the RPM limiter too. What happens? The car will go as fast as it physically can until it crashes, and the same goes for an unlimited CPU: it figuratively crashes.

Will it degrade the CPU? Yes, eventually, but unless you're constantly running 24/7 at the max turbo limit it shouldn't die so soon. Games typically don't even push the CPU to its max capacity unless there's some weird mismatched combo that hits a hard CPU bottleneck.
*
Whether or not the i9 is overpowered for games is beside the point; the user might be using it for programs other than games.
There are more and more reports that 13th and 14th gen chips get degraded at factory settings even under normal use:
"However, despite all the power they wield, gamers in South Korea were returning them en masse, all because of a problem arising through the fighting game Tekken 8......
Other reports point to a combination of a 13900K or 14900K with the top-tier RTX 4090 inducing crashes in-game, just a few months after being assembled or paired together."
TSimbibug
post Apr 16 2024, 10:15 PM

Regular
******
Senior Member
1,697 posts

Joined: Jan 2013


QUOTE(TristanX @ Apr 15 2024, 07:00 PM)
Read the Tom's Hardware news properly. 4096W is not Intel's official power limit; 253W is the limit for the 13900K and 14900K. Motherboard vendors are pushing too much performance on already "overclocked" chips.

You don't need the power limit removed to get the most out of these chips either.

https://www.techpowerup.com/review/intel-co...-14900k/18.html

It can be tuned to be very efficient too.
*
I wasn't referring to THW about the official power limit. Intel's early review-sample NDA document said >253W for PL1/PL2 and about 400A ICCmax for their 'extreme profile'. Even the default 253W stock limit seems to be in overclocking territory for 125W TDP chips.

There are way too many reports of instability and degraded chips for 13th/14th gen CPUs, some of them happening even after setting stock Intel power limits.
The Zen 4 CPUs seem to do better in apps other than gaming, but the big difference is in long-term stability and reliability. Being a bit slower is a ton better than an unusable, unstable CPU that has prematurely degraded.


TristanX
post Apr 16 2024, 10:33 PM

Where is my stars?
Elite
24,334 posts

Joined: Nov 2004
From: Setapak, Kuala Lumpur


QUOTE(imbibug @ Apr 16 2024, 10:15 PM)
I wasn't referring to THW about the official power limit. Intel's early review-sample NDA document said >253W for PL1/PL2 and about 400A ICCmax for their 'extreme profile'. Even the default 253W stock limit seems to be in overclocking territory for 125W TDP chips.

There are way too many reports of instability and degraded chips for 13th/14th gen CPUs, some of them happening even after setting stock Intel power limits.
The Zen 4 CPUs seem to do better in apps other than gaming, but the big difference is in long-term stability and reliability. Being a bit slower is a ton better than an unusable, unstable CPU that has prematurely degraded.
*
Degradation happened at 4096W, I guess. The damage is already done; going back to stock limits won't fix it.

I had bad experiences with the Ryzen 7 1700 and 5900X, and no issues with the 2700X and 3900X. For AMD setups, never buy at launch: you will be a beta tester due to not enough testing, and bugs will still be present. The 5900X was my worst, and I used it for one year. You can get potato chips if you are unlucky.

One of the cases...
https://forum.lowyat.net/index.php?showtopic=5022425

TristanX
post Apr 16 2024, 10:33 PM

Where is my stars?
Elite
24,334 posts

Joined: Nov 2004
From: Setapak, Kuala Lumpur


For some reason, double posted.

hashtag2016
post Apr 20 2024, 01:32 PM

On my way
****
Junior Member
500 posts

Joined: Feb 2016
Don't worry, be happy... I think more and more mobo makers will include something like an Intel DEFAULT setting to help users troubleshoot their PC issues (if any).
Since Asus has already introduced an "Intel Baseline Profile" option in their latest BIOS... it is a positive move, I think.

Although you have to choose between stability and performance...
https://twitter.com/9550pro/status/1781481593972129929

p/s: IMHO, I think even 12th gen and some of the non-K chips might also face similar issues. (That's just my personal thought, don't quote me please.)

babylon52281
post Apr 20 2024, 09:41 PM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(hashtag2016 @ Apr 20 2024, 01:32 PM)
Don't worry, be happy... I think more and more mobo makers will include something like an Intel DEFAULT setting to help users troubleshoot their PC issues (if any).
Since Asus has already introduced an "Intel Baseline Profile" option in their latest BIOS... it is a positive move, I think.

Although you have to choose between stability and performance...
https://twitter.com/9550pro/status/1781481593972129929

p/s: IMHO, I think even 12th gen and some of the non-K chips might also face similar issues. (That's just my personal thought, don't quote me please.)
*
So far no stability issues with my PL2-unleashed non-K 12700F; it's been running for 1.5 years on my ASRock B660M PG Riptide mobo. In games it doesn't really hit its max turbo power limit, so I'm still contemplating whether to do a BCLK OC to hit 12700K speeds.
Muusyc
post Apr 21 2024, 02:40 PM

Casual
***
Junior Member
352 posts

Joined: Oct 2021
QUOTE(ifourtos @ Apr 15 2024, 03:25 PM)
If you live long enough....

Crazy power consumption, high temperature, slow = AMD.
Bulldozer... FX-8350...

AMD is the New Intel.
Intel now is the Old AMD.
Decided to off-load Intel....
( shorting NVIDIA moment is coming )
*
And the ones who get fooled by both are the consumers themselves. This is just how these companies play the market.
SUSifourtos
post Apr 21 2024, 03:34 PM

Look at all my stars!!
*******
Senior Member
2,256 posts

Joined: Feb 2012



QUOTE(Muusyc @ Apr 21 2024, 02:40 PM)
And the ones who get fooled by both are the consumers themselves. This is just how these companies play the market.
*
You mean Intel purposely releases shit products to let AMD lead the market?

Also like how Tesla produces lower-value, higher-priced EVs compared to BYD, to let it grow bigger?


Baconateer
post Apr 21 2024, 03:39 PM

Meh..... (TM)
*******
Senior Member
5,088 posts

Joined: Jun 2013
From: Blue Planet


I'm glad I went with AMD AM5.

Considered getting a 12/13th gen i5 1X400 chip...

But the lack of an upgrade path after 14th gen made me reconsider.
stella_purple
post Apr 21 2024, 05:36 PM

Casual
***
Junior Member
392 posts

Joined: Oct 2011
QUOTE(Baconateer @ Apr 21 2024, 03:39 PM)
I'm glad I went with AMD AM5.

Considered getting a 12/13th gen i5 1X400 chip...

But the lack of an upgrade path after 14th gen made me reconsider.
*
AM5 got its own problems as well.
Baconateer
post Apr 21 2024, 05:42 PM

Meh..... (TM)
*******
Senior Member
5,088 posts

Joined: Jun 2013
From: Blue Planet


QUOTE(stella_purple @ Apr 21 2024, 05:36 PM)
AM5 got its own problems as well.
*
For me... haven't encountered any.
hashtag2016
post Apr 21 2024, 08:18 PM

On my way
****
Junior Member
500 posts

Joined: Feb 2016
QUOTE(babylon52281 @ Apr 20 2024, 09:41 PM)
So far no stability issues with my PL2-unleashed non-K 12700F; it's been running for 1.5 years on my ASRock B660M PG Riptide mobo. In games it doesn't really hit its max turbo power limit, so I'm still contemplating whether to do a BCLK OC to hit 12700K speeds.
*
I think it is a good feature, although not everybody needs to use it.
At least if somebody thinks their build has some stability issue even without XMP turned on, they can just flip this switch.
And if the problem still persists, they can go straight to claiming warranty (this is only my opinion), no need to argue with the PC shop or any seller.

Anyway, we still need to wait for the official investigation result from Intel, to see if they find anything interesting.

babylon52281
post Apr 21 2024, 10:14 PM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(Baconateer @ Apr 21 2024, 03:39 PM)
I'm glad I went with AMD AM5.

Considered getting a 12/13th gen i5 1X400 chip...

But the lack of an upgrade path after 14th gen made me reconsider.
*
Futureproofing is a fallacy. Recall the AM4 300-series to 400-series issues, and also their limited growth (no PCIe 4, no USB 3.2) by today's standards, even if you can technically still "upgrade" to a newer CPU.

First-gen AM5 is held back by limited DDR5 speeds. This will get old fast as DDR5 tech matures and speeds go over 10k.

Do not buy for something you think might happen in the future; just buy the best for today.

babylon52281
post Apr 21 2024, 10:20 PM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(hashtag2016 @ Apr 21 2024, 08:18 PM)
I think it is a good feature, although not everybody needs to use it.
At least if somebody thinks their build has some stability issue even without XMP turned on, they can just flip this switch.
And if the problem still persists, they can go straight to claiming warranty (this is only my opinion), no need to argue with the PC shop or any seller.

Anyway, we still need to wait for the official investigation result from Intel, to see if they find anything interesting.
*
Not sure what XMP has got to do with it. Both K-SKU and non-K chips can run XMP at default settings (with the right mobo), and the issues are more about people running their CPUs way over the default power limits because their mobos auto-set OC mode when they detect a K SKU installed.
hashtag2016
post Apr 21 2024, 10:33 PM

On my way
****
Junior Member
500 posts

Joined: Feb 2016
QUOTE(babylon52281 @ Apr 21 2024, 10:20 PM)
Not sure what XMP has got to do with it. Both K-SKU and non-K chips can run XMP at default settings (with the right mobo), and the issues are more about people running their CPUs way over the default power limits because their mobos auto-set OC mode when they detect a K SKU installed.
*
XMP is memory overclocking... it may or may not cause some of the stability issues.
To rule that out, of course, we need to turn XMP off while troubleshooting a stability issue.

For people who think their CPU is fine, of course they can do whatever they want with their own PC.
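For the RAM side specifically, a proper tool like MemTest86 is the real answer, but as a quick smoke test you can re-checksum a fixed block of memory; this toy sketch is only illustrative, not a substitute for a real memory tester:

# Toy memory soak test: repeatedly re-read a constant block and compare checksums.
import numpy as np

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=256 * 1024 * 1024, dtype=np.uint8)  # ~256 MB
expected = int(block.sum())
for i in range(20):
    if int(block.sum()) != expected:
        print(f"checksum mismatch on pass {i}: possible memory instability")
        break
else:
    print("no mismatches (not proof of stability, just a smoke test)")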

1024kbps
post Apr 22 2024, 07:56 AM

李素裳
*******
Senior Member
6,010 posts

Joined: Feb 2007



QUOTE(awol @ Apr 15 2024, 03:05 PM)
intel is history.
Zen is da future.
*
ARM and RISC-V aren't too far behind.

If both AMD and Intel keep making mistakes, they will become history.
awol
post Apr 22 2024, 08:04 AM

Enthusiast
*****
Junior Member
910 posts

Joined: Jun 2007
From: Selangor
QUOTE(1024kbps @ Apr 22 2024, 07:56 AM)
ARM and RISC-V aren't too far behind.

If both AMD and Intel keep making mistakes, they will become history.
*
ARM maybe, but RISC-V? I'm tired of YT headlines saying RISC-V is the future; it's been that way for many years already.
Seems like only Intel makes mistakes while AMD makes money.
babylon52281
post Apr 22 2024, 09:43 AM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(awol @ Apr 22 2024, 08:04 AM)
ARM maybe, but RISC-V? I'm tired of YT headlines saying RISC-V is the future; it's been that way for many years already.
Seems like only Intel makes mistakes while AMD makes money.
*
AMD's mistake is keeping AM5 prices too f**king high. HUB recently did a review and the basement-budget AM5 mobos are basically all crap, so you cannot do a budget build with AM5 either, which is why AM4 is still hanging around and not getting replaced as it was supposed to be. This is a failure from a product-marketing POV.

And for a fabless maker, they have also failed to prioritise the software side of the CPU, leading to systems that are less stable (DOCP/EXPO, CCD priority) than an Intel build (and Intel still has to run a fab business). In that regard, heck, even Nvidia has a better grip on its various software ecosystems for the other uses of its GPUs.

Both have their faults, and surprisingly it is Apple, another fabless brand, that is showing how to design CPU hardware with their superb M2 Ultra, with near-flawless software integration. And then there is ARM...

Apple, ARM > Intel, AMD

All you AMD & Intel fanbois can go home and cry.
awol
post Apr 22 2024, 09:55 AM

Enthusiast
*****
Junior Member
910 posts

Joined: Jun 2007
From: Selangor
QUOTE(babylon52281 @ Apr 22 2024, 09:43 AM)
AMD's mistake is keeping AM5 prices too f**king high. HUB recently did a review and the basement-budget AM5 mobos are basically all crap, so you cannot do a budget build with AM5 either, which is why AM4 is still hanging around and not getting replaced as it was supposed to be. This is a failure from a product-marketing POV.

And for a fabless maker, they have also failed to prioritise the software side of the CPU, leading to systems that are less stable (DOCP/EXPO, CCD priority) than an Intel build (and Intel still has to run a fab business). In that regard, heck, even Nvidia has a better grip on its various software ecosystems for the other uses of its GPUs.

Both have their faults, and surprisingly it is Apple, another fabless brand, that is showing how to design CPU hardware with their superb M2 Ultra, with near-flawless software integration. And then there is ARM...

Apple, ARM > Intel, AMD

All you AMD & Intel fanbois can go home and cry.
*
Not an AMD & Intel fanboi.
This year ARM laptops from Qualcomm will hit the market (again); we'll see how it goes.

I like macOS, but I don't like Apple.
babylon52281
post Apr 22 2024, 03:53 PM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(awol @ Apr 22 2024, 09:55 AM)
Not an AMD & Intel fanboi.
This year ARM laptops from Qualcomm will hit the market (again); we'll see how it goes.

I like macOS, but I don't like Apple.
*
Never had an Apple product whatsoever and never figured I would ever need one, but I can appreciate their innovations, pushing electronics design boundaries down to the CPU arch itself. I just don't agree with Apple's pricing for such innovations.
awol
post Apr 22 2024, 04:12 PM

Enthusiast
*****
Junior Member
910 posts

Joined: Jun 2007
From: Selangor
QUOTE(babylon52281 @ Apr 22 2024, 03:53 PM)
Never had an Apple product whatsoever and never figured I would ever need one, but I can appreciate their innovations, pushing electronics design boundaries down to the CPU arch itself. I just don't agree with Apple's pricing for such innovations.
*
Their logic is to control hardware and software.
Last time they depended on others for hardware; now they are able to control/design it themselves.
Apple aside, on Windows x86_64 is still better than ARM and RISC-V.
Muusyc
post Apr 22 2024, 10:32 PM

Casual
***
Junior Member
352 posts

Joined: Oct 2021
QUOTE(ifourtos @ Apr 21 2024, 03:34 PM)
You mean Intel purposely releases shit products to let AMD lead the market?

Also like how Tesla produces lower-value, higher-priced EVs compared to BYD, to let it grow bigger?
*
It is a roller coaster ride.
1024kbps
post Apr 23 2024, 02:08 PM

李素裳
*******
Senior Member
6,010 posts

Joined: Feb 2007



QUOTE(babylon52281 @ Apr 22 2024, 09:43 AM)
AMD's mistake is keeping AM5 prices too f**king high. HUB recently did a review and the basement-budget AM5 mobos are basically all crap, so you cannot do a budget build with AM5 either, which is why AM4 is still hanging around and not getting replaced as it was supposed to be. This is a failure from a product-marketing POV.

And for a fabless maker, they have also failed to prioritise the software side of the CPU, leading to systems that are less stable (DOCP/EXPO, CCD priority) than an Intel build (and Intel still has to run a fab business). In that regard, heck, even Nvidia has a better grip on its various software ecosystems for the other uses of its GPUs.

Both have their faults, and surprisingly it is Apple, another fabless brand, that is showing how to design CPU hardware with their superb M2 Ultra, with near-flawless software integration. And then there is ARM...

Apple, ARM > Intel, AMD

All you AMD & Intel fanbois can go home and cry.
*
Precisely. Apple Silicon uses the ARMv8 architecture; it's part of the ARM family.

ARM can be designed and produced without as many restrictions as x86; even if AMD were bought up by another corp, that corp couldn't continue to produce Zen the way AMD does.

Not to mention first place on the Top500 supercomputer list was once held by an ARM processor made by Fujitsu, and Amazon AWS servers now use their own Graviton CPUs, which are ARM too.

Qualcomm just needs to get its things done: it needs mass adoption from big software vendors, more game developer support, and binaries that work with the CPU without emulation.
It will take years, but it's probably worth the wait; the x86 CPU will become the secondary choice and slowly a thing of the past.

It's inevitable. Then the consumer market will have CPUs made by various vendors, not just AMD/Intel anymore.
lolzcalvin
post Apr 23 2024, 03:09 PM

shibe in predicament
******
Senior Member
1,586 posts

Joined: Mar 2014
From: 75°26'11.6"S, 136°16'16.0"E


QUOTE(awol @ Apr 22 2024, 04:12 PM)
on Windows x86_64 is still better than ARM and RISC-V.
*
For now. Microsoft is currently betting on the Snapdragon X Elite for their Windows on ARM, and the Snapdragon X Elite does look promising.

However, x86-64 may not be going anywhere soon, despite the rising popularity of RISC-based instruction set architectures (ISAs).

The reason Windows still requires x86-64 (for now) is that it has a lot of legacy code dependent on the x86 ISA; many wizards we use in Windows today date back to Win95. This is why x86-64 has hung around for so long: backwards compatibility over the past 20+ years. Hell, even Intel 8086-based software can run on modern x86 CPUs with little tweaking. People may attribute better efficiency to ARM, but it's really not about the difference in ISAs; it's about which direction CPU vendors push while designing their CPUs. For so long, both AMD and Intel have been pushing towards high performance (they compete in high-performance computing and put a lower priority on battery life), while Qualcomm and Apple have been pushing for efficiency (they compete in the mobile space and therefore have plenty of experience with low-power operation).

A good read on RISC vs CISC if you're into a bit of technicality: https://chipsandcheese.com/2021/07/13/arm-o...-doesnt-matter/
If you don't already know, x86-64 is CISC-based while ARM and RISC-V are RISC-based.
awol
post Apr 23 2024, 03:35 PM

Enthusiast
*****
Junior Member
910 posts

Joined: Jun 2007
From: Selangor
QUOTE(lolzcalvin @ Apr 23 2024, 03:09 PM)
For now. Microsoft is currently betting on the Snapdragon X Elite for their Windows on ARM, and the Snapdragon X Elite does look promising.
*
I agree with you. Let's see how the SD X Elite fares against Intel/AMD in laptops, and its performance against the M3 SoC.

Then again, it's still limited to laptops/mobile.
Power users still depend on high-end desktops.
babylon52281
post Apr 23 2024, 05:57 PM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(lolzcalvin @ Apr 23 2024, 03:09 PM)
For now. Microsoft is currently betting on the Snapdragon X Elite for their Windows on ARM, and the Snapdragon X Elite does look promising.

However, x86-64 may not be going anywhere soon, despite the rising popularity of RISC-based instruction set architectures (ISAs).

The reason Windows still requires x86-64 (for now) is that it has a lot of legacy code dependent on the x86 ISA; many wizards we use in Windows today date back to Win95. This is why x86-64 has hung around for so long: backwards compatibility over the past 20+ years. Hell, even Intel 8086-based software can run on modern x86 CPUs with little tweaking. People may attribute better efficiency to ARM, but it's really not about the difference in ISAs; it's about which direction CPU vendors push while designing their CPUs. For so long, both AMD and Intel have been pushing towards high performance (they compete in high-performance computing and put a lower priority on battery life), while Qualcomm and Apple have been pushing for efficiency (they compete in the mobile space and therefore have plenty of experience with low-power operation).

A good read on RISC vs CISC if you're into a bit of technicality: https://chipsandcheese.com/2021/07/13/arm-o...-doesnt-matter/
If you don't already know, x86-64 is CISC-based while ARM and RISC-V are RISC-based.
*
Windows on ARM has been tried before and failed (see Windows RT), mainly because there was no backwards portability of the current, vast Windows software library to ARM, so those who bought Windows-on-ARM PCs were disappointed that they could not run their usual software.
For Windows on ARM to succeed, they need easy portability or a majority of ARM-compatible software ready from day one.
1024kbps
post Apr 23 2024, 06:22 PM

李素裳
*******
Senior Member
6,010 posts

Joined: Feb 2007



In short, Qualcomm just needs to butt in to kick both x86 players awake. Just remember how high-end CPUs were stuck at quad f**king cores back when AMD was still the underdog.

The PC space needs competition; a duopoly is bad.
hashtag2016
post Apr 23 2024, 06:38 PM

On my way
****
Junior Member
500 posts

Joined: Feb 2016
Qualcomm is overrated; wait for Malaysia's 1st AI chip... B.I.J.A.K. (Business Intelligence Job Analytics Keystone)

(just kidding...)

TristanX
post Apr 23 2024, 07:26 PM

Where is my stars?
Elite
24,334 posts

Joined: Nov 2004
From: Setapak, Kuala Lumpur


QUOTE(1024kbps @ Apr 23 2024, 06:22 PM)
In short, Qualcomm just needs to butt in to kick both x86 players awake. Just remember how high-end CPUs were stuck at quad f**king cores back when AMD was still the underdog.

The PC space needs competition; a duopoly is bad.
*
It's not that easy. They're still limited by process nodes. Why would Intel and AMD push power until the chips run at 90-95°C depending on cooling? It's physical limitations. Die shrinks are becoming increasingly difficult as well, unless someone comes up with a new material for processors.
1024kbps
post Apr 23 2024, 10:08 PM

李素裳
*******
Senior Member
6,010 posts

Joined: Feb 2007



QUOTE(TristanX @ Apr 23 2024, 07:26 PM)
It's not that easy. They're still limited by process nodes. Why would Intel and AMD push power until the chips run at 90-95°C depending on cooling? It's physical limitations. Die shrinks are becoming increasingly difficult as well, unless someone comes up with a new material for processors.
*
They should design better architectures then; node shrinks can only improve thermals and power efficiency.
One of the issues is lack of optimization: your program takes longer to complete instead of running efficiently.

A lot of programs have already shifted their load to the GPU, e.g. web canvas, 3D rendering and font rendering already run on the GPU, as do video playback, image post-processing filters, etc.; they're much faster on the GPU and free up a lot of resources.
A better-coded program would use both the CPU and the GPU, but that's very rare 😕
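The split being described looks roughly like this in practice: take the GPU path when it's available and fall back to the CPU otherwise (CuPy for the GPU side is an assumption here; plain NumPy is the fallback):

# Use a GPU array library if present, otherwise do the same work on the CPU.
import numpy as np

try:
    import cupy as xp   # GPU-backed, NumPy-like API
    backend = "GPU (CuPy)"
except ImportError:
    xp = np             # CPU fallback
    backend = "CPU (NumPy)"

a = xp.random.rand(2048, 2048)
b = xp.random.rand(2048, 2048)
c = a @ b               # big matmul: the kind of work worth offloading
print(backend, "| checksum:", float(c.sum()))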
TristanX
post Apr 23 2024, 11:11 PM

Where is my stars?
Elite
24,334 posts

Joined: Nov 2004
From: Setapak, Kuala Lumpur


QUOTE(1024kbps @ Apr 23 2024, 10:08 PM)
They should design better architectures then; node shrinks can only improve thermals and power efficiency.
One of the issues is lack of optimization: your program takes longer to complete instead of running efficiently.

A lot of programs have already shifted their load to the GPU, e.g. web canvas, 3D rendering and font rendering already run on the GPU, as do video playback, image post-processing filters, etc.; they're much faster on the GPU and free up a lot of resources.
A better-coded program would use both the CPU and the GPU, but that's very rare 😕
*
The last major breakthrough was Sandy Bridge; it's been a while. Today, Intel is still behind when it comes to the hardware. They are still able to keep up, just with a lot more power.

There are candidates for new materials, like 100GHz nanocarbon I think, but I don't think anyone has been able to get it stable.

Lots of things to consider too. Now we have a lot of hackers cracking things, and security patches usually nerf the processors.
babylon52281
post Apr 24 2024, 10:31 AM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(TristanX @ Apr 23 2024, 11:11 PM)
The last major breakthrough was Sandy Bridge; it's been a while. Today, Intel is still behind when it comes to the hardware. They are still able to keep up, just with a lot more power.

There are candidates for new materials, like 100GHz nanocarbon I think, but I don't think anyone has been able to get it stable.

Lots of things to consider too. Now we have a lot of hackers cracking things, and security patches usually nerf the processors.
*
Arguably Alder Lake is also a major breakthrough, being the first consumer big.LITTLE hybrid x86 CPU; the next breakthrough would be Lunar Lake with a proper tiled-design desktop CPU.
hashtag2016
post May 2 2024, 01:58 PM

On my way
****
Junior Member
500 posts

Joined: Feb 2016
refer: https://www.igorslab.de/en/intel-releases-t...ty-issue-update

13th and 14th Generation K SKU Processor Instability Issue Update
Intel® has observed that this issue may be related to out of specification operating conditions resulting in sustained high voltage and frequency during periods of elevated heat.
Analysis of affected processors shows some parts experience shifts in minimum operating voltages which may be related to operation outside of Intel® specified operating conditions.

While the root cause has not yet been identified, Intel® has observed the majority of reports of this issue are from users with unlocked/overclock capable motherboards.
Intel® has observed 600/700 Series chipset boards often set BIOS defaults to disable thermal and power delivery safeguards designed to limit processor exposure to sustained periods of high voltage and frequency, for example:
– Disabling Current Excursion Protection (CEP)
– Enabling the IccMax Unlimited bit
– Disabling Thermal Velocity Boost (TVB) and/or Enhanced Thermal Velocity Boost (eTVB)
– Additional settings which may increase the risk of system instability:
– Disabling C-states
– Using Windows Ultimate Performance mode
– Increasing PL1 and PL2 beyond Intel® recommended limits
Intel® requests system and motherboard manufacturers to provide end users with a default BIOS profile that matches Intel® recommended settings.

Intel® strongly recommends customer’s default BIOS settings should ensure operation within Intel’s recommended settings.
In addition, Intel® strongly recommends motherboard manufacturers to implement warnings for end users alerting them to any unlocked or overclocking feature usage.
Intel® is continuing to actively investigate this issue to determine the root cause and will provide additional updates as relevant information becomes available.

Intel® will be publishing a public statement regarding issue status and Intel® recommended BIOS setting recommendations targeted for May 2024.
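If you want to sanity-check your own board against that list, the logic is simple enough to sketch. The field names and "current" values below are invented for illustration; only the thermal/power items come from Intel's list above and the 253W figure cited earlier in the thread:

# Toy checker: flag BIOS values that deviate from the recommended defaults.
intel_recommended = {
    "current_excursion_protection": True,  # CEP left enabled
    "iccmax_unlimited": False,             # IccMax Unlimited bit off
    "tvb_enabled": True,                   # TVB/eTVB left enabled
    "c_states_enabled": True,
    "pl2_watts": 253,                      # 13900K/14900K limit cited earlier
}
board_current = {
    "current_excursion_protection": False,
    "iccmax_unlimited": True,
    "tvb_enabled": True,
    "c_states_enabled": False,
    "pl2_watts": 4096,
}
for setting, recommended in intel_recommended.items():
    actual = board_current[setting]
    if actual != recommended:
        print(f"out of spec: {setting} = {actual} (recommended: {recommended})")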


TSimbibug
post May 4 2024, 11:29 PM

Regular
******
Senior Member
1,697 posts

Joined: Jan 2013


QUOTE(TristanX @ Apr 16 2024, 10:33 PM)
Degradation happened at 4096W, I guess. The damage is already done; going back to stock limits won't fix it.

I had bad experiences with the Ryzen 7 1700 and 5900X, and no issues with the 2700X and 3900X. For AMD setups, never buy at launch: you will be a beta tester due to not enough testing, and bugs will still be present. The 5900X was my worst, and I used it for one year. You can get potato chips if you are unlucky.

One of the cases...
https://forum.lowyat.net/index.php?showtopic=5022425
*
Intel's current problems are a lot more serious than the teething problems AMD had with Ryzen. The early Ryzens had memory incompatibility/instability which took some BIOS updates to resolve, and the 5000 series had the USB dropout issue. Now Intel probably has more memory incompatibility/instability issues than Ryzen, putting aside the problems with voltage/power.

Someone at Chiphell took the time to organise testing of hundreds of Intel 13th/14th gen CPUs and found very poor stability at auto, out-of-the-box settings. If only 5/10 13th gen and 2/10 14th gen chips managed to pass, it's a clear sign that the situation is complete garbage and not just a few bad apples.
https://wccftech.com/only-5-out-of-10-core-...ability-issues/

Performance takes a hit, as expected, with Intel's baseline BIOS fix: "This is reported to be up to -30% in multi-threaded applications and up to -15% in games which is quite big".
Hardware Unboxed ran gaming benchmarks with the new BIOS fix and the performance hit was 10%-20%; IIRC 20% was the hit to the 1% low fps.
https://www.youtube.com/watch?v=OdF5erDRO-c&t=520s

And then you have to consider Intel's ongoing issues with security bugs like Meltdown and Downfall; the last Downfall patch supposedly had a big performance hit on older CPUs, up to 39%.
stella_purple
post May 5 2024, 01:12 AM

Casual
***
Junior Member
392 posts

Joined: Oct 2011
QUOTE(imbibug @ May 4 2024, 11:29 PM)
Intel's current problems are a lot more serious than the teething problems AMD had with Ryzen. The early Ryzens had memory incompatibility/instability which took some BIOS updates to resolve, and the 5000 series had the USB dropout issue. Now Intel probably has more memory incompatibility/instability issues than Ryzen, putting aside the problems with voltage/power.

Someone at Chiphell took the time to organise testing of hundreds of Intel 13th/14th gen CPUs and found very poor stability at auto, out-of-the-box settings. If only 5/10 13th gen and 2/10 14th gen chips managed to pass, it's a clear sign that the situation is complete garbage and not just a few bad apples.
https://wccftech.com/only-5-out-of-10-core-...ability-issues/

Performance takes a hit, as expected, with Intel's baseline BIOS fix: "This is reported to be up to -30% in multi-threaded applications and up to -15% in games which is quite big".
Hardware Unboxed ran gaming benchmarks with the new BIOS fix and the performance hit was 10%-20%; IIRC 20% was the hit to the 1% low fps.
https://www.youtube.com/watch?v=OdF5erDRO-c&t=520s
And then you have to consider Intel's ongoing issues with security bugs like Meltdown and Downfall; the last Downfall patch supposedly had a big performance hit on older CPUs, up to 39%.
*
AMD does sell lowest-binned garbage chips as well... not to mention there is also the EXPO problem with Ryzen.

From my experience with both platforms, memory OC / stability is definitely better on Intel than on AMD.

TristanX
post May 5 2024, 09:38 AM

Where is my stars?
Elite
24,334 posts

Joined: Nov 2004
From: Setapak, Kuala Lumpur


QUOTE(imbibug @ May 4 2024, 11:29 PM)
Intel's current problems are a lot more serious than the teething problems AMD had with Ryzen. The early Ryzens had memory incompatibility/instability which took some BIOS updates to resolve, and the 5000 series had the USB dropout issue. Now Intel probably has more memory incompatibility/instability issues than Ryzen, putting aside the problems with voltage/power.

Someone at Chiphell took the time to organise testing of hundreds of Intel 13th/14th gen CPUs and found very poor stability at auto, out-of-the-box settings. If only 5/10 13th gen and 2/10 14th gen chips managed to pass, it's a clear sign that the situation is complete garbage and not just a few bad apples.
https://wccftech.com/only-5-out-of-10-core-...ability-issues/

Performance takes a hit, as expected, with Intel's baseline BIOS fix: "This is reported to be up to -30% in multi-threaded applications and up to -15% in games which is quite big".
Hardware Unboxed ran gaming benchmarks with the new BIOS fix and the performance hit was 10%-20%; IIRC 20% was the hit to the 1% low fps.
https://www.youtube.com/watch?v=OdF5erDRO-c&t=520s
And then you have to consider Intel's ongoing issues with security bugs like Meltdown and Downfall; the last Downfall patch supposedly had a big performance hit on older CPUs, up to 39%.
*
Nope. Last year, AMD was even worse.



Kills itself and the mobo! Fire hazard!

babylon52281
post May 5 2024, 04:34 PM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
Did AMD fanbois conveniently forget that Ryzen had its own CPU failure before Intel did?
https://www.lowyat.net/2023/299165/amd-ryze...800x3d-burnout/

I just love how in that previous case it was Intel fanbois pissing on AMD, and today we have AMD fanbois pissing on Intel. Well, guess what, guys from both camps: both AMD & Intel have their share of hardware design failures. It's to be expected when humans design things at a scale of billions; even the best human effort will still have that 0.001% failure rate, which WILL show up considering how many CPUs they sell.

So to the fanbois from both camps, just chill out; any issue that is under warranty will be resolved through RMA. It's not the end of the world for you. And once you have calmed down, here is a good read for you guys:
https://hwbusters.com/freestyle/are-you-sti...cle-is-for-you/
hashtag2016
post May 9 2024, 03:31 AM

On my way
****
Junior Member
500 posts

Joined: Feb 2016
To be fair, I don't think the 7800X3D cases affected many people, since it was very expensive at the time and not many people bought it, and the problem was solved fast.
The Intel issue affects people more widely and causes more 'brain' damage, although it seems not to be deadly so far.

p/s: I think people have already given Intel special VIP treatment; not many posts were created. If this Intel incident had happened to AMD, I cannot imagine how many posts and threads would have been created by angry users.

lolzcalvin
post May 9 2024, 10:42 AM

shibe in predicament
******
Senior Member
1,586 posts

Joined: Mar 2014
From: 75°26'11.6"S, 136°16'16.0"E


QUOTE(hashtag2016 @ May 9 2024, 03:31 AM)
To be fair, I don't think the 7800X3D cases affected many people, since it was very expensive at the time and not many people bought it, and the problem was solved fast.
The Intel issue affects people more widely and causes more 'brain' damage, although it seems not to be deadly so far.

p/s: I think people have already given Intel special VIP treatment; not many posts were created. If this Intel incident had happened to AMD, I cannot imagine how many posts and threads would have been created by angry users.
*
Most people in here give Intel special treatment only.
Outside this circle you will see a lot of criticism of how Intel is handling the situation, and the criticism is warranted.

They were the ones who let mobo vendors loose in an attempt to gain the upper hand in fps, regardless of the chips consuming 300+W... then complained loudly about mobo vendors when the chips started degrading. A few years ago, in an interview with AnandTech, they said unlocking power limits was in-spec and wasn't counted as overclocking. The unlimited PLs have been here for years now.

Don't forget this new "Intel Baseline Profile" has a legitimate performance hit across all workloads.

Conversely, if this incident happened to AMD, I predict the hoo-ha would still be less than it should be, simply because many people still regard them as underdogs. Whataboutism will be brought into the conversation. All in all, both sides have fanboys; better to just buy what suits you best.
hashtag2016
post May 9 2024, 04:24 PM

On my way
****
Junior Member
500 posts

Joined: Feb 2016
QUOTE(lolzcalvin @ May 9 2024, 10:42 AM)
Most people in here give Intel special treatment only.
Outside this circle you will see a lot of criticism of how Intel is handling the situation, and the criticism is warranted.

They were the ones who let mobo vendors loose in an attempt to gain the upper hand in fps, regardless of the chips consuming 300+W... then complained loudly about mobo vendors when the chips started degrading. A few years ago, in an interview with AnandTech, they said unlocking power limits was in-spec and wasn't counted as overclocking. The unlimited PLs have been here for years now.

Don't forget this new "Intel Baseline Profile" has a legitimate performance hit across all workloads.

Conversely, if this incident happened to AMD, I predict the hoo-ha would still be less than it should be, simply because many people still regard them as underdogs. Whataboutism will be brought into the conversation. All in all, both sides have fanboys; better to just buy what suits you best.
*
"Intel Baseline Profile" no more; say hello to "Intel Default Settings".
It seems the beta BIOS that provided the "Intel Baseline Profile" has been taken down from Gigabyte's website.

-----
Intel Issues Official Statement Regarding 14th and 13th Gen Instability, Recommends Intel Default Settings
by Gavin Bonshor on May 8, 2024 10:05 AM EST
https://www.anandtech.com/show/21389/intel-...efault-settings

QUOTE
Several motherboard manufacturers have released BIOS profiles labeled ‘Intel Baseline Profile’. However, these BIOS profiles are not the same as the 'Intel Default Settings' recommendations that Intel has recently shared with its partners regarding the instability issues reported on 13th and 14th gen K SKU processors.

These ‘Intel Baseline Profile’ BIOS settings appear to be based on power delivery guidance previously provided by Intel to manufacturers describing the various power delivery options for 13th and 14th Generation K SKU processors based on motherboard capabilities.

Intel is not recommending motherboard manufacturers to use ‘baseline’ power delivery settings on boards capable of higher values.

Intel’s recommended ‘Intel Default Settings’ are a combination of thermal and power delivery features along with a selection of possible power delivery profiles based on motherboard capabilities.

Intel recommends customers to implement the highest power delivery profile compatible with each individual motherboard design as noted in the table below:
[table: Intel's recommended power delivery profiles by motherboard capability]
---
Intel Issues New Statement Regarding 13th, 14th Gen Core i9 Instability Issues
The new statement addresses the consumers, and not the motherboard vendors.
BY JOHN LAW MAY 9, 2024
https://www.lowyat.net/2024/322056/intel-ne...-13th-14th-gen/

babylon52281
post May 9 2024, 08:12 PM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(hashtag2016 @ May 9 2024, 03:31 AM)
To be fair, I don't think the 7800X3D cases affected many people, since it was very expensive at the time and not many people bought it, and the problem was solved fast.
The Intel issue affects people more widely and causes more 'brain' damage, although it seems not to be deadly so far.

p/s: I think people have already given Intel special VIP treatment; not many posts were created. If this Intel incident had happened to AMD, I cannot imagine how many posts and threads would have been created by angry users.
*
Brother, the 7800X3D is in the same segment and just as pricey as, or even cheaper than, the 13900K & 14900K. It didn't affect many people because AMD failed to sell as many units due to the topkek pricing fail; that alone turned potential buyers away at the front gate. In terms of the percentage of users hit by failures, it could mean that a higher percentage of AMD users were affected compared to Intel.

The AMD problem did NOT get solved fast. It took them many burnt CPUs and a whole month to come out with a non-buggy AGESA BIOS that set things back to baseline voltage.

So stop trying to defend them, fanboi. Each of them is responsible when there is a hardware failure; when mobos overspec the CPU, the mobo makers are responsible.
hashtag2016
post May 10 2024, 11:49 PM

On my way
****
Junior Member
500 posts

Joined: Feb 2016
QUOTE(babylon52281 @ May 9 2024, 08:12 PM)
Brother, the 7800X3D is in the same segment and just as pricey as, or even cheaper than, the 13900K & 14900K. It didn't affect many people because AMD failed to sell as many units due to the topkek pricing fail; that alone turned potential buyers away at the front gate. In terms of the percentage of users hit by failures, it could mean that a higher percentage of AMD users were affected compared to Intel.

The AMD problem did NOT get solved fast. It took them many burnt CPUs and a whole month to come out with a non-buggy AGESA BIOS that set things back to baseline voltage.

So stop trying to defend them, fanboi. Each of them is responsible when there is a hardware failure; when mobos overspec the CPU, the mobo makers are responsible.
*
You can have your own different opinion on the matter or the product (this is what forums exist for),
but casually shouting fanboi this, fanboi that is not nice...

It seems Intel will publish an official announcement this month, so they do know how serious the issue is.

p/s: I only mentioned the X3D burning issue after somebody else brought it up, even though it is simply an issue unrelated to the topic.

lee_lnh
post May 11 2024, 02:20 AM

Getting Started
**
Junior Member
115 posts

Joined: Oct 2008


The AMD one was do or die... since it burned out.
Inhell is gonna affect you for life...
stella_purple
post May 11 2024, 02:42 AM

Casual
***
Junior Member
392 posts

Joined: Oct 2011
QUOTE(lee_lnh @ May 11 2024, 02:20 AM)
The AMD one was do or die... since it burned out.
Inhell is gonna affect you for life...
*
The AMD one is worse; it may turn into a fire hazard.



adamtayy
post May 11 2024, 05:24 AM

Regular
******
Senior Member
1,380 posts

Joined: May 2006
From: Penang island



QUOTE(stella_purple @ May 11 2024, 02:42 AM)
The AMD one is worse; it may turn into a fire hazard.
*
I think it was heavily overclocked...

that's why....
chocobo7779
post May 11 2024, 07:41 AM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
QUOTE(imbibug @ Apr 15 2024, 03:01 PM)
Around the 10th gen or so, Intel seems to have been juicing up their CPUs in order to look competitive with Ryzen CPUs.
And Intel has done this sort of thing in the past, just not to the extent of what is going on now.

This problem has only gotten worse since. The current 13th/14th gen CPUs will be degraded at stock settings because Intel ships ridiculously high power limits. The default limits are already very high, and the 'extreme' ICCmax of 400A for 150W TDP 13th/14th gen CPUs is clearly crazy high. The average PC builder or gamer is not going to know that leaving the settings at default will permanently degrade their CPU in a few short months.

Recently Nvidia pushed back by telling users with Raptor Lake CPUs to contact Intel after getting "out of video memory" errors.
https://www.tomshardware.com/pc-components/...t-intel-support
And even worse, it looks like the Raptor Lake CPUs have degraded within a few months to the point where crashing occurs.
https://www.lowyat.net/2024/320284/gamers-r...-cpus-en-masse/

Sad to see how Intel has sunk to the level of blaming mobo manufacturers for not enforcing limits LOWER than Intel's own specs for long-term reliability.
*
The whole stability mess would have been mitigated had Intel actually tried to compete with AMD's X3D chips instead of brute-forcing clock speeds just to eke out that tiny little performance advantage (so it can look good in presentations).
IMHO the LGA1700 platform is in a weird place: the 12th gen chips are legitimately good, but the 13th gen chips feel like 12th gen chips with mostly minor improvements, and the 14th gen is just a pure waste of sand. It really speaks volumes about Intel's platform 'longevity', especially when you consider Arrow Lake will come on a new platform/socket.

QUOTE(Duckies @ Apr 15 2024, 03:11 PM)
Yes...and I have to downvolt + limit power for my 14700k to prevent high temperature...cilaka

In fact AMD CPU and GPU now is the best performance/price. If not because of Intel's good marketing...
*
I really wish AMD/Intel hadn't locked themselves into an all-out performance war and would start designing desktop chips with efficiency in mind - those 5.5-6+ GHz clock speeds are sort of unsustainable and remind me of the NetBurst era, where little emphasis was put on efficiency. Especially when you consider most x86 chips are clocked way beyond their efficiency point, which leads to excessive power consumption. Yes, I know you can power tune those CPUs or even use Eco mode, but the hivemind only wants longer performance bars at any cost, so the vendors are more than happy to oblige sweat.gif

This post has been edited by chocobo7779: May 11 2024, 12:50 PM
chocobo7779
post May 11 2024, 07:43 AM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
QUOTE(pandah @ Apr 15 2024, 03:28 PM)
big reduction in performance? Or generally not noticeable?
*
Probably not noticeable outside of very heavy, multithreaded workloads (in games it will be fine) icon_idea.gif
chocobo7779
post May 11 2024, 07:44 AM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
QUOTE(imbibug @ Apr 15 2024, 03:29 PM)
The 14700 is not a bad cpu, its just that Intel wants to win and be no. 1  at everything at any cost it seems. Lowering the PL1/PL2/Icc is not going destroy performance, its still going to be good and generate alot less heat.
*
Yup, same goes for AMD as well with their '95C Tjmax' temperatures sweat.gif
chocobo7779
post May 11 2024, 07:48 AM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
QUOTE(TristanX @ Apr 15 2024, 07:00 PM)
Read the Tom's Hardware news properly. 4096W is not Intel official power limit. 253W is for 13900K and 14900K. Motherboard vendors pushing too much performance on already "overclocked" chips.

You don't need power limit removed to get the most out of their chips too.

user posted image
https://www.techpowerup.com/review/intel-co...-14900k/18.html

It can be tuned to be very efficient too.

*
On the other hand, the whole power limit mess has already been happening for about a decade or so, according to Hardware Unboxed:


Yes, you can tune Intel chips to very efficient levels, but the same goes for AMD as well (it's even better when you consider you can just enable the 105W Eco mode in the BIOS for the Ryzen 9 chips and you'll see very little performance loss outside of synthetic benchmarks, with a significant uplift in efficiency) icon_idea.gif
That being said, it just kind of proves that modern x86 CPUs are massively tuned for performance with very little regard for efficiency icon_idea.gif

This post has been edited by chocobo7779: May 11 2024, 08:02 AM
chocobo7779
post May 11 2024, 07:57 AM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
QUOTE(babylon52281 @ Apr 15 2024, 05:06 PM)
Buy ARM or Qualcomm stocks. Their CPU destroys both X86 in terms of power efficiency. ARM is the future.
*
From an ISA point of view, both x86 and ARM are quite performant and efficient (there is a common myth that x86 is 'inefficient' because of the legacy bloat), and an ISA doesn't really mean much where there are no native apps to run. Back in the 1990s there were plenty of RISC ISAs that outperformed x86 significantly, like DEC Alpha, and yet x86 prevailed due to the large library of software written for it. I know binary translation/dynamic recompilation helps, but it introduces a performance overhead icon_idea.gif
Modern CPUs these days are so complex from the architectural standpoint that the ISA doesn't mean much anymore icon_idea.gif

Ultimately, implementation matters (there's also the economic/business side to consider), not ISA - there's also a YouTube channel about semiconductor design and manufacturing that does deep dives into CPU architectures:
https://www.youtube.com/@HighYield

https://chipsandcheese.com/2021/07/13/arm-o...-doesnt-matter/

This post has been edited by chocobo7779: May 11 2024, 08:05 AM
chocobo7779
post May 11 2024, 08:05 AM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
QUOTE(hashtag2016 @ Apr 20 2024, 01:32 PM)
Don't worry, be happy.. I think more and more mobo makers will include something like an Intel DEFAULT setting to help users troubleshoot their PC issues (if any).
Since Asus already introduced an "Intel Baseline Profile" option in their latest BIOS, it is a positive move, I think.. brows.gif

although u have to choose between Stability or Performance .. drool.gif
https://twitter.com/9550pro/status/1781481593972129929

p/s: IMHO, I think even 12th gen and some of the non-K chips might also face similar issues. (that's just my personal thought, don't quote me please). brows.gif  brows.gif
*
Probably for the better, but this just makes the 14th gen K CPUs look like a rebranded 13th gen and even more of a waste of sand sweat.gif
adamtayy
post May 11 2024, 08:07 AM

Regular
******
Senior Member
1,380 posts

Joined: May 2006
From: Penang island



QUOTE(chocobo7779 @ May 11 2024, 07:44 AM)
Yup, same goes for AMD as well with their '95C Tjmax' temperatures sweat.gif
*
Attached Image
Tk-maxx, U.K
chocobo7779
post May 11 2024, 08:22 AM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
QUOTE(babylon52281 @ Apr 21 2024, 10:14 PM)
Futureproofing is a fallacy. Recall the AM4 300-series to 400-series issues and also their limited growth (no PCIe 4, no USB 3.2) by today's standards, even if you can technically still "upgrade" to a newer CPU.

AM5 1st gen is held back by limited DDR5 speeds. This will get old fast as DDR5 tech matures and speeds go over 10k.

Do not buy for something you think might happen in the future, just buy the best for today.
*
No PCIe 4?  Not that much of a problem for most GPUs unless you are talking about something like the RX6400/6500XT with their nerfed PCIe x4 interface.  Even the mighty 4090 only loses about 2% performance on PCIe 3.0 x16:
user posted image

On the SSD side of things there's not much difference between PCIe 3.0 and 4.0 either for gaming icon_idea.gif

USB 3.2?  That more or less belongs to 'nice to have' territory rather than must-have, and even that can be added with a PCIe card if you really need it icon_idea.gif
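
For context on those numbers - PCIe bandwidth roughly doubles per generation, so the back-of-envelope math is easy to check yourself (a quick sketch in Python; per-lane rates are the spec values, real-world throughput is a bit lower) icon_idea.gif

CODE
# Back-of-envelope PCIe bandwidth using the spec transfer rates and the
# 128b/130b encoding used since Gen3.
rates_gts = {"3.0": 8.0, "4.0": 16.0, "5.0": 32.0}

for gen, gts in rates_gts.items():
    per_lane = gts * (128 / 130) / 8      # GB/s per lane after encoding
    print(f"PCIe {gen} x16 ~ {per_lane * 16:.1f} GB/s")

# PCIe 3.0 x16 ~ 15.8 GB/s, 4.0 x16 ~ 31.5 GB/s, 5.0 x16 ~ 63.0 GB/s -
# which is why even the 4090 barely cares about dropping to Gen3 x16.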
chocobo7779
post May 11 2024, 08:46 AM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
QUOTE(babylon52281 @ Apr 22 2024, 09:43 AM)
AMD's mistake is keeping AM5 prices too f**king high; HUB recently did a review and the basement budget mobos are basically all crap, so you cannot do a budget build with AM5 either, which is why AM4 is still hanging around and not getting replaced as it was supposed to be. This is a failure from a product marketing POV.

And for a fabless maker, they also failed to prioritise the software side of the CPU, leading to systems being less stable (DOCP/EXPO, CCD priority) than Intel builds (and Intel still has to run a fab business). In that regard, heck, even Nvidia has a better grip on their various software ecosystems for other uses of their GPUs.

Both have their faults, and surprisingly it is Apple, another fabless brand, that is showing how to design CPU hardware with their superb M2 Ultra, with near flawless software integration. And then there is ARM...

Apple, ARM > Intel, AMD

All you AMD & Intel fanbois can balik rumah (go home) and cry
*
QUOTE
AMD's mistake is keeping AM5 prices too f**king high; HUB recently did a review and the basement budget mobos are basically all crap, so you cannot do a budget build with AM5 either, which is why AM4 is still hanging around and not getting replaced as it was supposed to be. This is a failure from a product marketing POV.
? Depending on where you live you can get a B650 board that will happily run a 7950X at full power, here's one for USD109:
https://www.microcenter.com/product/664700/...atx-motherboard

The problem with most of these boards is that they are simply priced wrong - I mean, why bother buying boards that the HDV will happily outperform at the same price, if not lower? Perhaps they may work for those who only use 65W chips or even the 7800X3D, but that kind of defeats the point of AM5, so that's why HUB more or less advised users to stay away from them icon_idea.gif
AM4 is still very viable and there's really nothing wrong with that, especially with the X3D chips, which still perform quite well compared with modern Intel equivalents icon_idea.gif

The software side is tricky, but let's not forget that Intel/Nvidia have their fair share of software/driver issues icon_idea.gif

QUOTE
Both have their faults, and surprisingly it is Apple, another fabless brand, that is showing how to design CPU hardware with their superb M2 Ultra, with near flawless software integration. And then there is ARM...
I mean, the M2 Ultra is a very expensive chip in terms of cost and transistor budget - it has around 134 billion transistors, which is much closer to something like AMD's MI300A than most consumer hardware (Nvidia's AD102 only has around 76 billion transistors). Flawless software integration isn't hard for Apple, as they have much more R&D budget to spend and a highly vertically integrated ecosystem (this is the nice part of a walled garden) icon_idea.gif
Sure, AMD/Intel could design chips like those, but why bother selling them in the consumer market when they can make vastly more selling them as HPC/AI chips? The consumer market is fundamentally a low-margin business icon_idea.gif

Again, ISA doesn't matter, see the above post, and it's not like standard ARM cores are the gold standard in performance/efficiency icon_idea.gif

This post has been edited by chocobo7779: May 11 2024, 08:48 AM
chocobo7779
post May 11 2024, 08:55 AM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
QUOTE(1024kbps @ Apr 22 2024, 07:56 AM)
ARM and RISC-V aren't too far behind

If both AMD and Intel keep making mistakes they will become history
*
Unless Snapdragon X actually eats up market share, then no - the whole 'ARM/RISC-V will take over x86' thing almost reminds me of the age-old line that [insert year] will be the year of the Linux desktop sweat.gif

This post has been edited by chocobo7779: May 11 2024, 08:56 AM
chocobo7779
post May 11 2024, 12:57 PM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
QUOTE(lolzcalvin @ Apr 23 2024, 03:09 PM)
for now. microsoft is currently betting on snapdragon x elite for their Windows on ARM. snapdragon x elite does look promising.

however, x86-64 may not be going anywhere soon despite the rising popularity of RISC-based instruction set architectures (ISAs).

the reason why windows still requires x86-64 (for now) is because they have a lot of legacy code that depends on the x86 ISA. many wizards we use in windows today can be dated back to win95. this is why x86-64 has hung around for so long -- backwards compatibility for the past 20+ years. hell, even intel 8086-based software can run on modern x86 CPUs with little tweaking. people may attribute better efficiency to ARM, but it really comes down not to the difference in ISAs, but to which direction CPU vendors push when designing their CPUs. for so long both AMD and Intel have been pushing towards high performance (both compete in high performance computing with a lower priority on battery life), while Qualcomm and Apple have been pushing for efficiency (both compete in the mobile space and therefore have plenty of experience with low-power operation).

a good read of RISC vs CISC if you're into a bit of technicality: https://chipsandcheese.com/2021/07/13/arm-o...-doesnt-matter/
if u don't already know, x86-64 is CISC-based while ARM and RISC-V are RISC-based
*
To be fair to AMD/Intel, it's really not hard to match Apple in terms of peak performance/efficiency in mobile chips, but what Apple dominates is performance at the low/mid power range, which matters a lot more in the real world, as it's very unlikely for most modern workloads to run at peak performance for any long period of time due to power/heat constraints

This is sometimes why synthetic benchmarks like Cinebench/Geekbench can be misleading on power-constrained devices, as those only measure peak performance and not sustained performance, which is a lot more important (most x86 CPUs tend to focus a lot on peak performance, since on desktops they can run at peak for an indefinite period as long as the power/heat budget allows) icon_idea.gif
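
You can even see the peak-vs-sustained gap yourself by timing a fixed chunk of CPU work in a loop and watching whether later iterations slow down as boost clocks drop (a minimal toy sketch; the workload size is arbitrary) icon_idea.gif

CODE
# Toy peak-vs-sustained probe: time the same CPU-bound task repeatedly.
# On a thermally limited laptop the later iterations tend to get slower
# as the chip throttles; on a well-cooled desktop they stay mostly flat.
import time

def burn():
    s = 0
    for i in range(5_000_000):   # arbitrary fixed busy-work
        s += i * i
    return s

for n in range(20):
    t0 = time.perf_counter()
    burn()
    print(f"iteration {n:2d}: {time.perf_counter() - t0:.3f} s")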

I really wish Intel hadn't canned their Y series chips that early though, as those chips could really have become an Apple M series competitor if they had kept iterating on them sad.gif

There are also the very large, captive markets that require x86, like the government/education/corporate/manufacturing sectors, which often use specialized, in-house software that is not COTS and cannot be ported to ARM easily, even if the source code is available icon_idea.gif

This post has been edited by chocobo7779: May 11 2024, 03:04 PM
chocobo7779
post May 11 2024, 01:03 PM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
QUOTE(awol @ Apr 23 2024, 03:35 PM)
i agree with you. lets see how SD X Elite fare against intel/amd on laptop and its performance against M3 SoC.

then again, it still limited to laptop/mobile.
power user still depend on high end desktop.
*
It will be quite interesting to see where Strix Point and Lunar Lake are headed, pretty excited to see what x86 can offer in the next few years hmm.gif
That being said though, the X Elite chips are promising, but compatibility, pricing and availability will make or break this chip icon_idea.gif

This post has been edited by chocobo7779: May 11 2024, 01:07 PM
chocobo7779
post May 11 2024, 01:32 PM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
QUOTE(TristanX @ Apr 23 2024, 07:26 PM)
It's not that easy. They're still limited by process nodes. Why would Intel and AMD push power usage up to 90-95C depending on cooling? It's physical limitations. Die shrinks are becoming increasingly more difficult as well, unless someone comes out with a new material for the processors.
*
...more like core design, really icon_idea.gif
Apple cores specifically target high IPC, so they can be clocked lower relative to the x86 incumbents and still achieve high single-threaded performance with excellent efficiency. This is why Apple cores tend to be quite large and wide compared to your garden-variety x86 chips. One of the downsides is that they are often 'inefficient' in terms of SoC/die size and transistor budget, so they are very expensive to make, and this partly explains why Apple charges very high prices for RAM/storage upgrades on their hardware, presumably so they can subsidize the cost of manufacturing those SoCs icon_idea.gif

Of course x86 cores can be made 'fatter and wider' to increase performance without pushing clock speeds to stratospheric heights, but that's not going to be cheap to make, and note that AMD/Intel need to cater to vastly different markets and use cases, ranging from cheap laptops to multimillion-dollar supercomputers, as opposed to Apple, whose target audience and software ecosystems are much more focused and locked down icon_idea.gif

Those are speculations of course, so correct me if I'm wrong icon_idea.gif
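
The rough arithmetic behind 'wide and slow vs narrow and fast' looks something like this (purely illustrative numbers, not real chip specs) icon_idea.gif

CODE
# Throughput ~ IPC x clock: two hypothetical cores reaching similar
# performance in very different ways (numbers made up for illustration).
wide_ipc, wide_ghz = 8.0, 3.5        # big/wide 'Apple-style' core
narrow_ipc, narrow_ghz = 4.9, 5.7    # narrower, high-clocked core

print("wide  :", wide_ipc * wide_ghz)       # 28.0 instructions per ns
print("narrow:", narrow_ipc * narrow_ghz)   # ~27.9 instructions per ns

# Similar throughput, but sustaining 5.7 GHz needs much more voltage,
# and dynamic power grows with V^2 * f - the wide core wins on power
# while paying for it in die area.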


QUOTE
Why would Intel and AMD push power usage up to 90-95C depending on cooling?
Simply put, the enthusiasts wanted longer performance bars on benchmarks, so there is an all-out performance war with very little regard for efficiency icon_idea.gif

The big problem with modern CPUs right now isn't really the CPU core itself, but rather the memory subsystem, as the improvements in CPUs are vastly outpacing the improvements in memory speed/bandwidth, which leads to memory bottlenecks - and moving data between the CPU and memory isn't exactly power efficient either. This is why you have things like memory on package (like Apple chips/upcoming Lunar Lake CPUs) and die stacking like what AMD does with their X3D lineup of CPUs icon_idea.gif

QUOTE
Die shrinks are becoming increasingly more difficult as well, unless someone comes out with a new material for the processors.

That's why you have things like chiplets, backside power delivery such as PowerVia (could be revolutionary) and new chip packaging/interconnect methods to make further die shrinks possible icon_idea.gif

This post has been edited by chocobo7779: May 11 2024, 01:58 PM
chocobo7779
post May 11 2024, 01:47 PM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
QUOTE(babylon52281 @ Apr 24 2024, 10:31 AM)
Arguably Alderlake is also a major breakthru being the 1st consumer big.little hybrid X86 CPU, next breakthru would be Lunarlake with proper tiled design desktop CPU.
*
Lunar Lake is mobile only though sweat.gif

There's also Arrow Lake with PowerVia backside power delivery icon_idea.gif
chocobo7779
post May 11 2024, 01:49 PM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
QUOTE(TristanX @ Apr 23 2024, 11:11 PM)
The last major breakthrough was Sandy Bridge. It's been a while. Today, Intel is still behind when it comes to the hardware. They're still able to keep up, just with a lot more power.

There are candidates for new materials, like 100GHz nanocarbon I think. I think no one has been able to get it stable.

Lots of things to consider too. Now we have a lot of hackers cracking it. Security patches usually nerf the processors.
*
QUOTE
Lots of things to consider too. Now we have a lot of hackers cracking it. Security patches usually nerf the processors.
Usually it's SMT/Hyper-Threading that causes a lot of the security loopholes; that's why Intel is rumored to be removing it in the Arrow Lake chips and replacing it with rentable units (if the rumors are correct) icon_idea.gif
That being said though it's very unlikely for your regular machine to get hacked using CPU vulnerabilities unless you do a lot of *ahem* stuff, or you fail to do the bare basics of computer security icon_idea.gif

QUOTE
The last major breakthrough was Sandy Bridge. It's been a while. Today, Intel is still behind when it comes to the hardware. They're still able to keep up, just with a lot more power.
Alder Lake would like a word with you though, but yeah x86 is going to be a lot more exciting in the next few years laugh.gif

This post has been edited by chocobo7779: May 11 2024, 01:50 PM
chocobo7779
post May 11 2024, 01:53 PM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
QUOTE(imbibug @ May 4 2024, 11:29 PM)
Intel's current problems are a lot more serious than the teething problems AMD had with Ryzen. The early Ryzens had memory incompatibility/instability which took some BIOS updates to resolve. The 5000 series had the USB dropout issue. Now Intel probably has more memory incompatibility/instability issues than Ryzen, putting aside the problems with voltage/power.

Someone at Chiphell took the time to organise testing of hundreds of Intel 13th/14th gen CPUs and found very poor stability at auto out-of-the-box settings. If only 5/10 13th gen and 2/10 14th gen managed to pass, it's a clear sign that it's complete garbage and not just a few bad apples.
https://wccftech.com/only-5-out-of-10-core-...ability-issues/

Performance takes a hit as expected with Intel's baseline BIOS fix - "This is reported to be up to -30% in multi-threaded applications and up to -15% in games which is quite big".
Hardware Unboxed ran gaming benchmarks with the new BIOS fix and the performance hit was 10%-20%. IIRC 20% was the perf hit for the 1% low fps.
https://www.youtube.com/watch?v=OdF5erDRO-c&t=520s
And then you have to consider Intel's ongoing issues with security bugs like Downfall. The last Downfall patch supposedly had a big performance hit on older CPUs, up to 39%.
*
You almost forgot the Ryzen 3000 boost clock bug which AMD was able to fix in the ABBA version of their AGESA icon_idea.gif
lolzcalvin
post May 11 2024, 02:26 PM

shibe in predicament
******
Senior Member
1,586 posts

Joined: Mar 2014
From: 75°26'11.6"S, 136°16'16.0"E


QUOTE(chocobo7779 @ May 11 2024, 12:57 PM)
...but what Apple dominates is the performance at low/mid power range which matters a lot more in real world...
*
and that is what apple is doing extremely right, and at times their low-power operation performance rivals AMD/Intel high-end CPUs which are using 4-5x more power for the same operation. their M4 has just been released recently, with MT performance closing in on a 13700K, and ST performance obliterating many, if not all, modern x86 CPUs. a BASE M4 is doing that? at 5x less power? node advantage + SME aside, you cannot dismiss what Apple has been doing and they're definitely putting more pressure on AMD/Intel. ESPECIALLY INTEL.

M2/M3 era has already seen the chip performing faster than x86 counterparts in a number of applications such as in Adobe apps, DaVinci Resolve and Handbrake. new M4 era will be another eye opener similar to M2.

with M4 being released this early, Qualcomm is shitting themselves too. I mentioned I had faith previously on X Elite but things do change fast within a month. X Elite is due for >1 year now. after their shoddy X Plus reveal a few weeks ago, rough rumors are saying they're in a very messy situation rn. we'll see how Qualcomm handles this.

QUOTE(chocobo7779 @ May 11 2024, 12:57 PM)
...There's also the very large, captive markets that require x86, like government/education/corporate/manufacturing sectors that often uses specialized, in-house software and are not COTS, and cannot be ported to ARM easily, even if those software had their source codes available icon_idea.gif
*
hence why x86 is living for backwards compatibility to cater for relic systems. been so long since 8086 era.

however, it really isn't x86's fault for the "slowness", simply because it's just an ISA. a good uarch (microarchitecture, or simply CPU design) will yield great results. Apple has greatly improved their uarch to gain higher frequency, as well as shoving ARM SME into it; even with a small IPC gain, it still yields ~25% improvement over M3.
chocobo7779
post May 11 2024, 03:02 PM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
QUOTE(lolzcalvin @ May 11 2024, 02:26 PM)
and that is what apple is doing extremely right, and at times their low-power operation performance rivals AMD/Intel high-end CPUs which are using 4-5x more power for the same operation. their M4 has just been released recently, with MT performance closing in on a 13700K, and ST performance obliterating many, if not all, modern x86 CPUs. a BASE M4 is doing that? at 5x less power? node advantage + SME aside, you cannot dismiss what Apple has been doing and they're definitely putting more pressure on AMD/Intel. ESPECIALLY INTEL.

M2/M3 era has already seen the chip performing faster than x86 counterparts in a number of applications such as in Adobe apps, DaVinci Resolve and Handbrake. new M4 era will be another eye opener similar to M2.

with M4 being released this early, Qualcomm is shitting themselves too. I mentioned I had faith previously on X Elite but things do change fast within a month. X Elite is due for >1 year now. after their shoddy X Plus reveal a few weeks ago, rough rumors are saying they're in a very messy situation rn. we'll see how Qualcomm handles this.
hence why x86 is living for backwards compatibility to cater for relic systems. been so long since 8086 era.

however, it really isn't x86's fault for the "slowness", simply because it's just an ISA. a good uarch (microarchitecture, or simply CPU design) will yield great results. Apple has greatly improved their uarch to gain higher frequency, as well as shoving ARM SME into it; even with a small IPC gain, it still yields ~25% improvement over M3.
*
Mind you, the Snapdragon X series is about 1 year late, and it seems like Qualcomm is sort of sandbagging right now (heck even the original intent for Qualcomm to acquire Nuvia was to compete in servers, not laptops) icon_idea.gif

That being said, I'm not sure how the x86 incumbents can compete with it (perhaps wider core designs and decoding/execution units, but that's not exactly cheap to implement without potentially cannibalizing much more lucrative markets) hmm.gif

But yeah, that's the nice part of having practically unlimited R&D and transistor budget to play with icon_idea.gif

This post has been edited by chocobo7779: May 11 2024, 03:13 PM
babylon52281
post May 11 2024, 03:36 PM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(hashtag2016 @ May 10 2024, 11:49 PM)
You can have your own different opinion on the matter or the product (this is what forums exist for),
but casually shouting out 'fanbois this, fanbois that' is not nice... hmm.gif devil.gif

Seems that Intel will publish an official announcement this month, so they do know how serious the issue is. icon_idea.gif

p/s: I only mentioned the X3D burning issue after somebody else brought it up, although it is simply an unrelated issue to the topic. brows.gif
*
Lol, fanbois & haters got burned (kena) by the truth and you folks don't like it; well, tough. Yes, you can voice your opinion, but to do that while conveniently ignoring that others have opinions too just means you're a fanboi.

Intel will release an official reply, just as AMD did with their own CPU burn case, that is a given.

Stop with the pissing posts, then maybe people will have some respect for what you say.
babylon52281
post May 11 2024, 03:40 PM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(stella_purple @ May 11 2024, 02:42 AM)
amd one is worse, it may turn into fire hazard laugh.gif

*
Both sides' CPUs have issues when pushed too far. This is what happens when efficiency is ignored and both sides ramp up the power game, then make it worse by allowing users to push even further.

ARM & RISC V FTW!

This post has been edited by babylon52281: May 11 2024, 03:41 PM
babylon52281
post May 11 2024, 03:52 PM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(chocobo7779 @ May 11 2024, 08:22 AM)
No PCIe 4?  Not that much of a problem for most GPUs unless you are talking something about RX6400/6500XT with their nerfed PCIe x4 interface.  Even the mighty 4090 only loses about 2% performance on PCIe 3.0 x16:

On the SSD side of things there's not much difference between PCIe 3.0 and 4.0 either for gaming icon_idea.gif

USB 3.2?  That's more or less belong to the 'nice to have' territory rather than must haves, and even that can be done with a PCIe card if you really need one icon_idea.gif
*
Nah, not on GPUs; maybe only the 4090 can fully utilise x16 at Gen4 bandwidth. It is for M.2 SSDs, and even if you don't see much benefit today, DirectStorage will allow faster drives to load world details with less latency, meaning a less laggy gaming experience.

It's like how a few years ago people might not have seen the need for >8GB VRAM on a GPU, but oh boy, isn't that an issue for more & more of today's games. If game world-building becomes bigger & more detailed, games will sooner or later need to load directly from the SSD at a faster pace.
chocobo7779
post May 11 2024, 03:53 PM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
QUOTE(babylon52281 @ May 11 2024, 03:40 PM)
Both sides' CPUs have issues when pushed too far. This is what happens when efficiency is ignored and both sides ramp up the power game, then make it worse by allowing users to push even further.

ARM & RISC V FTW!
*
You can also have inefficient ARM cores if you clock them to the moon; note that dynamic power scales linearly with frequency but quadratically with voltage icon_idea.gif
This is why high clock speeds can be a bad thing if the process node or the architecture are not built for it (even Intel admits it on their slide deck, see their flat-ish curves on the power scaling chart):

https://download.intel.com/newsroom/2022/cl...nalyst-deck.pdf

But then for some reason, despite the huge advances in x86 efficiency, we still have ridiculously inefficient chips because both Intel/AMD practically clock them up to near unsustainable levels (I mean, why do 5GHz+ ULV mobile chips and 6GHz+ desktop chips exist?)  sweat.gif
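
To put toy numbers on that scaling - dynamic power is roughly P ~ C x V^2 x f, so a mild undervolt plus downclock cuts power far more than it cuts performance (the capacitance constant below is made up, purely illustrative) icon_idea.gif

CODE
# Toy dynamic power model: P ~ C * V^2 * f.
# C is an arbitrary made-up constant, not any real chip's capacitance.
C = 1.0e-9

def dyn_power(volts, freq_hz):
    return C * volts**2 * freq_hz

stock = dyn_power(1.40, 5.8e9)   # high voltage, max boost clock
tuned = dyn_power(1.10, 4.8e9)   # mild undervolt + lower clock
print(f"relative power: {tuned / stock:.2f}")   # ~0.51

# ~17% less clock but ~49% less power: the V^2 term dominates, which is
# exactly why undervolting/power limiting these chips pays off so much.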

This post has been edited by chocobo7779: May 11 2024, 04:10 PM
chocobo7779
post May 11 2024, 03:56 PM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
QUOTE(babylon52281 @ May 11 2024, 03:52 PM)
Nah not on GPU, maybe only 4090 can fully utilise the x16 on a Gen4 bandwidth. It is for M2 SSD and even if you dont see much benefits today, DS will allow faster drives to load world details with less latency meaning a less laggy gaming experience.

Its the saying that few years ago people might not see the need for >8GB VRAM on GPU but oh boy isnt that an issue for more & more todays games. If games world building becomes bigger & more detailed, it will sooner need to load direct from SSD at faster pace.
*
? Find one game that actually makes use of high speed I/O then - even games like Rift Apart (which claims to utilize disk I/O heavily) will do just fine on a SATA SSD, and considering the very long dev times for AAA titles I think your machine will be long out of date before you need a faster SSD to run the game icon_idea.gif
People here are vastly overestimating the I/O requirements for disks in modern titles icon_idea.gif

This post has been edited by chocobo7779: May 11 2024, 04:14 PM
chocobo7779
post May 11 2024, 04:07 PM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
Anyway let's get back to the topic, we're getting a bit derailed there - the original topic is Intel's 14th gen stability issues, not a discussion between ISA and power efficiency icon_idea.gif
babylon52281
post May 11 2024, 04:10 PM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(lolzcalvin @ May 11 2024, 02:26 PM)
and that is what apple is doing extremely right, and at times their low-power operation performance rivals AMD/Intel high-end CPUs which are using 4-5x more power for the same operation. their M4 has just been released recently, with MT performance closing in on a 13700K, and ST performance obliterating many, if not all, modern x86 CPUs. a BASE M4 is doing that? at 5x less power? node advantage + SME aside, you cannot dismiss what Apple has been doing and they're definitely putting more pressure on AMD/Intel. ESPECIALLY INTEL.

M2/M3 era has already seen the chip performing faster than x86 counterparts in a number of applications such as in Adobe apps, DaVinci Resolve and Handbrake. new M4 era will be another eye opener similar to M2.

with M4 being released this early, Qualcomm is shitting themselves too. I mentioned I had faith previously on X Elite but things do change fast within a month. X Elite is due for >1 year now. after their shoddy X Plus reveal a few weeks ago, rough rumors are saying they're in a very messy situation rn. we'll see how Qualcomm handles this.
hence why x86 is living for backwards compatibility to cater for relic systems. been so long since 8086 era.

however, it really isn't x86's fault for the "slowness", simply because it's just an ISA. a good uarch (microarchitecture, or simply CPU design) will yield great results. Apple has greatly improved their uarch to gain higher frequency, as well as shoving ARM SME into it; even with a small IPC gain, it still yields ~25% improvement over M3.
*
Fully agree with you; without being tied to x86 legacy, Apple could wipe the slate clean with a new CPU design, and they clearly showed what a real modern CPU uarch can do on a current manufacturing process. Both Intel/AMD x86 will need some newfangled complex & expensive SoC layout or exotic materials to push speeds higher, and all that just to match even the current M3/M4 that is made on the same mature process.

chocobo7779
I don't fully agree on the reason why Apple charges that much; while yes, their CPU SoC is much larger, the cost to make it doesn't scale up exponentially like what you pay. Apple charges waterfish (sucker) prices simply because it's Apple. And at their sheer volume, their per-unit CPU cost isn't all that much different, as these will be inside iPhones too.
babylon52281
post May 11 2024, 04:14 PM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(chocobo7779 @ May 11 2024, 03:53 PM)
You can also has inefficient ARM cores either if you clocked them to the moon, note that power scales linearly with frequency, and a factor of 2 with voltage icon_idea.gif
This is why high clock speeds can be a bad thing if the process node or the architecture are not built for it (even Intel admits it on their slide deck, see their flat-ish curves on the power scaling chart):

https://download.intel.com/newsroom/2022/cl...nalyst-deck.pdf

But then for some reason, despite the huge advances in x86 efficiency we still have ridiculously inefficient chips because both Intel/AMD practically clocked them up to near unsustainable levels (I mean, why does 5GHz+ ULV mobile chips and 6GHz+ desktop chips exist?)  sweat.gif
*
Well, that's the thing. ARM's shtick is not the power game but power efficiency. Like NetBurst vs Core back then: do you want a high-clocked inefficient chip, or a cooler, lower-clocked but technically 'faster' chip? The death of NetBurst and the rise of Core clearly indicate what the market wants.
chocobo7779
post May 11 2024, 04:18 PM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
QUOTE(babylon52281 @ May 11 2024, 04:14 PM)
Well, that's the thing. ARM's shtick is not the power game but power efficiency. Like NetBurst vs Core back then: do you want a high-clocked inefficient chip, or a cooler, lower-clocked but technically 'faster' chip? The death of NetBurst and the rise of Core clearly indicate what the market wants.
*
Yeah, but that'll require wider, larger core designs, which are not area efficient and will need larger dies - unless you want AMD/Intel to cannibalize their far more profitable server/HPC business, that is icon_idea.gif
babylon52281
post May 11 2024, 04:18 PM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(chocobo7779 @ May 11 2024, 04:07 PM)
Anyway let's get back to the topic, we're getting a bit derailed there - the original topic is Intel's 14th gen stability issues, not a discussion between ISA and power efficiency icon_idea.gif
*
More like the original topic was a pissing game between AMD fanbois & Intel fanbois. Both sides are real losers when each has its own hardware issues and better CPU uarchs exist.
chocobo7779
post May 11 2024, 04:19 PM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
QUOTE(babylon52281 @ May 11 2024, 04:10 PM)
Fully agree with you; without being tied to x86 legacy, Apple could wipe the slate clean with a new CPU design, and they clearly showed what a real modern CPU uarch can do on a current manufacturing process. Both Intel/AMD x86 will need some newfangled complex & expensive SoC layout or exotic materials to push speeds higher, and all that just to match even the current M3/M4 that is made on the same mature process.

chocobo7779
I don't fully agree on the reason why Apple charges that much; while yes, their CPU SoC is much larger, the cost to make it doesn't scale up exponentially like what you pay. Apple charges waterfish (sucker) prices simply because it's Apple. And at their sheer volume, their per-unit CPU cost isn't all that much different, as these will be inside iPhones too.
*
QUOTE
, without being tied to X86 legacy

Again, ISA doesn't really matter, especially when you consider how complex modern CPUs are. The x86 bloat is mostly vestigial at this point and doesn't really affect the ability to make highly efficient chips icon_idea.gif

QUOTE
Both Intel/AMD x86 will need some newfangled complex & expensive SoC layout to push speeds higher

Well, they do, see AMD APUs on the PS5/Series X consoles for an example (not directly comparable to Apple Silicon as AMD will need to design them with a budget in mind, after all those consoles cost around 500USD and both Sony/Microsoft still have to sell them at a loss and recoup the loss through game sales and subscriptions) icon_idea.gif

This post has been edited by chocobo7779: May 11 2024, 04:25 PM
babylon52281
post May 11 2024, 04:22 PM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(chocobo7779 @ May 11 2024, 04:18 PM)
Yeah, but that'll require wider, larger core designs, which are not area efficient and will need larger dies unless you want AMD/Intel to cannibalize their far more profitable server/HPC business that is icon_idea.gif
*
I'm advocating more towards fully realising an ARM uarch as a desktop equivalent, or else a new CPU uarch from scratch without the inefficient legacy (hello RISC-V?)
chocobo7779
post May 11 2024, 04:30 PM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
QUOTE(babylon52281 @ May 11 2024, 04:22 PM)
I'm advocating more towards fully realising an ARM uarch as a desktop equivalent, or else a new CPU uarch from scratch without the inefficient legacy (hello RISC-V?)
*
Again, no, according to Jim Keller who worked at AMD/Apple/DEC/PA Semi (probably one of the best acquisitions Apple made) and is responsible for CPUs like the AMD K8:

QUOTE
JK: [Arguing about instruction sets] is a very sad story. It's not even a couple of dozen [op-codes] - 80% of core execution is only six instructions - you know, load, store, add, subtract, compare and branch. With those you have pretty much covered it. If you're writing in Perl or something, maybe call and return are more important than compare and branch. But instruction sets only matter a little bit - you can lose 10%, or 20%, [of performance] because you're missing instructions.
QUOTE
JK: I care a little. Here's what happened - so when x86 first came out, it was super simple and clean, right? Then at the time, there were multiple 8-bit architectures: x86, the 6800, the 6502. I programmed probably all of them way back in the day. Then x86, oddly enough, was the open version. They licensed that to seven different companies. Then that gave people opportunity, but Intel surprisingly licensed it. Then they went to 16 bits and 32 bits, and then they added virtual memory, virtualization, security, then 64 bits and more features. So what happens to an architecture as you add stuff, you keep the old stuff so it's compatible.

So when Arm first came out, it was a clean 32-bit computer. Compared to x86, it just looked way simpler and easier to build. Then they added a 16-bit mode and the IT (if then) instruction, which is awful. Then [they added] a weird floating-point vector extension set with overlays in a register file, and then 64-bit, which partly cleaned it up. There was some special stuff for security and booting, and so it has only got more complicated.

Now RISC-V shows up and it's the shiny new cousin, right? Because there's no legacy. It's actually an open instruction set architecture, and people build it in universities where they don’t have time or interest to add too much junk, like some architectures have. So relatively speaking, just because of its pedigree, and age, it's early in the life cycle of complexity. It's a pretty good instruction set, they did a fine job. So if I was just going to say if I want to build a computer really fast today, and I want it to go fast, RISC-V is the easiest one to choose. It’s the simplest one, it has got all the right features, it has got the right top eight instructions that you actually need to optimize for, and it doesn't have too much junk.


https://www.anandtech.com/show/16762/an-ana...person-at-tesla

...and it's not like ARM is a 'bloat free' ISA either icon_idea.gif

RISC-V? Probably, but unless there is a chip that is commercially available with a large enough software library, there's little reason for that ISA to take off. Note that the success of an ISA goes way beyond performance/efficiency; that's why back in the 1990s Intel was able to defeat a lot of alternative ISAs (such as Alpha/MIPS/SPARC, and later even their own Itanium) despite those ISAs being arguably superior to x86, thanks to x86's strong install base as well as the massive economies of scale it offers icon_idea.gif

This post has been edited by chocobo7779: May 11 2024, 04:39 PM
babylon52281
post May 11 2024, 04:35 PM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(chocobo7779 @ May 11 2024, 04:19 PM)
Again, ISA doesn't really matter especially when you consider how complex modern CPUs are.  The x86 bloat is mostly vestigial at the moment and doesn't really affects the ability to make highly efficient chips icon_idea.gif
Well, they do, see AMD APUs on the PS5/Series X consoles for an example (not directly comparable to Apple Silicon as AMD will need to design them with a budget in mind, after all those consoles cost around 500USD and both Sony/Microsoft still have to sell them at a loss and recoup the loss through game sales and subscriptions) icon_idea.gif
*
TSMC doesn't actually play that game. Why? Because they're leading the node-shrink pack, so without cutting prices they already have a line of customers willing to throw money at each new node.

What they do is preferential batch allocation, where the first batches on a node are sold at the highest markups (invariably Apple gets first pick), and then the markup goes down as subsequent batches are fulfilled. Others playing catch-up get charged less, but it's the release of these new nodes that matters, as many companies tie their product launches to a CPU/SoC based on each cutting-edge node iteration. No fool will launch a flagship phone using a CPU based on older nodes; good luck if they do, because it's simply bad marketing.
hashtag2016
post May 11 2024, 04:56 PM

On my way
****
Junior Member
500 posts

Joined: Feb 2016
QUOTE(babylon52281 @ May 11 2024, 03:36 PM)
Lol, fanbois & haters got burned (kena) by the truth and you folks don't like it; well, tough. Yes, you can voice your opinion, but to do that while conveniently ignoring that others have opinions too just means you're a fanboi.

Intel will release an official reply, just as AMD did with their own CPU burn case, that is a given.

Stop with the pissing posts, then maybe people will have some respect for what you say.
*
so what is your problem anyway? which posts are pissing you off? mind explaining? brows.gif devil.gif

p/s: Intel screwing up their own face is not our problem, we are consumers only... brows.gif

This post has been edited by hashtag2016: May 11 2024, 04:58 PM
hashtag2016
post May 11 2024, 05:03 PM

On my way
****
Junior Member
500 posts

Joined: Feb 2016
QUOTE(babylon52281 @ May 11 2024, 04:10 PM)
Fully agree with you; without being tied to x86 legacy, Apple could wipe the slate clean with a new CPU design, and they clearly showed what a real modern CPU uarch can do on a current manufacturing process. Both Intel/AMD x86 will need some newfangled complex & expensive SoC layout or exotic materials to push speeds higher, and all that just to match even the current M3/M4 that is made on the same mature process.

chocobo7779
I don't fully agree on the reason why Apple charges that much; while yes, their CPU SoC is much larger, the cost to make it doesn't scale up exponentially like what you pay. Apple charges waterfish (sucker) prices simply because it's Apple. And at their sheer volume, their per-unit CPU cost isn't all that much different, as these will be inside iPhones too.
*
x86 is a legacy, an important one. I would rather it stay than go.
If x86 is no more.. very likely PC DIY is no more. innocent.gif
babylon52281
post May 11 2024, 05:30 PM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(hashtag2016 @ May 11 2024, 04:56 PM)
so what is your problem anyway? which posts are pissing you off? mind explaining? brows.gif devil.gif

p/s: Intel screwing up their own face is not our problem, we are consumers only... brows.gif
*
Look back at your own postings to know. You made your opinion known, okay, so just leave it at that, or else you're just trolling here
hashtag2016
post May 11 2024, 05:33 PM

On my way
****
Junior Member
500 posts

Joined: Feb 2016
QUOTE(babylon52281 @ May 11 2024, 05:30 PM)
Look back at your own postings to know. You made your opinion known, okay, so just leave it at that, or else you're just trolling here
*
which post? I have many posts recently.. nothing special so far.. brows.gif mega_shok.gif
chocobo7779
post May 11 2024, 11:51 PM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
QUOTE(hashtag2016 @ May 11 2024, 05:03 PM)
x86 is important legacy, an important one. I would rather it stay than gone.
If x86 no no more.. very likely  pc diy no more.  innocent.gif
*
Socketed ARM chips do exist, but yeah you don't have to really worry about the state of x86 right now icon_idea.gif

One of the reasons why x86 couldn't compete with ARM chips on efficiency is that there was really not much innovation and progress going on in the x86 realm (especially power efficiency) for over a decade, due to a severe lack of competition. AMD's Phenom series of chips was lukewarm at best from a performance/price standpoint, and Bulldozer was a huge disaster to the point of driving AMD towards near bankruptcy, which probably led Intel to make only small iterations on their CPUs. It wasn't until 2017 that Ryzen arrived on the scene, and even then AMD didn't return to proper all-around competition until Zen 2 icon_idea.gif

On the other hand, Apple has been designing low-powered ARM chips for iPhones and iPads for over a decade now (they have been dabbling in semiconductors since the early 1980s), and their offerings have significantly outperformed lots of SoCs from Android competitors for many years, even today. By doing so they have gathered quite a lot of know-how on how to design high-performance, low-powered SoCs; coupled with the stagnant x86 market, it's not hard to see why they switched to their in-house silicon (note that the M series chips are not just scaled-up A series chips) icon_idea.gif

The x86 incumbents, by comparison, hadn't really focused on efficiency-minded chips, with their design targets being mostly performance and area efficiency. Power efficiency for x86 was sort of a niche thing back then outside of netbooks/subnotebooks (remember those?) icon_idea.gif

That being said, it seems both AMD/Intel are starting to focus on power efficiency in mobile chips (see Phoenix and the upcoming Strix Point APUs, along with Intel's new Meteor Lake chips). It certainly isn't as groundbreaking as Apple Silicon, but it sure is a good stepping stone after many years of stagnation (note that chip design and manufacturing can be incredibly time-intensive) icon_idea.gif

This post has been edited by chocobo7779: May 11 2024, 11:51 PM
1024kbps
post May 12 2024, 01:47 PM

李素裳
*******
Senior Member
6,010 posts

Joined: Feb 2007



QUOTE(stella_purple @ May 11 2024, 02:42 AM)
amd one is worse, it may turn into fire hazard laugh.gif

user posted image


*
AMD's burnt CPUs were because of OC;
Intel's happened at stock settings.

don't you see the difference?

TristanX
post May 12 2024, 02:42 PM

Where is my stars?
Group Icon
Elite
24,334 posts

Joined: Nov 2004
From: Setapak, Kuala Lumpur


QUOTE(1024kbps @ May 12 2024, 01:47 PM)
AMD's burnt CPUs were because of OC;
Intel's happened at stock settings.

don't you see the difference?
*
SoC voltage issue, not OC.

I think Intel is trying to sell potato chips now.
1024kbps
post May 12 2024, 03:05 PM

李素裳
*******
Senior Member
6,010 posts

Joined: Feb 2007



QUOTE(TristanX @ May 12 2024, 02:42 PM)
SoC voltage issue, not OC.

I think Intel is trying to sell potato chips now.
*
well crap, since both are selling crap consumer products,
i hope the Qualcomm Elite X can stomp both and make something good.

ain't a fan of AMD or Intel but forced to use them lol
stella_purple
post May 12 2024, 05:51 PM

Casual
***
Junior Member
392 posts

Joined: Oct 2011
QUOTE(1024kbps @ May 12 2024, 01:47 PM)
AMD's burnt CPUs were because of OC;
Intel's happened at stock settings.

don't you see the difference?
*
What OC? The AMD one was also at stock settings lol

it's due to them not knowing, or being overconfident about, their own CPU's tolerances to the point where it blows a hole in the substrate
TristanX
post May 12 2024, 08:48 PM

Where is my stars?
Group Icon
Elite
24,334 posts

Joined: Nov 2004
From: Setapak, Kuala Lumpur


QUOTE(1024kbps @ May 12 2024, 03:05 PM)
well crap, since both are selling crap consumer products,
i hope the Qualcomm Elite X can stomp both and make something good.

ain't a fan of AMD or Intel but forced to use them lol
*
None of this would happen if they baked the max safe voltage and power limits into the CPU.
babylon52281
post May 12 2024, 10:40 PM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(1024kbps @ May 12 2024, 01:47 PM)
AMD's burnt CPUs were because of OC;
Intel's happened at stock settings.

don't you see the difference?
*
The stock setting is a 253W power limit; mobos auto-OC and push the CPUs to surpass that. If you manually set the limit to its rated max, no issues. So is that the CPU's fault or the mobo's fault?

And NO, the 7800X3D exploding CPUs were not just due to OC; redditor Skyfishjy had this to say:

user posted image

So NO, the AMD case wasn't better. It was just as bad as Intel's today
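
If you want to check what PL1/PL2 your board is actually enforcing, on Linux the kernel exposes the package limits through the powercap/RAPL sysfs tree (a minimal sketch, assuming the usual intel-rapl:0 package domain; the path may differ or be absent on your machine):

CODE
# Minimal sketch: read PL1/PL2 from Linux's intel_rapl powercap interface.
# Assumes the common sysfs layout; the domain number can differ per system.
from pathlib import Path

rapl = Path("/sys/class/powercap/intel-rapl/intel-rapl:0")

for n in (0, 1):   # constraint 0 = long_term (PL1), 1 = short_term (PL2)
    name = (rapl / f"constraint_{n}_name").read_text().strip()
    watts = int((rapl / f"constraint_{n}_power_limit_uw").read_text()) / 1e6
    print(f"{name}: {watts:.0f} W")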
terradrive
post May 15 2024, 08:04 PM

RRAAAWWRRRRR
******
Senior Member
1,943 posts

Joined: Apr 2005


QUOTE(1024kbps @ May 12 2024, 03:05 PM)
well crap, since both are selling crap consumer products,
i hope the Qualcomm Elite X can stomp both and make something good.

ain't a fan of AMD or Intel but forced to use them lol
*
or buy a cheaper product line like the i5 and lower-end Ryzen lineups. Seems like the high-end models have way more issues while the cheaper CPUs have way fewer issues doh.gif
1024kbps
post May 17 2024, 09:41 PM

李素裳
*******
Senior Member
6,010 posts

Joined: Feb 2007



QUOTE(terradrive @ May 15 2024, 08:04 PM)
or buy a cheaper product line like the i5 and lower-end Ryzen lineups. Seems like the high-end models have way more issues while the cheaper CPUs have way fewer issues doh.gif
*
i never bought the highest-end offering from either; the latest cpu i got was an R7 3700X,
after that i used a lappy exclusively,

don't have much time to use the pc lol, sad adult life sweat.gif
1024kbps
post May 17 2024, 09:50 PM

李素裳
*******
Senior Member
6,010 posts

Joined: Feb 2007



QUOTE(stella_purple @ May 12 2024, 05:51 PM)
What OC? The AMD one was also at stock settings lol

it's due to them not knowing, or being overconfident about, their own CPU's tolerances to the point where it blows a hole in the substrate
*
a one-off issue, right? i don't see any news from major tech sites, unlike Intel's case and the 12VHPWR one

i did mess with OC on my MSI board before; luckily my MSI mobo is a tank and has lasted for many years lol, no cpu died.
it even outlasted my Enermax and SeaSonic PSUs, both died and got RMAed; for an entry board it's really bang for the buck, my next purchase will always be MSI tongue.gif
Mea Culpa
post Jul 14 2024, 04:52 PM

Look at all my stars!!
*******
Senior Member
5,180 posts

Joined: Jan 2009
https://youtu.be/QzHcrbT5D_Y
hdbjhn2
post Jul 16 2024, 04:00 PM

Getting Started
**
Junior Member
267 posts

Joined: Mar 2014
From: Seremban


QUOTE(1024kbps @ Apr 22 2024, 07:56 AM)
ARM and RISC-V aren't too far behind

If both AMD and Intel keep making mistakes they will become history
*
i think it's more like a temporary disruption. People will be more keen to try new and hyped-up products,
like the battery life and stuff, advertised so big but with small gains (with ups and downs in other performance categories); meanwhile Intel and AMD might catch back up a few years later.
It's just more variety.
1024kbps
post Jul 16 2024, 06:42 PM

李素裳
*******
Senior Member
6,010 posts

Joined: Feb 2007



QUOTE(hdbjhn2 @ Jul 16 2024, 04:00 PM)
i think it's more like a temporary disruption. People will be more keen to try new and hyped-up products,
like the battery life and stuff, advertised so big but with small gains (with ups and downs in other performance categories); meanwhile Intel and AMD might catch back up a few years later.
It's just more variety.
*
Both AMD and Intel can catch up on performance, but for desktop you have to use a gigantic cooler;
look at what Intel has done to reach peak performance.

both can't catch up to the efficiency that ARM has.
the majority of Android devices are powered by ARM, and Apple's in-house CPUs are ARM too.
then you have supercomputers powered by ARM as well; scalability and efficiency are what x86 CPUs are lacking.

remember how long you can use your phone or play games without worrying about the battery running empty?
you can't do that with an x86 smartphone, because they basically don't exist; Intel had an Atom-powered Android phone but it failed to make it in the market.
hdbjhn2
post Jul 17 2024, 01:20 AM

Getting Started
**
Junior Member
267 posts

Joined: Mar 2014
From: Seremban


QUOTE(1024kbps @ Jul 16 2024, 06:42 PM)
Both AMD and Intel can catch up on performance, but for desktop you have to use a gigantic cooler;
look at what Intel has done to reach peak performance.

both can't catch up to the efficiency that ARM has.
the majority of Android devices are powered by ARM, and Apple's in-house CPUs are ARM too.
then you have supercomputers powered by ARM as well; scalability and efficiency are what x86 CPUs are lacking.

remember how long you can use your phone or play games without worrying about the battery running empty?
you can't do that with an x86 smartphone, because they basically don't exist; Intel had an Atom-powered Android phone but it failed to make it in the market.
*
Well, effectively, true, that is correct.
But just a few things.

Gigantic coolers are mostly not necessary, actually. That peak performance is more like just to flex 'i got the fastest cpu',
because that eventually gets into people's minds - Intel is faster, AMD is faster - whereas that is only the top end at high boost clocks, blabla.
Point is, only enabling prolonged crazy PL2 boost means high temps, hence a bigger cooler. For a few percent performance drop, but way less power and temp,
one can easily get away with normal coolers.
Also, AMD and Intel make CPUs that work with GPUs and so many other high-power-draw (voltage level) components.
So, they can't make them very low-powered. (Correct me if i'm wrong).
For example, for small speakers u can use the pc audio jack; that signal is enough.
But for gigantic speakers, u need to amplify the signal itself. Same with the cpu talking to the GPU and other higher-power things.

Whereas ARM relies on the iGPU and smaller voltage levels. This is my assumption.
So, we can't expect a single component to cater to both high performance and staying low power too. There is a limit on the lowest voltage a cpu needs to stay on.
Just like a V12 needs more power to idle than a V6.

Other than that, x86 and gaming without emptying the battery - in the end, for what it matters, battery life,
ARM does have its place. I always think about it too.
On a phone/Apple device, right from the wifi module to using Chrome, everything can be done on battery, no coolers.
But on a pc, lol, we need a power supply, fans and cooling, and a cpu that runs at higher power draw & temps.





babylon52281
post Jul 17 2024, 11:43 AM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
Anything that is made must have a set max limit - CPUs, even super hypercars. Makers electronically limit them to 300 or 400 km/h not because the engine or aerodynamics cannot push further, but because other things, i.e. tyres, gearbox, etc., are not durable enough at such high speeds. But what happens when the maker offers you an unlimited hypercar that can go as fast as you can floor the pedal? Well, obviously something else will give if you try to drive like that on a daily basis, rite?

The same logic goes for Intel K-SKU CPUs: the speed limiter is off and the brakes are gone if you don't set any restriction on how far you want to push. If you set your own PL1=PL2=max rated PBP limit, many don't seem to have this CPU degradation problem.

Is it Intel's fault for the unlimited K-SKU CPUs? Yes
Is it the mobo makers' fault for unlimited power settings? Yes
Is it the user's fault for not limiting their CPU? Yes
Is it the market's fault for demanding OC'able CPU versions? Yes
Is it Intel's fault for not telling buyers to get non-K if they don't know what they're doing? Yes

When everyone points fingers at each other, everyone is actually at fault too. Peeps should just buy non-K and leave the K SKUs to wizards like der8auer or Kingpin; they aren't the ones complaining about hardware failures when pushing beyond the limits.
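
For what it's worth, the PL1=PL2 cap doesn't have to be done in the BIOS; on Linux the same powercap files are writable as root (an illustrative sketch only - 253W is the rated PL2 for the 13900K/14900K class, substitute your own chip's rated limit, and the BIOS can still override this on the next boot):

CODE
# Illustrative sketch: cap PL1 = PL2 at the CPU's rated limit via Linux
# powercap (needs root). 253 W matches the 13900K/14900K spec; use your
# own CPU's rated limit instead.
from pathlib import Path

rapl = Path("/sys/class/powercap/intel-rapl/intel-rapl:0")
limit_uw = str(253 * 1_000_000)   # watts -> microwatts

for n in (0, 1):   # constraint 0 = PL1 (long_term), 1 = PL2 (short_term)
    (rapl / f"constraint_{n}_power_limit_uw").write_text(limit_uw)

print("package PL1/PL2 capped at 253 W")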

This post has been edited by babylon52281: Jul 17 2024, 11:48 AM
1024kbps
post Jul 20 2024, 06:46 PM

李素裳
*******
Senior Member
6,010 posts

Joined: Feb 2007



QUOTE(hdbjhn2 @ Jul 17 2024, 01:20 AM)
Well, effectively, that is correct.
But just a few things.

Gigantic coolers are mostly not necessary, actually. That peak performance is more like flexing "I got the fastest CPU",
because that eventually plants the idea in people's minds that Intel is faster, AMD is faster, and so on, whereas that is only the top-end part at high boost clocks.
Point is, only prolonged crazy PL2 boost means high temps, hence the bigger cooler. For a few percent performance drop, but way less power and heat,
one can easily get away with a normal cooler.
Also, AMD and Intel make CPUs that work with GPUs and many other high-power-draw (voltage-level) components,
so they can't make them very low powered (correct me if I'm wrong).
For example, with small speakers you can use the PC audio jack; that signal is enough.
But with gigantic speakers you need to amplify the signal itself. Same with a CPU: to talk to a GPU and other higher-power things.

Whereas ARM relies on an iGPU and smaller voltage levels. This is my assumption.
So we can't expect a single component to cater to both high performance and staying low power. There is a limit on the lowest voltage a CPU needs to stay on,
just like a V12 needs more fuel to idle than a V6.

Other than that, on x86 and gaming without emptying the battery: in the end, for what matters, battery life,
ARM does have its place. I always think about it too.
On a phone or Apple device, everything from the WiFi module to browsing in Chrome can be done on battery, no coolers.
But on a PC, lol, we need a power supply, fans and cooling, and a CPU that runs at higher power draw and temps.
*
Actually, all parties involved with x86 have done a lot of optimization;
e.g. many heavy tasks have been moved to the GPU.
E.g. video decoding: playing 8K VP9/AV1, H.265, etc. is offloaded to the GPU. You can't play those on an old GPU, and the CPU alone is out of the question.
Same with internet browsing: you think your CPU can handle those interactive, full-screen-scrolling websites? They render on the GPU; the CPU takes a back seat now.

The x86 issue is still there because the legacy ISA takes up a lot of transistors while sitting there useless; most of our normal software doesn't use it.
To improve performance, we have to rely on SSDs and high-clock-speed CPUs.
Optimization takes a lot of effort that usually only a genius can pull off (hand-written assembly).

ARM uses little power because it has a smaller ISA taking up the silicon space (hence the CPU is RISC, while x86 is CISC).
ARM can run on anything from mobile devices to supercomputers,
while x86 is geared toward high-performance computing.

You can have a good read here: https://www.redhat.com/en/topics/linux/ARM-vs-x86
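To see the decode-offload point in practice: most ffmpeg builds can list the hardware-acceleration backends they were compiled with. A quick sketch, assuming an ffmpeg binary is installed and on PATH:

CODE
# hwaccel_check.py - list ffmpeg's hardware-acceleration backends (GPU decode paths).
# Assumption: an ffmpeg binary is installed and on PATH.
import shutil
import subprocess

ffmpeg = shutil.which("ffmpeg")
if ffmpeg is None:
    raise SystemExit("ffmpeg not found on PATH")

# `ffmpeg -hwaccels` prints the compiled-in acceleration methods,
# e.g. cuda, vaapi, d3d11va, qsv - the GPU decode routes that spare
# the CPU from chewing through 8K VP9/AV1/H.265 itself.
out = subprocess.run([ffmpeg, "-hide_banner", "-hwaccels"],
                     capture_output=True, text=True, check=True).stdout
print(out.strip())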


babylon52281
post Jul 20 2024, 10:17 PM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(1024kbps @ Jul 20 2024, 06:46 PM)
Actually, all parties involved with x86 have done a lot of optimization;
e.g. many heavy tasks have been moved to the GPU.
E.g. video decoding: playing 8K VP9/AV1, H.265, etc. is offloaded to the GPU. You can't play those on an old GPU, and the CPU alone is out of the question.
Same with internet browsing: you think your CPU can handle those interactive, full-screen-scrolling websites? They render on the GPU; the CPU takes a back seat now.

The x86 issue is still there because the legacy ISA takes up a lot of transistors while sitting there useless; most of our normal software doesn't use it.
To improve performance, we have to rely on SSDs and high-clock-speed CPUs.
Optimization takes a lot of effort that usually only a genius can pull off (hand-written assembly).

ARM uses little power because it has a smaller ISA taking up the silicon space (hence the CPU is RISC, while x86 is CISC).
ARM can run on anything from mobile devices to supercomputers,
while x86 is geared toward high-performance computing.

You can have a good read here: https://www.redhat.com/en/topics/linux/ARM-vs-x86
*
x86 also has a lot of legacy baggage in tow. Imagine that a 14900K can still technically run programs written for the original 8086; that's a crazy long time to keep compatibility. Much software has already ditched 32-bit, so for x86 to move forward, Lunar Lake at the least should drop the 32-bit ISA and run purely 64-bit, with software emulation handling any remaining 32-bit programs.

Yes, Intel tried that with Itanium, but the emulation layer was terribly overcomplex, and the CPUs back then weren't fast enough for seamless software translation. These days CPUs are so efficient and powerful compared to the IA-64 era that emulating less demanding 32-bit code shouldn't be a problem anymore.
hdbjhn2
post Jul 20 2024, 11:27 PM

Getting Started
**
Junior Member
267 posts

Joined: Mar 2014
From: Seremban


QUOTE(1024kbps @ Jul 20 2024, 06:46 PM)
Actually, all parties involved with x86 have done a lot of optimization;
e.g. many heavy tasks have been moved to the GPU.
E.g. video decoding: playing 8K VP9/AV1, H.265, etc. is offloaded to the GPU. You can't play those on an old GPU, and the CPU alone is out of the question.
Same with internet browsing: you think your CPU can handle those interactive, full-screen-scrolling websites? They render on the GPU; the CPU takes a back seat now.

The x86 issue is still there because the legacy ISA takes up a lot of transistors while sitting there useless; most of our normal software doesn't use it.
To improve performance, we have to rely on SSDs and high-clock-speed CPUs.
Optimization takes a lot of effort that usually only a genius can pull off (hand-written assembly).

ARM uses little power because it has a smaller ISA taking up the silicon space (hence the CPU is RISC, while x86 is CISC).
ARM can run on anything from mobile devices to supercomputers,
while x86 is geared toward high-performance computing.

You can have a good read here: https://www.redhat.com/en/topics/linux/ARM-vs-x86
*
Thank you for sharing that, understood.
1024kbps
post Jul 22 2024, 02:06 AM

李素裳
*******
Senior Member
6,010 posts

Joined: Feb 2007



QUOTE(babylon52281 @ Jul 20 2024, 10:17 PM)
x86 also has a lot of legacy baggage in tow. Imagine that a 14900K can still technically run programs written for the original 8086; that's a crazy long time to keep compatibility. Much software has already ditched 32-bit, so for x86 to move forward, Lunar Lake at the least should drop the 32-bit ISA and run purely 64-bit, with software emulation handling any remaining 32-bit programs.

Yes, Intel tried that with Itanium, but the emulation layer was terribly overcomplex, and the CPUs back then weren't fast enough for seamless software translation. These days CPUs are so efficient and powerful compared to the IA-64 era that emulating less demanding 32-bit code shouldn't be a problem anymore.
*
Legacy instructions are fine; the issues are:
the software we use doesn't really use them,
and optimization takes a lot of time, hence the first version of most new software or games usually just works, with optimization coming later.

Unless you use it for audio/video production:
https://www.rarewares.org/ogg-oggdropxpd.php
You can see some of the Vorbis audio encoders come in generic, P4 and Athlon builds; these are optimized for different CPUs.
The Lancer build, written by a Japanese developer, is supercharged; Lancer is one of the best examples of software really using the CPU's instructions and multithreading capabilities.
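Those per-CPU builds basically just pick a code path by CPU feature flags; nowadays software usually does the same dispatch at runtime. A rough sketch of the idea (Linux-only, parses /proc/cpuinfo; the flag names are the kernel's):

CODE
# pick_codepath.py - rough sketch of runtime CPU dispatch, the idea behind
# "generic / P4 / Athlon" encoder builds. Linux-only: parses /proc/cpuinfo.
def cpu_flags() -> set[str]:
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

def best_codepath(flags: set[str]) -> str:
    # Checked best-first, the way a dispatcher would.
    for flag, path in [("avx2", "AVX2"), ("sse4_2", "SSE4.2"),
                       ("sse2", "SSE2"), ("mmx", "MMX")]:
        if flag in flags:
            return path
    return "generic (scalar)"

if __name__ == "__main__":
    print("Encoder would use the", best_codepath(cpu_flags()), "code path")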

Then the video codecs behind OBS and MPC-HC also run as a CPU and GPU combo.
x86 is still good for this kind of workload, but for other tasks it strangely uses a lot of power while the performance is poor.
Another thing you can blame is Windows 11; it's such a power hog.
I use Edge regularly and it's no better than Chrome.

Microsoft and their x86 partners should add a low-power CPU such as ARM for light use.
If Sony could use both an ARM chip and AMD Jaguar cores in the PS4, why can't a Windows PC?

On average the idle power on my AMD system is really poor; even at idle there is still load. Both AMD and Windows 11 do a shoddy job here.
My Intel tablet can idle for 3 days on battery.
babylon52281
post Jul 22 2024, 08:40 AM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(1024kbps @ Jul 22 2024, 02:06 AM)
Legacy instructions are fine; the issues are:
the software we use doesn't really use them,
and optimization takes a lot of time, hence the first version of most new software or games usually just works, with optimization coming later.

Unless you use it for audio/video production:
https://www.rarewares.org/ogg-oggdropxpd.php
You can see some of the Vorbis audio encoders come in generic, P4 and Athlon builds; these are optimized for different CPUs.
The Lancer build, written by a Japanese developer, is supercharged; Lancer is one of the best examples of software really using the CPU's instructions and multithreading capabilities.

Then the video codecs behind OBS and MPC-HC also run as a CPU and GPU combo.
x86 is still good for this kind of workload, but for other tasks it strangely uses a lot of power while the performance is poor.
Another thing you can blame is Windows 11; it's such a power hog.
I use Edge regularly and it's no better than Chrome.

Microsoft and their x86 partners should add a low-power CPU such as ARM for light use.
If Sony could use both an ARM chip and AMD Jaguar cores in the PS4, why can't a Windows PC?

On average the idle power on my AMD system is really poor; even at idle there is still load. Both AMD and Windows 11 do a shoddy job here.
My Intel tablet can idle for 3 days on battery.
*
On a hardware level, legacy support eats up transistors that could be put to more modern use rather than wasting away. Directly, the die is bigger than it needs to be; indirectly, it might be why x86 has the poorer power efficiency, as it needs to keep this legacy hardware "warm" in case it is called.

Indeed, Win11 makes everything worse; too bad we need to change by next year anyhow, while Win12 is still not here yet (and could be worse than 11!). As for Edge, it's not surprising since it's based on the same Chromium now. Use Firefox or Opera instead.

If you want hybrid low-power use, Intel has the Alder/Raptor Lake design, where the E-cores do perform efficiently, but the problem is the big P-core, which loses in IPC to AM5. And rather than increasing the count, it stays at 8 P-cores while Intel keeps spamming E-cores, which falls short in heavy multithreaded workloads against AMD's R9s. IMHO, just running Windows and other background tasks only needs 4-8 E-cores, so Raptor Lake should have been configured as 10 P-cores + 8 E-cores, giving the R9s a good fight.
SUSMilfuntastic
post Jul 25 2024, 05:22 AM

Real man help each other not SUS one another
****
Junior Member
559 posts

Joined: Dec 2022
From: Chyna builds kingdom instead of BS about freedom
Anybody using the Snapdragon laptops? How do they compare to Intel?
babylon52281
post Jul 25 2024, 10:50 AM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(Milfuntastic @ Jul 25 2024, 05:22 AM)
Anybody using the Snapdragon laptops? How do they compare to Intel?
*
Snapdragon has been underwhelming so far; I think it's because Win11 is not designed for such a low-power SoC. Rather than Windows, it needs a native ARM-based OS.
SUSMilfuntastic
post Jul 25 2024, 12:00 PM

Real man help each other not SUS one another
****
Junior Member
559 posts

Joined: Dec 2022
From: Chyna builds kingdom instead of BS about freedom
QUOTE(babylon52281 @ Jul 25 2024, 10:50 AM)
Snapdragon has been underwhelming so far; I think it's because Win11 is not designed for such a low-power SoC. Rather than Windows, it needs a native ARM-based OS.
*
Qualcomm says their Snapdragon does 45 TOPS. I wonder what 45 TOPS does in real life; does it need an online WiFi connection to use the 45 TOPS? This Windows-on-Snapdragon AI stuff is really confusing.
andrekua2
post Jul 27 2024, 06:18 PM

10k Club
********
All Stars
13,465 posts

Joined: Jan 2012


Does this affect desktop processors only, or mobile too?

Just bought a gaming laptop with an i5-13500HX for my son. It has never crashed or anything. Maybe it only affects the i7 and i9.
babylon52281
post Jul 27 2024, 07:35 PM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(andrekua2 @ Jul 27 2024, 06:18 PM)
Does this affect desktop processors only, or mobile too?

Just bought a gaming laptop with an i5-13500HX for my son. It has never crashed or anything. Maybe it only affects the i7 and i9.
*
It's still an HX CPU, so it's not such a good idea to let it boost to 157 W in such a small chassis.
Baconateer
post Jul 27 2024, 07:37 PM

Meh..... (TM)
*******
Senior Member
5,088 posts

Joined: Jun 2013
From: Blue Planet


QUOTE(andrekua2 @ Jul 27 2024, 06:18 PM)
Does this affect desktop processors only, or mobile too?

Just bought a gaming laptop with an i5-13500HX for my son. It has never crashed or anything. Maybe it only affects the i7 and i9.
*
Intel said only desktop.
andrekua2
post Jul 27 2024, 11:12 PM

10k Club
********
All Stars
13,465 posts

Joined: Jan 2012


QUOTE(Baconateer @ Jul 27 2024, 07:37 PM)
intel said only desktop
*
Not really. It seems mobile is also affected, but currently the i9 and some i7s are the prime suspects. No one has mentioned the i5, though the increased E-core count in the HX does worry me a little.
Baconateer
post Jul 27 2024, 11:16 PM

Meh..... (TM)
*******
Senior Member
5,088 posts

Joined: Jun 2013
From: Blue Planet


QUOTE(andrekua2 @ Jul 27 2024, 11:12 PM)
Not really. It seems mobile is also affected, but currently the i9 and some i7s are the prime suspects. No one has mentioned the i5, though the increased E-core count in the HX does worry me a little.
*
Officially they said desktop only.

i5, i7, i9 are all affected,

but they degrade differently.
babylon52281
post Jul 28 2024, 10:19 AM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(Baconateer @ Jul 27 2024, 11:16 PM)
Officially they said desktop only.

i5, i7, i9 are all affected,

but they degrade differently.
*
CPU degradation from excess heat & voltage will affect them all to some degree, but on laptops, with their poorer heat transfer, the effects could show up earlier. That is, provided the HX CPU behaves like a K-SKU and ramps above the max PBP baseline. As with desktop K-SKUs, my advice would be to manually turn off auto OC and limit PL1 & PL2 to the official max TDP.
Baconateer
post Jul 28 2024, 11:29 AM

Meh..... (TM)
*******
Senior Member
5,088 posts

Joined: Jun 2013
From: Blue Planet


QUOTE(babylon52281 @ Jul 28 2024, 10:19 AM)
CPU degradation from excess heat & voltage will affect them all to some degree, but on laptops, with their poorer heat transfer, the effects could show up earlier. That is, provided the HX CPU behaves like a K-SKU and ramps above the max PBP baseline. As with desktop K-SKUs, my advice would be to manually turn off auto OC and limit PL1 & PL2 to the official max TDP.
*
From what I read, there's no permanent fix because it's a hardware issue.

What Intel is doing is just delaying the degradation as much as possible.
babylon52281
post Jul 28 2024, 03:26 PM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(Baconateer @ Jul 28 2024, 11:29 AM)
From what I read, there's no permanent fix because it's a hardware issue.

What Intel is doing is just delaying the degradation as much as possible.
*
Yes and no. The 'fix' will prevent degradation on fresh builds or CPUs yet to be installed (make sure to update the BIOS), as long as you ALSO disable the mobo's auto OC, since that will still break through the limits set by Intel anyway. Here is what I mean (see timestamp 17:16)


But for those already with problems: once it's broken there is no repair, just RMA it.
Baconateer
post Jul 28 2024, 03:31 PM

Meh..... (TM)
*******
Senior Member
5,088 posts

Joined: Jun 2013
From: Blue Planet


QUOTE(babylon52281 @ Jul 28 2024, 03:26 PM)
Yes and no. The 'fix' will prevent degradation on fresh builds or CPUs yet to be installed (make sure to update the BIOS), as long as you ALSO disable the mobo's auto OC, since that will still break through the limits set by Intel anyway. Here is what I mean (see timestamp 17:16)


But for those already with problems: once it's broken there is no repair, just RMA it.
*
At this point, if someone really wants to buy Intel,

just stick to 12th gen, or better yet, go for AMD.
babylon52281
post Jul 28 2024, 04:04 PM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(Baconateer @ Jul 28 2024, 03:31 PM)
At this point, if someone really wants to buy Intel,

just stick to 12th gen, or better yet, go for AMD.
*
13th/14th gen non-K don't have these issues...

And AMD also has its own problems

Baconateer
post Jul 28 2024, 04:08 PM

Meh..... (TM)
*******
Senior Member
5,088 posts

Joined: Jun 2013
From: Blue Planet


QUOTE(babylon52281 @ Jul 28 2024, 04:04 PM)
13th/14th gen non-K don't have these issues...

And AMD also has its own problems

*
Why take the chance?

Also, the AMD 9000 series is not released yet,

so we don't really know.

But so far the 7000 series doesn't have major issues like Intel's,

thus AMD is the safer route.
SUSifourtos
post Jul 28 2024, 04:14 PM

Look at all my stars!!
*******
Senior Member
2,256 posts

Joined: Feb 2012



QUOTE(Milfuntastic @ Jul 25 2024, 12:00 PM)
Qualcomm says their Snapdragon does 45 TOPS. I wonder what 45 TOPS does in real life; does it need an online WiFi connection to use the 45 TOPS? This Windows-on-Snapdragon AI stuff is really confusing.
*
WiFi?? In what sense does TOPS have anything to do with WiFi and the Internet??


1. TOPS = Trillions of Operations Per Second, something like TFLOPS on a GPU/CPU.

2. The WHOLE POINT of an AI processor in an SoC = run AI calculations LOCALLY.

3. The higher the TOPS, the more responsive the AI apps.


What's confusing????

This is already AS PLAIN AS POSSIBLE...
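To put a rough number on what running locally at 45 TOPS means: a common back-of-envelope for transformer inference is ~2 operations per weight per generated token. A sketch with a hypothetical 7B-parameter local model (and ignoring memory bandwidth, which in practice usually dominates):

CODE
# tops_envelope.py - back-of-envelope: what 45 TOPS means for local AI.
# Hypothetical numbers; ignores memory bandwidth, which is usually the real limit.
NPU_TOPS = 45e12                   # 45 trillion (INT8) ops/second, the headline figure
MODEL_PARAMS = 7e9                 # assume a 7B-parameter local LLM
OPS_PER_TOKEN = 2 * MODEL_PARAMS   # ~2 ops (multiply + add) per weight per token

seconds_per_token = OPS_PER_TOKEN / NPU_TOPS
print(f"Ideal compute time: {seconds_per_token * 1e3:.2f} ms/token "
      f"(~{1 / seconds_per_token:.0f} tokens/s) - all offline, no WiFi involved")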
babylon52281
post Jul 28 2024, 04:59 PM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(Baconateer @ Jul 28 2024, 04:08 PM)
Why take the chance?

Also, the AMD 9000 series is not released yet,

so we don't really know.

But so far the 7000 series doesn't have major issues like Intel's,

thus AMD is the safer route.
*
The 7000 series has no major issues? Don't believe all the spiel the fanboys tell you


And the 9000 series was delayed due to bugs caught at the very last minute, even post-shipment. What else has not been found yet? As you said, we don't really know...

There are no "safer" routes. Want safe & proven? Go 12th gen or AM4.
babylon52281
post Jul 28 2024, 05:05 PM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(ifourtos @ Jul 28 2024, 04:14 PM)
WiFi?? In what sense does TOPS have anything to do with WiFi and the Internet??
1. TOPS = Trillions of Operations Per Second, something like TFLOPS on a GPU/CPU.

2. The WHOLE POINT of an AI processor in an SoC = run AI calculations LOCALLY.

3. The higher the TOPS, the more responsive the AI apps.
What's confusing????

This is already AS PLAIN AS POSSIBLE...
*
I heard some hardware (recent Xeons, if I'm not mistaken) needs a special software key to unlock its full potential, so could it be that certain hardware needs a persistent internet connection for full access? But that would be really scummy, similar to many AAA games.
Baconateer
post Jul 28 2024, 05:10 PM

Meh..... (TM)
*******
Senior Member
5,088 posts

Joined: Jun 2013
From: Blue Planet


QUOTE(babylon52281 @ Jul 28 2024, 04:59 PM)
The 7000 series has no major issues? Don't believe all the spiel the fanboys tell you


And the 9000 series was delayed due to bugs caught at the very last minute, even post-shipment. What else has not been found yet? As you said, we don't really know...

There are no "safer" routes. Want safe & proven? Go 12th gen or AM4.
*
I forgot about this.

But it seems AMD has fixed that issue.

And it only affected the 7800X3D, right?
SUSifourtos
post Jul 28 2024, 05:28 PM

Look at all my stars!!
*******
Senior Member
2,256 posts

Joined: Feb 2012



QUOTE(babylon52281 @ Jul 28 2024, 05:05 PM)
I heard some hardware (recent Xeons, if I'm not mistaken) needs a special software key to unlock its full potential, so could it be that certain hardware needs a persistent internet connection for full access? But that would be really scummy, similar to many AAA games.
*
Stop confusing yourself with inaccurate bits of info that don't link together.

1. Intel has a special plan in the server SoC market: they lock certain processor features behind a paywall (Intel's "On Demand" licensing).

2. Servers are always working ONLINE (connected).


Even Intel itself does not apply this plan to the PC/mobile market.
babylon52281
post Jul 28 2024, 07:01 PM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(ifourtos @ Jul 28 2024, 05:28 PM)
Stop confusing yourself with inaccurate bits of info that don't link together.

1. Intel has a special plan in the server SoC market: they lock certain processor features behind a paywall (Intel's "On Demand" licensing).

2. Servers are always working ONLINE (connected).
Even Intel itself does not apply this plan to the PC/mobile market.
*
Companies like to milk as much money from customers as they can. If they could do it to the server market, who's to say they won't try it with the consumer market?
The game industry is a prime example: what used to be fluff add-on DLC in some games can now be the difference between winning and losing. While it may not be true for now, it's uncertain what the near future holds for CPUs or even GPUs (a possible future might sell you an RTX 9900 for the price of a 9800, but make you pay a yearly subscription to unlock it to a 9900).
babylon52281
post Jul 28 2024, 07:03 PM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(Baconateer @ Jul 28 2024, 05:10 PM)
I forgot about this.

But it seems AMD has fixed that issue.

And it only affected the 7800X3D, right?
*
It took them months to fix, last I heard. And it affected all the X3Ds, though of course the 7800X3D was hit most often since it's the best seller. Well, if one can forgive AMD, surely the same goes for Intel, right?
Baconateer
post Jul 28 2024, 07:12 PM

Meh..... (TM)
*******
Senior Member
5,088 posts

Joined: Jun 2013
From: Blue Planet


QUOTE(babylon52281 @ Jul 28 2024, 07:03 PM)
It took them months to fix last I heard. And it affects all X3Ds but of course 7800X3D are most affected due to best seller. Well if one can forgive AMD certainly same too for Intel, rite?
*
If Intel can fix this permanently like AMD did, then yes.

If not, Intel should do a recall or offer free replacements.
babylon52281
post Jul 28 2024, 07:22 PM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(Baconateer @ Jul 28 2024, 07:12 PM)
If Intel can fix this permanently like AMD did, then yes.

If not, Intel should do a recall or offer free replacements.
*
Intel, like AMD, is a large corporation facing lawsuit risk if this isn't fixed permanently. Let's wait and see once the microcode rolls out; meanwhile, no need to wait for the fix to arrive. Just pull everything back like I said above and it will help save the CPU sooner.
SUSifourtos
post Jul 28 2024, 08:14 PM

Look at all my stars!!
*******
Senior Member
2,256 posts

Joined: Feb 2012



QUOTE(babylon52281 @ Jul 28 2024, 07:01 PM)
Companies like to milk as much money from customers as they can. If they could do it to the server market, who's to say they won't try it with the consumer market?
The game industry is a prime example: what used to be fluff add-on DLC in some games can now be the difference between winning and losing. While it may not be true for now, it's uncertain what the near future holds for CPUs or even GPUs (a possible future might sell you an RTX 9900 for the price of a 9800, but make you pay a yearly subscription to unlock it to a 9900).
*
An RTX 9900 card, with all the required VRMs, MOSFETs, and RAM,

downgraded to a 9800 and sold to users at the 9800 price,

and then they ask whether you want to unlock the 9900??


You need to unlock IQ first. Please.
Mea Culpa
post Jul 28 2024, 09:16 PM

Look at all my stars!!
*******
Senior Member
5,180 posts

Joined: Jan 2009
QUOTE(Baconateer @ Jul 28 2024, 07:12 PM)
If Intel can fix this permanently like AMD did, then yes.

If not, Intel should do a recall or offer free replacements.
*
Looks like a NO.

https://www.pcmag.com/news/intel-hints-13th...ermanent-damage

Keyword: "preventative",

not "fix".

This post has been edited by Mea Culpa: Jul 28 2024, 09:19 PM
babylon52281
post Jul 28 2024, 10:04 PM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(ifourtos @ Jul 28 2024, 08:14 PM)
An RTX 9900 card, with all the required VRMs, MOSFETs, and RAM,

downgraded to a 9800 and sold to users at the 9800 price,

and then they ask whether you want to unlock the 9900??
You need to unlock IQ first. Please.
*
Tell that to Intel. Seems they didn't unlock much IQ either?
Mea Culpa
post Aug 1 2024, 04:14 PM

Look at all my stars!!
*******
Senior Member
5,180 posts

Joined: Jan 2009
Class action lawsuits incoming.

https://www.tomshardware.com/pc-components/...tability-issues
Maxieos
post Aug 2 2024, 10:47 PM

Look at all my stars!!
*******
Senior Member
3,754 posts

Joined: May 2008
Saw this news, but what are the crash symptoms currently? Or does it just BSOD at some point and then the PC cannot start?

How much does it affect a company using all HP prebuilds with Intel 13th/14th gen?
https://forum.lowyat.net/topic/5473031 , Intel extends the warranty by another 2 years?

babylon52281 So which option is better for a desktop user: getting AMD or 12th gen?
defaultname365
post Aug 3 2024, 12:52 PM

Windows® 8.1 | Xbox 360™ | PlayStation® 4
******
Senior Member
1,098 posts

Joined: Nov 2006
Ouch. If you're an Intel fan, look away (or look elsewhere).


kingkingyyk
post Aug 3 2024, 02:03 PM

10k Club
Group Icon
Elite
15,694 posts

Joined: Mar 2008
QUOTE(babylon52281 @ Jul 28 2024, 07:22 PM)
Intel, like AMD, is a large corporation facing lawsuit risk if this isn't fixed permanently. Let's wait and see once the microcode rolls out; meanwhile, no need to wait for the fix to arrive. Just pull everything back like I said above and it will help save the CPU sooner.
*
They don't need to fix it permanently. Their liability is the warranty period; as long as they provide replacements, they will be fine.
babylon52281
post Aug 4 2024, 01:19 AM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(Maxieos @ Aug 2 2024, 10:47 PM)
Saw this news, but what are the crash symptoms currently? Or does it just BSOD at some point and then the PC cannot start?

How much does it affect a company using all HP prebuilds with Intel 13th/14th gen?
https://forum.lowyat.net/topic/5473031 , Intel extends the warranty by another 2 years?

babylon52281 So which option is better for a desktop user: getting AMD or 12th gen?
*
QUOTE(kingkingyyk @ Aug 3 2024, 02:03 PM)
They don't need to fix it permanently. Their liability is the warranty period; as long as they provide replacements, they will be fine.
*
I will reply to both with the same answer. Basically, if I were to trust the internet, then AMD would have been finished by their own SoC overvoltage burn issue, as well as by the conspiracies around the 9000 series' delayed launch, just as Intel would be by this 13th/14th gen issue.

Architecturally, 13th/14th gen RPL isn't that much different from 12th gen ADL, so if you can get a good deal on a 12700/12900, why not; the price beats later-gen i5 CPUs. So get it for the price/performance rather than out of any concern.

If you're a regular user with no intention to OC, get the regular non-K 13th/14th gen. The 65 W base power limit is there to protect it. Even if you unlock the PL2 limit, just remember to disable mobo auto OC and set PL1 & PL2 to Intel's PBP limit.

If you're set on OCing and you realise that's why you got a K-SKU, then you should know what you're doing and how to minimise the degradation effects. So it's a risk you knowingly take, yeah.

Intel's extended warranty means squat if users are too lazy to do the above and mobos continue to push those CPUs beyond the limit.

As for options, AM4 is one, but it lags LGA1700 in power efficiency and continues to be bugged by a less stable platform. AM5 is still overpriced due to mobo & DDR5 costs.

This post has been edited by babylon52281: Aug 4 2024, 01:20 AM
Erase
post Aug 4 2024, 04:55 AM

New Member
*
Junior Member
11 posts

Joined: Oct 2019
I'm not good with details or technical things,

but I play a game whose developer refuses to fix a bug that hackers take advantage of all the time.

In the end, when the hackers went to extremes and crashed all players mid-game, my SSD suddenly died after a while. It was a bundled, no-name OEM SSD.

Later I changed to a reliable SSD. The hackers started the crashing again; I've survived countless crashes daily, and that SSD is still running now.

A defect is a defect, no need to give excuses. I came across a post from someone who followed all of Intel's safety settings, yet a recent Windows update totally crashed his computer. He's totally pissed off.


kingkingyyk
post Aug 4 2024, 11:01 AM

10k Club
Group Icon
Elite
15,694 posts

Joined: Mar 2008
QUOTE(babylon52281 @ Aug 4 2024, 01:19 AM)
I will reply to both with the same answer. Basically, if I were to trust the internet, then AMD would have been finished by their own SoC overvoltage burn issue, as well as by the conspiracies around the 9000 series' delayed launch, just as Intel would be by this 13th/14th gen issue.

Architecturally, 13th/14th gen RPL isn't that much different from 12th gen ADL, so if you can get a good deal on a 12700/12900, why not; the price beats later-gen i5 CPUs. So get it for the price/performance rather than out of any concern.

If you're a regular user with no intention to OC, get the regular non-K 13th/14th gen. The 65 W base power limit is there to protect it. Even if you unlock the PL2 limit, just remember to disable mobo auto OC and set PL1 & PL2 to Intel's PBP limit.

If you're set on OCing and you realise that's why you got a K-SKU, then you should know what you're doing and how to minimise the degradation effects. So it's a risk you knowingly take, yeah.

Intel's extended warranty means squat if users are too lazy to do the above and mobos continue to push those CPUs beyond the limit.

As for options, AM4 is one, but it lags LGA1700 in power efficiency and continues to be bugged by a less stable platform. AM5 is still overpriced due to mobo & DDR5 costs.
*
As far as I'm concerned, the X3D's efficiency is top notch, but it would be too slow if the user's usage pattern doesn't make use of the cache.
AM5 boards are priced decently now, DDR5 too. I would rather get a 7500F and a cheap A620 (and stay at 65 W), or an okay-ish B650 for the chance to run a newer-gen processor in the future, at a mid-level budget. wink.gif AM4 is still unbeatable on a lower budget, i.e. if you're limited to a 5500/5600.

---

The regular user (and yes, that is the majority of users) doesn't set the PL levels manually; they use the default settings. It would be easier if mobos provided a preset (one-click stable, as opposed to one-click OC) to apply all the needed changes. However, imposing this limit will hurt boost a lot, making the system feel less responsive. I would rather Intel do a manufacturing revision or stricter binning, then replace affected users' chips with the new stepping.



This post has been edited by kingkingyyk: Aug 4 2024, 11:12 AM
babylon52281
post Aug 4 2024, 11:11 AM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(Erase @ Aug 4 2024, 04:55 AM)
I'm not good with details or technical things,

but I play a game whose developer refuses to fix a bug that hackers take advantage of all the time.

In the end, when the hackers went to extremes and crashed all players mid-game, my SSD suddenly died after a while. It was a bundled, no-name OEM SSD.

Later I changed to a reliable SSD. The hackers started the crashing again; I've survived countless crashes daily, and that SSD is still running now.

A defect is a defect, no need to give excuses. I came across a post from someone who followed all of Intel's safety settings, yet a recent Windows update totally crashed his computer. He's totally pissed off.
*
Err, a Windows update crash has very little to do with the CPU, so blame should be given, but to the right causes. It's the same as people blaming Microsoft for the CrowdStrike outage: while it's easy to blame Windows for being too sensitive to kernel faults, the blame should go to CrowdStrike for their buggy update.

My two cents from experience: SSDs seem very sensitive to hard crashes. I had an MLC drive that was supposed to be more durable, but it died after a few Windows crashes. If your game is this sensitive, I suggest running it from an HDD, as it takes less damage from crashes; the HDDs connected alongside that MLC drive outlasted it and are still in use today.
kingkingyyk
post Aug 4 2024, 11:14 AM

10k Club
Group Icon
Elite
15,694 posts

Joined: Mar 2008
QUOTE(babylon52281 @ Aug 4 2024, 11:11 AM)
My two cents from experience: SSDs seem very sensitive to hard crashes. I had an MLC drive that was supposed to be more durable, but it died after a few Windows crashes.
*
Indeed it is. That's why you will find supercapacitors in enterprise-level SSDs: they give the controller some time to flush the data back to the NAND. wink.gif
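That flush window is also something software can defend against: if an application forces its critical writes out of the volatile caches, a hard reset loses much less. A minimal sketch of the usual pattern (the filename is just an example):

CODE
# durable_write.py - minimal sketch: force a critical write through volatile caches
# so a hard crash/power cut loses less. (Enterprise SSD supercaps protect the
# controller's own cache; fsync covers the OS page cache above it.)
import os

def durable_write(path: str, data: bytes) -> None:
    # Write to a temp file, flush + fsync it, then atomically rename into place.
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()             # push Python's buffer into the OS page cache
        os.fsync(f.fileno())  # ask the OS to push it down to the device
    os.replace(tmp, path)     # atomic on POSIX: readers see old or new, never half

if __name__ == "__main__":
    durable_write("savegame.dat", b"player-state-v2")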
babylon52281
post Aug 4 2024, 11:35 AM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(kingkingyyk @ Aug 4 2024, 11:01 AM)
As far as I'm concerned, the X3D's efficiency is top notch, but it would be too slow if the user's usage pattern doesn't make use of the cache.
AM5 boards are priced decently now, DDR5 too. I would rather get a 7500F and a cheap A620 (and stay at 65 W), or an okay-ish B650 for the chance to run a newer-gen processor in the future, at a mid-level budget. wink.gif AM4 is still unbeatable on a lower budget, i.e. if you're limited to a 5500/5600.

---

The regular user (and yes, that is the majority of users) doesn't set the PL levels manually; they use the default settings. It would be easier if mobos provided a preset (one-click stable, as opposed to one-click OC) to apply all the needed changes. However, imposing this limit will hurt boost a lot, making the system feel less responsive. I would rather Intel do a manufacturing revision or stricter binning, then replace affected users' chips with the new stepping.
*
From what I read, the X3D has poorer idle power and higher power usage in light use, understandable as it's a gamer CPU rather than a universal CPU like Intel's monolithic design. My 12700F draws ~3 W in internet use; the X3D, I read, does 3x more. Gaming performance per watt does beat Intel's, which is where Ryzen does better. If the usage doesn't benefit from the cache, it's no better or worse than the non-X3D versions. It's good in its niche role though.

Decent AM5 mobos are still not as cheap as decent LGA1700 mobos though, and Intel can still leverage cheaper but speedy DDR4. DDR5 doesn't have a speed advantage over its poorer latency until it hits 6000 MHz, which is still pricier than DDR4. And if you're getting an A-series mobo, it will really hamper CPU upgradability due to poorer VRMs; I would not recommend pairing it with a current or future X3D, for example. A good starter for upgrade purposes is a B650 mobo with decent VRMs. Much like Intel H-mobos, AMD A-mobos are meant for office use with little to no intention of a CPU upgrade.

But if one has the money for a decent build, yeah, go for AM5. Intel Arrow Lake will be a new evolution, so it will have issues to resolve; wait for Lunar Lake at least, once their chiplet tech has matured.

As for preference, I would rather Intel put locks on their mobo partners' eagerness to push CPUs beyond limits (I'm looking at Asus and MSI as the worst). Also, Intel should have hard limits even on K-SKUs that don't allow them to be driven beyond spec.
babylon52281
post Aug 4 2024, 11:43 AM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
Here is a decent AM5 example from my lazy reco list.
Gaming PC Level 1 (USD $1000 / <RM5,000), prices based on USD, converted
CPU: AMD Ryzen 7 7600X
CPU Cooler: Thermalright Assassin X120 Refined SE
Motherboard: MSI PRO B650-S WiFi Pro / Asrock B650M-HDV
RAM: Corsair Vengeance LPX 32GB (2x16GB) DDR5 6000MHz CL36
Storage: WD Blue SN850 NVMe SSD 1TB
Graphics Card: MSI Gaming GeForce RTX 4060 Ti Ventus 8GB (16GB if budget can fit)
Alternative Graphics Card option: XFX Speedster RX 7700 XT Black Gaming
Case: Corsair 4000D AIRFLOW
PSU: Corsair RM750x, 750W, Cybenetics Gold, Fully Modular
1024kbps
post Aug 4 2024, 11:53 AM

李素裳
*******
Senior Member
6,010 posts

Joined: Feb 2007



QUOTE(kingkingyyk @ Aug 4 2024, 11:14 AM)
Indeed it is. That's why you will find super capacitors in enterprise level SSDs to give the controller some time to flush the data back to the NAND. wink.gif
*
My old PC and laptop experienced freezes, and I would just hard reset; the old one has a 960 EVO, the new one a Micron SSD.
Both SSDs survived even though I hard reset every day, lol.

The PC hard crashed due to PSU issues (a well-used Enermax, and a Seasonic) and eventually kicked the bucket.
1024kbps
post Aug 4 2024, 11:57 AM

李素裳
*******
Senior Member
6,010 posts

Joined: Feb 2007



QUOTE(babylon52281 @ Aug 4 2024, 11:11 AM)
Err, a Windows update crash has very little to do with the CPU, so blame should be given, but to the right causes. It's the same as people blaming Microsoft for the CrowdStrike outage: while it's easy to blame Windows for being too sensitive to kernel faults, the blame should go to CrowdStrike for their buggy update.

My two cents from experience: SSDs seem very sensitive to hard crashes. I had an MLC drive that was supposed to be more durable, but it died after a few Windows crashes. If your game is this sensitive, I suggest running it from an HDD, as it takes less damage from crashes; the HDDs connected alongside that MLC drive outlasted it and are still in use today.
*
Older HDDs tend to last longer due to lighter loads, e.g. single platter; my Hitachi 500 GB and WD 320 GB are still running, both already past their MTBF.
Meanwhile my Toshiba 2 TB died; some older multi-platter models tend to die as soon as the warranty expires.
kingkingyyk
post Aug 4 2024, 07:28 PM

10k Club
Group Icon
Elite
15,694 posts

Joined: Mar 2008
QUOTE(babylon52281 @ Aug 4 2024, 11:35 AM)
From what I read, the X3D has poorer idle power and higher power usage in light use, understandable as it's a gamer CPU rather than a universal CPU like Intel's monolithic design. My 12700F draws ~3 W in internet use; the X3D, I read, does 3x more. Gaming performance per watt does beat Intel's, which is where Ryzen does better. If the usage doesn't benefit from the cache, it's no better or worse than the non-X3D versions. It's good in its niche role though.
*
Let's say it is 3 W vs 30 W at idle; under heavy load the situation is reversed at a far larger scale. wink.gif So it is heavily dependent on the usage pattern.

https://www.techspot.com/review/2801-amd-ry...x3d/#Power2-png

QUOTE(1024kbps @ Aug 4 2024, 11:53 AM)
My old PC and laptop experienced freezes, and I would just hard reset; the old one has a 960 EVO, the new one a Micron SSD.
Both SSDs survived even though I hard reset every day, lol.

The PC hard crashed due to PSU issues (a well-used Enermax, and a Seasonic) and eventually kicked the bucket.
*
The algorithms in the firmware are involved. Bigger brands have more experience and QC for this kind of scenario.

This post has been edited by kingkingyyk: Aug 4 2024, 07:41 PM
babylon52281
post Aug 5 2024, 12:38 AM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(kingkingyyk @ Aug 4 2024, 07:28 PM)
Let's say it is 3 W vs 30 W at idle; under heavy load the situation is reversed at a far larger scale. wink.gif So it is heavily dependent on the usage pattern.
The algorithms in the firmware are involved. Bigger brands have more experience and QC for this kind of scenario.
*
Indeed, which is why I did say gaming-wise it's better per watt. So it really depends on usage: whether it's primarily just a gaming PC, or a daily regular-use machine with some hours of gaming, in which case an Intel would use less wattage per day. So YMMV.
babylon52281
post Aug 6 2024, 11:11 AM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
1024kbps
We talked about x86 backwards compatibility hampering its evolution, but here is real proof of how ridiculously deep it goes: running software all the way back to the DOS days.

https://www.techspot.com/news/104118-check-...n-hardware.html

Since x86 cannot be weaned off its legacy limitations, there is no option but to move on to modern CPU uarchs with no such baggage, i.e. ARM & RISC-V.
Wedchar2912
post Aug 6 2024, 01:43 PM

Look at all my stars!!
*******
Senior Member
3,587 posts

Joined: Apr 2019
QUOTE(babylon52281 @ Aug 6 2024, 11:11 AM)
1024kbps
We talked about x86 backwards compatibility hampering its evolution, but here is real proof of how ridiculously deep it goes: running software all the way back to the DOS days.

https://www.techspot.com/news/104118-check-...n-hardware.html

Since x86 cannot be weaned off its legacy limitations, there is no option but to move on to modern CPU uarchs with no such baggage, i.e. ARM & RISC-V.
*
I, and I'm guessing many people, actually prefer the backward-compatibility "feature" to stick around... at least old systems and software can still be used...

unlike my old Apple products, which are basically fine but no longer usable due to software obsolescence.
babylon52281
post Aug 6 2024, 03:44 PM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(Wedchar2912 @ Aug 6 2024, 01:43 PM)
I, and I'm guessing many people, actually prefer the backward-compatibility "feature" to stick around... at least old systems and software can still be used...

unlike my old Apple products, which are basically fine but no longer usable due to software obsolescence.
*
It's the baggage problem: x86 carries so much legacy that it keeps a more efficient uarch/instruction set from replacing it. Which is why ARM & Apple M are giving x86 the power-efficiency & IPC middle finger.

Apple did the right thing by cutting out legacy support; look at how good its M CPUs are when they cater to modern software only. ARM is a clean slate, which should be the next evolution for computers. By supporting historical software, x86 is history.
kingkingyyk
post Aug 6 2024, 08:44 PM

10k Club
Group Icon
Elite
15,694 posts

Joined: Mar 2008
QUOTE(babylon52281 @ Aug 6 2024, 03:44 PM)
It's the baggage problem: x86 carries so much legacy that it keeps a more efficient uarch/instruction set from replacing it. Which is why ARM & Apple M are giving x86 the power-efficiency & IPC middle finger.

Apple did the right thing by cutting out legacy support; look at how good its M CPUs are when they cater to modern software only. ARM is a clean slate, which should be the next evolution for computers. By supporting historical software, x86 is history.
*
Not really. The Apple M CPU is produced on newer lithography than x86 chips, so the efficiency is better. If Apple were to make it on a Samsung node, you would see it perform far worse. wink.gif
1024kbps
post Aug 6 2024, 10:07 PM

李素裳
*******
Senior Member
6,010 posts

Joined: Feb 2007



QUOTE(babylon52281 @ Aug 6 2024, 11:11 AM)
1024kbps
We talked about x86 backwards compatibility hampering its evolution, but here is real proof of how ridiculously deep it goes: running software all the way back to the DOS days.

https://www.techspot.com/news/104118-check-...n-hardware.html

Since x86 cannot be weaned off its legacy limitations, there is no option but to move on to modern CPU uarchs with no such baggage, i.e. ARM & RISC-V.
*
No one took the initiative. Microsoft? AMD? Intel? Backward compatibility is good, but keeping too much ancient stuff is a no-no.
What AMD did was add the x64 extension, which helped extend the x86 CPU's life;
Intel hit the wall with Itanium.

Honestly, who runs ancient stuff on the latest shiny 5 GHz 16-core CPU? Nobody.
Old software has too many limitations; it cannot utilize all of the CPU.

I still have really old games such as Heretic and other DOS titles that run perfectly fine either via Steam or inside VirtualBox.
When ancient stuff can be emulated, it should be removed from the hardware; otherwise it keeps snowballing. Old stuff should be left behind, like what AMD and Nvidia do with their old GPUs:
they only support an old GPU up to a certain OS version, or for a number of years.


Why can Qualcomm's Elite CPU emulate an x86 CPU, while their own powerful x86 CPUs couldn't do the reverse...?
Like, who still writes software optimized for MMX? The new instructions are much faster...
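On "the new instructions are much faster": you can feel this without writing assembly. NumPy's kernels dispatch at runtime to whatever SIMD the CPU reports (SSE/AVX and friends), so comparing a vectorized sum against a pure-Python loop gives a crude sense of the gap. A rough demo, not a rigorous benchmark; assumes numpy is installed:

CODE
# simd_gap.py - crude demo: interpreter scalar loop vs SIMD-dispatched kernel.
# Assumption: numpy installed; numpy picks SSE/AVX code paths at runtime.
import time
import numpy as np

data = np.random.rand(5_000_000)

t0 = time.perf_counter()
total = 0.0
for x in data:                      # plain scalar loop in the interpreter
    total += x
t1 = time.perf_counter()
vector_total = float(np.sum(data))  # vectorized C kernel using SIMD lanes
t2 = time.perf_counter()

print(f"python loop : {t1 - t0:.3f} s (sum={total:.1f})")
print(f"numpy sum   : {t2 - t1:.5f} s (sum={vector_total:.1f})")
print(f"speedup     : ~{(t1 - t0) / (t2 - t1):.0f}x")

Most of that gap is interpreter overhead plus SIMD, which is exactly the point: nobody hand-tunes for MMX anymore, the runtime picks the fastest path the CPU offers.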
Matchy
post Aug 7 2024, 08:35 AM

Regular
******
Senior Member
1,321 posts

Joined: Jun 2019


QUOTE(1024kbps @ Aug 6 2024, 10:07 PM)
No one took the initiative. Microsoft? AMD? Intel? Backward compatibility is good, but keeping too much ancient stuff is a no-no.
What AMD did was add the x64 extension, which helped extend the x86 CPU's life;
Intel hit the wall with Itanium.

Honestly, who runs ancient stuff on the latest shiny 5 GHz 16-core CPU? Nobody.
Old software has too many limitations; it cannot utilize all of the CPU.

I still have really old games such as Heretic and other DOS titles that run perfectly fine either via Steam or inside VirtualBox.
When ancient stuff can be emulated, it should be removed from the hardware; otherwise it keeps snowballing. Old stuff should be left behind, like what AMD and Nvidia do with their old GPUs:
they only support an old GPU up to a certain OS version, or for a number of years.
Why can Qualcomm's Elite CPU emulate an x86 CPU, while their own powerful x86 CPUs couldn't do the reverse...?
Like, who still writes software optimized for MMX? The new instructions are much faster...
*
I think the problem is with enterprise companies... many of their apps are designed for x86. If they were to migrate to ARM, it would be costly.
blindmutedeaf
post Aug 7 2024, 09:24 AM

Getting Started
**
Junior Member
270 posts

Joined: Sep 2016
From: Penang lo
QUOTE(Matchy @ Aug 7 2024, 08:35 AM)
I think the problem is with enterprise companies... many of their apps are designed for x86. If they were to migrate to ARM, it would be costly.
*
Yep, this is the problem. If you go into those long-established production companies, you can still see DOS running, and the software's developer is long gone.
So I won't be surprised if a whole lot of our daily infrastructure still has that T-Rex-era software running.

It is a different business model; some people call it a moat?
babylon52281
post Aug 7 2024, 09:53 AM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(kingkingyyk @ Aug 6 2024, 08:44 PM)
Not really. The Apple M CPU is produced on newer lithography than x86 chips, so the efficiency is better. If Apple were to make it on a Samsung node, you would see it perform far worse. wink.gif
*
Smaller litho helps pack in more cores and improves power efficiency, but it doesn't explain why Apple M still has better IPC than Intel/AMD x86; that is all down to the uarch being designed from scratch.
babylon52281
post Aug 7 2024, 10:05 AM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(Matchy @ Aug 7 2024, 08:35 AM)
I think the problem is with enterprise companies... many of their apps are designed for x86. If they were to migrate to ARM, it would be costly.
*
QUOTE(blindmutedeaf @ Aug 7 2024, 09:24 AM)
Yep, this is the problem. If you go into those long-established production companies, you can still see DOS running, and the software's developer is long gone.
So I won't be surprised if a whole lot of our daily infrastructure still has that T-Rex-era software running.

It is a different business model; some people call it a moat?
*
It's because these enterprises are too skinflint to invest in modernising their ancient hardware, citing the 'need to support legacy software', until there is no other choice, and then somehow they magically port over without it nearly killing the business.
Banking ATMs were a good example, stubbornly staying on DOS-era systems until recently, when they migrated to... Win XP lol. Somehow the financial sector didn't collapse. So I doubt there is really any technical reason for companies to keep relying on old systems.

OTOH there are anecdotal excuses to stay: during the CrowdStrike outage, Southwest Airlines in the US could reportedly remain operational thanks to running Windows 3.1-era systems. Somehow nobody mentioned that such an outdated system can easily be hacked or hit by malware.

A ground-up new uarch CPU can be hardened for better security against attacks on the ancient mechanisms that still exist in x86.

kingkingyyk
post Aug 7 2024, 03:06 PM

10k Club
Group Icon
Elite
15,694 posts

Joined: Mar 2008
QUOTE(babylon52281 @ Aug 7 2024, 09:53 AM)
Smaller litho helps pack in more cores and improves power efficiency, but it doesn't explain why Apple M still has better IPC than Intel/AMD x86; that is all down to the uarch being designed from scratch.
*
Not really, again. x86 can scale up to that level too, but it would be large and too costly to produce. wink.gif

So it is not that x86 is bad at efficiency and ARM is the savior; it is the company making the engineering decisions. Recall that we had similar Snapdragon vs Dimensity designs, one fabbed by Samsung and the other by TSMC for MediaTek, and the efficiency curves were completely different.

This post has been edited by kingkingyyk: Aug 7 2024, 03:15 PM
chocobo7779
post Aug 7 2024, 09:22 PM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
QUOTE(kingkingyyk @ Aug 7 2024, 03:06 PM)
Not really, again. x86 can scale up to that level too, but it would be large and too costly to produce. wink.gif

So it is not that x86 is bad at efficiency and ARM is the savior; it is the company making the engineering decisions. Recall that we had similar Snapdragon vs Dimensity designs, one fabbed by Samsung and the other by TSMC for MediaTek, and the efficiency curves were completely different.
*
Even the SD X Elite isn't that much of a competitor against modern x86 chips, despite being designed by a team of ex-Apple Silicon engineers icon_idea.gif
babylon52281
post Aug 7 2024, 10:24 PM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(kingkingyyk @ Aug 7 2024, 03:06 PM)
Not really, again. x86 can scale up to that level too, but it would be large and too costly to produce. wink.gif

So it is not that x86 is bad at efficiency and ARM is the savior; it is the company making the engineering decisions. Recall that we had similar Snapdragon vs Dimensity designs, one fabbed by Samsung and the other by TSMC for MediaTek, and the efficiency curves were completely different.
*
Not really, thrice. Raptor Lake has about the same transistor count (~25 bil) as the M3 (non-Pro) but still loses in IPC. Why? It's the uarch.

QUOTE(chocobo7779 @ Aug 7 2024, 09:22 PM)
Even the SD X Elite isn't that much of a competitor against modern x86 chips, despite being designed by a team of ex-Apple Silicon engineers icon_idea.gif
*
That is Snapdragon's first real go at the Windows ecosystem; surely anything pioneering will have lots of issues, and Windows wasn't specifically designed for ARM either. So there is a lot of optimisation headroom before we start whacking SD/ARM. I'd say give it 3 generations and we will have a better picture of where ARM sits against x86.
kingkingyyk
post Aug 7 2024, 10:39 PM

10k Club
Group Icon
Elite
15,694 posts

Joined: Mar 2008
QUOTE(chocobo7779 @ Aug 7 2024, 09:22 PM)
Even the SD X Elite isn't that much of a competitor against modern x86 chips, despite being designed by a team of ex-Apple Silicon engineers icon_idea.gif
*
Well, it depends on whether they want to widen the core or not. For now it serves only a niche market. icon_idea.gif

QUOTE(babylon52281 @ Aug 7 2024, 10:24 PM)
Not really, thrice. Raptor Lake has about the same transistor count (~25 bil) as the M3 (non-Pro) but still loses in IPC. Why? It's the uarch.
*
You are aware that the CPU cores themselves only occupy a small portion of the die in a modern processor (we should call it an SoC now), right? wink.gif

1024kbps
post Aug 8 2024, 07:56 AM

李素裳
*******
Senior Member
6,010 posts

Joined: Feb 2007



QUOTE(babylon52281 @ Aug 7 2024, 09:53 AM)
Smaller litho helps pack in more cores and improves power efficiency, but it doesn't explain why Apple M still has better IPC than Intel/AMD x86; that is all down to the uarch being designed from scratch.
*
I think it's more about optimization; x86 desktop apps are notorious power hogs. Although many parts of the 2D stuff are already offloaded to the GPU, it's still far from perfect.
When I open a map in Google Chrome on my AMD laptop, it is so slow I have to enable performance mode, and the laptop also turns into a jet engine.
I can open the same map on my OnePlus 7T Pro and everything renders instantly... and the phone can still play all sorts of games, albeit more slowly.
kingkingyyk
post Aug 8 2024, 09:18 AM

10k Club
Group Icon
Elite
15,694 posts

Joined: Mar 2008
QUOTE(1024kbps @ Aug 8 2024, 07:56 AM)
I think it's more about optimization; x86 desktop apps are notorious power hogs. Although many parts of the 2D stuff are already offloaded to the GPU, it's still far from perfect.
When I open a map in Google Chrome on my AMD laptop, it is so slow I have to enable performance mode, and the laptop also turns into a jet engine.
I can open the same map on my OnePlus 7T Pro and everything renders instantly... and the phone can still play all sorts of games, albeit more slowly.
*
Blame that on inefficient JavaScript code. biggrin.gif WebAssembly would solve that, but it has gained only a little traction so far.
babylon52281
post Aug 8 2024, 09:37 AM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(1024kbps @ Aug 8 2024, 07:56 AM)
I think it's more about optimization; x86 desktop apps are notorious power hogs. Although many parts of the 2D stuff are already offloaded to the GPU, it's still far from perfect.
When I open a map in Google Chrome on my AMD laptop, it is so slow I have to enable performance mode, and the laptop also turns into a jet engine.
I can open the same map on my OnePlus 7T Pro and everything renders instantly... and the phone can still play all sorts of games, albeit more slowly.
*
QUOTE(kingkingyyk @ Aug 8 2024, 09:18 AM)
Blame that on inefficient JavaScript code.  biggrin.gif WebAssembly would solve that, but it has gained only a little traction so far.
*
It's like badly optimised games getting pushed out because developers know they can always rely on 4090s & 4080s to run the titles at acceptable FPS; the same type of sloppy developer doesn't really care about x86 code optimisation, since the CPU isn't built to prioritise efficiency the way the same software must on ARM (where you practically have to develop within a power/thermal envelope limit).
kingkingyyk
post Aug 8 2024, 09:43 AM

10k Club
Group Icon
Elite
15,694 posts

Joined: Mar 2008
QUOTE(babylon52281 @ Aug 8 2024, 09:37 AM)
It's like badly optimised games getting pushed out because developers know they can always rely on 4090s & 4080s to run the titles at acceptable FPS; the same type of sloppy developer doesn't really care about x86 code optimisation, since the CPU isn't built to prioritise efficiency the way the same software must on ARM (where you practically have to develop within a power/thermal envelope limit).
*
This is not related to x86 or ARM, but just to how developers trade performance for faster development time. The perception you have of ARM comes from apps having to consider the low baseline performance of ARM processors, where you still get the ancient A76 in new chips now!
https://www.gsmarena.com/mediatek_launches_...-news-64021.php

For websites... I don't think they are going to care that much. laugh.gif Time to market is more important.
1024kbps
post Aug 8 2024, 06:44 PM

李素裳
*******
Senior Member
6,010 posts

Joined: Feb 2007



QUOTE(babylon52281 @ Aug 8 2024, 09:37 AM)
It's like badly optimised games getting pushed out because developers know they can always rely on 4090s & 4080s to run the titles at acceptable FPS; the same type of sloppy developer doesn't really care about x86 code optimisation, since the CPU isn't built to prioritise efficiency the way the same software must on ARM (where you practically have to develop within a power/thermal envelope limit).
*
Depends on the publisher.
A good publisher like Bethesda, who developed the Doom series, was known for supporting OpenGL at first, with Vulkan rendering support added later, and all the following games on id Tech 6 and newer have very good performance.
Then CDPR released their "it just works" game to the public; it still ran like crap, but they did keep updating it.

They collect your system data on crashes, or probably have telemetry enabled, then optimize accordingly.
Square Enix is known for slow updates IIRC; most of my Tomb Raider games don't get updated as much as other AAA publishers' titles.

Unlike a certain publisher called Ubi "kentang" (potato), lol
Baconateer
post Aug 8 2024, 06:48 PM

Meh..... (TM)
*******
Senior Member
5,088 posts

Joined: Jun 2013
From: Blue Planet


QUOTE(1024kbps @ Aug 8 2024, 06:44 PM)
Depends on the publisher.
A good publisher like Bethesda, who developed the Doom series, was known for supporting OpenGL at first, with Vulkan rendering support added later, and all the following games on id Tech 6 and newer have very good performance.
Then CDPR released their "it just works" game to the public; it still ran like crap, but they did keep updating it.

They collect your system data on crashes, or probably have telemetry enabled, then optimize accordingly.
Square Enix is known for slow updates IIRC; most of my Tomb Raider games don't get updated as much as other AAA publishers' titles.

Unlike a certain publisher called Ubi "kentang" (potato), lol
*
id Software developed Doom;

Bethesda is the publisher.

Bethesda themselves can only develop something like Starfield,

which is an abomination in optimisation terms when compared to Doom.
dexeric
post Aug 8 2024, 07:15 PM

Getting Started
**
Junior Member
118 posts

Joined: Oct 2008


Not sure what the actual topic is now, but the Intel issue is mainly due to (a) small non-HT cores sharing a ring bus with the performance cores [at different clocks], and (b) having to clock the ring bus to the moon to make the performance cores faster.

-----

Regarding sd x elite vs x86 pc vs m2/3/4

M2/3/4 vs x86 - apple control the whole ecosystem and can make the whole pc to be efficient by doing all crucial part to be 1st party [ram, ssd, software[strict control on execution], and everything else] to achieve state of the art efficiency( at least affect a bit ).

AMD vs sd x elite - amd was design to reduce power envelope to cpu to provide more headroom to their gpu. When compare, xelite cpu is better than amd ryzen ai 9 300 series. But there is not much difference in cpu benchmark, in games there is alot difference.


babylon52281
post Aug 9 2024, 09:29 AM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
Poorly optimised software exists because developers know they can lean on high-power CPUs; high-power CPUs exist because the existing hardware uarch is poorly optimised for power efficiency & IPC; and the poorly optimised uarch exists because back in the 70s & 80s (when x86 was rising), power efficiency wasn't a thing.

AFAIK I haven't heard of badly written Apple or Android/ARM software that sucks more power than needed to run it. I stand to be corrected.

For the x86 uarch to move forward, it needs to ditch the legacy 16-bit & 32-bit functions and revamp its 64-bit compute to be on par with the SD/M series.

Intel's problem in this thread is symptomatic of x86's limitations: pushing power limits after hitting the node-shrink wall. AMD's 9000 series is also indicative of this, perhaps more so for the future, when they decided to pull the TDP limit back to 65W. As I suspected from the TPU review, with PBO off there was only about 5% more to gain, so AMD too has hit a power/performance wall with the current uarch & node.
dexeric
post Aug 9 2024, 10:18 AM

Getting Started
**
Junior Member
118 posts

Joined: Oct 2008


QUOTE(babylon52281 @ Aug 9 2024, 09:29 AM)
Poorly optimised software exists because developers know they can lean on high-power CPUs; high-power CPUs exist because the existing hardware uarch is poorly optimised for power efficiency & IPC; and the poorly optimised uarch exists because back in the 70s & 80s (when x86 was rising), power efficiency wasn't a thing.

AFAIK I haven't heard of badly written Apple or Android/ARM software that sucks more power than needed to run it. I stand to be corrected.

For the x86 uarch to move forward, it needs to ditch the legacy 16-bit & 32-bit functions and revamp its 64-bit compute to be on par with the SD/M series.

Intel's problem in this thread is symptomatic of x86's limitations: pushing power limits after hitting the node-shrink wall. AMD's 9000 series is also indicative of this, perhaps more so for the future, when they decided to pull the TDP limit back to 65W. As I suspected from the TPU review, with PBO off there was only about 5% more to gain, so AMD too has hit a power/performance wall with the current uarch & node.
*
Not sure why you are comparing a desktop part to a mobile part; the comparison should be the X Elite vs the AMD Ryzen AI 9 300 series. Most of the time they show some difference in performance and power, but it is not a big difference.

Regarding the uarch, you cannot run 16-bit and 32-bit functions in 64-bit Windows 10, so why point this out? Just because the instruction set still supports them does not mean they are still used on PCs.

Heck, 16-bit isn't even supported now...
Instruction set:
- AMD64 (x86-64) (AMD64 only supports 32-bit and 64-bit)
Extensions:
- Crypto: AES, SHA
- SIMD: MMX-plus, SSE, SSE2, SSE3, SSE4.1, SSE4.2, SSE4A, SSSE3, FMA3, AVX, AVX2, AVX512
- Virtualization: AMD-V

The only real problem I see in this ARM vs x86 PC debate is RISC vs CISC design, which is only affected by the legacy design of the operating system, since the instruction sets are different.


This post has been edited by dexeric: Aug 9 2024, 10:19 AM
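A quick way to check which of those extensions your own chip actually reports is to read the flags line from /proc/cpuinfo. A minimal Python sketch, Linux only, with flag names spelled the way the Linux kernel reports them (e.g. FMA3 appears as "fma", SHA as "sha_ni"):

CODE
# Minimal sketch (Linux): list which of the extensions above this CPU reports.
WANTED = {"sse", "sse2", "sse3", "ssse3", "sse4_1", "sse4_2", "sse4a",
          "fma", "avx", "avx2", "avx512f", "aes", "sha_ni"}

flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            # line looks like "flags\t\t: fpu vme de pse ..."
            flags = set(line.split(":", 1)[1].split())
            break

print("supported:", sorted(WANTED & flags))
print("missing:  ", sorted(WANTED - flags))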
babylon52281
post Aug 9 2024, 10:46 AM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(dexeric @ Aug 9 2024, 10:18 AM)
Not sure why you are comparing a desktop part to a mobile part; the comparison should be the X Elite vs the AMD Ryzen AI 9 300 series. Most of the time they show some difference in performance and power, but it is not a big difference.

Regarding the uarch, you cannot run 16-bit and 32-bit functions in 64-bit Windows 10, so why point this out? Just because the instruction set still supports them does not mean they are still used on PCs.

Heck, 16-bit isn't even supported now...
Instruction set:
- AMD64 (x86-64) (AMD64 only supports 32-bit and 64-bit)
Extensions:
- Crypto: AES, SHA
- SIMD: MMX-plus, SSE, SSE2, SSE3, SSE4.1, SSE4.2, SSE4A, SSSE3, FMA3, AVX, AVX2, AVX512
- Virtualization: AMD-V

The only real problem I see in this ARM vs x86 PC debate is RISC vs CISC design, which is only affected by the legacy design of the operating system, since the instruction sets are different.
*
Ryzen HS and Core HX CPUs are mobile parts, no? But they share the same uarch as the desktop parts, and they are thermally hard to cool, unlike the SD Elite.

Just because it's not used in Windows doesn't mean the hardware to run it isn't already there.

But that's precisely the point: since Windows no longer touches that base level, it makes no sense to keep them, so why not deprecate and remove the 16- & 32-bit functions? Then, by optimising the 64-bit path, the transistors saved can be reused to improve things elsewhere, or the die size can be cut down to reduce cost.

And it's not the end for legacy programs either, as software emulation could be used to run them.

This post has been edited by babylon52281: Aug 9 2024, 10:48 AM
dexeric
post Aug 9 2024, 10:58 AM

Getting Started
**
Junior Member
118 posts

Joined: Oct 2008


QUOTE(babylon52281 @ Aug 9 2024, 10:46 AM)
Ryzen HS and Core HX CPUs are mobile parts, no? But they share the same uarch as the desktop parts, and they are thermally hard to cool, unlike the SD Elite.

Just because it's not used in Windows doesn't mean the hardware to run it isn't already there.

But that's precisely the point: since Windows no longer touches that base level, it makes no sense to keep them, so why not deprecate and remove the 16- & 32-bit functions? Then, by optimising the 64-bit path, the transistors saved can be reused to improve things elsewhere, or the die size can be cut down to reduce cost.

And it's not the end for legacy programs either, as software emulation could be used to run them.
*
https://en.m.wikipedia.org/wiki/FreeDOS

FreeDOS is 32-bit, not 16-bit.

32-bit support is not as big a deal as you think...
It might take up die space, but it doesn't hinder performance, as that part simply sits idle when it is not in use...
The real problem is the operating system. Try running an AMD processor on Linux, then an ARM processor on Linux; AMD will still have better performance there.

Plus, there is a huge difference in lithography between the AMD HS processors and the SD X Elite. Better lithography increases efficiency and performance by a lot.

1024kbps
post Aug 9 2024, 03:28 PM

李素裳
*******
Senior Member
6,010 posts

Joined: Feb 2007



QUOTE(dexeric @ Aug 9 2024, 10:18 AM)
Not sure why you are comparing a desktop part to a mobile part; the comparison should be the X Elite vs the AMD Ryzen AI 9 300 series. Most of the time they show some difference in performance and power, but it is not a big difference.

Regarding the uarch, you cannot run 16-bit and 32-bit functions in 64-bit Windows 10, so why point this out? Just because the instruction set still supports them does not mean they are still used on PCs.

Heck, 16-bit isn't even supported now...
Instruction set:
- AMD64 (x86-64) (AMD64 only supports 32-bit and 64-bit)
Extensions:
- Crypto: AES, SHA
- SIMD: MMX-plus, SSE, SSE2, SSE3, SSE4.1, SSE4.2, SSE4A, SSSE3, FMA3, AVX, AVX2, AVX512
- Virtualization: AMD-V

The only real problem I see in this ARM vs x86 PC debate is RISC vs CISC design, which is only affected by the legacy design of the operating system, since the instruction sets are different.
*
32-bit is still widely used and most programs ARE 32-bit...

16-bit is hecking slow, but old games run fine in DOSBox; you don't really feel the sluggishness.
Before that you could still run 16-bit applications natively, but MS removed NTVDM in newer Windows versions.

user posted image
We're running a mix of 32- and 64-bit; some programs still default to 32-bit because devs don't see the benefit of compiling them exclusively for 64-bit. (A quick way to check which one a given .exe is, is sketched below.)
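For anyone curious which camp a given program falls into, the PE header of a Windows executable records the machine type. A minimal Python sketch that reads it; the notepad.exe path is just an example:

CODE
import struct

def pe_bitness(path):
    with open(path, "rb") as f:
        f.seek(0x3C)                                  # DOS header stores PE header offset here
        pe_offset = struct.unpack("<I", f.read(4))[0]
        f.seek(pe_offset)
        if f.read(4) != b"PE\x00\x00":                # PE signature check
            raise ValueError("not a PE file")
        machine = struct.unpack("<H", f.read(2))[0]   # COFF machine field
    return {0x014C: "32-bit (x86)",
            0x8664: "64-bit (x86-64)",
            0xAA64: "64-bit (ARM64)"}.get(machine, hex(machine))

print(pe_bitness(r"C:\Windows\notepad.exe"))          # example path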
1024kbps
post Aug 9 2024, 03:39 PM

李素裳
*******
Senior Member
6,010 posts

Joined: Feb 2007



QUOTE(dexeric @ Aug 9 2024, 10:58 AM)
https://en.m.wikipedia.org/wiki/FreeDOS

FreeDOS is 32-bit, not 16-bit.

32-bit support is not as big a deal as you think...
It might take up die space, but it doesn't hinder performance, as that part simply sits idle when it is not in use...
The real problem is the operating system. Try running an AMD processor on Linux, then an ARM processor on Linux; AMD will still have better performance there.

Plus, there is a huge difference in lithography between the AMD HS processors and the SD X Elite. Better lithography increases efficiency and performance by a lot.
*
Linux is very scalable: most routers run on Linux, and plenty of embedded devices are Linux-powered too...
At my workplace, my company uses Win10 IoT LTSC on some Atom processors; the thing is hecking slow lol, it should have been on Linux.

For ARM processors, Android devices and Apple products are perhaps the best examples that iOS and Android (Unix-like, and the latter is Linux) run very well on ARM CPUs (Apple silicon/Qualcomm SD), because they're optimised to run on battery.
ARM can be found everywhere from supercomputers to mobile devices.
1024kbps
post Aug 9 2024, 03:40 PM

李素裳
*******
Senior Member
6,010 posts

Joined: Feb 2007



QUOTE(Baconateer @ Aug 8 2024, 06:48 PM)
id Software developed Doom.

Bethesda is the publisher.

Bethesda can only develop something like Starfield..

which is an abomination when it comes to optimisation, compared to Doom.
*
yeah lol, I always mix them up

Anyway, id Tech is a top-tier game engine. For Wolfenstein 2 I can max out everything at 4K 60FPS on my ancient Vega 64... not many games can do that.
Too bad not many games use id Tech; hopefully newer games will.

This post has been edited by 1024kbps: Aug 9 2024, 03:45 PM
1024kbps
post Aug 9 2024, 03:51 PM

李素裳
*******
Senior Member
6,010 posts

Joined: Feb 2007



By the way, we've drifted too far off topic. Intel does provide an extended warranty for the affected CPUs.
Business-ethics-wise Intel isn't very good, and AMD is no better, but hopefully Intel won't go bankrupt;
we finally have Intel GPUs, and I can see some developers adding Intel-GPU-exclusive features to games, e.g. Cyberpunk 2077.

Can't let the AMD/Nvidia duopoly dominate again, as without competition GPU prices will always go wild.
babylon52281
post Aug 9 2024, 05:47 PM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(1024kbps @ Aug 9 2024, 03:51 PM)
By the way, we've drifted too far off topic. Intel does provide an extended warranty for the affected CPUs.
Business-ethics-wise Intel isn't very good, and AMD is no better, but hopefully Intel won't go bankrupt;
we finally have Intel GPUs, and I can see some developers adding Intel-GPU-exclusive features to games, e.g. Cyberpunk 2077.

Can't let the AMD/Nvidia duopoly dominate again, as without competition GPU prices will always go wild.
*
We did wander off topic, but the key thing is: Intel (and its mobo partners) screwed up with 13th/14th gen, and AMD screwed up with X3D chips cooking themselves and the 9000 series flopping. We really need a third CPU game-changer, and so far ARM/SD is the best bet to push CPU evolution to the next stage. Unless someone else comes up with quantum computing that fits into a 2in x 2in square.

Ohh, and then there's China, but let's not talk about them.

This post has been edited by babylon52281: Aug 9 2024, 05:49 PM
1024kbps
post Aug 9 2024, 09:20 PM

李素裳
*******
Senior Member
6,010 posts

Joined: Feb 2007



QUOTE(babylon52281 @ Aug 9 2024, 05:47 PM)
We did wander off topic, but the key thing is: Intel (and its mobo partners) screwed up with 13th/14th gen, and AMD screwed up with X3D chips cooking themselves and the 9000 series flopping. We really need a third CPU game-changer, and so far ARM/SD is the best bet to push CPU evolution to the next stage. Unless someone else comes up with quantum computing that fits into a 2in x 2in square.

Ohh, and then there's China, but let's not talk about them.
*
One is chasing raw performance,
the other is chasing huge cache,
then you have applications that don't really use the newer CPU instructions, plus the resource-hogging Windows OS.

For ARM, Qualcomm and MS need to work out a binary translator for x86 apps on ARM CPUs, similar to Apple's Rosetta:
https://en.wikipedia.org/wiki/Rosetta_(software)

Welp, if Apple can do it, MS and Qualcomm can too; hopefully it won't suck.
XeactorZ
post Aug 10 2024, 05:36 PM

♥ PandaDog ♥
*********
All Stars
31,607 posts

Joined: Aug 2010
QUOTE(1024kbps @ Aug 9 2024, 03:51 PM)
By the way, we've drifted too far off topic. Intel does provide an extended warranty for the affected CPUs.
Business-ethics-wise Intel isn't very good, and AMD is no better, but hopefully Intel won't go bankrupt;
we finally have Intel GPUs, and I can see some developers adding Intel-GPU-exclusive features to games, e.g. Cyberpunk 2077.

Can't let the AMD/Nvidia duopoly dominate again, as without competition GPU prices will always go wild.
*
meanwhile waiting for MSI to release their BIOS update laugh.gif
1024kbps
post Aug 12 2024, 01:37 AM

李素裳
*******
Senior Member
6,010 posts

Joined: Feb 2007



QUOTE(XeactorZ @ Aug 10 2024, 05:36 PM)
meanwhile waiting for MSI to release their BIOS update laugh.gif
*
Performance impact measured after the microcode update:
https://www.phoronix.com/review/intel-raptor-lake-0x129

OOF size: large
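For anyone wanting to summarise a before/after run of their own the way such reviews do, a geometric mean of the score ratios is the usual trick. A minimal Python sketch; the numbers below are made-up placeholders, NOT the article's results:

CODE
import math

# Hypothetical benchmark scores (higher = better); substitute your own runs.
before = {"x264": 98.2, "blender": 41.5, "kernel-compile": 72.0}
after  = {"x264": 96.0, "blender": 40.9, "kernel-compile": 70.1}

# Geometric mean of after/before ratios summarises the overall change.
ratios = [after[k] / before[k] for k in before]
geomean = math.exp(sum(math.log(r) for r in ratios) / len(ratios))
print(f"geomean after/before: {geomean:.3f} ({(geomean - 1) * 100:+.1f}%)")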
babylon52281
post Aug 12 2024, 10:32 AM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
For 13th/14th gen users, be aware there are TWO microcodes to install:
0x125 and the latest 0x129.
You should install both in sequence.

The reason is that Intel defined each code to fix a different aspect of the voltage behaviour, though I am unsure whether the later code includes the fixes of the earlier release, so just to be safe, better to flash both into your BIOS.
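To confirm what actually got loaded after flashing, the running microcode revision is visible to the OS. On Linux it shows up in /proc/cpuinfo; a minimal Python sketch (on Windows, hardware-info tools usually report the same field):

CODE
def microcode_revision(cpuinfo="/proc/cpuinfo"):
    with open(cpuinfo) as f:
        for line in f:
            if line.startswith("microcode"):
                # line looks like "microcode\t: 0x129"
                return line.split(":", 1)[1].strip()
    return None

print("loaded microcode revision:", microcode_revision())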
babylon52281
post Aug 14 2024, 04:16 PM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
The first review of the microcode is in, and at least this YouTuber says it doesn't fix the issue.

[embedded video]

It seems VDD is still higher than spec, but I'm not sure VDD was the killer overvolt. Owners will still need to check for themselves whether the chip still degrades post-update; if it does, it's an epyc fail from Intel yet again (what's with these brands competing over who can fail harder, et tu AMD?).

My advice remains the same as when this issue blew up: don't trust them. Even with the update, manually set the limits to what Intel specifies on the product pages:
https://www.intel.com/content/www/us/en/pro...ifications.html
https://www.intel.com/content/www/us/en/pro...ifications.html
https://www.intel.com/content/www/us/en/pro...ifications.html
https://www.intel.com/content/www/us/en/pro...ifications.html

No point trying to push past those numbers; it's already at the limit.
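For reference, the PL1/PL2 part of that advice can even be applied at runtime on a Linux box through the kernel's powercap interface, without touching the BIOS (needs root, only lasts until reboot, and ICCmax/voltage limits still have to be set in the BIOS). A minimal sketch, using the i9-13900K's 125 W / 253 W spec numbers purely as an example; check the product page for your own SKU:

CODE
# Minimal sketch (Linux, root): clamp PL1/PL2 via the intel-rapl powercap sysfs.
RAPL = "/sys/class/powercap/intel-rapl:0"   # CPU package 0

def set_power_limit(constraint, watts):
    # constraint_0 = long-term limit (PL1), constraint_1 = short-term limit (PL2)
    path = f"{RAPL}/constraint_{constraint}_power_limit_uw"
    with open(path, "w") as f:
        f.write(str(int(watts * 1_000_000)))   # interface takes microwatts

set_power_limit(0, 125)   # PL1 = 125 W (i9-13900K PBP, as an example)
set_power_limit(1, 253)   # PL2 = 253 W (i9-13900K MTP, as an example)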
moiskyrie
post Aug 19 2024, 08:09 AM

Look at all my stars!!
*******
Senior Member
3,212 posts

Joined: Dec 2006
From: City of Neko~~Nyaa~
No wonder my office designer's new Lenovo desktop
(13th gen i7) crashed and needed the mobo, CPU and GPU changed....
I think the CPU was changed 2 times...

Now I want to buy a new desktop;
is the 14400 affected?
montaguespirit
post Aug 19 2024, 08:58 AM

Getting Started
**
Junior Member
81 posts

Joined: Apr 2015
So far, no issues in my experience, because we don't overclock. Every generation of processors is sure to have minor issues, but it takes time for Intel/AMD and the motherboard manufacturers to fix them.
babylon52281
post Aug 19 2024, 09:20 AM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(moiskyrie @ Aug 19 2024, 08:09 AM)
No wonder my office designer's new Lenovo desktop
(13th gen i7) crashed and needed the mobo, CPU and GPU changed....
I think the CPU was changed 2 times...

Now I want to buy a new desktop;
is the 14400 affected?
*
If you needed to change the mobo & GPU, you've got a different, serious problem leh. The CPU might be related, but I don't see how it could have killed your GPU...

Once you get the replacement, do update both microcodes (and any later ones), then set the TDP limit to Intel's PBP and set the VDD limit according to the video posted above. After this, I think it should be stable unless something else gets discovered.
moiskyrie
post Aug 19 2024, 10:14 AM

Look at all my stars!!
*******
Senior Member
3,212 posts

Joined: Dec 2006
From: City of Neko~~Nyaa~
QUOTE(babylon52281 @ Aug 19 2024, 09:20 AM)
If you needed to change the mobo & GPU, you've got a different, serious problem leh. The CPU might be related, but I don't see how it could have killed your GPU...

Once you get the replacement, do update both microcodes (and any later ones), then set the TDP limit to Intel's PBP and set the VDD limit according to the video posted above. After this, I think it should be stable unless something else gets discovered.
*
The Lenovo technician also doesn't know what happened, as the machine still BSODs even after the hardware was changed a few times.....
I think the first time they changed in a new CPU...
After that the GPU...
The 3rd time, the CPU and mobo....
The first CPU worked for a few days and then started BSODing again....
Even now it still BSODs sometimes....
babylon52281
post Aug 19 2024, 10:34 AM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(moiskyrie @ Aug 19 2024, 10:14 AM)
The Lenovo technician also doesn't know what happened, as the machine still BSODs even after the hardware was changed a few times.....
I think the first time they changed in a new CPU...
After that the GPU...
The 3rd time, the CPU and mobo....
The first CPU worked for a few days and then started BSODing again....
Even now it still BSODs sometimes....
*
Could be something else related; RAM or storage? Or power supply?
montaguespirit
post Sep 17 2024, 10:42 AM

Getting Started
**
Junior Member
81 posts

Joined: Apr 2015
QUOTE(moiskyrie @ Aug 19 2024, 10:14 AM)
The Lenovo technician also doesn't know what happened, as the machine still BSODs even after the hardware was changed a few times.....
I think the first time they changed in a new CPU...
After that the GPU...
The 3rd time, the CPU and mobo....
The first CPU worked for a few days and then started BSODing again....
Even now it still BSODs sometimes....
*
Back in the days of Windows Vista, I was working at Dell. We received a complaint from an end user (customer) with a similar issue. Basically, the onsite technician had already changed every single part and it still had the BSOD issue. After that, customer service decided to exchange a brand-new unit for the customer, and that solved the problem. So basically nobody knew what happened either.
Duckies
post Feb 9 2025, 02:51 PM

Rubber Ducky
*******
Senior Member
9,789 posts

Joined: Jun 2008
From: Rubber Duck Pond


Just to share: I finally switched the backplate out for the Thermalright one. It helps by about 5-10°C! Previously I had to underclock and undervolt, setting PL1 to 125W and PL2 to 180W, before it would stop thermal throttling, but now it can go up to 200-220W for PL2! Not that it matters much in day-to-day usage or gaming, but it's just good to see the temps improve, and the backplate is not that expensive. I still couldn't get it to 253W though, because it thermal throttles halfway. Room ambient temperature is probably about 26°C since there's no aircon.

This post has been edited by Duckies: Feb 9 2025, 02:53 PM
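If anyone wants to watch for that throttling point themselves from a Linux live USB or dual-boot, the kernel exposes the package sensor under /sys/class/thermal (on most Intel boxes it appears as type "x86_pkg_temp", though names vary by platform). A minimal Python sketch:

CODE
# Minimal sketch (Linux): dump all thermal zones; run it during a stress test
# to see how close the package gets to its throttle point.
from pathlib import Path

for zone in sorted(Path("/sys/class/thermal").glob("thermal_zone*")):
    ztype = (zone / "type").read_text().strip()
    temp_c = int((zone / "temp").read_text()) / 1000   # reported in millidegrees C
    print(f"{zone.name}: {ztype} = {temp_c:.1f} C")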
babylon52281
post Feb 9 2025, 06:04 PM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(Duckies @ Feb 9 2025, 02:51 PM)
Just to share: I finally switched the backplate out for the Thermalright one. It helps by about 5-10°C! Previously I had to underclock and undervolt, setting PL1 to 125W and PL2 to 180W, before it would stop thermal throttling, but now it can go up to 200-220W for PL2! Not that it matters much in day-to-day usage or gaming, but it's just good to see the temps improve, and the backplate is not that expensive. I still couldn't get it to 253W though, because it thermal throttles halfway. Room ambient temperature is probably about 26°C since there's no aircon.
*
Backplate? You mean the ILM bracket holding the CPU, right?
Yeah, similar to my result too: reduced by 8°C https://forum.lowyat.net/index.php?showtopi...ost&p=110531724

You're using a fishtank case, so temps will definitely be higher because airflow in these types of cases is poor.

Duckies
post Feb 9 2025, 06:34 PM

Rubber Ducky
*******
Senior Member
9,789 posts

Joined: Jun 2008
From: Rubber Duck Pond


QUOTE(babylon52281 @ Feb 9 2025, 06:04 PM)
Backplate? You mean the ILM bracket holding the CPU, right?
Yeah, similar to my result too: reduced by 8°C https://forum.lowyat.net/index.php?showtopi...ost&p=110531724

You're using a fishtank case, so temps will definitely be higher because airflow in these types of cases is poor.
*
Ya, that's right. I was wondering if it was a placebo effect until I finally made the decision to do it. So I just want to share here again to let others know it really works haha.

But I didn't do it as meticulously as what you shared in your thread. I just screwed it down tight and that's all lol

I was thinking my AIO isn't good enough, but then I want an all-Lian-Li setup, which is why I went for this. If the AIO were the Arctic Liquid Freezer III, the temperature would probably be better.

This post has been edited by Duckies: Feb 9 2025, 06:59 PM
babylon52281
post Feb 9 2025, 07:04 PM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(Duckies @ Feb 9 2025, 06:34 PM)
Ya, that's right. I was wondering if it was a placebo effect until I finally made the decision to do it. So I just want to share here again to let others know it really works haha.

But I didn't do it as meticulously as what you shared in your thread. I just screwed it down tight and that's all lol
*
Haha, because I heard there's a high risk of wrecking RAM stability by tightening it down wrongly with unequal pressure, and then having to redo RAM tuning & testing, stability issues and whatnot. Hell no was I going through that again after 2 weeks of RAM tuning. Also, I wanted to make sure doing it properly would give it the best chance to work, and doing it methodically isn't that difficult once you understand the concept. But congrats on your results too!

Now I wonder if anyone will do this for LGA1851 as well haha.
Duckies
post Feb 9 2025, 07:11 PM

Rubber Ducky
*******
Senior Member
9,789 posts

Joined: Jun 2008
From: Rubber Duck Pond


QUOTE(babylon52281 @ Feb 9 2025, 07:04 PM)
Haha, because I heard there's a high risk of wrecking RAM stability by tightening it down wrongly with unequal pressure, and then having to redo RAM tuning & testing, stability issues and whatnot. Hell no was I going through that again after 2 weeks of RAM tuning. Also, I wanted to make sure doing it properly would give it the best chance to work, and doing it methodically isn't that difficult once you understand the concept. But congrats on your results too!

Now I wonder if anyone will do this for LGA1851 as well haha.
*
LOL, when I change CPUs next time I will skip Intel for a while, until they fix their shit.
babylon52281
post Feb 9 2025, 07:37 PM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(Duckies @ Feb 9 2025, 07:11 PM)
LOL, when I change CPUs next time I will skip Intel for a while, until they fix their shit.
*
Agreed, although historically I somehow always ended up with Intel: coming up to 10 systems, PC & laptop, including a work lappie. Never had an AMD (or Mac), as oddly I skipped right past the Intel 14nm+++++ era, hanging on with my Ivy Bridge lappie until my Alder Lake PC.
So I'm quite optimistic that the next time I need to change again, with luck Intel will have something good.

 
