
Intel 13th/14th gen CPUs crashing, degrading

Wedchar2912
post Aug 6 2024, 01:43 PM

Look at all my stars!!
*******
Senior Member
3,593 posts

Joined: Apr 2019
QUOTE(babylon52281 @ Aug 6 2024, 11:11 AM)
1024kbps
We talked about x86 backwards compatibility hampering its evolution, but here is real proof of how ridiculously deep it runs: software going all the way back to the DOS days.

https://www.techspot.com/news/104118-check-...n-hardware.html

Since x86 cannot be weaned off its legacy limitations, there is no option but to move on to modern CPU uarchs that carry no such baggage, i.e. ARM & RISC-V.
*
I, and I'm guessing many others, actually prefer the backward-compatibility "feature" to stick around... at least old systems and software can still be used...

Unlike my old Apple products, which are basically fine but no longer usable due to software obsolescence.
babylon52281
post Aug 6 2024, 03:44 PM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(Wedchar2912 @ Aug 6 2024, 01:43 PM)
I, and I'm guessing many others, actually prefer the backward-compatibility "feature" to stick around... at least old systems and software can still be used...

Unlike my old Apple products, which are basically fine but no longer usable due to software obsolescence.
*
It's the baggage problem: x86 carries so much legacy that it hampers a more efficient uarch/instruction set from replacing it. Which is why ARM & Apple M are giving x86 the power-efficiency & IPC middle finger.

Apple did the right thing by cutting out legacy support; look at how good its M CPUs are when they only have to run modern software. ARM is a clean slate, which should be the next evolution for computers. By supporting historical software, x86 is history.
kingkingyyk
post Aug 6 2024, 08:44 PM

10k Club
Group Icon
Elite
15,694 posts

Joined: Mar 2008
QUOTE(babylon52281 @ Aug 6 2024, 03:44 PM)
It's the baggage problem: x86 carries so much legacy that it hampers a more efficient uarch/instruction set from replacing it. Which is why ARM & Apple M are giving x86 the power-efficiency & IPC middle finger.

Apple did the right thing by cutting out legacy support; look at how good its M CPUs are when they only have to run modern software. ARM is a clean slate, which should be the next evolution for computers. By supporting historical software, x86 is history.
*
Not really. Apple's M CPUs are produced on newer lithography than x86 chips, so the efficiency is better. If Apple made them on a Samsung node, you would see them perform far worse. ;)
1024kbps
post Aug 6 2024, 10:07 PM

李素裳
*******
Senior Member
6,012 posts

Joined: Feb 2007



QUOTE(babylon52281 @ Aug 6 2024, 11:11 AM)
1024kbps
We talked about x86 backwards compatibility hampering its evolution, but here is real proof of how ridiculously deep it runs: software going all the way back to the DOS days.

https://www.techspot.com/news/104118-check-...n-hardware.html

Since x86 cannot be weaned off its legacy limitations, there is no option but to move on to modern CPU uarchs that carry no such baggage, i.e. ARM & RISC-V.
*
No one took the initiative. Microsoft? AMD? Intel? Backward compatibility is good, but carrying too much ancient stuff is a no-no.
What AMD did was add the x64 extension, which helped extend the life of x86 CPUs.
Intel hit a wall with Itanium.

Honestly, who runs ancient stuff on the latest shiny 5 GHz 16-core CPU? Nobody.
Old software has too many limitations; it cannot utilise all of that CPU anyway.

I still have really old games such as Heretic and other DOS titles, and they run perfectly fine either via Steam or inside VirtualBox.
Once ancient stuff can be emulated, it should be removed from the hardware; otherwise it keeps snowballing. Old stuff should be left behind, like what AMD and Nvidia do with their old GPUs: they only support them up to a certain OS version or number of years.
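Steam's re-releases do exactly that under the hood with DOSBox; roughly, the relevant dosbox.conf bits look like this (stock DOSBox option names; the mount path and game EXE are just placeholders, not from any real install):

CODE
# dosbox.conf -- the emulator does the work, so the 5 GHz/16-core host is moot
[cpu]
core=auto        # dynamic recompilation where available
cputype=auto
cycles=auto      # throttle to what a DOS-era game expects

[autoexec]
# placeholder path and executable -- adjust to your own install
mount c /home/me/dosgames
c:
HERETIC.EXE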


Why can Qualcomm's Elite CPU emulate x86, while the powerful x86 CPUs themselves couldn't manage the same trick...?
Like, who still writes software optimised for MMX? Newer instructions are much faster...
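Nobody, really; compilers and libraries just probe the CPU at runtime and dispatch to the newest path, so MMX sits there as dead weight. A rough sketch in C (the __builtin_cpu_* calls are real GCC/Clang builtins on x86; the dispatch itself is just for illustration):

CODE
#include <stdio.h>

int main(void) {
    /* Initialise the CPU feature probe; required before
       __builtin_cpu_supports() on some GCC versions. */
    __builtin_cpu_init();

    /* Probe newest-to-oldest, the way a real dispatcher would --
       in practice nothing ever falls through to the MMX path. */
    if (__builtin_cpu_supports("avx2"))
        puts("dispatch: AVX2 path");
    else if (__builtin_cpu_supports("sse2"))
        puts("dispatch: SSE2 path (the x86-64 baseline)");
    else if (__builtin_cpu_supports("mmx"))
        puts("dispatch: MMX path (museum piece)");
    else
        puts("dispatch: plain scalar");
    return 0;
}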
Matchy
post Aug 7 2024, 08:35 AM

Regular
******
Senior Member
1,321 posts

Joined: Jun 2019


QUOTE(1024kbps @ Aug 6 2024, 10:07 PM)
No one took the initiative. Microsoft? AMD? Intel? Backward compatibility is good, but carrying too much ancient stuff is a no-no.
What AMD did was add the x64 extension, which helped extend the life of x86 CPUs.
Intel hit a wall with Itanium.

Honestly, who runs ancient stuff on the latest shiny 5 GHz 16-core CPU? Nobody.
Old software has too many limitations; it cannot utilise all of that CPU anyway.

I still have really old games such as Heretic and other DOS titles, and they run perfectly fine either via Steam or inside VirtualBox.
Once ancient stuff can be emulated, it should be removed from the hardware; otherwise it keeps snowballing. Old stuff should be left behind, like what AMD and Nvidia do with their old GPUs: they only support them up to a certain OS version or number of years.
Why can Qualcomm's Elite CPU emulate x86, while the powerful x86 CPUs themselves couldn't manage the same trick...?
Like, who still writes software optimised for MMX? Newer instructions are much faster...
*
I think the problem is with enterprise companies... many of their apps are designed for x86. If they were to migrate to ARM, it would be costly.
blindmutedeaf
post Aug 7 2024, 09:24 AM

Getting Started
**
Junior Member
270 posts

Joined: Sep 2016
From: Penang lo
QUOTE(Matchy @ Aug 7 2024, 08:35 AM)
I think the problem is with enterprise companies... many of their apps are designed for x86. If they were to migrate to ARM, it would be costly.
*
Yep, this is the problem. If you go into those long-established production companies, you can still see DOS running, and the software developer is long gone.
So I wouldn't be surprised if a whole lot of our daily infrastructure still has that T-Rex software running.

It's a different business model; some people call it a moat?
babylon52281
post Aug 7 2024, 09:53 AM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(kingkingyyk @ Aug 6 2024, 08:44 PM)
Not really. Apple's M CPUs are produced on newer lithography than x86 chips, so the efficiency is better. If Apple made them on a Samsung node, you would see them perform far worse. ;)
*
Smaller litho helps pack in more cores and improves power efficiency, but it doesn't explain why Apple M still has better IPC than Intel/AMD x86; that comes down to the uarch being designed from scratch.
babylon52281
post Aug 7 2024, 10:05 AM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(Matchy @ Aug 7 2024, 08:35 AM)
I think the problem is with enterprise companies... many of their apps are designed for x86. If they were to migrate to ARM, it would be costly.
*
QUOTE(blindmutedeaf @ Aug 7 2024, 09:24 AM)
Yep, this is the problem. If you go into those long-established production companies, you can still see DOS running, and the software developer is long gone.
So I wouldn't be surprised if a whole lot of our daily infrastructure still has that T-Rex software running.

It's a different business model; some people call it a moat?
*
It's because these enterprises are too skinflint to invest in modernising their ancient hardware, citing the 'need to support legacy software', until there is no other choice, at which point they somehow magically manage to port over without it nearly killing their business.
Banking ATMs are a good example: they stubbornly stayed on DOS until recently, when they migrated to... Win XP, lol. Somehow the financial sector didn't collapse. So I doubt there is really any technical reason for companies to keep relying on old systems.

OTOH there are anecdotal excuses to stay put: during the CrowdStrike outage it was Southwest Airlines in the US that could remain operational, reportedly thanks to running Windows 3.1. Somehow nobody mentioned that such an outdated system can easily be hacked or hit by malware.

A ground-up new uarch can be hardened for better security against attacks on the ancient mechanisms that still live on in x86.

kingkingyyk
post Aug 7 2024, 03:06 PM

10k Club
Group Icon
Elite
15,694 posts

Joined: Mar 2008
QUOTE(babylon52281 @ Aug 7 2024, 09:53 AM)
Smaller litho helps pack in more cores and improves power efficiency, but it doesn't explain why Apple M still has better IPC than Intel/AMD x86; that comes down to the uarch being designed from scratch.
*
Not really, again. x86 can scale up to that level too, but it would be large and too costly to produce. ;)

So it's not that x86 is bad at efficiency and ARM is the saviour; it comes down to the engineering decisions each company makes. Recall that we had similar Snapdragon vs Dimensity designs, one made by Samsung and the other by MediaTek, and the efficiency curves were completely different.

This post has been edited by kingkingyyk: Aug 7 2024, 03:15 PM
chocobo7779
post Aug 7 2024, 09:22 PM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
QUOTE(kingkingyyk @ Aug 7 2024, 03:06 PM)
Not really, again. x86 can scale up to that level too, but it would be large and too costly to produce. ;)

So it's not that x86 is bad at efficiency and ARM is the saviour; it comes down to the engineering decisions each company makes. Recall that we had similar Snapdragon vs Dimensity designs, one made by Samsung and the other by MediaTek, and the efficiency curves were completely different.
*
Even the SD X Elite isn't that much of a competitor to modern x86 chips, despite being designed by a team of ex-Apple Silicon engineers. :idea:
babylon52281
post Aug 7 2024, 10:24 PM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(kingkingyyk @ Aug 7 2024, 03:06 PM)
Not really, again. x86 can scale up to that level too, but it would be large and too costly to produce. ;)

So it's not that x86 is bad at efficiency and ARM is the saviour; it comes down to the engineering decisions each company makes. Recall that we had similar Snapdragon vs Dimensity designs, one made by Samsung and the other by MediaTek, and the efficiency curves were completely different.
*
Not really, take three. Raptor Lake has roughly the same transistor count (~25 bil) as the M3 (non-Pro), but it still loses in IPC. Why? It's the uarch.

QUOTE(chocobo7779 @ Aug 7 2024, 09:22 PM)
Even the SD X Elite isn't that much of a competitor to modern x86 chips, despite being designed by a team of ex-Apple Silicon engineers. :idea:
*
That was Snapdragon's first go at the Windows ecosystem; anything pioneering will surely have lots of issues, and Windows wasn't specifically designed for ARM either. So there is a lot of room for optimisation before we start whacking SD/ARM. I'd say give it 3 generations, then we will have a better picture of where ARM sits against x86.
kingkingyyk
post Aug 7 2024, 10:39 PM

10k Club
Group Icon
Elite
15,694 posts

Joined: Mar 2008
QUOTE(chocobo7779 @ Aug 7 2024, 09:22 PM)
Even the SD X Elite isn't that much of a competitor to modern x86 chips, despite being designed by a team of ex-Apple Silicon engineers. :idea:
*
Well, it depends on whether they want to widen the core or not. For now it serves only a niche market. :idea:

QUOTE(babylon52281 @ Aug 7 2024, 10:24 PM)
Not really, take three. Raptor Lake has roughly the same transistor count (~25 bil) as the M3 (non-Pro), but it still loses in IPC. Why? It's the uarch.
*
You are aware that the CPU cores themselves occupy only a small part of the die in a modern processor (we should call it an SoC now), right? ;)

1024kbps
post Aug 8 2024, 07:56 AM

李素裳
*******
Senior Member
6,012 posts

Joined: Feb 2007



QUOTE(babylon52281 @ Aug 7 2024, 09:53 AM)
Smaller litho helps pack in more cores and improves power efficiency, but it doesn't explain why Apple M still has better IPC than Intel/AMD x86; that comes down to the uarch being designed from scratch.
*
I think it's more about optimisation. x86 desktop apps are notorious power hogs; although many parts of the 2D stuff are already offloaded to the GPU, it's still far from perfect.
When I open Google Maps in Chrome on my AMD laptop, it is so slow I have to enable performance mode, and the fans turn into a jet engine.
I can open the same map on my OnePlus 7T Pro and everything renders instantly... and the phone can still play all sorts of games, albeit slower.
kingkingyyk
post Aug 8 2024, 09:18 AM

10k Club
Group Icon
Elite
15,694 posts

Joined: Mar 2008
QUOTE(1024kbps @ Aug 8 2024, 07:56 AM)
I think it's more about optimisation. x86 desktop apps are notorious power hogs; although many parts of the 2D stuff are already offloaded to the GPU, it's still far from perfect.
When I open Google Maps in Chrome on my AMD laptop, it is so slow I have to enable performance mode, and the fans turn into a jet engine.
I can open the same map on my OnePlus 7T Pro and everything renders instantly... and the phone can still play all sorts of games, albeit slower.
*
Blame that on inefficient JavaScript code. :D WebAssembly can solve that, but it has gained only a little traction so far.
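The classic win is a hot numeric loop: JS JITs have to keep guessing types, while the same function compiled to Wasm runs with fixed types at near-native speed. A rough sketch in C, assuming an Emscripten-style toolchain (the file name, function name, and compile line are illustrative only):

CODE
/* dot.c -- the kind of hot loop that crawls as plain JavaScript.
   Compiled to WebAssembly with something like:
     emcc dot.c -O3 -o dot.js -sEXPORTED_FUNCTIONS=_dot
   (flags assumed; check the Emscripten docs for your version) */
#include <stddef.h>

double dot(const double *a, const double *b, size_t n) {
    double sum = 0.0;
    /* Types are fixed at compile time -- no JIT type guessing,
       no deoptimisation stalls, just a tight multiply-add loop. */
    for (size_t i = 0; i < n; i++)
        sum += a[i] * b[i];
    return sum;
}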
babylon52281
post Aug 8 2024, 09:37 AM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
QUOTE(1024kbps @ Aug 8 2024, 07:56 AM)
I think it's more about optimisation. x86 desktop apps are notorious power hogs; although many parts of the 2D stuff are already offloaded to the GPU, it's still far from perfect.
When I open Google Maps in Chrome on my AMD laptop, it is so slow I have to enable performance mode, and the fans turn into a jet engine.
I can open the same map on my OnePlus 7T Pro and everything renders instantly... and the phone can still play all sorts of games, albeit slower.
*
QUOTE(kingkingyyk @ Aug 8 2024, 09:18 AM)
Blame that on inefficient JavaScript code. :D WebAssembly can solve that, but it has gained only a little traction so far.
*
It's like badly optimised games getting pushed out because developers know they can always rely on 4090s & 4080s to hit acceptable FPS; the same kind of sloppy developer doesn't really care about x86 code optimisation, since the CPU isn't built to prioritise efficiency the way the same software must on ARM (where you practically have to develop within a power/thermal envelope).
kingkingyyk
post Aug 8 2024, 09:43 AM

10k Club
Group Icon
Elite
15,694 posts

Joined: Mar 2008
QUOTE(babylon52281 @ Aug 8 2024, 09:37 AM)
It's like badly optimised games getting pushed out because developers know they can always rely on 4090s & 4080s to hit acceptable FPS; the same kind of sloppy developer doesn't really care about x86 code optimisation, since the CPU isn't built to prioritise efficiency the way the same software must on ARM (where you practically have to develop within a power/thermal envelope).
*
This is not really an x86 vs ARM thing; it's just developers trading performance for faster development time. The perception you have of ARM comes from apps having to account for the low baseline performance of ARM processors, where you still get the ancient A76 in brand-new chips!
https://www.gsmarena.com/mediatek_launches_...-news-64021.php

For websites... I don't think they're going to care that much. :lol: Time to market is more important.
1024kbps
post Aug 8 2024, 06:44 PM

李素裳
*******
Senior Member
6,012 posts

Joined: Feb 2007



QUOTE(babylon52281 @ Aug 8 2024, 09:37 AM)
It's like badly optimised games getting pushed out because developers know they can always rely on 4090s & 4080s to hit acceptable FPS; the same kind of sloppy developer doesn't really care about x86 code optimisation, since the CPU isn't built to prioritise efficiency the way the same software must on ARM (where you practically have to develop within a power/thermal envelope).
*
Depends on the publisher.
A good publisher like Bethesda, who developed the Doom series, supported OpenGL at first and added Vulkan rendering later; all the following games on id Tech 6 and newer perform very well.
Then CDPR released their "it just works" game to the public; it still ran like crap, but at least they kept updating it.

They collect your system data on crashes, or probably have telemetry enabled, and then optimise accordingly.
Square Enix is known for slow updates IIRC; most of my Tomb Raider games don't get updated as much as other AAA publishers' titles.

Unlike a certain publisher called ubi kentang ("Potatosoft"), lol.
Baconateer
post Aug 8 2024, 06:48 PM

Meh..... (TM)
*******
Senior Member
5,088 posts

Joined: Jun 2013
From: Blue Planet


QUOTE(1024kbps @ Aug 8 2024, 06:44 PM)
Depends on the publisher.
A good publisher like Bethesda, who developed the Doom series, supported OpenGL at first and added Vulkan rendering later; all the following games on id Tech 6 and newer perform very well.
Then CDPR released their "it just works" game to the public; it still ran like crap, but at least they kept updating it.

They collect your system data on crashes, or probably have telemetry enabled, and then optimise accordingly.
Square Enix is known for slow updates IIRC; most of my Tomb Raider games don't get updated as much as other AAA publishers' titles.

Unlike a certain publisher called ubi kentang ("Potatosoft"), lol.
*
id Software developed Doom.

Bethesda is the publisher.

Bethesda itself can only develop something like Starfield...

which is an abomination in terms of optimisation compared to Doom.
dexeric
post Aug 8 2024, 07:15 PM

Getting Started
**
Junior Member
118 posts

Joined: Oct 2008


Not sure what the actual topic is any more, but the Intel issue is mainly due to (a) the small non-HT E-cores sharing the ring bus with the performance cores [which run at a different clock], and (b) having to clock the ring bus to the moon to make the P-cores faster.
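You can actually watch those separate clock domains on Linux; a quick sketch in C that reads the stock cpufreq sysfs files (it doesn't label which cores are P or E, the spread between cores is the point):

CODE
/* percore_freq.c -- print each core's current frequency so the
   P-core vs E-core clock split is visible (Linux cpufreq sysfs). */
#include <stdio.h>

int main(void) {
    char path[96];
    for (int cpu = 0; ; cpu++) {
        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu%d/cpufreq/scaling_cur_freq",
                 cpu);
        FILE *f = fopen(path, "r");
        if (!f)
            break;                     /* ran past the last CPU */
        long khz = 0;
        if (fscanf(f, "%ld", &khz) == 1)
            printf("cpu%-3d %6.0f MHz\n", cpu, khz / 1000.0);
        fclose(f);
    }
    return 0;
}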

-----

Regarding SD X Elite vs x86 PCs vs Apple M2/3/4:

M2/3/4 vs x86: Apple controls the whole ecosystem and can make the entire machine efficient by keeping every crucial part first-party [RAM, SSD, software (with strict control over execution), and everything else] to achieve state-of-the-art efficiency (it helps at least a bit).

AMD vs SD X Elite: AMD's design trims the CPU's power envelope to leave more headroom for the GPU. In comparison, the X Elite CPU is better than AMD's Ryzen AI 9 300 series, but while there is not much difference in CPU benchmarks, in games there is a lot of difference.


babylon52281
post Aug 9 2024, 09:29 AM

Look at all my stars!!
*******
Senior Member
2,673 posts

Joined: Apr 2017
Poorly optimised software exists because developers know they can lean on high-power CPUs; high-power CPUs exist because the existing hardware uarch is poorly optimised for power efficiency & IPC; and that poorly optimised uarch exists because back in the 70s & 80s (when x86 was rising), power efficiency wasn't a thing.

AFAIK I haven't heard of poorly written Apple or Android/ARM software that sucks more power than it needs. I stand to be corrected.

For the x86 uarch to move forward, it needs to ditch the legacy 16-bit & 32-bit functionality and revamp its 64-bit compute to be on par with the SD X / M series.

Intel's problem in this thread is symptomatic of x86's limitations: pushing power limits once node shrinks hit a wall. The AMD 9000 series is also indicative of this, perhaps more so for the future, with the decision to pull the TDP limit back to 65 W; as I suspected from the TPU review, with PBO off there was only about 5% more to gain, so AMD too has hit a power/performance wall with the current uarch & node.
