Let's say it is 3W vs 30W at idle, but under heavy load the situation is reversed at a much larger scale. So it is heavily dependent on the usage pattern. The power-management algorithm in the firmware is involved too. Bigger brands have more experience and perform QC for this kind of scenario.
Indeed, which is why I did say gaming-wise it's better per watt. So it really depends on usage: whether it's primarily just a gaming PC, or daily regular use with some hours of gaming, in which case an Intel would use less wattage per day. So YMMV.
1024kbps, we talked about X86 backwards compatibility hampering its evolution, but here is real proof of how ridiculously deep it can go, running software from way back in the DOS days.
Since X86 could not be weaned off its legacy limitations, there is no option but to move onto modern CPU uarchs that have no such baggage, i.e. ARM & RISC-V.
I, and I am guessing many people too, actually do prefer the backward compatibility "feature" to stay around and continue... at least old systems and software can still be used...
Not like my old Apple products that are basically fine but no longer usable due to software obsolescence.
It's the baggage problem: X86 has so much legacy that it hampers a more efficient uarch/instruction set from replacing it. Which is why ARM & Apple M are giving X86 the power efficiency & IPC middle finger.
Apple did the right thing by cutting out legacy support; look at how good its M CPUs are when they only cater to running modern software. ARM is a clean slate, which should be the next evolution for computers. By supporting historical software, X86 is history.
Not really. Apple M CPUs are produced on newer lithography than x86, so the efficiency is better. If Apple were to make them on a Samsung node, you would see them perform way worse.
Smaller litho helps to pack in more cores and improves power efficiency, but it doesn't explain why Apple M still has better IPC than Intel/AMD X86; that is all down to the uarch being designed from scratch.
I think the problem is with enterprise companies... many of their apps are designed for x86. If they were to migrate to ARM, it would be costly.
QUOTE(blindmutedeaf @ Aug 7 2024, 09:24 AM)
Yep, this is the problem: if you go into those long-established production companies, you can still see DOS, and the SW developer is long gone. So I won't be surprised if a whole lot of our daily infra is still running that T-Rex-era SW.
It is a diff biz model, some people call it a moat?
It's bcoz these enterprises are too skinflint to invest in modernising their ancient hardware, with that excuse of 'need to support legacy SW', until there is no other choice and then somehow they magically can port over without nearly killing their business. Banking ATMs have been a good example, stubbornly staying on DOS until recently when they migrated to... Win XP lol. Somehow the financial sector didn't collapse. So I doubt there is really any technical reason for companies still relying on old systems.
OTOH there are anecdotal excuses to stay, as during the CrowdStrike outage it was US Southwest Airlines that could remain operational, reportedly due to running on Win 3.1. Somehow nobody mentioned that such an outdated system can easily be hacked or hit by malware.
A ground-up new uarch CPU can be hardened for better security against attacks on ancient mechanisms that still exist in X86.
Not really, again. x86 can scale up to that level too, but it would be large and too costly to produce.
So it is not that x86 is bad at efficiency and ARM is the saviour; it is the company that makes the engineering decisions. Recall that we had similar designs with Snapdragon vs Dimensity, one made by Samsung and another by MediaTek, yet the efficiency curves are completely different.
Not really, three. Raptor Lake has about the same transistor count (~25bil) as the M3 (non-Pro), but it still loses in IPC. Why? It's the uarch.
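Just to pin down what "loses in IPC" means here, the textbook relation (definitions only, no benchmark numbers implied):

\[ \text{IPC} = \frac{\text{instructions retired}}{\text{clock cycles}}, \qquad \text{single-thread perf} \approx \text{IPC} \times f_{\text{clock}} \]

So if two chips land at a similar single-thread score but one runs at a noticeably lower clock, the lower-clocked one must be retiring more instructions per cycle, and that comes from the uarch, not from transistor count or node.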
QUOTE(chocobo7779 @ Aug 7 2024, 09:22 PM)
Even the SD X Elite isn't that much of a competitor compared to modern x86 chips either, despite being designed by a team of ex-Apple Silicon engineers
That is SD's 1st go at the Windows ecosystem; surely anything pioneering will have lotsa issues, and Windows wasn't specifically designed for ARM either. So there is a lot of room for optimisation before we start whacking SD/ARM. I'd say give it 3 gens, then we will have a better picture of where ARM sits against the X86s.
I think it's more about optimization; x86 desktop apps are notorious power hogs. Although many parts of the 2D stuff are already offloaded to the GPU, it's still far from perfect. When I opened Google Maps in Chrome on my AMD lappy, it was so slow I had to enable performance mode, and it also turned into a jet engine. I can open the same map on my OnePlus 7T Pro and everything renders instantly... The phone can still play all sorts of games, albeit slower.
QUOTE(kingkingyyk @ Aug 8 2024, 09:18 AM)
Blame that on inefficient JavaScript code. WebAssembly would solve that, but only a little traction has been gained on that side.
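To illustrate that WebAssembly point, here's a minimal TypeScript sketch of handing a hot loop to a precompiled wasm module instead of plain JS. The sum.wasm file and its exported sum function are hypothetical (something you'd compile beforehand from C or Rust); only the WebAssembly JS API calls are standard.

// Hypothetical example: offload a hot loop to WebAssembly instead of plain JS.
// Assumes a module "sum.wasm" (compiled separately, e.g. from C or Rust)
// that exports:  sum(ptr, len) -> number  over its own linear memory.

// Plain JS version: the JIT has to infer types and may deoptimise.
function sumJs(values: Float64Array): number {
  let total = 0;
  for (let i = 0; i < values.length; i++) total += values[i];
  return total;
}

async function sumWasm(values: Float64Array): Promise<number> {
  // Fetch and instantiate the precompiled module (standard WebAssembly JS API).
  const { instance } = await WebAssembly.instantiateStreaming(fetch("sum.wasm"));
  const { memory, sum } = instance.exports as {
    memory: WebAssembly.Memory;
    sum: (ptr: number, len: number) => number;
  };

  // Copy the input into wasm linear memory, then call the typed, ahead-of-time
  // compiled function. No JIT warm-up, no type speculation.
  const ptr = 0; // toy layout: start of linear memory (a real module would export an allocator)
  new Float64Array(memory.buffer, ptr, values.length).set(values);
  return sum(ptr, values.length);
}

The point isn't that wasm is magic; it's that the wasm side is statically typed and compiled ahead of time, so the CPU (x86 or ARM) spends fewer cycles on type speculation and JIT warm-up for the same work.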
It's like how badly optimised games get pushed out bcoz developers know they can always rely on 4090s & 4080s to run those titles at acceptable FPS; the same type of sloppy developers don't really care about X86 code optimisation, as the CPU isn't built to prioritise efficiency as much as when the same software targets ARM (where you practically have to develop within a power/thermal envelope limit).
Poorly optimised software exists bcoz devs know they can lean on high-power CPUs; high-power CPUs exist bcoz the existing hardware uarch is poorly optimised for power efficiency & IPC; and the poorly optimised uarch exists bcoz back in the 70s & 80s (when X86 was rising), power efficiency wasn't a thing.
AFAIK I haven't heard of poorly written Apple or Android/ARM software that sucks more power than it needs to run. I stand to be corrected.
For the X86 uarch to move forward, it needs to ditch the legacy 16-bit & 32-bit functions and revamp its 64-bit compute to be on par with the SD/M series.
Intel's problem in this thread is symptomatic of X86 limitations from pushing power limits once they hit the node-shrink wall. AMD's 9000 series is also indicative of this, perhaps more so of the future, when they decided to pull the TDP limit back to 65W; as I suspected, looking at the TPU review, with PBO off there was only a little (~5%) more to gain, so AMD too has hit a power/performance wall with the current uarch & node.
Not sure why you are comparing a desktop part to a mobile part; the comparison should be the X Elite vs the AMD Ryzen AI 9 300 series. Most of the time it shows a bit of difference in performance and power, but it is not a big difference.
Regarding the uarch, you cannot run 16-bit code on 64-bit Windows 10 at all, and 32-bit apps only run through the WoW64 layer. So why point this out? Just because the instruction set still supports those modes, it does not mean they are still used in a PC.
Heck, 16-bit isn't even supported now. Instruction set:
AMD64 (x86-64) - AMD64 only supports 32-bit and 64-bit
Extensions:
- Crypto: AES, SHA
- SIMD: MMX+, SSE, SSE2, SSE3, SSE4.1, SSE4.2, SSE4A, SSSE3, FMA3, AVX, AVX2, AVX-512
- Virtualization: AMD-V
The only problem I see in this ARM vs x86 PC debate is the RISC vs CISC design, which is only really affected by the legacy design of the operating system, since the instruction sets are different.
Ryzen HS and Core HX CPUs are mobile parts, no? But they share the same uarch as the desktop parts, and these are thermally hard to cool, unlike the SD X Elite.
Just bcoz it's not used in Windows doesn't mean the hardware to run it isn't already there.
But it's precisely because Windows no longer interacts at such a base level that it makes no sense to keep them, so why not deprecate and remove the 16-bit & 32-bit functions. Then, by optimising for 64-bit, the transistors saved can be reused to improve things elsewhere, or the die size can be cut down to reduce cost.
And it's not like it's the end for legacy programs, as software emulation could be used to run them.
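A rough sketch of what "software emulation" means here, using a made-up three-instruction toy ISA rather than real 16-bit x86 (real emulators like DOSBox follow the same fetch-decode-execute idea, just with the full instruction set and hardware devices):

// Toy interpreter loop for a made-up 3-instruction "legacy" ISA, purely to
// illustrate how old binaries can run on any modern CPU via software emulation.

enum Op { LOAD_IMM = 0x01, ADD = 0x02, HALT = 0xff }

function runLegacyProgram(program: Uint8Array): number[] {
  const regs = [0, 0, 0, 0]; // four 16-bit-style registers
  let pc = 0;                // program counter into the legacy binary

  while (pc < program.length) {
    const op = program[pc];
    switch (op) {
      case Op.LOAD_IMM: {            // LOAD_IMM reg, value
        const [reg, value] = [program[pc + 1], program[pc + 2]];
        regs[reg] = value & 0xffff;
        pc += 3;
        break;
      }
      case Op.ADD: {                 // ADD dst, src  (dst += src)
        const [dst, src] = [program[pc + 1], program[pc + 2]];
        regs[dst] = (regs[dst] + regs[src]) & 0xffff;
        pc += 3;
        break;
      }
      case Op.HALT:
        return regs;
      default:
        throw new Error(`unknown opcode 0x${op.toString(16)} at ${pc}`);
    }
  }
  return regs;
}

// Usage: LOAD_IMM r0,7 ; LOAD_IMM r1,35 ; ADD r0,r1 ; HALT  ->  r0 = 42
console.log(runLegacyProgram(new Uint8Array([0x01, 0, 7, 0x01, 1, 35, 0x02, 0, 1, 0xff])));

The cost is that every legacy instruction becomes a handful of native ones, but for DOS-era software that is irrelevant on a modern CPU, which is why the "keep the hardware modes forever" argument is weaker than it sounds.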
This post has been edited by babylon52281: Aug 9 2024, 10:48 AM
By the way, we have shifted the topic too far away. Intel does provide an extended warranty for the CPUs. Business-ethics-wise Intel isn't very good, and AMD is not much better, but hopefully it won't go bankrupt. We finally have Intel GPUs, and I can see some developers adding Intel-GPU-exclusive features to games, e.g. Cyberpunk 2077.
Can't let the AMD/NVIDIA duopoly dominate again, as without competition GPU prices will always go wild.
We did run off topic, but the key thing is: Intel (and mobo partners) screwed up with 13th/14th Gen, AMD screwed up with X3D cooking itself and the 9000 series flopping, so we really need a 3rd CPU gamechanger, and so far ARM/SD is the best bet to push CPU evolution to the next stage. Unless someone else comes up with quantum computing that fits into a 2in x 2in square.
Ohh, and then there's China, but let's not talk about them.
This post has been edited by babylon52281: Aug 9 2024, 05:49 PM
For 13th/14th Gen users, be aware there are TWO microcodes to install: 0x125 and the latest 0x129. You should install both in sequence.
Why? Intel has defined each microcode to apply a different fix to the voltage behaviour, tho I am unsure if the later code includes the fixes of the earlier release, so just to be safe, better to flash both into your BIOS.
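On the "flash both" point, if you want to sanity-check which microcode revision is actually loaded after a BIOS update, here is a small Node/TypeScript sketch for Windows. It is only a convenience wrapper around the built-in reg.exe and the standard CentralProcessor registry key; compare the reported hex against the revision your BIOS changelog claims (a tool like HWiNFO shows the same thing).

// Sketch: print the CPU name and the raw "Update Revision" value that Windows
// reports for the loaded microcode, so you can eyeball whether the hex matches
// the revision your BIOS notes claim (e.g. 0x129). Run with ts-node on Windows.
import { execSync } from "node:child_process";

const key = String.raw`HKLM\HARDWARE\DESCRIPTION\System\CentralProcessor\0`;

for (const value of ["ProcessorNameString", "Update Revision"]) {
  // reg.exe is built into Windows; /v selects a single value under the key.
  const out = execSync(`reg query "${key}" /v "${value}"`, { encoding: "utf8" });
  console.log(out.trim());
}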
The first review of the microcode is in, and at least this YouTuber says it doesn't fix the issue.
It seems VDD is still higher than spec, but I'm not sure if VDD was the killer overvolt. Owners will still need to check for themselves whether it still degrades post-update; if it does, then it's an epyc fail from Intel yet again (what's with these brands competing over who can fail harder, et tu AMD?).
No wonder my office designer's new desktop from Lenovo (13th Gen i7) crashed and needed the mobo, CPU and GPU changed... I think the CPU was changed 2 times...
Now I want to buy a new desktop; is the 14400 affected?
If you need to change the mobo & GPU, you've got a different, serious problem leh. The CPU might be related, but I don't see how it could have killed your GPU...
Once you get the replacement, do update both microcodes (and any later ones if available), then set the TDP limit to Intel's PBP, and set the VDD limit according to the video posted above. I think after this it should be stable, unless something else gets discovered.
The Lenovo technician also doesn't know what happened, as after changing the hardware a few times it still BSODs... I think the first time they changed in a new CPU... after that the GPU... the 3rd time the CPU and mobo... The first CPU worked for a few days and then started BSODing again... Even now it still sometimes BSODs...
Could be something else related; RAM or storage? Or power supply?
Just to share: I finally switched the backplate out for the Thermalright one. It helps by about 5-10°C! Previously I underclocked and undervolted it by setting PL1 to 125W and PL2 to 180W before it would thermal throttle, but now it can go up to 200-220W for PL2! Not that it matters much in day-to-day usage or gaming, but it's just good to see the temps improve, and the backplate is not that expensive. But I still couldn't get it to 253W because it thermal throttles halfway. Room ambient temperature is probably about 26°C since there's no aircon.
Ya, that's right. I was wondering if it was a placebo effect until I finally made the decision to do it. So I just want to share here again to let others know it really works haha.
But I didn't do it as meticulously as what you shared in your thread. I just screwed it down tight and that's all lol.
Haha, cuz I heard there's a high risk of screwing up RAM stability by wrongly tightening it down with unequal pressure, then having to redo RAM tuning & testing, stability issues and whatnot. Hell no will I go thru that after 2 weeks of RAM tuning. Also, I wanted to make sure doing it properly would give it the best chance to work, and doing it methodically isn't that difficult once you understand the concept. But congrats on your results too!
Now I wonder if anyone will make one for LGA1851 as well haha.
LOL, if I want to change CPU next time, I will skip Intel for a while until they've fixed their shit.
Agreed, altho in my history I seem to have stuck with Intel somehow, coming up to 10 systems, PC & laptop, inc a work-based lappie. Never had an AMD (or Mac), as oddly I skipped the whole Intel 14nm+++++ era, hanging on with my Ivy Bridge lappie until my Alder Lake PC. So I'm quite optimistic that the next time I need to change again, perhaps my luck is that Intel will have something good.