
Intel 13th/14th gen CPUs crashing, degrading

stella_purple
post May 11 2024, 02:42 AM

Casual
***
Junior Member
392 posts

Joined: Oct 2011
QUOTE(lee_lnh @ May 11 2024, 02:20 AM)
the amd one was do or die... since it burnt out.
the inhell one is gonna affect you for life..
*
the amd one is worse, it may turn into a fire hazard laugh.gif




This post has been edited by stella_purple: May 11 2024, 02:51 AM
adamtayy
post May 11 2024, 05:24 AM

Regular
******
Senior Member
1,379 posts

Joined: May 2006
From: Penang island



QUOTE(stella_purple @ May 11 2024, 02:42 AM)
the amd one is worse, it may turn into a fire hazard laugh.gif



*
i think it was overclocked kaw-kaw

that's why....
chocobo7779
post May 11 2024, 07:41 AM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
QUOTE(imbibug @ Apr 15 2024, 03:01 PM)
Since around the 10th gen or so, Intel seems to have been juicing up their CPUs in order to look competitive with Ryzen CPUs.
And Intel has done this sort of thing in the past, just not to the extent of what is going on now.

This problem has only gotten worse since. The current 13th/14th gen CPUs will degrade at stock settings because Intel ships ridiculously high power limits. The default limits are already very high, and the 'extreme' ICCmax of 400A for the 150W TDP 13th/14th gen CPUs is clearly crazy high. The average PC builder or gamer is not going to know that leaving the settings at default will degrade their CPU permanently in a few short months.

Recently Nvidia pushed back by telling users with Raptor Lake CPUs to contact Intel after getting "out of video memory" errors.
https://www.tomshardware.com/pc-components/...t-intel-support
And even worse, it looks like the Raptor Lake CPUs have degraded within a few months to the point where crashing occurs.
https://www.lowyat.net/2024/320284/gamers-r...-cpus-en-masse/

Sad to see how Intel has sunk to the level of putting the blame on mobo manufacturers for not enforcing limits LOWER than Intel's own specs for long-term reliability.
*
The whole stability mess would have been mitigated had Intel actually tried to compete with AMD's X3D chips instead of brute-forcing clock speeds just to eke out that tiny little performance advantage (so it can look good in presentations) sweat.gif
IMHO the LGA1700 platform is in a weird place; while the 12th gen chips are legitimately good, the 13th gen chips feel like 12th gen chips with mostly minor improvements, and the 14th gen is just a pure waste of sand. It really speaks volumes about Intel's platform 'longevity', especially when you consider Arrow Lake will come on a new platform/socket sweat.gif

QUOTE(Duckies @ Apr 15 2024, 03:11 PM)
Yes...and I have to undervolt + power limit my 14700K to prevent high temperatures...cilaka (dammit)

In fact AMD CPUs and GPUs are now the best in performance/price. If not for Intel's good marketing...
*
I really wish AMD/Intel hadn't locked themselves into an all-out performance war and would start designing desktop chips with efficiency in mind - those 5.5-6+ GHz clock speeds are sort of unsustainable and remind me of the NetBurst era, where little emphasis was put on efficiency. Especially when you consider most x86 chips are clocked way beyond their efficiency sweet spot, which leads to excessive power consumption. Yes, I know you can power tune those CPUs or even use Eco mode, but the hivemind only wants longer performance bars at any cost, so the vendors are more than happy to oblige sweat.gif
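
If you're on Linux and want to see what PL1/PL2 your board is actually enforcing before you start tuning, the kernel's powercap interface exposes the RAPL limits directly. A minimal sketch in Python - assuming the package domain is intel-rapl:0 (it can differ per machine) and root access:

CODE
# Read the enforced package power limits via Linux's intel_rapl powercap
# driver. Paths follow the standard sysfs layout; the domain index is an
# assumption - check /sys/class/powercap on your own machine.
from pathlib import Path

RAPL = Path("/sys/class/powercap/intel-rapl:0")  # package-0 domain

def read_watts(name: str) -> float:
    # sysfs reports microwatts as plain integers
    return int((RAPL / name).read_text()) / 1_000_000

if __name__ == "__main__":
    # constraint_0 = long-term limit (PL1), constraint_1 = short-term (PL2)
    print(f"PL1 (sustained): {read_watts('constraint_0_power_limit_uw'):.0f} W")
    print(f"PL2 (boost):     {read_watts('constraint_1_power_limit_uw'):.0f} W")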

This post has been edited by chocobo7779: May 11 2024, 12:50 PM
chocobo7779
post May 11 2024, 07:43 AM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
QUOTE(pandah @ Apr 15 2024, 03:28 PM)
Big reduction in performance? Or generally not noticeable?
*
Probably not noticeable outside of very heavy, multithreaded workloads (in games it will be fine) icon_idea.gif
chocobo7779
post May 11 2024, 07:44 AM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
QUOTE(imbibug @ Apr 15 2024, 03:29 PM)
The 14700 is not a bad CPU, it's just that Intel wants to win and be no. 1 at everything at any cost, it seems. Lowering the PL1/PL2/ICCmax is not going to destroy performance; it's still going to be good and generate a lot less heat.
*
Yup, same goes for AMD as well with their '95C Tjmax' temperatures sweat.gif
chocobo7779
post May 11 2024, 07:48 AM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
QUOTE(TristanX @ Apr 15 2024, 07:00 PM)
Read the Tom's Hardware news properly. 4096W is not Intel's official power limit; 253W is the limit for the 13900K and 14900K. Motherboard vendors are pushing too much performance on already "overclocked" chips.

You don't need the power limit removed to get the most out of these chips either.

https://www.techpowerup.com/review/intel-co...-14900k/18.html

It can be tuned to be very efficient too.

*
On the other hand, the whole power limit mess has already been happening for about a decade or so, according to Hardware Unboxed:


Yes, you can tune Intel chips to very efficient levels, but the same goes for AMD as well (it's even better when you consider you can just enable the 105W Eco mode in the BIOS for the Ryzen 9 chips and you'll see very little performance loss outside of synthetic benchmarks, with a significant uplift in efficiency) icon_idea.gif
That being said, it just kind of proves that modern x86 CPUs are massively tuned for performance with very little regard for efficiency icon_idea.gif
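
To put rough numbers on that Eco mode point (purely illustrative figures, not measurements from any review), the performance-per-watt arithmetic looks like this:

CODE
# Hypothetical numbers only - shows the shape of the Eco mode tradeoff:
# a small score drop for a much bigger efficiency gain.
def perf_per_watt(score: float, watts: float) -> float:
    return score / watts

stock = perf_per_watt(score=38_000, watts=230)  # made-up stock run
eco   = perf_per_watt(score=35_500, watts=142)  # made-up 105W Eco run

print(f"stock: {stock:.0f} pts/W, eco: {eco:.0f} pts/W")
print(f"perf lost: {1 - 35_500 / 38_000:.1%}, "
      f"efficiency gained: {eco / stock - 1:.1%}")

The exact figures don't matter; the point is that losing a few percent of score while cutting nearly 40% of the power is a large net efficiency win.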

This post has been edited by chocobo7779: May 11 2024, 08:02 AM
chocobo7779
post May 11 2024, 07:57 AM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
QUOTE(babylon52281 @ Apr 15 2024, 05:06 PM)
Buy ARM or Qualcomm stocks. Their CPUs destroy both x86 incumbents in terms of power efficiency. ARM is the future.
*
From an ISA point of view, both x86 and ARM are quite performant and efficient (there is a common myth that x86 is 'inefficient' because of its legacy bloat), and an ISA doesn't really mean much when there are no native apps to run. Back in the 1990s there were plenty of RISC ISAs that significantly outperformed x86, like DEC Alpha, and yet x86 prevailed due to its large library of existing software. I know binary translation/dynamic recompilation helps, but it introduces a performance overhead icon_idea.gif
Modern CPUs are so complex from an architectural standpoint that the ISA doesn't mean much anymore icon_idea.gif

Ultimately, implementation matters, not ISA (there's also the economic/business side to consider) - there's a YouTube channel on semiconductor design and manufacturing that does deep dives into CPU architectures:
https://www.youtube.com/@HighYield

https://chipsandcheese.com/2021/07/13/arm-o...-doesnt-matter/

This post has been edited by chocobo7779: May 11 2024, 08:05 AM
chocobo7779
post May 11 2024, 08:05 AM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
QUOTE(hashtag2016 @ Apr 20 2024, 01:32 PM)
Don't worry, be happy.. I think more and more mobo makers will include something like an Intel DEFAULT setting to help users troubleshoot their PC issues (if any).
Since Asus already introduced an "Intel Baseline Profile" option in their latest BIOS.. it is a positive move, I think.. brows.gif

although you have to choose between Stability or Performance .. drool.gif
https://twitter.com/9550pro/status/1781481593972129929

p/s: IMHO, I think even 12th gen and some of the non-K chips might also face similar issues. (that's just my personal thought, don't quote me please). brows.gif  brows.gif
*
Probably for the better, but this just makes the 14th gen K CPUs look like rebranded 13th gen chips and even more of a waste of sand sweat.gif
adamtayy
post May 11 2024, 08:07 AM

Regular
******
Senior Member
1,379 posts

Joined: May 2006
From: Penang island



QUOTE(chocobo7779 @ May 11 2024, 07:44 AM)
Yup, same goes to AMD as well with their '95C Tjmax' temperatures sweat.gif
*
Attached Image
Tk-maxx, U.K
chocobo7779
post May 11 2024, 08:22 AM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
QUOTE(babylon52281 @ Apr 21 2024, 10:14 PM)
Futureproofing is a fallacy. Recall the AM4 300-series to 400-series upgrade issues and also their limited growth (no PCIe 4, no USB 3.2) by today's standards, even if you can technically still "upgrade" to a newer CPU.

1st gen AM5 is held back by limited DDR5 speeds. This will get old fast as DDR5 tech matures and speeds go over 10k.

Do not buy for something you think might happen in the future, just buy the best for today.
*
No PCIe 4? Not much of a problem for most GPUs, unless you are talking about something like the RX 6400/6500 XT with their nerfed PCIe x4 interface. Even the mighty 4090 only loses about 2% performance on PCIe 3.0 x16.

On the SSD side of things, there's not much difference between PCIe 3.0 and 4.0 for gaming either icon_idea.gif

USB 3.2? That belongs more or less in 'nice to have' territory rather than must-have, and even that can be added with a PCIe card if you really need it icon_idea.gif
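
For context, the raw link bandwidth per PCIe generation is easy to work out from the transfer rate and encoding overhead - a quick back-of-envelope in Python:

CODE
# Theoretical PCIe link bandwidth (ignores protocol overhead).
# Gen 1-2 use 8b/10b encoding, gen 3+ use 128b/130b.
GT_PER_S = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0, 5: 32.0}

def link_gbps(gen: int, lanes: int) -> float:
    encoding = 8 / 10 if gen <= 2 else 128 / 130
    return GT_PER_S[gen] * encoding * lanes / 8  # GT/s -> GB/s

for gen, lanes in [(3, 16), (4, 16), (3, 4), (4, 4)]:
    print(f"PCIe {gen}.0 x{lanes}: {link_gbps(gen, lanes):.1f} GB/s")
# ~15.8 GB/s for 3.0 x16 vs ~31.5 GB/s for 4.0 x16 - plenty for a GPU,
# while an x4 card like the RX 6400 drops to ~3.9 GB/s in a 3.0 slot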
chocobo7779
post May 11 2024, 08:46 AM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
QUOTE(babylon52281 @ Apr 22 2024, 09:43 AM)
AMD's mistake is keeping AM5 prices too f**king high. HUB recently did a review and the basement-budget mobos are basically all crap, so you cannot do a budget build with AM5 either, which is why AM4 is still hanging around and not getting replaced as it was supposed to be. This is a failure from a product marketing POV.

And for a fabless maker, they also failed to prioritise the software side of the CPU, leading to systems that are less stable (DOCP/EXPO, CCD priority) than Intel builds (and Intel still has to run a fab business). In that regard, heck, even Nvidia has a better grip on its various software ecosystems for other uses of their GPUs.

Both have their faults, and surprisingly it is Apple, another fabless brand, that is showing how to design CPU hardware with their superb M2 Ultra, with near flawless software integration. And then there is ARM...

Apple, ARM > Intel, AMD

All you AMD & Intel fanbois can balik rumah (go home) and cry
*
QUOTE
AMD's mistake is keeping AM5 prices too f**king high. HUB recently did a review and the basement-budget mobos are basically all crap, so you cannot do a budget build with AM5 either, which is why AM4 is still hanging around and not getting replaced as it was supposed to be. This is a failure from a product marketing POV.
Depending on where you live you can get a B650 board that will happily run a 7950X at full power; here's one for USD 109:
https://www.microcenter.com/product/664700/...atx-motherboard

The problem with most of these boards is that they are simply priced wrong - I mean, why bother buying them when the HDV will happily outperform them at the same price, if not lower? Perhaps they may work for those who only use 65W chips or even the 7800X3D, but that kind of defeats the point of AM5, so that's why HUB more or less advised users to stay away from them icon_idea.gif
AM4 is still very viable and there's really nothing wrong with that, especially with the X3D chips, which still perform quite well compared with modern Intel equivalents icon_idea.gif

The software side is tricky, but let's not forget that Intel/Nvidia have their fair share of software/driver issues too icon_idea.gif

QUOTE
Both have their faults, and surprisingly it is Apple, another fabless brand, that is showing how to design CPU hardware with their superb M2 Ultra, with near flawless software integration. And then there is ARM...
I mean, the M2 Ultra is a very expensive chip in terms of cost and transistor budget - it has what, 134 billion transistors, which is much closer to something like AMD's MI300A than to most consumer hardware (Nvidia's AD102 only has around 76 billion transistors). Flawless software integration isn't hard for Apple, as they have a much larger R&D budget to spend and a highly vertically integrated ecosystem (this is the nice part of a walled garden) icon_idea.gif
Sure, AMD/Intel could design chips like that, but why bother selling them in the consumer market when they can make vastly more selling them as HPC/AI chips? The consumer market is fundamentally a low-margin business icon_idea.gif

Again, the ISA doesn't matter (see the post above), and it's not like standard ARM cores are the gold standard in performance/efficiency icon_idea.gif

This post has been edited by chocobo7779: May 11 2024, 08:48 AM
chocobo7779
post May 11 2024, 08:55 AM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
QUOTE(1024kbps @ Apr 22 2024, 07:56 AM)
ARM and RISC-V aren't too far behind

If both AMD and Intel keep making mistakes they will become history
*
Unless Snapdragon X actually eats up the market, then no - the whole 'ARM/RISC-V will take over x86' talk reminds me of the age-old line that [insert year] will be the year of the Linux desktop sweat.gif

This post has been edited by chocobo7779: May 11 2024, 08:56 AM
chocobo7779
post May 11 2024, 12:57 PM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
QUOTE(lolzcalvin @ Apr 23 2024, 03:09 PM)
for now. microsoft is currently betting on the snapdragon x elite for their Windows on ARM. snapdragon x elite does look promising.

however, x86-64 may not be going anywhere soon despite the rising popularity of RISC-based instruction set architectures (ISAs).

the reason why windows still requires x86-64 (for now) is that they have a lot of legacy code that depends on the x86 ISA. many wizards we use in windows today date back to win95. this is why x86-64 has hung around for so long -- backwards compatibility for the past 20+ years. hell, even intel 8086-based software can run on modern x86 CPUs with little tweaking. people may attribute better efficiency to ARM, but it's really not about the difference in ISAs, it's about which direction CPU vendors push when designing their CPUs. for so long both AMD and Intel have been pushing towards high performance (both of them compete in high performance computing while putting a lower priority on battery life), while Qualcomm and Apple have been pushing for efficiency (both of them compete in the mobile space and therefore have plenty of experience with low-power operation).

a good read on RISC vs CISC if you're into a bit of technicality: https://chipsandcheese.com/2021/07/13/arm-o...-doesnt-matter/
if u don't already know, x86-64 is CISC-based while ARM and RISC-V are RISC-based
*
To be fair to AMD/Intel, it's really not that hard to match Apple in terms of peak performance/efficiency in mobile chips, but what Apple dominates is performance in the low/mid power range, which matters a lot more in the real world, as it's very unlikely for most modern workloads to run at peak performance for any long period of time due to power/heat constraints

This is partly why synthetic benchmarks like Cinebench/Geekbench can be misleading on power-constrained devices, as they only measure peak performance, not sustained performance, which is a lot more important (most x86 CPUs tend to focus a lot on peak performance, since on desktops they can run at peak for an indefinite period as long as the power/heat budget allows) icon_idea.gif
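
A toy model makes the peak-vs-sustained gap obvious - all numbers below are hypothetical, just to show how a short benchmark flatters a power-limited chip:

CODE
# A chip boosts at full speed until its thermal/energy budget runs out,
# then settles to a sustained level. Numbers are invented for illustration.
def avg_perf(peak: float, sustained: float, boost_s: float,
             run_s: float) -> float:
    boost = min(boost_s, run_s)
    return (peak * boost + sustained * (run_s - boost)) / run_s

# peak=100, sustained=60, 45 seconds of boost budget
for run_s in (60, 1800):  # 1-minute benchmark vs 30-minute export
    print(f"{run_s:>5}s run: {avg_perf(100, 60, 45, run_s):.0f}% of peak")
# the 60s run averages ~90% of peak; the 30-minute run only ~61%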

I really wish Intel hadn't canned their Y series chips that early though, as those chips could really have become an Apple M series competitor if Intel had kept iterating on them sad.gif

There are also very large, captive markets that require x86, like the government/education/corporate/manufacturing sectors, which often use specialized, in-house software that is not COTS and cannot be ported to ARM easily, even when the source code is available icon_idea.gif

This post has been edited by chocobo7779: May 11 2024, 03:04 PM
chocobo7779
post May 11 2024, 01:03 PM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
QUOTE(awol @ Apr 23 2024, 03:35 PM)
i agree with you. let's see how the SD X Elite fares against intel/amd on laptops, and its performance against the M3 SoC.

then again, it's still limited to laptop/mobile.
power users still depend on high-end desktops.
*
It will be quite interesting to see where Strix Point and Lunar Lake are headed; I'm pretty excited to see what x86 can offer in the next few years hmm.gif
That being said, the X Elite chips are promising, but compatibility, pricing and availability will make or break them icon_idea.gif

This post has been edited by chocobo7779: May 11 2024, 01:07 PM
chocobo7779
post May 11 2024, 01:32 PM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
QUOTE(TristanX @ Apr 23 2024, 07:26 PM)
It's not that easy. We're still limited by process nodes. Why would Intel and AMD push temperatures up to 90-95C depending on cooling? It's physical limitations. Die shrinks are becoming increasingly difficult as well, unless someone comes out with a new material for processors.
*
...more like core design, really icon_idea.gif
Apple cores specifically target high IPC, so they can be clocked lower relative to the x86 incumbents while still achieving high single-threaded performance with excellent efficiency. This is why Apple cores tend to be quite large and wide compared to your garden-variety x86 chips. One of the downsides is that they are often 'inefficient' in terms of SoC/die size and transistor budget, so they are very expensive to make, and this partly explains why Apple charges very high prices for RAM/storage upgrades on their hardware, presumably so they can subsidize the cost of manufacturing those SoCs icon_idea.gif
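
As rough arithmetic (the IPC and clock figures below are invented for illustration, not measured), single-threaded performance is roughly IPC x clock, so a wide core at modest clocks can match a narrow core at extreme clocks:

CODE
# ST performance ~ IPC * frequency (arbitrary units, made-up numbers).
def st_perf(ipc: float, ghz: float) -> float:
    return ipc * ghz

wide   = st_perf(ipc=9.0, ghz=4.4)  # Apple-style: wide core, lower clock
narrow = st_perf(ipc=6.8, ghz=5.8)  # x86-style: narrower core, high clock
print(f"wide: {wide:.1f}, narrow: {narrow:.1f}")
# ~39.6 vs ~39.4 - similar performance, but power scales superlinearly
# with frequency/voltage, so the high-clock core pays far more for it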

Of course x86 cores could be made 'fatter and wider' to increase performance without pushing clock speeds to stratospheric heights, but that's not going to be cheap to make, and note that AMD/Intel need to cater to vastly different markets and use cases, ranging from cheap laptops to multimillion-dollar supercomputers, as opposed to Apple, whose target audience and software ecosystem are much more focused and locked down icon_idea.gif

This is speculation of course, so correct me if I'm wrong icon_idea.gif


QUOTE
Why would Intel and AMD push power usage up to 90-95C depending on cooling?
Simply put, enthusiasts wanted longer performance bars in benchmarks, so there is an all-out performance war with very little regard for efficiency icon_idea.gif

The big problem with modern CPUs right now isn't really the CPU core itself, but rather the memory subsystem: improvements in CPUs are vastly outpacing improvements in memory speed/bandwidth, which leads to memory bottlenecks, and moving data between the CPU and memory isn't exactly power efficient either. This is why you have things like memory on package (as in Apple chips and the upcoming Lunar Lake CPUs) and die stacking like what AMD does with their X3D lineup of CPUs icon_idea.gif
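
The memory-bottleneck argument is basically the roofline model: a workload is memory-bound once its arithmetic intensity falls below the machine's compute/bandwidth ratio. A sketch with illustrative numbers (not any specific CPU's spec sheet):

CODE
# Roofline: attainable throughput = min(peak compute, intensity * bandwidth).
# Peak numbers below are placeholders, not a real CPU's specs.
PEAK_GFLOPS = 1000.0  # hypothetical CPU compute peak
PEAK_GBPS = 80.0      # hypothetical dual-channel DDR5 bandwidth

def attainable(intensity_flop_per_byte: float) -> float:
    return min(PEAK_GFLOPS, intensity_flop_per_byte * PEAK_GBPS)

for intensity in (0.5, 2.0, 12.5, 50.0):
    g = attainable(intensity)
    bound = "memory" if g < PEAK_GFLOPS else "compute"
    print(f"{intensity:>5} FLOP/byte -> {g:>6.0f} GFLOPS ({bound}-bound)")
# low-intensity code never sees the CPU's peak - hence on-package memory
# and stacked cache to raise the bandwidth side of the ratio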

QUOTE
Die shrinks are becoming increasingly difficult as well, unless someone comes out with a new material for processors.

That's why you have things like chiplets, backside power delivery such as PowerVia (which could be revolutionary) and new chip packaging/interconnect methods to keep scaling going even as die shrinks get harder icon_idea.gif

This post has been edited by chocobo7779: May 11 2024, 01:58 PM
chocobo7779
post May 11 2024, 01:47 PM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
QUOTE(babylon52281 @ Apr 24 2024, 10:31 AM)
Arguably Alder Lake is also a major breakthrough, being the 1st consumer big.LITTLE hybrid x86 CPU; the next breakthrough would be Lunar Lake with a proper tiled-design desktop CPU.
*
Lunar Lake is mobile only though sweat.gif

There's also Arrow Lake with PowerVia backside power delivery icon_idea.gif
chocobo7779
post May 11 2024, 01:49 PM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
QUOTE(TristanX @ Apr 23 2024, 11:11 PM)
The last major breakthrough was Sandy Bridge. It's been a while. Today, Intel is still behind when it comes to the hardware. They're still able to keep up, just with a lot more power.

There are candidates for new materials, like 100GHz nanocarbon I think. I think no one has been able to get it stable.

Lots of things to consider too. Now we have a lot of hackers cracking CPUs. Security patches usually nerf the processors.
*
QUOTE
Lots of things to consider too. Now we have a lot of hackers cracking CPUs. Security patches usually nerf the processors.
Usually it's SMT (Hyper-Threading) that causes a lot of those security loopholes; that's why Intel is rumored to be removing it in the Arrow Lake chips and replacing it with 'rentable units' (if the rumors are correct) icon_idea.gif
That being said, it's very unlikely for a regular machine to get hacked via CPU vulnerabilities unless you do a lot of *ahem* stuff, or you fail to do the bare basics of computer security icon_idea.gif
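
On Linux you can check exactly which mitigations your machine is running with, straight from sysfs (this directory is standard on recent kernels):

CODE
# List the kernel's per-vulnerability mitigation status.
from pathlib import Path

VULNS = Path("/sys/devices/system/cpu/vulnerabilities")

for entry in sorted(VULNS.iterdir()):
    # each file holds a one-line status, e.g. "Mitigation: ..." or "Not affected"
    print(f"{entry.name:24} {entry.read_text().strip()}")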

QUOTE
The last major breakthrough was Sandy Bridge. It's been a while. Today, Intel is still behind when it comes to the hardware. They're still able to keep up, just with a lot more power.
Alder Lake would like a word with you though, but yeah, x86 is going to be a lot more exciting in the next few years laugh.gif

This post has been edited by chocobo7779: May 11 2024, 01:50 PM
chocobo7779
post May 11 2024, 01:53 PM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
QUOTE(imbibug @ May 4 2024, 11:29 PM)
Intel's current problems are a lot more serious than the teething problems AMD had with Ryzen. The early Ryzens had memory incompatibility/instability which took some BIOS updates to resolve. The 5000 series had the USB dropout issue. Now Intel probably has more memory incompatibility/instability issues than Ryzen, putting aside the problems with voltage/power.

Someone at Chiphell took the time to organise testing of hundreds of Intel 13th/14th gen CPUs and found very poor stability at auto, out-of-the-box settings. If only 5/10 13th gen and 2/10 14th gen managed to pass, it's a clear sign that the product is complete garbage and not just a few bad apples.
https://wccftech.com/only-5-out-of-10-core-...ability-issues/

Performance takes a hit as expected with Intel's baseline BIOS fix - "This is reported to be up to -30% in multi-threaded applications and up to -15% in games which is quite big".
Hardware Unboxed ran gaming benchmarks with the new BIOS fix and the performance hit was 10%-20%. IIRC 20% was the hit to the 1% lows.
https://www.youtube.com/watch?v=OdF5erDRO-c&t=520s
And then you have to consider Intel's ongoing issues with CPU vulnerabilities. The last Downfall patch supposedly had a big performance hit on older CPUs, up to 39%.
*
You almost forgot the Ryzen 3000 boost clock bug, which AMD was able to fix in the ABBA version of their AGESA icon_idea.gif
lolzcalvin
post May 11 2024, 02:26 PM

shibe in predicament
******
Senior Member
1,586 posts

Joined: Mar 2014
From: 75°26'11.6"S, 136°16'16.0"E


QUOTE(chocobo7779 @ May 11 2024, 12:57 PM)
...but what Apple dominates is performance in the low/mid power range, which matters a lot more in the real world...
*
and that is what apple is doing extremely right; at times their low-power performance rivals AMD/Intel high-end CPUs that use 4-5x more power for the same operation. their M4 has just been released, with MT performance closing in on a 13700K, and ST performance obliterating many, if not all, modern x86 CPUs. a BASE M4 is doing that? at 5x less power? node advantage + SME aside, you cannot dismiss what Apple has been doing, and they're definitely putting more pressure on AMD/Intel. ESPECIALLY INTEL.

the M2/M3 era already saw those chips performing faster than their x86 counterparts in a number of applications such as Adobe apps, DaVinci Resolve and HandBrake. the new M4 era will be another eye opener similar to M2.

with the M4 being released this early, Qualcomm is shitting themselves too. I mentioned I had faith in the X Elite previously, but things do change fast within a month. the X Elite has been due for >1 year now. after their shoddy X Plus reveal a few weeks ago, rough rumors say they're in a very messy situation rn. we'll see how Qualcomm handles this.

QUOTE(chocobo7779 @ May 11 2024, 12:57 PM)
...There are also very large, captive markets that require x86, like the government/education/corporate/manufacturing sectors, which often use specialized, in-house software that is not COTS and cannot be ported to ARM easily, even when the source code is available icon_idea.gif
*
hence why x86 lives on - backwards compatibility catering for relic systems. it's been a long time since the 8086 era.

however, it really isn't x86's fault for the "slowness", simply because it's just an ISA. a good uarch (microarchitecture, or simply CPU design) will yield great results. Apple has greatly improved their uarch to reach higher frequencies, as well as shoving ARM SME into it; even with a small IPC gain it still yields ~25% improvement over the M3.
chocobo7779
post May 11 2024, 03:02 PM

Power is nothing without control
********
All Stars
14,673 posts

Joined: Sep 2010
QUOTE(lolzcalvin @ May 11 2024, 02:26 PM)
and that is what apple is doing extremely right; at times their low-power performance rivals AMD/Intel high-end CPUs that use 4-5x more power for the same operation. their M4 has just been released, with MT performance closing in on a 13700K, and ST performance obliterating many, if not all, modern x86 CPUs. a BASE M4 is doing that? at 5x less power? node advantage + SME aside, you cannot dismiss what Apple has been doing, and they're definitely putting more pressure on AMD/Intel. ESPECIALLY INTEL.

the M2/M3 era already saw those chips performing faster than their x86 counterparts in a number of applications such as Adobe apps, DaVinci Resolve and HandBrake. the new M4 era will be another eye opener similar to M2.

with the M4 being released this early, Qualcomm is shitting themselves too. I mentioned I had faith in the X Elite previously, but things do change fast within a month. the X Elite has been due for >1 year now. after their shoddy X Plus reveal a few weeks ago, rough rumors say they're in a very messy situation rn. we'll see how Qualcomm handles this.
hence why x86 lives on - backwards compatibility catering for relic systems. it's been a long time since the 8086 era.

however, it really isn't x86's fault for the "slowness", simply because it's just an ISA. a good uarch (microarchitecture, or simply CPU design) will yield great results. Apple has greatly improved their uarch to reach higher frequencies, as well as shoving ARM SME into it; even with a small IPC gain it still yields ~25% improvement over the M3.
*
Mind you, the Snapdragon X series is about a year late, and it seems like Qualcomm is sort of sandbagging right now (heck, Qualcomm's original intent in acquiring Nuvia was to compete in servers, not laptops) icon_idea.gif

That being said, I'm not sure how the x86 incumbents can compete with it (perhaps wider core designs and decode/execution units, but those aren't exactly cheap to implement without potentially cannibalizing much more lucrative markets) hmm.gif

But yeah, that's the nice part of having a practically unlimited R&D and transistor budget to play with icon_idea.gif

This post has been edited by chocobo7779: May 11 2024, 03:13 PM
