
Next Gen Consoles: PS3 vs. XBOX 360 vs. Wii, Next Gen speculation discussion

ray_
post Jun 3 2005, 01:26 PM

Getting Started
Group Icon
Elite
169 posts

Joined: Mar 2005
From: Wallowing in my Pool of Ignorance (splat..splat..)
QUOTE(ikanayam @ Jun 3 2005, 01:10 PM)
Nah, it does not have internal RAM like the SPE. Cache would serve a CPU better. The SPE is different because all it is supposed to do is crunch data; it's not made to handle the many things a general-purpose CPU has to deal with. The internal RAM on the SPE sounds to me like a dumbed-down cache from what I've read.
*
Despite popular belief, most processors have a small internal RAM, usually either single-ported (single access) or dual-ported (dual access). This is in addition to the L1 and L2 caches. Those caches were meant to cache slow external SRAM or DRAM and were never designed to cache internal RAM. The logic for this is simple: internal RAM is fast RAM, usually synchronized at core speed. smile.gif

EDIT: There's a high probability that the PPE has an internal RAM. But as always, it's bad to assume and good to find out. biggrin.gif

This post has been edited by ray_: Jun 3 2005, 01:29 PM
ray_
post Jun 3 2005, 01:59 PM

Getting Started
Group Icon
Elite
169 posts

Joined: Mar 2005
From: Wallowing in my Pool of Ignorance (splat..splat..)
QUOTE(ikanayam @ Jun 3 2005, 01:28 PM)
Hm... any examples of processors that use such internal RAM?
*
Just google:
Microprocessor "internal RAM"

The results can be overwhelming. smile.gif

But it must be said that most off-the-shelf processors do not have internal RAM and rely heavily on cache. However, the processors that power PCs/Macs represent just a small percentage of the total processors used in the world; most of the rest sit in embedded systems.

Top of the list are some of the architectures that use internal RAM:
1) OMAP
2) MCore
3) 68k
4) variants of PowerPCs

This post has been edited by ray_: Jun 3 2005, 02:00 PM
ray_
post Jun 3 2005, 02:35 PM

Getting Started
Group Icon
Elite
169 posts

Joined: Mar 2005
From: Wallowing in my Pool of Ignorance (splat..splat..)
A surprising find on embedded.com:

EET: Will game-title development also be a challenge for such a high-performance processor?

Kutaragi: It should be the reverse. I want it to be the reverse. The greater the limitations of the hardware, the more labor is needed for software development. With earlier hardware, direct access [from a game program to machine-language-level] hardware or some tricky programming was required to pull off the full performance.

The PS2, for example, made full use of the semiconductor technology available at the time, but only a software effort on the part of the game developers enabled high performance titles like Gran Turismo [a realistic car racing game that has sold more than 43 million copies worldwide]. The more closely they accessed the hardware, the higher the performance they achieved.

Cell is not like that. Application programs can no longer directly access the hardware; instead they will have to be written in high-level, object-oriented language. That was done for security reasons: If processors of high performance and wide bandwidth like the Cell were linked together without sufficient security, a worldwide system crash could occur with one attack.

The big feature of the processor is that multiple operating systems run on it. From the beginning, I wanted multiple operating systems to run on the processor simultaneously.

The Cell processor has a kernel called Level 0 at the bottom. This level is not disclosed and is kept secure. Level 1 handles operations close to the kernel, such as scheduling, the real-time kernel and device drivers. Level 2, which we call the guest OS layer, is for general-purpose operating systems such as Linux and PC OSes and operating systems for the Playstation. All operating systems and applications run on Level 2 or higher. Programmers can concentrate on their targeted area of concern without worrying about other operating systems.


Read the rest here:
http://www.embedded.com/showArticle.jhtml?...cleID=163702001
ray_
post Jun 3 2005, 03:29 PM

Getting Started
Group Icon
Elite
169 posts

Joined: Mar 2005
From: Wallowing in my Pool of Ignorance (splat..splat..)
Bait works. smile.gif

These are 32-bit processors. Here, the terms SoC and processor are used interchangeably.

QUOTE(silkworm @ Jun 3 2005, 02:46 PM)
Uses ARM7/9/11 cores. ARMs are traditionally cacheless.
*
Hence my point.

QUOTE(silkworm @ Jun 3 2005, 02:46 PM)
Again, the RAM is not internal to the actual processing core itself. The 68HCxx are 8-bit (`05, `08) or 16-bit (`11, `12) parts.
*
Last I checked, the 68k is a 32-bit microcontroller.

QUOTE(silkworm @ Jun 3 2005, 02:46 PM)
There are several pretty obvious differences between these chips and the PPE that's being discussed. Most of the time these ICs are self-sufficient. Program code is stored on, and run directly from ROM/EEPROM. Unless they are performing exceptionally data hungry tasks, the internal SRAM is all the memory they will ever use. In such cases, cache is largely unnecessary.
*
I don't think cell-phones or PDAs run from just ROM/EEPROM or internal RAM. A modern RTOS necessitates some form of external RAM.

Now you could argue that the PPE processor core is different from a microcontroller in an embedded system. But you must realize that early iterations of the computer were in fact embedded systems. The two are intricately linked, as demonstrated by the use of internal RAM on the SPE.

QUOTE(silkworm @ Jun 3 2005, 02:46 PM)
Rather than to leave it hanging just like that, a conclusion should be in order:
The PPE is a processor core. Microcontrollers like the examples given are built around cores.
*
EDIT: I see where you're going with this one. You're saying that including internal RAM in the processing core is different from including internal RAM in the processor (SoC). Strictly speaking, you're right, and these aren't the best examples. In fact, I was assuming that the PPE is an SoC.

Now, I could have covered my hiney by saying that I was giving examples of "processors with internal RAM" and never mentioned processing cores. But that would make me look bad. So I won't. (* now where did I read that before)



This post has been edited by ray_: Jun 3 2005, 04:28 PM
ray_
post Jun 3 2005, 05:27 PM

Getting Started
Group Icon
Elite
169 posts

Joined: Mar 2005
From: Wallowing in my Pool of Ignorance (splat..splat..)
Awesome rebuttal. But I have to explain one point and disagree with another. smile.gif

QUOTE(silkworm @ Jun 3 2005, 04:39 PM)
The last of the 68K series was the 68332, after which Motorola/Freescale evolved the architecture into the ColdFire series. You just used "68HC". The 8 and 16 bit freescale MCU families are commonly called by "HCxx", eg, HC05, HC12, etc.
*
Yes, I've edited it for correctness.

QUOTE(silkworm @ Jun 3 2005, 04:39 PM)
A RTOS only needs as much RAM as necessary for a process table and storing context information (Instruction pointer, stack pointer, flags, GP Registers) for each process. Applications code can still be loaded direct from non-volatile storage like Flash ROM. Application heap space may be in external RAM. I admit we are seeing more lower end phones being able to run user installed programs, namely java games, but for the bare essential workings of a phone, larger RAM is unnecessary.
*
Even with everything statically bound, modern feature-packed gadgets (which qualify as embedded systems) would still require some sort of external RAM. You do not need a cell-phone that runs Java games to need external RAM; the phone's core functionality alone is enough to require one.

In a typical cell-phone, you would have an RTOS or some form of scheduler, a disproportionate number of interrupt sources with a corresponding number of service routines, code to handle signalling, code to handle ergonomics, low-level hardware drivers, numerous levels of abstraction interfaces (POSIX... etc), DSP algorithms, and a huge amount of temporary storage required for real-time data. These translate to a large amount of volatile memory real estate.

I think you would find most of this paraphernalia running on a 32-bit core and requiring external RAM. Now that's a lot.
ray_
post Jun 3 2005, 06:59 PM

Getting Started
Group Icon
Elite
169 posts

Joined: Mar 2005
From: Wallowing in my Pool of Ignorance (splat..splat..)
Here's the bomb.

Consoles are traditionally embedded systems. With the introduction of the PS3 and the Cell technology, Sony wants them to be called "computers".

Now there are numerous definitions of an embedded system. Just googling it returns this:
http://www.google.co.uk/search?hl=en&lr=&c...Embedded+system

Some of the descriptions are ridiculous ("four-digit dates or leap days??"). But some do represent the concept of an embedded system and even fit the bill for our next generation consoles.

But to me, I've always identified a system as an embedded system if there's a need for a cross-compiler to build an application that runs on that system (tool association??). It's a weird definition, but I'll stick to it. A computer lets you write and run applications locally in its native language, but an embedded system requires a cross-compiler to translate them into target-readable instructions, which are then transferred and loaded into the target's memory space using specialized tools.

With that said, consoles would qualify as embedded systems. Unless, of course, Sony lets its tool chain run natively on the PS3.

The line has blurred so much that the two are almost indistinguishable. Who knows, H@H@ might need to move "Console Couch" back to "Computer" and back again several times. Now, would you call these next generation consoles computers or embedded systems?

This post has been edited by ray_: Jun 3 2005, 07:01 PM
ray_
post Jun 6 2005, 12:05 AM

Getting Started
Group Icon
Elite
169 posts

Joined: Mar 2005
From: Wallowing in my Pool of Ignorance (splat..splat..)
QUOTE(silkworm @ Jun 4 2005, 12:17 AM)
Quite a logical definition, but perhaps a bit narrow. For physically small or resource limited systems, this rings true. However, once certain criteria of performance (CPU/RAM) and user interface are met, an embedded system is entirely capable of self-hosting its development environment. For instance, one needs a decent method of character input, a screen that displays a reasonable amount of text, and storage for the source files and compilation tools.  My example of the "embedded PC" above could be one such system.
*
The more I think of the term "embedded PC", the more it sounds like an oxymoron. biggrin.gif Production lines do have systems doing specialized functions. But those probably have some specialized hardware and real-time constraints, and thus would be more suitably associated with embedded systems than with PCs. They could probably be reconfigured to run spreadsheets, but they wouldn't do it as well.

Having written my version of what constitutes an embedded system, I was reminded of the Agilent logic analyzer that sat at my cube not so long ago. It's an embedded system no doubt, but it also has a Linux kernel and thus could build and run applications on board.

I guess there isn't one all-encompassing definition for an embedded system. But the consensus seems to be that the next gen consoles will still be firmly rooted as embedded systems, until they start sprouting spreadsheets and word processors (i.e. become capable of general purpose computing).

And no H@H@, don't move Adidas shoes to computer. smile.gif
ray_
post Jun 13 2005, 11:53 PM

Getting Started
Group Icon
Elite
169 posts

Joined: Mar 2005
From: Wallowing in my Pool of Ignorance (splat..splat..)
QUOTE(Matrix @ Jun 13 2005, 09:36 AM)
Oooi... why no debate here the last few days?? Must be running out of rumours and news and steam liao...

btw, to all those techie gurus, i just wanna ask this:

Why do you think the PS3 CELL has 1 "redundant SPE" and runs 25 GFlops (theoretically) less? IMO, redundant means FAULTY. SONY must be having a headache trying to get enough good yield with all 8 SPEs working, and is thus probably getting a high % of literally "half-baked" silicon with one SPE still "uncooked" (or toasted... whichever u prefer... hee-hee).

I remember previously that SONY mentioned the CELL would not be ready in time for the PS3, but obviously they squeezed the chip development schedule ahead to make the CELL ready for the PS3, and in turn had to sacrifice some of the time needed to further fine-tune and improve the design, and also the time needed for the manufacturing process to mature to a high % of good yield.

So their best bet would be going for 7 SPEs, which in their opinion is achievable without throwing away too many CELL chips (those with < 7 SPEs working).

Maybe a year or two later, they'll have a good enough manufacturing process to get enough good yield for 8 working SPEs; then we'll have an 8-SPE PS3?

As for the XBOX360, I think ATI's unified shader is very interesting and certainly sounds a lot more flexible than Nvidia's "Dual SLI GT6xxxx on a chip". But there's a rumour that the shaders can only do EITHER pixel OR vertex shading PER CYCLE, meaning it's not as flexible as it seems. Anyone have any more info about this?

Okay... finally the PS4!! Okay, I'm a bit far ahead, but if the CELL proves successful, in another 6 years' time the manufacturing process will be good, the design will have improved, and they can make lots of CELL chips at rock-bottom prices. And since the CELL is all multi-CPU ready, the # of SPEs per CELL CPU can also increase by then. By just packing in a couple of cheap CELL chips, they'll make the next PS a real powerhouse at a cheap price... which I think is when their CELL hardware investment really pays off. (But of course, the PS3 would have guaranteed them some ROI in terms of software royalties, and other potential electronic products (TOSHIBA washing machine!!! Oh yeah, the next TOSHIBA washing machine will have a GIGABIT network port and be linkable to your PS3!! Maybe you can control the swirling motion of the water from your PS3 BATARANG!! J/K) might give some cash back also)... after all, according to IBM/SONY/Toshiba, the CELL was designed to cater for the next 10 years or so.

Still, currently, the CELL as it is sounds good (the multi SPE), but at the same time it's an IN-ORDER CPU (like the XBOX360 PPC CPU), which is much less complicated than the Intel/AMD CPUs for PCs, which are OUT-OF-ORDER CPUs. I think an OOO CPU is more powerful, as it can predict instructions and act accordingly, while an IO CPU is simplistic and just acts on whatever it's fed.

Of course the benefit of the IO CPU is reduced size, meaning less HEAT and a faster frequency... so you can't exactly compare a PPC or Intel chip running at 3.2GHz to a 3.2GHz CELL.

Thus, in order to fully utilize the CELL CPU, the PS3 architecture has to have huge bandwidth and streamline the instructions to the SPEs via the PowerPC core (what do they call that? PPE?). I've no idea how they intend to overcome the shortcomings of in-order execution, but most likely, like its predecessors the EE and GS chips, by the force of raw brute bandwidth, which hopefully is enough to feed it optimally. (The CELL is sorta an evolution and further improvement of the EE/GS combo, methinks.)

Finally, a stab at the SPECS. All that ballyhoo about 2 TERAFLOPS... but no mention of whether it is single precision or double precision, and no mention of how the benchmark was done. Same for MS. Basically, both are crappy marketing schemes.

I believe the CELL will be adequately powerful, but I don't buy it to be THAT powerful... just like the EE/GS back in its time. But I think the real payoff of the CELL is not now but in the next 5 years, when the manufacturing process has improved, the design has been further fine-tuned, and multiple cheap CELLs are easily available and linkable on a single motherboard to power the washing machine, TV, fridge, steam iron and what else... PS4.. smile.gif

Okay guys... shoot away. smile.gif
*
Nice post. thumbup.gif

Well... this is more geared towards the solid-state junkies, so perhaps ikanayam or Silkworm could comment (especially the part on IO and OOO). I'll pass on all but a couple of these. smile.gif

Just a note on redundancy: it's there to provide fail-safe operation, so it's not really a fault by itself. Could you direct us to the source of that information? It'll be interesting to know that Sony is packing in 9 SPEs to provide improved yield, something unheard of (by me at least) so far.

The point about the SPE "streamlining" instructions from the PPE is not exactly correct. As we've said before, prior to executing any task on the SPE, all data and instructions have to be copied into the LS. This is done using the SPE's dedicated DMA, via memory spaces mapped by the TLB/SLB of the PPE MMU. So there would be no data or instruction fetch from external memory; instead, instructions and data would be fetched from the internal memory (i.e. the Local Storage) and executed in a timely manner, given the proximity and speed of the SRAM. I do not, however, know whether the internal bus would introduce any latency. If it does, I'm sure we'd be looking at an insignificantly small figure.
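
For the solid-state junkies, here is roughly what that pattern looks like from the SPE side. A minimal sketch only, written in the style of the SPU intrinsics IBM later published in its Cell SDK (spu_mfcio.h); the buffer name, chunk size and the use of argp as the source address are my own illustrative assumptions, not anything from Sony or IBM:

CODE
/* Sketch: pull a block from the effective-address space into Local
 * Store via the SPE's own DMA, wait for completion, then compute.
 * Intrinsics follow IBM's <spu_mfcio.h>; names and sizes are
 * illustrative assumptions. */
#include <spu_mfcio.h>

#define CHUNK 16384                     /* bytes per DMA transfer */
static volatile char ls_buf[CHUNK] __attribute__((aligned(128)));

static void process(volatile char *p, unsigned int n)
{
    unsigned int i;
    for (i = 0; i < n; i++)             /* stand-in for real work */
        p[i] ^= 0x5a;
}

int main(unsigned long long speid, unsigned long long argp,
         unsigned long long envp)
{
    unsigned int tag = 0;               /* DMA tag group 0 */

    /* argp carries the effective address of the source data, mapped
     * for us on the PPE side through the MMU. */
    mfc_get(ls_buf, argp, CHUNK, tag, 0, 0);   /* start EA -> LS copy */

    mfc_write_tag_mask(1 << tag);       /* wait only on our tag group */
    mfc_read_tag_status_all();          /* block until the DMA completes */

    process(ls_buf, CHUNK);             /* data now lives in the fast LS */
    return 0;
}

The point to note is that the SPU itself issues the mfc_get and never fetches from external memory directly; once the tag status comes back, everything it touches is LS-resident.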
ray_
post Jun 16 2005, 01:17 PM

Getting Started
Group Icon
Elite
169 posts

Joined: Mar 2005
From: Wallowing in my Pool of Ignorance (splat..splat..)
QUOTE(Matrix @ Jun 15 2005, 09:08 AM)
Thanks to Silkworm and Ray for some explanation... I don't pretend to understand everything Silkworm put up there... maybe 30% smile.gif, but good enough for me.

So that means the PPC and CELL have "branch prediction", which achieves similar results to OOO execution, right?
9 SPEs? No, I was saying 7 SPEs. It is in the official PS3 SPECS all over the web. It's mentioned as "1 SPE reserved for redundancy" or something like that.

Added:

Here's a nice link for reading up on the CELL... haven't finished reading it myself... hurts my head... LOL.

http://www-306.ibm.com/chips/techlib/techl...icle-021405.pdf
*
I think if what you've said in regard to redundancy is true, I would presume that the redundant SPE is there to be swapped in when all 7 SPEs are swamped, to maintain throughput. Or a non-maskable error, such as an alignment fault, causes one of the SPEs to be decommissioned until it's reset.

I would anticipate a new programming paradigm for Cell. Well, at least for the developers of the Cell's RTOS. God forbid that the memory management of the SPE should ever be passed into the hands of the application developer.

Actually, I would anticipate a few scenarios that could be implemented in the memory management portion of the SPE's RTOS (a toy sketch of scenario 1 follows the list):
1) Each SPE task's footprint would be made small enough to fit into the 256K LS. Tasks would be queued in a shared virtual task pipe and fed to the SPEs on either a first-come-first-served, round-robin or priority-scheduling basis. At any rate, there would be frequent memory swaps accommodated by the PPE MMU.
2) Each SPE would be assigned its own dedicated virtual task pipe and fed on either a first-come-first-served, round-robin or priority-scheduling basis. Memory swaps are accommodated by the PPE MMU.
3) There aren't enough tasks to load all 7 SPEs, and the redundant SPE(s) could be programmed to power the PS3 grill as required. (*yum...)
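
To make scenario 1) concrete, here is a toy, single-threaded model of a shared virtual task pipe drained first-come-first-served, with the SPE index cycling round-robin. All names are invented and the real DMA/MMU machinery is deliberately left out; this only illustrates the queueing idea:

CODE
#include <stdio.h>

#define NUM_SPES  7
#define NUM_TASKS 20

/* A task small enough to "fit in the 256K LS" -- here just an id. */
typedef struct { int id; } task_t;

/* The shared virtual task pipe: a simple FIFO. */
static task_t pipe_q[NUM_TASKS];
static int head, tail;

static void push(task_t t) { pipe_q[tail++] = t; }

static int pop(task_t *t)
{
    if (head == tail)
        return 0;                       /* pipe empty */
    *t = pipe_q[head++];
    return 1;
}

int main(void)
{
    task_t t;
    int spe = 0, i;

    for (i = 0; i < NUM_TASKS; i++) {
        t.id = i;
        push(t);                        /* queue work in arrival order */
    }

    /* First-come-first-served dispatch; the SPE index simply cycles
     * to model each SPE taking the next queued task. */
    while (pop(&t)) {
        printf("SPE %d <- task %d\n", spe, t.id);
        spe = (spe + 1) % NUM_SPES;
    }
    return 0;
}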
ray_
post Jun 16 2005, 03:19 PM

Getting Started
Group Icon
Elite
169 posts

Joined: Mar 2005
From: Wallowing in my Pool of Ignorance (splat..splat..)
QUOTE(silkworm @ Jun 16 2005, 02:14 PM)
We can have a peek at the Linux "model" of programming the SPEs in these kernel patches released a couple of months ago. SPEs exist as "files" in the filesystem and that's how program code and data are transferred onto it. The SPEs are idle until a DMA "kick" command is sent to it, after which it looks like it's controlled by the standard POSIX threading API. At least, that's what I've managed to glean from the raw patch data. I'd need to actually patch the full kernel source to see the context of some parts, like the interrupt controller and the memory management unit.

One might also gain insight on the programming model of Cell from the archived presentation/webcast linked from Power.org, from the Barcelona Power conference held last week. I haven't had a chance to view it yet and the webcast isn't downloadable for offline viewing. sad.gif
*
Where do you get all these links anyway? (* shake fist)

Looks like the Linux folks are implementing method 2). But IMHO, 1) would spread the processing load more evenly across all SPEs.

"We will need our own address space operations as soon as we allow the SPU context to be scheduled away from the physical SPU into page cache."

The PPE MMU will be segmenting memory into pages, so one could swap a code or data page into the LS as one deems necessary. Page granularity in the context of the PPE MMU should be adjustable. Given the small size of the LS, it would be more useful to set a larger PPE MMU page size so a task can be loaded in its entirety. But having a smaller page size would give the programmer the flexibility to apply the MMU's data protection mechanisms at a smaller scale.
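
The page-granularity trade-off isn't Cell-specific, by the way. As an analogy only (a conventional POSIX system, not the PPE MMU), protection attributes can be applied no finer than one page, so the page size directly sets the smallest protectable unit:

CODE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long psz = sysconf(_SC_PAGESIZE);   /* MMU page size, e.g. 4096 */

    /* Grab two pages; protection can only be changed per page, so a
     * page is the smallest "protection granule" the MMU gives us. */
    char *buf = mmap(NULL, 2 * psz, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(buf, "writable page");

    /* Make only the second page read-only: with a larger page size,
     * this same call would lock down a larger slab of the buffer. */
    if (mprotect(buf + psz, psz, PROT_READ) != 0) {
        perror("mprotect");
        return 1;
    }

    printf("page size %ld: first page writable, second read-only\n", psz);
    munmap(buf, 2 * psz);
    return 0;
}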

It would be interesting to see how Sony plans to implement its Level 1 kernel.

EDIT: It would be nice if this segmented swap memory had some scheduling capability built in (hence a programming paradigm), where, in the case of Linux, scheduling information could be embedded into the memory file section headers. A dispatcher could then be implemented to arbitrate between tasks.
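
Continuing the speculation, that dispatcher might look something like this. The section-header layout and field names below are pure invention for illustration; they correspond to no real ELF or spufs structure:

CODE
#include <stddef.h>
#include <stdio.h>

/* Invented layout: scheduling hints embedded in each loadable
 * section's header, as speculated above. */
struct spe_section_hdr {
    unsigned int load_addr;             /* target LS offset */
    unsigned int size;                  /* bytes to DMA in */
    unsigned int priority;              /* hint: higher runs first */
};

/* Dispatcher: arbitrate by picking the highest-priority section. */
static int pick_next(const struct spe_section_hdr *tbl, size_t n)
{
    size_t i;
    int best = -1;

    for (i = 0; i < n; i++)
        if (best < 0 || tbl[i].priority > tbl[(size_t)best].priority)
            best = (int)i;
    return best;
}

int main(void)
{
    struct spe_section_hdr tasks[] = {
        { 0x0000, 0x4000, 2 },          /* physics kernel */
        { 0x4000, 0x2000, 5 },          /* audio mix, urgent */
        { 0x6000, 0x1000, 1 },          /* background decompress */
    };

    printf("dispatch section %d first\n",
           pick_next(tasks, sizeof tasks / sizeof tasks[0]));
    return 0;
}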

This post has been edited by ray_: Jun 16 2005, 03:29 PM
ray_
post Jun 16 2005, 04:21 PM

Getting Started
Group Icon
Elite
169 posts

Joined: Mar 2005
From: Wallowing in my Pool of Ignorance (splat..splat..)
From fa.linux.kernel:
"spu_run suspends the current thread from the host CPU and transfers the flow of execution to the SPU."

Wait, I thought the operation of the PPE was independent of the SPE. Why is there a need to stall the PPE to service the SPE? They should both run independently.

This post has been edited by ray_: Jun 16 2005, 04:25 PM
ray_
post Jun 16 2005, 05:17 PM

Getting Started
Group Icon
Elite
169 posts

Joined: Mar 2005
From: Wallowing in my Pool of Ignorance (splat..splat..)
QUOTE(silkworm @ Jun 16 2005, 04:39 PM)
Take it easy, the host thread is stalled, not the entire PPE. The SPU/SPE is going to be encapsulated into a Pthread, as you will see in the "bpathread.c" module.
*
Ah....I remember now. The kernel scheduling is centralized on the PPE because the SPE does not have a resident RTOS.

If the SPE were to have a resident RTOS, things would be very different.
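
Putting silkworm's description into a sketch: each SPE context gets wrapped in an ordinary Pthread on the PPE, and only that host thread blocks while the SPU executes. The spu_run_ctx() below is an invented stand-in for the spufs run interface in the patches, stubbed out so the sketch compiles; it is not a real library call:

CODE
#include <pthread.h>
#include <stdio.h>

/* Invented stand-in for the spufs run call; a real implementation
 * would transfer the flow of execution to the SPU and block here. */
static int spu_run_ctx(int ctx_fd)
{
    (void)ctx_fd;
    return 0;
}

static void *spe_thread(void *arg)
{
    int fd = *(int *)arg;

    /* This pthread now "is" the SPU: it stalls inside the run call,
     * while every other PPE thread keeps executing. */
    int status = spu_run_ctx(fd);
    printf("SPU context %d exited with status %d\n", fd, status);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    int ctx_fd = 3;                     /* placeholder context handle */

    pthread_create(&tid, NULL, spe_thread, &ctx_fd);
    /* ... the rest of the PPE program continues independently ... */
    pthread_join(tid, NULL);
    return 0;
}

Which matches the observation above: nothing else on the PPE stalls, only the host thread that volunteered to babysit that context.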




ray_
post Jul 6 2005, 03:18 PM

Getting Started
Group Icon
Elite
169 posts

Joined: Mar 2005
From: Wallowing in my Pool of Ignorance (splat..splat..)
H@H@, thanks for the l33t upgrade.

Now I can pwn as I please. Yay.
ray_
post Aug 12 2005, 11:44 AM

Getting Started
Group Icon
Elite
169 posts

Joined: Mar 2005
From: Wallowing in my Pool of Ignorance (splat..splat..)
QUOTE(silkworm @ Aug 9 2005, 08:37 AM)
The PPE got a bump up in size in the "DD2" revision of the Cell, but there hasn't been any concrete explanation as to what was changed, just more speculation. Anyway, more goodies for the technically minded: papers at IBM research

The TRE demo paper is a good one. smile.gif
*
"Each SPE is responsible for four regions of the screen and the vertical cuts are processed in a round robin fashion, one vertical cut per region, and left to right within a
each region, so no synchronization is need on the output as no two SPEs will ever attempt to modify the same locations in the accumulation buffer even with two vertical cuts in flight (double buffering) per SPE."

I wonder if the vertical cut processing on the SPE being done in a round-robin fashion is due to a limitation of the DMA. Since the SPE basically screams parallelism, the bottleneck would be the result of DMA being prioritized so that each SPE has equal rights to it, thus resulting in a round-robin stalemate.

Parallelism lost in translation. You'd make it big if you could patent something that makes DMA bus arbitration and memory coherency a thing of the past.
ray_
post Aug 13 2005, 01:59 AM

Getting Started
Group Icon
Elite
169 posts

Joined: Mar 2005
From: Wallowing in my Pool of Ignorance (splat..splat..)
QUOTE(silkworm @ Aug 12 2005, 10:20 PM)
I think you might have gone off-track somewhere. The round-robin scheduling is applied to the four regions being processed within each SPE, not SPE-to-SPE. There is a significant delay in DMA fetches, but that is being hidden by the double-buffering of input and output per SPE. As they explain further down, at any one time an SPE is calculating the ray intersection and downloading data for the next cut into LS. I believe they are doing it in this way to leverage the fact that the SPE is capable of issuing a SIMD arithmetic op and a load/store/DMA channel op per clock.
*
The sentence quoted above mentions that round robin is applied to the vertical cut processing. I'm assuming each region has several vertical cuts (I'm not a graphics expert here, so correct me if I'm wrong) and that each vertical cut process accesses a memory index containing the mathematical description, where the index is auto-incremented by the DMA controller. I'm also assuming that "one vertical cut per region" relates to the round-robin fashion in which the vertical cuts are processed: left to right, one region at a time and one vertical cut at a time.

If this is correct, the four regions you've mentioned relate more to DMA frames (or blocks), with their parameters (base address, length and data size) initialized through the command parser. It makes sense, since the SMFs would need to arbitrate with one another for bus access, hence the use of the round-robin method.
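
For completeness, the "calculate one cut while downloading the next" behaviour silkworm describes is the classic double-buffering pattern. A generic sketch, again using intrinsics in the style of IBM's spu_mfcio.h; this is not the TRE demo's actual code, and the cut size and function names are assumptions:

CODE
#include <spu_mfcio.h>

#define CUT_BYTES 4096                  /* assumed size of one cut */
static volatile char buf[2][CUT_BYTES] __attribute__((aligned(128)));

static void compute(volatile char *cut)
{
    (void)cut;                          /* ray-intersection stand-in */
}

void process_cuts(unsigned long long ea, int ncuts)
{
    unsigned int cur = 0;
    int i;

    mfc_get(buf[cur], ea, CUT_BYTES, cur, 0, 0);  /* prefetch first cut */

    for (i = 0; i < ncuts; i++) {
        unsigned int nxt = cur ^ 1;

        if (i + 1 < ncuts)              /* start fetching the next cut */
            mfc_get(buf[nxt],
                    ea + (unsigned long long)(i + 1) * CUT_BYTES,
                    CUT_BYTES, nxt, 0, 0);

        mfc_write_tag_mask(1 << cur);   /* wait for the current cut... */
        mfc_read_tag_status_all();
        compute(buf[cur]);              /* ...and crunch it while the
                                         * next DMA is still in flight */
        cur = nxt;
    }
}

int main(unsigned long long speid, unsigned long long argp,
         unsigned long long envp)
{
    process_cuts(argp, 16);             /* argp = EA of the cut array */
    return 0;
}

The win silkworm points out falls out naturally here: the SIMD work in compute() overlaps the in-flight mfc_get, so the DMA latency is hidden rather than eliminated.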

This post has been edited by ray_: Aug 13 2005, 02:21 AM
ray_
post Sep 30 2005, 05:52 PM

Getting Started
Group Icon
Elite
169 posts

Joined: Mar 2005
From: Wallowing in my Pool of Ignorance (splat..splat..)
More documents on the Cell Broadband Engine, available to all.

Cell Broadband Engine

Enjoy....

Also: there shouldn't be any more speculation on whether the Revolution's processing power is superior to that of the 360 or PS3. Iwata has conceded that the Revolution is "less powerful". Read the source here. Nifty controller for the pwn.

This post has been edited by ray_: Sep 30 2005, 05:59 PM
ray_
post May 10 2006, 05:55 PM

Getting Started
Group Icon
Elite
169 posts

Joined: Mar 2005
From: Wallowing in my Pool of Ignorance (splat..splat..)
I tend to pronounce Wii as "Why" and VIIV as "Vith". The most preposterous marketing lingo IMO is NGSCB, MS's now-deceased domain separator. It's presumably pronounceable.

Then there's RAZR and SLVR. Whatever happened to generic product names like Phonemaster 8000 or Super Playmaster 900?

Wii = Super Duper Playmaster 1000 FX2i Turbo

Ah... the good old days.

This post has been edited by ray_: May 10 2006, 05:56 PM
ray_
post May 14 2006, 04:48 PM

Getting Started
Group Icon
Elite
169 posts

Joined: Mar 2005
From: Wallowing in my Pool of Ignorance (splat..splat..)
You know?

Does it all matter? The reality in Malaysia is that whichever console gets modded first will take off in a disproportionate manner.

Until, of course, the console trend reverses and it makes more market sense to crack and clone Nintendo console games once more. Then the impetus to get the Wii would really take off here.

Sad but true.

Just for the record, I do not condone piracy.

This post has been edited by ray_: May 14 2006, 04:50 PM
ray_
post Jun 7 2006, 03:48 PM

Getting Started
Group Icon
Elite
169 posts

Joined: Mar 2005
From: Wallowing in my Pool of Ignorance (splat..splat..)
Embedded.com published a series of Cell-related write-ups:

http://www.embedded.com/showArticle.jhtml?...cleID=188103194
http://www.embedded.com/showArticle.jhtml?...cleID=188101999


ray_
post Jun 8 2006, 03:15 PM

Getting Started
Group Icon
Elite
169 posts

Joined: Mar 2005
From: Wallowing in my Pool of Ignorance (splat..splat..)
Let us not forget that the PS3 does compute and that Cell is a pretty good-looking beast. So, technically speaking, the PS3 is a "computer".

But what the PS3 is not, although Sony execs disagree, is a credible PC contender. Point 1 being that Linux has never truly taken over as a Windows replacement. Also, the PS3 looks too flimsy to be an office desktop replacement and is certainly not mobile enough to be carried around.

I do, however, think that the PS3 would make a nice entry-level Linux server and a Cell Linux development platform. But it has to contend with millions of obsolete PCs running free, open-source Linux.

If history has taught us anything, it is that developers dislike change and love compatibility. Itanium didn't take off because people were comfortable with the original Intel architecture and wanted their legacy code to just work. AMD64 filled that need in a huge way, prompting Intel to rethink its strategy and bring forward EM64T and the compatible Xeon platform. Sony should learn a few things from Intel's blunder.

Code shops are already busy porting PPC code to support the Mac's Intel platform natively; I sincerely doubt they would welcome Sony's dubious foray into general-purpose computing.

This post has been edited by ray_: Jun 8 2006, 03:17 PM
