What do I need to know?
RAM
Current graphics cards have 32MB to 256MB of RAM on them, either SDR (single data rate) or DDR (double data rate). Any serious gamer these days should be able to afford a 64MB card, if not one of the awesome 128MB cards. DDR RAM moves twice as much data per clock cycle as SDR or 'normal' RAM, meaning that 100MHz DDR RAM is just as fast as 200MHz SDR RAM (ignoring latencies and such). This has taken graphics cards to new heights, because any increase in RAM clock speed yields double the bandwidth gain on DDR! Fast-paced 3D games (as if there's any other kind of game) show the biggest performance gain with DDR RAM because of the speed required to render all those frames of animation. I suggest a DDR-based graphics card: game performance is much better and the price isn't much higher than SDR.
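To make that arithmetic concrete, here's a minimal Python sketch (my own illustration, not a vendor formula) of the effective transfer rate:

```python
# DDR transfers data on both clock edges, so it moves twice as much
# data per cycle as SDR at the same clock speed.
def effective_rate_mhz(clock_mhz: float, ddr: bool) -> float:
    """Effective transfers per second, expressed in MHz."""
    return clock_mhz * (2 if ddr else 1)

# 100MHz DDR matches 200MHz SDR (ignoring latencies and such):
assert effective_rate_mhz(100, ddr=True) == effective_rate_mhz(200, ddr=False)
```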
GPU
The 'graphics processing unit', a term NVidia coined by analogy with the CPU (the main computer processor), is a good name for the number-crunching chips on current graphics cards. Past graphics cards used a basic collection of transistors for hardware acceleration of 2D and 3D images, but current graphics cards use full-blown processors that are about as powerful as a Pentium 4!
ATI
Radeon 7000 Series
256-bit GPU, 0.18 or 0.15 micron process, 30 million transistors, 128-bit or 64-bit memory bus width, hardware DVD decoding (with ATI's special DVD player), 350MHz RAMDAC, and 3D imaging capabilities up to 2048 x 1536.
Radeon 7000 - 0.18 micron process
183MHz GPU core speed, 32MB or 64MB of 183MHz RAM (128-bit SDR or 64-bit DDR)
183 Megapixels per second, 550 Megatexels per second
14 million triangles per second
2.9 GB per second memory bandwidth
One pipeline with 3 textures per pipeline
- Recommendation: This is clearly the lowest-end graphics card that ATI offers and it shows! With less power than the feeble GeForce2 MX 200, this card is only for those who want better graphics performance than onboard video for their older games.
Radeon 7200 - 0.18 micron process
166MHz GPU core speed, 32MB or 64MB of 166MHz RAM (128-bit SDR)
333 Megapixels per second, 1 Gigatexel per second
25 million triangles per second
2.7 GB per second memory bandwidth
Two pipelines with 3 textures per pipeline
- Recommendation: Slower than the 7000 in pure speed, but having the full two pipelines sure does boost performance! And that performance isn't bad, just not up to snuff for the latest games.
Radeon 7500 (RV200) - All-in-Wonder, 0.15 micron process
260MHz GPU core speed, 64MB of 180MHz DDR RAM (360MHz effectively)
520 Megapixels per second, 1.56 Gigatexels per second
40 million triangles per second
5.8 GB per second memory bandwidth
Two pipelines with 3 textures per pipeline
Dual 400MHz RAMDAC
S-Video, Analog/Component video ports, in and out
- Recommendation: No Firewire ports like the Radeon 8500DV A-i-W, but analog ports are sufficient at such a low price, and both cards share two of the fastest RAMDACs I've seen on a consumer-level graphics card, so keeping an MPEG-2 video file at full frame rate should be no problem. Memory bandwidth suffers yet again, but compared to the 7000 and 7200 this All-in-Wonder has what it takes to play games quite respectably! This AIW is suitable for users who want to do some video editing.
Radeon 7500 (RV200) - 0.15 micron process
290MHz GPU core speed, 64MB of 230MHz DDR RAM (460MHz effectively)
580 Megapixels per second, 1.74 Gigatexels per second
45 million triangles per second
7.4 GB per second memory bandwidth
Two pipelines with 3 textures per pipeline
- Recommendation: This card's performance is comparable to a GeForce2, but I think the GeForce2 still wins in game performance.
Radeon 8500 Series (R200):
256-bit GPU, 0.15 micron process, 60 million transistors, 128-bit memory bus width, hardware DVD decoding (with ATI's special DVD player), 350MHz RAMDAC, 3D imaging capabilities up to 2048 x 1536, and HRAA capabilities.
Radeon 8500DV - All-in-Wonder
230MHz GPU core speed, 64MB of 190MHz DDR RAM (380MHz effectively)
1.08 Gigapixels per second, 2.17 Gigatexels per second
62.8 million triangles per second
6.1 GB per second memory bandwidth
Four programmable pipelines with 2 textures per pipeline
Dual 400MHz RAMDAC
S-Video, Firewire/IEEE 1394, and Component video ports, in and out
- Recommendation: The memory bandwidth on this 8500 suffers severely compared to the standard specs. However, this graphics card is an 'All-in-Wonder', so video input/output and manipulation are its thing (again, take a look at the speedy dual RAMDACs), and it does them well, especially with a much-needed Firewire port for digital cameras and camcorders. As I said above, the AIW cards are a good fit for video-editing users.
Radeon 8500LE
230MHz GPU core speed, 64MB of 230MHz DDR RAM (460MHz effectively)
1.08 Gigapixels per second, 2.17 Gigatexels per second
62.8 million triangles per second
7.4 GB per second memory bandwidth
Four programmable pipelines with 2 textures per pipeline
- Recommendation: Almost a contradiction in existence, the 8500LE is a value graphics card based on a top-of-the-line GPU! For those who can't afford a full-powered 8500 but want their vertexes shaded a la DirectX 8.1, the LE version may be for you.
Radeon 8500 (original)
275MHz GPU core speed, 64MB of 275MHz DDR RAM (550MHz effectively)
1.3 Gigapixels per second, 2.6 Gigatexels per second
75 million triangles per second
8.8 GB per second memory bandwidth
Four programmable pipelines with 2 textures per pipeline
- Recommendation: Now known as the 'original' 8500, the 64MB version is certainly powerful enough to handle any game on the market. And of course, that nifty DirectX 8.1 support makes everything look all pretty. This card has since been rebadged as the Radeon 9100.
Radeon 8500 128
275MHz GPU core speed, 128MB of 275MHz DDR RAM (550MHz effectively)
1.3 Gigapixels per second, 2.6 Gigatexels per second
75 million triangles per second
8.8 GB per second memory bandwidth
Four programmable pipelines with 2 textures per pipeline
- Recommendation: With twice the video memory of the 'original' Radeon 8500, which should prove useful for this summer's programmable shader games, this is the current top dog in ATI's pack. Current games show almost no performance gain with 128MB of video memory, but then again current games have little to no programmable shader support, either. I will say this is the best budget graphics card for gaming; its performance falls between the GeForce3 and the GeForce4 Ti.
Radeon 9000/Pro
275MHz GPU core speed (RADEON 9000 PRO) or 250MHz (RADEON 9000), 64MB or 128MB of DDR RAM (550MHz effective)
64MB version: composite or S-Video connector; 128MB version: composite connector
1.1 billion pixels per second, 43 million triangles per second
DirectX 8.1 programmable pixel and vertex shaders (Pixel Shader 1.4, Vertex Shader 1.1)
Dual integrated 400MHz DACs
Four programmable pipelines with 1 texture per pipeline
- Recommendation: Clocked at 275/275 (Pro) or 250/200 (non-Pro). Performance is faster than the GF4 MX440 but 1-10% slower than the Radeon 8500. A 1GHz or faster CPU is needed to bring out its full performance. If you're thinking of buying a new GF4 MX card, grab this card instead, because it beats the GF4 MX easily, especially the Radeon 9000 Pro.
Radeon 9500/Pro
275MHz engine clock, 64MB DDR or 128MB DDR (Pro), 400MHz DACs, 1.1 Gigapixels/s fillrate (2.2 Gigapixels/s Pro), 540MHz memory speed, 128-bit DDR memory interface
AGP 8X support (1x/2x/4x/8x), DirectX 9 card
8.8 GB/s memory bandwidth
Four (9500) or eight (9500 Pro) programmable pipelines with 1 texture per pipeline
- Recommendation: Same GPU as the Radeon 9700 (the R300), but clocked at only 275/270 with a 128-bit memory interface, so it is basically a Radeon 9700 with less bandwidth. It normally performs around the speed of a GF4 Ti4400 and sometimes outperforms the Ti4600; with aniso and FSAA applied, it is faster than the Ti4600. I will say this is the best DirectX 9 budget card on the market now. This is the best!! Unfortunately, this card will be discontinued once the new Radeon 9600 Pro arrives on the market to replace it.
Radeon 9700 Pro
325/310 core/memory clock, 128MB DDR memory, 256-bit memory interface, 8-pixel-pipeline architecture, AGP 8X
DirectX 9.0, 19.8GB/s memory bandwidth
Eight programmable pipelines with 1 texture per pipeline
- Recommendation: I will say this was the best card on the market before the availability of the FX 5800 and Radeon 9800 Pro. It offers performance 100-200% faster than the GF4 Ti4600 with aniso and AA on, and usually at least 30% faster otherwise. Your CPU must be as fast as possible: anything slower than 1GHz is better off with other cards like the GF4 Ti4200 or Radeon 8500. The non-Pro version is slightly slower as it runs at a lower clock.
Radeon 9200/Pro (RV280)
250MHz core clock, 250MHz (500MHz effective) memory, AGP 8X, 128MB of DDR memory (support for a 128MB frame buffer), dual integrated 400MHz DACs, 128-bit, DirectX 8.1, 0.15 micron
Four programmable pipelines with 1 texture per pipeline
RADEON 9200 PRO: DVI/VGA/Video-out
RADEON 9200: VGA/RCA/S-Video
- Recommendation: The 128MB card has the same performance as the RADEON 9000, and renaming the card makes no sense given the dead-on-arrival AGP 8x feature.
Radeon 9600/Pro
400MHz core clock, 300MHz (600MHz DDR) memory, AGP 8X, 128MB of DDR memory
0.13 micron, 128-bit, 9.6 GB/s memory bandwidth, 1.6 Gigapixels/s fillrate
1x/2x/4x/8x, DirectX 9, new SmoothVision 2.1
Four programmable pipelines with 1 texture per pipeline
- Recommendation: This card is sometimes slower than the Radeon 9500, yet it will replace the Radeon 9500, so you could call it just a rebadge of that card. Compared with the 9500 Pro it has higher core and memory clocks but only 4 pixel pipelines, which is probably what makes it slower than the Radeon 9500 Pro.
Radeon 9800 Pro (R350)
380/340 core/memory clock, 0.15 micron, 256-bit DDR/DDR2, 128/256MB memory (680MHz effective), 107 million transistors, 21.8 GB/s memory bandwidth, 3.04 Gigapixels/s pixel fillrate, 18.2 billion AA samples/s AA fillrate, 380M triangles/s, 1x/2x/4x/8x, 2.8ns memory
Eight pixel pipelines and four vertex shader units.
- Recommendation: This is another good graphics card from ATI, with no overheating problems and quiet operation. However, it doesn't outperform the 9700 Pro by much, so I suggest users who already have a Radeon 9700 Pro hold on to that budget and wait for the next incoming card.
Nvidia
GeForce2 Series: 256-bit GPU, 0.18 micron process, 25 million transistors, 64-bit or 128-bit memory bus width, 350MHz RAMDAC, and 3D imaging capabilities up to 2048 x 1536.
GeForce2 MX 400 (NV 11)
200MHz GPU core speed, 32MB or 64MB of 166MHz 6ns DDR RAM (64-bit bus width, 333MHz effectively)
400 Megapixels per second, 800 Megatexels per second
25 million triangles per second
2.7 GB per second memory bandwidth
Two pipelines with 2 textures per pipeline
- Recommendation: The MX 400 is still the best of the GF2 MX series, and the only one I'm still recommending to anyone who plays modern 3D games. One look at the specs and you'll know why, but mainly it's because you can pick up a retail version for about the cost of the latest game; ONE GAME, quite a steal for any graphics card.
GeForce2 Ti (NV15) (Titanium)
250MHz GPU core speed, 64MB of 200MHz 5ns DDR RAM (400MHz effectively)
1 Gigapixel per second, 2 Gigatexels per second
31 million triangles per second
6.4 GB per second memory bandwidth
Four pipelines with 2 textures per pipeline
- Recommendation: If you don't want the humiliation of having a low-end card but are still strapped for cash (and willing to hold out on DirectX 8/8.1 until the majority of games use it), then this would be the card for you. More than acceptable performance for all games, as long as you can do without anti-aliasing.
GeForce4 MX Series (NV17)
256-bit GPU, 0.15 micron process, 29 million transistors, 128-bit memory bus width, 350MHz RAMDAC, 3D imaging capabilities up to 2048 x 1536, and HRAA capabilities. Warning! Although the GeForce4 MX series has a '4' in its name, it is not related to the GeForce4 Ti GPUs at all; it is merely a replacement for the GeForce2 MX series of GPUs. No programmable pixel shaders or DirectX 8 support. In fact, the whole MX family is GeForce2-based silicon that just comes with higher core and memory clocks. IMHO, this is another of Nvidia's worst cards.
GeForce4 MX 420
250MHz GPU core speed, 64MB of 166MHz SDR RAM
500 Megapixels per second, 1 Gigatexel per second
31 million triangles per second
2.7 GB per second memory bandwidth
Two pipelines with 2 textures per pipeline
- Recommendation: As a replacement for the GF2 MX series, this one rocks the house! Hindered by memory bandwidth, yes, but nonetheless impressive for its price. You could probably even play the latest games... at 1024 x 768 or lower, however.
GeForce4 MX 440
270MHz GPU core speed, 64MB of 200MHz DDR RAM (400MHz effectively)
550 Megapixels per second, 1.1 Gigatexels per second
34 million triangles per second
6.4 GB per second memory bandwidth
Two pipelines with 2 textures per pipeline
- Recommendation: A value card with full 128-bit DDR memory? Gaming on the cheap has never been this good! For a little more green than the MX 400 you can now get about 4 times the performance. This was pretty much inconceivable a year ago, when the GeForce2 Ultra was still the king. One of the most popular budget cards at Low Yat, though users who know a bit more about graphics cards will choose the Radeon 9000 instead.
GeForce4 MX 460
300MHz GPU core speed, 64MB of 275MHz DDR RAM (550MHz effectively)
600 Megapixels per second, 1.2 Gigatexels per second
38 million triangles per second
8.8 GB per second memory bandwidth
Two pipelines with 2 textures per pipeline
- Recommendation: Better than the MX 440 (and possibly the GF2 Ti), to be sure, which says a lot, but the current pricing scale leaves much to be desired! It's difficult to justify the cost of this supposedly 'value' card with no vertex shading, lacklustre performance, and still only DirectX 7 support.
GeForce3 Series (NV20)
256-bit GPU, 0.15 micron process, 60 million transistors, 128-bit memory bus width, 350MHz RAMDAC, 3D imaging capabilities up to 2048 x 1536, and HRAA capabilities.
GeForce3 Ti 200
175MHz GPU core speed, 64MB of 200MHz 5ns DDR RAM (400MHz effectively)
700 Megapixels per second, 1.4 Gigatexels per second
80 million triangles per second, 2.8 billion anti-aliased samples per second
6.4 GB per second memory bandwidth
Four programmable pipelines with 2 textures per pipeline
2X and 4X FSAA up to 1024 x 768
- Recommendation: An itty-bitty value version of the standard GF3. Despite its specs, the GF3 is designed to be more powerful than a GF2 (or GF4 MX) of similar speed and can make better use of memory bandwidth. Excellent performance for intense gaming, at an excellent mid-range price.
GeForce3
200MHz GPU core speed, 64MB of 230MHz 4ns DDR RAM (460MHz effectively)
800 Megapixels per second, 1.6 Gigatexels per second
92 million triangles per second, 3.2 billion anti-aliased samples per second
7.4 GB per second memory bandwidth
Four programmable pipelines with 2 textures per pipeline
2X and 4X FSAA (see below) up to 1024 x 768
- Recommendation: The hottest buzzword in the graphics market today is "programmable vertex shading", and anyone who owns the original GF3 has never regretted the purchase.
GeForce3 Ti 500
240MHz GPU core speed, 64MB of 250MHz 4ns DDR RAM (500MHz effectively)
960 Megapixels per second, 1.9 Gigatexels per second
109 million triangles per second, 3.8 billion anti-aliased samples per second
8 GB per second memory bandwidth
Four programmable pipelines with 2 textures per pipeline
2X and 4X FSAA up to 1024 x 768
- Recommendation: These 'Ultra' versions are usually the first to go! By that I mean the new 128MB graphics cards (such as the GF4 Ti's) have pretty much taken over the market formerly held by the GF3 Ti 500. Not that this is a bad card anymore, just that its price/performance ratio has been shot to swiss cheese! If you see one of these cards, make sure it's the same price as a GF4 Ti 4200, otherwise I recommend you skip it.
GeForce4 Ti Series (NV25)
256-bit GPU, 0.15 micron process, 63 million transistors, 128-bit memory bus width, dual 350MHz RAMDAC, 3D imaging capabilities up to 2048 x 1536, and HRAA capabilities. Notice the dual RAMDAC? The GF4 Ti GPU has built-in support for an extra monitor or LCD screen by default! The GF4 Ti GPU also includes another programmable vertex shader, bringing the total to two (the GF3 had one).
GeForce4 Ti 4200
250MHz GPU core speed, 128MB of 250MHz DDR RAM (500MHz effectively)
900 Megapixels per second, 1.8 Gigatexels per second
102 million triangles per second, 3.6 billion AA samples per second
8 GB per second memory bandwidth
Four programmable pipelines with 2 textures per pipeline
- Recommendation: NVidia didn't just set another generational speed record! And here I thought it was supposed to take a year to come out with another GPU! This is the best card for mid-range users.
GeForce4 Ti 4400
275MHz GPU core speed, 128MB of 275MHz DDR RAM (550MHz effectively)
1.1 Gigapixels per second, 2.2 Gigatexels per second
125 million triangles per second, 4.4 billion AA samples per second
8.8 GB per second memory bandwidth
Four programmable pipelines with 2 textures per pipeline
- Recommendation: Next in line we have the mandatorily faster, higher model numbered bigger brother of the 4200, the 4400! I think we're starting to leave the Radeon 8500 128 behind!
GeForce4 Ti 4600
300MHz GPU core speed, 128MB of 325MHz DDR RAM (650MHz effectively)
1.2 Gigapixels per second, 2.4 Gigatexels per second
136 million triangles per second, 4.8 billion AA samples per second
10.4 GB per second memory bandwidth
Four programmable pipelines with 2 textures per pipeline
- Recommendation: Even though it's one of the high-end cards from Nvidia, it doesn't support DirectX 9.
GeForce FX 5200/Ultra
325MHz GPU core speed (225-275MHz non-Ultra), 64MB or 128MB of 325MHz DDR RAM (650MHz effective; 400MHz effective non-Ultra)
1.3 Gigapixels per second
81 million triangles per second
10.4 GB per second memory bandwidth
350MHz RAMDAC, 0.15 micron process, AGP 8X, DirectX 9
Four programmable pipelines with 2 textures per pipeline
- Recommendation: The new card from Nvidia. It sometimes loses, sometimes wins. Many of you compare this card with the Ti 4200; I will say the non-Ultra is really an MX 440 that just comes with some extra features. Maybe this card can deliver more performance in DirectX 9 games, especially with anisotropic filtering and AA enabled, but whether it has enough core/memory clock for high-end gaming is still a question mark.
FAQ
AGP 8X/4X/2X/1X
AGP 8X/4X/2X/1X slots should all be compatible with 8X/4X/2X/1X cards, but you might want to take note of the 1.5v/3.3v signaling voltage to prevent burning out your card. Note that some motherboards may still have compatibility problems.
AGP slot/card support status:
An AGP 1.0 or 2.0 card running in 2x or 1x mode uses 3.3v
An AGP 2.0 or 3.0 card running in 4x mode uses 1.5v
An AGP 3.0 card running in 8x mode uses 0.8v
AGP 1.0 slot supports 1x and 2x at 3.3v
AGP 2.0 slot supports 1x and 2x at 3.3v
AGP 2.0 slot supports 4x at 1.5v
AGP 3.0 slot supports 4x at 1.5v
AGP 3.0 slot supports 8x at 0.8v
This is the speed at which AGP transfers data to and from the graphics card; the larger the multiplier, the faster. 1x = 266MB/s, 2x = 533MB/s, and so on.
The performance increase of AGP 8X over 4x is usually only 1-2%.
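For illustration, here's a quick sketch (assuming the standard 266MB/s base rate for AGP 1x) that scales the bandwidth by the transfer mode:

```python
# Peak AGP bandwidth scales linearly with the transfer mode.
AGP_1X_MB_PER_S = 266  # base rate for AGP 1x

def agp_bandwidth_mb_per_s(mode: int) -> int:
    """Approximate peak bandwidth for AGP 1x/2x/4x/8x."""
    return AGP_1X_MB_PER_S * mode

for mode in (1, 2, 4, 8):
    print(f"AGP {mode}x: ~{agp_bandwidth_mb_per_s(mode)}MB/s")
# -> ~266, 532, 1064, 2128 MB/s (the 533MB/s figure above is just rounding)
```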
Note: Different configurations can yield different results. To use the more powerful graphics cards, a CPU of 1GHz or faster is recommended; an 800MHz CPU is the minimum needed to extract at least 80% of a card's performance.
More video memory helps in games with large textures and at high resolutions. It also boosts speed if all textures are stored in video memory rather than partly in AGP memory (part of your system RAM is used for this). It helps at high resolutions and with AA and aniso as well.
Aniso (anisotropic filtering) improves the quality of textures in games, making everything look much better overall. It is available in 1x, 2x, 4x, 8x, and 16x levels; the higher the level, the more detailed the textures. Normally you need at least 4x to see a significant difference. I usually use 16x for maximum quality. 16x is only supported by the Radeon 8500/9000/9500/9700; the GF4 series goes up to 8x only.
Latest Official Drivers
Nvidia: www.nvidia.com
Matrox: www.matrox.com
ATI: www.ati.com
Unofficial Drivers Recommended By Me
Omega : www.omegadrivers.net
Prospect Chart for Best Graphics Card (based on www.digit-life.com, July 2004)
NVIDIA GeForce 6800 Ultra 256MB, 450/1100 MHz
ATI RADEON X800XT 256MB, 525/1150 MHz
NVIDIA GeForce 6800 Ultra 256MB, 400/1100 MHz
NVIDIA GeForce 6800 GT 256MB, 350/1000 MHz
ATI RADEON X800 PRO 256MB, 475/890 MHz
NVIDIA GeForce 6800 128MB, 325/700 MHz
ATI RADEON 9800XT 256MB, 412/730 MHz
ATI RADEON 9800 PRO 256MB, 380/680 MHz
ATI RADEON 9800 PRO 128MB, 380/680 MHz
NVIDIA GeForce FX 5950 Ultra GT 256MB, 520/950 MHz
NVIDIA GeForce FX 5950 Ultra 256MB, 475/950 MHz
NVIDIA GeForce FX 5900 Ultra 256MB, 450/850 MHz
ATI RADEON 9800 128MB, 325/590 MHz
NVIDIA GeForce FX 5900 128MB, 400/850 MHz
NVIDIA GeForce FX 5900XT 128MB, 390/700 MHz
NVIDIA GeForce FX 5700 Ultra 128MB, 475/900 MHz
NVIDIA GeForce FX 5700 Ultra 128MB, 475/950 MHz DDR3
ATI RADEON 9600XT 128MB, 525/650 MHz
ATI RADEON 9600XT 128MB, 500/600 MHz
ATI RADEON 9800 SE 128MB 256bit, 380/680 MHz
ATI RADEON 9600 PRO 128MB, 400/600 MHz
NVIDIA GeForce FX 5700 128MB, 425/550 MHz
ATI RADEON 9550XT 128MB, 400/500 MHz
NVIDIA GeForce FX 5600 128MB, 325/550 MHz
NVIDIA GeForce FX 5700LE 128MB, 250/400 MHz
ATI RADEON 9800 SE 128MB 128bit, 325/540 MHz
ATI RADEON 9600 256MB, 325/400 MHz
ATI RADEON 9600 128MB, 325/400 MHz
ATI RADEON 9550 256MB, 250/400 MHz
NVIDIA GeForce FX 5500 128MB, 270/400 MHz
NVIDIA GeForce FX 5200 128MB, 250/400 MHz
NVIDIA GeForce FX 5600XT 128MB 128bit, 235/400 MHz
ATI RADEON 9600SE 128MB 64bit, 325/400 MHz
NVIDIA GeForce FX 5600XT 128MB 64bit, 235/400 MHz
NVIDIA GeForce FX 5200 128MB, 64bit, 250/333 MHz
Integrated graphics cards
So far none are powerful, even the newest ones, as they are limited by bandwidth and heat constraints. The most powerful available are the nForce, nForce2, IGP320M, etc. Intel Extreme Graphics is not too bad either, but it is worse than those above. These graphics cores will not speed up much even if you allocate more memory, though it helps a little. They are useful for games that do not require much GPU power and are not video intensive, like StarCraft, Tiberian Sun, Red Alert 2, etc.
True Speed of RAM
Rating SDR DDR
7ns 143MHz 286MHz
6ns 166MHz 333MHz
5.5ns 183MHz 366MHz
5ns 200MHz 400MHz
4.5ns 222MHz 444MHz
4ns 250MHz 500MHz
3.8ns 263MHz 526MHz
3.6ns 275MHz 550MHz
2.8ns 357MHz 714MHz
Sometimes the RAM cannot reach its rated speed because it comes from a bad batch, or from an early batch that is not very good.
Glossary
Hardware Transform and Lighting (TL or TCL)
New games will soon push hardware T&L to its limits, so I do not recommend purchasing a new video card that does not have it UNLESS you are not going to play any new games. Hardware transform and lighting is the process of doing the transform and lighting calculations on the video processor instead of on the computer's CPU. The nVidia GeForce and GeForce2 series have hardware T&L, ATI's Radeon has hardware TCL, and the S3 Savage 2000 has hardware T&L.
Full screen anti-aliasing
Just because a card has it doesn't mean you need it: some people like the crisp (jaggy) look of their 3D graphics, while others want it smoothed out. If you enable it, it will noticeably reduce your video performance, no matter what card you use.
Full-scene anti-aliasing is something that the former 3dfx graphics company pioneered on PCs, and it looks great! Basically, it smooths angled lines, eliminating the 'stair-case look'. Pixels are tiny rectangles that are hardly noticeable, until you start playing 3D games and look at a straight line at an angle. An angled line is made up of shorter lines offset one pixel from each other, creating a boxy look that has been coined the 'stair-case look' because of the jagged edges. Anti-aliasing smooths the jagged edges at the affected pixels by creating a softer transition from one pixel line to the next. The result is amazing, and for full-screen PC games nothing will do except to implement anti-aliasing over the full screen (or scene, as the technology is named).
Video RAM
If you plan on playing new games, get a video card with at least 32MB of RAM; more is better in the case of video systems. DDR (double data rate) RAM will be faster than SDR (single data rate) RAM.
Video Capture
Do you want to do some video capturing, or perhaps play your game console on your computer screen? Then this is a nice feature to have. You can also run your digital cable, digital satellite, analog satellite, or analog cable (with an external cable box/tuner) through this and watch it on your computer monitor.
TV-out/Video-out
This feature is typically useless unless you have a big-screen TV, and even then it is only good for movies and games that do not have tiny text.
TV-Tuner
You can only use this feature if you have analog cable, and since analog cable is being replaced by digital cable, this should not be a high-priority feature.
DVI support
Only the ATI Radeon has this feature at the moment, and you will only need this feature if you purchase a flat-panel screen with a DVI interface.
DVD / Mpeg-2 acceleration features
Don't be confused here: all the companies claim to have some type of DVD acceleration, but to make sure, look for the iDCT (Inverse Discrete Cosine Transform) feature.
OpenGL support
Most video card vendors supply some version of OpenGL with their hardware. High-end video cards are fine-tuned for high-end 3D modeling software, whereas the general consumer cards are fine-tuned for games.
Hardware Overlay
This feature is used with some high-end professional 3D modeling software.
Pixels and Texels: Fillrate
A texel is a textured pixel, meaning that detail is added to it instead of a bland, polygon-like look. Trust me, if you play games you'll appreciate textures if you haven't already seen them: they add a lot of realism to a scene and are more easily recognised by the human eye. With that said, you need a lot of textures in your games! Thanks to recent advancements in GPUs, texels are actually more plentiful than pixels, so you won't run out of them. The number of pixels and texels that can be generated by the GPU in one second is termed the 'fillrate' of that GPU (related to how well it can 'fill' your monitor with images). Fillrate is now measured in hundreds of millions and even billions of pixels and texels per second, but don't expect this much all the time or even half the time. Since this spec is merely a measure of maximum potential, if everything works to its fullest, it's really only useful for comparing cards rather than choosing a graphics card based solely on fillrate.
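You can reproduce the fillrate numbers in the spec lists above with nothing more than multiplication; here's a small sketch using the GeForce3 Ti 200 figures as a check:

```python
# Peak fillrate = core clock x pixel pipelines (pixels/s),
# and additionally x textures-per-pipeline for texels/s.
def fillrate(core_mhz: int, pipelines: int, textures_per_pipe: int):
    pixels = core_mhz * 1_000_000 * pipelines
    texels = pixels * textures_per_pipe
    return pixels, texels

# GeForce3 Ti 200: 175MHz core, four pipelines, 2 textures per pipeline
pixels, texels = fillrate(175, 4, 2)
print(f"{pixels / 1e6:.0f} Megapixels/s, {texels / 1e9:.1f} Gigatexels/s")
# -> 700 Megapixels/s, 1.4 Gigatexels/s, matching the listed specs
```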
Polygons, Triangles, and Anti-Aliased Samples?
All polygons are made up of one or more triangles, but polygon sounds better so NVidia uses that word when they really mean the simplest polygon - a triangle - just like ATI measures. Actually, because of the wide disparity in triangles per second specs, I suspect that ATI is really measuring a bit more than NVidia. As for 'anti-aliased samples', as far as I have been able to calculate, there are about 28.8 million triangles for every one billion 'anti-aliased samples'.
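As a rough check of that figure, here's a sketch dividing the GeForce3 spec lines listed above:

```python
# Triangles per second vs. anti-aliased samples per second,
# taken from the GeForce3 spec lines earlier in this guide.
cards = {
    "GF3 Ti 200": (80e6, 2.8e9),
    "GF3":        (92e6, 3.2e9),
    "GF3 Ti 500": (109e6, 3.8e9),
}
for name, (triangles, aa_samples) in cards.items():
    ratio = triangles / (aa_samples / 1e9)
    print(f"{name}: ~{ratio / 1e6:.1f} million triangles per billion AA samples")
# All three land near 28.7 million, consistent with the ~28.8M estimate.
```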
HRAA
High-resolution anti-aliasing is basically the same thing as FSAA, except that HRAA is capable of eliminating that 'stair-case look' beyond 1024 x 768, into the high resolutions that we now demand. Although HRAA is just getting started, anti-aliasing techniques are getting more robust at the same time that graphics processors are becoming exponentially more powerful. I predict that in a couple of years anti-aliasing will be as commonplace as textures: it will be a part of your graphics card and games without mention.
32-bit color
There seems to be a lot of confusion over what 32-bit color offers over 24-bit color, and rightly so, because both are capable of 16 million colors (16,777,216 to be precise, or 2^24). 32-bit hardware support was also hyped at the time without much explanation, such as NVidia's TNT2 being able to do 32-bit color but not 3dfx's VooDoo3. So if the total number of colors doesn't change, what are the extra 8 bits for? Alpha blending. It's a value that conveniently takes up 8 bits and refers to how transparent the rendered object is. Everything from completely opaque (like solid walls) to somewhat translucent (like water and fog) to completely transparent (like clear glass) can be described using these 8 extra bits. 32-bit games can then use this alpha blending value to create superbly realistic lighting effects with semi-substantial substances, like variable fog thickness and light refraction through glass and water. First-person shooters were the first games to adopt this technology, but since then other 3D games (flight sims and action games) and even 2D games (such as Diablo II) have used 32-bit color.
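To see how those 32 bits are laid out, here's a minimal sketch (the channel order shown is one common ARGB convention, not a claim about any particular card):

```python
# 32-bit color: 24 bits of RGB (2**24 = 16,777,216 colors) plus 8 bits of alpha.
def pack_argb(alpha: int, red: int, green: int, blue: int) -> int:
    """Pack four 8-bit channels into a single 32-bit pixel value."""
    return (alpha << 24) | (red << 16) | (green << 8) | blue

half_transparent_red = pack_argb(128, 255, 0, 0)  # alpha 128 = roughly 50% translucent
print(hex(half_transparent_red))  # 0x80ff0000
print(2 ** 24)                    # 16777216 -- the same color count as 24-bit
```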
RAM bus width
For those who know how to calculate bandwidth (the number of bytes transferred per second), you know that both RAM speed and RAM bus width are required. Without going into too much detail for those new to 'tech specs', the bus width is measured in bits, and wider is better. For example, 100MHz RAM on a 128-bit bus doesn't sound all that impressive compared to 200MHz RAM on a 64-bit bus, mostly because speed is considered first, but a quick calculation reveals that both RAM set-ups have the same bandwidth and thus the same performance.
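Here's that quick calculation as a sketch, so you can verify the two set-ups really are equal:

```python
# Bandwidth (bytes/s) = clock (Hz) x bus width (bits) / 8 bits-per-byte.
def bandwidth_gb_per_s(clock_mhz: float, bus_width_bits: int) -> float:
    return clock_mhz * 1e6 * bus_width_bits / 8 / 1e9

print(bandwidth_gb_per_s(100, 128))  # 1.6 GB/s
print(bandwidth_gb_per_s(200, 64))   # 1.6 GB/s -- same bandwidth, same performance
```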
Nanosecond Rating
This can be used to calculate the theoretical speed limit of RAM, if you ever decide to do some RAM overclocking on your graphics card. For instance, a 10ns RAM rating means a speed limit of 100MHz, while a 7ns rating means a speed limit of about 143MHz. The calculation is quite simple and fast with the standard calculator program included with Windows: take the ns number, divide by 1 billion (9 zeros), hit the "1/x" (inverse) button, and there's your number in Hz. Divide by 1 million to get MHz, though you should be able to look at the Hz number and do the conversion in your head. Imagine what 1ns RAM could do!
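The same calculation as a short sketch, in place of the Windows calculator steps:

```python
# Theoretical RAM speed limit from its nanosecond rating: frequency = 1 / period.
def ns_to_mhz(ns_rating: float) -> float:
    hz = 1 / (ns_rating / 1e9)  # divide by a billion, then take the inverse
    return hz / 1e6             # Hz -> MHz

print(ns_to_mhz(10))  # 100.0 MHz
print(ns_to_mhz(7))   # ~142.9 MHz
print(ns_to_mhz(1))   # 1000.0 MHz -- imagine what 1ns RAM could do!
```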
Refresh Rate
Measured in hertz, the refresh rate is the number of times per second that a monitor screen updates. Low refresh rates of 60 Hz or less can cause perceptible flicker, as they are not fast enough to trick the eye into thinking the display is a solid image. This is fatiguing to the eyes and even causes headaches. When buying a graphics card, make certain it supports a refresh rate of 75 Hz or higher at the maximum resolutions and colour depths you plan to work with.
Note: Please PM me if there are any errors. System configurations and results may vary. Some of this guide is drawn from other websites (of course I refer to other sites). I would very much appreciate it if any of you could add information, especially specifications of new graphics cards, to this guide.
References:
You can view results for every graphics card from 1998 through the latest 2003 cards here:
63 Accelerators on the Pentium II 350 MHz in Three Tests
www.ati.com
www.nvidia.com
www.tomshardware.com
www.digit-life.com
Resources: nvidia.com, ati.com, digit-life.com, tomshardware.com, guru3d.com. Special thanks to: Someguy at hwz.com, Kisai (Canadian PC technician) & computernuts (programming student).