
 Little Bit of HDD Info, basic introduction

TSavenger
post Nov 27 2004, 12:48 AM

What is there to put here?
******
Senior Member
1,467 posts

Joined: Jan 2003
From: Online wirelessly


Basic Hard Disk Drive Components

Many types of hard disk drives are on the market, but nearly all share the same basic physical components. Some differences might exist in the quality of these components (and in the quality of the materials used to make them), but the operational characteristics of most drives are similar. The basic components of a typical hard disk drive are as follows:

Disk platters

Read/write heads

Head actuator mechanism

Spindle motor (inside platter hub)

Logic board (controller or printed circuit board)

Cables and connectors

Configuration items (such as jumpers or switches)


Hard Disk Platters (Disks)

A hard disk drive has one or more platters, or disks.

Most hard disk drives have two or more platters, although some of the smaller drives used in portable systems have only one. The number of platters a drive can have is limited by the drive's vertical physical size. The maximum number of platters I have seen in any 3.5-inch drive is 12; however, most drives have six or fewer.
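
The arithmetic here is simple: capacity scales with the number of recording surfaces, two per platter. As a quick sketch (the per-surface figure below is invented for illustration, since actual densities vary by model):

```python
# Illustrative only: capacity scales with the number of recording
# surfaces (two per platter). The per-surface capacity is invented.
def drive_capacity_gb(platters, gb_per_surface):
    surfaces = platters * 2
    return surfaces * gb_per_surface

print(drive_capacity_gb(4, 10))   # 80 (GB) for a hypothetical drive
print(drive_capacity_gb(12, 10))  # 240 (GB) at the 12-platter maximum
```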

Platters have traditionally been made from an aluminum/magnesium alloy, which provides both strength and light weight. However, manufacturers' desire for higher and higher densities and smaller drives has led to the use of platters made of glass (or, more technically, a glass-ceramic composite).

One such material, produced by the Dow Corning Corporation, is called MemCor. MemCor is made of a glass-ceramic composite that resists cracking better than pure glass. Glass platters offer greater rigidity than metal (because metal can be bent and glass cannot) and can therefore be machined to one-half the thickness of conventional aluminum disks, or sometimes less. Glass platters are also much more thermally stable than aluminum platters, which means they do not expand or contract very much with changes in temperature. Several hard disk drives made by companies such as IBM, Seagate, Toshiba, Areal Technology, and Maxtor currently use glass or glass-ceramic platters. In fact, Hitachi Global Storage Technologies (Hitachi and IBM's joint hard disk venture) is designing all new drives with only glass platters. For most other manufacturers as well, glass disks will probably replace the standard aluminum/magnesium substrate over the next few years.

Recording Media
No matter which substrate is used, the platters are covered with a thin layer of a magnetically retentive substance, called the medium, on which magnetic information is stored. Three popular types of magnetic media are used on hard disk platters:

Oxide media

Thin-film media

AFC (antiferromagnetically coupled) media

Oxide Media
The oxide medium is made of various compounds, containing iron oxide as the active ingredient. The magnetic layer is created on the disk by coating the aluminum platter with a syrup containing iron-oxide particles. This syrup is spread across the disk by spinning the platters at a high speed; centrifugal force causes the material to flow from the center of the platter to the outside, creating an even coating of the material on the platter. The surface is then cured and polished. Finally, a layer of material that protects and lubricates the surface is added and burnished smooth. The oxide coating is normally about 30 millionths of an inch thick. If you could peer into a drive with oxide-coated platters, you would see that the platters are brownish or amber.

As drive density increases, the magnetic medium needs to be thinner and more perfectly formed. The capabilities of oxide coatings have been exceeded by most higher-capacity drives. Because the oxide medium is very soft, disks that use it are subject to head-crash damage if the drive is jolted during operation. Most older drives, especially those sold as low-end models, use oxide media on the drive platters. Oxide media, which have been used since 1955, remained popular because of their relatively low cost and ease of application. Today, however, very few drives use oxide media.

Thin-Film Media
The thin-film medium is thinner, harder, and more perfectly formed than the oxide medium. Thin film was developed as a high-performance medium that enabled a new generation of drives to have lower head-floating heights, which in turn made increases in drive density possible. Originally, thin-film media were used only in higher-capacity or higher-quality drive systems, but today, virtually all drives use thin-film media.

The thin-film medium is aptly named. The coating is much thinner than can be achieved by the oxide-coating method. Thin-film media are also known as plated or sputtered media because of the various processes used to deposit the thin film on the platters.

Thin-film-plated media are manufactured by depositing the magnetic medium on the disk with an electroplating mechanism, in much the same way that chrome plating is deposited on the bumper of a car. The aluminum/magnesium or glass platter is immersed in a series of chemical baths that coat the platter with several layers of metallic film. The magnetic medium layer itself is a cobalt alloy about 1 µ-inch thick.

Thin-film sputtered media are created by first coating the aluminum platters with a layer of nickel phosphorus and then applying the cobalt-alloy magnetic material in a continuous vacuum-deposition process called sputtering. This process deposits magnetic layers as thin as 1 µ-inch or less on the disk, in a fashion similar to the way that silicon wafers are coated with metallic films in the semiconductor industry. The same sputtering technique is then used again to lay down an extremely hard, 1 µ-inch protective carbon coating. The need for a near-perfect vacuum makes sputtering the most expensive of the processes described here.

The surface of a sputtered platter contains magnetic layers as thin as 1 µ-inch. Because this surface also is very smooth, the head can float more closely to the disk surface than was possible previously. Floating heights as small as 10nm (nanometers, or about 0.4 µ-inch) above the surface are possible. When the head is closer to the platter, the density of the magnetic flux transitions can be increased to provide greater storage capacity. Additionally, the increased intensity of the magnetic field during a closer-proximity read provides the higher signal amplitudes needed for good signal-to-noise performance.
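
The two units used above are easy to cross-check: 1 µ-inch is 25.4 nanometers, so a 10nm floating height works out to roughly the 0.4 µ-inch quoted in the text.

```python
# 1 µ-inch = 25.4 nanometers; verify that 10nm is about 0.4 µ-inch.
NM_PER_MICROINCH = 25.4

def nm_to_microinch(nm):
    return nm / NM_PER_MICROINCH

print(round(nm_to_microinch(10), 2))   # 0.39
```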

Both the sputtering and plating processes result in a very thin, hard film of magnetic medium on the platters. Because the thin-film medium is so hard, it has a better chance of surviving contact with the heads at high speed. In fact, modern thin-film media are virtually uncrashable. If you could open a drive to peek at the platters, you would see that platters coated with the thin-film medium look like mirrors.

AFC Media
The latest advancement in drive media is called antiferromagnetically coupled (AFC) media, which is designed to allow densities to be pushed beyond previous limits. Anytime density is increased, the magnetic layer on the platters must be made thinner and thinner. Areal density (tracks per inch times bits per inch) has increased in hard drives to the point where the grains in the magnetic layer used to store data are becoming so small that they become unstable over time, causing data storage to become unreliable. This is referred to as the superparamagnetic limit, which has been determined to be between 30 and 50Gbit/sq. in. Drives today have already reached 35Gbit/sq. in., which means the superparamagnetic limit is now becoming a factor in drive designs.
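
The areal density figure is just the product of the two component densities. The track and bit densities below are invented, chosen only so the product lands at the 35Gbit/sq. in. mentioned above:

```python
# Areal density = track density (tracks per inch) x linear density
# (bits per inch). The TPI/BPI figures are illustrative, not from
# any specific drive.
def areal_density_gbit_per_sq_in(tracks_per_inch, bits_per_inch):
    return tracks_per_inch * bits_per_inch / 1e9

print(areal_density_gbit_per_sq_in(70_000, 500_000))   # 35.0
```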

AFC media consists of two magnetic layers separated by a very thin 3-atom (6 angstrom) film layer of the element ruthenium. IBM has coined the term "pixie dust" to refer to this ultra-thin ruthenium layer. This sandwich produces an antiferromagnetic coupling of the top and bottom magnetic layers, which causes the apparent magnetic thickness of the entire structure to be the difference between the top and bottom magnetic layers. This allows the use of physically thicker magnetic layers with more stable, larger grains to function as if they were really a single layer that was much thinner overall.

IBM has introduced AFC media into several drives, starting with the 2.5-inch Travelstar 30GN series of notebook drives introduced in 2001, the first drives on the market to use AFC media. In addition, IBM has introduced AFC media in desktop 3.5-inch drives starting with the Deskstar 120 GXP. I expect other manufacturers to introduce AFC media into their drives as well. The use of AFC media is expected to allow areal densities to be extended to 100Gbit/sq. in. and beyond.

Read/Write Heads

A hard disk drive usually has one read/write head for each platter surface (meaning that each platter has two read/write heads: one for the top side and one for the bottom side). These heads are connected, or ganged, on a single movement mechanism. The heads, therefore, move across the platters in unison.

Mechanically, read/write heads are simple. Each head is on an actuator arm that is spring-loaded to force the head into contact with a platter. Few people realize that each platter actually is "squeezed" by the heads above and below it. If you could open a drive safely and lift the top head with your finger, the head would snap back down into the platter when you released it. If you could pull down on one of the heads below a platter, the spring tension would cause it to snap back up into the platter when you released it.

When the drive is at rest, the heads are forced into direct contact with the platters by spring tension, but when the drive is spinning at full speed, air pressure develops below the heads and lifts them off the surface of the platter. On a drive spinning at full speed, the distance between the heads and the platter can be anywhere from 0.5 to 5 µ-inch or more in a modern drive.

In the early 1960s, hard disk drive recording heads operated at floating heights as large as 200-300 µ-inch; today's drive heads are designed to float as low as 10nm (nanometers) or 0.4 µ-inch above the surface of the disk. To support higher densities in future drives, the physical separation between the head and disk is expected to drop even further, such that on some drives there will even be contact with the platter surface. New media and head designs will be required to make full or partial contact recording possible.

Caution

The small size of the gap between the platters and the heads is why you should never open the disk drive's HDA except in a clean-room environment. Any particle of dust or dirt that gets into this mechanism could cause the heads to read improperly or possibly even to strike the platters while the drive is running at full speed. The latter event could scratch the platter or the head, causing permanent damage.
To ensure the cleanliness of the interior of the drive, the HDA is assembled in a class-100 or better clean room. This specification means that a cubic foot of air cannot contain more than 100 particles that measure 0.5 microns (19.7 µ-inch) or larger. A single person breathing while standing motionless spews out 500 such particles in a single minute! These rooms contain special air-filtration systems that continuously evacuate and refresh the air. A drive's HDA should not be opened unless it is inside such a room.

Although maintaining a clean-room environment might seem to be expensive, many companies manufacture tabletop or bench-size clean rooms that sell for only a few thousand dollars. Some of these devices operate like a glove box; the operator first inserts the drive and any tools required and then closes the box and turns on the filtration system. Inside the box, a clean-room environment is maintained, and a technician can use the built-in gloves to work on the drive.

In other clean-room variations, the operator stands at a bench where a forced-air curtain maintains a clean environment on the bench top. The technician can walk in and out of the clean-room field by walking through the air curtain. This air curtain is very similar to the curtain of air used in some stores and warehouses to prevent heat from escaping in the winter while leaving a passage wide open.

Because the clean environment is expensive to produce, few companies, except those that manufacture the drives, are properly equipped to service hard disk drives.

As disk drive technology has evolved, so has the design of the read/write head. The earliest heads were simple iron cores with coil windings (electromagnets). By today's standards, the original head designs were enormous in physical size and operated at very low recording densities. Over the years, head designs have evolved from the first simple ferrite core designs into the magneto-resistive (MR) and giant magneto-resistive (GMR) types available today.


Head Actuator Mechanisms

Possibly more important than the heads themselves is the mechanical system that moves them: the head actuator. This mechanism moves the heads across the disk and positions them accurately above the desired cylinder.

The voice coil actuators used in hard disk drives made today use a feedback signal from the drive to accurately determine the head positions and adjust them, if necessary. This arrangement provides significantly greater performance, accuracy, and reliability than traditional stepper motor actuator designs.

A voice coil actuator works by pure electromagnetic force. The construction of the mechanism is similar to that of a typical audio speaker, from which the term voice coil is derived. An audio speaker uses a stationary magnet surrounded by a voice coil, which is connected to the speaker's paper cone. Energizing the coil causes it to move relative to the stationary magnet, which produces sound from the cone. In a typical hard disk drive's voice coil system, the electromagnetic coil is attached to the end of the head rack and placed near a stationary magnet. No physical contact occurs between the coil and the magnet; instead, the coil moves by pure magnetic force. As the electromagnetic coils are energized, they attract or repel the stationary magnet and move the head rack. Systems such as these are extremely quick and efficient and usually much quieter than systems driven by stepper motors.

Voice coil actuators use a guidance mechanism called a servo to tell the actuator where the heads are in relation to the cylinders and to place the heads accurately at the desired positions. This positioning system often is called a closed-loop feedback mechanism. It works by sending the index (or servo) signal to the positioning electronics, which return a feedback signal that is used to position the heads accurately. The system also is called servo controlled, which refers to the index or servo information that is used to dictate or control head-positioning accuracy.

A voice coil actuator with servo control is not affected by temperature changes, as a stepper motor is. When temperature changes cause the disk platters to expand or contract, the voice coil system compensates automatically because it never positions the heads in predetermined track positions. Rather, the voice coil system searches for the specific track, guided by the prewritten servo information, and then positions the head rack precisely above the desired track, wherever it happens to be. Because of the continuous feedback of servo information, the heads adjust to the current position of the track at all times. For example, as a drive warms up and the platters expand, the servo information enables the heads to "follow" the track. As a result, a voice coil actuator is sometimes called a track-following system.
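
The closed-loop idea above can be sketched in a few lines. In this toy model, the servo feedback reports how far the head is from the desired track, and the actuator applies a correction proportional to that error; the gain, units, and step count are all invented for illustration:

```python
# Toy closed-loop track follower: each step, servo feedback reports
# the off-track error and the actuator applies a proportional
# correction. Gain, units, and step count are invented.
def track_follow(target, position, gain=0.5, steps=20):
    for _ in range(steps):
        error = target - position   # servo feedback: off-track error
        position += gain * error    # actuator correction
    return position

# The head converges on the track regardless of where it starts,
# which is also why thermal drift of the track is handled for free:
print(round(track_follow(100.0, 0.0), 3))   # 100.0
```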

Automatic Head Parking

When you power off a hard disk drive using a CSS (contact start stop) design, the spring tension in each head arm pulls the heads into contact with the platters. The drive is designed to sustain thousands of takeoffs and landings, but it is wise to ensure that the landings occur at a spot on the platter that contains no data. Older drives required manual head parking; you had to run a program that positioned the drive heads to a landing zone, usually the innermost cylinder, before turning the system off. Modern drives automatically park the heads, so park programs are no longer necessary.

Some amount of abrasion occurs during the landing and takeoff process, removing just a "micro puff" from the magnetic medium, but if the drive is jarred during the landing or takeoff process, real damage can occur. Newer drives that use load/unload designs incorporate a ramp positioned outside the outer surface of the platters to prevent any contact between the heads and platters, even if the drive is powered off. Load/unload drives automatically park the heads on the ramp when the drive is powered off.

One benefit of using a voice coil actuator is automatic head parking. In a drive that has a voice coil actuator, the heads are positioned and held by magnetic force. When the power to the drive is removed, the magnetic field that holds the heads stationary over a particular cylinder dissipates, which would otherwise allow the head rack to skitter across the drive surface and potentially cause damage. To prevent this, the head rack is attached to a weak spring at one end and a head stop at the other end. When the system is powered on, the spring is overcome by the magnetic force of the positioner. When the drive is powered off, however, the spring gently drags the head rack to a park-and-lock position before the drive slows down and the heads land. On some drives, you could actually hear the "ting...ting...ting...ting" sound as the heads literally bounce-park themselves, driven by this spring.

On a drive with a voice coil actuator, you activate the parking mechanism by turning off the computer; you do not need to run a program to park or retract the heads. In the event of a power outage, the heads park themselves automatically. (The drives unpark automatically when the system is powered on.)

Air Filters

Nearly all hard disk drives have two air filters. One is called the recirculating filter, and the other is called either a barometric or breather filter. These filters are permanently sealed inside the drive and are designed never to be changed for the life of the drive, unlike many older mainframe hard disks that had changeable filters.

Although it is vented, a hard disk does not actively circulate air from inside to outside the HDA or vice versa. The recirculating filter permanently installed inside the HDA is designed to filter only the small particles scraped off the platters during head takeoffs and landings (and possibly any other small particles dislodged inside the drive). Because hard disk drives are permanently sealed and do not circulate outside air, they can run in extremely dirty environments.

The HDA in a hard disk drive is sealed but not airtight. The HDA is vented through a barometric or breather filter element that enables pressure equalization (breathing) between the inside and outside of the drive. For this reason, most hard drives are rated by the drive's manufacturer to run in a specific range of altitudes, usually from 1,000 feet below to 10,000 feet above sea level. In fact, some hard drives are not rated to exceed 7,000 feet while operating because the air pressure would be too low inside the drive to float the heads properly. As the environmental air pressure changes, air bleeds into or out of the drive, so internal and external pressures are identical. Although air does bleed through a vent, contamination usually is not a concern because the barometric filter on this vent is designed to filter out all particles larger than 0.3 microns (about 12 µ-inch) to meet the specifications for cleanliness inside the drive. You can see the vent holes on most drives, which are covered internally by this breather filter. Some drives use even finer-grade filter elements to keep out even smaller particles.
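
A rough isothermal-atmosphere approximation (scale height about 8.4km, a standard textbook figure, not from any drive specification) shows how much thinner the air gets over the rated altitude range:

```python
import math

# Rough isothermal-atmosphere approximation (scale height ~8.4km),
# just to show why high altitude makes it hard to float the heads.
def pressure_ratio(altitude_ft):
    meters = altitude_ft * 0.3048
    return math.exp(-meters / 8400.0)

# Sea level, the two rating limits from the text, and Mauna Kea:
for alt_ft in (0, 7_000, 10_000, 13_796):
    print(alt_ft, round(pressure_ratio(alt_ft), 2))
```

At the 13,796-foot summit the air is down to roughly 60% of sea-level pressure, which fits the failures described below.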

I conducted a seminar in Hawaii several years ago, and several of the students were from one of the astronomical observatories atop Mauna Kea. They indicated that virtually all the hard disk drives they had tried to use at the observatory site had failed very quickly, if they worked at all. This was no surprise because the observatories are at the 13,796-foot peak of the mountain, and at that altitude, even people don't function very well! At the time, they had to resort to solid-state (RAM) disks, tape drives, or even floppy disk drives as their primary storage medium. Since then, IBM's Adstar division (which produces all IBM hard drives) has introduced a line of rugged 3.5-inch drives that are hermetically sealed (airtight), although they do have air inside the HDA. Because they carry their own internal air under pressure, these drives can operate at any altitude and can also withstand extremes of shock and temperature. The drives are designed for military and industrial applications, such as systems used aboard aircraft and in extremely harsh environments. They are, of course, more expensive than typical hard drives that operate under ambient air pressure.

Hard Disk Temperature Acclimation

Because hard drives have a filtered port to bleed air into or out of the HDA, moisture can enter the drive, and after some period of time, it must be assumed that the humidity inside any hard disk is similar to that outside the drive. Humidity can become a serious problem if it is allowed to condense, and especially if you power up the drive while this condensation is present. Most hard disk manufacturers have specified procedures for acclimating a hard drive to a new environment with different temperature and humidity ranges, and especially for bringing a drive into a warmer environment in which condensation can form. This situation should be of special concern to users of laptop or portable systems. If you leave a portable system in an automobile trunk during the winter, for example, it could be catastrophic to bring the machine inside and power it up without allowing it to acclimate to the temperature indoors.


Spindle Motors

The motor that spins the platters is called the spindle motor because it is connected to the spindle around which the platters revolve. Spindle motors in hard disk drives are always connected directly; no belts or gears are involved. The motor must be free of noise and vibration; otherwise, it can transmit a rumble to the platters, which can disrupt reading and writing operations.

The spindle motor also must be precisely controlled for speed. The platters in hard disk drives revolve at speeds ranging from 3,600rpm to 15,000rpm (60-250 revolutions per second) or more, and the motor has a control circuit with a feedback loop to monitor and control this speed precisely. Because the speed control must be automatic, hard drives do not have a motor-speed adjustment. Some diagnostics programs claim to measure hard drive rotation speed, but all these programs do is estimate the rotational speed by the timing at which sectors pass under the heads.

There is actually no way for a program to measure the hard disk drive's rotational speed; this measurement can be made only with sophisticated test equipment. Don't be alarmed if some diagnostics program tells you that your drive is spinning at an incorrect speed; most likely, the program is wrong, not the drive. Platter rotation and timing information is not provided through the hard disk controller interface. In the past, software could give approximate rotational speed estimates by performing multiple sector read requests and timing them, but this was valid only when all drives had the same number of sectors per track and spun at the same speed. Zoned-bit recording-combined with the many various rotational speeds used by modern drives, not to mention built-in buffers and caches-means that these calculation estimates cannot be performed accurately by software.
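
The rpm-to-revolutions-per-second conversion above, plus the usual rule of thumb that average rotational latency is half a revolution, can be checked directly:

```python
# rpm to revolutions per second, plus average rotational latency
# (half a revolution, the usual rule of thumb).
def rev_per_sec(rpm):
    return rpm / 60

def avg_rotational_latency_ms(rpm):
    return 0.5 * 60_000 / rpm   # half a revolution, in milliseconds

for rpm in (3_600, 7_200, 15_000):
    print(rpm, rev_per_sec(rpm), round(avg_rotational_latency_ms(rpm), 2))
```

This is one reason higher spindle speeds matter: going from 3,600rpm to 15,000rpm cuts the average wait for a sector from about 8.3ms to 2ms.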

On most drives, the spindle motor is on the bottom of the drive, just below the sealed HDA. Many drives today, however, have the spindle motor directly built in to the platter hub inside the HDA. By using an internal hub spindle motor, the manufacturer can stack more platters in the drive because the spindle motor takes up no vertical space.

Note

Spindle motors, particularly on the larger form-factor drives, can consume a great deal of 12-volt power. Most drives require two to three times the normal operating power when the motor first spins the platters. This heavy draw lasts only a few seconds or until the drive platters reach operating speed. If you have more than one drive, you should try to sequence the start of the spindle motors so the power supply does not have to provide such a large load to all the drives at the same time. Most SCSI and some ATA drives have a delayed spindle-motor start feature.
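
A quick sketch shows why staggered spin-up matters for the power supply. The current figures below are invented for illustration; real surge and idle currents vary by drive:

```python
# Peak 12V current with simultaneous versus sequential spin-up.
# Current figures are invented, purely for illustration.
SPINUP_AMPS = 2.5   # surge while the platters come up to speed
IDLE_AMPS = 0.8     # steady-state draw once spinning

def peak_amps(n_drives, staggered):
    if staggered:
        # only one drive surges at a time; the others idle or wait
        return SPINUP_AMPS + (n_drives - 1) * IDLE_AMPS
    return n_drives * SPINUP_AMPS

print(peak_amps(4, staggered=False))           # 10.0
print(round(peak_amps(4, staggered=True), 2))  # 4.9
```
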

Fluid Dynamic Bearings

Traditionally, spindle motors have used ball bearings in their design, but limitations in their performance have now caused drive manufacturers to look for alternatives. The main problem with ball bearings is that they have approximately 0.1 micro-inch (millionths of an inch) of runout, which is lateral side-to-side play in the bearings. Although that may seem small, with the ever-increasing density of modern drives it has become a problem. This runout allows the platters to move randomly that distance from side to side, which causes the tracks to wobble under the heads. Additionally, the runout plus the metal-to-metal contact nature of ball bearings allows an excessive amount of mechanical noise and vibration to be generated, and that is becoming a problem for drives that spin at higher speeds.

The solution is a new type of bearing, called a fluid dynamic bearing, that uses a highly viscous lubricating fluid between the spindle and sleeve in the motor. This fluid serves to dampen vibrations and movement, allowing runout to be reduced to 0.01 micro-inches or less. Fluid dynamic bearings also allow for better shock resistance, improved speed control, and reduced noise generation. Several of the more advanced drives on the market today already incorporate fluid dynamic bearings, especially those designed for very high spindle speeds, high areal densities, or low noise. Over the next few years I expect to see fluid dynamic bearings become standard issue in most hard drives.
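
To see why 0.1 µ-inch of runout matters, compare it to the track pitch at a hypothetical track density of 100,000 tracks per inch (a made-up figure for illustration):

```python
# How large bearing runout is relative to the track pitch, at a
# hypothetical track density. Figures are illustrative only.
def runout_fraction_of_track(runout_microinch, tracks_per_inch):
    track_pitch_microinch = 1_000_000 / tracks_per_inch
    return runout_microinch / track_pitch_microinch

print(round(runout_fraction_of_track(0.1, 100_000), 3))    # ball bearings
print(round(runout_fraction_of_track(0.01, 100_000), 4))   # fluid bearings
```

At that density, ball-bearing runout wobbles the platter by about 1% of a track pitch, and fluid dynamic bearings cut that by a factor of ten; as track densities keep climbing, the same runout becomes a larger fraction of an ever-narrower track.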

Logic Boards

All hard disk drives have one or more logic boards mounted on them. The logic boards contain the electronics that control the drive's spindle and head actuator systems and present data to the controller in some agreed-upon form. On ATA drives, the boards include the controller itself, whereas SCSI drives include the controller and the SCSI bus adapter circuit.

Many disk drive failures occur in the logic board, not in the mechanical assembly. (This statement does not seem logical, but it is true.) Therefore, you sometimes can repair a failed drive by replacing the logic board rather than the entire drive. Replacing the logic board, moreover, enables you to regain access to the data on the drive-something that replacing the entire drive does not provide. Unfortunately, none of the drive manufacturers sell logic boards separately. The only way to obtain a replacement logic board for a given drive would be to purchase a functioning identical drive and then cannibalize it for parts. Of course, it doesn't make sense to purchase an entire new drive just to repair an existing one, except in the case where data recovery from the old drive is necessary.

If you have an existing drive that contains important data, and the logic board fails, you will be unable to retrieve the data from the drive unless the board is replaced. Because the value of the data in most cases will far exceed the cost of the drive, a new drive that is identical to the failed drive can be purchased and cannibalized for parts such as the logic board, which can be swapped onto the failed drive. This method is common among companies that offer data-recovery services. They will stock a large number of popular drives that they can use for parts to allow data recovery from the defective customer drives they receive.

Most of the time the boards are fairly easy to change with nothing more than a screwdriver. Merely removing and reinstalling a few screws as well as unplugging and reconnecting a cable or two are all that is required to remove and replace a typical logic board.

SMART

SMART (Self-Monitoring, Analysis, and Reporting Technology) is an industry standard providing failure prediction for disk drives. When SMART is enabled for a given drive, the drive monitors predetermined attributes that are susceptible to or indicative of drive degradation. Based on changes in the monitored attributes, a failure prediction can be made. If a failure is deemed likely to occur, SMART makes a status report available so the system BIOS or driver software can notify the user of the impending problems, perhaps enabling the user to back up the data on the drive before any real problems occur.
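
The attribute-monitoring scheme can be sketched in miniature: each monitored attribute carries a normalized value that decays as the drive degrades, and a threshold below which failure is predicted. The attribute names and numbers below are invented for illustration, not taken from the SMART specification:

```python
# Minimal sketch of SMART-style failure prediction. Attribute names
# and values are invented, not from the SMART specification.
def predict_failure(attributes):
    """Return the attributes whose value has crossed its threshold."""
    return [name for name, (value, threshold) in attributes.items()
            if value <= threshold]

drive = {
    "reallocated_sectors": (95, 36),   # still well above threshold
    "spin_up_time":        (30, 33),   # degraded past its threshold
}
print(predict_failure(drive))   # ['spin_up_time']
```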

Predictable failures are the types of failures SMART attempts to detect. These failures result from the gradual degradation of the drive's performance. According to Seagate, 60% of drive failures are mechanical, which is exactly the type of failure SMART is designed to predict.

Of course, not all failures are predictable, and SMART cannot help with unpredictable failures that occur without any advance warning. These can be caused by static electricity, improper handling or sudden shock, or circuit failure, such as thermal-related solder problems or component failure.

SMART originated in technology that was created by IBM in 1992. That year IBM began shipping 3.5-inch hard disk drives equipped with Predictive Failure Analysis (PFA), an IBM-developed technology that periodically measures selected drive attributes and sends a warning message when a predefined threshold is exceeded. IBM turned this technology over to the ANSI organization, and it subsequently became the ANSI-standard SMART protocol for SCSI drives, as defined in the ANSI-SCSI Informational Exception Control (IEC) document X3T10/94-190.

Interest in extending this technology to ATA drives led to the creation of the SMART Working Group in 1995. Besides IBM, other companies represented in the original group were Seagate Technology, Conner Peripherals (now a part of Seagate), Fujitsu, Hewlett-Packard, Maxtor, Quantum, and Western Digital. The SMART specification produced by this group and placed in the public domain covers both ATA and SCSI hard disk drives and can be found in most of the more recently produced drives on the market.

The SMART design of attributes and thresholds is similar in ATA and SCSI environments, but the reporting of information differs.

In an ATA environment, driver software on the system interprets the alarm signal from the drive generated by the SMART "report status" command. The driver polls the drive on a regular basis to check the status of this command and, if it signals imminent failure, sends an alarm to the operating system, where it will be passed on via an error message to the end user. This structure also enables future enhancements, which might allow reporting of information other than drive failure conditions. The system can read and evaluate the attributes and alarms reported in addition to the basic "report status" command.

SCSI drives with SMART communicate a reliability condition only as either good or failing. In a SCSI environment, the failure decision occurs at the disk drive, and the host notifies the user for action. The SCSI specification provides for a sense bit to be flagged if the drive determines that a reliability issue exists. The system then alerts the end user via a message.

The basic requirements for SMART to function in a system are simple. All you need are a SMART-capable hard disk drive and a SMART-aware BIOS or hard disk driver for your particular operating system. If your BIOS does not support SMART, utility programs are available that can support SMART on a given system. These include Norton Disk Doctor from Symantec, EZ-Drive from StorageSoft, and Data Advisor from Ontrack Data International.

Note that traditional disk diagnostics, such as Scandisk and Norton Disk Doctor, work only on the data sectors of the disk surface and do not monitor all the drive functions that are monitored by SMART. Most modern disk drives keep spare sectors available to use as substitutes for sectors that have errors. When one of these spares is reallocated, the drive reports the activity to the SMART counter but still looks completely "defect free" to a surface analysis utility, such as Scandisk.
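The distinction above can be made concrete with a small sketch. This is a toy Python model, not a real drive interface; the class and method names are invented for illustration, and real drives do all of this in firmware:

```python
class SimpleDrive:
    """Toy model of sector sparing; not a real drive interface."""

    def __init__(self, spare_count):
        self.spares = list(range(spare_count))  # pool of spare sectors
        self.remapped = {}       # logical sector -> substituted spare
        self.reallocated = 0     # SMART reallocated-sector counter

    def remap(self, lba):
        # A failing sector is silently replaced by a spare; only the
        # SMART counter records that anything happened.
        self.remapped[lba] = self.spares.pop()
        self.reallocated += 1

    def surface_scan_defects(self):
        # A utility such as Scandisk reads logical sectors only; every
        # logical sector now resolves to a good physical sector, so the
        # scan reports zero defects despite the reallocations.
        return 0

drive = SimpleDrive(spare_count=8)
drive.remap(1042)  # firmware spares out a weak sector
print(drive.reallocated)             # SMART sees the event
print(drive.surface_scan_defects())  # the surface scan sees nothing
```

The point of the sketch is the asymmetry: the SMART counter moves while the surface analysis stays clean, which is exactly why a drive can look "defect free" to Scandisk while SMART is counting down toward an alert.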

Drives with SMART monitor a variety of attributes that vary from one manufacturer to another. Attributes are selected by the device manufacturer based on their capability to contribute to the prediction of degrading or fault conditions for that particular drive. Most drive manufacturers consider the specific set of attributes being used and the identity of those attributes as vendor specific and proprietary.

Some drives monitor the floating height of the head above the magnetic media. If this height changes from a nominal figure, the drive could fail. Other drives can monitor different attributes, such as ECC (error-correction code) circuitry that indicates whether soft errors are occurring when reading or writing data. Some of the attributes monitored on various drives include the following:

Head floating height

Data throughput performance

Spin-up time

Reallocated (spared) sector count

Seek error rate

Seek time performance

Drive spin-up retry count

Drive calibration retry count

Each attribute has a threshold limit that is used to determine the existence of a degrading or fault condition. These thresholds are set by the drive manufacturer, can vary among manufacturers and models, and cannot be changed.
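The value-versus-threshold logic can be sketched in a few lines of Python. The attribute names, normalized values, and thresholds below are invented for illustration; as noted above, the real sets are vendor specific:

```python
# Each entry: (attribute name, current normalized value, failure threshold).
# SMART convention: normalized values count down as the drive degrades,
# and a value at or below its threshold indicates a predicted failure.
attributes = [
    ("Spin-Up Time",             97, 33),
    ("Reallocated Sector Count", 30, 36),  # degraded past its threshold
    ("Seek Error Rate",          75, 45),
]

def predicted_failures(attrs):
    """Return the attributes whose value has fallen to the threshold."""
    return [name for name, value, threshold in attrs if value <= threshold]

print(predicted_failures(attributes))
```

With the invented numbers above, only the reallocated-sector attribute trips its threshold, which is the condition that would cause the drive to raise a SMART alert.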

Any drives reporting a SMART failure should be considered likely to fail at any time. Of course, you should back up the data on such a drive immediately, and you might consider replacing the drive before any actual data loss occurs. When sufficient changes occur in the monitored attributes to trigger a SMART alert, the drive sends an alert message via an ATA or a SCSI command (depending on the type of hard disk drive you have) to the hard disk driver in the system BIOS, which then forwards the message to the operating system. The operating system then displays a warning message as follows:

Immediately back up your data and replace your hard disk drive. A failure may be imminent.

The message might contain additional information, such as which physical device initiated the alert, a list of the logical drives (partitions) that correspond to the physical device, and even the type, manufacturer, and serial number of the device.

The first thing to do when you receive such an alert is to heed the warning and back up all the data on the drive. It also is wise to back up to new media and not overwrite any previous good backups you might have, just in case the drive fails before the backup is complete.

After backing up your data, what should you do? SMART warnings can be caused by an external source and might not actually indicate that the drive itself is going to fail. For example, environmental changes, such as high or low ambient temperatures, can trigger a SMART alert, as can excessive vibration in the drive caused by an external source. Additionally, electrical interference from motors or other devices on the same circuit as your PC can induce these alerts.

If the alert was not caused by an external source, a drive replacement might be indicated. If the drive is under warranty, contact the vendor and ask whether they will replace it. If no further alerts occur, the problem might have been an anomaly, and you might not need to replace the drive. If you receive further alerts, replacing the drive is recommended. If you can connect both the new and existing (failing) drive to the same system, you might be able to copy the entire contents of the existing drive to the new one, saving you from having to install or reload all the applications and data from your backup.

Performance

When you select a hard disk drive, one of the important features you should consider is the performance (speed) of the drive. Hard drives can have a wide range of performance capabilities. As is true of many things, one of the best indicators of a drive's relative performance is its price. An old saying from the automobile-racing industry is appropriate here: "Speed costs money. How fast do you want to go?"

Normally the speed of a disk drive is measured in several ways:

Interface (external) transfer rate

Media (internal) transfer rates

Average access time

Average Seek Time

Average seek time, normally measured in milliseconds (ms), is the average amount of time it takes to move the heads from one cylinder to another a random distance away. One way to measure this specification is to run many random track-seek operations and then divide the timed results by the number of seeks performed. This method provides an average time for a single seek.

The standard method used by many drive manufacturers when reporting average seek times is to measure the time it takes the heads to move across one-third of the total cylinders. Average seek time depends only on the drive itself; the type of interface or controller has little effect on this specification. The average seek rating is primarily a gauge of the capabilities of the head actuator mechanism.
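The measurement method described above can be simulated. The sketch below times nothing real; it models seek cost as a fixed settle time plus a per-cylinder sweep cost (both figures invented) and averages many random seeks. For uniformly random seeks the average distance works out to about one-third of the total cylinders, which is why the one-third-stroke figure is a reasonable stand-in for the average:

```python
import random

def average_seek_ms(cylinders=16383, seeks=10_000,
                    ms_per_cylinder=0.0005, settle_ms=2.0, seed=1):
    """Average the cost of many random track-to-track seeks."""
    rng = random.Random(seed)
    pos, total = 0, 0.0
    for _ in range(seeks):
        target = rng.randrange(cylinders)
        total += settle_ms + abs(target - pos) * ms_per_cylinder
        pos = target
    return total / seeks

# One-third-stroke figure using the same cost model:
one_third_stroke = 2.0 + (16383 / 3) * 0.0005
print(round(average_seek_ms(), 2), round(one_third_stroke, 2))
```

Running this, the many-random-seeks average and the one-third-stroke figure land close together, which is the whole justification for the manufacturers' shortcut.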

QUOTE
Note

Be wary of benchmarks that claim to measure drive seek performance. Most ATA and SCSI drives use a scheme called sector translation, so any commands the drive receives to move the heads to a specific cylinder might not actually result in the intended physical movement. This situation renders some benchmarks meaningless for those types of drives. SCSI drives also require an additional step because the commands first must be sent to the drive over the SCSI bus. These drives might seem to have the fastest access times because the command overhead is not factored in by most benchmarks. However, when this overhead is factored in by benchmark programs, these drives receive poor performance figures.


Transfer Rates

The transfer rate is probably more important to overall system performance than any other statistic, but it is also one of the most misunderstood specifications. The problem stems from the fact that several transfer rates can be specified for a given drive; however, the most important of these is usually overlooked.

A great deal of confusion arises from the fact that drive manufacturers can report up to seven different transfer rates for a given drive. Perhaps the least important of these (but the one people seem to focus on the most) is the raw interface transfer rate, which for the 2.5-inch ATA drives used in portable systems is 100MBps. Unfortunately, few people seem to realize that the drives actually read and write data much slower than that. The most important transfer rate specifications are the media (or internal) transfer rates, which express how fast a drive can actually read or write data. Media transfer rates can be expressed as a raw maximum, raw minimum, formatted maximum, formatted minimum, or averages of any of these. Few report the averages, but they can be easily calculated.

The media transfer rate is far more important than the interface transfer rate because it is the true rate at which data can be read from (or written to) the disk. In other words, it tells how fast data can be moved to and from the drive platters (media). It is the rate that any sustained transfer can hope to achieve. This rate will normally be reported as a minimum and maximum figure, although many drive manufacturers report the maximum only.

Media transfer rates have minimum and maximum figures because drives today use zoned recording, with fewer sectors per track on the inner cylinders than on the outer cylinders. Typically, a drive is divided into 16 or more zones, with the inner zone having about half the sectors per track (and therefore about half the transfer rate) of the outer zone. Because the drive spins at a constant rate, data can be read twice as fast from the outer cylinders as from the inner cylinders.

Two primary factors contribute to transfer rate performance: rotational speed and the linear recording density or sectors-per-track figure. When two drives with the same number of sectors per track are being compared, the drive that spins more quickly will transfer data more quickly. Likewise, when two drives with identical rotational speeds are being compared, the drive with the higher recording density (more sectors per track) will be faster. A higher-density drive can even be faster than one that spins faster; both factors have to be taken into account to know the true score.

To find the transfer specifications for a given drive, look in the data sheet or, preferably, the documentation or manual for the drive. These can usually be downloaded from the drive manufacturer's Web site. This documentation will often report the maximum and minimum sectors-per-track specifications, which, combined with the rotational speed, can be used to calculate true formatted media performance. Note that you would be looking for the true number of physical sectors per track for the outer and inner zones. Be aware that many drives (especially zoned-bit recording drives) are configured with sector translation, so the number of sectors per track reported by the BIOS has little to do with the actual physical characteristics of the drive. You must know the drive's true physical parameters rather than the values the BIOS uses.

When you know the true sector per track (SPT) and rotational speed figures, you can use the following formula to determine the true media data transfer rate in millions of bytes per second (MBps):


Media Transfer Rate (MBps) = (SPT x 512 bytes x rpm/60 seconds) / 1,000,000 bytes


For example, the Hitachi/IBM Travelstar 7K60 drive spins at 7,200rpm and has an average of 540 sectors per track. The average media transfer rate for this drive is figured as follows:


540 x 512 x (7,200/60) / 1,000,000 = 33.18 MBps


Some drive manufacturers don't give the sector per track values for the outer and inner zones, instead offering only the raw unformatted transfer rates in Mbps (megabits per second). To convert raw megabits per second to formatted megabytes per second in a modern drive, divide the figure by 11. For example, Toshiba reports transfer rates of 373Mbps maximum and 203Mbps minimum for its MK6022GAX 60GB drive. This is an average of 288Mbps, which equates to an average formatted transfer rate of about 26MBps.
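Both calculations (the media transfer rate formula and the divide-by-11 rule of thumb) can be expressed directly in Python, using the two example drives from the text:

```python
def media_rate_mbps(spt, rpm):
    """Media transfer rate in MBps: SPT x 512 bytes x revs/sec / 1,000,000."""
    return spt * 512 * (rpm / 60) / 1_000_000

def raw_mbits_to_formatted_mbytes(mbits):
    """Rule of thumb from the text: divide raw Mbps by 11."""
    return mbits / 11

# Hitachi/IBM Travelstar 7K60: 540 sectors per track average at 7,200rpm
print(round(media_rate_mbps(540, 7200), 2))       # about 33.18 MBps

# Toshiba MK6022GAX: (373 + 203) / 2 = 288Mbps average raw rate
print(round(raw_mbits_to_formatted_mbytes(288)))  # about 26 MBps
```

Plugging in any other drive's sectors-per-track and rpm figures gives its true formatted media rate the same way.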

As you can see from the table, even though all these drives have a theoretical interface transfer rate of 100MBps, the fastest 2.5-inch drive has an average true media transfer rate of just over 33MBps. As an analogy, think of the drive as a tiny water faucet and the ATA interface as a huge fire hose connected to the faucet that is being used to fill a swimming pool. No matter how much water can theoretically flow through the hose, you can fill the pool only at the rate the faucet can supply water.

The cache in a drive allows for burst transfers at the full interface rate. In our analogy, the cache is like a bucket that, once filled, can be dumped at full speed into the pool. The only problem is that the bucket is also filled by the faucet, so any data transfer larger than the size of the bucket can proceed only at the rate the faucet can supply water.

When you study drive specifications, it is true that larger caches and faster interface transfer rates are nice, but in the end, they are limited by the true transfer rate, which is the rate at which data can be read from or written to the actual drive media. In general, the media (also called internal or true) transfer rate is the most important specification for a drive.

QUOTE
Note

There is a price to pay for portability, both in actual cost and in performance. In general, the smaller a drive, the more expensive it is and the slower its transfer rate. For example, a 2.5-inch notebook drive generally costs twice as much as a 3.5-inch desktop drive of the same capacity, and it will also be slower. The fastest of the larger 3.5-inch drives normally used in desktop systems have average transfer rates of between 44MBps and 50MBps, significantly faster than the 20MBps to 33MBps average rates of the fastest 2.5-inch notebook drives. Very small drives such as the Hitachi MicroDrive cost even more. A 1GB MicroDrive costs more than an 80GB 2.5-inch drive or a 250GB 3.5-inch drive, and it transfers at an average of just under 5MBps, significantly slower than any other drive.


BIOS Limitations

If your current hard drive is 8GB or smaller, your system might not be able to handle a larger drive without a BIOS upgrade, because many older (pre-1998) BIOSes can't handle drives above the 8.4GB limit, and others (pre-2002) have other limits, such as 137GB. Although most ATA hard drives ship with a setup disk containing a software BIOS substitute such as OnTrack's Disk Manager or Phoenix Technologies' EZ-Drive (Phoenix purchased EZ-Drive creator StorageSoft in January 2002), I don't recommend using a software BIOS replacement. EZ-Drive, Disk Manager, and their OEM offshoots (Drive Guide, MAXBlast, Data Lifeguard, and others) can cause problems if you need to boot from floppy or CD media or if you need to repair the nonstandard master boot record these products use.

If your motherboard ROM BIOS dates before 1998 and is limited to 8.4GB, or dates before 2002 and is limited to 137GB, and you wish to install a larger drive, I recommend you first contact your motherboard (or system) manufacturer to see if an update is available. Virtually all motherboards incorporate a Flash ROM, which allows for easy updates via a utility program.

Operating System Limitations

Newer operating systems such as Windows Me, Windows 2000, and Windows XP fortunately don't have any problems with larger drives; older operating systems, however, may have limitations when it comes to using large drives.

DOS will generally not recognize drives larger than 8.4GB because those drives are accessed using LBA (logical block addressing), and DOS versions 6.x and lower only use CHS (cylinder, head, sector) addressing.

Windows 95 has a 32GB hard disk capacity limit, and there is no way around it other than upgrading to Windows 98 or a newer version. Additionally, the retail or upgrade versions of Windows 95 (also called Windows 95 OSR 1 or Windows 95a) are further limited to the FAT16 (16-bit file allocation table) file system, which carries a maximum partition size of 2GB. This means that if you had a 30GB drive, you would be forced to divide it into fifteen 2GB partitions, each appearing as a separate drive letter (drives C: through Q: in this example). Windows 95B and 95C can use the FAT32 file system, which allows partition sizes up to 2TB (terabytes). Note that due to internal limitations, no version of FDISK can create partitions larger than 2TB.
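The 2GB FAT16 figure and the partition count for a 30GB drive fall straight out of the arithmetic: FAT16 uses 16-bit cluster numbers, and its largest cluster size is 32KB. (Strictly, a few cluster numbers are reserved, so the real maximum is fractionally smaller; the round numbers are close enough here.) A quick Python check:

```python
import math

max_clusters = 2 ** 16       # 16-bit file allocation table entries
cluster_size = 32 * 1024     # largest FAT16 cluster: 32KB
fat16_max_partition = max_clusters * cluster_size  # 2GB ceiling

# A 30GB drive would need this many 2GB FAT16 partitions:
partitions = math.ceil(30 * 2 ** 30 / fat16_max_partition)
print(fat16_max_partition, partitions)
```

The same arithmetic explains why FAT32, with its much larger cluster-number field, pushes the ceiling up to the terabyte range.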

Windows 98 supports large drives, but a bug in the FDISK program included with Windows 98 reduces the reported drive capacity by 64GB for drives over that capacity. The solution is an updated version of FDISK that can be downloaded from Microsoft. Another bug appears in the FORMAT command with Windows 98. If you run FORMAT from a command prompt on a partition over 64GB, the size isn't reported correctly, although the entire partition will be formatted.

ATA Standards

Today what we call the ATA interface is controlled by an independent group of representatives from major PC, drive, and component manufacturers. This group is called Technical Committee T13 (www.t13.org) and is responsible for all interface standards relating to the parallel AT Attachment storage interface. T13 is a part of the International Committee on Information Technology Standards (INCITS), which operates under rules approved by the American National Standards Institute (ANSI), a governing body that sets rules that control nonproprietary standards in the computer industry as well as many other industries. A second group, called the Serial ATA Working Group (www.serialata.org), has formed to create the Serial ATA standards that will also come under ANSI control. Although these are different groups, many of the same people are in both of them. It seems as if little further development will be done on Parallel ATA past the ATA-7 (ATA/133) specification. The further evolution of ATA will be in the Serial ATA form (discussed later in this chapter).

The rules these committees operate under are designed to ensure that voluntary industry standards are developed by the consensus of people and organizations in the affected industry. INCITS specifically develops Information Processing System standards, whereas ANSI approves the process under which they are developed and then publishes them. Because T13 is essentially a public organization, all the working drafts, discussions, and meetings of T13 are open for all to see.

The Parallel ATA interface has evolved into several successive standard versions, introduced as follows:

ATA-1 (1986-1994)

ATA-2 (1995; also called Fast-ATA, Fast-ATA-2, or EIDE)

ATA-3 (1996)

ATA-4 (1997; also called Ultra-ATA/33)

ATA-5 (1998-present; also called Ultra-ATA/66)

ATA-6 (2000-present; also called Ultra-ATA/100)

ATA-7 (2001-present; also called Ultra-ATA/133)

Each version of ATA is backward compatible with the previous versions. In other words, older ATA-1 or ATA-2 devices work fine on ATA-6 and ATA-7 interfaces. In cases in which the device version and interface version don't match, they work together at the capabilities of the lesser of the two. Newer versions of ATA are built on older versions and with few exceptions can be thought of as extensions of the previous versions. This means that ATA-7, for example, is generally considered equal to ATA-6 with the addition of some features.

ATA-1

Although ATA-1 had been used since 1986 before being published as a standard, and although it was first published in 1988 in draft form, ATA-1 wasn't officially approved as a standard until 1994 (committees often work slowly). ATA-1 defined the original AT Attachment interface, which was an integrated bus interface between disk drives and host systems based on the ISA (AT) bus. Here are the major features introduced and documented in the ATA-1 specification:

40/44-pin connectors and cabling

Master/slave or cable select drive configuration options

Signal timing for basic PIO (Programmed I/O) and DMA (Direct Memory Access) modes

CHS (cylinder, head, sector) and LBA (logical block address) drive parameter translations supporting drive capacities up to 2^28-2^20 (267,386,880) sectors, or 136.9GB

ATA-1 was officially published as "ANSI X3.221-1994, AT Attachment Interface for Disk Drives," and was officially withdrawn on August 6, 1999. ATA-2 and later are considered backward-compatible replacements.

Although ATA-1 supported theoretical drive capacities up to 136.9GB (2^28-2^20 = 267,386,880 sectors), it did not address BIOS limitations that stopped at 528MB (1,024 x 16 x 63 = 1,032,192 sectors). The BIOS limitations would be addressed in subsequent ATA versions because, at the time, no drives larger than 528MB existed.
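Both limits are easy to verify: the 528MB figure is the BIOS CHS geometry (1,024 cylinders x 16 heads x 63 sectors per track) and the 136.9GB figure is ATA-1's 2^28-2^20 sector ceiling, each multiplied by 512 bytes per sector:

```python
SECTOR = 512  # bytes per sector

bios_sectors = 1024 * 16 * 63     # CHS limit: 1,032,192 sectors
ata1_sectors = 2 ** 28 - 2 ** 20  # ATA-1 limit: 267,386,880 sectors

print(bios_sectors * SECTOR)      # 528,482,304 bytes, about 528MB
print(round(ata1_sectors * SECTOR / 1_000_000_000, 1))  # 136.9 (GB)
```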

ATA-2

Approved in 1996, ATA-2 was a major upgrade to the original ATA standard. Perhaps the biggest change was almost a philosophical one. ATA-2 was updated to define an interface between host systems and storage devices in general and not only disk drives. The major features added to ATA-2 as compared to the original ATA standard include the following:

Faster PIO and DMA transfer modes.

Support for power management.

Support for removable devices.

PCMCIA (PC card) device support.

More information reported from the Identify Drive command.

Defined standard CHS/LBA translation methods for drives up to 8.4GB in capacity.

The most important additions in ATA-2 were the support for faster PIO and DMA modes as well as the methods to enable BIOS support up to 8.4GB. The BIOS support was necessary because, although even ATA-1 was designed to support drives of up to 136.9GB in capacity, the PC BIOS could originally only handle drives of up to 528MB. Adding parameter-translation capability now allowed the BIOS to handle drives up to 8.4GB. This is discussed in more detail later in this chapter.

ATA-2 also featured improvements in the Identify Drive command, which enabled a drive to tell the software exactly what its characteristics are. This is essential for both Plug and Play (PnP) and compatibility with future revisions of the standard.

ATA-2 was also known by unofficial marketing terms, such as fast-ATA or fast-ATA-2 (Seagate/Quantum) and EIDE (Enhanced IDE, Western Digital). ATA-2 was officially published as "ANSI X3.279-1996 AT Attachment Interface with Extensions."

ATA-3

First published in 1997, ATA-3 was a comparatively minor revision to the ATA-2 standard that preceded it. It consisted of a general cleanup of the specification, with mostly minor clarifications and revisions. The most significant changes included the following:

Eliminated single-word (8-bit) DMA transfer protocols.

Added SMART (Self-Monitoring, Analysis, and Reporting Technology) support for the prediction of device performance degradation.

LBA mode support was made mandatory (previously it had been optional).

Added security mode, allowing password protection for device access.

Made recommendations for source and receiver bus termination to solve noise issues at higher transfer speeds.

ATA-3 has been officially published as "ANSI X3.298-1997, AT Attachment 3 Interface."

ATA-3, which builds on ATA-2, adds improved reliability, especially of the faster PIO Mode 4 transfers; however, ATA-3 does not define any faster modes. ATA-3 also adds a simple password-based security scheme, more sophisticated power management, and SMART. This enables a drive to keep track of problems that might result in a failure and thus avoid data loss. SMART is a reliability prediction technology that was initially developed by IBM.

ATA/ATAPI-4

First published in 1998, ATA-4 included several important additions to the standard. It included the Packet Command feature, known as the AT Attachment Packet Interface (ATAPI), which allowed devices such as CD-ROM and CD-RW drives, LS-120 SuperDisk floppy drives, tape drives, and other types of storage devices to be attached through a common interface. Until ATA-4 came out, ATAPI was a separately published standard. ATA-4 also added the 33MBps transfer mode known as Ultra-DMA or Ultra-ATA. ATA-4 is backward compatible with ATA-3 and earlier definitions of the ATAPI. The major revisions added in ATA-4 were as follows:

Ultra-DMA (UDMA) transfer modes up to Mode 2, which is 33MBps (called UDMA/33 or Ultra-ATA/33).

Integral ATAPI support.

Advanced power-management support.

Defined an optional 80-conductor, 40-pin cable for improved noise resistance.

Compact Flash Adapter (CFA) support.

Introduced enhanced BIOS support for drives over 9.4ZB (zettabytes, or trillion gigabytes) in size (even though ATA was still limited to 136.9GB).

ATA-4 was published as "ANSI NCITS 317-1998, ATA-4 with Packet Interface Extension."

The speed and level of ATA support in your system is mainly dictated by your motherboard chipset. Most motherboard chipsets come with a component called either a South Bridge or an I/O controller hub that provides the ATA interface (as well as other functions) in the system. Check the specifications for your motherboard or chipset to see whether yours supports the faster ATA/33, ATA/66, ATA/100, or ATA/133 mode.

ATA-4 made ATAPI support a full part of the ATA standard; therefore, ATAPI was no longer an auxiliary interface to ATA but rather was merged completely within it. This promoted ATA for use as an interface for many other types of devices. ATA-4 also added support for new Ultra-DMA modes (also called Ultra-ATA) for even faster data transfer. The highest-performance mode, called UDMA/33, had 33MBps bandwidth, twice that of the fastest programmed I/O mode or DMA mode previously supported. In addition to the higher transfer rate, because UDMA modes relieve the load on the processor, further performance gains were realized.

An optional 80-conductor cable (with cable select) is defined for UDMA/33 transfers. Although this cable was originally defined as optional, it would later be required for the faster ATA/66, ATA/100, and ATA/133 modes in ATA-5 and later.

Also included was support for queuing commands, similar to that provided in SCSI-2. This enabled better multitasking as multiple programs make requests for ATA transfers.

ATA/ATAPI-5

This version of the ATA standard was approved in early 2000 and builds on ATA-4. The major additions in the standard include the following:

Ultra-DMA (UDMA) transfer modes up to Mode 4, which is 66MBps (called UDMA/66 or Ultra-ATA/66).

80-conductor cable now mandatory for UDMA/66 operation.

Added automatic detection of 40- or 80-conductor cables.

UDMA modes faster than UDMA/33 are enabled only if an 80-conductor cable is detected.

ATA-5 includes Ultra-ATA/66 (also called Ultra-DMA or UDMA/66), which doubles the Ultra-ATA burst transfer rate by reducing setup times and increasing the clock rate. The faster clock rate increases interference, which causes problems with the standard 40-pin cable used by ATA and Ultra-ATA. To eliminate noise and interference, the new 40-pin, 80-conductor cable has now been made mandatory for drives running in UDMA/66 or faster modes. This cable was first announced in ATA-4 but is now mandatory in ATA-5 to support the Ultra-ATA/66 mode. This cable adds 40 additional ground lines between each of the original 40 ground and signal lines, which help shield the signals from interference. Note that this cable works with older non-Ultra-ATA devices as well because it still has the same 40-pin connectors.

For reliability, Ultra-DMA modes incorporate an error-detection mechanism known as cyclical redundancy checking (CRC). CRC is an algorithm that calculates a checksum used to detect errors in a stream of data. Both the host (controller) and the drive calculate a CRC value for each Ultra-DMA transfer. After the data is sent, the drive's CRC value is compared to the host's; if they differ, the host might be required to select a slower transfer mode and retry the original request for data.
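The mechanism can be illustrated with a generic bit-by-bit CRC-16 in Python. The polynomial below (0x8005) is a common CRC-16 choice used purely for illustration; the actual polynomial and register handling for Ultra-DMA transfers are defined in the ATA standard, not here:

```python
def crc16(data: bytes, poly=0x8005, init=0x0000) -> int:
    """Generic MSB-first CRC-16 over a byte stream (illustrative only)."""
    crc = init
    for byte in data:
        crc ^= byte << 8               # fold the next byte into the register
        for _ in range(8):             # shift out one bit at a time
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

payload = b"one Ultra-DMA burst of sector data"
host_crc = crc16(payload)    # computed by the controller as it sends
drive_crc = crc16(payload)   # computed by the drive as it receives
assert host_crc == drive_crc  # values match: transfer accepted

# Flip a single bit in transit: the drive's CRC no longer matches,
# so the host would retry, possibly at a slower transfer mode.
corrupted = bytes([payload[0] ^ 0x01]) + payload[1:]
assert crc16(corrupted) != host_crc
```

A single-bit error always changes a CRC like this one, which is why the host-versus-drive comparison reliably catches the noise problems the faster modes are prone to.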

ATA/ATAPI-6

ATA-6 began development during 2000 and was officially published as a standard early in 2002. The major changes or additions in the standard include the following:

Ultra-DMA (UDMA) Mode 5 added, which allows 100MBps transfers (called UDMA/100, Ultra-ATA/100, or just ATA/100).

Sector count per command increased from 8 bits (256 sectors or 131KB) to 16 bits (65,536 sectors or 33.5MB), allowing larger files to be transferred more efficiently.

LBA addressing extended from 2^28 to 2^48 (281,474,976,710,656) sectors, supporting drives up to 144.12PB (petabyte = quadrillion bytes).

CHS addressing made obsolete. Drives must use 28-bit or 48-bit LBA addressing only.

ATA-6 includes Ultra-ATA/100 (also called Ultra-DMA or UDMA/100), which increases the Ultra-ATA burst transfer rate by reducing setup times and increasing the clock rate. As with ATA-5, the faster modes require the improved 80-conductor cable. Using the ATA/100 mode requires both a drive and motherboard interface that supports that mode.

Besides adding the 100MBps UDMA Mode 5 transfer rate, ATA-6 also extended drive capacity greatly, and just in time. ATA-5 and earlier standards supported drives of up to only 137GB in capacity, which was becoming a limitation as larger drives became available. Commercially available 3.5-inch drives exceeding 137GB were introduced during 2001 but originally were available only in SCSI versions because SCSI doesn't share the same limitations as ATA. With ATA-6, the sector addressing limit has been extended from 2^28 sectors to 2^48 sectors. What this means is that LBA addressing previously could use only 28-bit numbers, but with ATA-6, LBA addressing can use larger, 48-bit numbers if necessary. With 512 bytes per sector, this raises the maximum supported drive capacity to 144.12PB. That is equal to more than 144 quadrillion bytes! Note that 48-bit addressing is optional and necessary only for drives larger than 137GB. Drives of 137GB or less can use either 28-bit or 48-bit addressing.
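The 137GB and 144.12PB figures fall straight out of the sector math (2^28 and 2^48 sectors of 512 bytes each):

```python
SECTOR = 512  # bytes per sector

lba28_bytes = 2 ** 28 * SECTOR  # 137,438,953,472 bytes, about 137GB
lba48_bytes = 2 ** 48 * SECTOR  # 144,115,188,075,855,872 bytes

print(lba28_bytes)
print(round(lba48_bytes / 1e15, 2))  # about 144.12 (PB)
```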

ATA/ATAPI-7

Work on ATA-7 began late in 2001 and is still underway at present. As with the previous ATA standards, ATA-7 builds on the previous standard (ATA-6), with some additions.

The primary additions to ATA-7 include the following:

Ultra-DMA (UDMA) Mode 6 added, which allows for 133MBps transfers (called UDMA/133, Ultra-ATA/133, or just ATA/133). As with UDMA Mode 5 (100MBps) and UDMA Mode 4 (66MBps), the use of an 80-conductor cable is required.

Added support for long physical sectors, which allows a device to be formatted so that there are multiple logical sectors per physical sector. Each physical sector stores an ECC field, so long physical sectors allow increased format efficiency with fewer ECC bytes used overall.

Added support for long logical sectors, which allows additional data bytes to be used per sector (520 or 528 bytes instead of 512 bytes) for server applications. Devices using long logical sectors are not backward compatible with devices or applications that use 512-byte sectors, meaning standard desktop and laptop systems.

Incorporated Serial ATA as part of the ATA-7 standard.

Split the ATA-7 document into three volumes: Volume 1 covers the command set and logical registers, Volume 2 covers the parallel transport protocols and interconnects, and Volume 3 covers the serial transport protocols and interconnects.
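The format-efficiency argument for long physical sectors above can be sketched numerically. The ECC field size used here is a hypothetical figure chosen purely for illustration; real drives vary:

```python
ECC_BYTES = 50  # hypothetical per-physical-sector ECC overhead (illustrative)

def media_bytes_used(n_logical, logical_per_physical, logical_size=512):
    """Media bytes consumed to store n_logical logical sectors,
    given how many logical sectors share one physical sector (and its ECC field)."""
    physical_sectors = n_logical // logical_per_physical
    physical_size = logical_per_physical * logical_size
    return physical_sectors * (physical_size + ECC_BYTES)

# Eight 512-byte logical sectors, one per physical sector (eight ECC fields):
legacy = media_bytes_used(8, 1)   # 8 * (512 + 50) = 4496 bytes
# The same data in one long physical sector (a single ECC field):
long_ps = media_bytes_used(8, 8)  # 1 * (4096 + 50) = 4146 bytes
```

With fewer ECC fields covering the same user data, the long-physical-sector layout wastes less of the media, which is exactly the efficiency gain the standard is after.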

The ATA/133 transfer mode was originally proposed by Maxtor, which so far is the only drive manufacturer to adopt it. Other drive manufacturers have not adopted the 133MBps interface transfer rate because most drives have actual media transfer rates that are significantly slower than that. VIA, ALi, and SiS have added ATA/133 support to their latest chipsets, but Intel has decided to skip ATA/133 in favor of Serial ATA (150MBps) instead. Even if a drive can transfer at 133MBps from the circuit board on the drive to the motherboard, data from the drive media (platters) through the heads to the circuit board on the drive moves at less than half that rate. For that reason, running a drive capable of UDMA Mode 6 (133MBps) on a motherboard capable of only UDMA Mode 5 (100MBps) really won't slow things down much, if at all. Likewise, upgrading your ATA host adapter from one that does 100MBps to one that does 133MBps won't help much if your drive can read data off the platters at only half that speed. When selecting a drive, always remember that the media transfer rate is far more important than the interface transfer rate, because the media transfer rate is the limiting factor.
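Since sustained throughput is bounded by the slower link in the chain, the point above reduces to a simple minimum. The 55MBps media rate below is just an illustrative figure for a drive of that era, not a measured value:

```python
def sustained_throughput(media_mbps, interface_mbps):
    """Sustained reads are limited by the slower of the media and interface rates."""
    return min(media_mbps, interface_mbps)

media_rate = 55  # MBps, illustrative platter-to-head media transfer rate

ata100 = sustained_throughput(media_rate, 100)  # UDMA Mode 5 interface
ata133 = sustained_throughput(media_rate, 133)  # UDMA Mode 6 interface
# Both come out at 55MBps: the faster interface gains nothing for sustained reads.
```

The faster interface only helps for short bursts out of the drive's cache, which is why the media rate is the number to compare when shopping.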

ATA-7 is still a work in progress, so further changes may come. As a historical note, ATA-7 represents the combining of the venerable Parallel ATA standard and the newer Serial ATA standard under a single specification.

TSavenger
post Nov 27 2004, 12:59 AM



Serial ATA

With the introduction of ATA-7, it seems that the Parallel ATA standard that has been in use for more than 10 years is running out of steam. Sending data at rates faster than 133MBps down a parallel ribbon cable is fraught with problems stemming from signal timing, electromagnetic interference (EMI), and other signal-integrity issues. The solution is a new ATA interface called Serial ATA (SATA), which is an evolutionary backward-compatible replacement for the Parallel ATA physical storage interface. Serial ATA is backward compatible in that it is compatible with existing software, which will run on the new architecture without any changes. In other words, the existing BIOS, operating systems, and utilities that work on Parallel ATA will also work on Serial ATA. This means Serial ATA supports all existing ATA and ATAPI devices, including CD-ROM and CD-RW drives, DVD drives, tape devices, SuperDisk drives, and any other storage device currently supported by Parallel ATA.

Of course, the two interfaces do differ physically: you won't be able to plug Parallel ATA drives into Serial ATA host adapters, and vice versa. The physical changes are all for the better because Serial ATA uses much thinner cables with only seven pins that are easier to route inside the PC and easier to plug in, thanks to smaller redesigned cable connectors. The interface chip designs also are improved, with fewer pins and lower voltages. These improvements are all designed to eliminate the design problems inherent in Parallel ATA.

Serial ATA won't be integrated into systems overnight; however, it is clear to me that it will eventually replace Parallel ATA as the de facto standard internal storage device interface found in both desktop and portable systems. The transition from ATA to SATA is a gradual one, and during this transition Parallel ATA capabilities will continue to be available. I would also expect that with more than a 10-year history, Parallel ATA devices will continue to be available even after most PCs have gone to SATA.

Development for Serial ATA started when the Serial ATA Working Group effort was announced at the Intel Developer Forum in February 2000. The initial members of the Serial ATA Working Group included APT Technologies, Dell, IBM, Intel, Maxtor, Quantum, and Seagate. The first Serial ATA 1.0 draft specification was released in November 2000 and officially published as a final specification in August 2001. The Serial ATA II extensions to this specification, which make Serial ATA suitable for network storage, were released in October 2002. Both can be downloaded from the Serial ATA Working Group Web site at www.serialata.org. Since forming, the group has added more than 100 Contributor and Adopter companies to the membership from all areas of industry. Systems using Serial ATA were first released in late 2002.

The performance of SATA is impressive, although current hard drive designs can't fully take advantage of its bandwidth. Three variations of the standard are proposed that all use the same cables and connectors; they differ only in transfer rate performance. Initially, only the first version will be available, but the roadmap to doubling and quadrupling performance from there has been clearly established. Table 9.25 shows the specifications for the current and future proposed SATA versions; the next-generation 300MBps version is not expected until 2005, whereas the 600MBps version is not expected until 2007.

[Table 9.25 (original image unavailable): the three proposed SATA versions all use the same cables and connectors and differ only in transfer rate - 150MBps initially, 300MBps (expected 2005), and 600MBps (expected 2007).]

From the table, you can see that Serial ATA sends data only a single bit at a time. The cable used has only seven wires and is a very thin design, with keyed connectors only 14mm (0.55 inches) wide on each end. This eliminates problems with airflow around the wider, Parallel ATA ribbon cables. Each cable has connectors only at each end and connects the device directly to the host adapter (normally on the motherboard). There are no master/slave settings because each cable supports only a single device. The cable ends are interchangeable-the connector on the motherboard is the same as on the device, and both cable ends are identical. Maximum SATA cable length is 1 meter (39.37 inches), which is considerably longer than the 18-inch maximum for Parallel ATA. Even with this thinner, longer, and less expensive cable, transfer rates initially of 150MBps (nearly 13% greater than Parallel ATA/133), and in the future up to 300MBps and even 600MBps, are possible.

Serial ATA uses a special encoding scheme called 8B/10B to encode and decode data sent along the cable. The 8B/10B transmission code originally was developed (and patented) by IBM in the early 1980s for use in high-speed data communications. This encoding scheme is now used by many high-speed data-transmission standards, including Gigabit Ethernet, Fibre Channel, FireWire, and others. The main purpose of the 8B/10B encoding scheme is to guarantee that there are never more than four 0s (or 1s) transmitted consecutively. This is a form of Run Length Limited (RLL) encoding (called RLL 0,4) in which the 0 represents the minimum and the 4 represents the maximum number of consecutive 0s in each encoded character.
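The 150MBps figure quoted earlier follows directly from the 8B/10B overhead. First-generation SATA signals at 1.5Gbps on the wire (a widely published figure, not stated in the text above), and every data byte costs 10 encoded line bits:

```python
line_rate_bps = 1_500_000_000  # first-generation SATA signaling rate, bits/sec
bits_per_data_byte = 10        # 8B/10B: 8 data bits become 10 line bits

payload_bytes_per_sec = line_rate_bps // bits_per_data_byte
payload_mbps = payload_bytes_per_sec // 1_000_000  # 150 MBps of payload
```

The same arithmetic explains the future 300MBps and 600MBps generations: they simply double and quadruple the line rate while keeping the 10-bits-per-byte encoding.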

8B/10B encoding also ensures that each encoded 10-bit character contains no more than six and no fewer than four 0s (and, equivalently, 1s). Because 1s and 0s are sent as voltage changes on a wire, this ensures that the spacing between the voltage transitions sent by the transmitter will be fairly balanced, with a more regular and steady stream of pulses. This presents a steadier load on the circuits, increasing reliability. The conversion from 8-bit data to 10-bit encoded characters for transmission leaves a number of 10-bit patterns unused. Several of these additional patterns are used to provide flow control, delimit packets of data, perform error checking, or perform other special needs.
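These two properties (bounded run lengths and near-balanced characters) can be expressed as simple checks. This is an illustrative validator for the constraints described above, not an actual 8B/10B encoder table:

```python
def balanced(word, width=10):
    """True if the 10-bit character contains between four and six 1s
    (and therefore between four and six 0s)."""
    ones = bin(word & ((1 << width) - 1)).count("1")
    return 4 <= ones <= 6

def max_run(word, width=10):
    """Length of the longest run of identical consecutive bits in the character."""
    bits = [(word >> i) & 1 for i in range(width)]
    longest = run = 1
    for prev, cur in zip(bits, bits[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest
```

For example, `0b0101010101` is balanced with a maximum run of 1, while `0b1111111000` fails the balance check because it contains seven 1s.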

The physical transmission scheme for SATA uses what is called differential NRZ (Non Return to Zero). This uses a balanced pair of wires, each carrying plus or minus 0.25V (one-quarter volt). The signals are sent differentially: If one wire in the pair is carrying +0.25V, the other wire is carrying -0.25V, where the differential voltage between the two wires is always 0.5V (a half volt). This means that for a given voltage waveform, the opposite voltage waveform is sent along the adjacent wire. Differential transmission minimizes electromagnetic radiation and makes the signals easier to read on the receiving end.
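A minimal sketch of the differential scheme described above: each bit is sent as a voltage on one wire and its inverse on the other, and the receiver recovers the bit from the difference between the pair:

```python
def transmit(bits, swing=0.25):
    """Encode bits as a differential pair of voltage waveforms (+/-0.25V)."""
    pos = [swing if b else -swing for b in bits]
    neg = [-v for v in pos]  # the opposite waveform on the adjacent wire
    return pos, neg

def receive(pos, neg):
    """Recover bits from the pair: the differential voltage is always +/-0.5V."""
    return [1 if (a - b) > 0 else 0 for a, b in zip(pos, neg)]

data = [1, 0, 0, 1, 1]
pos, neg = transmit(data)
assert receive(pos, neg) == data
```

Because the receiver looks only at the difference, noise that hits both wires equally cancels out, which is why differential transmission tolerates interference so much better than a single-ended signal.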

A 15-pin power cable and power connector are optional with SATA, providing 3.3V power in addition to the 5V and 12V provided via the industry-standard 4-pin device power connectors. Although it has 15 pins, this new power connector design is only 24mm (0.945 inches) wide. With three pins designated for each of the 3.3V, 5V, and 12V power levels, enough capacity exists for up to 4.5 amps of current at each voltage, which is ample for even the most power-hungry drives. For compatibility with existing power supplies, SATA drives can be made with either the original, standard 4-pin device power connector or the new 15-pin SATA power connector, or both. If the drive doesn't have the type of connector you need, adapters are available to convert from one type to the other.
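As a rough check of that headroom, 4.5 amps at each of the three voltages works out to roughly 91 watts in total. This is the connector's upper bound, not a typical drive load:

```python
max_amps = 4.5            # per-voltage current capacity (three pins per rail)
rails = [3.3, 5.0, 12.0]  # volts supplied by the 15-pin SATA power connector

max_watts = sum(v * max_amps for v in rails)  # 91.35W theoretical ceiling
```

Actual hard drives of this era draw on the order of ten watts or so, which is why the connector is described as ample even for power-hungry drives.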


The configuration of Serial ATA devices is also much simpler because the master/slave or cable select jumper settings used with Parallel ATA are no longer necessary.

Serial ATA is ideal for laptop and notebook systems, and it will eventually replace Parallel ATA in those systems as well. In late 2002, Fujitsu demonstrated a prototype 2.5-inch SATA drive. Most 2.5-inch hard drive manufacturers are waiting for mobile chipsets supporting SATA to be delivered before officially introducing mobile SATA drives. It is expected that during 2004 many of the mobile chipsets will incorporate SATA.
TSavenger
post Dec 25 2004, 02:21 AM



QUOTE(teq @ Dec 19 2004, 12:11 AM)
very good guide thumbup.gif thumbup.gif thumbup.gif

one question from me: is the 137GB motherboard limitation caused by firmware or hardware? (i.e., can the motherboard support HDDs >137GB once the BIOS is updated, or does the motherboard hardware itself not support HDDs >137GB?) Thanks.. notworthy.gif
*
The limitation is in the BIOS (firmware), not the motherboard hardware, so a BIOS update can usually add support.

Windows itself has no problem with larger drives smile.gif
ycs
post Jan 17 2005, 12:14 AM




too long to read it all, just wanna ask how MTBF is arrived at, i.e. how are HDDs stressed?
TSavenger
post Jan 17 2005, 02:44 AM



http://www.storagereview.com/php/tiki/tiki...x.php?page=MTBF

 
