# Need recommendations for RAID disks

## 1clue

Hi,

It's been a while. I'm looking to build a (software) RAID setup, and I'm interested in what disks people are buying, and in what modes. I want to hear what you like and what you dislike, and why if possible.

Historically I've used raid1 or raid10. I'm open to one of the other modes.

Usage:

Network shares

VM data

Common data directories.

Most recent backups.

Tons of photos and home video.

Music

Comments:

This is not a backup solution. I'll keep the most recent backup on here but there will be a detached backup of critical files as well.

There will be CIFS/NFS shares, but probably not high traffic.

I'll try to keep VM disk off this setup, but in the cases where VMs have data that needs redundancy I'll put a drive on this setup.

I don't need hot swap.

I don't need incredible speeds. I have SSDs and non-RAID drives for that.

I'm interested in the higher volume drives, I don't have enough space in the box for more than 3 devices. The box has multiple gigabit adapters, but probably won't saturate more than a gigabit or two with traffic to this RAID device.

Thanks.

----------

## szatox

RAID5 is the cheapest one among those providing actual redundancy.

10 is good when you need very high performance in addition to redundancy.

Regarding drives, it's quite tricky:

I've always seen single-brand raids in enterprise environments, which kinda makes sense when you want to go for the best performance - it's as fast as the slowest device.

It kinda defeats the "REDUNDANT" part of RAID though, as it makes common factor failures more likely to happen.

Especially when they stuff something like 15 devices with consecutive serial numbers into a single array - I half expect them to be testing the disaster recovery plan soon.

Hot-swap can be done with any SATA drive. Quite funny, I personally like cheap drives. Perhaps it would change if I put more stress on them, but low-end drives have been perfect for low-duty use so far.

A quick glance at shopping offers suggests 3TB drives for low-cost, low-performance, high-volume storage (the lowest price per GB). If you feel like you need more, you can even get 6TB drives, but getting a SAS controller and a bunch of smaller drives would likely provide better performance at a lower cost per GB.

----------

## 1clue

Whatever I get will be the same type of drive throughout. I definitely get what you're saying about similar drives failing together, from personal experience.

I never really gave it much thought until 8 or 10 years ago, when I bought about a dozen drives. They turned out to be WD Green drives, and more than half went into a couple of different RAID arrays. It turns out that WD Green drives are the worst possible choice for a Linux RAID array. I had a few go bad in record time, had them replaced, then tried the tweaking you can find for them online, and then lost most of the rest, including the ones replaced after early failure. I have one drive left somewhere. I will never get one of those drives again, and so far I have passed on WD altogether for wasting my money, my time and my data.

At any rate, as I said before this is NOT a backup drive. Or rather it will be, in the sense that laptops back up data onto a shared volume on this drive, and then I back that up onto offline storage. But this is not going to be the actual backup.

At the moment I think this setup will be "low volume" in terms of transfer rate. But I definitely want something made for raid/nas duty, meaning it won't spin down at first possible opportunity, or any other non-linux-friendly crap.

----------

## Zucca

A small (side) note on raid5...

If you have more disks, use raid6 in place of raid5. If one drive drops, you remove it, put a replacement in its place and start rebuilding the array. While the array is rebuilding there will be a lot of disk I/O, and if some other disk dies while the rebuild is going on, you'll possibly lose all the data.

It takes two disks as "parity drives", so compared to raid5 you "lose" one more disk's worth of storage, but it's more reliable.

Weigh raid5 against raid6 based on whether or not you keep backups of the data on that array.
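A minimal sketch of that replace-and-rebuild cycle with Linux mdadm (the array and device names here are placeholders for your own setup):

```shell
# Mark the dying member failed, then pull it from the array
mdadm --manage /dev/md0 --fail /dev/sdc1
mdadm --manage /dev/md0 --remove /dev/sdc1

# Physically swap the disk, partition the new one to match,
# then add it; the rebuild starts automatically
mdadm --manage /dev/md0 --add /dev/sdd1

# Watch progress -- with raid5 the array has no redundancy
# left until this finishes
cat /proc/mdstat
```

The window while `/proc/mdstat` shows the resync in progress is exactly the danger zone Zucca describes: raid6 survives a second failure there, raid5 does not.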

----------

## 1clue

I've been partial to raid1, since each drive contains all the data. RAID has historically screwed me over big time, both at home and at work. I don't generally just make something RAID for the heck of it.

I've personally had:

Several drives go bad on the same array, over a weekend. (see WD Green comments above)

RAID controller go bad, no backup controller available, lost everything.

RAID-IS-NOT-A-BACKUP problem (not my fault, but my problem to deal with it)

So really I would rather no RAID at all, but here I find myself in a situation where I need it. The data will be backed up.

----------

## Buffoon

WD red.

----------

## frostschutz

The model of drive is not that important. Sometimes there is a bad egg (such as the IBM "Deathstar" or, more recently, the Seagate DM001 or whatever), but as a whole, pretty much all (of the few remaining) brands are as reliable as hard disks will ever be...

Much more important is to run regular selftests. Otherwise you will simply not notice disk problems until it's too late. Disk errors can go unnoticed for years. If you don't detect errors, and don't immediately replace disks that have an error, even raid6 won't be enough to save you. If you don't test your disk, and you replace one drive, and the RAID resyncs... that resync will also be the first ever read test for all the other drives in years. So if your RAID fails during resync, that's usually your own fault, due to lack of testing your disk, not a true same-time-failure. Detect errors early, replace disks immediately.

Make sure you set up your sendmail and configure email addresses and monitoring so both mdadm and smartd can notify you immediately of any issues.
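A concrete sketch of that self-test and notification setup; the file paths are the usual defaults, the schedule uses smartd's standard regex form, and the mail address is obviously a placeholder:

```shell
# /etc/smartd.conf -- scan all disks, run a short self-test daily at 02:00
# and a long self-test Saturdays at 03:00, mail on any failure or new error
DEVICESCAN -a -o on -S on -s (S/../.././02|L/../../6/03) -m admin@example.com

# /etc/mdadm.conf -- mdadm's monitor mode mails you when an array degrades
MAILADDR admin@example.com

# One-off: start a long self-test now and review the log afterwards
smartctl -t long /dev/sda
smartctl -l selftest /dev/sda
```

With both smartd and `mdadm --monitor` running (and working local mail delivery), a disk error reaches you the day it happens instead of at the next resync.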

True simultaneous failures are super rare. Wear & tear is different for every single disk; if you deliberately tried to make two disks fail on the same day after years of running, you couldn't pull it off. This is random chance (or Murphy's law, depending on point of view) and there is no way to trick it. Using different models and brands won't really change anything about that. It's more likely to hurt your performance and increase wear & tear as a whole (if one disk always has to wait for another due to different performance, that's like throwing a wrench in the gearbox). But this is all a matter of religious belief / personal preference...

I'm using WD green drives, in a raid5, and they're in standby most of the time, this is basically not a problem (if you can afford to wait a few seconds for disks to spin up on access).

With two drives obviously you'd stick to raid1.

For three disks with two-drive redundancy, it's still raid1 (over 3 disks) and not raid6. I would only ever use raid6 for larger disk groups (8+). But single-drive redundancy is enough for home use, provided you have a backup (which you always need anyhow). Two failed disks is just too unlikely to waste an entire disk on it.

----------

## Fitzcarraldo

I'm using four 3TB Western Digital Red 3.5-inch NAS HDDs in two RAID1 configurations with mdadm in my server. They have been spinning 24/7 since March this year with reasonably heavy use by me and my family, and no problems so far.

"WD Red 3TB NAS Desktop Hard Disk Drive - Intellipower SATA 6 Gb/s 64MB Cache 3.5 Inch"

----------

## 1clue

This is good info, thanks. Really interested in the bad eggs though. I wasted a lot of money before, would like to get something reliable.

I'm still somewhat angry at WD for flooding the market with green drives, which are completely useless to me. But that said if WD red or black is highly recommended I'll take my chances.

----------

## frostschutz

 *1clue wrote:*   

> Really interested in the bad eggs though.

 

They're the exceptions. Even though I wrote "such as" above, I couldn't name another.

 *1clue wrote:*   

> I'm still somewhat angry at WD for flooding the market with green drives, which are completely useless to me.

 

*shrug*

Useless to you is useful to others. Pretty much all my disks are WD Greens. Best disks I ever had. If you don't like intellipark (or whatever it is called) you could turn it off, even without installing any strange tool, it's in hdparm (-J).
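For reference, the idle3 (head-parking) timer can be inspected and disabled like this; the set operations use vendor-specific commands, so treat this as a sketch, and power-cycle the drive before expecting the new value to take effect (/dev/sdb is a placeholder):

```shell
# Read the current idle3 timer on a WD Green
hdparm -J /dev/sdb

# Disable it via hdparm (hdparm insists on its confirmation flag
# for this vendor-specific, potentially risky operation)
hdparm -J 0 --please-destroy-my-drive /dev/sdb

# Or the same thing with idle3-tools
idle3ctl -d /dev/sdb
```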

 *1clue wrote:*   

> But that said if WD red or black is highly recommended I'll take my chances.

 

They fail all the same. ALL hard disks do. Eventually. It's part of the design. You can't have things spinning, vibrating, producing heat and noise - and not fail.

So, you should expect drives to fail, no matter which you pick. You should be prepared to replace them. That usually means spending money, unless you want to hope for the best for weeks in degraded state while the warranty is being processed (if whatever you pick will not be replaced before you have to ship your broken disk back). So this should be part of your budget plan somehow... if you buy disks that are twice as expensive and overmax your budget and subsequently turn into a penny-pincher when it turns out you need a replacement quickly... maybe cheap disks are better.

Basically for a not high traffic, home use, multimedia box... buying datacenter grade hardware is a waste of money.

In the end it's all a matter of personal preference. Pick whatever floats your boat   :Laughing: 

----------

## 1clue

 *frostschutz wrote:*   

> 
> 
> ...
> 
> If you don't like intellipark (or whatever it is called) you could turn it off, even without installing any strange tool, it's in hdparm (-J).
> ...

 

I did that right about the time I sent the first broken drive back. Didn't change a thing, even on drives that were new when the parameter was changed. I used hdparm on some drives and the windows-based tool on others.

 *Quote:*   

> 
> 
>  *1clue wrote:*   But that said if WD red or black is highly recommended I'll take my chances. 
> 
> They fail all the same. ALL hard disks do. Eventually. It's part of the design. You can't have things spinning, vibrating, producing heat and noise - and not fail.
> ...

 

Of course. My green drives had a mean lifespan of about 2 years. I'd like for my next batch of drives to be better than that.

 *Quote:*   

> 
> 
> So, you should expect drives to fail, no matter which you pick. You should be prepared to replace them. That usually means spending money, unless you want to hope for the best for weeks in degraded state while the warranty is being processed (if whatever you pick will not be replaced before you have to ship your broken disk back). So this should be part of your budget plan somehow... if you buy disks that are twice as expensive and overmax your budget and subsequently turn into a penny-pincher when it turns out you need a replacement quickly... maybe cheap disks are better.
> 
> 

 

Did that with the greens. Bought a dozen drives all at once. Some were still in the box when I had my first failures. You're preaching to the choir on this. I've been a computing professional since the early 90s. You're telling me about best practices, and that's not what I'm asking here.

----------

## John R. Graham

This is not yet personal experience, but based on the latest Backblaze hard drive reliability data, I intend to upgrade my home server RAID 6 array with 7200rpm HGST NAS drives.

- John

----------

## 1clue

John, your post was extremely helpful. Actually if anyone has other useful 2016-based reliability studies I'd be curious to see those too. Googling now, I should have thought of that.   :Smile: 

I think I'll try to stick to Seagate or Toshiba, because the study said there's no discernible difference between HGST and WD, and they had only a few drives in the 4t-6t size range which is my market.  I have some experience with both Toshiba and Seagate, and while it was decades ago my longest-surviving hard drive under continuous use that I ever noticed was a Seagate.

I had a moderately (not lightly) loaded server (based on a desktop system born somewhere around 2000) that was pretty much on for 12 years, including one uptime of 47 days shy of 3 years. It was on a consumer-grade battery backup, but I'm pretty sure the battery was irrelevant a few years in. It had company mail, cvs and a few other things for a small business, maybe 30 active accounts for mail and up to 10 active developers for cvs.

It became a little weird, and in investigating I found out it had been up for literally years. I had been somewhat of an uptime junkie until then; we actually lost significant money because the system was wonky. That day marked the end of my uptime madness. I still pay attention to uptime, but more as an indicator of whether I need a reboot. The mail server had gone crazy and we lost emails regarding a contract, and the cvs data was corrupted when the system came back up from the 3-year-uptime reboot: 3 years with no fsck, and the filesystem was corrupted. I had a backup but we lost a few days of work.

Sorry for reminiscing.

----------

## 1clue

Actually though going to the original article https://www.backblaze.com/blog/hard-drive-reliability-stats-q1-2016/ they have a lot more HGST drives. Reading more.

----------

## John R. Graham

 *1clue wrote:*   

> I think I'll try to stick to Seagate or Toshiba...

 Yeah, I'm very fond of Seagate as well, especially the Cheetah and Barracuda lines which trace their lineage back to Control Data designs, legendary for their robustness. Going to try HGST this round, though.

- John

----------

## Fitzcarraldo

ANANDTECH 2013-09-04 -- Battle of the 4 TB NAS Drives: WD Red and Seagate NAS HDD Face-Off

I had a newish Seagate 1 TB HDD go up in smoke a couple of years ago (well, its PCB went up in smoke); I had to move quickly to pull the power. One of the (expensive) WD, Seagate or HGST helium-filled HDDs would have been useful in that situation! Mind you, the other Seagate drive of that pair is still going strong, so perhaps it was bad luck. The experience put me off Seagate a bit, though. Seems to be borne out by the HDD reliability charts in the following 2014 article: arsTECHNICA -- Putting hard drive reliability to the test shows not all disks are equal.

 *Quote:*   

> Failure rates vary from 2 percent to 24 percent per year, depending on make, model.

 

----------

## .user

In the past two years I had a lot of drives failing with the click of death. Before that, I didn't even know a hard drive could sound like that; no drives had failed on me except for my first such experience: in the early 2000s, my first hard drive failure, a Quantum Fireball a tad over 3GB in size, failed quietly.

I didn't use many models and brands, but, as far as I know, everyone has green drives nowadays, there is a seagate green, there is a samsung ecogreen, etc.

I'd like to add a thing about TLER / ERC / CCTL / LCC, since one of the first answers in the thread had to mention WD Reds. Toshiba ACA are the drives I'd buy again, and these do have such settings available, but one has to enable ERC and set its timeout, since it is turned off by default.
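On drives that support it, the ERC timeout can be queried and set with smartctl; note the value is in tenths of a second, so 70 means 7 seconds, a common choice for RAID members (/dev/sdb is a placeholder):

```shell
# Query the current SCT error recovery control (ERC/TLER) settings
smartctl -l scterc /dev/sdb

# Set read and write recovery timeouts to 7 seconds (units of 100 ms);
# many drives forget this on power cycle, so re-apply it at boot
smartctl -l scterc,70,70 /dev/sdb
```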

----------

## Buffoon

My RAID load is not high, so I preferred WD red, 5400 RPM and they run real cool, I even turned off the cooling fans as they were not needed.

----------

## 1clue

@Fitzcarraldo,

Every hardware component has infant mortality issues. Statistically speaking there will always be a set of drives that will fail prematurely.  Hopefully that number is very small. I'm OK with a lemon in the batch, but not N lemons in a batch of N drives.

In my experience if a drive lasts 6 months in a server cabinet it will probably last years. It's always been the case that something is fairly likely to fail on a new computer system, and this prediction is borne out by the "best practices" of buying a couple extras for a raid array. If you buy enough drives (I used to, don't do that anymore) you can almost anticipate what you'll need.

My experience with the WD green drives was something else entirely. Some people got them to work, but plenty of others had serious problems with them, even in non-raid configurations.  And yes, other manufacturers have power-saving drives as well. I haven't heard so much about problems with these drives on Linux though.

I think I'll aim at NAS drives this time. It's the closest use case to what I anticipate.

Thanks.

----------

## szatox

 *Quote:*   

> In my experience if a drive lasts 6 months in a server cabinet it will probably last years.

 Yes, it's a good observation.

There is a nice, "smiling" curve showing probability of failure over time. The failure probability starts high, dominated by manufacturing flaws. Then it drops, and remains low and flat for a long time - those are random failures, and RAID does a pretty good job mitigating those. Finally, the failure probability rises again as the devices age and wear down.

Exact values on the curve vary between devices, but the rule remains valid for almost everything, as almost everything goes through the same phases in its lifetime.

----------

## Fitzcarraldo

 *szatox wrote:*   

> There is a nice, "smiling" curve showing probability of failure over time. The failure probability starts high, dominated by manufacturing flaws. Then it drops, and remains low and flat for a long time - those are random failures, and RAID does a pretty good job mitigating those. Finally, the failure probability rises again as the devices age and wear down.
> 
> Exact values on the curve vary between devices, but the rule remains valid for almost everything, as almost everything goes through the same phases in its lifetime.

 

Yep. It's commonly known as 'the bathtub curve'.

----------

## Zucca

I have rather good experience with WD Greens. The first thing I do is disable the head parking. That practically makes them WD Blues. I also have WD Blues, as they are sometimes cheaper than Greens.

```
Model Family:     Western Digital Green
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  3 Spin_Up_Time            0x0027   181   181   021    Pre-fail  Always       -       5908
  9 Power_On_Hours          0x0032   064   064   000    Old_age   Always       -       26388

Model Family:     Western Digital Caviar Blue (SATA)
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  3 Spin_Up_Time            0x0027   159   158   021    Pre-fail  Always       -       5008
  9 Power_On_Hours          0x0032   051   051   000    Old_age   Always       -       36237

Model Family:     Western Digital Blue
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  3 Spin_Up_Time            0x0027   172   171   021    Pre-fail  Always       -       4400
  9 Power_On_Hours          0x0032   057   057   000    Old_age   Always       -       31913

Model Family:     Western Digital Blue
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  3 Spin_Up_Time            0x0027   174   173   021    Pre-fail  Always       -       2291
  9 Power_On_Hours          0x0032   090   090   000    Old_age   Always       -       7716

Model Family:     Western Digital Green
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  3 Spin_Up_Time            0x0027   183   182   021    Pre-fail  Always       -       5833
  9 Power_On_Hours          0x0032   063   063   000    Old_age   Always       -       27580
```

----------

## Buffoon

Load_Cycle_Count is the parameter you want to keep your eye on. I used idle tools to turn off power saving on my WD Red drives, although it was not that bad as with WD Greens.
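A quick way to keep an eye on that attribute across all disks (a sketch; the awk pattern assumes smartctl's usual `-A` column layout, with the raw value in column 10):

```shell
# Print Load_Cycle_Count (and Power_On_Hours for context) for each disk
for d in /dev/sd?; do
    echo "== $d =="
    smartctl -A "$d" | awk '/Power_On_Hours|Load_Cycle_Count/ {print $2, $10}'
done
```

If Load_Cycle_Count is climbing by thousands per day while Power_On_Hours barely moves, the drive is parking its heads aggressively.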

----------

## 1clue

You can save your breath about WD greens.  Won't happen. And I've pretty much given WD a time-out for a few more years just because they put out a drive with default settings like that.

----------

## Goverp

<asnide>

I love this. Which drives are good enough for RAID? Err, "Redundant Array of Inexpensive Disks". The answer ought to be the cheapest you can get, with a few spares, so you can swap out the broken ones as-and-when.

</asnide>

----------

## Zucca

 *Goverp wrote:*   

> Which drives are good enough for RAID.  Err, "Redundant Array of Inexpensive Disks.

 .. or independent. Which one is "officially" right is another topic...

----------

## Fitzcarraldo

 *Zucca wrote:*   

>  *Goverp wrote:*   Which drives are good enough for RAID?  Err, "Redundant Array of Inexpensive Disks". .. or independent. Which one is "officially" right is another topic...

 

Well, the original term was 'inexpensive', although the per-MB price of 'inexpensive' HDDs in the 1980s was a lot more expensive than it is today. Anyway, the intent of the originators of the term was certainly to look at how to replace an expensive HDD ('Single Large Expensive Disk') with an array of relatively-inexpensive HDDs.

A scanned copy of the original 1988 paper by Patterson, Gibson & Katz, 'A Case for Redundant Arrays of Inexpensive Disks (RAID)', is available on the Carnegie Mellon University's School of Computer Science Web site: http://www.cs.cmu.edu/~garth/RAIDpaper/Patterson88.pdf

----------

## 1clue

 *Goverp wrote:*   

> <asnide>
> 
> I love this. Which drives are good enough for RAID.  Err, "Redundant Array of Inexpensive Disks.  The answer ought to be the cheapest you can get, with a few spares, so you can swap the broken ones as-and-when.
> 
> </asnide>

 

Clearly you didn't read the thread, or even the original post.

----------

## Zucca

 *1clue wrote:*   

> Clearly you didn't read the thread, or even the original post.

 

Yeah. Originally inexpensive, but now independent or something else than inexpensive.

No-one wants to store heaps of data in the cheapest possible raid5 array and, when something goes wrong, realize while rebuilding the array that you've lost another drive.

----------

## .user

Oh, so that's why some drives fail. It's because they were of that kind, the cheap kind. An eye opener; search no further, eureka.

I strongly believe in that conclusion discussed earlier in the thread, the smiling curve: drives fail shortly after setup or 2-4+ years after that. My experience with failing drives mostly means encountering the click of death. It happens at cold boot. Most of my drives aren't clicking to death, though; some are eventually seen by the controller/mobo, correctly or just as a ROM device. Many times the drives are fine after the next restart / warm reinitialization. I just consider them failed and swap them out, or temporarily unplug their power cable, thinking I'll get into it at a later time, since it's annoying to have delayed cold boots and since the general saying states that such a drive has its days counted anyway.

I also have a strong belief, induced by this only-at-cold-initialization pattern, that running hard drives fail less. The drives that failed on me were all Western Digital. I am a Maxtor (brand) jedi of storage! My oldest drives are still running and all are Seagate and Toshiba ABA and ACA. I think ACA is the cheapest drive on the market. Stay away from it.

----------

## 1clue

I started the thread because the last time I bought a bunch of drives I paid little attention and bought a dozen, 4 of which had been returned within the first year and 8 of which had failed by the second. By the end of the second year I was no longer interested in replacements, only in finding something else.

I agree in the abstract that I should be able to use anything, but in the real world we must face the fact that not all drives are created equal, and drives that are fantastic for one purpose are a terrible choice for others.

There are two main factors at play, as I see it:

Manufacturing defects

Drives manufactured for a specific purpose are often a very bad choice for another purpose.

I believe that my situation was a combination of both. I certainly suffered from infant mortality with my WD green drives, because not all of the early-failed drives were on RAID, or even on Linux.  But also putting a WD green into a RAID array is a stupendously bad choice, because of the default behavior of the drive. Even using the factory tool as mentioned earlier in this thread did not save the remaining drives.

Based on the backblaze data, WD has certainly had reliability issues lately. Go back a few years earlier and they were fine.

I don't particularly care for the way this thread turned into an abstract discussion of what RAID means, or what modes I should use, or how it shouldn't matter what drive model I use. Or how WD greens worked fine for somebody else.

I am after specific data on drive reliability by model number for RAID use, and I have it on a limited set of models. That's probably adequate but I'd be interested in another study with different models in it if someone has found one. I haven't, although I've spent some time with Google since the thread started.

Sorry for being cranky.

----------

## eccerr0r

So you actually bought 12 disks and returned or disposed of every one (or had to toss the replacements)?

The only thing I've had major problems with is power. As I am running MDRAID on consumer-quality devices (assuming people who are using WD Greens are also using consumer-quality devices), I'm also using consumer-quality power supplies. I've found that connectors and PSUs may or may not be up to snuff, killing disks or data. Before each of my RAIDs gets implemented I've been running the disks on a different machine, testing whether any will drop early.

I'm working on a fourth RAID using again consumer level drives.  It's currently in dependability testing now (3 or 4x2TB disks in RAID5, mostly Toshiba/Seagate "Desktop" drives).  The current "production" RAID is using a mixture of WD and Hitachi Desktop drives (4 500GB).  I have another low use 73GB x 2 RAID1 (Refurbished SCSI Enterprise disks) that I don't use much, so no real data there.  The first array was a 120GB x 4 Seagate and Maxtor drives, again using "Desktop" drives.

With the 120GBx4 array, after working out the power issues, I had no failures, and I ended up dismantling/upgrading the array as it was stuck at "steady state". I repurposed the disks as individual scratch disks. The 500GBx4 array has had disk drops like mad due to power, but eventually I got that squared away. I did end up having two disk failures - one was infant mortality (Hitachi) and the other was probably wearout failure (the other side of the "bathtub"/"smile" curve). It was a WD.

----------

## 1clue

OK so revised details backed by actually looking at hardware:

The original batch of WD green 750g drives (and the computer that the main RAID array was in) was bought as parts and built in 2011. I posted earlier that it was 8 or 10 years ago, but checking the system in question I realize it's only been 5 years. This is a home office setup; it's for my work but my money. An enterprise system was bought at the same time by my employer using higher-quality components. I bought this to test software I was writing for the enterprise hardware. Not the same core count, not the same anything really, but similar enough.

Box 1: 2011 Asus P6T with an i7 920 (4 cores, hyperthreading), which had 6g at first, upgraded twice to 24g. Had raid6 (4x WD green) plus 1x WD green for the system drive. Still in service as a dev box. No longer has RAID.

Box 2: 2011 Mac/OSX.  2x WD greens, non-RAID.

Box 3: 2013 Intel box, can't remember what exactly but dual core, RAID1 WD greens, system drive was something else, box has since made a trip to the dumpster.

My RAID array for this box used 4 drives. It's software RAID 10, where the enterprise setup is hardware. I bought 12 drives at the same time. They're numbered with a permanent marker, not sequentially as they came out of the box but rather when I had my first batch of failures, I numbered the ones that had not yet failed and were installed, and then numbered the rest which were still new as they were installed. These drives and their replacements constitute the entire stack of WD greens I've ever owned. There is exactly one WD green that is still running, error free, and that's as one of many backup devices which are physically removed after backup. I do weekly backups and this is one of several devices. It was new when I put it into the backup rotation. I expect it to fail at any time, and am at this point keeping it around to get personal statistics for this batch of greens.

The enterprise system at work is still running, no hardware replacements of any kind. We bought 2 spares for the array and they're still in the box. It's no longer performing the same task and no longer under high load, but everything originally installed on the box is still in service and still error free.  This is what I would expect after 5 years.

The chronology of this is that the first drive died on my testing box (the first one mentioned) and I immediately returned it thinking nothing more than that this was infant mortality.  The second drive died shortly after the first was replaced, and then I started doing research. At that point I reset the idle3 using the Linux tools on all Linux drives in service. Even so I got another failure, before a year was up. I got the WD Windows tool and disconnected my drives, plugged them into a Windows box (don't own one, had to borrow one from work) and then used that setup to reset idle3. Shortly after that one of the replacements went out, and I no longer cared about getting WD anything, an attitude that continues now. I still used the drives because I couldn't afford to just throw them out and buy a new batch of drives, but I did increase the frequency of offline backups to deal with the crazy failure rate.

I stopped buying used hardware for personal use right around the year 2002. Since then I've had pretty good luck with everything I spent time to research. I had the belief at the time that hard drives were pretty much all reliable, except for the infant mortality thing we've been talking about. The 1 or 2 devices I've had to return as nonfunctional were replaced with components that worked. That's with the exception of the greens. I've made bad choices about some hardware I didn't research, but rather than having early fails I just got something that doesn't play well with Linux or just got substandard performance.  A TV card comes to mind.

At any rate, I have typically had many years of good service from hard drives and pretty much anything else. Generally speaking my hardware becomes useless to me due to specs before I get a hard failure.

----------

## eccerr0r

Definitely the head load/unloading was an issue for the WD greens, though I had fought with simply trying to get Linux to stop spinning up sleeping drives... that ended up being futile as Linux really wants to write to disks, or at least I really wanted to make sure that current metadata is saved.  Thus I can see the head load/unload, even if the drive stays spun up, would be an issue as it's hard to keep Linux from touching the disks.  That being said, this is the only disk that I know of that has this bad behavior apart from laptop drives.

The SATA disks I'm currently using for production/testing are 

WD 500GB Blue ("Desktop") WD5000AAKS-65V0A0 (a WD5000AAKS-00YGA0 was the failing disk). Funny that my RAID array is a LOT faster after the 00YGA0 was dropped, but it could also have been the 4.4 kernel (I was running 3.17).

HGST Desktop 500GB HDP725050GLA360 (I had to RMA one for infant mortality, and had a HDS725050KLA360 disk fail, but that was a refurbished disk, so that was just me being cheap)

Toshiba Desktop 2TB PH3200U-1I72 alas not enough hours on them to give a good recommendation. 

I have another Seagate 2TB I want to eventually integrate but 3x2TB RAID5 is currently more storage than I need and currently the disk is a cold spare.  In fact the 4x500GB RAID5 still exceeds my storage needs, I just need to clean up.

I only have one WD green disk (2TB) and it's not part of any array.  I see the head load count ballooning, but shut off this feature.  The computer with the disk installed has failed so I'm not racking more hours on it, though it did survive quite a while of 24/7 operation as a PVR disk.

I have a feeling that buying used HDDs is a crapshoot. If it really was a pull, it might be good; but I have a suspicion a lot of used HDDs are actually ones that had a bad sector, which someone erased to make it go away, and then resold. I wish this were made more clear, but it'd hurt the resale value of these disks too.

----------

## Anon-E-moose

I've been using the wd red 2 tb series for a while, in a usb3 raid external box. 

They've been there for a couple of years now, no problem.

I'm using them for mirrored backup.

That was just shortly after they announced the "red" series, don't know if they've gotten worse over time though.

----------

## Goverp

This is irrelevant, but it may be of interest:

My desktop machine has had a 4-disk mdadm RAID-5 setup populated with WD Green disks for something like 6 years. About 1 year after I set it up (so about 2010-11) it started throwing hardware problems on one drive. SMART data showed nothing, and WD's disk test tool (the serious one that runs in MS-DOS, not the toy that runs in Windows) couldn't find any problem.

Fortunately, at the same time a few other people were having RAID hardware problems, so plenty of advice was available. I rebuilt the disk, and about a week later the same would happen. Then another drive, and a different cable. Then I happened to install a new kernel (somewhere in the 3.8 series, but I'm not sure exactly when), and coincidentally the problems stopped (I'm still using the same WD disks I started with, including the ones throwing errors, and my spare drive is still on the shelf). And coincidentally, most of the discussions of RAID errors in the fora ended.

----------

