He Who Thought He Knew Something About DASD
This happened last week: a RAID 1 array, the first drive failed, I replaced it, and the second drive failed 8 days later. Both failed drives were from the same manufacturing batch; it seems they were a little more identical in performance than I would care for. Maybe it's better if the drives in a RAID array come from different plants and different manufacturing dates.
I've noticed that low-end RAID configurations tend to ignore the problems with multiple drives on the same bus, for instance 14 drives and two controllers on a SCSI ultra-wide bus. If one drive fails, how many go offline? Well, if it's the SCSI controller on the drive, or a terminator, all 14 are down. Or take the case of a small cluster: two boxes with a drive array between them. What happens when the controller on one side of the cluster fails in a way that jams the shared bus (e.g. the SCSI bus drivers fail with a short to ground, or the FC controller fails in a mode that constantly babbles on the fibre-optic line)?
Electronic failures don't seem to be as common as mechanical ones, but I've encountered them on several occasions, the last being a few months ago when a RAID 1 mirrored set failed due to a jammed bus. Only one drive was bad, but it forced a shutdown.
Jack Peachicken
Origin of "glob"
I'd been told that it was short for "global expression," but that, in the grand tradition, when it was initially shortened to the obvious "ge" that...