Reliability
For as long as I can remember, I've been a proponent of "worse is better" - I favor "cheap and bad" over "expensive and good". I don't wear brand-name clothing, I don't use brand-name appliances, and I definitely avoid brand-name computers.
The price/performance ratio is one reason.
Case A. One $100,000 computer with a guaranteed up-time of 99.9999% - that's less than an hour of downtime in a year. (I sincerely believe that any actual computer with such a guarantee would cost a lot more than that.)
Case B. Several computers with a guaranteed up-time of 90% - that's one day of downtime in ten, or more than a month of downtime in a year.
How many computers are needed to obtain the same reliability as the first system?
Well... the chance of one such system being down is 1 - 90% = 10%. Assuming failures are independent, the chance of two of them being down at the same time is 10% x 10% = 1% (so the up-time of a two-machine cluster rises from 90% to 99%). Adding a third computer raises that to 99.9%... and each new machine added to the cluster adds another "9" to that figure. Which means we only need six cheap computers to match the 99.9999% reliability.
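The arithmetic above is easy to check; here is a minimal sketch (the function name is mine, not from the post), which relies on the same independence assumption - that machines don't fail together:

```python
def cluster_uptime(machine_uptime: float, n: int) -> float:
    """Probability that at least one of n independent machines is up."""
    return 1 - (1 - machine_uptime) ** n

# Each extra 90% machine adds another "9":
for n in range(1, 7):
    print(n, round(cluster_uptime(0.90, n), 6))
# 1 -> 0.9, 2 -> 0.99, ... 6 -> 0.999999
```

Of course, real failures are often correlated (shared power, shared network, the same bad firmware), which is exactly why the infrastructure caveat below matters.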
Ok, this doesn't take into account the infrastructure a cluster needs compared with a single system. So let's be extra safe and use ten cheap computers instead of six, and assume the additional infrastructure costs as much as the computers themselves. That means each individual computer needs to cost less than $5,000 for solution B to be cheaper than A. (And let's not forget that the expensive system needs a lot of infrastructure too - to guarantee 99.9999% up-time you need controlled cooling, a back-up generator and so on.)
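The break-even figure falls out of the budget split described above - a rough sketch under those stated assumptions (ten machines, infrastructure costing as much as the machines):

```python
expensive_system = 100_000   # the single 99.9999% machine
n_cheap = 10                 # over-provisioned cluster size

# Half the budget goes to infrastructure (assumed equal to hardware cost),
# leaving the other half to be split across the machines.
max_price_per_machine = expensive_system / 2 / n_cheap
print(max_price_per_machine)  # 5000.0
```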
$5,000 for a system that fails one day out of ten? The worst junk I ever used was a lot better than that, and I never paid that much for a system. I think $3,000 was the highest.
The moral of the story: do not underestimate the power of many cheap systems. Google is the best example :)