Question. Where does this “value” come from? Loss of business? Downtime? Labor loss due to people unable to work? Pull a number out of a hat?
These servers had no production data on them. Ideally, with a proper DR solution, you just restore and press charges.
I would guess man-hours.
Because it’s going to take time to put those servers back into production. Depending on what they did, it might have caused an outage for external-facing customers, which would have a higher impact than an internal-facing one. But with that amount of money, it actually seems fairly low to me, so I’m guessing they weren’t public-facing servers.
So it was probably the time it takes to recreate all those servers and get everything back up and running, plus the delayed work caused by the outage.
That’s just my guess though
As an armchair economist, lots of things. Loss of money from workers standing around unable to do their jobs because of the technical issues, the cost of doing a restore from backup (technician time, extra help, direct costs of accessing the data), etc. Opportunity costs from having to send business away, or from delays in taking/delivering orders that were either handed to competitors or cancelled because of the issues.
Even the dang electricity costs of keeping the lights on while waiting for a fix…
Large companies calculate this value as a “burn rate”, which is to say, how much it directly or indirectly costs to have everyone there, ready to work but unable to do so because of an issue that affects everyone. It’s usually measured in dollars per hour. So if their burn rate is $100k/hr and it takes 10 hours to fix the problem, that’s roughly $1M in losses (rough math sketched below).
They may be able to recoup some of those losses by adding an extra shift or granting overtime to catch up, but for the most part, a large percent of that money is simply gone.
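To make that burn-rate arithmetic concrete, here is a minimal Python sketch. The function name and every dollar figure are hypothetical placeholders for illustration, not numbers from the actual incident.

```python
# Back-of-the-envelope outage cost estimate using the burn-rate idea above.
# All figures are made-up placeholders, not values from the incident.

def outage_cost(burn_rate_per_hour: float,
                downtime_hours: float,
                recovery_labor: float = 0.0,
                lost_business: float = 0.0,
                recouped: float = 0.0) -> float:
    """Idle burn plus direct recovery and opportunity costs,
    minus whatever is clawed back via overtime or extra shifts."""
    idle_burn = burn_rate_per_hour * downtime_hours
    return idle_burn + recovery_labor + lost_business - recouped

# Example: $100k/hr burn rate, 10 hours of downtime, $50k of restore/technician
# work, $200k of orders lost to competitors, $150k recovered with an extra shift.
print(outage_cost(100_000, 10,
                  recovery_labor=50_000,
                  lost_business=200_000,
                  recouped=150_000))
# -> 1100000.0
```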