Fault in CrowdStrike caused airports, businesses and healthcare services to languish in ‘largest outage in history’
Services began to come back online on Friday evening after an IT failure that wreaked havoc worldwide. But full recovery could take weeks, experts have said, after airports, healthcare services and businesses were hit by the “largest outage in history”.
Flights and hospital appointments were cancelled, payroll systems seized up and TV channels went off air after a botched software upgrade hit Microsoft’s Windows operating system.
It came from the US cybersecurity company CrowdStrike and left workers facing a “blue screen of death” as their computers failed to start. Experts said every affected PC might have to be fixed manually, but as of Friday night some services had started to recover.
As recovery continues, experts say the outage underscored concerns that many organizations are not well prepared to implement contingency plans when a single point of failure, such as an IT system or a piece of software within it, goes down. And these outages will happen again, experts say, until more contingencies are built into networks and organizations introduce better backups.
Here’s an idea: don’t give one company kernel level access to the OS of millions of PCs that are necessary to keep whole industries functioning.
I mean, Microsoft themselves regularly shit the bed with updates, even Defender updates. It’s the nature of security software: it has to have that kind of access to stop legit malware. That’s why these kinds of outages happen every few years. This one just got so much coverage because of the banking and airline issues, and I’m sure future outages will continue to get similar coverage.
But the CrowdStrike CEO was also at McAfee in 2010 when they shit the bed and shut down millions of XP machines, so it seems like he needs a different career…
I’m not sure you can blame the CEO. As much as I despise C-level execs, this seems like a failure at a much lower level. Whether it’s a culture failure is a different story, though, because to me that DOES come from the CEO, or at least that level.
How difficult would it be for companies to have staged releases or to oversee upgrades themselves? I mostly just use Linux, but upgrading is a relatively painless process, and logging into remote machines to trigger an update is no harder. Why is this something an independent party should be able to do without end-user discretion?
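Something like the sketch below is all I’m picturing: a ring-based rollout driven from one admin box, assuming SSH access to each machine. The host names, the `apply-update` command and the health check are made-up placeholders, not anything a real vendor ships.

```python
# Rough sketch of a staged (ring-based) rollout run from an admin machine.
# Assumes SSH access to the fleet; host names, the update command and the
# health check are hypothetical placeholders for illustration only.
import subprocess
import sys

RINGS = [
    ["canary-01", "canary-02"],        # tiny canary ring goes first
    ["app-01", "app-02", "app-03"],    # wider ring once the canaries survive
    ["db-01", "db-02"],                # most critical machines go last
]

def run_on_host(host: str, command: str) -> bool:
    """Run a command on a remote host over SSH; True means it exited cleanly."""
    return subprocess.run(["ssh", host, command]).returncode == 0

def rollout(update_cmd: str, health_cmd: str) -> None:
    for ring in RINGS:
        for host in ring:
            if not run_on_host(host, update_cmd):
                sys.exit(f"update failed on {host}, halting rollout")
            if not run_on_host(host, health_cmd):
                sys.exit(f"health check failed on {host}, halting rollout")
        print(f"ring {ring} updated and healthy, moving on")

if __name__ == "__main__":
    # Both commands are stand-ins for whatever your distro or vendor provides.
    rollout("sudo apply-update --channel=stable",
            "systemctl is-system-running --quiet")
```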
Also the obligatory: “don’t run infrastructure on Microsoft products, run Linux”
C-suite to experts: Are the future risks short term or long term? Specifically longer term than my golden parachute?
This is why “they are the biggest” isn’t a good reason to pick a vendor. If all these companies had been using different providers, or even different operating systems, it wouldn’t have hit so many systems simultaneously. This is a result of too much consolidation at every level, and one of the problems with the Microsoft OS monopoly.
This would have been a fun MIR had my systems been impacted.
Still say not allowing untested updates in a production environment fixes this. I don’t care if it’s a README file, don’t update without testing.
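In practice that policy is just a gate in front of production. A rough sketch, assuming a staging environment that mirrors production and hypothetical `deploy.sh` / `smoke_tests.sh` scripts standing in for whatever tooling an org actually uses:

```python
# Rough sketch of a "no untested update reaches production" gate.
# deploy.sh, smoke_tests.sh and the staging environment are hypothetical
# stand-ins; only the ordering (stage, test, then promote) is the point.
import subprocess
import sys

def run(cmd: list[str]) -> bool:
    """Run a local command; True means it exited cleanly."""
    return subprocess.run(cmd).returncode == 0

def promote(version: str) -> None:
    # 1. Deploy the update to a staging environment that mirrors production.
    if not run(["./deploy.sh", "--env", "staging", "--version", version]):
        sys.exit(f"{version} failed to deploy to staging")
    # 2. Exercise it there before anything else happens.
    if not run(["./smoke_tests.sh", "--env", "staging"]):
        sys.exit(f"{version} failed staging smoke tests; refusing to promote")
    # 3. Only a version that survived staging is allowed into production.
    if not run(["./deploy.sh", "--env", "production", "--version", version]):
        sys.exit(f"{version} failed to deploy to production")

if __name__ == "__main__":
    promote(sys.argv[1] if len(sys.argv) > 1 else "v1.0.0")
```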