Microsoft says it estimates that 8.5m computers around the world were disabled by the global IT outage.
It’s the first time a figure has been put on the incident and suggests it could be the worst cyber event in history.
The glitch came from a security company called CrowdStrike, which sent out a corrupted software update to its huge number of customers.
Microsoft, which is helping customers recover, said in a blog post: “We currently estimate that CrowdStrike’s update affected 8.5 million Windows devices.”
CrowdStrike will ultimately have contract terms that put responsibility on the companies, and truth be told, those companies should be able to handle this situation with relative ease. Maybe the discussion here should be about the fragility of Windows and why Linux is a better option.
Linux could easily have been bricked in a similar fashion by pushing a bad kernel or kernel-module update that wasn’t tested enough. Not saying it’s the same as Windows, but this particular scenario, where someone can push a system component just like that, can fuck up both.
Yes it can, but a kernel update is a completely different scenario, one managed individually by each company as part of its upgrade process. It is usually tested and rolled out incrementally.
Furthermore, Linux doesn’t blue screen. I know some scenarios where Linux has issues, but I can count on one finger the number of times I’ve had an update cause issues booting… and that was because I was using some newer encryption settings as part of systemd.
However, it would take all my fingers & toes, and then some, to count the number of blue screens I’ve gotten with Windows… and I don’t think I’m alone in that regard.
Linux doesn’t blue screen, no. A kernel panic is a black screen.
And you’re running corporate kernel-level security software on your encrypted Linux server?
I guess it depends on what you consider corporate kernel-level security. Would that include AppArmor, SELinux, and other tools that are open source but used in some of the most secure corporate and government environments? Or are you asking if I’m running proprietary, untrusted code with access to the system kernel on a Linux server?
Tell me you’ve never administered at scale without telling me you’ve never administered at scale.
Bruh, disk encryption is not optional in many environments, and dealing with an unbootable LUKS Linux machine is pretty much on par with dealing with an unbootable BitLocker Windows machine.
In this case, it’s really not a Linux/Windows thing except by the most tenuous reasoning.
A corrupted piece of kernel-level software is going to cause issues in any OS.
CrowdStrike itself has actually caused kernel panics on Linux before, albeit less because of a corrupted driver and more because of programming choices interacting with kernel behavior. (Two bugs: you shouldn’t have done that, and it shouldn’t have let you.)
Tenuously, Linux is a better choice because it doesn’t need this type of software as much. It’s easier and more efficient to do packet inspection via a dedicated firewall for infrastructure, and the rest is largely handled by the automation and reporting tools you already use.
You still need something in this category if you need to solve the exact problem of “realtime network and filesystem event monitoring on each host”, but Linux makes it easier to get right up to that point without diving into the kernel.
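To make that concrete, here’s a rough sketch of the filesystem half from pure userspace: it watches a directory for create/modify/delete events through Linux’s inotify interface, with no kernel module involved. The watched path and event mask are just placeholders for illustration, not anything CrowdStrike or any particular vendor actually ships.

```python
import ctypes
import ctypes.util
import os
import struct

# Load glibc, which wraps the inotify syscalls for us.
libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

# Event mask bits from <sys/inotify.h>.
IN_MODIFY = 0x00000002
IN_CREATE = 0x00000100
IN_DELETE = 0x00000200

WATCH_DIR = b"/etc"  # placeholder path for illustration

fd = libc.inotify_init()
if fd < 0:
    raise OSError(ctypes.get_errno(), "inotify_init failed")

wd = libc.inotify_add_watch(fd, WATCH_DIR, IN_CREATE | IN_MODIFY | IN_DELETE)
if wd < 0:
    raise OSError(ctypes.get_errno(), "inotify_add_watch failed")

# Each event record: int wd, uint32 mask, uint32 cookie, uint32 name_len,
# followed by name_len bytes of NUL-padded filename.
EVENT_HDR = struct.Struct("iIII")

while True:
    buf = os.read(fd, 4096)
    offset = 0
    while offset < len(buf):
        _, mask, _, name_len = EVENT_HDR.unpack_from(buf, offset)
        raw_name = buf[offset + EVENT_HDR.size : offset + EVENT_HDR.size + name_len]
        fname = raw_name.rstrip(b"\0").decode(errors="replace")
        print(f"{WATCH_DIR.decode()}/{fname} -> mask {mask:#x}")
        offset += EVENT_HDR.size + name_len
```

Obviously a real agent does far more than print events, but the point is how far you can get without touching ring 0.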
Also, vendors managing auto-updates is just less of a thing on Linux, so it’s more the cultural norm to manage updates in a way that’s conducive to the kind of staggering that would have caught this.
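And for the staggering point, a rough sketch of what a ring-based rollout looks like; `push_update()` and `host_healthy()` are hypothetical stand-ins for whatever config management and monitoring an org already runs, not a real vendor API.

```python
import time

# Placeholder inventory split into rings: canary first, then early adopters,
# then everyone else. Real inventories would come from your CMDB / config mgmt.
HOSTS = [f"host{i:03d}" for i in range(300)]
RINGS = [HOSTS[:10], HOSTS[10:60], HOSTS[60:]]


def push_update(host: str) -> None:
    """Hypothetical hook: trigger the update on one host (Ansible, Salt, SSH, ...)."""
    ...


def host_healthy(host: str) -> bool:
    """Hypothetical hook: ask monitoring whether the host still boots and checks in."""
    return True


def rollout(rings, soak_minutes: int = 30, max_failures: int = 0) -> None:
    for ring in rings:
        for host in ring:
            push_update(host)
        # Let the ring soak before widening the blast radius.
        time.sleep(soak_minutes * 60)
        failures = sum(not host_healthy(h) for h in ring)
        if failures > max_failures:
            raise RuntimeError(f"halting rollout: {failures} unhealthy host(s) in ring")


rollout(RINGS)
```

The specific tooling doesn’t matter; what matters is that nothing widens past the canary ring until it has demonstrably survived a soak period.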
Contract-wise, I’m less confident that CrowdStrike has favorable terms.
It’s usually consumers who are saddled with atrocious terms, because they have neither the power nor the interest to dig into the specifics too far.
Businesses, particularly ones that need or are interested in this category of software, inevitably have lawyers go over contract terms in much more detail, and much more ability to refuse terms and have it matter to the vendor. United Airlines isn’t going to accept caveat emptor as a contract term.
You assume that businesses operate in good faith, and that they thoroughly review contracts to ensure they are fair and in the best interests of all their employees. Do you really think Greg, a VP of Cloud Solutions who makes $500k a year, who gets his IT advice on the golf course from AWS, Microsoft, & Oracle reps, who gets wined & dined by those reps almost weekly plus a speaking spot at re:Invent, and who believes Gartner when it says spending $5 million a month on cloud hosting and $90/TB on egress traffic is normal, has the company’s best interests in mind?
I’ve seen companies pay millions for things they never used, or that were never even provided by the vendor. You go to your managers and say… “hey, why are we paying for this?” and suddenly you’re the bad guy. I’d love for you to prove me wrong. I’ve seen pockets of progress before, within isolated teams where a manager wanted to actually accomplish something. It never lasts though… it’s like being an ice cube in a glass full of warm water.
There’s a big difference between “buying stuff you don’t need” and “not having legal review a contract” or “accepting terms that include no liability”.
Buying stuff you don’t need is within the authority of a VP, seeing as their job is to make choices. Bypassing legal review and accounting due-diligence controls typically isn’t, at any company big enough to matter.
I trust your hypothetical VP to not want to get fired from his nice job by skipping the paperwork for a done deal.
Do you honestly think that Amazon just didn’t read the contract? Microsoft? Google? The US government?
They’re getting sued, and they’re gonna have to pay some money. Cynicism is one thing, but taking it to the degree of believing that people are signing unread contracts that waive liability for direct, attributable damage caused by unprofessional negligence is just asinine.
Terms which should be void, as this update was pushed to systems that had explicitly disabled automatic updates.
Companies were literally raped by CrowdStrike.
/edit Sauce (bottom paragraph)
Companies were not raped by CrowdStrike. They were raped by their own ineptitude.
Nowhere have I seen evidence that these updates were disabled and still got pushed. I’m not saying it’s impossible, but it’s unlikely if they followed any common sense and best practices. Usually, you’d be monitoring traffic and asking yourself why the agent is still checking for updates despite them being disabled, before deploying it to your entire IT infrastructure.
I see a lot of bad-faith arguments here against CrowdStrike. I agree that they messed up, but in my book it pales in comparison to how messed up these companies are for not doing any basic planning around IT infrastructure & automation to be able to recover quickly.