
Billions of Computers Not Working | Unraveling the Issue Behind the Global CrowdStrike Chaos

by Dhanush Nehru, July 23rd, 2024

Too Long; Didn't Read

CrowdStrike Falcon update accidentally turned a critical driver file into a series of zeros. As a result, computers running Falcon couldn’t boot up, leading to blue screens across the globe. Fixing this issue isn't simple. Users need to boot in safe mode, use command prompts, and manually delete the corrupted driver file.

Introduction:

Imagine waking up to find that computers around the world, especially those in businesses, are suddenly unusable. Business meetings are falling apart, news networks are in disarray, and flights are grounded. The chaos is beyond anything imaginable. What happened? Let’s break down the CrowdStrike blue screen chaos.

The Issue:

The issue began with a bad update from CrowdStrike Falcon. This update, intended to enhance security, accidentally turned a critical driver file into a series of zeros. As a result, computers running Falcon couldn’t boot up, leading to blue screens across the globe.


You might wonder why only Windows was affected. The faulty update was pushed to the Windows version of the Falcon sensor, which hooks deep into the operating system at the kernel level. That deep integration, designed to bolster security, ironically made Falcon a single point of failure when things went wrong.

The Fix:

Fixing this issue isn’t simple. Users need to boot into Safe Mode, use the command prompt, and manually delete the corrupted driver file. For systems with BitLocker encryption, the process is even more complicated, since the drive has to be unlocked with its recovery key before the file can be removed.
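
If the drive is protected by BitLocker, it generally has to be unlocked from the recovery command prompt before the file can be touched. A minimal sketch, assuming you have the volume’s 48-digit recovery key (the drive letter and the key below are placeholders):

    rem Check which volume is locked; drive letters can differ inside WinRE.
    manage-bde -status
    rem Unlock the OS volume with its 48-digit recovery key (placeholder shown).
    manage-bde -unlock C: -RecoveryPassword 111111-222222-333333-444444-555555-666666-777777-888888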


Based on the guidance received so far, the fix is detailed below 👇


Physical Computers:

  1. Boot Windows into Safe Mode or the Windows Recovery Environment

  2. Navigate to the C:\Windows\System32\drivers\CrowdStrike directory

  3. Locate the file matching “C-00000291*.sys” and delete it (I would rename it to be safe).

  4. Boot the host normally (see the command sketch after this list).
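
For reference, a minimal sketch of what those steps look like from the recovery command prompt (the C: drive letter is an assumption; it can differ inside the recovery environment):

    rem Run from the Safe Mode or Windows Recovery Environment command prompt.
    cd /d C:\Windows\System32\drivers\CrowdStrike
    rem Confirm the offending channel file is present.
    dir C-00000291*.sys
    rem Either delete it, or (safer) rename it so a copy is kept:
    ren C-00000291*.sys *.bak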


For AWS (Amazon Web Services), follow these steps:

  1. Detach the EBS volume from the impacted EC2 instance.

  2. Attach the EBS volume to a new EC2 instance.

  3. Fix the CrowdStrike driver folder on that volume (remove the C-00000291*.sys file, as in the steps above).

  4. Detach the EBS volume from the new EC2 instance.

  5. Attach the EBS volume back to the impacted EC2 instance (a CLI sketch follows below).
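
If you prefer the command line to the console, a rough sketch of the same flow with the AWS CLI, run from any command prompt with the CLI installed (the instance IDs, volume ID, and device names are placeholders; the impacted instance must be stopped before its root volume can be detached):

    rem Placeholders: i-IMPACTED, i-RESCUE, vol-XXXXXXXX.
    aws ec2 stop-instances --instance-ids i-IMPACTED
    aws ec2 detach-volume --volume-id vol-XXXXXXXX
    rem Attach the volume to a healthy rescue instance as a secondary disk.
    aws ec2 attach-volume --volume-id vol-XXXXXXXX --instance-id i-RESCUE --device /dev/sdf
    rem On the rescue instance, bring the disk online and remove the
    rem C-00000291*.sys file from its \Windows\System32\drivers\CrowdStrike folder.
    aws ec2 detach-volume --volume-id vol-XXXXXXXX
    rem Re-attach as the root volume of the impacted instance and start it again.
    aws ec2 attach-volume --volume-id vol-XXXXXXXX --instance-id i-IMPACTED --device /dev/sda1
    aws ec2 start-instances --instance-ids i-IMPACTED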


For Azure, follow these steps:

  1. Log in to the Azure console.
  2. Go to Virtual Machines and select the affected VM.
  3. In the upper left of the console, click “Connect”.
  4. Click “More ways to Connect” and then select “Serial Console”.
  5. Once SAC has loaded, type in ‘cmd’ and press Enter.
  6. Type ‘ch -si 1’ and press the space bar.
  7. Enter Administrator credentials.
  8. Type one of the following commands, depending on whether you need network access in Safe Mode:
  9. ‘bcdedit /set {current} safeboot minimal’ (standard Safe Mode), or
  10. ‘bcdedit /set {current} safeboot network’ (Safe Mode with networking)
  11. Restart the VM.
  12. To confirm the boot state, run the command: ‘wmic COMPUTERSYSTEM GET BootupState’ (see the command sketch after this list).
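
Putting the Azure steps together as commands: the first part is typed in the serial-console cmd channel before the restart; the file deletion and the safeboot cleanup afterwards are not in the list above, but follow the same idea as the physical-machine fix, so treat this as a sketch rather than official guidance:

    rem In the serial-console cmd channel, before restarting:
    bcdedit /set {current} safeboot minimal
    rem (or: bcdedit /set {current} safeboot network)
    rem After the VM restarts into Safe Mode, remove the bad channel file:
    del C:\Windows\System32\drivers\CrowdStrike\C-00000291*.sys
    rem Clear the Safe Mode flag so the next boot is normal, then restart:
    bcdedit /deletevalue {current} safeboot
    shutdown /r /t 0
    rem Once the VM is back up, confirm the boot state:
    wmic COMPUTERSYSTEM GET BootupState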


Check out this video for a more detailed explanation 👇


Conclusion:

This incident underscores the importance of robust cybersecurity practices and the potential risks of deep system integrations.


Thanks for reading! If this was helpful, please give it a like and share the post on socials to show your support.


Follow for more ⏬

Twitter / Instagram / Github / Youtube / Newsletter / Discord