Qilin to bet we’ve not seen the last of these attacks
Over the weekend, Qilin claimed not one but two April 2025 attacks on US government entities: the City of Abilene, TX, and the City of Blaine, MN.
While both cities had confirmed cyber attacks in April, neither had been overly forthcoming with the details. Quelle surprise.
In the case of Abilene, Qilin claims to have stolen 477 GB of data, while it pilfered a little more from Blaine: 489 GB.
INC-redulous claims
In early May, South African Airways stated that it had suffered a cyber attack which had disrupted some operations. On May 16, the INC ransomware group came forward to confirm what everyone suspected: this was a ransomware attack.
INC’s post detailed the leak as “Part 1,” suggesting that a lot of data could be implicated. When Comparitech contacted SAA, it referred us to a previous statement, which said: “Presently, there is no evidence of compromised customer data or SAA’s financial management systems.” Investigations are ongoing, however.
On average, INC steals around 90,000 individual records per attack. But its biggest breach, on OnePoint Patient Care in August 2024, compromised over 1.7 million records.
Inter-locked into a massive data breach
Yesterday, Interlock came forward to claim a recent attack on West Lothian Council in the UK. But unlike its US and Italian counterparts, West Lothian was quick to confirm its May 6 cyber attack was of the “ransomware” variety.
That’s where the good news ends, however, as Interlock claims to have stolen an eye-watering 2.63 TB of data, including 3,349,196 files and 580,783 folders. We await further updates and confirmation from the Scottish council about this potential breach.
What to do after an accidental deployment
"I swear, this has never happened to me before," says every IT professional at some point. Mistakes are as human as lying about mistakes, but in IT, even one mistake could be exceptionally costly. That's particularly true with botched deployments.
We recently read about one such incident in which an IT engineer (who was new to the role) accidentally deployed an application install and system reboot package to the wrong machines. Instead of going to a single test machine, it went to the entire fleet of 50,000 machines. The installer forcibly rebooted every system it touched. As you might expect, messages from very confused end users started rolling in.
"Yikes" barely covers it.
For those who are also new to deployments, that raises some good questions:
- What are the consequences of botched deployments?
- What can you do about them?
Steps to take if you botch a deployment
Messing up a deployment can be terrifying, to say the least. Your job is potentially on the line when that happens. However, a botched deployment doesn't have to be career-ending if you respond quickly, own up to the mistake, and learn from it so you don't muck it up again.
First, contain the deployment.
Disable or pause the deployment immediately. Resist the urge to delete it right away; it may be your only way to understand what was pushed or to roll back the changes. In our example above, the IT engineer did disable the deployment but, as he noted, "in a panic, I deleted it."
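What "contain" looks like in practice depends entirely on your deployment tool. As a rough illustration only, here's a minimal Python sketch against a hypothetical REST API; the base URL, endpoint path, token, and deployment ID are all stand-ins for whatever your platform actually exposes:

```python
import requests

# All of these values are hypothetical -- substitute your tool's real
# API endpoint, credentials, and the ID of the runaway deployment.
BASE_URL = "https://deploy.example.internal/api/v1"
DEPLOYMENT_ID = "dep-12345"
HEADERS = {"Authorization": "Bearer <token>"}

# Pause the rollout so no further machines are touched. Note that we
# pause -- we do not delete (more on that in a moment).
resp = requests.post(
    f"{BASE_URL}/deployments/{DEPLOYMENT_ID}/pause",
    headers=HEADERS,
    timeout=10,
)
resp.raise_for_status()
print(f"Deployment {DEPLOYMENT_ID} paused.")
```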
Don't delete anything.
Deleting a botched deployment is one of the worst actions you can take. It will look like you are trying to cover your tracks. Worse, deleting the deployment can erase critical forensic data:
- Which machines it targeted
- What was installed
- When it happened
- Logs that show system behavior
Deleting will also limit your recovery options. Most software deployment tools allow you to roll back changes made by a deployment; deleting removes that option and could delay remediation.
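Continuing the hypothetical sketch from above: before doing anything else, save a copy of the deployment record, then use the tool's rollback rather than its delete. The `/rollback` endpoint here is an assumption; check what your platform actually offers:

```python
import json
import requests

BASE_URL = "https://deploy.example.internal/api/v1"  # hypothetical, as above
DEPLOYMENT_ID = "dep-12345"
HEADERS = {"Authorization": "Bearer <token>"}

# Snapshot the deployment record while it still exists: targets,
# payload, timestamps -- the forensic data listed above.
record = requests.get(
    f"{BASE_URL}/deployments/{DEPLOYMENT_ID}",
    headers=HEADERS,
    timeout=10,
).json()
with open(f"{DEPLOYMENT_ID}-snapshot.json", "w") as f:
    json.dump(record, f, indent=2)

# Roll back instead of deleting. If the record had been deleted,
# there would be nothing left to roll back against.
resp = requests.post(
    f"{BASE_URL}/deployments/{DEPLOYMENT_ID}/rollback",
    headers=HEADERS,
    timeout=10,
)
resp.raise_for_status()
```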
Notify your team.
Once you've stopped the bleeding, notify your team and leadership as soon as possible. Communicate clearly and calmly. You don’t need to have all the answers yet, but you do need a structured approach. Try to cover these areas:
- What happened ("The wildest thing happened, my dog jumped on the keyboard...")
- Who may be impacted ("Don't worry, only 100% of endpoint users were affected...")
- What you've done so far ("I've started remediation by curling up in a ball...")
- What support you need right now ("I'll need a therapy dog...")
Be as clear as possible about the problem and the solution to limit the number of follow-up questions you'll have to answer.
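Jokes aside, it helps to have that four-part structure ready before you need it. Here's a minimal sketch that posts a structured heads-up to a Slack-style incoming webhook; the webhook URL is a placeholder, and the fields mirror the list above:

```python
import requests

WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

def notify_team(what: str, impact: str, actions: str, needs: str) -> None:
    """Post a structured incident heads-up covering the four areas above."""
    text = (
        ":rotating_light: *Deployment incident*\n"
        f"*What happened:* {what}\n"
        f"*Who may be impacted:* {impact}\n"
        f"*Done so far:* {actions}\n"
        f"*Support needed:* {needs}"
    )
    resp = requests.post(WEBHOOK_URL, json={"text": text}, timeout=10)
    resp.raise_for_status()

notify_team(
    what="Install + reboot package pushed to production instead of test",
    impact="All ~50,000 endpoints received a forced reboot",
    actions="Deployment paused; record preserved for rollback",
    needs="Help triaging help desk tickets and comms to end users",
)
```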
Document everything.
Capture the original deployment intent, what was actually deployed, timestamps, affected systems, and any logs or error messages you can find. This not only supports a proper root cause analysis but can also protect you during postmortems or HR reviews.
And by HR reviews, we mean job performance reviews and performance improvement plans (PIPs). Not to scare you, but depending on the severity of the botched deployment, the incident may not reflect well in either.
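Scare tactics aside, documentation is far easier if you script the boring parts. Here's a minimal sketch that writes a timestamped incident record to disk; the fields follow the paragraph above, and the sample values echo our unfortunate engineer's scenario:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def record_incident(
    intent: str, actual: str, affected: list[str], logs: list[str]
) -> Path:
    """Write a timestamped incident record to support root cause analysis."""
    record = {
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "intended_deployment": intent,
        "actual_deployment": actual,
        "affected_systems": affected,
        "log_excerpts": logs,
    }
    path = Path(f"incident-{datetime.now(timezone.utc):%Y%m%dT%H%M%SZ}.json")
    path.write_text(json.dumps(record, indent=2))
    return path

record_incident(
    intent="Push app installer to TEST-LAB collection (1 machine)",
    actual="Pushed installer + forced reboot to ALL-WORKSTATIONS (~50,000)",
    affected=["ALL-WORKSTATIONS"],
    logs=["deployment dep-12345 started; forced reboot issued to all targets"],
)
```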
Be ready for the potential consequences.
Mistakes like this can lead to major consequences:
- System outages
- Data loss
- Serious end-user disruption
- Reputation damage (to you and your organization)
You might feel the urge to hide or panic, but here’s the truth: owning up and taking fast action often earns you more respect than silence or finger-pointing ever could.
How to avoid botching a future deployment
To avoid this kind of disaster in the future, implement a few protective guardrails:
- Always deploy to a test or pilot group first (and make sure you actually deploy to that test group, unlike our unfortunate example from earlier)
- Use clear, color-coded labels to differentiate production vs. test collections
- Require a second set of eyes on all major pushes
- Automate pre-deployment checklists or approval workflows (see the sketch below)
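To make that last guardrail concrete, here's a minimal sketch of a pre-deployment gate. Everything in it is a stand-in (the collection names, the threshold, the approver check); the point is simply that a large or production-targeted push gets refused unless the checks pass:

```python
# Hypothetical pre-deployment gate -- names and thresholds are stand-ins.
KNOWN_TEST_COLLECTIONS = {"TEST-LAB", "PILOT-RING-1"}
MAX_UNAPPROVED_TARGETS = 25  # anything bigger needs a second pair of eyes

class DeploymentBlocked(Exception):
    pass

def pre_deployment_gate(
    collection: str, target_count: int, approver: str | None
) -> None:
    """Refuse risky pushes: wrong collection, too many targets, no approver."""
    if collection not in KNOWN_TEST_COLLECTIONS and approver is None:
        raise DeploymentBlocked(
            f"{collection!r} is not a test collection and no approver was given."
        )
    if target_count > MAX_UNAPPROVED_TARGETS and approver is None:
        raise DeploymentBlocked(
            f"{target_count} targets exceeds the unapproved limit "
            f"of {MAX_UNAPPROVED_TARGETS}."
        )

# This would have saved our unfortunate engineer:
try:
    pre_deployment_gate("ALL-WORKSTATIONS", target_count=50_000, approver=None)
except DeploymentBlocked as err:
    print(f"Blocked: {err}")
```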
Finally, practice for failure. Set aside time with your team to simulate what would happen in a bad deployment scenario. Who takes action? Who communicates to stakeholders? How fast can you pause or roll back? These fire drills will help you respond with confidence when the real thing hits.
Look, every seasoned IT professional has an "I ******* up" story. You're not defined by the mistakes you make, but by how you respond to, recover from, and learn from them.
Until next week! Let's keep that zero day at zero.