Global IT Outage

This was a major disruption for healthcare in my area (Boston and environs) because Epic was non-functional. All non-essential medical procedures and tests were cancelled.
Confirmed by a friend in the Boston area who drove a long way to an appointment and was flatly turned away by the health care facility. That's how flummoxed and paralyzed they were.
 
I feel sure other IT pros here would agree with most of the following. The Occam's razor (simplest) explanation, IMO, is that software quality assurance is now abysmal and has been in a race to the bottom for many years due to plain human greed.
True - but this is a company with direct ties to the FBI/deep state/DNC, and with multi-billion-dollar contracts.
So although it could be what you mentioned, it doesn't pass the smell test to me. Just like the Trump assassination attempt (and the timing between the two events) - it stinks.
Just like the Trump assassination attempt, you can 'blame it on incompetence/greed/DEI hires etc.' - but in the end both were most probably deliberate. So blaming incompetence is a smoke screen, a useful 'out'.
 

Seems CrowdStrike also broke Debian puters back in April... and it took them ages to fix it.

Maybe the reason for the "cyberpandemic/cyberattack" warning is because they know that such key businesses are such absolute garbage that a collapse is inevitable. Or... something.
 
Just a quick reminder - CrowdStrike were the ones who claimed Russia hacked the DNC servers.

This crash was intentional and "Russia hacked the DNC" was intentional too. Crash motives include erasing Trump assassination evidence.
 

I was just literally thinking about the connection regarding that - the crash, Trump, the shorting in relation to Truth Social, erasing evidence relating to the shooter, other dark entities behind the attempted assassination, etc. And maybe stuff to do with Israel, Ukraine and underhand deals, money disappearing... the list is probably long. A very easy and convenient way to get rid of (hide) all kinds of info/assets.
 
@PopHistorian, I agree with all your points. I have personally seen it get worse and techs become more incompetent due to the compounding effect. I have dealt with a fair few incidents like these in my career, and it does begin to impact your mental health when you are in constant "high intensity" mode trying to think of root causes and resolutions. The worst part is what comes after: finding explanations for the idiots above you in the food chain as to why deploying xyz was a bad idea, and yes, "I told you so".

This is only going to get worse, whether intentional or not. Although, I'll throw in some dark humour: with degrading technical skill sets and a general shortage of clever IT people due to burnout, even the PTB may struggle to launch a massive worldwide cyber-attack, which is a problem of a different type for them. A lack of "balance", perhaps.
 
you can 'blame it on incompetence/greed/DEI hires etc.' - but in the end both were most probably deliberate. So blaming incompetence is a smoke screen, a useful 'out'.
I didn't suggest it wasn't deliberate. I think it was. I gave the Occam's conclusion for those unaware of the state of the business, which actually makes it easy to blame incompetence. The fact that real-time patching is now ubiquitous is a wonderful opportunity for the PTB to exploit and then blame incompetence, as they routinely do with so many of the horrors they perpetrate.
 
I saw a screenshot showing "rm *.*" but I can't find it again. Has anybody seen this??



I have seen people ship code like this, but only at the level of one individual server in one company (out of thousands of servers). A few years back, somebody wrote an rm *.* (remove) with a variable directory path. The variable never got resolved (so the path fell back to the root directory), and the admin executed it anyway because it had approval from everybody (testing team, manager, director, and so on). It ran "successfully" and the ticket was closed. Well, until somebody noticed that every application on the server was gone.
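
For anyone who hasn't seen this failure mode first-hand, here is a minimal sketch of how an unresolved variable turns a routine cleanup into an rm against the root directory, plus the usual guard against it. The script, the APP_DIR variable and the config file path are all made up for illustration; this is an assumption about how such a mistake typically looks, not anyone's actual code.

#!/usr/bin/env bash
# Hypothetical cleanup script illustrating the failure mode described above.

# Suppose the target path comes from a config lookup that silently returns nothing:
APP_DIR="$(awk -F= '/^app_dir=/{print $2}' /etc/cleanup.conf 2>/dev/null)"

# What went wrong in the story: with APP_DIR empty and unquoted, the line
# below (shown commented out) expands to roughly "rm -rf /*" and deletes
# everything under root.
#   rm -rf $APP_DIR/*

# Safer pattern: refuse an empty or root path, quote the variable, and let
# ${VAR:?} abort the script if the variable is unset or empty.
if [ -z "$APP_DIR" ] || [ "$APP_DIR" = "/" ]; then
    echo "Refusing to delete: APP_DIR is empty or '/'" >&2
    exit 1
fi
rm -rf -- "${APP_DIR:?}"/*

No chain of ticket approvals catches an empty expansion like that; only a check inside the script itself can.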

But this is not at the local server level, not even at the company level; it is damn near worldwide, at the operating-system level.

Whom to blame? Or is it just a blame game?


The snapback on DEI is understandable, though the real problem is the blind trust placed in vendors like this.


Solution?

Stop trusting anything. CrowdStrike and DEI are done after this incident.
This one?
 
Perhaps the glitchy hackish update story is a bit of a cover for some add-on implementation to global systems that will enable or facilitate a next phase of nefarious manipulation on the path to a glorious digital future. IOW, “your cyber pandemic software is now up to date!”
 
The selling of shares and the FBI connection, taken together, point to the outage being intentional, not a glitch.


So the Chief Security Officer of CrowdStrike, who sold 4,000 shares just days before the "IT apocalypse" that wiped hundreds of thousands of government servers (possibly destroying evidence of deep-state complicity in the assassination attempt), is a 24-year FBI veteran. You can't make this up. The same agency that's now actively involved in the cover-up.


Prior to joining CrowdStrike, Mr. Henry oversaw half of the FBI’s investigative operations as Executive Assistant Director, including all FBI criminal and cyber investigations worldwide, international operations, and the FBI’s critical incident response to major investigations and disasters. He also managed computer crime investigations spanning the globe, established the National Cyber Investigative Joint Task Force, and received the Presidential Rank Award for Meritorious Executive for his leadership in enhancing the FBI’s cyber capabilities.

 
