
The Dam Seems to be Breaking

I have been doing cybersecurity things for more than 50 years, and it seems to me that things are now accelerating in a bad way, well beyond anything I have seen in the past. I hate to use the term exponential, because it is over-used, and this sort of growth usually reflects a short-term acceleration at the beginning of a step function. The thing is, we seem to be somewhere after the beginning, but nowhere near the end, of a step function in cyber-related failures.

All PII has been leaked

I like to start with personally identifiable information (PII) because it is so widely used in legal circles. There are really two basic aspects of it: the stuff that rarely changes and the stuff that is very recent.

  • By now, all of the stuff that rarely changes has been leaked. If you cannot find the name, Social Security or other government ID number, address(es), phone number(s), email accounts, psychological profile, social media history, medical history, financial history, family members, and other similar information about anyone in Western society, you are probably not looking very hard.
  • It takes a little bit more effort (but not much) to find the most recent information, let’s say anything from the last 6 months.

All widespread operating environments are breached

I don’t think there are any operating environments out there today with more than 100,000 systems or people using them that do not have current vulnerabilities allowing them to be successfully broken into from afar.

  • I know, some folks keep very up to date on their patches. But the rate at which vulnerabilities are published is such that being up to date in this sense means being always behind.
  • The published (announced, revealed, whatever) vulnerabilities do NOT include all of the vulnerabilities not yet published. As a guesstimate, about 10% of the specific and readily exploitable vulnerabilities that someone knows about and has tested successfully are published, and those published vulnerabilities represent perhaps at most 1% of all the vulnerabilities that are out there (likely way less than 1%).
    • There are many whole known classes of vulnerabilities that are rarely if ever explored, presumably because it is so easy to stick with the ones we know still work.
    • We don’t have a real science here, but we think we know that there are many unknown classes of vulnerabilities and we don’t really even know how to count the number of classes we don’t know about yet.
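To put those guesstimates into rough numbers, here is a back-of-envelope sketch. The published count used below is a hypothetical round figure for illustration, not a measurement:

```python
# Back-of-envelope sketch of the guesstimates above.
# The published count is a hypothetical round number, not real data.
published = 25_000  # hypothetical vulnerabilities published in some period

# If only ~10% of the known, tested-exploitable vulnerabilities get
# published, the privately known pool is roughly 10x the published count:
known_exploitable = published * 10

# If published vulnerabilities are at most 1% of all vulnerabilities,
# the total pool is at least 100x the published count (likely far more):
all_vulns_floor = published * 100

print(known_exploitable, all_vulns_floor)  # 250000 2500000
```

On these assumptions, for every vulnerability you can read about, there are on the order of a hundred or more that you cannot.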

Given that every environment people use is vulnerable, it seems there is little hope of escaping attacks unless you are very good at defense.

Bad advice is more widespread than good advice

The US Transportation Security Administration (TSA) recently decided to tell people:

  • Not to plug their devices into free charging stations at airports, hotels, and on airliners;
    • How am I going to board the plane using my cell phone as my boarding pass if it is out of power and I cannot plug it in? The TSA says to use one of their approved devices to plug into a wall plug, but there aren’t enough of them...
  • Not to use free WiFi access points at airports, hotels, coffee shops, or anywhere else;
    • For some reason, if we pay for it the TSA thinks we can trust it. But how hard is it to run a free and a for-fee version of the same WiFi setup? If the suckers want to pay in addition to being surveilled and intercepted, that just makes more money for the criminals.
  • Not to use VPNs unless you pay for them; and
    • Money makes the TSA go round. They especially tell us not to trust free VPNs we get from China, but since a lot of the hardware we carry with us as well as the hardware used in all sorts of other places around the world comes from China, it sort of seems like spitting into the wind.
  • Not to download software, enter passwords, or do any financial transactions when traveling.
    • So I cannot pay for the hotel, food, ride shares or taxis, access my email, use any applications that involve JavaScript (most Web sites use it today), and so forth. I think the real advice is not to bring electronics with you...

Of course the TSA is only the latest with this bad advice… not the first or only source by a long stretch.

People are getting dumber

One of the side effects of advanced technology and influence operations is that people are apparently getting dumber. They do not read, preferring first audio and now video content, imagining that it is somehow making them smarter faster. While multi-modal learning is more effective in many ways, there are a few things to note:

  • AI-generated audio drill-downs of my articles take far longer to listen to than the articles take to read and they get lots of things wrong.
  • It takes a lot of time to make a decent movie that depicts written content well. Even with all the automation, the time and effort to convey and receive information is more extensive and, for the detailed stuff, written forms seem to still work better.
  • When you read things, you tend to focus longer on the things you need to think about, so it is self-paced in terms of your ability to comprehend the content. Audio and video tend to just fly past you. You get the big picture, but miss the details.

There are recent statistical studies of people getting content through different means, and they tend to support these notional findings of mine. But to be clear, it might just be different smartness instead of worse smartness. And measures of smartness are dubious at best.

Cognitive attacks are just beginning to emerge

The people getting dumber thing has many possible root causes, including that life is easier so many folks don’t have to worry about being really good at things, for a while. But pretty soon this is going to change.

The cognitive attack space is undergoing explosive growth and has metastasized from its early days to infiltrate every aspect of online interaction. Today, the few defenses are pathetic at best and in some cases more damaging than the problems they address. While pathetic defenses are not new, the attacks have gotten far easier, less expensive, more scalable, and harder to deal with while the defenses have gotten worse in most every way.

  • On the attack side:
    • Generative AI (GAI) now has the ability to mimic the form and content of communications to the point where it is practically impossible to differentiate from human-generated content at the simplistic high level of communication. Even worse, since people are now using GAI to rewrite what they write, there is in fact no readily detectable difference between human and GAI content at that level.
    • Estimates are that more than 50% of all the content on the Web and in many social media platforms today is GAI-generated, and thus humans reading content can no longer tell the GAI from the human. And because GAI learns from this content, it is learning from itself, likely getting dumber and more and more self-informed; inbreeding has a similar effect.
    • People read less and less of what they are sent, presumably because they are sent so much more. As a result, they substitute form for content, buying into things based on form instead of substance. Of course this is nothing new, but it has gotten to extremes, where trust is lowered in human content because AI content has more compelling form, this in turn because the cost of using GAI is less than the cost of actual people thinking.
    • Those with money and power wishing to push their point of view have largely taken control of the major platforms, and as a result they are forcing GAI on their user bases, which number well into the billions of people globally. That’s something like half of all the people on Earth now directly reading and following the GAI propaganda machines of the rich and powerful. These machines claim democratization by allowing anyone to say whatever they want, sort of, while overwhelming anything you or I say with their mass-produced, customized, and individualized content.
    • A famous Stalin quote (translated to English of course) is something like: “Elections aren’t won in the voting booth, they are won in the counting room.” But this has now changed. They can be won in social media, and have been. It’s about attacks on the body politic, and that goes for nation states as well as large enterprises, small and medium businesses, non-profit groups, and right down to billions of individuals.
  • On the defense side:
    • With the difference between malicious and benign content and human and GAI content getting so small, there is really no hope for proper differentiation with automation, or with people. Unlimited false positives and negatives seem certain.
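A quick base-rate calculation illustrates why unlimited false positives seem certain. The accuracy and prevalence figures below are assumptions for illustration, not measurements of any real detector:

```python
# Base-rate sketch: even a very accurate detector drowns in false positives
# when the thing it is looking for is rare. All numbers are assumptions.
sensitivity = 0.99   # P(flagged | malicious): catches 99% of bad content
specificity = 0.99   # P(not flagged | benign): passes 99% of good content
prevalence  = 0.001  # P(malicious): 1 in 1,000 items is actually malicious

# Bayes' rule: of everything flagged, what fraction is actually malicious?
p_flagged = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
precision = sensitivity * prevalence / p_flagged

print(round(precision, 3))  # about 0.09: over 90% of flags are false alarms
```

And as the difference between benign and malicious content shrinks, sensitivity and specificity both get worse, which only deepens the hole.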

ComSec, OpSec, InfoSec, CyberSec, Etc.Sec

Broadening out a bit from computer security and cognitive security to other related disciplines, we are starting to see the results of the breakdown across the full spectrum of security disciplines. Here is a very brief briefing:

  • Communications Security: Communications was once low bandwidth and highly reliable. Now it’s high bandwidth and really flaky. Underlying the problem are two general areas of concern:
    • Where does it go? We just don’t know any more. It was once the case that a local phone call went to the local exchange and from there to another local telephone. That was before most living folks were born. Nowadays, most communications go through the Internet and can get routed anywhere along the way from place to place. Likely it does not go to the Moon, but calls next door have been routed country to country and continent to continent, and smart attackers can cause that to happen at will.
    • What can be done to it? Almost anything you can think of. It can be stopped, slowed, delayed, replayed, jitter’d, copied, redirected, intercepted, altered, machine-in-the-middle’d, noised, over- or under- accounted for, falsely attributed, and so forth. And all of these have been done including to encrypted traffic, virtual private network (VPN) traffic, and all sorts of other supposedly ‘secure’ traffic.
  • Operations Security: Operations are often revealed through indirect information, and the challenge today is the ready availability of so much of this information. The many data breaches add up to a massive body of content on individuals. For example, there are many sites where people advertise their desire for work, typically revealing lots of information, including security clearances, work history, titles and positions, etc. This makes solving the puzzle problem a lot easier than it once was. It supports targeting of individuals for detecting and infiltrating specific operations, and with additional information aggregated from all sources, often provides details on who is going where and when. Automation to support the analysis is now readily available, and with enough computing power and detailed content, the logistics details for operations of all sorts are readily attainable.
  • Information Security: It’s almost not worth mentioning that with the massive leaks of information and the ability to alter that information by those leaking it (as demonstrated by all the ransomware incidents), confidentiality, integrity, availability, use control, accountability, custody, and transparency are all increasingly suspect. A few hundred million people affected by an attack is hardly even a surprise these days.
  • Cybernetic Security: Cybernetic systems have sensors, actuators, communications and control components that are supposed to interact to provide desired functions. Unfortunately, each of these components as a class and many of the specific items of each class have been subverted by supply chain and direct attack mechanisms. As a result, these systems are systematically exploited to use their sensors for surveillance, actuators for influence and physical harm, and feedback systems against the wills of those who put them in place for a purpose. Attacks on these systems are expanding while defenses for these systems are, at best, weakened by connectivity.
  • Personnel Security: Of course people are critical to all of the systems today, but this is being challenged on at least two major fronts:
    • The use of AI to replace people: Recent advances in AI have convinced people in charge that they no longer need as much human expertise to do their jobs. This is nothing new, as automation has replaced many of the so-called menial tasks that people used to perform, like bending metal and painting car bodies. But the information revolution, unlike the now traditional industrial revolution, is not yet ready for prime time in replacing actual people who know what they are doing. The era of mistakes in using computers for the wrong things is not new… to use an ancient (by now) quote:
      “To err is human, to really foul things up it takes a computer”
      I think it may be time to rethink the use of modern GAI for anything you actually want to count on. While people are not perfect, GAI amplifies those imperfections.
    • The weakening of the trust bond: People who are afraid of losing their jobs or who have multiple loyalties are, of necessity, now part of the insider profile of almost all entities operating in the digital arena. There is essentially no hope for checking out enough of the personnel we depend on for most modern systems to have a meaningful effect on people acting against the entity. The supply chain goes too far, the mechanisms are too complex, and the ability to differentiate who to trust to what purpose over what time frame on what basis is only known to a limited extent and practiced to almost no extent.
  • Etc. Security: If I missed your favorite discipline, I apologize. Send me an email and tell me what area you want me to include and I will add it on as an appendix in a future update.

Automated attacks are getting easier

People talk about automated defenses using modern AI, but the reality is that AI-based coding is still poor today. While augmenting expert programmers makes the people more efficient, every attempt I have seen to have AI generate decent software of a non-trivial nature results in programs that are problematic in many ways. Of course you can look up a JavaScript program to do most of the functions you might like to use, but that is not AI; it’s just looking up what other people did, using AI as a sort of search engine, and if you generalize those programs with macros you can probably do a much better job today than GAI can produce.

But when it comes to writing attacks, GAI is probably just fine, because the attack code doesn’t need to be very good as coding goes. It doesn’t usually need to be efficient, or clever, meet any standards, or interface with other systems, or have a decent user interface, or work all the time, or be updated, or be sold or salable, or have many features. It can merely combine the millions of examples already out there in chunks and assemble it all together.

To get a sense of this, in the 1980s I wrote a computer virus generator that produced many viruses per second on a 1980s PC-AT. That’s less computing power than my current Jabra microphone/speakers probably have. Generating malicious code is trivial indeed, and GAI just makes it easier for the lamest among us.

Automated defenses are getting harder

No successful cyber-defense in the last 4 years has failed to use automation, and much of that automation was once called AI before becoming just a regular part of IT. Pattern matching was once AI, and today it is largely assumed. Voice output was once considered AI, and current ‘deep fake’ versions are called AI for now, but really they are neither new nor intelligent. Voice, video, and similar detection to determine human vs. machine is already pretty old hat, and even though fakes are getting good enough to fool humans, it’s not that hard to detect them with computers, if you spend the time and effort to do so.

But real-time generation has long been easier than real-time understanding and detection. It’s trivial to come up with some new generation method and non-trivial to come up with a new detection technology that meets the false positive and false negative requirements of something useful.

Perhaps more importantly, it takes actual knowledge to produce a detection technology that works well enough for real use. While GAI can be used to produce samples of existing knowledge that can then be used to feed into detection methodologies, when it comes to actually organizing things usefully, it seems to still take people. And that takes time and attention and expertise.

What’s much worse is the current state of things like spam detection. This is a field that has been widely operational and in use for more than 30 years. And yet current systems still produce false positives at a ridiculous rate, even for internal communications between people in the same organization using the same email provider. That is, even where sender and recipient are in regular communication, where all of the mechanisms for creating, sending, receiving, and delivering the emails use the current so-called trusted infrastructure technologies, and where people use accounts authenticated with multi-factor authentication from the same systems they use every day. If we cannot do this reasonably well, what hope do we have of getting it right in the wider environment?
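To make the point concrete, a little arithmetic with assumed (illustrative) numbers shows how even a tiny false-positive rate hurts at organizational scale:

```python
# Illustrative arithmetic; the mail volume and false-positive rate are
# assumptions, not measurements of any real system.
internal_mails_per_day = 50_000  # hypothetical mid-size organization
false_positive_rate = 0.001      # 0.1%, better than many deployments achieve

misfiled_per_day = internal_mails_per_day * false_positive_rate
misfiled_per_year = misfiled_per_day * 250  # roughly 250 working days

print(misfiled_per_day, misfiled_per_year)
```

Under these assumptions, some 50 legitimate internal messages get misfiled every working day, over 12,000 a year, and every one of them erodes trust in the mail system a little further.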

People are unwilling or unable to do what it takes

Some folks seem to think that user training and education will solve the problem. I will tell you that this will not work. It’s not some big theoretical concept that I have that brings me to this conclusion. It is merely experience. There are two basic problems not yet solved in user training:

  • What do we train them and how? We seem to train them the wrong things poorly.
  • How well do we expect them to perform after training? Not as well as automation.

I have a general rule in this regard.

  • If we cannot program a computer to do it, we cannot expect to train people to do it.
  • If we can train a computer to do it, people should not be wasted doing it.

In other words: “Hey, teacher, leave us kids alone”.

I am not actually against training. I’m actually a great fan. But relying on it for the billions of human users in the world is a fool’s errand. We should be training people on how not to be scammed, but that is like training children not to get into cars with strangers: necessary, but not sufficient to deal with the real threats attacking today.

Which brings me to the second part of people not being willing to do what it takes. In this case, the problem is not capacity but will. It turns out that being good at any real profession requires a professional attitude and approach, not the legendary gifted amateur who every once in a while does something outstanding. We welcome them of course, but mostly they are not that gifted, and quite amateur in their approaches.

And on the other side of the supply and demand equation, we need people and companies and a public system willing to pay professionals appropriately and provide them with careers, not jobs, not situations you get fired from every 2 years, but more like the model of medical doctors. We need actual employees who are dedicated to keeping up with the progress of their sub-fields, who are very well educated and practice their profession every day. Specialists in professional communities, supported by modern equipment and assistants, certified in their specialties, and serving their patients. While we have people in startups trying to build cybersecurity companies in niches and big companies gobbling them up, most of the startups are not very knowledgeable and most of the big companies are falling behind.

Government research and development in the US is nearing zero in terms of actual understanding and forethought, and the not-invented-here problem looms large. Around the world there are a variety of reasonable small efforts to advance the science and art, but frankly, there is more money in attack than defense, and really good defenses don’t tend to provide recurring revenue. So in the end, too few people are really motivated to make the cyber-security world better for normal people. Those who are motivated in this way find it hard to make enough money to have a career, which means they cannot afford to really be professionals unless they are crazy like me.

Ethical implications and trust

We in the US are living in a world dedicated to the destruction of trust in institutions. And it is working. Trust in government is very low, and for good reason. Trust in large corporations and the legal system are also going down, for good reason as well. As trust goes down, the people put in place are chosen cynically, and only the cynical with massive funding behind them can ultimately get elected. To the extent this has to do with cyber-related issues, the massive and widely publicized failures along with the steady drum-roll of those failures keeps our focus of attention on the problems.

Eventually almost all people will do what it takes to survive then thrive regardless of their normal morality. It is the Maslow’s hierarchy of needs: physiology, safety, love and belonging, self-esteem, and self-actualization. As trust breaks down, we revert to the lower levels, and stop the development of modern society. Eventually, social disruption leads to revolution if it is allowed to go too far. And that is what we are seeing today, enhanced by influence operations and the lack of perceived safety.

Conclusions

As individuals we are screwed, except for the wealthy and highly skilled among us (like me and those who can afford to pay me). Small and medium businesses are screwed because they cannot afford the expertise they need, and the people and companies they buy products and services from are not competent to defend them. Large enterprises are performing pathetically, and care about profits to their investors rather than the well-being of their customers or their societies. Nation states are screwed because good governance is declining while perception management is rising.
