Tomorrow’s Internet Will Be A Bot-Infested, Mind-Warping Wasteland

Tomorrow’s internet: a sea of bots on the tails of the more prolific online posters. Crafted deep fake seeding as a major vector for identity theft. Photo by Tima Miroshnichenko from Pexels

Yesterday, one of my favorite tech prognosticators (Peter Duffy, you should sign up for his newsletter) shared a tweet from Tim Ferriss.

Here’s Tim’s tweet:

Tim’s tweet flashed across my screen at one in the morning, just before I drafted a story here about how one guy in California is the internet’s time zone master (no, really). And if you haven’t figured out my circadian rhythm, here it is: I work for clients during the day, do my own writing at night, and catch glimpses of sleep at totally random times, which is one of the joys of the freelancing lifestyle.

I sent Tim back a few tweets in reply. Tim Ferriss being Tim Ferriss and me being a guy in Jerusalem with internet access, I am certain he didn’t see any of them.

But the picture of reality that I tried to get across is so warped that I thought it really merited a bit more long-form space.

So here it is.

We Will See A Massive Proliferation Of Deep Fakes

I don’t think anybody doubts that this is going to happen. But let’s take it as our starting point anyway.

Something many people know about me: I’ve been fascinated, for a long time, with misinformation and how easy it has become to spread it over the internet.

That fascination has led me to a few very disturbing conclusions:

One of them: a college student with internet connectivity has, today, a disturbingly rich social engineering toolbox at their fingertips.

So far, it’s mostly been the object of intrigue.

But very soon (in fact, this is probably already happening) we could see it exploited at scale to really shake the internet up, in a way that may be much closer around the corner than many of us realize.

Creating a somewhat compelling alternative online identity today is quite easy.

And this is just going through the process manually. The threat landscape I’m describing is, of course, more likely to rely upon programmatic methods.

You can, for instance:

  • Very easily use an AI generator to create a fake face
  • Use further AI generators to create derivative images of that fake face to populate social media profiles which look like a real person’s
  • Create imagery that brings multiple fake people together for a family photo consisting entirely of people who either don’t exist or who do exist but are shown in situations they were never in
  • Combine all the above with one of the increasingly credible text to speech (TTS) engines to start a podcast using the voice of your fake person
  • Use a deep fake video generator to create fake video footage involving your fake person and start a YouTube channel under their name

We also have on hand reasonably sophisticated NLP and AI to study and then emulate patterns of speech and preferred modes of writing, such that credible copycat blogs and correspondence could be generated.

These technologies are all rapidly maturing:

https://www.youtube.com/watch?v=gLoI9hAX9dw&ab_channel=BloombergQuicktake

Scraping-Based PII Sniffing Will Become A Major Vector For Identity Takeover Attempts. Deep Fake Seeding And Cloning Will Be Fraudsters’ Go-To Approaches.

Currently, low-level internet scamsters (which is probably most of them) rely upon a variety of methods to leverage the internet’s sheer scale in order to scam people and make money.

To date, technology companies have mostly focused on better filtering to try to thwart the problem.

But without dissing the defensive side (it stops people’s life savings from being stolen), both attackers and defenders are operating at a fairly basic level of sophistication.

I’m talking about phishing — which is the most basic social engineering exploit most people have heard of.

Its secret sauce, currently, is scale:

  • Create (say) a copy of Facebook that looks reasonably credible
  • Buy a domain name that looks close enough to the real thing that some people will fall for it
  • Buy up or scrape a large volume of email addresses
  • Begin sending out fake notification messages demanding a password change
  • Sit back while you siphon off user credentials
  • Then put those together with other personally identifiable information (PII) and work your way slowly towards identity takeover
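One small piece of the defensive side of this playbook can be sketched in a few lines: flagging the lookalike domains phishers register. This is a minimal, hypothetical sketch using plain edit distance; real anti-phishing products combine many richer signals (registration age, homoglyphs, certificate data, and so on).

```python
# Hypothetical sketch: flag lookalike domains of the kind phishers register.
# Plain Levenshtein edit distance only; real tools use far richer signals.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def looks_like(domain: str, trusted: str, max_distance: int = 2) -> bool:
    """True if `domain` is suspiciously close to, but not equal to, `trusted`."""
    return domain != trusted and edit_distance(domain, trusted) <= max_distance

print(looks_like("faceb00k.com", "facebook.com"))  # True
print(looks_like("example.org", "facebook.com"))   # False
```

The threshold of 2 is arbitrary here; the point is only that "close enough to fall for it" is something both attackers and defenders can measure mechanically.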

I’ve had, on my client list, a few startups that all did somewhat similar things (building protection against these attacks, I mean!), so I have some idea of what’s coming to market in the space.

The underlying logic that has connected them: if attackers are targeting human gullibility, we need to create smarter humans who stop clicking on the phishing links and the other lures that current threat actors cook up. Those things are familiar. There are a few nice ideas in development that combine actual phishing protection with human upskilling. But we’re still countering threats that most cybersecurity professionals know very well.

What we haven’t seen nearly as much of yet?

Processes that attempt to exploit online breadcrumbs to quickly build out copycats of real humans for identity takeover purposes. And to do so at scale:

  • Sniff out carelessly strewn PII from the internet
  • Combine that with as much peripheral info as can be gathered from people who are active online
  • Use AI to cook up deep fakes. Or increasingly rich copycat profiles that begin the process of identity theft. Potentially ones that can even pass facial recognition challenges.
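The first of those steps is already trivially automatable. As an illustration, here is the kind of regex-based PII sniffing a scraper might run over public text. The patterns and the sample profile text are invented for the sketch, and deliberately simplified; real scrapers cast far wider nets.

```python
import re

# Illustrative sketch: regex-based PII sniffing over public text.
# Patterns are simplified for demonstration purposes.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def sniff_pii(text: str) -> dict:
    """Return every match of each PII pattern found in `text`."""
    return {label: pat.findall(text) for label, pat in PII_PATTERNS.items()}

# Invented sample of "carelessly strewn" public profile text:
profile = "Reach me at jane.doe@example.com or 555-867-5309."
print(sniff_pii(profile))
```

A few dozen lines more and this becomes a crawler; the barrier to entry is that low.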

This is the threat landscape that we’ll need to focus on protecting against tomorrow.

And as a result:

We’ll Need To Prove Our Identity Constantly Through Identity Checks. But Deep Fake AI Seeding Rather Than Unauthorized Account Access Will Be Our Primary Nemesis

We all love CAPTCHAs and digging out our 2FA generator of choice to peck in credentials, right? More of both are on the way (I predict). Lots more.

If the above threat landscape were to materialize (the generation of deep fake clones at scale by scraping bots leveraging as much data as can be siphoned from the internet), where would that lead us?

To a very dystopian place called: tomorrow’s internet.

It’s one in which we assume that a constant degree of identity replication and deep fake seeding is happening around us all of the time.

This will become a major nuisance for the majority of internet users who are operating legitimately, of course.

We’ll all start to become a bit paranoid whenever we encounter an entity that purports to be another human on the internet.

Are we engaging with a bot? A clone? Or a real person? Is this a real action or is this a potential AI cloner trying to collect intelligence?

For those working in cybersecurity, reality will end up looking quite familiar and different all at the same time.

Sort of like how any major website is constantly being challenged with backend takeover attempts (a fun fact I learned last year: malicious login attempts will basically happen all the time if you run a popular web script like a major CMS on an internet-exposed website.)
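That background noise of takeover attempts is easy to see in miniature. The sketch below counts failed logins per source IP from auth-log-style lines, the way a fail2ban-like tool does before banning noisy sources. The log format and every address in it are invented for illustration.

```python
from collections import Counter

# Toy sketch: tally failed login attempts per source IP, fail2ban-style.
# The log format and IP addresses below are invented for illustration.
LOG = """\
2023-01-01T00:00:01 FAILED login admin from 203.0.113.9
2023-01-01T00:00:02 FAILED login root from 203.0.113.9
2023-01-01T00:00:03 OK login daniel from 198.51.100.7
2023-01-01T00:00:04 FAILED login admin from 203.0.113.9
"""

def failed_logins_by_ip(log: str) -> Counter:
    """Tally FAILED lines by the trailing source IP."""
    tally = Counter()
    for line in log.splitlines():
        parts = line.split()
        if "FAILED" in parts:
            tally[parts[-1]] += 1
    return tally

print(failed_logins_by_ip(LOG))  # Counter({'203.0.113.9': 3})
```

Run something like this against a real popular website’s logs and the counters never stop ticking, which is exactly the point.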

The industry will assume that consumers are constantly facing identity cloning attempts and bring solutions to market that attempt to thwart this process. Most likely that process will be focused on reactive defenses.

Instead of creating products that detect, say, SSN changes to provide reactive warnings of fraud (by plugging into a database that contains verified legitimate entries), we’ll be using reverse search lookups and reverse voice detection (we’ve seen rather little of the latter) to keep on top of emerging attempts to deep fake clone ourselves. We’ll need to work with verified real humans so that we know we’re not comparing fakes with fakes.

That’s still reactive.

Although we’re trying to mitigate a new kind of malfeasance:

  • Daniel 1 has a Medium account and a Twitter profile that weakly validate one another for the purpose of proving that Daniel 1 is Daniel 1 to both networks. One links to the other.
  • Fraudsters can very quickly replicate that process with fake accounts, creating Daniel 2. And create contact forms on those web assets which will intercept email intended for Daniel 1.
  • Our validation process is now useless.
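The weakness is structural, and a toy model makes it obvious: the "mutual link" check only tests the shape of the graph, so a fraudster’s cloned pair passes exactly the same test as the real pair. All profile names below are invented.

```python
# Toy model of the weak cross-link check described above: two profiles
# "validate" each other merely by linking to one another. All invented.
def mutually_linked(profiles: dict, a: str, b: str) -> bool:
    """The naive check: each profile lists the other among its links."""
    return b in profiles[a]["links"] and a in profiles[b]["links"]

profiles = {
    "medium/daniel1":  {"links": ["twitter/daniel1"]},
    "twitter/daniel1": {"links": ["medium/daniel1"]},
    # A fraudster replicates the same structure in minutes:
    "medium/daniel2":  {"links": ["twitter/daniel2"]},
    "twitter/daniel2": {"links": ["medium/daniel2"]},
}

print(mutually_linked(profiles, "medium/daniel1", "twitter/daniel1"))  # True
print(mutually_linked(profiles, "medium/daniel2", "twitter/daniel2"))  # True: same "proof"
```

Any validation scheme the impostor can reproduce end to end validates the impostor just as well as it validates you.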

The end point?

We’ll live in an increasingly mind-bending internet in which we’re all rubbing shoulders with accounts on Reddit, Twitter, and everywhere else that social chatter happens which aren’t actually real.

They could be anything from marketing bots to bots trying to convince us that they’re real people to … real people. What if … the real humans are outnumbered?

Scamming will move away from a numbers-based game (distribute one million phishing emails because we know, predictably, that 0.1% will fall for it, enabling us to cash in on, say, a ransomware injection).

And towards something that will be much more targeted (sort of like account based marketing!).

Home in on the targets who have voluntarily shared the most personally identifiable information (PII) online, and who have thereby provided the most fertile dataset from which to begin an identity cloning attempt, for much the same set of reasons as fraudsters have always had.
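The numbers-game economics above fit on the back of an envelope. The send volume and success rate are the article’s own illustrative figures; the payout per victim is an assumed figure added purely for the sketch.

```python
# Back-of-envelope expected value for the mass phishing "numbers game".
# emails_sent and success_rate are the article's illustrative figures;
# assumed_payout is a hypothetical average take per victim, for the sketch.
emails_sent = 1_000_000
success_rate = 0.001      # the 0.1% who fall for it
assumed_payout = 500      # hypothetical USD per victim

victims = emails_sent * success_rate
expected_take = victims * assumed_payout
print(victims, expected_take)  # 1000.0 500000.0
```

Targeted cloning flips this: far fewer marks, but a far higher hit rate and payout per mark.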

It’s cybercrime moving from the era of the mass approach to the tailored one.

Fraudsters are no longer the online equivalent of petty burglars going house to house looking for assets that are improperly secured.

They’re much more sophisticated adversaries.

AI is in one hand. Constant surveillance is in the other.

Together they are used to execute strategies that scam and defraud by cloning and impersonating rather than forcing unauthorized access and emptying bank accounts.

It’s not using a chunk of metal to barge through the door. It’s studying the real house-owner with binoculars for a few weeks before creating somebody that looks and talks and sounds just like the house-owner. And then simply having them walk straight in the front door.

Cybercriminals as something more like PII artists whose expertise is in convincing others that they are somebody else through increasingly elaborate deep fakes.

And all of us facing increased checks to verify our own identity as a result.

(Much in the same way that a few successful terrorist attacks have forced hundreds of millions of air travelers every day to abandon water bottles and take off their shoes at security. This is frequently how security works. The disruptive minority force the peaceful majority to live with daily restrictions on their freedom.)

Are you ready for it?

It could happen much sooner than any of us think.

Daniel Rosehill

Daytime: writing for other people. Nighttime: writing for me. Or the other way round. Enjoys: Linux, tech, beer, random things. https://www.danielrosehill.com