2 min read
From Morris to Morris II: AI Models are Vulnerable to Worms, Too
Chuck Matthews : May 8, 2024 11:52:20 AM MST
Earlier this week, I needed to contact my bank about an error. I tried to get ahold of someone on the bank’s app and website, but they weren't much help. When I called, I found myself talking to an Artificial Intelligence (AI) answering machine, which insisted I record a voice ID pin before proceeding.
Maybe they aren’t aware that OpenAI's Voice generation tool only needs a 15-second sample to clone a voice.
I declined and had to visit a branch office to speak with a human. The representative promptly fixed the error and even agreed that the voice ID pin was unnecessary.
As AI-enabled tools flood the market, we must acknowledge the security risks associated with implementing AI, including vulnerabilities to the models on which they are built. Frustrating stories about AI like mine aren't unique.
The First Malware Worm: The Story of the Morris Worm
A recent article reminded me of the dangers of bleeding-edge technology, when the world's first recognized malware worm, the Morris Worm, was unleashed on November 2, 1988.
Robert Tappan Morris, a graduate student at Cornell University, intended to gauge the size of the internet, but a coding error caused the worm to replicate excessively, wreaking havoc on computers and networks. This event highlighted the potential dangers of malicious software and marked the beginning of a new era in cybersecurity.
The Next Generation: The Arrival of Morris II, the AI Worm
Fast forward to 2024, and another worm, also associated with Cornell, has emerged.
This time, a group of researchers developed what they believe to be the first self-replicating AI worm, aptly named Morris II in homage to the original. Formally known as an "adversarial self-replicating prompt," it targets generative AI powered by Large Language Models (LLMs) like ChatGPT, Llama, and Gemini. The Morris II worm was revealed exclusively in Wired magazine in March.
Researchers demonstrated how the AI worm could attack a generative AI email assistant to steal data from emails and send spam messages, breaking some security protections in ChatGPT and Gemini in the process.
Companies offering LLMs, such as OpenAI, Google, and Meta, acknowledged the vulnerability and stated they were working on a fix. However, the response was less than reassuring, and most people probably never heard about the issue.
Irrational Exuberance for AI in Tech and Finance
Despite these concerns, Wall Street and much of the world continue to exhibit "irrational exuberance" over generative AI.
While 2023 was on pace to be the lowest year for venture funding since 2018, global funding for AI startups reached nearly $50 billion last year (up 9% from 2022), according to Crunchbase data.
Among this fervor, prominent figures like Elon Musk and others have raised concerns about AI's privacy, security, and ethical implications, calling for a pause to assess its proper use.
Yet, the tech industry has rapidly advanced, seemingly unfazed by these warnings.
AI Ethics: A Call for a Strategic Pause and Discussion
With the emergence of Morris II and the potential for AI to be abused, it may be wise to take a timeout.
The tech and financial sectors seem blinded by massive investments in anything AI-related. Companies that were unsellable a few years ago have added "AI" to their names and are now being bought for much higher prices.
International discussions on AI's privacy, security, and ethical use are necessary, and thankfully, are beginning - particularly in Europe.
Just because we can do something doesn't mean we should. If we don't act quickly, the opportunity to control AI's direction may be lost forever.
There is no putting this genie (or rather, worm) back in the bottle!
Chuck Matthews is the CEO of PacketWatch, a cybersecurity firm specializing in Managed Detection and Response (MDR) and incident response, leveraging their proprietary network monitoring platform. With over 35 years of executive experience, Matthews excels in aligning technology with strategic business goals and is a recognized leader in cybersecurity. Chuck has contributed to numerous publications and media outlets, focusing on topics like cybersecurity legislation and best practices.
Posts by Tag
- CEO Perspective (23)
- Compliance (10)
- Incident Response (10)
- GRC (9)
- Vulnerability Management (7)
- Cybersecurity Resilience (5)
- Cyber Insurance (4)
- Artificial Intelligence (AI) (3)
- Full Packet Capture (3)
- HIPAA (3)
- Artificial Intelligence (2)
- Ransomware (2)
- Event (1)
- Legal Industry (1)
- Manufacturing Industry (1)
- Security Risk Assessment (1)
- Zero-Day (1)