Dark Reading is part of the Informa Tech Division of Informa PLC


Vulnerabilities / Threats

The Morris Worm Turns 30

How the historic Internet worm attack of 1988 has shaped security - or not.

Michele Guel was sound asleep on Nov. 3, 1988, when the call came at 3:30 a.m.: An unknown virus had infiltrated NASA Ames Research Laboratory's Sun Microsystems file servers and workstations and was sapping their resources, slowing them to a crawl. She headed to the lab in the dead of the night and with her team at NASA scrambled to stop the attack. They manually powered down each machine, one by one. "We walked around and shut them down ... [and] unplugged the cables," Guel recalls.

The attack was draining the memory of NASA's computers and spreading fast among its Digital Equipment Corp. (DEC) VAX, Silicon Graphics Unix, and Cray supercomputer machines as it targeted systems running the version 4 BSD Unix operating system. To propagate, it exploited security flaws in the Sendmail email application and the Finger network user-lookup service, brute-forced weak passwords, and took advantage of a Unix trust feature that allowed users of one system to log in to another without a password.

"In the moment, it was all hands on deck. There was a need to get the workstations fixed, get them back up and not reinfected, so scientists could get back to their work as fast as possible," says Guel, then a lab administrator at NASA Ames. The supercomputer systems in the Mountain View, Calif., facility also were used by outside organizations such as Boeing, and for projects such as space capsule rocket design.

NASA, along with the US Department of Defense, Harvard, Lawrence Livermore National Laboratory, MIT, UC-Berkeley, Stanford, and several other major universities and government research arms, all were hit that day with a worm that knocked out servers and workstations connected to the then-nascent Internet, a cloistered and collegial community of academia, research and development, military, and government users.

The worm was a graduate student experiment gone awry: Robert Tappan Morris, then a Cornell University computer science student, later confessed that he wrote the program to spread as much as possible around the Internet in order to gauge its size but not to cause harm or take down machines. His project backfired, though, ultimately crashing some 6,000 Unix-based machines – 10% of the Internet at that time – and leaving systems out of commission and offline for two or more days.

It's been three decades since Morris first unleashed his Frankenstein-esque worm on the evening of Nov. 2. It was the first major Internet security event and a loss of innocence for the young Net. "No one was preparing" for something like that, recalls Guel, who is now the chief security architect for Cisco's security and trust organization. "The Internet was a big happy place, and we were all using it for good purposes. It did catch us all by surprise."

But the lessons of the Morris Worm still haunt Internet security today, according to experts who responded to and cleaned up the attack. While its impact ultimately led to the emergence of the information security industry, some of the same security issues that let the worm rapidly wriggle its way through the Internet, from machine to machine, surround networks today: weak passwords, vulnerable software, and a lack of layered security. Today's worms are more dangerous than ever and are mostly in the hands of nation-states and other malicious actors.

The new generation of self-replicating malware has become a handy tool for spreading and dropping destructive payloads, such as ransomware in the 2017 WannaCry attack by North Korea, and the data-wiping exploits transported by NotPetya, in which Russian military hackers attacked mostly Ukrainian targets with destructive software posing as ransomware. These Morris Worm descendants make their payload-less ancestor seem almost quaint in comparison.

"Ultimately, you can draw a line from then until now because [the Morris Worm] was a seminal event. It was the first time we realized a global, connected infrastructure was going to be globally vulnerable," says Paul Vixie, who pioneered the Internet's Domain Name System (DNS). The mass infection also demonstrated how running a mix of different types of computer systems and operating systems can save the Internet from an all-out outage, notes Vixie, who battled the worm while working for DEC, in Palo Alto, Calif., where just a few research computers slowed as the worm tried to replicate among them. The attack was the first buffer overflow attack he had ever seen.

"It consumed a lot of resources while trying to spread," he recalls, though email and the Internet gateway stayed up at the DEC research site. "I stayed up all night listening to email chatter about various people that had been affected or hadn't been." 

Organizations today that are moving toward more homogeneous computing environments are risking a single point of failure during a big cyberattack, such as a worm, according to Eugene "Spaf" Spafford, who was one of the first to analyze the Morris Worm after battling the attack on Purdue University while a software engineer there. He says the network community learned a lot from the worm, but even all these years later still hasn't applied those lessons across the board, leaving them at risk. "Organizations that have the same one or two platforms, and the same storage technology, and same baseband network – if something bad happens like ransomware, it sweeps through the whole organization," Spaf says.

Purdue computers that weren't running 4 BSD Unix, including the university's Sun SPARC machines and its Sequent machine in the computer science department, emerged unscathed by the worm. "We had a very divided computing environment at that time, so, as a result, we didn't lose a lot [of systems]," Spaf says. "Our DEC VAX and Sun machines either slowed down too much or, on a couple of occasions, crashed. It was primarily brute-forcing email servers. We had one or two classroom machines that went down."

The Morris Worm was all about spreading from machine to machine; once it landed, it attempted to hide by changing its process name and deleting temporary files, for instance. At each computer it hit, the code checked whether the target was already infected with the worm. An apparent bug in this check led to multiple copies of the worm running on the same machines and, ultimately, an apparently unintended denial-of-service effect.
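That flawed self-check can be modeled in a few lines. In the sketch below, an already-infected host is reinfected anyway on every Nth attempt; accounts of the real worm differ on the exact mechanism and constant (it was reportedly meant to defeat fake "already infected" replies), so treat the numbers and the ring topology as illustrative.

```python
def simulate(hosts: int, rounds: int, reinfect_every: int = 7) -> list[int]:
    """Toy model of the Morris Worm's reinfection bug on a ring of hosts.

    copies[i] counts worm processes running on host i. A correct
    duplicate check would cap each host at one copy; the bug lets
    copies accumulate, and system load climbs with them.
    """
    copies = [0] * hosts
    copies[0] = 1                        # patient zero
    attempt = 0
    for _ in range(rounds):
        snapshot = copies[:]
        for src, n in enumerate(snapshot):
            for _ in range(n):           # every running copy tries to spread
                dst = (src + 1) % hosts  # toy topology: infect your neighbor
                attempt += 1
                # The bug: an infected host is reinfected anyway on every
                # `reinfect_every`-th attempt, so duplicates pile up.
                if copies[dst] == 0 or attempt % reinfect_every == 0:
                    copies[dst] += 1
    return copies


print(simulate(hosts=5, rounds=5))  # prints [1, 2, 1, 2, 1]: some hosts already carry duplicates
```

Remove the `attempt % reinfect_every` clause and every host levels off at a single copy; with it, total load grows without bound, which is the accidental denial of service that NASA's and Purdue's responders were fighting.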

For NASA Ames, the Morris Worm infection and computer outage was a rude wakeup call that physical security alone was no longer enough for the lab. "Many Unix workstations and one of our Crays and big VAXes were grinding to a halt," Guel recalls. "It took us off the map." For most organizations, the fact that the worm took off in the later hours of the day and evening probably limited the initial downtime, but at NASA Ames, its many projects ran round-the-clock. "The Cray [supercomputer] ran 24/7 ... and there was always a backlog of processes [by engineers], so there was a lot of processing time lost," she says.

After the worm had been eradicated from NASA's computers, Guel was tasked with building the lab's first security team: establishing a patching process, a strong password policy, and configuration management, and standing up an incident response team – the same tasks many organizations still wrestle with now. "Today there are still organizations that struggle to monitor 24/7, or to have the right level of visibility or enough people," she says.

Michele Guel, chief security architect, Cisco

Weak and reused text-based passwords continue to be a poor but pervasive practice today, Cisco's Guel points out. We're still patching software late and failing at basic security hygiene, she says: running programs as root, using weak passwords, and leaving unnecessary programs running.

Spaf's Story
On Nov. 2, 1988, Spaf had just celebrated his wedding anniversary with a day off and a nice dinner with his wife. On the morning of Nov. 3, he woke up and logged into his home machine to check his email. He immediately noticed something was wrong. "I discovered my lab machine running an insane process load," he recalls, so he rebooted and went to get ready for work while the machine reset.

"When I came back, the load was climbing upward rapidly, and I knew something was wrong," he recalls.

Spaf hurried to his office at Purdue, where he found other university machines experiencing similar problems. So he disconnected them from the network. "We began to piece together what was happening and by midafternoon, we had come up with a mostly reliable way to keep the worm from reinfecting the machines and began to tell people to bring machines back online," he says.

Around the same time, experts at UC-Berkeley and MIT were comparing notes and sharing their analyses of the attack. But communication among the Internet community was effectively cut off by the worm, since most members couldn't access their online and email connections during the attack. "Many of us knew each other online, but we didn't have each other's phone numbers. One of the lessons [of the worm] was that we needed a more reliable out-of-band connection," Spaf says. "The concern many of us had was who set this off and why – and how do we get the word out to deal with it?"

So that very day he set up an online message/list forum called the Phage List, where responders could communicate during the incident. DARPA soon funded the Computer Emergency Response Team at Carnegie Mellon, the CERT, which opened in early 1989 to help organizations coordinate responses to cyberattacks.

There were few widely available tools to analyze software in '88, with the exception of disassemblers, Spaf recalls, so much of the Morris Worm analysis came via manual debugging. "It was a very tedious process," he says.

A few hours after the attack, the Computer Systems Research Group at Berkeley developed a temporary patch to stop the worm's spread. It later issued software patches for the 4 BSD Unix operating system.

Spaf, now the executive director emeritus of Purdue University's Center for Education and Research in Information Assurance and Security and a professor of computer science there, argues that most problems in security today, post-Morris Worm, are well-known and can actually be avoided or prevented. "But it costs money and possibly interferes with the way people are currently doing business. And security just isn't valued enough in most of these environments to want to take those steps," he says. That's partly because there aren't sufficient ways to measure security to inform the right business decisions, he adds.

When organizations nowadays suffer major attacks such as ransomware, they typically just patch the system rather than rethink the security architecture that allowed the attack in the first place, he explains.



Kelly Jackson Higgins is the Executive Editor of Dark Reading. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise ...
