How the historic Internet worm attack of 1988 has shaped security - or not.
Michele Guel was sound asleep on Nov. 3, 1988, when the call came at 3:30 a.m.: An unknown virus had infiltrated NASA Ames Research Center's Sun Microsystems file servers and workstations and was sapping their resources, slowing them to a crawl. She headed to the lab in the dead of night and with her team at NASA scrambled to stop the attack. They manually powered down each machine, one by one. "We walked around and shut them down ... [and] unplugged the cables," Guel recalls.
The attack was draining memory resources of NASA's computers and spreading fast among its Digital Equipment Corp. (DEC) VAX, Silicon Graphics Unix, and Cray supercomputer machines as it targeted systems running the 4 BSD Unix operating system. To spread, it exploited security flaws in the sendmail email program and the finger user-lookup daemon, used brute-force password cracking, and took advantage of a Unix trust feature that let users of one system log into another without a password.
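Of those vectors, the password cracking is the easiest to sketch. In 1988, Unix password hashes were world-readable in /etc/passwd, so the worm could guess candidates offline: it tried permutations of the account name, an internal list of a few hundred common words, and the system dictionary. The sketch below illustrates that dictionary-attack loop only; the hash function, account name, and wordlist are all stand-ins, not the worm's actual code (which attacked the old Unix crypt() scheme, not SHA-256).

```python
import hashlib

def toy_hash(password: str, salt: str) -> str:
    # Stand-in for the 1988 Unix crypt() scheme; SHA-256 is used here
    # purely so the example is self-contained and runnable.
    return hashlib.sha256((salt + password).encode()).hexdigest()

# A captured /etc/passwd-style table: user -> (salt, stored hash).
# The account and password are hypothetical.
shadow = {"rtm": ("ab", toy_hash("wizard", "ab"))}

def guess_passwords(entries, wordlist):
    """Offline dictionary attack: try each candidate word against each
    account's stored hash. Because the hashes were world-readable,
    no failed-login alarms ever fired."""
    cracked = {}
    for user, (salt, stored) in entries.items():
        for word in wordlist:
            if toy_hash(word, salt) == stored:
                cracked[user] = word
                break
    return cracked

print(guess_passwords(shadow, ["password", "wizard", "guest"]))
```

Once a password was guessed, the trusted-hosts mechanism often let the worm hop to every other machine that account could reach, with no further cracking needed.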
"In the moment, it was all hands on deck. There was a need to get the workstations fixed, get them back up and not reinfected, so scientists could get back to their work as fast as possible," says Guel, then a lab administrator at NASA Ames. The supercomputer systems in the Mountain View, Calif., facility also were used by outside organizations such as Boeing, and for projects such as space capsule rocket design.
NASA, along with the US Department of Defense, Harvard, Lawrence Livermore National Laboratory, MIT, UC-Berkeley, Stanford, and several other major universities and government research arms, all were hit that day with a worm that knocked out servers and workstations connected to the then-nascent Internet, a cloistered and collegial community of academia, research and development, military, and government users.
The worm was a graduate student experiment gone awry: Robert Tappan Morris, then a Cornell University computer science student, later confessed that he wrote the program to spread as much as possible around the Internet in order to gauge its size but not to cause harm or take down machines. His project backfired, though, ultimately crashing some 6,000 Unix-based machines – 10% of the Internet at that time – leaving systems out of commission and offline for two or more days.
It's been three decades since Morris first unleashed his Frankenstein-esque worm on the evening of Nov. 2. It was the first major Internet security event and a loss of innocence for the young Net. "No one was preparing" for something like that, recalls Guel, who is now the chief security architect for Cisco's security and trust organization. "The Internet was a big happy place, and we were all using it for good purposes. It did catch us all by surprise."
But the lessons of the Morris Worm still haunt Internet security today, according to experts who responded to and cleaned up the attack. While its impact ultimately led to the emergence of the information security industry, some of the same security issues that let the worm rapidly wriggle its way through the Internet, from machine to machine, surround networks today: weak passwords, vulnerable software, and a lack of layered security. Today's worms are more dangerous than ever and are mostly in the hands of nation-states and other malicious actors.
The new generation of self-replicating malware variants has become a handy tool for spreading and dropping destructive payloads, such as ransomware in the 2017 WannaCry attack by North Korea, and the data-wiping exploits transported by NotPetya, in which Russian military hackers attacked mostly Ukrainian targets with destructive software posing as ransomware. These Morris Worm descendants make their payload-less ancestor seem almost quaint in comparison.
"Ultimately, you can draw a line from then until now because [the Morris Worm] was a seminal event. It was the first time we realized a global, connected infrastructure was going to be globally vulnerable," says Paul Vixie, who pioneered the Internet's Domain Name System (DNS). The mass infection also demonstrated how running a mix of different types of computer systems and operating systems can save the Internet from an all-out outage, notes Vixie, who battled the worm while working for DEC in Palo Alto, Calif., where just a few research computers slowed as the worm tried to replicate among them. The attack was the first buffer overflow attack he had ever seen.
"It consumed a lot of resources while trying to spread," he recalls, though email and the Internet gateway stayed up at the DEC research site. "I stayed up all night listening to email chatter about various people that had been affected or hadn't been."
Organizations today that are moving toward more homogeneous computing environments are risking a single point of failure during a big cyberattack, such as a worm, according to Eugene "Spaf" Spafford, who was one of the first to analyze the Morris Worm after battling the attack on Purdue University while a software engineer there. He says the network community learned a lot from the worm, but even all these years later still hasn't applied those lessons across the board, leaving them at risk. "Organizations that have the same one or two platforms, and the same storage technology, and same baseband network – if something bad happens like ransomware, it sweeps through the whole organization," Spaf says.
Purdue computers that weren't running 4 BSD Unix, including the university's Sun SPARC machines and its Sequent machine in the computer science department, emerged unscathed by the worm. "We had a very divided computing environment at that time, so, as a result, we didn't lose a lot [of systems]," Spaf says. "Our DEC VAX and Sun machines either slowed down too much or, on a couple of occasions, crashed. It was primarily brute-forcing email servers. We had one or two classroom machines that went down."
The Morris Worm was all about spreading from machine to machine; once it did so, it attempted to hide out by changing its process name and deleting temporary files, for instance. At each computer it hit, the code checked whether the target already was infected with the worm. That check was flawed: to defeat fake "already infected" replies, roughly one new copy in seven stayed resident regardless of the answer, so duplicate copies accumulated on busy machines and, ultimately, produced an apparently unintended denial-of-service effect.
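The population-control logic described above can be sketched in a few lines. This is an illustration drawn from published analyses of the worm, not its actual C source; the function name and the injectable random source are inventions for the example.

```python
import random

IMMORTAL_ODDS = 7  # per analyses of the worm: ~1 in 7 copies persists anyway

def should_keep_running(already_infected: bool, rng=random.random) -> bool:
    """Sketch of the worm's self-limiting check. A new copy that finds
    an existing one on the host is supposed to exit -- but one time in
    seven it declares itself 'immortal' and stays, a hedge against
    defenders faking an 'already infected' reply to inoculate machines.
    The net effect: copies pile up and the host grinds to a halt."""
    if not already_infected:
        return True  # first copy on this host always runs
    return rng() < 1.0 / IMMORTAL_ODDS
```

On a well-connected host being probed over and over, the expected resident population grows with every arrival instead of staying at one, which is why infected machines slowed to a crawl even though the worm carried no destructive payload.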
For NASA Ames, the Morris Worm infection and computer outage was a rude wakeup call that physical security alone wasn't enough anymore for the lab. "Many Unix workstations and one of our Crays and big VAXes were grinding to a halt," Guel recalls. "It took us off the map." For most operations, the fact that the worm took off in the later hours of the day and evening probably saved a lot of initial downtime for the affected organizations, but at NASA Ames, its many projects ran round-the-clock. "The Cray [supercomputer] ran 24/7 ... and there was always a backlog of processes [by engineers], so there was a lot of processing time lost," she says.
After the worm had been eradicated from NASA's computers, Guel was tasked with building the lab's first security team, establishing a patching process and strong password policy, configuration management, and building an incident response team – the same tasks many organizations still wrestle with now. "Today there are still organizations that struggle to monitor 24/7, where they can have the right level of visibility or enough people," she says.
Text-based and reused passwords remain a poor but pervasive practice today, Cisco's Guel points out, and organizations are still patching software and failing at basic security hygiene: running programs as root, using weak passwords, and leaving unnecessary services running.
On Nov. 2, 1988, Spaf had just celebrated his wedding anniversary with a day off and a nice dinner with his wife. On the morning of Nov. 3, he woke up and logged into his home machine to check his email. He immediately noticed something was wrong. "I discovered my lab machine running an insane process load," he recalls, so he rebooted and went to get ready for work while the machine reset.
"When I came back, the load was climbing upward rapidly, and I knew something was wrong," he recalls.
Spaf hurried to his office at Purdue, where he found other university machines experiencing similar problems. So he disconnected them from the network. "We began to piece together what was happening, and by midafternoon we had come up with a mostly reliable way of keeping the worm from reinfecting the machines and began to tell people to bring machines back online," he says.
Around the same time, experts at UC-Berkeley and MIT were comparing notes and sharing their analyses of the attack. But communication among the Internet community was basically cut by the worm, since most members couldn't access their online and email connections during the attack. "Many of us knew each other online, but we didn't have each other's phone numbers. One of the lessons [of the worm] was that we needed a more reliable out-of-band connection," Spaf says. "The concerns many of us had were who set this off and why – and how do we get the word out to deal with it?"
So that very day he set up an online mailing list called the Phage List, where responders could communicate during the incident. DARPA soon funded the Computer Emergency Response Team at Carnegie Mellon, the CERT, established within weeks of the attack, to help organizations coordinate responses to cyberattacks.
There were few widely available tools to analyze software in '88, with the exception of disassemblers, Spaf recalls, so much of the Morris Worm analysis came via manual debugging. "It was a very tedious process," he says.
A few hours after the attack, the Computer Systems Research Group at Berkeley developed a temporary patch to stop the worm's spread. It later issued software patches for the 4 BSD Unix operating system.
Spaf, now the executive director emeritus of Purdue University's Center for Education and Research in Information Assurance and Security and a professor of computer science there, argues that most problems in security today, post-Morris Worm, are well-known and actually can be avoided or prevented. "But it costs money and possibly interferes with the way people are currently doing business. And security just isn't valued enough in most of these environments to want to take those steps," he says. That's partly because there aren't sufficient ways to measure security in order to employ the right business decisions, he adds.
When organizations today suffer major attacks such as ransomware, they typically just patch the affected systems rather than rethink the security architecture that allowed the attack, he explains.
Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing and Secure Enterprise.