Cybersecurity In-Depth: Feature articles on security strategy, latest trends, and people to know.

The good news is that Web servers have come a long way in terms of security. But to err is human, even for IT and security people.

Kacy Zurkus, Contributing Writer

August 5, 2019

5 Min Read

When the security industry thinks about breaches caused by human error, the image of an employee accidentally clicking on a malicious link in a phishing email often comes to mind. But to err is human, even for IT and security people, especially when it comes to Web servers.

Web servers themselves have come a long way in terms of security. Think back to the nascent days of Apache and how the server earned its name. "It was, 'A-PAtCHy server' based on applying a number of patch files against an older server platform," says Geoff Walton, senior security consultant at TrustedSec.

But the industry has moved beyond that world. Today, whether you are running Apache, IIS, Nginx, or some combination thereof, "all of these and some others have benefited from years of hardening and security improvements," Walton says. "Most Web security challenges today are really found in the applications running on those servers."

If Servers Are Misconfigured
Configuration errors made by administrators are probably the biggest risks to Web servers themselves in modern deployments, according to Walton.

Server misconfiguration issues include "inappropriate directory permissions, running the server itself as an account with excessive privileges, enabling handlers or plugins for scripts, and APIs that are not needed or should be restricted to specific applications or documents," he says. "These and the selection of weak SSL/TLS cipher algorithms are all still common problems."
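The weak-cipher problem, at least, can be addressed directly in code. As an illustrative sketch (not drawn from anyone quoted here), Python's standard `ssl` module can build a server-side context that refuses legacy protocol versions and non-forward-secret cipher suites:

```python
import ssl

def hardened_server_context() -> ssl.SSLContext:
    """Sketch of a TLS server context that rejects legacy protocols and ciphers."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSLv3 and TLS 1.0/1.1
    # Allow only forward-secret AEAD suites for TLS 1.2;
    # the TLS 1.3 suites remain enabled by default.
    ctx.set_ciphers("ECDHE+AESGCM")
    return ctx
```

In a real deployment you would also call `ctx.load_cert_chain(...)` with your certificate and key before handing the context to the server; the point here is simply that strong defaults can be pinned down in one place rather than left to chance.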

According to WhiteHat Security, the unnecessary default and sample application files, scripts, Web pages, and configuration files that often ship with servers contribute to these misconfiguration issues. "They may also have unnecessary services enabled, such as content management and remote administration functionality. Debugging functions may be enabled or administrative functions may be accessible to anonymous users," writes WhiteHat Security.
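A first pass at hunting for those leftovers can be sketched in a few lines. The paths below are hypothetical examples of the kind of thing a real scanner's much larger wordlist would cover, and the HTTP fetcher is injected as a callable so the check stays testable:

```python
from typing import Callable, Iterable, List

# Hypothetical sample of leftover default/sample paths; real scanners
# use far larger, server-specific wordlists.
DEFAULT_LEFTOVERS = [
    "/server-status",   # Apache mod_status
    "/phpinfo.php",
    "/test.php",
    "/iisstart.htm",    # IIS default page
]

def find_exposed_defaults(base_url: str,
                          status_of: Callable[[str], int],
                          paths: Iterable[str] = DEFAULT_LEFTOVERS) -> List[str]:
    """Return every path that answers 200 OK; status_of(url) -> HTTP status."""
    return [p for p in paths if status_of(base_url.rstrip("/") + p) == 200]
```

In practice `status_of` would wrap something like `urllib.request`; injecting it keeps the logic runnable against a stub and, more importantly, keeps the probe list reviewable as part of a hardening checklist.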

In some cases, misconfigurations can also trigger unexpected authentication and authorization behavior that differs from what administrators have configured for a hosted application's Web directory.

Removing the Target from Your Back
It's really not possible to discuss best practices in Web server security without also mentioning Web application security.

Because the Web applications you run are a hacker's most likely target for abuse, "if you discover the application you plan to host requires you to soften your hardened platform, it's a good indicator that the application isn't following best practices and should trigger an investigation into why," Walton says.

Applications can be strengthened at the development phase, when you want to ensure you have a strong secure software development life cycle (secSDLC) process in place. Additionally, application defenses can be addressed at runtime, when a Web application firewall (WAF) can provide effective detection and prevention controls.
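What a WAF does at runtime can be illustrated with a toy WSGI middleware. To be clear, the patterns below are deliberately simplistic stand-ins for the curated, tunable rule sets real WAF products ship with:

```python
import re

# Illustrative patterns only: path traversal, script injection, SQL injection.
SUSPICIOUS = re.compile(r"(\.\./|<script|union\s+select)", re.IGNORECASE)

class TinyWAF:
    """Toy WSGI middleware that rejects requests whose target matches a rule."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        target = environ.get("PATH_INFO", "") + "?" + environ.get("QUERY_STRING", "")
        if SUSPICIOUS.search(target):
            body = b"Forbidden"
            start_response("403 Forbidden",
                           [("Content-Type", "text/plain"),
                            ("Content-Length", str(len(body)))])
            return [body]
        # Clean request: pass through to the wrapped application.
        return self.app(environ, start_response)
```

Even this toy version shows the trade-off the article returns to later: generic rules catch generic attacks, while anything application-specific requires someone who knows the application to write the rules.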

Improper server or Web application configuration can lead not only to flaws, but also to cyberattacks, according to the OWASP Foundation's Top Ten 2017 Project. That is why it's important to detect when an application is vulnerable. If an application is "missing appropriate security hardening across any part of the application stack," it could be vulnerable, OWASP states.

"Without a concerted, repeatable application security configuration process, systems are at a higher risk," OWASP adds.

Patch, Patch, and Patch Again
Because Web servers are mostly Internet-facing, and the Internet is littered with bots, it is likely that a bot is probing your Web server right now for exploitable vulnerabilities, says Joseph Carson, chief security scientist at Thycotic. A successful compromise can then be used to embed malicious scripts that steal credentials, financial details, or sensitive personal information, or to deploy malware to unwitting visitors.

"The best way to stay one step ahead of those Internet bots is to consistently patch your Web servers and ensure that you don't leave the door open to cybercriminals waiting for the moment you forget to update major security vulnerabilities," he adds.

In addition, security teams should combine the principle of least privilege with strong privileged access management. "Web servers should only be accessible via authorized and approved employees who should only access the Web server when scheduled or planned maintenance is expected," says Carson.

That means locking down access to Web servers with strict privileged access management controls and restricting access to only those employees who are permitted to view or make changes.

Hold onto the Vendor Guide!
According to Walton, the best way to address Web server and Web application challenges is to consult the vendors' security best practices guide and follow it.

"Roll that up into your organization's platform standards for server and container configurations. Once you have a solid platform configuration and repeatable process, it becomes much easier to avoid the most serious problems," Walton adds.
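A repeatable configuration process lends itself to automation. As a hedged sketch, the baseline below is a hypothetical organizational standard of the kind Walton describes, and `audit()` reports every deviation from it:

```python
# Hypothetical organizational baseline; in practice these values come from
# the vendor's hardening guide rolled into your platform standards.
BASELINE = {
    "server_tokens": "off",     # don't advertise server version strings
    "directory_listing": "off",
    "min_tls_version": "1.2",
    "run_as_user": "www-data",  # unprivileged service account
}

def audit(effective: dict, baseline: dict = BASELINE) -> dict:
    """Return {setting: (expected, actual)} for every deviation from baseline."""
    return {key: (want, effective.get(key))
            for key, want in baseline.items()
            if effective.get(key) != want}
```

Run against each server's effective configuration in CI or a scheduled job, an empty result means the platform still matches the standard; anything else is drift worth investigating.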

Taking the time to include security code reviews for new features and changes before deployment or even integrating application security testing into your QA process will also enhance server security. In addition, developers should make it a point to remain aware of the most common threats to Web-based applications and how they can be avoided.

"This enables them to put those things into practice or validate that the tools they are selecting do so," Walton says.

Leveraging static source code analyzers and Web vulnerability scanners can provide developers a great deal of insight, Walton adds. "These tools will help to find potential risks in large existing projects that may have been present for years," he says. "They can also spot problems in new work."

Web vulnerability scanners look at the application at run time and exercise it, using probes against all inputs to detect risky application behavior, according to Walton. They may also detect issues in third-party components and other parts of the overall application stack that a static analysis tool does not have visibility into. However, Walton says, "they are usually unable to detect authorization and business logic issues as effectively."

No single best practice can defend against determined attackers, and no combination of these steps is a magical elixir. While most products come with a solid set of general rules out of the box, getting the most out of any tool requires putting someone familiar with the behavior of the application being protected in charge of building out custom rule sets.

Even then, Walton warns, "highly crafted targeted attacks might still be able to be effective before blocking rules and alerts are triggered."

Image Source: Siarhei via Adobe Stock
About the Author(s)

Kacy Zurkus

Contributing Writer

Kacy Zurkus is a cybersecurity and InfoSec freelance writer as well as a content producer for Reed Exhibitions' security portfolio. Zurkus is a regular contributor to Security Boulevard and IBM's Security Intelligence. She has also contributed to several publications, including CSO Online, The Parallax, InfoSec Magazine, and K12 Tech Decisions. She covers a variety of security and risk topics and has also spoken on a range of cybersecurity topics at conferences and universities, including SecureWorld and NICE K12 Cybersecurity in Education.
