
Edge Articles
8/5/2019 07:15 AM
Kacy Zurkus
How to Keep Your Web Servers Secure

The good news is that Web servers have come a long way in terms of security. But to err is human, even for IT and security people.

When the security industry thinks about breaches caused by human error, the image of an employee accidentally clicking on a malicious link in a phishing email often comes to mind. But to err is human, even for IT and security people, especially when it comes to Web servers.

Web servers themselves have come a long way in terms of security. Think back to the nascent days of Apache and how the server earned its name. "It was, 'A-PAtCHy server' based on applying a number of patch files against an older server platform," says Geoff Walton, senior security consultant at TrustedSec.

But the industry has moved beyond that world. Today, whether you are running Apache, IIS, Nginx, or some combination thereof, "all of these and some others have benefited from years of hardening and security improvements," Walton says. "Most Web security challenges today are really found in the applications running on those servers."

If Servers Are Misconfigured
Configuration errors made by administrators are probably the biggest risks to Web servers themselves in modern deployments, according to Walton.

Server misconfiguration issues include "inappropriate directory permissions, running the server itself as an account with excessive privileges, enabling handlers or plugins for scripts, and APIs that are not needed or should be restricted to specific applications or documents," he says. "These and the selection of weak SSL/TLS cipher algorithms are all still common problems."
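
One quick way to spot the weak-cipher problem from the outside is to check what a server actually negotiates. Below is a minimal sketch using Python's standard ssl module; the hostname is a placeholder, and you should only test servers you operate.

```python
import socket
import ssl

def report_tls(host: str, port: int = 443) -> None:
    """Connect to a server and report the negotiated TLS version and cipher."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            # cipher() returns (name, protocol, secret_bits),
            # e.g. ('TLS_AES_256_GCM_SHA384', 'TLSv1.3', 256)
            name, proto, bits = tls.cipher()
            print(f"{host}: {tls.version()}, {name} ({bits}-bit)")
            if bits is not None and bits < 128:
                print("  WARNING: weak cipher negotiated")

report_tls("example.com")  # placeholder hostname
```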

According to WhiteHat Security, the unnecessary default and sample application, script, Web page, and configuration files that often ship with servers contribute to these misconfiguration issues. "They may also have unnecessary services enabled, such as content management and remote administration functionality. Debugging functions may be enabled or administrative functions may be accessible to anonymous users," writes WhiteHat Security.
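
Leftover sample content is easy to check for. The sketch below probes a handful of well-known default paths; the path list is illustrative rather than exhaustive, and it should only be pointed at hosts you are authorized to test.

```python
import urllib.error
import urllib.request

# Illustrative default/sample paths; extend the list for your platform.
DEFAULT_PATHS = ["/server-status", "/server-info", "/phpinfo.php",
                 "/test.php", "/iisstart.htm", "/manual/"]

def probe_defaults(base_url: str) -> None:
    """Report default or sample files that still respond on the server."""
    for path in DEFAULT_PATHS:
        try:
            with urllib.request.urlopen(base_url + path, timeout=5) as resp:
                print(f"FOUND {path} -> HTTP {resp.status}")
        except urllib.error.HTTPError:
            pass  # 404/403: nothing exposed at this path
        except urllib.error.URLError:
            break  # host unreachable; stop probing

probe_defaults("https://example.com")  # placeholder; scan only hosts you own
```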

In some cases, misconfigurations can also trigger unexpected authentication and authorization behavior that differs from what administrators have configured for a hosted application's Web directory.

Removing the Target from Your Back
It's really not possible to discuss best practices in Web server security without also mentioning Web application security.

Because the Web applications you run are a hacker's most likely target for abuse, "if you discover the application you plan to host requires you to soften your hardened platform, it's a good indicator that the application isn't following best practices and should trigger an investigation into why," Walton says.

Applications can be strengthened in the development phase by ensuring you have a strong secure software development life cycle (secSDLC) process in place. Application defense can also be addressed at runtime, when a Web application firewall (WAF) can provide an effective detection and prevention control.
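
To illustrate the runtime side, here is a toy WAF-style filter written as WSGI middleware. The rules are deliberately simplistic placeholders; a production WAF ships far richer, tunable rule sets.

```python
import re

# Toy rules for illustration only; do not treat these as real coverage.
RULES = [
    (re.compile(r"(?i)<script"), "possible XSS"),
    (re.compile(r"(?i)union\s+select"), "possible SQL injection"),
    (re.compile(r"\.\./"), "possible path traversal"),
]

class TinyWAF:
    """WSGI middleware that inspects the query string and blocks on a rule match."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        query = environ.get("QUERY_STRING", "")
        for pattern, label in RULES:
            if pattern.search(query):
                start_response("403 Forbidden", [("Content-Type", "text/plain")])
                return [f"Blocked: {label}\n".encode()]
        return self.app(environ, start_response)

# Usage: wrap any WSGI application, e.g. app = TinyWAF(app)
```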

Improper server or Web application configuration can lead not only to flaws, but also to cyberattacks, according to the OWASP Foundation's Top Ten 2017 Project. That is why it's important to detect when an application is vulnerable. If an application is "missing appropriate security hardening across any part of the application stack," it could be vulnerable, OWASP states.

"Without a concerted, repeatable application security configuration process, systems are at a higher risk," OWASP adds.

Patch, Patch, and Patch Again
Because Web servers are mostly Internet-facing, and the Internet is littered with bots, it is likely that a bot is probing your Web server for exploitable vulnerabilities it can use to embed malicious scripts that steal credentials, financial details, or sensitive personal information, or to deploy malware to unknowing visitors, says Joseph Carson, chief security scientist at Thycotic.

"The best way to stay one step ahead of those Internet bots is to consistently patch your Web servers and ensure that you don't leave the door open to cybercriminals waiting for the moment you forget to update major security vulnerabilities," he adds.

In addition, security teams should combine the principle of least privilege with strong privileged access management. "Web servers should only be accessible via authorized and approved employees who should only access the Web server when scheduled or planned maintenance is expected," says Carson.

That means go ahead and lock down access to the Web servers with strict privileged access management controls and restrict access to only those employees who are permitted to view or make changes. 
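
One small, scriptable piece of that lockdown is auditing the document root for permissions that violate least privilege. A sketch, assuming a conventional /var/www layout:

```python
import os
import stat

def audit_web_root(root: str = "/var/www") -> None:
    """Flag world-writable files under the web root, a common least-privilege gap."""
    for dirpath, _dirs, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.stat(path).st_mode
            except OSError:
                continue  # broken symlink or permission denied
            if mode & stat.S_IWOTH:
                print(f"world-writable: {path}")

audit_web_root()  # the document root path is an assumption; adjust to yours
```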

Hold onto the Vendor Guide!
According to Walton, the best way to address Web server and Web application challenges is to consult the vendors' security best practices guide and follow it.

"Roll that up into your organization's platform standards for server and container configurations. Once you have a solid platform configuration and repeatable process, it becomes much easier to avoid the most serious problems," Walton adds.

Taking the time to include security code reviews for new features and changes before deployment or even integrating application security testing into your QA process will also enhance server security. In addition, developers should make it a point to remain aware of the most common threats to Web-based applications and how they can be avoided.

"This enables them to put those things into practice or validate that the tools they are selecting do so," Walton says.

Leveraging static source code analyzers and Web vulnerability scanners can provide developers a great deal of insight, Walton adds. "These tools will help to find potential risks in large existing projects that may have been present for years," he says. "They can also spot problems in new work."
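
As one concrete way to wire a static analyzer into QA, the sketch below shells out to Bandit, a static analyzer for Python codebases; the source directory is a placeholder, and you would substitute whatever analyzer fits your stack.

```python
import subprocess
import sys

def run_sast(source_dir: str = "src") -> int:
    """Run Bandit recursively over a source tree; a nonzero exit fails the QA gate."""
    return subprocess.run(["bandit", "-r", source_dir]).returncode

if __name__ == "__main__":
    sys.exit(run_sast())  # wire into CI so findings block the build
```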

Web vulnerability scanners look at the application at run time and exercise it, using probes against all inputs to detect risky application behavior, according to Walton. They may also detect issues in third-party components and other parts of the overall application stack that a static analysis tool does not have visibility into. However, Walton says, "they are usually unable to detect authorization and business logic issues as effectively."
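
A stripped-down version of that probing idea fits in a few lines: push a marker string through each input and flag responses that echo it back. The target URL and parameter names here are hypothetical, and a reflected marker is only a hint that output encoding deserves review.

```python
import urllib.parse
import urllib.request

MARKER = "zx9probe"  # unlikely string; seeing it echoed back suggests unescaped output

def probe_reflection(base_url: str, params: list[str]) -> None:
    """Send a marker through each query parameter and flag responses that echo it."""
    for param in params:
        url = f"{base_url}?{urllib.parse.urlencode({param: MARKER})}"
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                body = resp.read().decode("utf-8", errors="replace")
        except OSError:
            continue  # unreachable or error response; move on
        if MARKER in body:
            print(f"parameter {param!r} reflects input -- review output encoding")

# Hypothetical target and parameters; probe only applications you may test.
probe_reflection("https://example.com/search", ["q", "page", "sort"])
```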

No single best practice can defend against determined attackers, and no combination of these steps is a magical elixir. While most products come with a solid set of general rules out of the box, getting the most out of any tool requires that someone familiar with the behavior of the application it is protecting is in charge of building out custom rule sets.

Even then, Walton warns, "highly crafted targeted attacks might still be able to be effective before blocking rules and alerts are triggered."

Image Source: Siarhei via Adobe Stock
Kacy Zurkus is a cybersecurity and InfoSec freelance writer as well as a content producer for Reed Exhibition's security portfolio. Zurkus is a regular contributor to Security Boulevard and IBM's Security Intelligence. She has also contributed to several publications, ...

Comments

tdsan, User Rank: Ninja
8/6/2019 | 4:06:41 PM
I agree with the last few points
No single best practice can defend against determined attackers, and no combination of these steps is a magical elixir. While most products come with a solid set of general rules out of the box, getting the most out of any tool requires that someone familiar with the behavior of the application it is protecting is in charge of building out custom rule sets.
Even then, Walton warns, "highly crafted targeted attacks might still be able to be effective before blocking rules and alerts are triggered."

I think these points are extremely relevant, especially when we are talking about nation-state actors, but we have to make it extremely hard for them to access Web servers:

  • Linux & Network Tools:
    • iptables/ufw - Firewalls (a minimal baseline sketch follows this list)
    • Fail2ban - IPS/IDS
    • SELinux - Security-Enhanced Linux
    • Remove unauthorized users
    • Login with security keys (keep in lock box)
    • Train personnel and perform psychological evaluations
    • Move away from IPv4 to IPv6 (enable IPSec AES256 ESP/AH VPN connection between two sites using MPLS rd#:# connections)
    • Implement TRILL for R-Bridge configuration where IPv6 runs atop of it
    • Implement NGFW where they connect to a ML site to process numerous attacks
    • Reduce the size of your attack surface (email traffic goes through a proxy, and Web traffic goes through an NGFW with embedded intelligence that processes at line speed - ASICs)
    • Create a baseline OS that is locked down, accepted by all
    • Implement Kubernetes for web applications and lock down those applications using Carbon-Black/Blue-Vector, Sophos or any machine learning capability
    • Monitor the systems using Extrahop (monitors performance, availability and application performance using an assortment of metrics)
    • Provide training to all personnel, mentor personnel with senior/advanced security personnel
    • Implement simulation tests with all personnel to help gather metrics of how the team and individuals are performing (depends on the level of comfort)
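
For the iptables/ufw item above, a minimal default-deny baseline might look like the following sketch. It assumes a Debian/Ubuntu host with ufw installed and must run as root; the allowed ports are examples.

```python
import subprocess

# Default-deny baseline; port choices are examples, not a recommendation.
BASELINE_RULES = [
    ["ufw", "default", "deny", "incoming"],
    ["ufw", "default", "allow", "outgoing"],
    ["ufw", "allow", "22/tcp"],   # SSH; restrict the source range in production
    ["ufw", "allow", "443/tcp"],  # HTTPS
]

def apply_baseline() -> None:
    """Apply the firewall baseline rule by rule, then enable ufw."""
    for rule in BASELINE_RULES:
        subprocess.run(rule, check=True)
    subprocess.run(["ufw", "--force", "enable"], check=True)
```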

Just something to think about.

T