The Dangers of Default Cloud Configurations
Default settings can leave blind spots, but the issue is avoidable.
January 16, 2023
5 Min Read
When you hear "default settings" in the context of the cloud, a few things can come to mind: default admin passwords when setting up a new application, a public AWS S3 bucket, or default user access. Often, vendors and providers prioritize customer usability and ease over security when choosing default settings. One thing needs to be clear: Just because a setting or control is the default doesn't mean it's recommended or secure.
Below, we'll review some examples of defaults that can leave your organization at risk.
Microsoft Azure
Azure SQL Databases, unlike Azure SQL Managed Instances, have a built-in firewall that can be configured to allow connectivity at the server or database level. This gives users plenty of options to ensure only the intended clients are talking to the database.
For applications inside Azure to connect to an Azure SQL Database, there is an "Allow Azure Services" setting on the server that sets the starting and ending IP addresses to 0.0.0.0. Called "AllowAllWindowsAzureIps," it sounds harmless, but this option configures the Azure SQL Database firewall to allow connections not only from your own Azure resources but from any Azure customer's. By enabling this feature, you open your database to connections from other tenants, putting far more pressure on your logins and identity management.
One thing to check is whether any public IP addresses are allowed to reach the Azure SQL Database. It is unusual to need this and, while you can keep the default, that doesn't mean you should. To reduce the attack surface of a SQL server, define firewall rules with granular IP addresses: enumerate the exact set of addresses that should connect, from both data centers and other resources.
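As a sketch of the audit described above, the check amounts to scanning server firewall rules for the 0.0.0.0–0.0.0.0 sentinel range that "Allow Azure Services" creates. The rule names and data shape below are illustrative, not pulled from any Azure SDK:

```python
from dataclasses import dataclass


@dataclass
class FirewallRule:
    """A simplified Azure SQL server firewall rule (illustrative shape)."""
    name: str
    start_ip: str
    end_ip: str


def find_allow_azure_rules(rules: list[FirewallRule]) -> list[str]:
    """Flag rules spanning 0.0.0.0-0.0.0.0, the sentinel range that
    permits connections from ANY Azure tenant, not just your own."""
    return [
        rule.name
        for rule in rules
        if rule.start_ip == "0.0.0.0" and rule.end_ip == "0.0.0.0"
    ]


rules = [
    # The default created by "Allow Azure Services"
    FirewallRule("AllowAllWindowsAzureIps", "0.0.0.0", "0.0.0.0"),
    # A granular rule scoped to one known address
    FirewallRule("office-egress", "203.0.113.10", "203.0.113.10"),
]
print(find_allow_azure_rules(rules))  # → ['AllowAllWindowsAzureIps']
```

In a real environment you would feed this the rule list exported from your server and replace any flagged entry with granular rules like the second one above.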
Amazon Web Services (AWS)
Amazon EMR is AWS' big-data solution, offering data processing, interactive analytics, and machine learning on open source frameworks. Yet Another Resource Negotiator (YARN) is the resource manager for the Hadoop framework that EMR uses. The concern is that YARN on EMR's main node exposes a REST API that lets remote users submit new applications to the cluster, and the relevant AWS security controls are not enabled by default.
This default configuration is easy to miss because it sits at a couple of different crossroads. It is something we find with our own policies that look for ports open to the Internet, but because EMR is a platform, customers can be confused by the fact that an underlying EC2 infrastructure makes EMR work. Checking the configuration adds to the confusion: in the EMR settings, the "block public access" option appears enabled. Even with this default setting enabled, EMR exposes ports 22 and 8088, which can be used for remote code execution. Unless access is blocked by a service control policy (SCP), an access control list, or an on-host firewall (e.g., Linux iptables), known scanners on the Internet are actively looking for these defaults.
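A minimal sketch of the port check described above: given security-group ingress entries shaped like the `IpPermissions` structures that `aws ec2 describe-security-groups` returns, flag the EMR-relevant ports (22 and 8088) reachable from the whole Internet. The sample data is hypothetical:

```python
# SSH and the YARN ResourceManager REST API, per the exposure discussed above
EXPOSED_EMR_PORTS = {22, 8088}


def open_emr_ports(ip_permissions: list[dict]) -> list[int]:
    """Return EMR-relevant ports open to the entire Internet (0.0.0.0/0)."""
    hits = set()
    for perm in ip_permissions:
        world_open = any(
            r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", [])
        )
        if not world_open:
            continue
        lo = perm.get("FromPort", 0)
        hi = perm.get("ToPort", 65535)
        hits |= {p for p in EXPOSED_EMR_PORTS if lo <= p <= hi}
    return sorted(hits)


perms = [
    # SSH open to the world — flagged
    {"FromPort": 22, "ToPort": 22, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    # YARN REST API restricted to an internal range — not flagged
    {"FromPort": 8088, "ToPort": 8088, "IpRanges": [{"CidrIp": "10.0.0.0/8"}]},
]
print(open_emr_ports(perms))  # → [22]
```

The same logic could run against live output from the AWS API; anything it flags is a candidate for an SCP, ACL, or host-firewall rule.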
Google Cloud Platform (GCP)
GCP embodies the idea that identity is the new perimeter of the cloud, and it utilizes a powerful, granular permissions system. However, the pervasive issue that affects people most concerns Service Accounts, and it is called out in the CIS Benchmarks for GCP.
Because Service Accounts are used to give services in GCP the ability to make authorized API calls, the defaults applied at creation are frequently misused. A Service Account can be impersonated by other Users or other Service Accounts. It's important to understand the deeper risk surrounding these default settings, which can amount to fully unfettered access to your environment. In other words, in the cloud, a simple misconfiguration can have a greater blast radius than meets the eye: a cloud attack path can start at a misconfiguration but end at your sensitive data through privilege escalations, lateral movement, and covert effective permissions.
All user-managed (but not user-created) default Service Accounts have the Editor role assigned to them to support the GCP services they back. The fix isn't necessarily a simple removal of the Editor role, as doing so might break the service's functionality. This is where a deep understanding of permissions becomes important: you must know exactly which permissions the Service Account is and isn't using, and how that changes over time. Because a programmatic identity is potentially more susceptible to misuse, leveraging a security platform to achieve least privilege becomes vital.
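The first step toward the least-privilege work described above is simply finding where a default Service Account still holds Editor. A sketch of that scan, run against an IAM policy shaped like the JSON from `gcloud projects get-iam-policy` (the project number and members below are made up):

```python
def default_editor_bindings(policy: dict, project_number: str) -> list[str]:
    """Return the members that are default Compute Engine service accounts
    (<project-number>-compute@developer.gserviceaccount.com) holding
    the broad roles/editor role."""
    default_sa = (
        f"serviceAccount:{project_number}-compute@developer.gserviceaccount.com"
    )
    return [
        member
        for binding in policy.get("bindings", [])
        if binding.get("role") == "roles/editor"
        for member in binding.get("members", [])
        if member == default_sa
    ]


policy = {
    "bindings": [
        {
            "role": "roles/editor",
            "members": [
                "serviceAccount:123456789012-compute@developer.gserviceaccount.com"
            ],
        },
        {"role": "roles/viewer", "members": ["user:analyst@example.com"]},
    ]
}
print(default_editor_bindings(policy, "123456789012"))
# → ['serviceAccount:123456789012-compute@developer.gserviceaccount.com']
```

Anything this flags still needs the usage analysis described above before Editor can safely be replaced with narrower roles.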
While these are just a few examples within the major clouds, I hope this will inspire you to take a close look at your controls and configurations. Cloud providers aren't perfect. They are susceptible to human error, vulnerabilities, and security gaps, just like the rest of us. And while cloud service providers offer exceptionally secure infrastructure, it's always best to go the extra mile and never be complacent in your security hygiene. Often, a default setting leaves blind spots, and achieving true security takes effort and maintenance.
About the Author(s)
Principal Security Architect, Sonrai Security
Nathan Schmidt is an American technophile focused on cybersecurity through the confidentiality, integrity, and availability of data. In addition to his work in cybersecurity, he is the founder of a privately funded mentorship program that encourages non-traditional and tangentially skilled workers to find success in the world of technical solution selling.
He joined Sonrai Security in 2021 as a Principal Solutions Architect. His cyber journey started as a core and pioneering member leading many efforts and services through a 17-year expedition at Rackspace, followed in 2018 by global work with system integrators via Thales CPL on encryption, HSMs, and data protection systems. An avid researcher who uses his extensive knowledge and experience to protect organizations from cyber threats and data breaches, Nathan's expertise and dedication to customer service and excellence in the field have made him a valuable asset to clients and colleagues alike.