By David Fiser
According to the description, the Azure App Service is used to “quickly and easily create enterprise-ready web and mobile apps for any platform or device, and deploy them on a scalable and reliable cloud infrastructure.” In other words, it provides a ready-to-use infrastructure for applications.
From a technical perspective, the service runs inside a Docker container. The container image contains a language interpreter for the chosen runtime stack. Developers can bind the App Service to their code repository (e.g., GitHub) and build a continuous delivery and continuous integration (CD/CI) pipeline for deploying the code inside App Service.
Figure 1. Azure App Services with CD/CI integration
Once a commit is pushed to the GitHub repository, a GitHub Actions (GHA) task is executed, building a Docker image for the linked Azure App Services account. When a customer accesses the HTTP endpoint of the service, a container is spawned to serve the request. From a security perspective, there is no access token saved within the build container. Instead, an access token for Azure App Services is stored within GHA, and is thus limited to its security boundary.
Security and environmental analysis
Upon request, the developer-provided code is executed within the container. Since that code can contain vulnerabilities, we analyzed the options available to an attacker who exploits one. That means examining the boundaries of the environment: in this case, the container, its permissions, its configuration, and their consequences.
This includes the permissions of the user the application executes as and the capabilities available within the container. In the Python and Node.js container images, for example, the code runs with root permissions. Attackers who find and exploit a vulnerability, such as one that lets them spawn a shell, gain the same permissions as the current user. This contradicts the security principle of least privilege and effectively widens an attacker's options in case of compromise.
Figure 2. User permissions within the App Services environment-container running in Python
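Any deployed handler can verify this for itself. A minimal sketch, using only the standard library and assuming nothing App Service-specific:

```python
import os

# Minimal sketch: report whether the current process runs as root (UID 0).
# Inside the Python or Node.js App Service images discussed above, a web
# application running this check would report True.
def running_as_root() -> bool:
    return os.geteuid() == 0

print(f"effective UID: {os.geteuid()}, root: {running_as_root()}")
```

Running this from a request handler is a quick way to audit which user a given runtime stack actually executes code as.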
Unfortunately, even when the application runs under a low-privileged user such as www-data (as in the PHP container image), privileges can easily be escalated to root within the running container. This is due to the non-security-oriented container design used for App Services: the root password is exposed in the documentation and is the same for every instance spawned from the default repository, and the documentation even proposes reusing the same password for custom-made containers.
Figure 3. A screenshot of a container customization tutorial from Azure
Figure 4. An example of privilege escalation of a running container within App Services
Our analysis showed that the root user still does not have all capabilities on the host. It is limited by the container isolation scheme, meaning the container is not running in privileged mode. The available capabilities within the container are:
Looking at the list, we saw one dangerous and unnecessary capability: CAP_NET_RAW allows an attacker to create raw sockets and craft low-level packets, putting additional stress on the infrastructure. The level of compromise depends on which services are reachable given the cloud network's design, but examples of infiltration include the discovery of services, DNS attacks, TCP/IP attacks exploiting previously documented gaps with assigned CVEs (e.g., CVE-2020-14386), or flooding devices accessible within the network with packets.
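This check can be reproduced from inside any Linux container by decoding the CapEff bitmask in /proc/self/status. A sketch (CAP_NET_RAW is capability number 13 on Linux):

```python
import os

CAP_NET_RAW = 13  # capability number assigned to CAP_NET_RAW on Linux

def effective_caps(status_text: str) -> int:
    """Parse the effective-capability bitmask out of /proc/<pid>/status text."""
    for line in status_text.splitlines():
        if line.startswith("CapEff:"):
            return int(line.split()[1], 16)
    return 0

def has_cap_net_raw(status_text: str) -> bool:
    # Test whether bit 13 is set in the effective capability set.
    return bool(effective_caps(status_text) & (1 << CAP_NET_RAW))

# Check the current process when running on Linux:
if os.path.exists("/proc/self/status"):
    with open("/proc/self/status") as f:
        print("CAP_NET_RAW:", has_cap_net_raw(f.read()))
```

The same parsing works for CapPrm and CapBnd lines if a fuller audit of the container's capability sets is wanted.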
The SMB3 volume mounted in the /home folder is also interesting. It gives the container stateful storage: anything stored there remains intact across container spawns. The SMB service is also known to be prone to vulnerabilities; should newer exploits similar to EternalBlue appear, they could be abused to compromise the hosted storage service.
The App Service is intended to host web applications or services. An ordinary web request is handled within milliseconds (up to 0.5 seconds), since it should be served as soon as possible for a good user experience. Our tests found that the timeout for HTTP requests is around 240 seconds, and additional processes spawned within the container can live up to 10 minutes. And while there are no “official” timeout settings available, the developer can influence this within the code itself.
From the user's perspective, this is beneficial for running complex tasks. However, it also opens the door to attacks such as denial of service (DoS). Moreover, scaling the service out is required when a request hangs until the timeout trigger is reached, and a hang triggered repeatedly will eventually exhaust the scalability limit. Users who need higher timeouts should first ask whether this is the suitable service at all.
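The timeout boundary described above can be probed empirically by timing a deliberately slow request. A sketch, using a local stand-in server with a one-second delay (against a real endpoint whose handler sleeps, the elapsed time would approach the roughly 240-second limit we observed):

```python
import http.server
import threading
import time
import urllib.request

class SlowHandler(http.server.BaseHTTPRequestHandler):
    delay = 1.0  # seconds; raise toward the platform limit when probing

    def do_GET(self):
        time.sleep(self.delay)  # simulate a long-running handler
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"done")

    def log_message(self, *args):  # keep the demo quiet
        pass

def measure_request(url: str, max_wait: float = 300.0) -> float:
    """Return how many seconds the request was held open."""
    start = time.monotonic()
    try:
        urllib.request.urlopen(url, timeout=max_wait).read()
    except OSError:
        pass  # timed out or reset: the elapsed time marks the boundary
    return time.monotonic() - start

server = http.server.HTTPServer(("127.0.0.1", 0), SlowHandler)
threading.Thread(target=server.handle_request, daemon=True).start()
elapsed = measure_request(f"http://127.0.0.1:{server.server_port}/")
server.server_close()
print(f"request held open for {elapsed:.1f} s")
```

Raising `delay` step by step against a test deployment shows where the platform, rather than the client, terminates the connection.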
We previously emphasized the need for proper secrets management within the DevOps environment, as secrets play an important role in securing it. But how does Azure App Services stand in terms of secrets management?
In our analysis, we distinguished between user-defined secrets and platform-defined secrets. For user-defined secrets, the responsibility lies fully with the user; Azure provides the Key Vault service for managing them.
On the other hand, a common practice in the DevOps community is to use environment variables for secrets within container environments. We strongly disagree with this approach: the secrets do not need to be present in the environment the entire time, and they create an additional attack vector, since environment variables are copied into every child process by design. A simple environment leak, for example, would expose the secrets.
With that in mind, we don't recommend using application settings for storing secrets within App Services, as they are exposed as environment variables upon execution. The same recommendation applies to connection strings; the only difference is the variable prefix.
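The child-process inheritance is easy to demonstrate. In the sketch below, the name DB_PASSWORD and its value are hypothetical; the point is that the child process receives the secret without ever being handed it explicitly:

```python
import os
import subprocess
import sys

# Hypothetical secret, standing in for anything placed in application
# settings or a connection string.
os.environ["DB_PASSWORD"] = "hunter2"

# Spawn a child process that is never passed the secret as an argument...
leaked = subprocess.run(
    [sys.executable, "-c",
     "import os; print(os.environ.get('DB_PASSWORD', ''))"],
    capture_output=True,
    text=True,
).stdout.strip()

# ...yet it can read it from its inherited environment.
print("child sees:", leaked)
```

Any vulnerability that lets an attacker run a process or dump the environment (e.g., reading /proc/&lt;pid&gt;/environ) therefore exposes every secret stored this way.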
Figure 5. Application settings within the App Services
On the other hand, the application settings also contain environment variables that are part of the system architecture design; the user can't remove these, as they are provided by default for the service to run. From a security perspective, the user should focus on the following items:
- WEBSITE_AUTH_ENCRYPTION_KEY: The generated key is used as the encryption key by default. To override this automated key, set it to a desired key. This is recommended if the user shares tokens or sessions across multiple apps.
- WEBSITE_AUTH_SIGNING_KEY: The generated key is used as the signing key by default. To override this, set it to a desired key. This is recommended if the user shares tokens or sessions across multiple apps.
According to the Azure documentation, these keys are used for encryption and signing; we are sure such keys are not good candidates for environment variables.
Available network boundaries can provide another angle that an attacker could use for infiltration. We identified the following types:
- Incoming connections
- Outgoing connections
- Accessible devices within the local area network (LAN)
The actual boundaries depend on the user's cloud architecture, use cases, and scenarios. App Service maps the HTTP endpoint to the container, where the application listens for connections; the container also exposes port 2222 for SSH connections. The Web SSH gateway, which requires Azure authentication, is used for initiating the SSH connection to the container, so an unauthenticated user can't connect to it.
By default, outbound connections are possible and not limited, allowing an attacker to spawn a reverse shell upon successful exploitation of the user code.
The container's default accessible network contains a minimum of three IP addresses: one for the container's own network interface, one for the default gateway used to access the internet, and one for incoming SSH connections from Web SSH.
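A compromised container can enumerate such neighbors with nothing more than the standard library. The sketch below probes a single host and port pair (for instance, the Web SSH address on port 2222 mentioned above); the address shown is a placeholder, not a documented App Service value:

```python
import socket

def port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: probe the SSH sidecar port from inside the container.
# 172.16.0.1 is a hypothetical address for illustration only.
print(port_open("172.16.0.1", 2222))
```

Restricting outbound traffic and segmenting the VNet, as recommended below, is what limits how far such a probe can see.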
Figure 6. Default network scheme
Conclusions and recommendations
The default settings can be altered by modifying the network settings, using Azure Virtual Networks (VNets) and hybrid connections, among other options. As different organizations will have multiple variations and use cases depending on their needs, companies and developer teams should apply the principle of least privilege. From a network perspective, this means denying all traffic besides what is deemed necessary for the application to work, especially if the network consists of multiple endpoints within one VNet.
Overall, what does this mean for the App Services consumer? The answer is simple: the user is the biggest security risk, either by misconfiguring the cloud service and creating a wider attack surface or by deploying code that has vulnerabilities. To mitigate the risks, we suggest the following best practices and recommendations:
- Exercise peer reviews of the code
- Execute continuous testing of the code
- Secure secrets and don't trust environment variables
- Follow the principle of least privilege
- Configure the services assuming a breach or attack scenario (to minimize the impact of a breach)
Is there anything that Azure can do to address these issues? First, it could make sure that the default container images don't run web applications with root permissions. Second, it would be great to change the mindset within the DevOps community of using environment variables to store secrets within the runtime. Even if Azure, or any cloud service provider for that matter, does everything else in a safe manner (such as providing TLS, encryption, and others), these little details can degrade security. Ultimately, the strength of the chain is defined by its weakest link.