How to Automate in Azure Using PowerShell - Part 2

In this post, we will discuss automation approaches to mitigating risks identified in Part 1 of the How to Automate in Azure Using PowerShell series.

Authored by Josh Johnson

Introduction to Automating Azure Using PowerShell

Welcome to Part Two of a three-part series on automating Azure security. In the first post, we covered how to identify and evaluate Azure security risks using PowerShell automation. Using cloud-native capabilities, PowerShell can be used for automated reporting to ensure actionable data is presented to stakeholders on a regular basis for constant, accurate understanding of security posture in Azure.

In this post, we will discuss automation approaches to mitigating the risks identified in Part 1 of the series. While there are many ways to achieve a strong security posture in the cloud, the two methods discussed in this post are redeploying resources using ARM/Bicep templates and using Azure Policy. ARM/Bicep templates are Infrastructure as Code (IaC) deployment options for controlling aspects of cloud services, including configuration options and security controls. Azure Policy is a native service to configure and ensure …you guessed it… policy compliance across cloud resources.

Setup Information

To follow along, you’ll need the following:

  • An Azure Tenant with resources deployed
  • The Az PowerShell module (a quick install sketch follows this list)
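
If you don't already have the Az module, here is a minimal setup sketch, assuming a recent PowerShell version with access to the PowerShell Gallery:

# Install the Az module for the current user and sign in
Install-Module -Name Az -Scope CurrentUser -Repository PSGallery
Connect-AzAccount
# Confirm the target subscription is visible
Get-AzSubscription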

NOTE: This blog post will walk through a fictitious cloud resource group with a resource that is currently misconfigured and identified as risky. Please fully test all commands and techniques prior to deploying to production infrastructure as the changes below may inhibit desired functionality depending on the use cases of deployed resources.

Option 1 - Infrastructure as Code

One of the great enabling features of modern cloud platforms is the ability to define infrastructure as code. Rather than logging into a GUI and button-clicking our way to a production environment, we can simply define our infrastructure using a standard, human-readable format such as JSON or YAML. These files can be managed in source control to ensure proper versioning, change control, and even release capabilities to deploy infrastructure in true CI/CD fashion. IaC approaches help reduce drift because there is a clear, documented standard configuration. Releasing that configuration via pipelines ensures that the standard is reapplied if drift occurs between releases. Another wonderful aspect of this technology is idempotence. From an automation standpoint, rather than several ‘if/else’ statements to first check the state of a system to see if changes are needed, we simply define what the desired state should look like. The underlying engine that processes the configuration file can then perform that logic and simply do nothing if the target is already in the correct configuration.
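
To make the idempotence point concrete, here is a hedged sketch contrasting the two styles. The resource group and storage account names are hypothetical, and the declarative version assumes the Bicep CLI is installed so the cmdlet can transpile the .bicep file:

# Imperative style: check current state, then act only if needed
$sa = Get-AzStorageAccount -ResourceGroupName 'throwaway-resources' -Name 'examplestorageacct'
if ($sa.AllowBlobPublicAccess) {
    Set-AzStorageAccount -ResourceGroupName 'throwaway-resources' -Name 'examplestorageacct' -AllowBlobPublicAccess $false
}

# Declarative style: describe the desired state and let the deployment engine reconcile it
New-AzResourceGroupDeployment -ResourceGroupName 'throwaway-resources' -TemplateFile .\storage.bicep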

Vendor agnostic solutions such as Terraform offer excellent capabilities to use infrastructure as code, while abstracting away the underlying cloud platform. Such capabilities make these tools well-suited for multi-cloud deployments.

In Azure, there are two popular Azure-native technologies supporting this methodology: Azure Resource Manager (ARM) templates and Bicep. ARM templates are JSON-based and certainly human-readable, but the format is slightly more complex. Bicep, the newer of the two, provides an even simpler format for defining infrastructure. Under the hood, Bicep files are transpiled to ARM templates prior to deployment, as Azure deployments still consume the ARM format. Both options are fully supported and provide great Infrastructure as Code options.

With respect to the development environment, Bicep offers a superior experience for users working in VS Code. The syntax is simple and extremely easy to comprehend, and Microsoft publishes a VS Code extension that allows for ease of development. The extension supports decompiling ARM to Bicep for editing as well as compiling Bicep to ARM for deployment. Popular CI/CD vendors also provide features for automating this functionality.

For an existing deployment with known vulnerabilities, a resource or resource group can be exported to its template format. This can be done from Azure Portal or from PowerShell. Looking back at Part 1, let’s consider the ‘throwaway-resources’ resource group, which contains a deliberately vulnerable storage account with public access enabled.

First, let’s use PowerShell to connect to Azure and export a template representing the current configuration of the resource group and resources:

Connect-AzAccount 
Set-AzContext -Context (Get-AzSubscription -SubscriptionName 'throwaway') 
Export-AzResourceGroup -ResourceGroupName 'throwaway-resources' -Path C:\temp\throwaway_template.json

https://images.contentstack.io/v3/assets/blt36c2e63521272fdc/bltf71c90bc1ee0dda8/6401024194e69a2df810fa8a/Part_2_-_Image_1.png

We’ll review any warnings and inspect the template itself to ensure everything looks good. Notice that the resulting JSON file is readable and could easily be edited by hand.

https://images.contentstack.io/v3/assets/blt36c2e63521272fdc/blt0428e93991811568/64010241e70dd635488d02ef/Part_2_-_Image_2.png

Using the Bicep VS Code extension, this JSON format can be easily decompiled into Bicep format via the command palette option, Bicep: Decompile into Bicep, producing a new code tab with a .bicep file.

https://images.contentstack.io/v3/assets/blt36c2e63521272fdc/bltae2abb46c307af96/640102418d9f9910878e6831/Part_2_-_Image_3.png

The resulting resource definition is cleaner and easy to read:

https://images.contentstack.io/v3/assets/blt36c2e63521272fdc/bltaca65fea356dc5f4/6401024110ff830ebb55fa58/Part_2_-_Image_4.png

From here, we can simply edit the file, changing the following two settings for demonstration purposes:

allowBlobPublicAccess: false
supportsHttpsTrafficOnly: true

Now we can save the file; it’s ready for deployment. Certainly, it is appropriate to review all settings and ensure proper configuration for production workloads. Before we can deploy the resource group, we must overcome one limitation: either install the local Bicep CLI or convert the file back to JSON for deployment using the VS Code extension.

https://images.contentstack.io/v3/assets/blt36c2e63521272fdc/blt50a264d9227c9062/640102416fb32236236d3fd2/Part_2_-_Image_5.png
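
If you prefer the command line over the VS Code extension, a hedged sketch of the standalone Bicep CLI route (assuming the bicep executable is installed and on your PATH) looks like this:

# Decompile the exported ARM template to Bicep for editing
bicep decompile C:\temp\throwaway_template.json
# ...edit the resulting .bicep file, then compile it back to ARM JSON for deployment...
bicep build C:\temp\throwaway_template.bicep --outfile C:\temp\throwaway_template.json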

With the template ready, it can be deployed via PowerShell. The New-AzResourceGroupDeployment cmdlet provides excellent support for PowerShell’s well-known ‘-WhatIf’ parameter, which produces a report showing what will change as a result of the deployment.

New-AzResourceGroupDeployment -Name 'Fix_Blob_Vulns' -ResourceGroupName 'throwaway-resources' -TemplateFile C:\Temp\throwaway_template.json -WhatIf
https://images.contentstack.io/v3/assets/blt36c2e63521272fdc/bltc4b70bb4cca94323/6401024182b5176dad09a4fa/Part_2_-_Image_6.png

Since these changes look like the intended results, we can now deploy the resource group by simply removing the -WhatIf parameter.
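
For reference, the deployment command is identical, just with the -WhatIf parameter removed:

New-AzResourceGroupDeployment -Name 'Fix_Blob_Vulns' -ResourceGroupName 'throwaway-resources' -TemplateFile C:\Temp\throwaway_template.json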

https://images.contentstack.io/v3/assets/blt36c2e63521272fdc/blt5495c1e08297a67b/64010241bddfe71066fce00d/Part_2_-_Image_7.png

Looking in Azure Portal, the two settings have indeed been corrected.

https://images.contentstack.io/v3/assets/blt36c2e63521272fdc/bltab35cc95160e2696/64010241564e5e2df2fa81d2/Part_2_-_Image_8.png

This approach can be used to build parameterized templates for future cloud deployments, using Infrastructure as Code to produce repeatable results. To improve on the above, managing templates in source control and using pipelines to automate releases will provide an excellent starting point for deploying cloud infrastructure with proper configurations.
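
As a quick illustration of that idea, a hedged sketch of a parameterized deployment might look like the following; the template and parameter names are hypothetical and would need to match those declared in your template:

# Pass template parameters as a hashtable (names must match the template's parameter declarations)
$params = @{
    storageAccountName = 'examplestorageacct'
    location           = 'eastus'
}
New-AzResourceGroupDeployment -Name 'Standard_Storage_Deploy' -ResourceGroupName 'throwaway-resources' -TemplateFile C:\temp\storage_template.json -TemplateParameterObject $params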

Option 2 - Azure Policy

While manually updating templates and using PowerShell to automate their deployment works, it may not scale for larger environments. Instead of touching each resource or resource group, we may want to create standard policies with which resources of given types must comply. Azure Policy gives us the ability to standardize configurations across any new deployment of a given resource type, as well as the ability to remediate existing resources that are already deployed.

Azure Policy is not limited to security functionality. It also supports operational aspects of cloud usage, such as resource tagging, deployment regions, and allowed resource types. From a security perspective, it allows for standard configurations across the myriad resource types and options in Azure.

With nearly 3,000 available policy definitions at the time of this writing, it’s safe to assume that many common security best practices can be implemented in this manner, and custom policy definitions can be created for additional use cases. Multiple policy definitions can be grouped into what Azure calls an Initiative (simply a set of policy definitions), and Microsoft provides several out-of-the-box Initiatives as well. In this section, we will create a new Initiative to control the public access and HTTPS settings across any Blob storage within our subscription. This way, if an engineer makes a mistake and deploys a misconfigured storage account, the Initiative will ensure there is no unnecessary exposure.
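
To get a feel for what’s already available, here is a hedged sketch for browsing the built-in definitions and initiatives (counts will vary over time and by module version):

# Count the built-in policy definitions
(Get-AzPolicyDefinition -Builtin).Count
# Peek at a few built-in initiatives (policy set definitions)
Get-AzPolicySetDefinition -Builtin | ForEach-Object { $_.Properties.DisplayName } | Select-Object -First 10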

Creating an Azure Policy Initiative

The first step is to look for policy definitions that already suit the use case. Policy definitions generally have verbose, descriptive names, so we can search for terms of interest. This can be done within Azure Portal or inside a PowerShell session using Get-AzPolicyDefinition. This cmdlet returns objects representing existing definitions, and each object’s Properties property includes the policy’s display name. The default output format does not surface much meaningful information, but we can expand the Properties property for a better view, looking for any policies with a DisplayName containing the term ‘storage’ followed by the term ‘public’:

Get-AzPolicyDefinition | Where-Object {$_.Properties.DisplayName -like "*storage*public*"} | Select-Object -ExpandProperty Properties

With four possible options, we can find the policy of interest and store it in a PowerShell variable:

$publicBlobPolicy = Get-AzPolicyDefinition | Where-Object {$_.Properties.DisplayName -eq "Configure your Storage account public access to be disallowed"}
https://images.contentstack.io/v3/assets/blt36c2e63521272fdc/blt89c4c6af86f27b2a/640102419ac9ae108e946892/Part_2_-_Image_9.png

We’ll do the same thing for the secure transfer policy:

$secureTransferPolicy = Get-AzPolicyDefinition | Where-Object {$_.Properties.DisplayName -eq "Configure secure transfer of data on a storage account"}

To create the initiative (a policy set definition), we need a JSON file containing an array of objects with policyDefinitionId properties. We can create the proper structure using the ConvertTo-Json cmdlet. Note: we’re using this method to avoid manually creating the JSON file and to make it easy to add multiple policy definitions based on search criteria.

@($publicBlobPolicy,$secureTransferPolicy) | ForEach-Object {$_ | Select-Object -Property PolicyDefinitionId} | ConvertTo-Json | Out-File -FilePath 'C:\temp\blob_policy.json'
https://images.contentstack.io/v3/assets/blt36c2e63521272fdc/bltc702eafc8e74f598/640102413b7399107f6bb1c2/Part_2_-_Image_10.png

With the policy set contained in a JSON file, we can create a new Initiative using New-AzPolicySetDefinition. Specify the location of the JSON file along with a name and display name for supportability.

New-AzPolicySetDefinition -Name 'blob-lockdown' -DisplayName 'Blob - Disallow Public Access and Force HTTPS' -PolicyDefinition C:\temp\blob_policy.json
https://images.contentstack.io/v3/assets/blt36c2e63521272fdc/bltb756dfcba6ab5c8e/640102414d5a1946d3dd929f/Part_2_-_Image_11.png

The policy initiative is now visible within Azure Portal:

https://images.contentstack.io/v3/assets/blt36c2e63521272fdc/bltc923bd344af455af/6401024243153734d1472a73/Part_2_-_Image_12.png

Lastly, we need to assign the policy set definition to a target scope, in this case the subscription containing our ‘throwaway-resources’ resource group. Initiatives can also be targeted at individual resource groups or management groups to scope these changes according to a specific use case.

First, store the newly created initiative in a PowerShell variable:

$policySet = Get-AzPolicySetDefinition -Name 'blob-lockdown'

With the $policySet variable populated, create a new policy assignment using New-AzPolicyAssignment, providing a name, a display name, the scope at which to apply the policy definitions, the policy set, a location, and a managed identity type. When the IdentityType parameter is set to “SystemAssigned”, a new Managed Identity is automatically created with the same name as our policy assignment.

New-AzPolicyAssignment -Name 'Lock down storage accounts' -DisplayName 'Ensure no blob public access and HTTPS enforcement on storage accounts' -Scope "/subscriptions/$(Get-AzSubscription -SubscriptionName 'throwaway' | Select-Object -ExpandProperty Id)" -PolicySetDefinition $policySet -Location 'eastus' -IdentityType 'SystemAssigned'
https://images.contentstack.io/v3/assets/blt36c2e63521272fdc/blt915635e32943d397/640102424fd99f36ebe22a22/Part_2_-_Image_13.png

At this point, policy will be applied on new deployments. Regardless of what settings are chosen during deployment, the resource will end up in the desired configuration.
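
As a hedged illustration (the storage account name is hypothetical and must be globally unique), deploying a deliberately misconfigured account and then reading the setting back should show the initiative bringing it into the desired configuration:

# Create a storage account with public blob access deliberately enabled
New-AzStorageAccount -ResourceGroupName 'throwaway-resources' -Name 'examplenewsa001' -Location 'eastus' -SkuName 'Standard_LRS' -AllowBlobPublicAccess $true
# Read the setting back; with the initiative assigned, it should end up disabled
(Get-AzStorageAccount -ResourceGroupName 'throwaway-resources' -Name 'examplenewsa001').AllowBlobPublicAccess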

Remediating Existing Resources

For automated remediation tasks, we’ll need to update the Managed Identity’s role to have access to modify existing resources. The role(s) needed will depend on the policy in question. In this case, we know that the Managed Identity needs Contributor permissions on Storage Accounts to update settings. We can find relevant role definitions by searching based on name.

Get-AzRoleDefinition | Where-Object {$_.Name -like "*Storage*Contributor"} | Select-Object -Property Name,Description,Id
https://images.contentstack.io/v3/assets/blt36c2e63521272fdc/blt3e3abcd6353d75f7/64010242aa6c3f7f594bceec/Part_2_-_Image_14.png

With this understanding, assign the role to the Managed Identity:

New-AzRoleAssignment -ObjectId (Get-AzADServicePrincipal -SearchString "Lock down storage accounts").Id -RoleDefinitionName "Storage Account Contributor" -Scope "/subscriptions/$(Get-AzSubscription -SubscriptionName 'throwaway' | Select-Object -ExpandProperty Id)"
https://images.contentstack.io/v3/assets/blt36c2e63521272fdc/blte94b31c349efd4b8/640102422a47326c5cec343b/Part_2_-_Image_15.png

Now, we can see that this existing storage account is not meeting compliance with the new initiative (it was manually configured back to its vulnerable state after the Option 1 walkthrough earlier in this post).

https://images.contentstack.io/v3/assets/blt36c2e63521272fdc/blt2f971e9cf53f3963/64010242112aba108a264144/Part_2_-_Image_16.png
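
The same compliance view can be pulled from PowerShell; here is a minimal sketch using Get-AzPolicyState scoped to the demo resource group:

Get-AzPolicyState -ResourceGroupName 'throwaway-resources' | Select-Object -Property ResourceId, PolicyDefinitionName, ComplianceState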

If we need to mitigate existing non-compliant resources, we can use the Start-AzPolicyRemediation cmdlet to start a remediation task for a given policy assignment. However, since we have created an initiative with two policies, we’ll need to create a remediation for each policy within the policy set.

First, get the ID of the previous policy assignment:

$policyAssignment = Get-AzPolicyAssignment -Name 'Lock down storage accounts' -WarningAction SilentlyContinue

Next, get the policy definitions for policies that are currently showing as non-compliant. Using the Get-AzPolicyState cmdlet, we get output similar to what is shown in the Azure Portal screenshot above. In the output object, the ComplianceState property shows whether the resource is non-compliant, and the PolicyDefinitionReferenceId property identifies the specific policy definition needing remediation.

$policyStates = Get-AzPolicyState | Where-Object {$_.PolicyAssignmentId -eq $policyAssignment.PolicyAssignmentId -and $_.ComplianceState -eq 'NonCompliant'}

Once we have the policies which are not compliant, we can iterate through each object and start a remediation job for the respective PolicyDefinitionReferenceId.

$policyStates | ForEach-Object {Start-AzPolicyRemediation -Name "$($_.PolicyDefinitionName)_Remediation" -PolicyAssignmentId $policyAssignment.PolicyAssignmentId -PolicyDefinitionReferenceId $_.PolicyDefinitionReferenceId }

https://images.contentstack.io/v3/assets/blt36c2e63521272fdc/blt739ff8b30d649f8a/64010242316d4410175ffea8/Part_2_-_Image_17.png

Remediation jobs have started, and resources will be updated to reflect the correct settings. These jobs can be viewed in Azure Portal and monitored for any errors.

https://images.contentstack.io/v3/assets/blt36c2e63521272fdc/blt783d347e32fea2f3/6401024294e69a2df810fa8e/Part_2_-_Image_18.png
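
They can also be checked from PowerShell; here is a hedged sketch using Get-AzPolicyRemediation (property names may differ slightly across module versions):

# List remediation tasks and their current state
Get-AzPolicyRemediation | Select-Object -Property Name, ProvisioningState, CreatedOn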

Additionally, looking at the affected resource(s), we can manually validate that settings have been corrected:

https://images.contentstack.io/v3/assets/blt36c2e63521272fdc/blt1d020ad23a458898/640102417af6422f7a2485d5/Part_2_-_Image_19.png

By default, Azure Policy will reevaluate resources once per day. For those of you as impatient as I am, Start-AzPolicyComplianceScan can scratch that itch by evaluating resources against existing policies on demand, so you can understand the immediate impact. Note that compliance scans take some time, so now may be a good time for a coffee refill or to go remediate other findings.
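
Here is a minimal sketch of kicking off that scan, scoped to the demo resource group; the -AsJob switch returns immediately and lets the scan run in the background:

# Trigger an on-demand compliance evaluation for the resource group
Start-AzPolicyComplianceScan -ResourceGroupName 'throwaway-resources' -AsJob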

Once the scan completes, simply run Get-AzPolicyState to show that the findings have been remediated. Notice that the IsCompliant property now shows True for both policies.
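
A minimal sketch of that final check, assuming the $policyAssignment variable from earlier is still in scope:

Get-AzPolicyState | Where-Object {$_.PolicyAssignmentId -eq $policyAssignment.PolicyAssignmentId} | Select-Object -Property PolicyDefinitionName, IsCompliant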

https://images.contentstack.io/v3/assets/blt36c2e63521272fdc/bltfad5a1379fb24a55/640102427af6422f7a2485d9/Part_2_-_Image_20.png

Wrapping Up

In Part Two of this series, we have covered remediation of vulnerabilities and misconfigurations discovered in Part One. From an Infrastructure as Code perspective, ARM and Bicep provide excellent, readable formats that can be managed in source control and automatically deployed to provision cloud resources. Providing secure, working templates to teams looking to embrace the cloud is a great strategy for ensuring safe and enjoyable experimentation, without needing to go back and secure things after the fact. Furthermore, Azure Policy provides an excellent mechanism for globally (or more tactically) setting those configuration options that are critical to security posture. Both approaches have strong PowerShell support and can scale with the needs of modern organizations.
