Three Key Elements for Integrating Security with Automated Microsoft Azure Deployments

Provisioning, deploying, and scaling resources in the cloud is orders of magnitude easier and faster than the deployment process in most on-premises environments.  That is why so many ‘cloud initiatives’ started with a personal credit card. 

We’re often asked about security best practices for DevOps, and you can find the checklists of practical security tips for DevOps that we have written here.  While there isn’t an established definition or framework that covers SecDevOps today, we want to share some of the best practices around integrating security with automation.

First and foremost, you can implement security controls as part of the provisioning process.  For Azure, you can use either Azure Resource Manager templates (also known as ARM templates) or PowerShell.  When done correctly, your developers use these templates to provision their applications and services, enabling them to move as fast as they can while still staying within the guardrails set by the InfoSec and compliance teams. 

Here are some examples of things you would want to include in an ARM template:

  • Require all developers to launch approved and managed golden images.  These images should be encrypted, and Azure Key Vault credentials can be integrated with the ARM template to ensure that only the developers who should have access to these images can decrypt them.  These images and templates should also be updated as they are patched for vulnerabilities.
  • Apply Network Security Group policies to implement more granular inbound and outbound network access policies.  These policies should be rolled out across all DevOps environments (dev/test, staging, and production) to ensure configurations are consistent in the pipeline.
  • Activate log collection via the Azure agent or another third-party collection tool.  Ideally, these logs will be sent to a storage account or other destination that limits write access for developers and operations team members so the logs cannot be modified if their credentials are compromised. 
  • Tag the resources as they are deployed so teams monitoring the environment have some context.  These tags should be consistent across the organization, and implementing these within the ARM template helps ensure this information is usable.
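To make the items above concrete, here is a minimal ARM template fragment that defines a Network Security Group with a single inbound rule and applies consistent tags.  The resource names, tag keys, rule values, and API version are illustrative placeholders, not a prescribed configuration:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Network/networkSecurityGroups",
      "apiVersion": "2016-03-30",
      "name": "web-tier-nsg",
      "location": "[resourceGroup().location]",
      "tags": {
        "environment": "staging",
        "owner": "app-team-1",
        "costCenter": "cc-1234"
      },
      "properties": {
        "securityRules": [
          {
            "name": "allow-https-inbound",
            "properties": {
              "description": "Only HTTPS from the Internet reaches the web tier",
              "protocol": "Tcp",
              "sourcePortRange": "*",
              "destinationPortRange": "443",
              "sourceAddressPrefix": "Internet",
              "destinationAddressPrefix": "*",
              "access": "Allow",
              "priority": 100,
              "direction": "Inbound"
            }
          }
        ]
      }
    }
  ]
}
```

Because the NSG rules and tags live in the template itself, every deployment from the template comes out with the same network policy and the same monitoring context.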

These templates should be maintained in a source control repository, and any changes should be reviewed and approved by a cross-functional team that includes the InfoSec team.  

Microsoft IT has implemented many of these practices as part of their DevOps pipeline to deploy internal applications to Azure.  You can read more about their lessons learned here.

The second area to focus on is process.  While there are many security tools available that integrate with DevOps pipelines, there are things you may want to do more frequently than you do today, but not necessarily every time you push a build.  

  • These environments are dynamic, and though you may have templates in place, that does not mean they will always be used.  Increasing the frequency of vulnerability scanning across your cloud environment can help catch outdated image builds, rogue deployments, or vulnerabilities that affect your entire environment because every VM is tied to the same base image.  For public IPs, make sure you submit a penetration testing form before conducting these tests, as the scanner will be blocked by Microsoft if you do not.  This is part of the value Microsoft provides as part of the Azure platform.
  • Regularly review the Activity and other logs to make sure policies are being used and are working properly.  There are several tools built into Azure that help you aggregate and analyze these logs.  Some of the things you are looking for include:
    • Do users with access to several resources and services actually use those services?  If not, their roles can be reduced.
    • Are the NSG policies blocking unwanted traffic?  Are they too restrictive?
    • Are the developers and operations teams using the right templates?  Are there any rogue deployments?
    • Are logs being collected?
    • Does our environment conform to the best practices Microsoft has outlined for Azure?
  • You can also start tracking metrics around patching and the ability to address vulnerabilities faster via these deployment pipelines.  This is incredibly valuable from a risk management perspective, though it is important to set the appropriate expectations, especially around patching, as an aggressive policy can slow the application or service development velocity due to compatibility issues.
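As a sketch of what the log and configuration reviews above might look like once automated, the function below audits exported resource records for missing tags and unapproved images.  The record fields, tag keys, and image names are hypothetical; in practice they would come from your own Activity Log export or inventory tooling:

```python
# Illustrative sketch: auditing exported resource records for policy drift.
# REQUIRED_TAGS and APPROVED_IMAGES would be derived from the tags and golden
# images defined in your own ARM templates.

REQUIRED_TAGS = {"environment", "owner", "costCenter"}
APPROVED_IMAGES = {"golden-web-2016.10", "golden-db-2016.10"}

def audit_resource(resource):
    """Return a list of policy findings for a single resource record."""
    findings = []
    missing = REQUIRED_TAGS - set(resource.get("tags", {}))
    if missing:
        findings.append("missing tags: %s" % ", ".join(sorted(missing)))
    if resource.get("sourceImage") not in APPROVED_IMAGES:
        findings.append("rogue deployment: image %r is not approved"
                        % resource.get("sourceImage"))
    return findings

# A VM deployed outside the approved template would be flagged on both counts.
vm = {"name": "vm-test-01",
      "tags": {"environment": "dev"},
      "sourceImage": "ubuntu-14.04-stock"}
print(audit_resource(vm))
```

Running a check like this on a schedule, rather than only at deployment time, is one way to catch the rogue deployments and drifting configurations described above.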

Finally, reevaluate your security toolsets and focus on security partners who can integrate their technology into your build pipeline and publish incident notifications into your service management or ticketing platform automatically.  This enables the development team to address vulnerabilities via self-service without requiring a handoff from the InfoSec team, and provides additional tracking on the vulnerabilities found and the time to fix them.  
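The glue between a scanner and a ticketing platform can be as simple as mapping each finding to an incident payload.  The payload shape, field names, and routing group below are hypothetical; adapt them to your service management platform’s actual create-incident API:

```python
# Sketch: turn a vulnerability finding into a generic incident payload that a
# ticketing integration could submit, routing it straight to the dev team.
import json

def finding_to_ticket(finding):
    """Map a scanner finding to a hypothetical incident payload."""
    return {
        "title": "[vuln] %s on %s" % (finding["cve"], finding["host"]),
        "severity": finding["severity"],
        "assignee_group": "dev-team",  # self-service: no InfoSec handoff
        "opened_at": finding["detected_at"],
        "description": finding["summary"],
    }

finding = {"cve": "CVE-2016-0800", "host": "vm-web-03",
           "severity": "high", "detected_at": "2016-11-01T09:30:00Z",
           "summary": "SSLv2 enabled (DROWN)"}
ticket = finding_to_ticket(finding)
print(json.dumps(ticket, indent=2))
```

Assigning the ticket directly to the owning development team is what makes the self-service model work, while the ticket history gives you the time-to-fix tracking mentioned above.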

For some organizations, it can take some time before they achieve a model that fits across multiple teams.  Part of this is because it takes time for the development and security teams to both come up to speed on cloud and how security works in the cloud.  Templates are a great place to start, and they will evolve over time. 

One of the unintended benefits of frequent deployments is that they make it difficult for an advanced persistent threat to do recon in cases where they are exploiting vulnerabilities on the VM.  With each new deployment, their foothold is lost, and they must reestablish it through the same exploits they used before.  More frequent vulnerability scanning and security monitoring should identify these exploits and enable you to fix them before the attacker can do more serious damage by taking down your environment or exfiltrating your data. 

DevOps toolsets help increase this velocity by standardizing the deployment pipeline and enabling multiple deployments, with rollbacks if needed, with little or no application downtime.  Once you’ve implemented this internally, it’s difficult to go back to the all-hands-on-deck, pressured rollouts within a change window that many organizations continue to struggle with today.

This blog post expanded on some of the best practices we introduced in this webinar.  For more information about how we protect applications and services running on Azure, please check out our Azure page.

Security is a shared responsibility: Microsoft provides physical security, instance isolation, and protection for foundational cloud services, while you are responsible for securing the applications and data in your environment.

About the Author

Vince Bryant - Cloud Platform Partner Executive

Vince Bryant, Cloud Platform Partner Executive - Vince is Alert Logic’s business relationship manager for Microsoft.  Prior to joining Alert Logic, he worked at an early-stage EdTech startup operating within the University of Washington, and advised technologists on commercialization and spin-out strategies as part of the UW Center for Commercialization (C4C).  He has also worked in Corporate Development and Technology Alliances at Hitachi Data Systems and EMC Corporation.
