Terraform · 2025-03-01 · 8 min read

Terraform Greenfield Azure Environment: The Complete Guide

How I design, provision, and manage fully automated Azure environments from scratch using Terraform modules — covering VNets, VMs, ADF, Key Vault, and more.

Introduction

Provisioning a greenfield Azure environment from scratch is one of the most impactful things you can do for a new product. Done right, it gives you a clean, reproducible, secure foundation. Done poorly, it becomes years of technical debt.

In this guide I'll walk through the exact approach I use at MetaSol — real code, real patterns, no hand-waving.

What "Greenfield" Actually Means

A greenfield environment is one built from zero. No inherited resources, no manual-click history, no undocumented configurations. Everything defined in code, everything reproducible.

The goal: if I delete this environment today, I can recreate it in 30 minutes.

Project Structure

infra/
├── modules/
│   ├── networking/      # VNet, subnets, NSGs
│   ├── compute/         # VMs, scale sets
│   ├── data/            # Azure SQL, ADF
│   ├── security/        # Key Vault, managed identity
│   └── functions/       # Function Apps, Logic Apps
├── environments/
│   ├── dev/
│   ├── test/
│   └── prod/
└── main.tf
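Each environment directory holds a small root module that wires the shared modules together. A minimal sketch of what `environments/dev/main.tf` could look like — the provider version, resource group naming, and variable names here are illustrative assumptions, not prescriptions:

```hcl
# environments/dev/main.tf — illustrative root module
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0" # assumed version constraint; pin to what you actually test against
    }
  }
}

provider "azurerm" {
  features {}
}

# One resource group per environment; name pattern is an assumption
resource "azurerm_resource_group" "main" {
  name     = "rg-${var.project}-${var.environment}"
  location = var.location
}

# Shared modules are called from here, with the resource group threaded through
module "networking" {
  source              = "../../modules/networking"
  resource_group_name = azurerm_resource_group.main.name
  location            = var.location
  vnet_address_space  = ["10.0.0.0/16"]
  subnets             = { app = "10.0.1.0/24", data = "10.0.2.0/24", mgmt = "10.0.3.0/24" }
}
```

The same file shape repeats in `test/` and `prod/`; only the `.tfvars` inputs differ.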

Step 1 — Networking First

Always start with networking. Everything else depends on it.

module "networking" {
  source = "../../modules/networking"

  resource_group_name = var.resource_group_name
  location            = var.location
  vnet_address_space  = ["10.0.0.0/16"]

  subnets = {
    app  = "10.0.1.0/24"
    data = "10.0.2.0/24"
    mgmt = "10.0.3.0/24"
  }
}
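Inside the module, that `subnets` map can drive a `for_each`, so adding a subnet is a one-line change at the call site. A sketch of what `modules/networking/main.tf` might contain — the resource and subnet naming conventions are assumptions:

```hcl
# modules/networking/main.tf — sketch; names are illustrative
resource "azurerm_virtual_network" "main" {
  name                = "vnet-main"
  resource_group_name = var.resource_group_name
  location            = var.location
  address_space       = var.vnet_address_space
}

# One subnet per entry in var.subnets; the map key becomes part of the name
resource "azurerm_subnet" "main" {
  for_each             = var.subnets
  name                 = "snet-${each.key}"
  resource_group_name  = var.resource_group_name
  virtual_network_name = azurerm_virtual_network.main.name
  address_prefixes     = [each.value]
}
```

Keyed `for_each` (rather than `count`) means removing one subnet from the map never shifts the addresses of the others in state.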

Step 2 — Key Vault Before Everything Else

Secrets need a home before any resource tries to reference them. Provision Key Vault early and use managed identities — never hardcoded credentials.

module "keyvault" {
  source = "../../modules/security"

  name                = "kv-${var.environment}-${var.project}"
  resource_group_name = var.resource_group_name
  location            = var.location

  # Referencing module.app here lets Terraform order creation correctly:
  # the app's managed identity is created before this policy is applied.
  access_policies = [
    {
      object_id          = module.app.managed_identity_id
      secret_permissions = ["Get", "List"]
    }
  ]
}
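Inside the security module, each entry in that list can become its own access-policy resource. A sketch of what `modules/security/main.tf` could look like — the `sku_name` and the keying of policies by `object_id` are assumptions for illustration:

```hcl
# modules/security/main.tf — sketch; details are illustrative
data "azurerm_client_config" "current" {}

resource "azurerm_key_vault" "main" {
  name                = var.name
  resource_group_name = var.resource_group_name
  location            = var.location
  tenant_id           = data.azurerm_client_config.current.tenant_id
  sku_name            = "standard"
}

# One access-policy resource per caller-supplied entry, keyed by object_id
resource "azurerm_key_vault_access_policy" "policies" {
  for_each           = { for p in var.access_policies : p.object_id => p }
  key_vault_id       = azurerm_key_vault.main.id
  tenant_id          = data.azurerm_client_config.current.tenant_id
  object_id          = each.value.object_id
  secret_permissions = each.value.secret_permissions
}
```

Separate `azurerm_key_vault_access_policy` resources (instead of inline `access_policy` blocks) avoid fights between Terraform runs that each want to own the full policy list.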

Step 3 — Multi-Environment Strategy

The key insight: use a single module set, drive differences with .tfvars files per environment.

# environments/prod/terraform.tfvars
environment         = "prod"
vm_size             = "Standard_D4s_v3"
sql_sku             = "S3"
enable_diagnostics  = true
replication_type    = "GRS"

# environments/dev/terraform.tfvars
environment         = "dev"
vm_size             = "Standard_B2s"
sql_sku             = "S1"
enable_diagnostics  = false
replication_type    = "LRS"
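For those tfvars to work, the root module declares matching variables. A possible `variables.tf` — the defaults and the validation rule are assumptions, chosen so that forgetting a value fails safe toward the cheaper dev settings:

```hcl
# environments/*/variables.tf — declarations matching the tfvars above
variable "environment" {
  type        = string
  description = "Short environment name (dev, test, prod)"
}

variable "vm_size" {
  type    = string
  default = "Standard_B2s" # assumed dev-safe default
}

variable "sql_sku" {
  type    = string
  default = "S1"
}

variable "enable_diagnostics" {
  type    = bool
  default = false
}

variable "replication_type" {
  type    = string
  default = "LRS"
  validation {
    condition     = contains(["LRS", "ZRS", "GRS", "RA-GRS"], var.replication_type)
    error_message = "replication_type must be a valid Azure Storage replication type."
  }
}
```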

Step 4 — CI/CD Integration

Your Terraform pipeline should run plan on PRs and apply on merge to main. Use Azure DevOps or GitHub Actions with remote state in Azure Storage Account.

- task: TerraformTaskV4@4
  inputs:
    provider: 'azurerm'
    command: 'plan'
    workingDirectory: 'infra/environments/$(ENVIRONMENT)'
    environmentServiceNameAzureRM: 'azure-service-connection'
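The remote-state piece is a `backend "azurerm"` block in each environment's terraform configuration; the Azure blob backend handles state locking automatically via blob leases. The resource names below are placeholders you would replace with your own:

```hcl
# Remote state in an Azure Storage Account — names are placeholders
terraform {
  backend "azurerm" {
    resource_group_name  = "rg-terraform-state"
    storage_account_name = "stterraformstate" # must be globally unique
    container_name       = "tfstate"
    key                  = "prod.terraform.tfstate" # one state file per environment
  }
}
```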

Key Takeaways

  • Module everything — reusability prevents drift between environments
  • Key Vault first — never reference secrets before their home exists
  • State in Azure Storage — use remote state with locking from day one
  • Separate tfvars per env — same modules, different inputs
  • Plan before apply — always review the plan in CI before it touches prod

This approach has allowed me to spin up full enterprise environments consistently at MetaSol. The first time takes effort. The tenth time takes 30 minutes.