## Overview

This guide covers the essential commands and workflows for developing Terraform infrastructure on your local machine before pushing to CI/CD.
## Daily development cycle

The typical workflow for making infrastructure changes:

```shell
# 1. Create a feature branch
git checkout -b feature/add-vpc

# 2. Make changes in your environment directory
cd environments/test/
# Edit main.tf, variables.tf, etc.

# 3. Format code
make format

# 4. Validate configuration
make validate-env ENV=test

# 5. Preview changes
make plan ENV=test

# 6. Apply changes (if plan looks good)
make apply ENV=test

# 7. Test your infrastructure
# Run manual tests or automated tests

# 8. Commit and push
git add .
git commit -m "feat: add VPC with public and private subnets"
git push origin feature/add-vpc

# 9. Open pull request on GitHub
```
## Essential Makefile commands

The starter kit provides convenient `make` commands for all operations. For complete documentation of all commands and options, see the Makefile reference.
### Setup and installation

```shell
make install-tools   # Install Terraform, AWS CLI, TFLint, Checkov, Granted
make check           # Verify tool versions
make setup           # Run the complete setup wizard
```
### Code quality

```shell
make format          # Format all Terraform files
make lint            # Run TFLint on all code
make security-scan   # Run Checkov security scan
make validate-full   # Run format check, validate, lint, and security scan
```

Example output:

```
$ make validate-full
✓ Checking Terraform formatting...
✓ Validating Terraform configuration...
✓ Running TFLint...
✓ Running Checkov security scan...
All checks passed!
```
### Environment operations

```shell
make init ENV=test           # Initialize Terraform backend
make validate-env ENV=test   # Validate specific environment
make plan ENV=test           # Generate execution plan
make apply ENV=test          # Apply infrastructure changes
make destroy ENV=test        # Destroy all resources
```

**Important:** Always specify `ENV=<environment>` for environment-specific commands.
### Cleanup

```shell
make cleanup   # Interactive cleanup wizard
```

The cleanup wizard offers:

- Destroy environment resources
- Destroy bootstrap backend (S3, DynamoDB)
- Clean local files (`.terraform/`, lock files)
- Remove source files (`environments/`, workflows)
- Full cleanup (all of the above)

For detailed cleanup options and safety considerations, see the Makefile reference.
## Working with Terraform directly

While `make` commands are convenient, you can also use Terraform directly:
### Initialize backend

```shell
cd environments/test/
terraform init
```

First-time initialization downloads provider plugins and configures the S3 backend.
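The S3 backend that `terraform init` configures is declared in the environment's configuration. A minimal sketch of such a block — the bucket, key, and table names here are illustrative, not the starter kit's actual values, which come from its bootstrap step:

```hcl
# Illustrative S3 backend block; real bucket/table names are created
# by the starter kit's bootstrap (S3 for state, DynamoDB for locking).
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"                   # illustrative
    key            = "environments/test/terraform.tfstate"  # illustrative
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"                      # state locking
    encrypt        = true
  }
}
```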
### Format code

```shell
terraform fmt -recursive
```

Formats all `.tf` files in the current directory and subdirectories.
### Validate configuration

```shell
cd environments/test/
terraform validate
```

Checks syntax and validates the configuration against provider schemas.
### Preview changes

```shell
cd environments/test/
terraform plan
```

Shows what Terraform will create, modify, or destroy.

Save a plan:

```shell
terraform plan -out=tfplan
terraform show tfplan    # Review saved plan
terraform apply tfplan   # Apply saved plan
```
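In scripts you can branch on the plan result: with `-detailed-exitcode`, `terraform plan` returns 0 for no changes, 1 for errors, and 2 for pending changes. A sketch of this pattern (not part of the starter kit's Makefile):

```shell
# Exit-code-aware plan for scripting:
#   0 = no changes, 1 = error, 2 = changes pending
terraform plan -detailed-exitcode -out=tfplan
status=$?
case $status in
  0) echo "No changes." ;;
  2) echo "Changes pending; review tfplan before applying." ;;
  *) echo "Plan failed." >&2; exit "$status" ;;
esac
```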
### Apply changes

```shell
cd environments/test/
terraform apply
```

Terraform prompts for confirmation. Review the plan and type `yes` to proceed.

Auto-approve (use with caution):

```shell
terraform apply -auto-approve
```
### Target specific resources

Apply changes to specific resources only:

```shell
terraform apply -target=aws_s3_bucket.example
```

Use case: fix a single resource without affecting others.
### Destroy resources

```shell
cd environments/test/
terraform destroy
```

Destroys all resources managed by the current configuration.

Destroy specific resources:

```shell
terraform destroy -target=aws_s3_bucket.example
```
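To guard critical resources against an accidental destroy, Terraform's `prevent_destroy` lifecycle flag makes any destroying plan fail with an error. A sketch with an illustrative resource name:

```hcl
resource "aws_s3_bucket" "critical" {
  bucket = "critical-data-bucket"  # illustrative name

  lifecycle {
    # Any plan that would destroy this resource errors out
    # until the flag is removed from the configuration.
    prevent_destroy = true
  }
}
```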
### View outputs

```shell
terraform output
terraform output role_arn   # Specific output
terraform output -json      # JSON format
```
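Outputs are handy in shell scripts: `-raw` prints a single value without JSON quoting. A sketch, reusing the `role_arn` output shown above (the follow-up command is illustrative):

```shell
# Capture an output value and feed it to a follow-up command.
ROLE_ARN=$(terraform output -raw role_arn)
aws sts assume-role \
  --role-arn "$ROLE_ARN" \
  --role-session-name smoke-test
```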
### View current state

```shell
terraform show
terraform state list                          # List all resources
terraform state show aws_s3_bucket.example    # Show specific resource
```
## Code quality checks

### TFLint

Run TFLint for code quality validation:

```shell
# From repository root
make lint

# Or manually
cd environments/test/
tflint --init
tflint --format=compact
```

Common issues TFLint catches:

- Naming convention violations (should use snake_case)
- Missing variable descriptions
- Missing output descriptions
- Variables without types
- Unused variables or outputs
- Deprecated resource patterns
- AWS-specific best practices

Example output:

```
environments/test/main.tf:10:1: Warning: Missing variable description (terraform_documented_variables)
environments/test/variables.tf:5:1: Error: variable "instanceType" should be snake_case (terraform_naming_convention)
```
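TFLint's rules are configured per directory in a `.tflint.hcl` file. A minimal sketch that enables the AWS ruleset plugin and the naming rule flagged above — the plugin version shown is an example, not a recommendation:

```hcl
# .tflint.hcl (example): enable the AWS ruleset and naming checks.
plugin "aws" {
  enabled = true
  version = "0.30.0"  # example version; pin to what your team uses
  source  = "github.com/terraform-linters/tflint-ruleset-aws"
}

rule "terraform_naming_convention" {
  enabled = true
}
```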
### Checkov

Run Checkov for security scanning:

```shell
# From repository root
make security-scan

# Or manually
checkov --directory environments/test/ --framework terraform
```

Security issues Checkov detects:

- Unencrypted S3 buckets
- Publicly accessible resources
- Missing logging/monitoring
- Overly permissive IAM policies
- Insecure network configurations
- Compliance violations (PCI-DSS, HIPAA, etc.)

Example output:

```
Check: CKV_AWS_18: "Ensure S3 bucket has server-side encryption enabled"
        FAILED for resource: aws_s3_bucket.data
        File: /environments/test/main.tf:15-18
```

Suppressing false positives:

```hcl
resource "aws_s3_bucket" "logs" {
  #checkov:skip=CKV_AWS_18:Encryption not required for access logs
  bucket = "app-access-logs"
}
```
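Rather than suppressing, the usual fix for a finding like CKV_AWS_18 is to add the missing control. With AWS provider v4 and later, bucket encryption is a separate resource; a sketch against the `aws_s3_bucket.data` resource from the example output:

```hcl
# Enable server-side encryption on the flagged bucket.
resource "aws_s3_bucket_server_side_encryption_configuration" "data" {
  bucket = aws_s3_bucket.data.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "aws:kms"  # or "AES256" for S3-managed keys
    }
  }
}
```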
## Managing state

### View state

```shell
cd environments/test/
terraform state list                          # List all resources
terraform state show aws_s3_bucket.example    # Show resource details
```
### Import existing resources

Bring existing AWS resources under Terraform management:

```shell
terraform import aws_s3_bucket.example my-existing-bucket
```

Workflow:

1. Add the resource definition to Terraform:

   ```hcl
   resource "aws_s3_bucket" "example" {
     bucket = "my-existing-bucket"
   }
   ```

2. Import the resource:

   ```shell
   terraform import aws_s3_bucket.example my-existing-bucket
   ```

3. Run `terraform plan` to verify the configuration matches reality.
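On Terraform 1.5+, an `import` block is a declarative alternative to the CLI command: the import shows up in `terraform plan` output before anything touches state. A sketch for the same bucket:

```hcl
# Declarative import (Terraform 1.5+); remove the block after applying.
import {
  to = aws_s3_bucket.example
  id = "my-existing-bucket"
}
```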
### Move resources

Rename resources in state without destroying them:

```shell
terraform state mv aws_s3_bucket.old_name aws_s3_bucket.new_name
```
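On Terraform 1.1+, a `moved` block is a declarative alternative: the rename lives in configuration, so every collaborator's state is updated on their next apply instead of requiring a one-off CLI command. A sketch for the same rename:

```hcl
# Declarative rename (Terraform 1.1+); keep until all state is migrated.
moved {
  from = aws_s3_bucket.old_name
  to   = aws_s3_bucket.new_name
}
```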
### Remove resources from state

Remove a resource from Terraform management without destroying it:

```shell
terraform state rm aws_s3_bucket.example
```

Use case: migrate a resource to a different Terraform configuration.
### Pull remote state

Download the current state from S3:

```shell
terraform state pull > local-state.json
```

**Warning:** Use this for inspection only. Don't modify the state and push it back manually.
## Working with multiple AWS accounts

### Using AWS CLI profiles

Configure profiles in `~/.aws/config`:

```ini
[profile test-account]
sso_start_url = https://my-org.awsapps.com/start
sso_region = us-east-1
sso_account_id = 123456789012
sso_role_name = AdministratorAccess
region = us-east-1

[profile production-account]
sso_start_url = https://my-org.awsapps.com/start
sso_region = us-east-1
sso_account_id = 210987654321
sso_role_name = AdministratorAccess
region = us-east-1
```

Switch profiles:

```shell
export AWS_PROFILE=test-account
make plan ENV=test

export AWS_PROFILE=production-account
make plan ENV=production
```
### Using Granted

Granted simplifies multi-account access:

```shell
# Install via make install-tools or manually
brew install common-fate/granted/granted

# Configure SSO
granted sso populate

# Assume a role
assume test-account

# Deploy to test
make apply ENV=test

# Switch to production
assume production-account
make apply ENV=production
```

Learn more: Setting up the AWS CLI with AWS SSO
## Testing infrastructure

### Manual testing

After applying changes, manually verify resources:

```shell
# Example: Verify S3 bucket
aws s3 ls s3://my-bucket-name

# Example: Verify Lambda function
aws lambda get-function --function-name my-function

# Example: Test IAM role
aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/MyRole \
  --role-session-name test
```
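Ad-hoc checks like these can be collected into a small smoke-test script. A sketch (the bucket and function names are placeholders) that reports pass/fail per check and counts failures:

```shell
#!/usr/bin/env sh
# Minimal smoke-test helper: run a command, report PASS/FAIL,
# and keep a failure count for the exit status.
fails=0
check() {
  desc=$1; shift
  if "$@" >/dev/null 2>&1; then
    echo "PASS: $desc"
  else
    echo "FAIL: $desc"
    fails=$((fails + 1))
  fi
}

# Placeholder checks; swap in your real resources.
if command -v aws >/dev/null 2>&1; then
  check "S3 bucket reachable"   aws s3api head-bucket --bucket my-bucket-name
  check "Lambda function found" aws lambda get-function --function-name my-function
fi
```

Ending the script with `exit $fails` makes it usable as a CI gate.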
## Best practices

### Before making changes

- Pull the latest changes from main
- Review the existing configuration
- Check for similar patterns in other environments
- Plan resource naming conventions

### During development

- Make small, incremental changes
- Run `terraform plan` frequently
- Format code regularly (`make format`)
- Give every variable a clear description
- Tag resources appropriately
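Tagging is easiest to enforce at the provider level: the AWS provider's `default_tags` block applies a tag set to every taggable resource it manages. A sketch with illustrative tag values:

```hcl
provider "aws" {
  region = "us-east-1"

  # Applied to every taggable resource this provider creates.
  default_tags {
    tags = {
      Environment = "test"         # illustrative values
      ManagedBy   = "terraform"
      Project     = "starter-kit"
    }
  }
}
```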
### Before committing

- Run full validation: `make validate-full`
- Review plan output carefully
- Check for sensitive data in code
- Update documentation if needed
- Write clear commit messages

### After applying

- Verify resources in the AWS Console
- Test functionality manually
- Document any manual steps required
- Update environment outputs if needed
## Next steps
- Understand the CI/CD Workflow for automated deployments
- Learn about Environment management strategies
- Explore the OIDC provider module implementation