AWS Security Best Practices: The 20% That Prevents 80% of Breaches

Stop implementing AWS security blindly. Learn the critical IAM, network, data, and monitoring practices that prevent most breaches, with specific configurations.

February 5th, 2026

AWS provides over 300 security services and features. That number alone causes paralysis for teams trying to secure their environments.

Here's the uncomfortable truth: most AWS breaches don't exploit sophisticated vulnerabilities. They exploit basic misconfigurations. An S3 bucket left public. An IAM policy with wildcard permissions. Root credentials without MFA. The Capital One breach that exposed 100 million records? A misconfigured WAF combined with an overly permissive IAM role.

The AWS Well-Architected Framework Security Pillar organizes security into seven best practice areas: Security Foundations, Identity and Access Management, Detection, Infrastructure Protection, Data Protection, Incident Response, and Application Security. That's comprehensive, but it doesn't tell you where to start.

This guide applies the 80/20 principle to AWS security. I'll show you the critical 20% of practices that prevent 80% of breaches. Every section is organized into Foundation, Intermediate, and Advanced levels so you know exactly what to prioritize based on your team size and security maturity.

By the end of this guide, you'll have specific configurations, actual policy examples, and a clear implementation roadmap. No more guessing what matters most.

The AWS Shared Responsibility Model

Before implementing any security control, you need to understand what you're responsible for versus what AWS handles. This distinction determines everything else.

AWS calls this the Shared Responsibility Model. It's the foundation every other practice builds on, and misunderstanding it leads to dangerous assumptions.

What AWS Secures vs. What You Must Secure

AWS handles "Security OF the Cloud." This means AWS protects the infrastructure that runs all services: hardware, software, networking, and physical facilities. AWS manages the host operating system and virtualization layer, network infrastructure, and physical security of data centers. You can't access or audit this layer, but AWS provides compliance certifications to prove they're doing their job.

You handle "Security IN the Cloud." This is where breaches happen. You're responsible for:

  • Customer data and its classification
  • Identity and access management (IAM policies, roles, users)
  • Operating system patches on EC2 instances
  • Application configuration and security
  • Network configuration (security groups, NACLs, VPC design)
  • Encryption choices and key management
  • Logging and monitoring configuration

The boundary shifts based on service type. With EC2 (IaaS), you're responsible for the guest OS, security patches, and everything above. With managed services like S3 or DynamoDB, AWS handles more of the stack, but you're still responsible for data, encryption settings, IAM permissions, and access policies.

The Responsibility Shifts by Service Type

Here's how responsibility varies across common services:

Service Type | AWS Responsibility | Your Responsibility
EC2 (IaaS) | Physical infrastructure, hypervisor | Guest OS, patches, applications, security groups
RDS (Managed) | Database engine patching, infrastructure | Access policies, encryption options, backup retention
S3 (Abstracted) | Storage infrastructure, durability | Bucket policies, encryption, access logging
Lambda (Serverless) | Execution environment, scaling | Function code, IAM permissions, VPC configuration

Common Misconceptions That Lead to Breaches

"AWS encrypts everything automatically." Partially true since January 2023 for S3, but you still decide whether to use AWS-managed keys or customer-managed keys. You're responsible for encryption at the application layer and for services where encryption isn't automatic.

"AWS monitors my account for threats." AWS provides the tools (GuardDuty, Security Hub, CloudTrail), but you must enable and configure them. Many accounts have CloudTrail disabled or limited to single regions.

"Managed services mean I don't need to worry about security." Wrong. Managed services reduce operational burden, not security responsibility. A misconfigured S3 bucket policy can expose data regardless of how well AWS manages the underlying storage.

Understanding this model prevents the assumption that AWS "handles security." They handle their part. Your part is where the breaches happen.

Now that you understand what you're responsible for, let's tackle the number one cause of AWS security breaches: identity and access management.

Identity and Access Management Best Practices

IAM misconfigurations cause more breaches than any other category. An overly permissive policy, exposed credentials, or missing MFA can compromise your entire AWS environment in minutes.

This section covers the IAM practices that matter most, organized by implementation priority.

Foundation Level (Start Here)

These practices are non-negotiable. Implement them before anything else.

Protect the root user aggressively. The root user has unrestricted access to everything in your account. No IAM policy or SCP can limit it. Enable hardware MFA (YubiKey or FIDO security key) immediately. Never create access keys for the root user. AWS offers a free MFA security key to eligible US account owners. Use a corporate group email like aws-root@company.com so access isn't tied to an individual employee.
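
You can verify root hygiene without logging in as root. A minimal check, assuming the AWS CLI is configured with any principal allowed to call iam:GetAccountSummary; the goal is AccountMFAEnabled = 1 and AccountAccessKeysPresent = 0:

aws iam get-account-summary \
  --query 'SummaryMap.{RootMFAEnabled: AccountMFAEnabled, RootAccessKeys: AccountAccessKeysPresent}'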

Use IAM roles over IAM users. Human users should access AWS using temporary credentials through federation, not long-term IAM user credentials. AWS IAM Identity Center is the recommended approach for workforce identity. For workloads outside AWS (CI/CD pipelines, on-premises servers), use IAM Roles Anywhere with X.509 certificates to obtain temporary credentials.

Implement least privilege from day one. Start with zero permissions and grant only what's needed. Use AWS managed policies as starting points, then refine based on actual CloudTrail activity. Here's a practical least-privilege policy for a developer who needs to manage Lambda functions:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "lambda:GetFunction",
        "lambda:ListFunctions",
        "lambda:UpdateFunctionCode",
        "lambda:UpdateFunctionConfiguration"
      ],
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:dev-*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:GetLogEvents",
        "logs:FilterLogEvents"
      ],
      "Resource": "arn:aws:logs:us-east-1:123456789012:log-group:/aws/lambda/dev-*:*"
    }
  ]
}

Notice the specific resource constraints. No wildcards on resources. No lambda:* actions. This developer can update code and view logs for development functions only.

Enforce MFA for all human users. AWS supports three MFA types: passkeys and security keys (FIDO standards), virtual authenticator apps (TOTP like Google Authenticator), and hardware TOTP tokens. For privileged users (administrators, security team), mandate hardware MFA. For standard users, virtual MFA apps are acceptable but dramatically better than password-only authentication.
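
To audit MFA coverage across existing IAM users, the credential report includes an mfa_active column per user. A minimal sketch; the report is returned as base64-encoded CSV, and the column positions below reflect the current report layout:

aws iam generate-credential-report
aws iam get-credential-report --query Content --output text | base64 --decode \
  | cut -d, -f1,4,8   # user, password_enabled, mfa_active (positions may shift if AWS adds fields)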

Intermediate Level

Once foundations are solid, these practices scale IAM security.

Use Service Control Policies for organization-wide guardrails. SCPs define maximum permissions at the organization, OU, or account level. They don't grant permissions but set boundaries that even account administrators can't exceed. For detailed SCP examples and implementation patterns, including deny policies for preventing security service disablement, see the linked guide.

Key SCP characteristics:

  • Maximum 5 SCPs per target (root, OU, or account)
  • 5,120 character limit per policy
  • Don't affect the management account
  • Evaluated hierarchically (account has only permissions allowed by every parent)
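
As an illustration of the guardrail pattern mentioned above, the SCP below denies stopping or deleting CloudTrail anywhere it is attached. The policy name, file path, and target IDs are placeholders; adjust the action list to the security services you rely on:

cat > deny-cloudtrail-tampering.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyCloudTrailTampering",
      "Effect": "Deny",
      "Action": [
        "cloudtrail:StopLogging",
        "cloudtrail:DeleteTrail",
        "cloudtrail:UpdateTrail"
      ],
      "Resource": "*"
    }
  ]
}
EOF

aws organizations create-policy \
  --name deny-cloudtrail-tampering \
  --type SERVICE_CONTROL_POLICY \
  --description "Prevent disabling CloudTrail in member accounts" \
  --content file://deny-cloudtrail-tampering.json

# attach to an OU or account (IDs are placeholders)
aws organizations attach-policy --policy-id <policy-id> --target-id <ou-or-account-id>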

Implement permission boundaries for safe delegation. Permission boundaries let you delegate IAM creation without granting unlimited access. A team can create IAM roles for their Lambda functions, but the boundary ensures they can't grant themselves administrator access.
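
A sketch of delegated role creation under a boundary, assuming a pre-created managed policy named DeveloperBoundary and a hypothetical trust policy file:

aws iam create-role \
  --role-name team-lambda-execution \
  --assume-role-policy-document file://lambda-trust-policy.json \
  --permissions-boundary arn:aws:iam::123456789012:policy/DeveloperBoundary

To make the boundary mandatory, pair this with an IAM policy that allows iam:CreateRole only when the iam:PermissionsBoundary condition key matches that policy ARN, so delegated teams cannot omit it.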

Use IAM Access Analyzer to find and fix issues. Access Analyzer generates least-privilege policies based on CloudTrail activity, identifies resources shared with external accounts, and validates policies for security issues. Enable it and review findings weekly.
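
Enabling an account-level analyzer and pulling its active findings takes two calls; the analyzer name, region, and account ID are placeholders:

aws accessanalyzer create-analyzer \
  --analyzer-name account-analyzer \
  --type ACCOUNT

aws accessanalyzer list-findings \
  --analyzer-arn arn:aws:access-analyzer:us-east-1:123456789012:analyzer/account-analyzer \
  --filter '{"status": {"eq": ["ACTIVE"]}}'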

Advanced Level

These practices address enterprise-scale identity requirements.

Centralize access with AWS IAM Identity Center. For multi-account environments, IAM Identity Center provides single sign-on across all accounts. It supports SAML 2.0 identity providers, Microsoft Active Directory integration, and custom identity sources. Users authenticate once and access multiple accounts through temporary credentials.

Implement attribute-based access control (ABAC). ABAC uses tags for dynamic permissions. Instead of maintaining separate policies for each team, use a single policy that grants access based on matching resource tags. This scales better than traditional role-based access control when you have hundreds of resources and teams.

Common IAM Mistakes to Avoid

AWS Security Hub identifies these misconfigurations constantly:

  1. Administrative access granted unnecessarily. Teams default to AdministratorAccess because it's easier. The blast radius of any credential compromise becomes the entire account.

  2. Wildcard permissions in production. Policies with "Resource": "*" and broad actions like "s3:*" or "ec2:*" grant far more access than needed.

  3. Missing MFA for privileged access. Root users and administrators without MFA are one password away from compromise.

  4. Unused credentials not deactivated. Security Hub flags credentials unused for 90 days. These orphaned credentials accumulate and increase attack surface.

  5. Access keys not rotated. Keys used for years have years of potential exposure through logs, debugging sessions, and configuration files. Rotate every 90 days maximum.

Testing IAM Security

Security controls only work if you verify them.

Use IAM Policy Simulator to test policies before deployment. The simulator shows what actions a policy allows or denies without making actual API calls.

Run IAM Access Analyzer regularly. Beyond policy validation, Access Analyzer detects external access you might have forgotten about. A resource policy from two years ago that grants cross-account access will show up.

Review service last accessed data. IAM tracks when each permission was last used. If a role has S3 permissions but hasn't accessed S3 in 6 months, those permissions are candidates for removal.
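
Last-accessed data is generated asynchronously: start a job for the role's ARN, then fetch results with the returned job ID. A minimal sketch with a placeholder ARN:

aws iam generate-service-last-accessed-details \
  --arn arn:aws:iam::123456789012:role/app-role
# returns a JobId; pass it to the second call
aws iam get-service-last-accessed-details \
  --job-id <job-id-from-previous-call>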

With identity locked down, your next priority is protecting your network. This is where many teams make costly mistakes.

Network Security Best Practices

Network security determines what can communicate with what in your AWS environment. Get this wrong, and internal resources become internet-accessible, or attackers who breach one system can pivot to everything else.

Foundation Level

Start with these controls for any AWS environment.

Design VPCs with security in mind. Use multiple subnets across Availability Zones for high availability. Separate public subnets (for load balancers) from private subnets (for applications) and isolated subnets (for databases). Never place databases in public subnets.

Understand Security Groups. Security groups are stateful virtual firewalls at the instance level. If you allow inbound traffic, the response is automatically allowed outbound. Each network interface can have up to five security groups by default (the quota can be raised), and you can associate one security group with multiple instances.

Security group best practices:

  • Authorize only specific IAM principals to manage security groups
  • Create purpose-specific groups rather than reusing generic ones
  • Restrict SSH (port 22) and RDP (port 3389) to specific trusted IP addresses, never 0.0.0.0/0 (see the example after this list)
  • Use security group referencing to allow traffic between resources without hardcoding IP addresses
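
For example, restricting SSH to a corporate CIDR instead of 0.0.0.0/0 is a single rule; the group ID and CIDR below are placeholders:

aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 22 \
  --cidr 203.0.113.0/24   # replace with your office or VPN range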

Understand Network ACLs. NACLs are stateless controls at the subnet level. Responses to allowed traffic must be explicitly allowed. Rules are evaluated in order starting with the lowest numbered rule. NACLs provide defense in depth when combined with security groups.

Enable VPC Flow Logs for all VPCs. Flow Logs capture traffic metadata for security analysis and troubleshooting. Without them, you're blind to network activity. GuardDuty analyzes Flow Logs for threat detection.
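
Enabling flow logs for a VPC to CloudWatch Logs is one call; the VPC ID, log group, and delivery role ARN below are placeholders you must create first:

aws ec2 create-flow-logs \
  --resource-type VPC \
  --resource-ids vpc-0123456789abcdef0 \
  --traffic-type ALL \
  --log-destination-type cloud-watch-logs \
  --log-group-name /vpc/flow-logs \
  --deliver-logs-permission-arn arn:aws:iam::123456789012:role/flow-logs-delivery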

Intermediate Level

Production environments need these additional controls.

Build multi-tier architectures. Separate resources by security zone:

  • Web tier (public subnet): Load balancers, NAT Gateways
  • Application tier (private subnet): EC2 instances, containers, Lambda
  • Data tier (isolated subnet): RDS, ElastiCache, Elasticsearch

Use VPC Endpoints for private connectivity. VPC Endpoints eliminate exposure to the public internet when accessing AWS services. Instead of routing S3 traffic through the internet gateway, traffic stays within the AWS network. This reduces attack surface and often improves latency.

Benefits of AWS PrivateLink:

  • Eliminates exposure to public internet
  • Simplifies network architecture
  • Reduces attack surface
  • Supports private IP addresses and security groups
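
A gateway endpoint for S3 carries no charge and only needs the route tables it should update; the IDs below are placeholders:

aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Gateway \
  --service-name com.amazonaws.us-east-1.s3 \
  --route-table-ids rtb-0123456789abcdef0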

Use security group referencing. Instead of allowing traffic from specific IP addresses, reference another security group. This creates dynamic rules that update automatically when instances change:

{
  "IpPermissions": [
    {
      "IpProtocol": "tcp",
      "FromPort": 5432,
      "ToPort": 5432,
      "UserIdGroupPairs": [
        {
          "GroupId": "sg-0123456789abcdef0",
          "Description": "Allow PostgreSQL from application tier"
        }
      ]
    }
  ]
}

This rule allows PostgreSQL connections from any instance in the referenced security group, regardless of IP address changes.

Advanced Level

Enterprise environments need additional network security capabilities.

Use AWS Network Firewall for advanced filtering. Network Firewall provides stateful and stateless inspection, intrusion prevention (IPS), and URL filtering. Deploy it for traffic that needs more granular control than security groups and NACLs provide.

Implement DDoS protection. AWS Shield Standard is automatically included with all AWS accounts at no cost, protecting CloudFront, Route 53, Global Accelerator, and Elastic Load Balancing. For critical applications, Shield Advanced provides 24/7 access to the AWS DDoS Response Team (DRT), DDoS cost protection (credits for scaling costs during attacks), and application layer protection when combined with WAF.

Use WAF for application layer protection. AWS WAF protects web applications against common exploits. AWS Managed Rules provide baseline protection for OWASP Top 10 vulnerabilities, SQL injection, and known bad inputs. New in 2025: AWS WAF provides automatic application layer (L7) DDoS protection with machine learning-based detection.

Port Configuration by Application Type

Forums consistently ask: "Which ports should I open?" Here's concrete guidance.

Web applications:

  • Allow 443 (HTTPS) inbound from 0.0.0.0/0 for public traffic
  • Allow 80 (HTTP) only if needed for redirect to HTTPS
  • Never expose backend ports (database, cache, admin) publicly

Databases:

  • Allow database ports (5432 for PostgreSQL, 3306 for MySQL) only from application security group
  • Never allow 0.0.0.0/0 for database ports
  • Place databases in private subnets with no internet gateway route

SSH/RDP management:

  • Avoid opening these ports entirely. Use AWS Systems Manager Session Manager instead
  • If you must use SSH/RDP, allow only from specific management IPs, never 0.0.0.0/0
  • Consider a bastion host in a public subnet as a jump box

Before and after example:

Overly permissive (WRONG):

Inbound: SSH (22) from 0.0.0.0/0
Inbound: PostgreSQL (5432) from 0.0.0.0/0
Inbound: HTTP (80) from 0.0.0.0/0
Inbound: HTTPS (443) from 0.0.0.0/0

Properly configured (RIGHT):

Inbound: HTTPS (443) from 0.0.0.0/0
Inbound: HTTP (80) from 0.0.0.0/0 (redirect only)
Outbound: PostgreSQL (5432) to sg-database
No SSH - use Session Manager

Common Network Security Mistakes

  1. 0.0.0.0/0 for SSH/RDP. This exposes management ports to the entire internet. Brute force attacks begin within minutes.

  2. Default security groups left in use. The default security group allows all traffic from other instances in the same group. Create purpose-specific groups instead.

  3. Databases in public subnets. Even with security groups blocking direct access, public subnets mean databases have public IP addresses and routes to the internet.

  4. VPC Flow Logs not enabled. Without Flow Logs, you can't investigate security incidents or detect reconnaissance.

  5. Missing network segmentation. All resources in one subnet with one security group means compromise of any system compromises everything.

Your network perimeter is now secured. Next, let's protect the data itself, because a breach of encrypted data is far less damaging than plaintext exposure.

Data Protection Best Practices

Data protection determines whether a breach exposes readable customer data or encrypted garbage. Encryption at rest and in transit, combined with proper key management, transforms security incidents from catastrophes into contained events.

Foundation Level

These practices are non-negotiable for any production environment.

Enable S3 Block Public Access at the account level. This single setting prevents accidental public bucket creation. Enable it in your account settings, then enable it again at the bucket level for defense in depth.
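
One CLI call applies this account-wide (the account ID is a placeholder); repeat per bucket with aws s3api put-public-access-block for defense in depth:

aws s3control put-public-access-block \
  --account-id 123456789012 \
  --public-access-block-configuration \
    BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true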

Understand S3 encryption options. Starting January 5, 2023, all new object uploads to Amazon S3 are automatically encrypted using SSE-S3 at no additional cost. You still choose the right option for your needs:

  • SSE-S3 (Amazon S3 managed keys): AWS manages everything. Good for most use cases. Free.
  • SSE-KMS (AWS KMS keys): Customer control over key management. Separate permissions for key usage. Audit trail via CloudTrail. Supports S3 Bucket Keys to reduce KMS costs by up to 99%.
  • DSSE-KMS (Dual-layer encryption): Two layers of encryption for enhanced security requirements.
  • SSE-C (Customer-provided keys): You manage keys outside AWS. AWS performs encryption/decryption. Complex to manage.
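
To switch a bucket's default encryption to SSE-KMS with Bucket Keys enabled, one call is enough; the bucket name and key ARN are placeholders:

aws s3api put-bucket-encryption \
  --bucket my-bucket \
  --server-side-encryption-configuration '{
    "Rules": [{
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "aws:kms",
        "KMSMasterKeyID": "arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555"
      },
      "BucketKeyEnabled": true
    }]
  }'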

Enforce HTTPS-only access. Use bucket policies to deny unencrypted connections:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "EnforceHTTPS",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ],
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "false"
        }
      }
    }
  ]
}

Enable EBS encryption by default. In your EC2 settings, enable "Always encrypt new EBS volumes." This is a per-region account setting (repeat it in every region you use) and eliminates unencrypted volumes without requiring changes to launch templates or CloudFormation.
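
The console setting maps to one API call per region, with a companion call to verify it:

aws ec2 enable-ebs-encryption-by-default --region us-east-1
aws ec2 get-ebs-encryption-by-default --region us-east-1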

Intermediate Level

These practices address key management and secrets handling.

Choose the right KMS key type:

  • AWS managed keys: Created automatically when services need encryption. Less management overhead but less control.
  • Customer managed keys: You create and manage with full control over key policies, rotation, and deletion. Required for cross-account access and custom key policies.

Implement proper secrets management. Never hardcode credentials. Use AWS Secrets Manager for credentials requiring rotation (database passwords, API keys), and Parameter Store for configuration data without rotation requirements.

Requirement | Use Secrets Manager | Use Parameter Store
Automatic rotation | Yes | No
Database credentials | Yes | Either
Multi-Region replication | Yes | No
Non-sensitive config | No | Yes
Cost optimization | No | Yes

Secrets Manager integrates natively with RDS, DocumentDB, and Redshift for automatic credential rotation without application changes.
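
A sketch of both stores side by side, with placeholder names and values: Secrets Manager for a database credential that will rotate, Parameter Store (SecureString) for plain configuration:

aws secretsmanager create-secret \
  --name prod/db/password \
  --secret-string '{"username": "app", "password": "CHANGE_ME"}'

aws ssm put-parameter \
  --name /prod/app/feature-flags \
  --type SecureString \
  --value '{"newCheckout": true}'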

Use CodeGuru Reviewer to detect hard-coded secrets. CodeGuru scans code for credentials, API keys, and other secrets that shouldn't be in source control. Implement pre-commit hooks as an additional layer.

Advanced Level

These practices address regulatory requirements and advanced data protection.

Implement automatic key rotation. KMS supports automatic annual rotation for customer managed keys. For secrets, configure rotation schedules in Secrets Manager (30, 60, or 90 days recommended). Test rotation in non-production first.

Use S3 Object Lock for compliance. Object Lock prevents deletion or overwrite for a retention period. Compliance mode prevents anyone, including root, from deleting objects before retention expires. Required for SEC Rule 17a-4, FINRA, and similar regulations.

Consider Amazon Macie for sensitive data discovery. Macie uses machine learning to automatically discover and classify sensitive data in S3. It finds PII, financial data, and credentials you might not know exist in your buckets.

Use CloudHSM for FIPS 140-2 Level 3 requirements. If regulatory requirements mandate Level 3 validation, CloudHSM provides single-tenant hardware security modules. More complex than KMS but meets stricter compliance requirements.

Common Data Protection Mistakes

  1. Public S3 buckets. The Capital One breach exposed 100+ million records through a misconfigured bucket. Enable Block Public Access at account level.

  2. Unencrypted data in development. Development environments often skip encryption. Treat dev data with production security because it often contains production-like data.

  3. Secrets in code repositories. Credentials committed to Git persist forever in history. Even after removal, they're exposed. Use git-secrets or similar pre-commit hooks.

  4. Missing backup validation. Backups you've never tested might not work when needed. Regularly restore and verify backup integrity.

Your data is now protected. But how do you know if someone tries to compromise it? That's where logging and monitoring become critical.

Logging and Monitoring Best Practices

Detection is essential. You can't respond to threats you don't see. Proper logging and monitoring transforms your AWS environment from opaque to observable.

Foundation Level

Enable these services from day one. They form your security visibility foundation.

Enable CloudTrail across all regions. CloudTrail records every API call in your account. Create a multi-region trail, not a single-region trail. Attackers can operate in any region, and single-region logging leaves blind spots.

CloudTrail best practices:

  • Enable log file integrity validation (detects tampering)
  • Store logs in a dedicated S3 bucket with strict access controls
  • Encrypt logs using KMS
  • Enable MFA Delete on the log bucket
  • Integrate with CloudWatch Logs for real-time alerting
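
Creating the multi-region trail with integrity validation looks like the sketch below; the trail and bucket names are placeholders, and the bucket needs the standard CloudTrail bucket policy before the call succeeds:

aws cloudtrail create-trail \
  --name org-security-trail \
  --s3-bucket-name my-cloudtrail-logs-123456789012 \
  --is-multi-region-trail \
  --enable-log-file-validation

aws cloudtrail start-logging --name org-security-trail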

Enable GuardDuty immediately. GuardDuty is managed threat detection that analyzes CloudTrail logs, VPC Flow Logs, and DNS query logs. It uses machine learning and threat intelligence to identify malicious activity.

GuardDuty detects:

  • Cryptocurrency mining on compromised instances
  • Credential compromise from unusual locations
  • Command and control communication
  • S3 data exfiltration patterns
  • Reconnaissance activity (port scanning, API enumeration)

Enable all GuardDuty protection plans: S3 Protection, EKS Protection, Malware Protection, RDS Protection, and Lambda Protection.
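
Enabling GuardDuty in a region is one call; individual protection plans are then toggled per detector (S3 Protection shown as an example feature name, which is an assumption you should check against the current feature list):

aws guardduty create-detector \
  --enable \
  --finding-publishing-frequency FIFTEEN_MINUTES
# returns the DetectorId; use it to enable individual protection plans
aws guardduty update-detector \
  --detector-id <detector-id-from-previous-call> \
  --features '[{"Name": "S3_DATA_EVENTS", "Status": "ENABLED"}]'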

Set up essential CloudWatch alarms. Create alarms for security-critical events:

{
  "AlarmName": "RootAccountUsage",
  "MetricName": "RootAccountUsageCount",
  "Namespace": "CloudTrailMetrics",
  "Threshold": 1,
  "ComparisonOperator": "GreaterThanOrEqualToThreshold",
  "EvaluationPeriods": 1,
  "Period": 300,
  "Statistic": "Sum"
}
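
The alarm above watches a custom metric, so it only fires if a metric filter on the CloudTrail CloudWatch Logs group actually emits RootAccountUsageCount. A sketch of that filter; the log group name is a placeholder and the pattern is the commonly used CIS root-usage pattern:

aws logs put-metric-filter \
  --log-group-name CloudTrail/DefaultLogGroup \
  --filter-name root-account-usage \
  --filter-pattern '{ $.userIdentity.type = "Root" && $.userIdentity.invokedBy NOT EXISTS && $.eventType != "AwsServiceEvent" }' \
  --metric-transformations metricName=RootAccountUsageCount,metricNamespace=CloudTrailMetrics,metricValue=1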

Monitor these events at minimum:

  • Root user API calls (any root activity is unusual)
  • IAM policy changes
  • Security group modifications
  • CloudTrail configuration changes
  • Failed console login attempts

Intermediate Level

Scale security visibility across multiple accounts.

Centralize logging in a dedicated account. Create a Log Archive account within AWS Organizations to store all CloudTrail, Config, and application logs. Use an organization trail to automatically log all member accounts. Protect logs with SCPs that prevent disabling security services.

Enable Security Hub for aggregated findings. Security Hub aggregates findings from GuardDuty, Inspector, Macie, Config, and IAM Access Analyzer. It provides automated compliance checks against CIS AWS Foundations Benchmark, PCI DSS, and AWS Foundational Security Best Practices.

Enable Security Hub in all accounts:

aws securityhub enable-security-hub \
  --enable-default-standards \
  --region us-east-1

Review your security score weekly. Address Critical and High findings immediately.

Deploy AWS Config for continuous compliance. Config tracks resource configurations and evaluates compliance against rules. Use conformance packs to deploy standard rule sets:

  • CIS AWS Foundations Benchmark
  • AWS Foundational Best Practices
  • PCI DSS (if handling payment data)

Config can trigger automated remediation through Systems Manager Automation when resources become non-compliant.

Advanced Level

Enterprise-scale detection and investigation capabilities.

Implement Amazon Security Lake for unified security data. Security Lake automatically centralizes security data from AWS services, SaaS providers, and custom sources into a purpose-built data lake. Data is normalized to the Open Cybersecurity Schema Framework (OCSF) for consistent analysis.

Use Amazon Detective for investigation. Detective uses machine learning to analyze CloudTrail logs, VPC Flow Logs, and GuardDuty findings. When you have a GuardDuty finding, Detective automatically builds a unified view showing related activity, affected resources, and potential root cause.

Configure custom GuardDuty threat lists. Add known malicious IPs and domains to GuardDuty threat lists. Add trusted IPs (corporate offices, known partners) to suppress false positives.

Attack Detection (Answering Forum Questions)

Forums consistently ask about detecting DDoS, brute force, and unusual traffic. Here's how.

DDoS detection: GuardDuty combined with Shield metrics identifies volumetric attacks. CloudWatch monitors for traffic spikes. Shield Advanced provides attack notifications and analytics for protected resources.

Brute force detection: GuardDuty detects SSH and RDP brute force attempts with specific finding types like UnauthorizedAccess:EC2/SSHBruteForce. CloudWatch Logs Insights can query CloudTrail for repeated failed authentication:

fields @timestamp, errorCode, userIdentity.userName
| filter eventName = "ConsoleLogin" and errorCode = "Failed"
| stats count(*) as failedLogins by userIdentity.userName
| sort failedLogins desc

IDS/IPS strategy: GuardDuty serves as managed IDS, detecting threats through log analysis. AWS Network Firewall provides IPS capabilities with stateful inspection and intrusion prevention rules.

Monitoring Without Alert Fatigue

Alert fatigue makes security monitoring useless. When everything is urgent, nothing is.

Prioritize alerts by severity:

  • Critical: Root activity, credential compromise indicators, active attacks. Immediate notification (PagerDuty, phone call).
  • High: Security group changes to production, IAM policy modifications. Same-day investigation.
  • Medium: Non-critical Config rule violations, low-severity GuardDuty findings. Weekly review.

Suppress expected events. If your CI/CD pipeline legitimately modifies security groups, suppress those specific findings rather than ignoring the entire category.

Use Security Hub automated suppression. Create suppression rules for findings you've investigated and accepted as false positives or acceptable risk.

Detection is essential, but preparation turns potential catastrophes into contained incidents. Let's cover the security mistakes that lead to breaches before discussing incident response.

Common AWS Security Mistakes (That Lead to Breaches)

Learning from others' failures is cheaper than learning from your own. These are the misconfigurations I see consistently, each one a potential breach waiting to happen.

The 8 Most Dangerous Misconfigurations

1. Overly Permissive IAM Policies

Teams default to AdministratorAccess because it's faster. Service accounts get admin policies "temporarily" that become permanent. Wildcard permissions ("Action": "*", "Resource": "*") grant far more than needed.

Prevention: Use IAM Access Analyzer to generate least-privilege policies from CloudTrail activity. Implement permission boundaries. Never start with admin access and "lock it down later."

2. Unencrypted Data Storage

S3 buckets without encryption (less common since 2023 but legacy buckets exist). EBS volumes created before encryption defaults were enabled. RDS databases with encryption disabled at launch (can't be enabled later without recreation).

Prevention: Enable account-wide encryption defaults. Deploy AWS Config rules (encrypted-volumes, s3-bucket-server-side-encryption-enabled, rds-storage-encrypted). Regularly scan for unencrypted resources.

3. Publicly Accessible Resources

S3 buckets with public read or write access. EC2 instances with 0.0.0.0/0 for SSH/RDP. RDS databases with "Publicly Accessible" enabled. Elasticsearch domains with public endpoints.

Prevention: Enable S3 Block Public Access at account level. Restrict security groups to specific IPs. Never enable public accessibility for databases. Use VPC endpoints for private connectivity.

4. Disabled or Misconfigured Logging

CloudTrail disabled or limited to a single region. VPC Flow Logs not enabled. S3 access logging disabled. Log buckets without protection against deletion.

Prevention: Create organization trails covering all regions and accounts. Enable VPC Flow Logs for all VPCs. Use SCPs to prevent CloudTrail modification. Enable MFA Delete and Object Lock on log buckets.

5. Weak Network Security

Security groups allowing 0.0.0.0/0 for all ports. Default security groups in use (allow all traffic within group). NACLs with allow-all rules. No network segmentation between environments.

Prevention: Follow least privilege for security groups. Create purpose-specific groups. Implement subnet-level isolation. Use AWS Network Firewall for advanced filtering. Regular security group audits.

6. Credential Management Failures

Access keys not rotated for years. Root account credentials used for daily operations. Secrets hard-coded in application code or committed to Git. Access keys stored in configuration files.

Prevention: Use temporary credentials (IAM roles) everywhere possible. Rotate access keys every 90 days. Use Secrets Manager for credential storage. Implement pre-commit hooks with git-secrets. Enable Trusted Advisor exposed access key checks.

7. Missing MFA

Root user without MFA. IAM users with console access but no MFA. Privileged users (administrators, billing) without MFA requirement.

Prevention: Enforce MFA for all root users. Use IAM policies with MFA conditions for sensitive operations. Implement IAM Identity Center with mandatory MFA. Deploy AWS Config rules to detect users without MFA.

8. Unused Resources and Permissions

Inactive IAM users and access keys accumulating. Unused security groups cluttering the environment. Orphaned resources consuming costs. Stale permissions granted for past projects.

Prevention: Regular access reviews using IAM Access Analyzer service last accessed data. Implement user lifecycle policies. Remove unused access keys after 90 days. Use AWS Config to identify unused security groups.

How to Audit Your Account for These Issues

Run these tools to find misconfigurations before attackers do:

Security Hub: Enable and review your security score. Address Critical and High findings first. Each finding includes remediation guidance.

Trusted Advisor: Run security checks for MFA on root, unrestricted security groups, exposed access keys, and S3 bucket permissions.

IAM Access Analyzer: Detect resources shared with external accounts and unused access.

AWS Config: Deploy conformance packs for CIS Benchmarks and AWS Foundational Best Practices.

Amazon Inspector: Scan EC2 instances, containers, and Lambda functions for software vulnerabilities and network exposure.

Real-World Breach Examples

Capital One (2019): A misconfigured WAF combined with an overly permissive IAM role allowed an attacker to access S3 buckets containing 100+ million customer records. The practices in this guide, specifically least privilege IAM, network segmentation, and proper logging, would have prevented or quickly detected this breach.

S3 Data Exposures: Numerous organizations have exposed sensitive data through public bucket settings. The pattern is consistent: someone disables Block Public Access for a "quick" data share and forgets to re-enable it, or a bucket policy grants public access that nobody reviews.

The lesson: Most breaches exploit configuration mistakes, not sophisticated vulnerabilities. Fix the basics first.

Incident Response Preparation

Even with perfect prevention, incidents happen. Preparation determines whether an incident becomes a minor disruption or a major breach.

Building Your IR Playbook

Document these elements before you need them:

Incident types and severity levels. Define what constitutes Critical (active breach, data exfiltration), High (credential compromise, suspicious access), Medium (policy violation, non-compliance), and Low (informational findings).

Communication plan. Who gets notified for each severity level? What's the escalation path? How do you communicate externally if customer data is affected?

Containment procedures. Pre-documented steps for common scenarios: compromised EC2 instance (isolate via security group), compromised credentials (disable keys, rotate), suspected data breach (preserve evidence, assess scope).

Contact information. Security team contacts, AWS Support details (know your support plan capabilities), legal counsel, PR contacts for public incidents.

AWS Security Incident Response Service

AWS launched Security Incident Response service (generally available December 2024), providing automated monitoring, AI-powered investigation, and 24/7 access to AWS Customer Incident Response Team (CIRT).

Key capabilities:

  • Automated triage: Reduces alert fatigue by filtering and prioritizing security findings from GuardDuty and Security Hub
  • Agentic AI investigation: Uses AI to analyze security events and provide investigation insights
  • Proactive escalation: Automatically escalates high-severity cases
  • Automated containment: Implements containment actions to prevent spread
  • 24/7 CIRT access: Expert-guided response within minutes for critical incidents

The service is available in 12 AWS Regions with AWS Organizations integration.

Forensic Readiness

Set up these capabilities before incidents occur:

Enable detailed logging. CloudTrail, VPC Flow Logs, and application logs must be enabled before an incident. You can't retroactively enable logging for past events.

Configure automated snapshots. Use EventBridge rules to automatically create EBS snapshots and AMIs when GuardDuty detects threats. Preserve evidence before it's modified.

Set up an isolated forensic environment. Prepare a CloudFormation template for a forensic VPC with isolated subnets, no internet access, and IAM roles for investigation. Deploy it when needed.

Define evidence preservation procedures. Document how to capture memory dumps, disk images, and log exports. Know where evidence will be stored (S3 bucket with versioning and MFA Delete).

When Something Goes Wrong (Step-by-Step Runbook)

Step 1: Detect and Triage

Assess severity based on your defined levels. Determine scope: which resources, accounts, and data might be affected? Notify appropriate team members based on severity.

Step 2: Contain

Isolate affected resources. For EC2: modify security group to deny all traffic. For IAM: disable credentials immediately. For S3: update bucket policy to deny all access. Create snapshots before making changes for forensic purposes.
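
The containment actions above map to a handful of CLI calls; every ID below is a placeholder, and the isolation security group (no inbound or outbound rules) should exist before an incident:

# isolate a compromised instance by swapping its security groups to the quarantine group
aws ec2 modify-instance-attribute \
  --instance-id i-0123456789abcdef0 \
  --groups sg-0123456789abcdef0
# deactivate a compromised access key without deleting evidence
aws iam update-access-key \
  --user-name compromised-user \
  --access-key-id AKIAIOSFODNN7EXAMPLE \
  --status Inactive
# preserve disk state before remediation
aws ec2 create-snapshot \
  --volume-id vol-0123456789abcdef0 \
  --description "forensic snapshot - incident response"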

Step 3: Investigate

Analyze CloudTrail logs to understand the timeline. Review VPC Flow Logs for network activity. Check GuardDuty findings for related alerts. Use Amazon Detective for visualization if enabled.

Step 4: Remediate

Fix the root cause, not just the symptoms. If a misconfigured security group enabled access, fix the group and review all similar groups. Rotate all potentially compromised credentials.

Step 5: Recover

Restore from clean backups if systems were compromised. Validate that restored systems don't contain backdoors. Re-enable services gradually with monitoring.

Step 6: Post-Incident Review

Document what happened, how it was detected, how it was contained, and total time to resolution. Update procedures based on lessons learned. Share findings (appropriately sanitized) with the team.

Sometimes the complexity or stakes are too high to handle alone. Let's discuss when to bring in external expertise.

When to Get External Help

There's no shame in recognizing when you need expertise beyond your team's current capabilities. The question is knowing when that point arrives.

Signs You Need Security Expertise

Compliance deadlines with unclear paths. SOC 2, HIPAA, PCI DSS, or ISO 27001 certification with audit timelines you can't meet internally. Compliance frameworks have specific requirements that take time to understand and implement correctly.

Significant security findings with unclear remediation. Security Hub shows Critical findings but your team isn't sure how to fix them without breaking applications. GuardDuty detects threats you don't know how to investigate.

Major architectural changes. Migrating to AWS, implementing a multi-account architecture, or redesigning your network. These changes set security posture for years.

Active security incident exceeding capabilities. Your team is overwhelmed, the scope is expanding, and you need experienced incident responders immediately.

No dedicated security expertise. Your team is excellent at development and operations but doesn't have security specialists. That's fine for many organizations, but it means external review catches what internal teams miss.

AWS Professional Services vs. Partners

AWS Professional Services provides direct AWS expertise. They know the services deeply and have seen many implementations. Higher cost but direct access to AWS knowledge. Best for complex implementations where you want AWS's official guidance.

AWS Security Partners (like independent consultancies) offer specialized expertise, often at more flexible pricing. They can provide hands-on implementation, not just recommendations. Good for organizations wanting practical help alongside guidance.

AWS Support Plans determine your access to AWS expertise. Enterprise Support includes a Technical Account Manager (TAM) who knows your environment. The AWS Security Incident Response service provides 24/7 CIRT access for incidents.

Security Assessment Checklist

When engaging external help, ensure the assessment covers:

Account-level security review: Root user protection, IAM configuration, logging setup, cost management controls. These AWS account best practices form the foundation.

Network architecture review: VPC design, security groups, NACLs, connectivity patterns. Identify overly permissive rules and missing segmentation.

IAM policy audit: Least privilege analysis, permission boundary usage, access analyzer findings, unused credentials.

Compliance gap analysis: Map current state against your target compliance framework. Identify specific gaps and remediation requirements.

Remediation roadmap with priorities: Not everything needs fixing immediately. Prioritize based on risk and effort. Critical issues first, then high, then medium.

Whether you implement yourself or bring in help, here's what matters most.

Conclusion

AWS provides over 300 security features. You don't need to implement all of them to be secure. The 80/20 framework in this guide identifies the critical practices that prevent most breaches.

The Foundation practices (your critical 20%):

  • Root user protection with hardware MFA
  • Least privilege IAM with temporary credentials
  • Security groups restricting access to specific IPs and ports
  • S3 Block Public Access at account level
  • CloudTrail enabled across all regions
  • GuardDuty enabled with all protection plans

Your 30-day implementation roadmap:

Week 1 (Detection Foundation):

  • Enable MFA on root user
  • Create multi-region CloudTrail trail
  • Enable GuardDuty in all regions
  • Enable Security Hub with default standards

Week 2 (Identity Hardening):

  • Audit IAM users, disable unused credentials
  • Review policies for wildcards and admin access
  • Implement MFA for all human users
  • Configure IAM Access Analyzer

Week 3 (Network Hardening):

  • Review security groups for 0.0.0.0/0 rules
  • Enable VPC Flow Logs for all VPCs
  • Verify databases are in private subnets
  • Remove default security group usage

Week 4 (Data and Monitoring):

  • Enable S3 Block Public Access account-wide
  • Verify encryption settings across services
  • Configure CloudWatch alarms for critical events
  • Review Security Hub findings and address Critical/High

Start here: Enable Security Hub, review your security score, and address critical findings first. That single action provides immediate visibility into your security posture.

Security is ongoing, not one-time. Schedule monthly reviews of Security Hub findings. Conduct quarterly IAM access reviews. Test incident response procedures annually. The practices in this guide provide the foundation, but continuous improvement makes the difference.

Get a Professional AWS Security Assessment

I conduct comprehensive security reviews of your AWS environment, identifying misconfigurations, compliance gaps, and security risks. Receive a prioritized remediation roadmap tailored to your organization.

Frequently Asked Questions

How do I set up firewalls so AWS instances are safe without losing connectivity?
Use security groups as your primary firewall. Allow only necessary ports (443 for HTTPS, specific database ports from application security groups). For management access, use AWS Systems Manager Session Manager instead of opening SSH/RDP. Add NACLs as a second layer for subnet-level control. VPC endpoints provide private AWS service access without internet exposure.
Which ports and services should be open for EC2 web applications?
For web applications: Allow port 443 (HTTPS) inbound from 0.0.0.0/0. Allow port 80 only for HTTP-to-HTTPS redirect. Never expose database ports (3306, 5432) publicly. Place databases in private subnets and allow database ports only from your application security group. Avoid opening SSH (22) or RDP (3389) entirely by using Session Manager.
What's the best way to monitor and stop DDoS and brute-force attacks?
Enable GuardDuty for automatic brute force detection with findings like UnauthorizedAccess:EC2/SSHBruteForce. AWS Shield Standard provides free DDoS protection for CloudFront, Route 53, and ELB. For critical applications, Shield Advanced adds 24/7 DDoS Response Team access. Use AWS WAF with rate limiting rules for application-layer protection.
What IDS/IPS strategy works for AWS?
GuardDuty serves as managed IDS, analyzing CloudTrail, VPC Flow Logs, and DNS logs using machine learning to detect threats. For IPS capabilities, use AWS Network Firewall with stateful inspection and intrusion prevention rules. WAF provides application-layer protection. This combination provides comprehensive detection and prevention without managing IDS/IPS infrastructure.
What are the most common AWS security mistakes to avoid?
The top eight: overly permissive IAM policies (especially admin access), unencrypted data storage, publicly accessible resources (S3 buckets, databases), disabled CloudTrail logging, weak network security (0.0.0.0/0 rules), credential management failures (hardcoded secrets), missing MFA, and unused resources with stale permissions.
How do I know if my AWS security measures are actually working?
Enable Security Hub and review your security score regularly. Use IAM Policy Simulator to test policies before deployment. Run IAM Access Analyzer to detect external access and unused permissions. Deploy AWS Config conformance packs for continuous compliance checking. Conduct periodic penetration tests to validate controls work under attack.
