You scaled from 5 to 50 microservices, your Series B is in the bank, and your AWS bill just went from $22K to $47K in three months. Your CFO is asking questions you don't have answers to, and your engineering team is too busy shipping features to dig through Cost Explorer.
This is the exact scenario where most CTOs start wondering whether an AWS cost audit is the right move, or if they should just tell the team to "look at the bill" during the next sprint planning.
I'm going to walk you through exactly what happens during an assessment, phase by phase, so you know what you're getting before you reach out to anyone. I've run these assessments for startups at your stage, and the pattern is remarkably consistent: 30-60% of the bill is recoverable, with quick wins showing up in the first week.
Why Your AWS Bill Doubled After Launch
Before we get into the assessment itself, it's worth understanding why this happened. Not because you need a lecture, but because the patterns I'm about to describe are the same ones the assessment is designed to find.
Your bill didn't double because of one bad decision. It doubled because several reasonable decisions compounded, and understanding why AWS costs keep climbing is the first step to fixing it. Engineering teams provision for peak load "just to be safe." That's fine for one service. But multiply that across four environments (production, staging, QA, dev) and dozens of services, and you're paying for capacity you'll never touch.
The Over-Provisioning Multiplier
Here's the pattern I see in nearly every startup assessment: the team picks an m-family instance for every workload out of habit, and most of those instances sit below 10% CPU utilization at peak. That over-provisioning alone accounts for roughly 30% of wasted cloud spend.
And it's not just compute. EBS volumes provisioned as gp2 when gp3 would cost 20% less. RDS instances sized for a traffic spike that happened once. Lambda functions allocated 1 GB of memory for a task that needs 256 MB. Each one seems minor in isolation, but they add up fast.
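To make the Lambda example concrete, here's a sketch of how linearly memory allocation drives cost, since billing is per GB-second. The rate below is illustrative (roughly the published x86 price), and lowering memory can slow a function down, so measure duration before and after:

```python
# Sketch: Lambda compute cost scales linearly with allocated memory.
# The $/GB-second rate is illustrative; check current AWS pricing.
RATE_PER_GB_SECOND = 0.0000166667  # illustrative, USD

def lambda_monthly_cost(memory_mb: int, avg_duration_s: float,
                        invocations: int) -> float:
    """Cost from billed GB-seconds (the small per-request fee omitted)."""
    gb_seconds = (memory_mb / 1024) * avg_duration_s * invocations
    return gb_seconds * RATE_PER_GB_SECOND

# A task averaging 200 ms across 10M monthly invocations:
over = lambda_monthly_cost(1024, 0.2, 10_000_000)  # 1 GB allocated
right = lambda_monthly_cost(256, 0.2, 10_000_000)  # 256 MB actually needed
```

Because the relationship is linear, the over-allocated version costs exactly four times the right-sized one here, which is the 1 GB-to-256 MB ratio from the example above.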
The real kicker? 53% of organizations used zero commitment instruments in 2023, meaning they're paying full On-Demand pricing for predictable workloads. That's like renting a car by the hour for your daily commute.
The Hidden Costs That Scale With You
Then there are the costs that nobody budgeted for. Cross-AZ data transfer at $0.01/GB in each direction doesn't matter when you have two services. It matters a lot when you have 50 microservices chattering across availability zones. NAT Gateway charges, internet egress, CloudWatch log ingestion: these "hidden" fees can inflate your bill by 10% or more, and they scale with your traffic, not your team's awareness of them.
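A rough model makes that scaling visible. This sketch assumes a hypothetical fully-meshed set of services spread across three AZs with no topology-aware routing, and an invented per-pair traffic figure; the point is the quadratic growth, not the exact dollars:

```python
# Sketch: cross-AZ chatter cost grows with the square of service count.
# $0.01/GB is charged on each side of the transfer (send + receive).
CROSS_AZ_RATE = 0.01  # USD per GB, per direction

def cross_az_monthly_cost(services: int, gb_per_pair_daily: float,
                          cross_az_fraction: float = 0.66) -> float:
    """Rough model: every service talks to every other, and about 2/3
    of those calls cross an AZ boundary with three AZs and no
    topology-aware routing."""
    pairs = services * (services - 1) / 2
    daily_gb = pairs * gb_per_pair_daily * cross_az_fraction
    return daily_gb * 30 * CROSS_AZ_RATE * 2  # x2: billed both directions

# Two services vs fifty, at an assumed 1 GB/day per service pair:
small = cross_az_monthly_cost(2, 1.0)    # one pair: negligible
large = cross_az_monthly_cost(50, 1.0)   # 1,225 pairs: real money
```

Going from 2 to 50 services multiplies the pair count by 1,225, which is why this line item appears out of nowhere as a microservices architecture grows.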
Add in idle resources (that sandbox RDS cluster someone spun up three months ago is still running), and without a tagging strategy, up to 50% of your budget becomes untraceable to any specific team or project. No accountability means no optimization.
This is the mess an assessment is designed to untangle.
What an AWS Cost Optimization Assessment Actually Is
An AWS cost optimization assessment is not someone skimming your Cost Explorer dashboard and emailing you a list of tips you could have Googled. It's a structured, methodology-driven review of your entire AWS environment, built on the AWS Well-Architected Framework Cost Optimization Pillar.
The Well-Architected Framework organizes cost optimization around five design principles and five best practice areas, covering everything from financial management and expenditure awareness to resource efficiency and continuous optimization. A professional assessment systematically evaluates your environment against all of them.
Here's the key distinction: DIY optimization means you know the tips but lack the time, tooling, and cross-service context to apply them systematically. A professional AWS cost review combines tooling with experience to deliver prioritized, quantified recommendations: not just "you should rightsize your instances" but "these 14 instances should move from m5.xlarge to m6i.large, saving you $3,400/month, and here's the order to do it in." If you're looking for a broader maturity-based approach, our cost optimization guide covers the strategic framework. This post is about what happens when you bring someone in to do the analysis for you.
So what does this look like in practice? Let me walk you through the phases.
How the Cost Optimization Assessment Works: Phase by Phase
The entire AWS cost optimization assessment runs about 2-3 weeks end-to-end. Your time commitment is minimal: a kickoff call and roughly 30 minutes to set up read-only access. The rest happens in the background with zero disruption to your environment. No downtime, no deployment freezes.
Phase 1: Discovery and Data Collection
This is the foundation. We start with a kickoff call to scope the assessment: which accounts, environments, and workloads are in play. For most startups, this means everything, but some companies want to focus on production first.
You provide read-only IAM access. No write permissions, no ability to modify anything. I deploy monitoring to collect 2-3 weeks of utilization data: CPU, RAM, storage throughput, IOPS, and network patterns. Ideally this captures at least one peak business period so the data is representative rather than misleadingly quiet.
During this window, I also run the native AWS tools: Cost Explorer analysis, Trusted Advisor checks, Compute Optimizer enrollment, and Cost Optimization Hub review. These tools do a lot of the heavy lifting, but they don't talk to each other well, which is part of why a manual review adds value.
Phase 2: Analysis and Deep Dive
This is where the real work happens. Over 2-5 days, I analyze cost data against Well-Architected Framework best practices, looking at every major spend category.
The analysis covers utilization patterns across all resource types, commitment coverage gaps (are you using Savings Plans or Reserved Instances, and are they covering the right workloads?), idle and orphaned resources, data transfer patterns, and storage tiering opportunities.
The tooling has gotten significantly better. Cost Explorer now offers 36 months of historical data and 18-month forecasting with AI-powered explanations. Compute Optimizer provides ML-driven rightsizing recommendations across 7 resource types, from EC2 instances to Lambda functions to RDS databases. And Cost Optimization Hub consolidates 18 types of recommendations with built-in deduplication, so you don't get double-counted savings estimates.
But here's what the tools miss: context. Cost Optimization Hub will tell you an instance is over-provisioned. It won't tell you that the instance runs your payment processing service and needs a careful migration plan, or that three other "savings opportunities" become irrelevant if you migrate that workload to Graviton first. That's where a structured spending analysis adds value over running the tools yourself.
Phase 3: Findings and Recommendations
Every finding gets prioritized across three dimensions: savings potential, implementation effort, and risk. Each recommendation comes with specific dollar figures, not just percentages, because "$4,200/month" is more actionable than "28% savings on compute."
Findings get categorized into quick wins you can implement this week, commitment optimizations that need a business decision, and architectural changes that require engineering planning. This categorization matters because it gives you a clear sequence: capture the easy savings first, then invest the freed-up budget (or time) into the bigger changes.
Phase 4: The Implementation Roadmap
The final deliverable is a 30/60/90-day roadmap. Short-term wins (0-1 month) are things like EBS volume migrations and idle resource cleanup. Medium-term optimizations (1-3 months) include commitment purchases and rightsizing. Long-term architectural changes (3-6 months) cover Graviton migration, serverless adoption, and storage tiering.
The roadmap also includes governance recommendations: what tagging strategy to enforce, what budget alerts to set up, and what review cadence to maintain so the savings stick.
By the end of this process, you walk away with a specific set of deliverables.
What You Actually Get: Assessment Deliverables
The deliverables from an AWS cost optimization assessment aren't a slide deck with vague recommendations. They're working documents your team can act on immediately. Here's what's included.
The Cost Breakdown Report
This is a complete picture of where every dollar goes. Service-by-service cost attribution, per-environment breakdown (so you can see that staging is costing 40% of what production costs despite serving zero customers), trend analysis showing your cost trajectory, and anomaly identification flagging services with unusual spend patterns.
The report accounts for your existing discounts. If you already have Savings Plans, the savings estimates reflect what you'd save on top of those, not a theoretical number that ignores your current commitments.
Prioritized Savings Recommendations
Each recommendation includes:
- Estimated monthly and annual savings in dollar amounts
- Implementation effort (low, medium, or high)
- Risk level and whether the change is reversible
- Category: quick win, commitment optimization, or architectural change
The prioritization is the part you can't easily replicate with free tools. Cost Optimization Hub deduplicates overlapping recommendations (it won't count rightsizing savings for an instance that should be terminated), but it still presents a flat list. The assessment turns that into a sequenced action plan.
The 30/60/90-Day Roadmap
This is where the assessment becomes a plan:
- Week 1-4: Quick wins like gp2-to-gp3 EBS migrations, idle resource cleanup, and non-production scheduling
- Month 2: Commitment optimization with Savings Plans analysis and purchase recommendations
- Month 3-6: Architectural recommendations including Graviton migration and serverless opportunities
- Ongoing: Governance setup with tagging, budgets, and anomaly detection
The roadmap includes clear ownership recommendations. Some items your team handles, some might need outside help, and some (like commitment purchases) need business approval.
What We Typically Find (And What It Saves)
After running these assessments, the categories of findings are predictable even though the specifics vary. Here's what shows up in almost every startup engagement, organized by how quickly you can capture the savings.
Quick Wins (Week 1)
These are low-risk changes that deliver immediate savings:
- gp2-to-gp3 EBS migration: 8-20% savings on storage with zero downtime. gp2 forces you to over-provision volume size just to get the IOPS you need. gp3 decouples IOPS from storage size, so you stop paying for 3 TB of storage when you only need 500 GB.
- Idle resource cleanup: Unattached EBS volumes, unused Elastic IPs, idle load balancers, and stopped instances with attached storage. These pile up fast in growing startups.
- Non-production scheduling: Shutting down dev and staging environments during evenings and weekends can cut those specific costs by up to 75%.
I typically find $2,000-$5,000/month in quick wins alone for startups in the $20K-$50K/month range.
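The arithmetic behind the first and third quick wins is simple enough to sketch. The per-GB prices and the gp2 IOPS-per-GB ratio below are illustrative approximations of published us-east-1 figures, so treat the outputs as directional rather than a quote:

```python
# Sketch: back-of-envelope math for gp2-to-gp3 and non-prod scheduling.
GP2_PER_GB = 0.10         # USD/GB-month (illustrative)
GP3_PER_GB = 0.08         # USD/GB-month (illustrative)
GP2_IOPS_PER_GB = 3       # gp2 couples IOPS to provisioned size
GP3_BASELINE_IOPS = 3000  # gp3 includes 3000 IOPS regardless of size

def gp2_monthly_cost(data_gb: float, needed_iops: int) -> float:
    """On gp2 you must provision enough GB to hit the IOPS target,
    even if that far exceeds the data you store."""
    provisioned_gb = max(data_gb, needed_iops / GP2_IOPS_PER_GB)
    return provisioned_gb * GP2_PER_GB

def scheduling_savings(on_hours_per_week: float) -> float:
    """Fraction of an always-on bill saved by scheduling."""
    return 1 - on_hours_per_week / (24 * 7)

# 500 GB of data that needs 3000 IOPS: gp2 forces 1000 GB provisioned,
# gp3 covers it at actual size because the baseline IOPS are included.
gp2 = gp2_monthly_cost(500, 3000)
gp3 = 500 * GP3_PER_GB
# Dev/staging on business hours only (8 hours, 5 days) approaches the
# "up to 75%" figure; a 12/5 schedule lands around 64%.
sched = scheduling_savings(8 * 5)
```

On these assumed numbers the gp3 volume costs less than half the gp2 one, which is why the per-volume savings can exceed the headline 20% price difference when IOPS forced over-provisioning.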
Commitment Optimization (Month 1-2)
This is where the bigger numbers live. If you're running predictable workloads on pure On-Demand pricing, you're overpaying significantly.
- Compute Savings Plans: Up to 66% off On-Demand, with flexibility to change instance families, regions, and even compute services (EC2, Fargate, Lambda)
- EC2 Instance Savings Plans: Up to 72% off, but locked to an instance family in a specific region
- Database Savings Plans: 12-35% savings on Aurora, RDS, DynamoDB, and other database services (announced December 2025)
AWS recommends Savings Plans over Reserved Instances for most use cases. The savings difference is marginal (at most ~3%) while Savings Plans offer significantly more flexibility. The assessment models different commitment levels against your actual usage to find the right balance between savings and flexibility.
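Under the hood, that commitment modeling can be expressed in a few lines. This is a simplification (real Savings Plans commit a discounted $/hour, and discounts vary by family and term), with an assumed 30% blended discount and a toy usage profile:

```python
# Sketch of the commitment modeling step: committed spend is billed at
# a discount whether used or not; usage above it pays On-Demand rates.
# The 30% discount is an assumed blended rate, not a quoted AWS price.
def blended_cost(hourly_usage: list[float], commitment: float,
                 discount: float = 0.30) -> float:
    """hourly_usage: On-Demand-equivalent $/hour per sample hour."""
    total = 0.0
    for usage in hourly_usage:
        total += commitment * (1 - discount)   # commitment always billed
        total += max(0.0, usage - commitment)  # On-Demand overflow
    return total

usage = [30, 40, 50, 60, 50, 40]       # toy On-Demand $/hour profile
on_demand = sum(usage)                 # what pure On-Demand would cost
committed = blended_cost(usage, 40.0)  # commit near the usage floor
savings_rate = 1 - committed / on_demand
```

Committing near the usage floor rather than the peak is the key move: the discount applies to spend that is always there, while spiky overflow stays On-Demand instead of becoming unused commitment.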
Architectural Changes (Month 2-6)
These take longer but deliver compounding savings:
- Rightsizing: 25-40% of compute costs when moving from over-provisioned to properly sized resources
- Graviton migration: 20-40% additional savings on EC2, plus ~20% lower cost on Fargate and up to 34% better price-performance on Lambda
- S3 Intelligent-Tiering: 24-60% savings on storage depending on access patterns, with no retrieval charges
The compounding effect matters: rightsizing first, then applying Savings Plans to the right-sized instances, then migrating to Graviton stacks the discounts.
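One subtlety worth spelling out: stacked discounts compound multiplicatively, each applying to the already-reduced cost, so three optimizations in the 20-30% range land around 60% total, not 80%. The individual rates below are illustrative midpoints from the ranges above:

```python
# Sketch: sequential optimizations compound multiplicatively.
def stacked_cost(base: float, discounts: list[float]) -> float:
    """Apply each discount to the already-reduced cost."""
    for d in discounts:
        base *= (1 - d)
    return base

# $10K/month of compute: rightsize (30%), then Savings Plans on the
# right-sized footprint (30%), then Graviton (20%):
final = stacked_cost(10_000, [0.30, 0.30, 0.20])
total_reduction = 1 - final / 10_000  # ~61%, not the naive 80%
```

The sequencing matters for the same reason: rightsizing first means you buy commitments against the smaller footprint instead of locking in spend you were about to eliminate.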
| Strategy | Typical Savings | Implementation Effort | Timeline |
|---|---|---|---|
| gp2-to-gp3 EBS migration | 8-20% on storage | Low | Week 1 |
| Idle resource cleanup | $1,000-$5,000/month | Low | Week 1 |
| Non-production scheduling | Up to 75% on those envs | Low-Medium | Week 1-2 |
| Savings Plans | Up to 66-72% vs On-Demand | Low | Month 1-2 |
| Rightsizing | 25-40% of compute | Medium | Month 1-3 |
| Graviton migration | 20-40% on top of rightsizing | Medium-High | Month 2-6 |
| S3 Intelligent-Tiering | 24-60% on storage | Low | Month 1 |
Those are the categories. Let me show you what this looks like for a real startup.
Real Numbers: A Startup Assessment in Practice
Here's a representative example of what a typical engagement looks like for a Series B SaaS company. The details are composited from real assessments, but the pattern is one I've seen repeatedly.
The situation: $45K/month AWS bill that had doubled in 6 months after a product launch. Team of 30 engineers, microservices on EKS, PostgreSQL on RDS, heavy use of S3 for file storage. No one on the team had looked at the bill beyond the monthly total.
What the assessment found:
- 3 idle RDS instances from a failed migration, still running in staging: $2,800/month
- Over-provisioned EKS cluster with 60% excess node capacity and no cluster autoscaler: $6,200/month recoverable through rightsizing
- Zero Savings Plans on $28K/month of predictable compute: $8,400/month in missed discounts (assuming a conservative 30% effective discount)
- 47 gp2 EBS volumes that could be migrated to gp3: $1,200/month
- Dev and staging environments running 24/7 with no scheduling: $3,800/month (assuming 12/5 scheduling)
Total identified savings: ~$22,400/month ($268,800/year), roughly 50% of the total bill.
Quick wins implemented in week 1: Idle RDS termination + gp2-to-gp3 migration = $4,000/month recovered immediately. That's nearly $48K/year from changes that took less than a day to implement.
For this team, the savings translated directly to extended runway. At a $45K/month AWS bill, cutting $22K/month freed up $264K/year, roughly equivalent to one senior engineering hire. That's the kind of number that gets brought up in board meetings.
Startups at Series A through C typically achieve 30-60% savings through a structured assessment, with quick wins recoverable immediately and the full savings materializing over 3-6 months.
Can You Do This Yourself?
Honest answer: it depends on your situation.
For simple setups under $5K/month with a handful of services, DIY is often sufficient. AWS gives you good free tools. Cost Explorer is free (the web interface, at least). Compute Optimizer is free for the default 14-day lookback. Cost Optimization Hub is free and consolidates 18 recommendation types across accounts. If you have a Business or Enterprise support plan, Trusted Advisor gives you full cost optimization checks.
Where DIY falls short is on time and context. A thorough self-assessment takes 40-80 engineering hours, time your team probably doesn't have. A professional cost optimization service combines tooling with experience to deliver cross-service analysis in a fraction of that time. And the tools work per-service. They don't give you a holistic view that connects your compute rightsizing to your commitment strategy to your architectural patterns.
The industry benchmark data on commitment coverage tells the story: organizations under $500K in annual compute spend had a 0% median Effective Savings Rate. Not because the tools aren't available, but because nobody has time to use them systematically.
If you want to try the DIY path first, our AWS cost optimization checklist gives you a structured starting point.
| Dimension | DIY (AWS Native Tools) | Professional Assessment |
|---|---|---|
| Time investment | 40-80 engineering hours | 1-2 hours of your time |
| Scope | Per-service, per-tool | Cross-service, holistic |
| Commitment strategy | Coverage reports only | Strategy with modeling |
| Architectural review | Not included | Included |
| Savings quantification | Estimates per tool | Unified, deduplicated |
| Implementation support | Self-directed | Prioritized roadmap |
| Best for | Simple setups, <$5K/month | Complex architectures, $15K+/month |
What to Prepare Before Your Assessment
If you've decided an assessment makes sense, here's what to have ready. The goal is to minimize your time investment and maximize the quality of findings.
- AWS account access: A read-only IAM role. If you're running a multi-account setup, cross-account access to the management account and key workload accounts. I provide a CloudFormation template for this.
- Cost Explorer enabled: It's free, but it needs to be turned on. If it's not already active, enable it now so historical data starts accumulating.
- Architecture context: Which environments exist (production, staging, QA, dev), what's running where, and which workloads are business-critical. A quick architecture diagram or even a Slack message with the basics is enough.
- Business context: Growth plans, peak usage patterns (seasonal? event-driven?), upcoming launches, and any planned migrations. This shapes the commitment strategy recommendations.
- Current commitments: Any existing Reserved Instances or Savings Plans, including terms and expiration dates.
None of this requires deep preparation. Most clients pull it together in 30 minutes. The key is giving enough context so the assessment findings are actionable, not generic.
What Happens After the Assessment
The assessment report is a starting point, not the finish line. Here's what comes next.
Implementation Support
Not every recommendation needs the same approach:
- Quick wins (gp2-to-gp3, idle cleanup, scheduling): Often implementable in the same week with low risk. Your team can handle these independently, or I can implement them as part of a follow-up engagement.
- Commitment purchases: Need business approval and some modeling. Typically month 1-2. The assessment provides the analysis; your finance or engineering lead makes the call.
- Architectural changes (rightsizing, Graviton migration, serverless): Require engineering planning and sprint allocation. The roadmap sequences these so you're not trying to do everything at once.
For context on the architectural side, multi-account architecture for cost savings is a common post-assessment recommendation for startups still running everything in a single account.
Ongoing Cost Governance
The savings from an assessment erode if you don't put guardrails in place. Here's the governance stack I recommend:
- AWS Budgets with automated actions: Set budget thresholds with alerts at 80% and automated responses at 90%. Budget Actions can apply IAM policies or SCPs to prevent new resource provisioning when thresholds are exceeded.
- Cost Anomaly Detection for early warning: ML-based monitoring that runs roughly three times per day, flagging unusual spend patterns before they become bill shock.
- Tagging enforcement via Service Control Policies: Require tags (Environment, Owner, CostCenter) on every resource. Without tagging, cost accountability is impossible.
- Monthly review cadence: Day 1 of each month, send a dashboard snapshot to workload owners showing prior-month variance and top 3 cost drivers. Mid-month, run an anomaly standup if Cost Anomaly Detection flags anything. Month-end, review commitment coverage and utilization.
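Two of the governance items above can be sketched in code: the budget-with-alerts request, in the shape the AWS Budgets API (boto3's `create_budget`) expects, and a tag-enforcement SCP using AWS's documented deny-if-tag-missing pattern. Field names and condition keys should be verified against current AWS documentation; the email address is a placeholder, and the SCP covers only EC2 launches, so other services need their own statements:

```python
import json

def monthly_cost_budget(name: str, limit_usd: float,
                        alert_thresholds: list[float]) -> dict:
    """Budget request body with one ACTUAL-spend alert per threshold
    (percent of the limit). Shape approximates the Budgets API."""
    return {
        "Budget": {
            "BudgetName": name,
            "BudgetLimit": {"Amount": str(limit_usd), "Unit": "USD"},
            "TimeUnit": "MONTHLY",
            "BudgetType": "COST",
        },
        "NotificationsWithSubscribers": [
            {
                "Notification": {
                    "NotificationType": "ACTUAL",
                    "ComparisonOperator": "GREATER_THAN",
                    "Threshold": pct,
                    "ThresholdType": "PERCENTAGE",
                },
                "Subscribers": [{"SubscriptionType": "EMAIL",
                                 "Address": "finops@example.com"}],
            }
            for pct in alert_thresholds
        ],
    }

def tag_enforcement_scp(required_tags: list[str]) -> dict:
    """Deny EC2 launches that omit any required tag, using the `Null`
    condition on aws:RequestTag (true means the tag is absent)."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": f"Require{tag}Tag",
                "Effect": "Deny",
                "Action": "ec2:RunInstances",
                "Resource": "arn:aws:ec2:*:*:instance/*",
                "Condition": {"Null": {f"aws:RequestTag/{tag}": "true"}},
            }
            for tag in required_tags
        ],
    }

budget = monthly_cost_budget("aws-monthly", 50_000, [80, 90])
scp = json.dumps(tag_enforcement_scp(["Environment", "Owner", "CostCenter"]))
```

Generating both from configuration like this keeps the thresholds and required-tag list in one reviewable place instead of scattered across console clicks.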
This follows the FinOps lifecycle: Inform (gain visibility), Optimize (act on findings), and Operate (maintain governance). The assessment handles the first two. Governance keeps the third running on autopilot.
If your environment also needs a security review alongside cost optimization, our security review process follows a similar structured approach.
The Bottom Line
An AWS cost optimization assessment is a 2-3 week engagement with minimal time investment from your side. You get a prioritized, quantified roadmap with specific dollar figures, not generic advice.
Startups at Series A through C typically find 30-60% savings, with quick wins recoverable in week one. The assessment pays for itself within the first month of implemented savings, and it's the kind of thing that shifts your AWS bill from a line item nobody understands to a number your team actively manages.
Having cost governance in place after the assessment is what keeps the savings permanent. Without it, the same patterns (over-provisioning, idle resources, missing commitments) creep back in within 6 months.
If your AWS bill has doubled and you want clarity on where the money is going and what to do about it, a discovery call is the first step. And if you want to understand the broader optimization framework before committing, start with our cost optimization best practices guide.
Get Clarity on Your AWS Spend
I run a structured cost optimization assessment on your AWS environment and deliver a prioritized savings roadmap with specific dollar figures. Most startups find 30-60% savings, starting with quick wins in week one.