
SOC 2 Common Audit Findings: The 8 Issues We See in Every Audit

The eight most common SOC 2 audit findings that delay certification—with exact remediation steps so you can fix them before your auditor flags them.

PlatOps Team
Published: April 7, 2026

SOC 2 Type II audits take 6 to 12 months. The companies that finish in 6 months spent time fixing control gaps before the audit window opened. The ones that take 12 months discover those gaps during fieldwork—and then scramble to remediate while the clock is running.

After running dozens of SOC 2 readiness engagements, we see the same eight findings in nearly every environment. Not because the engineering teams are careless. Because these particular controls are operationally easy to neglect, and auditors know exactly where to look for them.

This post covers all eight—what auditors look for, why each control commonly fails, and the precise steps to fix each one before your auditor arrives.


What SOC 2 Auditors Are Actually Checking

Before getting to the eight findings, it helps to understand the audit methodology. SOC 2 auditors are not scanning your systems. They are sampling evidence. For each control, they will ask you for documentation proving the control operated effectively over the audit period—typically the preceding 6 to 12 months.

"We have a policy" is not sufficient evidence. Neither is "we do this informally." Auditors need dated, attributed records: tickets, screenshots, logs, signed documents, meeting minutes. A control that exists but lacks evidence is treated the same as a control that doesn't exist.

That framing matters for every finding below.


Finding 1: Missing or Outdated Risk Assessment

What auditors look for: A formal, documented risk assessment—identifying threats to confidentiality, availability, and integrity—completed within the last 12 months and signed off by management.

Why this fails: Risk assessments get done once to satisfy a previous audit, then drift. The original document references a technology stack that's changed, vendors that were replaced, and team structures that no longer exist. When an auditor asks "how recently was this updated and by whom," the answer is usually "a year or more ago," and it's not clear who owns the document.

More common: there's no risk assessment at all. The company has security practices but never formalized the risk identification process that those practices are supposed to address.

The fix:

  1. Schedule a risk assessment session with at least two stakeholders from engineering, operations, and leadership. 90 minutes is enough for a focused session.
  2. Document threats in each TSC category you're being audited against (Security, Availability, Confidentiality, Processing Integrity, Privacy). For most companies this means identifying 15–30 risks.
  3. For each risk: document likelihood, potential impact, current mitigating controls, and residual risk rating.
  4. Record who attended, the date, and who reviewed and approved the output.
  5. Set a calendar reminder to repeat this annually and document that review cycle in your information security policy.

The format matters less than the evidence that it happened and that someone with authority signed off on it. A well-structured spreadsheet with signatures is more defensible than a polished risk management platform with no dated entries.
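If you keep the register in a spreadsheet or a lightweight script, the residual-risk rating from step 3 can be computed mechanically. This is a minimal sketch: the 1–5 scales, the likelihood-times-impact formula, and the high/medium/low thresholds are illustrative conventions of our own, not anything SOC 2 prescribes.

```python
# Minimal risk-register sketch. The 1-5 scoring scale and the
# likelihood x impact residual formula are illustrative conventions,
# not a SOC 2 requirement -- use whatever scheme your team adopts.
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    category: str      # TSC category, e.g. "Security", "Confidentiality"
    likelihood: int    # 1 (rare) to 5 (almost certain)
    impact: int        # 1 (negligible) to 5 (severe)
    mitigations: str

    @property
    def residual_score(self) -> int:
        # Assumes mitigations are already reflected in the
        # likelihood/impact values you assign.
        return self.likelihood * self.impact

    @property
    def rating(self) -> str:
        s = self.residual_score
        return "high" if s >= 15 else "medium" if s >= 8 else "low"

risk = Risk(
    description="Unencrypted backup snapshot exposed via misconfigured access",
    category="Confidentiality",
    likelihood=2,
    impact=5,
    mitigations="KMS encryption on snapshots; quarterly access reviews",
)
print(risk.residual_score, risk.rating)  # 10 medium
```

Whatever scoring scheme you pick, the point is consistency: every risk in the register gets the same fields, and the rating logic is written down where the auditor can see it.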


Finding 2: Access Reviews Not Performed Quarterly

What auditors look for: Evidence that user access across critical systems—cloud environments, production databases, SaaS tools, VPN—is reviewed and recertified by an appropriate manager on a defined schedule. For SOC 2, quarterly is the standard that satisfies most auditors.

Why this fails: Access reviews get planned and then deprioritized. Or they happen once before the audit, which auditors immediately identify as point-in-time activity rather than an ongoing control. When an auditor asks for four quarters of evidence and you have one cycle, the control fails.

The second failure mode: access reviews are performed but not documented. A manager verbally confirms that her team's access looks correct. Nothing gets written down. No evidence, no credit.

The fix:

  1. Define a quarterly schedule (January, April, July, October are clean markers).
  2. For each review cycle, export a current user list from each in-scope system.
  3. Send that list to the system owner or team manager with explicit instructions: confirm each user still requires that level of access, mark any accounts for removal or downgrade.
  4. Document the response—a reply email, a ticket, a signed spreadsheet. Any dated artifact showing the review occurred and who approved it.
  5. Execute the removals within 5 business days of the review and document the ticket or change record.
  6. Store all of this in a dedicated evidence folder: access-reviews/2026-Q1/, access-reviews/2026-Q2/, and so on.

For teams with 10+ systems, access review tooling (Vanta, Drata, Secureframe) automates most of this. For smaller teams, a disciplined manual process with consistent documentation works fine.
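The folder convention from step 6 and the "did we miss a cycle" check can be scripted. A quick sketch, assuming the access-reviews/YYYY-QN/ naming above; the 98-day staleness threshold (one quarter plus a week of grace) is our own assumption:

```python
# Derive the evidence-folder path for a review date and flag systems
# whose most recent review is overdue. The 98-day threshold (one
# quarter plus a one-week grace period) is an assumption, not a rule.
from datetime import date

def review_folder(d: date) -> str:
    quarter = (d.month - 1) // 3 + 1
    return f"access-reviews/{d.year}-Q{quarter}/"

def overdue(last_review: date, today: date, max_days: int = 98) -> bool:
    return (today - last_review).days > max_days

print(review_folder(date(2026, 4, 15)))              # access-reviews/2026-Q2/
print(overdue(date(2025, 11, 1), date(2026, 4, 7)))  # True -- cycle was missed
```

Run something like this against the latest review date per system each month, and a skipped quarter surfaces while it can still be fixed, not during fieldwork.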


Finding 3: No Formal Change Management Process

What auditors look for: Evidence that changes to production systems—code deployments, infrastructure modifications, configuration changes—go through a defined approval and testing process before reaching production.

Why this fails: Most engineering teams have informal change processes. Engineers review each other's pull requests. Deployments go through CI/CD. But there's no written policy defining what requires approval, who can approve it, and what testing gates must pass. When an auditor asks "show me your change management policy and three examples of it being followed," the policy doesn't exist and the examples can't be tied to a documented process.

The second issue: emergency changes. A midnight incident fix gets pushed directly to production without review. That happens, and auditors understand it. But there needs to be a documented process for retroactively reviewing emergency changes within 24–48 hours, or those become exceptions that accumulate against the control.

The fix:

  1. Write a change management policy that covers: what constitutes a change, who can approve it (separation of duties—don't let developers approve their own production deployments), required testing gates, and the emergency change procedure.
  2. In your ticketing system (Jira, Linear, GitHub Issues), create a "production deployment" ticket type that captures: change description, testing performed, approver, and deployment timestamp.
  3. Configure your CI/CD pipeline to require a passing build, required reviewers on the PR, and—if you have it—a linked ticket ID before merging to main.
  4. For the 12 months leading into your audit, you need a consistent trail of these tickets. Start this immediately—auditors pull a sample of deployments and trace them back to change records.
  5. Document emergency changes in a separate log with a post-incident review note attached within 48 hours.

A change management process doesn't need to be bureaucratic. It needs to be consistent, documented, and tied to actual evidence that it was followed.
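You can rehearse the trace auditors perform in step 4 yourself: sample deployments and verify each one maps to a change ticket whose approver is not the author. A sketch under assumed data shapes—the field names and ticket IDs here are illustrative, not tied to any particular ticketing system:

```python
# Rehearse the auditor's trace: each sampled deployment must link to a
# change ticket, and the approver must differ from the author
# (separation of duties). All field names and IDs are illustrative.
deployments = [
    {"sha": "a1b2c3d", "author": "dev1", "ticket": "OPS-101"},
    {"sha": "e4f5a6b", "author": "dev2", "ticket": None},  # gap: no ticket
]
tickets = {"OPS-101": {"approver": "lead1", "testing": "staging suite passed"}}

def trace(deploy: dict) -> list[str]:
    issues = []
    ticket = tickets.get(deploy["ticket"] or "")
    if ticket is None:
        issues.append("no linked change ticket")
    elif ticket["approver"] == deploy["author"]:
        issues.append("author approved own change")
    return issues

for d in deployments:
    print(d["sha"], trace(d) or "OK")
```

Running a check like this monthly against your real deployment log means the 12-month evidence trail in step 4 never develops silent gaps.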


Finding 4: Missing Encryption at Rest for Databases

What auditors look for: Confirmation that all databases and storage containing customer or sensitive data are encrypted at rest. This includes primary databases, read replicas, automated backups, and snapshots.

Why this fails: Encryption at rest is often enabled on the primary instance but missed on backups, read replicas, or secondary databases that were provisioned separately. In AWS, RDS encryption must be enabled at creation time—you cannot enable it on a running unencrypted instance without taking a snapshot, creating an encrypted copy, and restoring from it. Teams that provisioned databases pre-SOC 2 frequently have unencrypted instances that require a migration to fix.

S3 buckets are another common gap. AWS began applying SSE-S3 default encryption to all new objects in January 2023, but buckets created before that change may have had no default encryption configured at all, and SSE-S3 alone gives you no customer-managed key control. Auditors also check that you can demonstrate you know what's encrypted and how.

The fix:

  1. Inventory all databases and storage in scope: RDS, Aurora, DynamoDB, ElastiCache, S3, EFS, EBS volumes attached to database servers.
  2. Verify encryption status for each. In AWS: RDS console → storage → "Storage encrypted" column. S3 → Properties → "Default encryption."
  3. For unencrypted RDS instances: take a snapshot → copy snapshot with encryption enabled (select KMS key) → restore from encrypted snapshot. This requires a maintenance window but can be done with minimal downtime using AWS DMS for continuous replication during the switch.
  4. For S3: enable default encryption on all buckets. Enforce it with a bucket policy denying s3:PutObject requests without server-side encryption headers.
  5. Document the encryption configuration for each system in your asset inventory: which systems are encrypted, which KMS key is used, and when the configuration was verified.
  6. Set up AWS Config rules rds-storage-encrypted and s3-bucket-server-side-encryption-enabled to alert on any new unencrypted resources.

The fix is straightforward for new resources. For existing unencrypted instances, budget a maintenance window and test the migration in a non-production environment first.
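The bucket policy from step 4 is a standard deny pattern. A sketch that generates it—the bucket name is a placeholder, and whether you require "aws:kms" or accept "AES256" depends on your key-management strategy:

```python
# Generate the step-4 bucket policy: deny s3:PutObject requests that
# don't declare SSE-KMS via the x-amz-server-side-encryption header.
# Bucket name is a placeholder; swap "aws:kms" for "AES256" if SSE-S3
# is acceptable under your key-management strategy.
import json

def deny_unencrypted_put(bucket: str) -> str:
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyUnencryptedObjectUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption": "aws:kms"
                }
            },
        }],
    }
    return json.dumps(policy, indent=2)

print(deny_unencrypted_put("example-customer-data"))
```

Saving the generated policy alongside the date it was applied doubles as the configuration evidence step 5 asks for.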


Finding 5: Incident Response Plan Exists but Has Never Been Tested

What auditors look for: An incident response plan and documented evidence that it has been tested—via tabletop exercise, simulation, or actual incident—within the audit period.

Why this fails: Every company at this stage has an incident response plan. Usually written by a consultant or assembled from a template. It's in a Google Doc or Confluence page. The people responsible for executing it have never read it in full, the escalation contacts haven't been verified in 18 months, and it has never been walked through from start to finish.

Auditors ask two questions: show me your IRP, and show me evidence it was tested this year. The first answer is easy. The second surfaces the problem.

The fix:

  1. Schedule a tabletop exercise. This requires 90 minutes and the right people in the room: engineering lead, operations lead, a member of leadership, and your security contact.
  2. Pick a realistic scenario—ransomware affecting a production database, unauthorized access to customer data, a third-party vendor breach—and walk through your IRP step by step. Who declares an incident? Who leads response? Who communicates to customers?
  3. Document the exercise: date, attendees, scenario used, gaps identified, and any updates made to the plan afterward.
  4. Update the IRP based on what you found. Stale escalation contacts, missing steps, unclear ownership—these should be corrected and the document re-versioned.
  5. Do this annually. The first tabletop usually surfaces 5–10 things worth fixing. Subsequent ones become faster.

Bonus: if you have an actual incident during your audit period and responded to it, that's also valid evidence—provided you documented the incident timeline, the response actions, and the post-incident review. Auditors prefer this over tabletops because it demonstrates the plan works under real conditions.


Finding 6: Vendor Management — No Third-Party Risk Assessments

What auditors look for: A vendor management process that identifies third parties with access to customer data or critical systems, assesses their security posture, and maintains evidence of due diligence. For SOC 2, this means annual review of key vendors' security documentation (SOC 2 reports, security questionnaires, ISO 27001 certificates).

Why this fails: Companies know who their critical vendors are—AWS, Stripe, Salesforce, their payroll provider—but haven't documented a formal assessment process. When an auditor asks "how do you assess third-party risk and when was your last vendor review," the answer is usually "we check that they're SOC 2 compliant before we sign up" with no ongoing review and no written record.

The specific gap: collecting a vendor's SOC 2 report once during onboarding and never following up when it expires.

The fix:

  1. Build a vendor inventory. List every third party with access to systems or data in scope: cloud providers, SaaS tools, subprocessors, contractors. Include the type of data they handle and their level of access.
  2. For each vendor, collect their current security documentation: SOC 2 Type II report, ISO 27001 certificate, or responses to a security questionnaire. Note the report date and expiration.
  3. Set calendar reminders to request updated documentation before each vendor's annual report cycle. Most SOC 2 reports expire after 12 months.
  4. Document the review: who reviewed it, date, any issues identified, and whether the vendor's controls are acceptable for the data they handle.
  5. For vendors that don't have SOC 2 or ISO 27001 certification, use a security questionnaire (CAIQ, SIG Lite, or a custom version) and review responses annually.

The artifact auditors want is a vendor register with assessment dates, documentation references, and evidence that someone reviewed them. This doesn't need to be complex—a well-maintained spreadsheet is sufficient.
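The reminder logic from step 3 is easy to script against that register. A sketch with illustrative vendor names and dates; the 365-day threshold matches the typical 12-month SOC 2 report cycle mentioned above:

```python
# Flag vendors in the register whose SOC 2 report is older than the
# typical 12-month report cycle. Vendor names and dates are
# illustrative placeholders.
from datetime import date

vendors = [
    {"name": "CloudHost", "soc2_report_date": date(2025, 6, 1)},
    {"name": "PayrollCo", "soc2_report_date": date(2024, 11, 15)},
]

def expired(report_date: date, today: date, max_age_days: int = 365) -> bool:
    return (today - report_date).days > max_age_days

today = date(2026, 4, 7)
stale = [v["name"] for v in vendors if expired(v["soc2_report_date"], today)]
print(stale)  # ['PayrollCo']
```

Whether this runs as a script or lives as a conditional-format column in the spreadsheet matters less than someone owning the follow-up when a vendor shows up stale.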


Finding 7: Monitoring and Alerting Gaps — No Evidence of Log Review

What auditors look for: Logging enabled on in-scope systems, centralized log collection, alerts configured for security-relevant events, and—critically—evidence that those alerts were reviewed and acted on.

Why this fails: Logging is usually configured. Alerts are often configured. What's missing is evidence that humans looked at them. An auditor asking "show me your security monitoring process" often gets shown a SIEM dashboard and a list of alert rules. The follow-up question—"show me examples of alerts that fired and how they were handled over the past six months"—is where this control breaks down.

The specific evidence gap: no tickets, no documented responses, no record of who reviewed what and when. If your on-call engineer acknowledges a Datadog alert by memory and the alert auto-resolves, that's invisible to an auditor.

The fix:

  1. Ensure logging is enabled on all in-scope systems and centralized in one location: CloudTrail, VPC Flow Logs, application logs, authentication logs, and database audit logs at minimum.
  2. Define a set of security-relevant alert categories: unauthorized access attempts, privilege escalation, unusual data access volumes, configuration changes to security controls, and failed authentication spikes.
  3. Configure alerts in your monitoring platform and route them to a ticket system or documented incident log. Every alert should create a record.
  4. For each alert, require documented disposition: acknowledged by whom, on what date, and what action was taken (investigated and closed, escalated, remediated).
  5. Conduct and document a monthly log review: pull summary metrics for the period, note anything unusual, record who reviewed it and when. A 30-minute monthly review with a written output satisfies this control.

The goal is a paper trail. An auditor sampling six months of your monitoring activity should find consistent records of alerts being reviewed and dispositioned. Systems that fire alerts into a void—even a very sophisticated void—don't satisfy the control.
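The step-4 requirement—every alert carries a documented disposition—is exactly the kind of thing to verify before the auditor samples it. A sketch with made-up field names, not tied to any particular SIEM or ticketing tool:

```python
# Find alerts from the period that lack a documented disposition
# (who acknowledged, when, what action). Field names are illustrative
# and not tied to any particular SIEM.
alerts = [
    {"id": "A-1", "fired": "2026-03-02", "disposition":
        {"by": "oncall1", "date": "2026-03-02", "action": "investigated, benign"}},
    {"id": "A-2", "fired": "2026-03-09", "disposition": None},  # evidence gap
]

def undispositioned(alerts: list[dict]) -> list[str]:
    return [a["id"] for a in alerts if not a["disposition"]]

print(undispositioned(alerts))  # ['A-2']
```

Folding a check like this into the monthly log review from step 5 means the written output of that review can state, with evidence, that no alert went unhandled.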


Finding 8: Employee Security Training Not Documented

What auditors look for: Evidence that all employees completed security awareness training within the audit period—typically annually at minimum. This includes a training completion log showing employee names, training dates, and the content covered.

Why this fails: Companies conduct security training but track completion informally. An all-hands Zoom session on phishing without attendance records. A security policy sent via email that employees are expected to read. A third-party training platform with completion data that nobody exported before the audit.

The second failure mode: new employees who onboarded during the audit period with no training on record. SOC 2 requires that training occurs within a defined timeframe of hire—typically 30 or 60 days—and auditors check hire dates against training records.

The fix:

  1. Use a platform that generates completion certificates: KnowBe4, Proofpoint Security Awareness, Hoxhunt, or even a simple LMS like TalentLMS. The requirement is a dated completion log attributed to named employees.
  2. Define a training schedule in your security policy: annual completion for all employees, completion within 30 days of hire for new employees.
  3. Export and archive completion reports quarterly. Store them in your evidence folder.
  4. Cover at minimum: phishing recognition, password management, acceptable use, data handling, and incident reporting. Document the curriculum so auditors can confirm it's substantive.
  5. For policies (AUP, data classification, clean desk, etc.), require annual signed acknowledgment. DocuSign, an HRIS with policy sign-off, or a timestamped Google Form all produce audit-ready evidence.

Bonus: phishing simulations are not required for SOC 2 but are increasingly expected. If you run them, document the simulation dates, click rates, and any follow-up training triggered. Auditors view this as evidence of an active security culture, not just a checkbox.
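Auditors check hire dates against training dates, so run that comparison yourself first. A sketch of the check, assuming the 30-day window from step 2; the employee data is illustrative:

```python
# Compare each hire date against the training completion date and
# flag anyone trained outside the 30-day window (or never trained).
# Employee records are illustrative placeholders.
from datetime import date

employees = [
    {"name": "A. Rivera", "hired": date(2026, 1, 5),  "trained": date(2026, 1, 20)},
    {"name": "B. Chen",   "hired": date(2026, 2, 10), "trained": date(2026, 3, 25)},
    {"name": "C. Okafor", "hired": date(2026, 3, 1),  "trained": None},
]

def out_of_window(e: dict, window_days: int = 30) -> bool:
    if e["trained"] is None:
        return True
    return (e["trained"] - e["hired"]).days > window_days

flagged = [e["name"] for e in employees if out_of_window(e)]
print(flagged)  # ['B. Chen', 'C. Okafor']
```

Run this against the quarterly completion export from step 3 joined with your HRIS hire dates, and the new-hire gap surfaces before the auditor finds it.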


How Long Does It Take to Fix These?

Assuming you're starting from zero on most of these, here's a realistic remediation timeline:

| Finding | Remediation Effort | Lead Time Before Audit |
| --- | --- | --- |
| Risk assessment | 4–8 hours | 2 weeks |
| Access reviews | 2–4 hours per cycle | 3 months (for one full cycle) |
| Change management | 1 week to implement, ongoing | 3–6 months (to build history) |
| Encryption at rest | 2–8 hours per system | 4 weeks |
| IR plan testing | 2–4 hours | 2 weeks |
| Vendor management | 4–8 hours initial, ongoing | 4 weeks |
| Monitoring evidence | 2 weeks to instrument, ongoing | 3 months |
| Security training | 1–2 weeks to deploy | 6 weeks |

The findings that need the most lead time are the ones that require building an evidence trail: access reviews, change management, and monitoring logs. You cannot retroactively create three months of quarterly access review records the week before your audit starts. Start those now, regardless of where you are in the process.


Running Your Own Readiness Check

Before engaging an auditor, walk through this list with the person who owns each control area:

  1. Can you produce a risk assessment completed in the last 12 months with a named approver and date?
  2. Can you show four consecutive quarters of access review records for your top five systems?
  3. Can you pull 20 random production deployments from the last six months and show a corresponding change ticket for each?
  4. Can you confirm encryption status for every database and storage system in scope—and produce the configuration proof?
  5. Can you show a tabletop exercise or incident response test from the last 12 months with attendees and documented output?
  6. Can you produce current SOC 2 reports or security questionnaires for your top 10 vendors?
  7. Can you show six months of security alert disposition records—not just alert configurations?
  8. Can you export a training completion report showing 100% of current employees completed security training within the required window?

If you answer "yes" to all eight with documentation in hand, you are ready for fieldwork. If more than two of those answers are "no" or "I'd need to check," schedule remediation time before you open an audit engagement.
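The yes/no tally above can be kept in a trivial script so the answer is recorded rather than remembered. The thresholds mirror the guidance in the paragraph; the answers dict is a placeholder:

```python
# Tally the eight readiness answers. Thresholds mirror the guidance
# above: zero "no" means ready; more than two means remediate first.
# The answers here are placeholders for your own.
answers = {1: "yes", 2: "yes", 3: "no", 4: "yes",
           5: "no", 6: "yes", 7: "no", 8: "yes"}

nos = sum(1 for v in answers.values() if v != "yes")
if nos == 0:
    verdict = "ready for fieldwork"
elif nos > 2:
    verdict = "schedule remediation before engaging an auditor"
else:
    verdict = "close the remaining gaps, then proceed"
print(nos, verdict)
```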


We cover all eight of these areas in our SOC 2 Compliance service, and we've published a detailed SOC 2 Readiness Checklist that maps every control gap to the specific TSC criterion it affects.

If you want a direct assessment of where your environment stands—what's remediated, what's missing, and what the realistic path to certification looks like—schedule a free security assessment. We'll review your current controls against the SOC 2 Trust Services Criteria and give you a prioritized remediation list before you commit to an audit timeline.

Tags: soc-2, compliance, audit-findings, security, soc-2-certification, soc-2-readiness

