
Salesforce Technical Debt: How We Audit an Org in a Week

Reis Warman·November 5, 2025·7 min read

Most orgs we walk into have the same six problems. Here is the five-day audit process we run for clients, the rubric we use to score what we find, and what the results usually tell us about where to spend the next two quarters.

We do a lot of technical debt audits. They are our favorite engagement type, because they are short, concrete, and the client always walks away with a plan they did not have when they hired us.

The thing about Salesforce technical debt is that most orgs have the same problems: fields added by someone who left four years ago, flows that overlap with triggers that overlap with process builders, automation that fires on record creation and then fires again on update because nobody realized it would, hard-coded IDs, test classes that exist only to hit coverage. You would think every org is unique, and at a detail level each one is, but the patterns repeat.

Here is the five-day framework we run, for anyone who wants to audit their own org or have a better conversation with a consultancy that is doing it for them.

Day one: data model

We start here because the data model constrains everything else. If the Account object has 340 custom fields and no one knows which ones are in use, every automation layered on top is going to be messier than it needs to be.

What we look at:

  • Field counts per object, and the ratio of used to unused fields.
  • Field history tracking on things that do not need it, and missing on things that do.
  • Object relationships. Master-detail vs. lookup choices made for sharing reasons that no longer apply.
  • Record types that duplicate what page layouts should do, and record types that should exist but do not.
  • Naming conventions. An object with half its fields in CamelCase and half in snake_case is not the end of the world, but it tells you the team did not have shared standards.

A quick rule we use: if more than 30 percent of custom fields on a major object have not been written to in 90 days, the data model has drifted and needs a prune before anything else gets built on it.

Day two: automation

This is where most debt accumulates. Every "just add a quick flow" is a reasonable decision in isolation and a disaster in aggregate.

We inventory:

  • Every active flow, process builder, workflow rule, and Apex trigger, grouped by object.
  • Overlap: which automations fire on the same object and event, and in what order. This is where bugs hide.
  • Scheduled automations that reference criteria nobody can articulate.
  • Error volume. If there are 40,000 flow errors in the last 30 days and nobody is watching the queue, that is a finding all by itself.
  • Automation that duplicates what a single well-designed handler class would do in a quarter of the code.

Our rule of thumb: on a heavily customized object, there should be one automation framework per event (either flow or Apex handler, not both), and the number of automations should be countable on two hands.
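The overlap check reduces to grouping the inventory by object and event and flagging any pair where both a flow and an Apex handler fire. A sketch, assuming a hand-built inventory (the tuple shape and names are illustrative, not an API):

```python
from collections import defaultdict

# Illustrative inventory: (object, event, kind, name) for each active automation.
inventory = [
    ("Account", "before update", "flow", "Account_Before_Save"),
    ("Account", "before update", "apex", "AccountTriggerHandler"),
    ("Account", "after insert", "flow", "Account_Created"),
]

def overlaps(inv):
    """Return (object, event) pairs where both flow and Apex automation fire."""
    by_key = defaultdict(set)
    for obj, event, kind, _name in inv:
        by_key[(obj, event)].add(kind)
    return sorted(k for k, kinds in by_key.items() if {"flow", "apex"} <= kinds)

for obj, event in overlaps(inventory):
    print(f"{obj} / {event}: mixed flow + Apex, pick one framework")
```

The grouping also gives you the per-object automation count for free, which is how we check the two-hands rule.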

Day three: Apex and LWC

Code tells a story about the team that wrote it. We read enough of it to understand that story.

What we check:

  • Test coverage by class, not org-wide. 75 percent overall can hide the critical class at 0.
  • Assertion density. A test with no assertions is not a test, and we find plenty.
  • Cyclomatic complexity of the top ten largest classes. One method over 200 lines is almost always worth rewriting.
  • Anti-patterns: SOQL in loops, hard-coded IDs, System.debug left in production, email alerts fired from triggers.
  • LWC quality: prop drilling, event handling, standalone vs. page components, SLDS hygiene.
  • Governor limit headroom. If the org is hitting 80 percent of any per-transaction limit on normal workload, a spike will break it.
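The per-class coverage check is the simplest of these to script. Per-class numbers come from the Tooling API's ApexCodeCoverageAggregate object; the export shape and class names below are illustrative:

```python
# Illustrative export: class name -> (covered lines, total lines).
coverage = {
    "OrderService": (180, 240),
    "InvoiceTriggerHandler": (0, 95),
    "StringUtils": (40, 40),
}

def below_threshold(cov, threshold=0.75):
    """Classes under the coverage threshold, worst first."""
    flagged = []
    for cls, (covered, total) in cov.items():
        pct = covered / total if total else 1.0
        if pct < threshold:
            flagged.append((cls, pct))
    return sorted(flagged, key=lambda t: t[1])

for cls, pct in below_threshold(coverage):
    print(f"{cls}: {pct:.0%} covered")
```

Note that the org above would pass the 75 percent org-wide bar while shipping a trigger handler with zero coverage, which is exactly the failure mode the per-class view catches.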

We use a small suite of internal tools plus the Salesforce Code Analyzer and PMD rulesets. Nothing exotic. The skill is in reading the output, not generating it.

Day four: integrations and security

Most orgs have three or four integrations and no documentation on any of them. This is the day where surprises show up.

We map:

  • Every named credential, connected app, and external system.
  • API consumption: which integrations use the most API calls, and whether the org is at risk of hitting the 24-hour limit.
  • Authentication patterns. Long-lived passwords in named credentials, OAuth flows that should be migrated, service accounts with unknown ownership.
  • MuleSoft or similar middleware: the transformations, the error handling, the monitoring.
  • Security baseline: profiles and permission sets, field-level security, sharing model, health check score, and any Setup audit trail worth flagging.
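For the API consumption question, a small script over daily call counts per connected app (pulled from event monitoring or the API usage report) answers both "who is the heavy user" and "how much headroom is left." The app names and the daily limit here are made up for illustration; substitute your org's actual allocation:

```python
DAILY_LIMIT = 100_000  # assumed org allocation, check your own

# Illustrative export: connected app -> API calls in the last 24 hours.
calls = {
    "MuleSoft-Orders": 61_000,
    "Marketing-Sync": 22_000,
    "Legacy-ETL": 9_500,
}

def usage_report(calls_by_app, limit):
    """Total calls, remaining headroom, and each app's share of the total."""
    total = sum(calls_by_app.values())
    headroom = 1 - total / limit
    shares = {app: n / total for app, n in calls_by_app.items()}
    return total, headroom, shares

total, headroom, shares = usage_report(calls, DAILY_LIMIT)
print(f"total {total} of {DAILY_LIMIT}, {headroom:.0%} headroom")
```

An org running under 10 percent headroom on a normal day, as above, is one bulk job away from every integration failing at once.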

The integration and security layer is also where compliance risk hides. We flag anything that would be uncomfortable in a SOC 2 or HIPAA audit, even when the client is not asking about compliance, because it almost always comes up later.

Day five: synthesis

The last day is writing. We produce three deliverables.

A scored rubric. Ten categories, each scored one to five. Data model, automation, code quality, test coverage, security, integration health, deployment process, admin operability, documentation, and performance. A radar chart with the scores on it is useful because it turns a 60-page document into a thirty-second conversation.

A prioritized backlog. Usually 20 to 40 findings, ranked by cost to fix vs. value of fixing. We are explicit about which items are P0 (ship this quarter), P1 (next two quarters), and P2 (watch, fix opportunistically). The prioritization matters more than the list.
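The cost-vs-value ranking can be sketched as a ratio sort with tier cutoffs. The findings, the 1-to-5 scores, and the P0/P1/P2 thresholds below are all illustrative; the point is that the ranking is mechanical once the team agrees on the scores:

```python
# Illustrative findings: (name, value of fixing 1-5, cost to fix 1-5).
findings = [
    ("Consolidate Account automation", 5, 3),
    ("Prune stale fields", 4, 2),
    ("Document MuleSoft flows", 2, 2),
]

def prioritize(items):
    """Rank by value/cost ratio and bucket into P0/P1/P2 tiers."""
    def tier(ratio):
        return "P0" if ratio >= 2 else "P1" if ratio >= 1 else "P2"
    ranked = sorted(items, key=lambda f: f[1] / f[2], reverse=True)
    return [(tier(v / c), name) for name, v, c in ranked]

for t, name in prioritize(findings):
    print(t, name)
```

The scores are the argument, not the script; we spend far more time debating a finding's value number with the client than running the sort.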

A one-page executive summary. Two paragraphs of findings, a "here is where we are strong" section, a "here is what is at risk" section, and the top three recommendations. This is the document that gets forwarded to a CIO. The rubric is the document that gets read by the architect.

What we always find

Six patterns, more or less universal across orgs above a few hundred users.

  1. Duplicate automation that nobody has pruned because removing it feels risky.
  2. Flows that do what a trigger handler would do in half the lines, and triggers that do what a flow would do with less overhead.
  3. Apex test classes that exist only for coverage, with no assertions, written in a hurry before go-live.
  4. Fields on core objects that have not been written to in over a year but are still included in page layouts, reports, and flows.
  5. Sandbox sprawl. Four developer sandboxes, two partial copies, a full copy that has not been refreshed in eighteen months.
  6. A release process that relies on one person's tribal knowledge and a change set.

None of these are catastrophic in isolation. Together they slow a team down, and they compound. A team shipping against an org with all six is spending a meaningful part of every sprint fighting drag.

What it costs to ignore

Two costs, one loud and one quiet.

The loud cost is the eventual incident. An automation conflict that corrupts data. An integration that silently drops records. A performance issue in production that forces an emergency architecture change. These are rare but expensive when they happen, and they are almost always traceable to debt that had been accumulating for months.

The quiet cost is velocity. A team shipping against a clean org is two to three times faster than the same team shipping against a messy one. Over a year, that is the difference between shipping the roadmap and shipping half of it.

You do not have to fix everything

The point of the audit is not to produce a list that overwhelms the team. It is to produce a small number of high-leverage fixes that can be executed in the next two quarters without derailing roadmap work.

Usually that is three to five initiatives: a data model prune, an automation consolidation, a test coverage push on the highest-risk classes, a CI/CD upgrade, and a documentation pass. Each is a few-week project. Each returns months of velocity.

If you are staring at an org you inherited and you have that uncomfortable feeling that something is off, this is the week of work to run. Either we can help, or you can run the playbook yourself. Either way, seeing it clearly is most of the fight.



Need help putting this into practice?

We build and run Salesforce for enterprise teams. If anything in this post raised a question about your own org, we'd be happy to talk.

Get in Touch