This debate has been recycled for a decade, and most of the takes are bad. Here is how we actually decide between Apex and Flow on real projects, where each one wins, and the hybrid pattern that avoids the worst of both.
Apex vs. Flow has been the tired debate in the Salesforce ecosystem for ten years. "Clicks not code." "Code not clicks." Both camps have ideology, both have evidence, and both are frequently wrong about when their preferred tool is the right one.
On actual projects, the answer is almost never either/or. It is where the boundary sits, and how disciplined the team is about respecting it. Here is how we think about it.
The case each side keeps making
The Flow-first camp argues that automation belongs with the admins who understand the business, that Flow has caught up to Apex in power, and that code is a liability every time an admin cannot evolve the platform.
The Apex-first camp argues that Flow is slow, fragile, hard to test, difficult to version control, and that the worst production incidents they have seen in the last five years were caused by flows that were never meant to scale.
Both camps are correct about the other side's weak cases. Neither is correct that their tool is the answer to every question.
Where Flow wins cleanly
Flow is the right choice when all of the following are true.
- The logic is mostly field updates, record creations, or simple branching based on data already on the record.
- The rule is genuinely business logic that should be owned by admins or operations, not engineering.
- The workflow is isolated to one object or a small cluster of related objects.
- Volume is moderate. Not tens of thousands of records touching it in a single transaction.
Classic examples: a case escalation rule, a partner approval workflow, a status rollup on a parent record, a welcome email when a contact is created. Writing Apex for these is overkill. Flow handles them in thirty minutes, and the admin team can evolve them without pulling in a developer.
The other place Flow wins clearly is screen flows. For interactive wizards, step-by-step data capture, and guided processes, Flow is legitimately great. Building the same thing in LWC takes days. In Flow it takes a few hours, and the result is admin-maintainable.
Where Apex wins cleanly
Apex is the right choice when any of the following are true.
- The logic is algorithmic rather than procedural. Calculations with edge cases, recursion, complex branching, state machines.
- The operation needs to handle bulk at scale. Thousands of records per transaction, with governor limits to respect.
- The operation interacts with external systems in non-trivial ways. Callouts, structured parsing, error handling, retries.
- The logic needs real test coverage with asserted behavior, not just coverage percentage.
- The business rule needs to be reused across multiple entry points (triggers, flows, API, batch jobs) without duplication.
- Performance matters enough that the overhead of Flow's interpreter is a problem.
Classic examples: a pricing engine, an inventory allocation algorithm, an integration that syncs orders with an ERP, a bulk data processor that runs nightly on a million records, anything financial that needs to be auditable and tested against specific scenarios.
Writing this in Flow is technically possible. We have seen it tried. It is usually the single biggest source of debt in an otherwise healthy org.
The hybrid pattern we actually use
The durable version of this pattern is not "Apex owns object X, Flow stays away from it." Both can safely run on the same object, and most healthy orgs do exactly that. What makes it work is architecture: a clear trigger orchestration layer, a pattern for where business rules live, and explicit ownership of who writes what. Here is how we set that up.
One trigger per object
Every SObject that needs Apex automation gets exactly one trigger. That trigger does nothing except hand the transaction off to a handler class. No logic in the trigger body, no SOQL, no conditionals. This is the first rule and the one most orgs break. The usual symptom is three triggers on Account, each written by a different consultant two years apart, firing in an order nobody remembers, with duplicated logic and contradictory side effects.
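The rule above can be sketched in a few lines. This is a hedged example: `AccountTriggerHandler` is a hypothetical class name, but the shape of the trigger is the point.

```apex
// One trigger per object: the trigger body does nothing but delegate.
// AccountTriggerHandler is a hypothetical handler class.
trigger AccountTrigger on Account (
    before insert, before update, before delete,
    after insert, after update, after delete, after undelete
) {
    // No SOQL, no conditionals, no business logic here.
    // The handler inspects Trigger context and dispatches.
    new AccountTriggerHandler().run();
}
```

If a second consultant needs to add Account automation two years from now, it goes into the handler, not into a second trigger.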
The handler class is where orchestration lives. It dispatches before-insert, after-update, before-delete, and so on to the right methods, and it controls recursion so a trigger that updates records on its own object does not fire itself in a loop. Even a lightweight homegrown framework beats having no framework at all. Kevin O'Hara's trigger framework is a reasonable starting point if you want to borrow something well-worn. We use a variant of it on most projects.
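A minimal homegrown handler might look like the sketch below. This is not Kevin O'Hara's framework, though the shape is similar; the method names and the guard are ours, shown only to make the dispatch-plus-recursion-control idea concrete.

```apex
// Minimal handler sketch: dispatches on Trigger context and guards re-entry.
public class AccountTriggerHandler {
    // Static state lives for the whole transaction, so a re-entrant
    // invocation (our own DML firing our own trigger) can be detected.
    private static Boolean isRunning = false;

    public void run() {
        if (isRunning) return; // crude recursion control
        isRunning = true;
        try {
            if (Trigger.isBefore && Trigger.isInsert) {
                applyDefaults((List<Account>) Trigger.new);
            } else if (Trigger.isAfter && Trigger.isUpdate) {
                syncRelatedRecords((Map<Id, Account>) Trigger.oldMap,
                                   (List<Account>) Trigger.new);
            }
            // ...other events dispatched the same way
        } finally {
            isRunning = false;
        }
    }

    private void applyDefaults(List<Account> records) { /* ... */ }
    private void syncRelatedRecords(Map<Id, Account> oldMap,
                                    List<Account> records) { /* ... */ }
}
```

A single boolean is the bluntest possible guard; the static-set approach described later is more precise for bulk scenarios.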
The Domain pattern is the mature version
Once a project grows past a couple thousand lines of Apex, the trigger handler pattern tends to bend under its own weight. Methods pile up, cross-object logic creeps in, and the handler becomes a grab bag. The SObject Domain pattern is the next step.
Domain is a concept borrowed from Martin Fowler's Patterns of Enterprise Application Architecture and brought to Salesforce through the fflib library (also known as apex-enterprise-patterns, maintained by Andrew Fawcett and contributors). The idea is that a class represents a collection of records of a given type, and all the behavior that applies to that type lives as methods on the class. An Accounts class wraps a list of Account records in-flight. Defaults, validations, derived fields, and the rules that fire on insert or update live inside it. The trigger handler becomes a thin dispatcher that instantiates the Domain and calls the right method.
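A plain-Apex sketch of the idea, with no fflib dependency (class and method names are ours, and the rules shown are placeholders):

```apex
// Domain sketch: the class wraps the in-flight Account records and
// owns the business rules that apply to that type.
public class Accounts {
    private List<Account> records;

    public Accounts(List<Account> records) {
        this.records = records;
    }

    // Called from the before-insert path of the trigger handler.
    public void applyDefaults() {
        for (Account a : records) {
            if (a.Rating == null) a.Rating = 'Warm';
        }
    }

    // Called from the before-insert and before-update paths.
    public void validate() {
        for (Account a : records) {
            if (a.AnnualRevenue != null && a.AnnualRevenue < 0) {
                a.addError('AnnualRevenue cannot be negative.');
            }
        }
    }
}
```

The handler then reduces to `new Accounts(Trigger.new).applyDefaults()` in the right branch, and a test can exercise `Accounts` directly without any DML.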
The full pattern extends beyond Domain. A Service layer handles use-case level operations that span multiple objects ("close this case and notify the customer"). A Selector layer centralizes SOQL so queries are not scattered through the codebase. A Unit of Work collects inserts and updates across a transaction and commits them together, which keeps DML counts and governor limits under control. Together, they give you a codebase that reads like business rules instead of a pile of trigger handlers.
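The Selector layer is the easiest of these to adopt incrementally. A homegrown sketch (class and method names are ours, not fflib's API):

```apex
// Selector sketch: all Account SOQL lives here, so the field list and
// sharing behavior are decided in exactly one place.
public inherited sharing class AccountSelector {
    public List<Account> selectByIds(Set<Id> ids) {
        return [SELECT Id, Name, Industry, AnnualRevenue
                FROM Account
                WHERE Id IN :ids];
    }

    public List<Account> selectByIndustry(String industry) {
        return [SELECT Id, Name, Industry
                FROM Account
                WHERE Industry = :industry];
    }
}
```

When a rule suddenly needs a new field, you add it to one query instead of hunting down every inline SOQL statement that loads Accounts.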
We do not start every project with fflib. For smaller orgs, a clean trigger handler framework without the full Domain/Service/Selector/UoW scaffolding is usually enough. But once the Apex surface area is meaningful, the Domain pattern repays the setup cost many times over. Rules live in one place, they are testable in isolation, and a new engineer can find them by looking for the object they operate on rather than grepping across twelve handler classes.
Flow and Apex on the same object
Flow can run on the same object as an Apex trigger. The trick is being explicit about ownership and understanding the order of execution.
Salesforce's order of execution for a single record save, in rough order:

- before-save record-triggered flows
- before-triggers
- system and custom validation (and duplicate rules)
- the record is saved to the database, but not yet committed
- after-triggers
- assignment and auto-response rules
- workflow rules (and, if they update fields, another round of triggers)
- escalation rules
- after-save record-triggered flows
- entitlement rules, roll-up summaries, and eventually the commit

Both Apex and Flow can touch the same record, but they touch it at different points. A before-save flow that stamps a field and a before-trigger that recomputes the same field will see the trigger win. An after-save flow that issues an update to a record the Apex handler also updated will land after the trigger and can even re-enter your trigger if the handler has no recursion control.
The workable rule is one owner per field, not one owner per object. A single field should be written by exactly one automation, and that automation should be documented. Account.Industry might belong to Flow, driven by admin-maintained rules. Account.Tier might belong to Apex, driven by an algorithm with edge cases. Both can live on the same object without fighting, provided the boundaries are explicit and the team respects them.
The failure mode we see most often is not Flow and Apex coexisting, it is Flow and Apex silently competing for the same field because nobody drew the map. A quick exercise we run on audits: list every automation that writes to every field on the object, on one page. If a field has two owners, that is a bug waiting to find you.
Invocable Apex is the bridge
When a Flow needs to do something complex, a pricing calculation, a callout with retry, a transaction that spans several objects, expose it as an invocable Apex method rather than wiring the logic into nodes. The rule lives in one place, gets real test coverage, and the Flow becomes a thin orchestration on top. Admins keep the composition they want, and you avoid three copies of the same logic slowly drifting apart across three flows.
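The invocable bridge looks roughly like this. The `@InvocableMethod` and `@InvocableVariable` annotations and the list-in, list-out contract are standard; `PricingService` and the field names are hypothetical.

```apex
// Invocable bridge sketch: Flow calls this action, Apex owns the logic.
public with sharing class PricingInvocable {
    public class Request {
        @InvocableVariable(required=true)
        public Id opportunityId;
    }
    public class Result {
        @InvocableVariable
        public Decimal price;
    }

    @InvocableMethod(label='Calculate Price'
                     description='Runs the Apex pricing engine')
    public static List<Result> calculate(List<Request> requests) {
        // Flow bulkifies invocable calls automatically, so this method
        // must handle many requests per invocation, in order.
        List<Result> results = new List<Result>();
        for (Request req : requests) {
            Result r = new Result();
            r.price = PricingService.calculateFor(req.opportunityId); // hypothetical
            results.add(r);
        }
        return results;
    }
}
```

Note that the output list must match the input list in size and order; that is how Flow maps results back to the records that triggered it.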
Recursion control belongs to Apex
If Flow updates a record that Apex processes, or Apex updates a record that a record-triggered flow reacts to, you can get recursion. Flow has limited tooling for detecting its own recursion, so the guardrails live on the Apex side: a static set of processed record IDs inside the handler, a recursion counter, or the stateful-trigger mechanism that fflib provides. If your org has any non-trivial automation, you need one of these. Without it, a bulk update from a single Flow can hit governor limits in ways that are painful to diagnose at 4pm on a Friday.
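The static-set guard mentioned above is a few lines of Apex. This is a sketch with names of our choosing; the idea is that once a record Id has been processed in a transaction, later trigger invocations skip it.

```apex
// Recursion guard sketch: static state persists across trigger
// invocations within one transaction.
public class TierRecalculationGuard {
    private static Set<Id> processedIds = new Set<Id>();

    // Returns only the records not yet handled in this transaction,
    // and marks them as handled.
    public static List<Account> firstPassOnly(List<Account> records) {
        List<Account> fresh = new List<Account>();
        for (Account a : records) {
            if (!processedIds.contains(a.Id)) {
                processedIds.add(a.Id);
                fresh.add(a);
            }
        }
        return fresh;
    }
}
```

Unlike a single boolean flag, the set works correctly when a bulk update and a re-entrant single-record update arrive in the same transaction.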
None of this is exotic. It is how a lot of well-run Salesforce orgs operate. What it asks for is an architect who will draw the boundaries clearly, a team that respects them, and a willingness to invest in the scaffolding before the org is complicated enough to need it.
The performance reality
Flow has gotten much better over the last few years. It is still slower than Apex for the same work. How much slower depends on what you are doing.
Rough numbers from our own benchmarks, not official: a simple record-triggered flow runs in the low single-digit milliseconds for a single record. Not a problem. Run that flow against 10,000 records in a single transaction and the story changes. The interpreter overhead compounds. We have measured cases where Apex handled in 400ms what Flow took 6 seconds to do, and the DML and CPU time limits became real.
You will not notice this on an org with modest volume. You will definitely notice it on an org with data loader jobs, integrations, or bulk operations. If any of those are part of your platform, the automation layer needs to account for it.
Testing is where the rubber meets the road
The single biggest argument for Apex is testability. A well-tested Apex class is a living specification of the business rule. You change the rule, you change the test, and the change is self-documenting in the repo.
Flow testing exists, and it is better than it was, but it is not there yet. Testing a flow still feels like testing a system you cannot fully mock. Dependencies are implicit. Failure modes are surprising. And you cannot review the artifact in a diff the way you can with Apex.
For anything that needs to be verifiable (compliance-relevant logic, anything financial, anything you will be audited on), Apex is the honest choice. For anything where the behavior is simple enough that the flow reads like a spec, Flow is fine.
The version control problem
Flows are metadata, and they deploy. But reading a flow diff in a pull request is painful, and merge conflicts on flows are frequently unresolvable without opening the org. Any team that ships from Git and runs real code review is going to find this frustrating.
Apex diffs beautifully. Apex conflicts resolve sanely. If your team has a mature engineering practice, a lot of your business logic is going to gravitate toward Apex for this reason alone, regardless of the elegance debate.
What we default to at different org sizes
Small orgs, admin-owned. Flow-first. The team does not have engineers on staff. Apex is limited to a small number of targeted uses. Flow runs the business.
Mid-sized orgs, small engineering team. Hybrid, tilting toward the pattern above. Apex owns anything complex, Flow orchestrates, invocable methods bridge. Architect enforces the boundary.
Large orgs, dedicated engineering practice. Apex-heavy. Flow is reserved for screen flows and genuinely admin-maintainable workflows. Every meaningful business rule lives in versioned, tested code. Deployment goes through CI/CD.
A test that usually works
When you are unsure, ask two questions. First, if this logic is wrong in production, how much does it cost? If the answer is "a bad email goes out," you can use Flow. If the answer is "we misbill customers" or "we break an integration," use Apex.
Second, who is going to maintain it? If the honest answer is "an admin," Flow. If the honest answer is "an engineer, forever," Apex. Pretending an admin will maintain complex logic the engineering team wrote in Flow is how you end up with flows nobody understands in year three.
The tired debate, resolved
The honest answer is that the debate was always the wrong framing. Flow and Apex are different tools for different problems. A good architecture uses both, deliberately, with a clear rule about which one owns what.
If your org has drifted into a pattern where logic is accidentally split across both (some rules in Flow, some in triggers, some in Process Builder that nobody has migrated), the work is not picking a side. The work is drawing the line and then consolidating.
We spend a lot of time doing exactly that with clients. It is less flashy than building new things. It is one of the highest-leverage things an engineering team can do in a long-lived Salesforce org.