Salesforce Code Review & Handover Bingo: Common Mistakes and Red Flags 🚩

Whenever a project is handed over to you, some sort of review or health check should be done. We can make it a more fun team activity and fill in the bingo card during the investigation!
[Image: Salesforce development practices bingo card, made using Bingo Baker.]

To give this bingo some fun-factor, the mentioned concepts have different severity – some are harmless, annoying inconveniences, others are bigger code or architectural smells, indicating something bigger hidden or incoming trouble. Treat it as a light-hearted team activity – a conversation starter that helps you spot risks, share laughs, and maybe cry a little on the inside.
Got a full row? Congrats! A wild Technical Debt story appeared in the backlog!

All classes in without sharing mode

Sharing controls which records the executing User can read, or read & write. Using without sharing removes that control. Should your SAP Integration User, whose job is inserting Products, be able to see the personal details of all of your clients, or overwrite them by mistake due to some bug?
Absolutely not (unless business wants that 🙃).
The default sharing keywords to use should be with sharing for domain classes, and inherited sharing for helpers and tools.

There are cases when without sharing is needed, like performing update calls for unauthenticated Guest Users, but in my opinion, it should be handled as unsafe code. I like to hide it in a separate, private inner class that does the absolute minimum and is as hard as possible to change or misuse, as shown below.

public with sharing class ContactDomain {

    public void updateDetails(List<Contact> contactDetailsList) {
        if (UserInfo.getUserType() == 'Guest') {
            new ContactUpdaterGuestUser(contactDetailsList).updateContacts();
        }
    }

    private without sharing class ContactUpdaterGuestUser {
        private final List<Contact> contactDetailsList = new List<Contact>();

        private ContactUpdaterGuestUser(final List<Contact> contactDetailsList) {
            this.contactDetailsList.addAll(contactDetailsList);
        }

        private void updateContacts() {
            final List<Contact> contactsToUpdate = new List<Contact>();

            for (Contact c : contactDetailsList) {
                Contact contactToUpdate = new Contact();
                contactToUpdate.Id = c.Id;
                // add only fields that Guest user is allowed to change
                contactToUpdate.LastName = c.LastName;
                contactToUpdate.Phone = c.Phone;
                contactsToUpdate.add(contactToUpdate);
            }

            update contactsToUpdate;
        }
    }
}

Exactly 75% test coverage

Keeping the test coverage at the bare minimum of the mandatory level means that whenever you add a single line anywhere, there is a risk that deployment won’t proceed. If we encounter such a situation while something in Production is failing and a quick hotfix is needed, a ↗️boost to frustration and ↘️reverse boost to SLA are guaranteed 🥳.

No test data factory or builder

There is nothing better than changing 50 test classes and their Contact creation, just because a new required field was added and tests started to fail. Making all tests use common Test Factory methods to provide the same record makes code maintenance easier. Another advantage is documentation – most likely, your business uses a certain state of a record for most operations (like a Lead with LeadSource = 'Web', Rating = 'Warm'), and you can provide that state with the Factory, giving a new developer some hints about what to expect from org data.
There are plenty of examples of it on the Web – simple ones like on Apex Hours, or more complex ones on public GitHub or by Piotr Gajek – a plethora of options.
For some use-cases, a Builder pattern can be very helpful – when code requires very specific conditions or rules for different tests, it may be easier to implement a flexible builder rather than multiple factory methods. For example: processing an Account's status depending on how many Contacts it has, what the sum of its Opportunities' Amounts is, what the highest Amount across the Opportunities is, and how old these Opportunities are.
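A minimal sketch of such a builder – OpportunityTestBuilder and its defaults are hypothetical, not from any specific library:

public with sharing class OpportunityTestBuilder {
    // sensible defaults, so a plain build() already returns a valid record
    private Opportunity record = new Opportunity(
        Name = 'Test Opportunity',
        StageName = 'Prospecting',
        CloseDate = Date.today().addDays(30),
        Amount = 10000
    );

    public OpportunityTestBuilder withAmount(Decimal amount) {
        record.Amount = amount;
        return this;
    }

    public OpportunityTestBuilder closedDaysAgo(Integer days) {
        record.StageName = 'Closed Won';
        record.CloseDate = Date.today().addDays(-days);
        return this;
    }

    public Opportunity build() {
        return record;
    }
}

// usage in a test: an old, big, won deal in one readable line
Opportunity bigOldDeal = new OpportunityTestBuilder().withAmount(500000).closedDaysAgo(90).build();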

No flags to deactivate trigger and validation rules declaratively

It’s a good idea to have a simple Boolean in Custom Settings that admins can use to temporarily disable triggers and validation rules. This helps with edge cases, data fixes, imports, or restoring backups — all without breaking business logic for standard users, who still must obey the rules, as Hierarchy Custom Settings can be adjusted for specific Users or Profiles. Bonus: you can loosen validation for API users too. Adding this early is cheaper than retrofitting it into a complex system.
I personally prefer not using Custom Metadata for this, because it is deployable. I’m paranoid about deploying a disabled flag by mistake in CI/CD, leaving Production without Validation Rules or Triggers running for some time – the potential cost of it seems too big to risk that possibility.
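For illustration, a minimal sketch of the kill switch, assuming a hypothetical Hierarchy Custom Setting Automation_Settings__c with a Triggers_Disabled__c checkbox:

// inside a trigger handler – early exit when the kill switch is on;
// getInstance() respects User- and Profile-level overrides of the Hierarchy Custom Setting
if (Automation_Settings__c.getInstance().Triggers_Disabled__c == true) {
    return; // an admin flipped the flag – skip all trigger logic
}

In a Validation Rule, the same Hierarchy Custom Setting can be checked declaratively, e.g. AND(NOT($Setup.Automation_Settings__c.Validation_Rules_Disabled__c), <original condition>) with a hypothetical second checkbox.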

Multiple triggers on one sObject

When multiple Apex triggers are present on one object, their order of execution is not guaranteed, so you cannot rely on them and cannot test them properly. These are a big no-no – just use (any) trigger handler and define the order of execution of your code.
The only exception here is triggers installed from packages, and nothing can be done about that.

SOQLs without WHERE or LIMIT outside Batch

Any unlimited query may lead to surpassing governor limits, throwing an unhandled LimitException and terminating the whole processing in Production. In the case of frequently used objects like Account or Opportunity, this is rather inevitable.

There can be a temptation to skip the WHERE clause for objects used as technical flags or mappers, but this is an architectural smell that can make future design harder – whenever the queried object gets reused somewhere else, old queries may need to be adjusted to exclude these new records. It could be avoided from the start by ensuring only the right records are fetched from the database.
Asking business people under which conditions or states these records stop being valid to use is a good start, or even adding a simple IsActive__c boolean or Category__c picklist field.
The scope of changing old queries can be reduced by good use of the Selector pattern, like SOQL Lib, as queries are placed in separate, well-defined places.
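As a minimal sketch – the Country_Mapping__c object and its fields are hypothetical, the point is the explicit filter and the safety-net LIMIT:

// only active mappings are fetched, with a hard LIMIT as a safety net against runaway data growth
List<Country_Mapping__c> mappings = [
    SELECT Iso_Code__c, Region__c
    FROM Country_Mapping__c
    WHERE IsActive__c = true
    LIMIT 2000
];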

Multiple Record Triggered Flows on same OperationType on one sObject

Similar to multiple triggers on one sObject, having multiple Flows for one trigger operation is also not a great idea. It is not as bad, because we can define the order of Record-Triggered Flow execution with the Trigger Order property, but the other problems remain: multiple, unstructured triggers are harder to read and follow, harder to extend with new features, prone to duplicated logic or logic collisions, and more time-consuming to debug. And referencing Jan Śledziewski's blog post again, multiple Record-Triggered Flows can be devastating for performance if hundreds of records are being processed in the transaction.
Another discussion is how to handle Asynchronous and Scheduled Paths. In my opinion, if the async or scheduled logic is complex, it’s reasonable to place it in a separate Record-Triggered Flow. Just like with synchronous logic, the rule still applies: only one Flow for the Async Path and one Flow for Scheduled Paths. These should act as entry points, calling proper Subflows from them.

Using Profiles instead of Permission Sets

Profiles were even announced to reach End-of-Life in Spring ’26; however, this was later postponed to a yet-unannounced future in Salesforce’s Help article. There is very little left that only a Profile can do, and the mentioned article lists it all. While Profiles are still mandatory on the User record, the recommended practice is to assign Minimum Access – Salesforce and expand it with Permission Sets – if you are not doing this on your org yet, you should start soon. Check out User Access Policies; these can help your transition to the Brave New World of small, manageable files of Permission Sets.

Excessive amount of System.debug()

Leftover System.debug('Collection insert'); not only clutters the code, forcing our eyes to move additional millimeters, but it also has a negative impact on performance. Bob Buzzard did some benchmarking in his blog post, and these calls can add up surprisingly quickly against the CPU Limit, especially when printing objects.
If you need to have some logging running in Production, think about whether it should run all the time and for all Users. If not, there are a couple of solutions:

  1. Put System.debug() in if statements and control whether they are enabled by Custom Metadata (like in the blog post’s Delete All the Debugs? section), Custom Settings, or a Custom Permission with FeatureManagement (Pawel Wozniak’s comment on Bob’s blog) – see the sketch after this list.
  2. Set a higher logging level for a certain User and use SFDX with streaming logs, or Apex Debuggers – check out Piotr Gajek’s Apex Debugging post.
  3. Nebula Logger for heavy-duty monitoring.
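A minimal sketch of option 1 using a Custom Permission – the permission name Enable_Verbose_Logging is hypothetical:

// debug runs only for Users granted the Custom Permission via a Permission Set
if (FeatureManagement.checkPermission('Enable_Verbose_Logging')) {
    System.debug(LoggingLevel.INFO, 'Records to insert: ' + recordsToInsert);
}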

Checking collection emptiness before DML

I have seen this on every project I was involved in, and shamefully wrote that check myself too:

if (!recordsToInsert.isEmpty()) {
    // No additional logic, just DML
    insert recordsToInsert;
}

Apparently, that’s not needed at all – if a List has no records, 0 DMLs are executed. Try it yourself 😉
A related Stack Exchange post mentions that the isEmpty check is marginally faster than skipping it, although the difference is about 1ms per 100 DML calls – more time is spent reading the condition than it will ever save.

No Fault Paths in Flows

Fault Paths are the equivalent of Apex’s try-catch blocks, allowing proper error handling without presenting an ugly message to the User. For some reason, a lot of Flows I have encountered didn’t use them, and the Users were forced to see the Unhandled Red Text of Shame.
A Fault Path not only gives an opportunity to make the error more presentable, it also allows you to store it as a log, perform a rollback, or return the User to the previous screen with guidance on what should be corrected.
The biggest issue I have with missing Fault Paths is staring at Error Flow Interviews without them giving full debugging information with variables’ values and visual guidance on which exact Block failed. Special shout-out to Event and Task Record-Triggered Flows, where Debug mode still doesn’t let you specify the record’s data or choose the operation type, making debugging an exercise in tracking everything in the Flow with your mental cache or a notepad, just like in the good ol’ days of punched cards 👴🏻.

For readers who understand Polish – there is a very good conference video about dealing with errors in Flows, made by Paweł Dobrzyński.

Gigantic Flow instead of multiple Subflows

A gargantuan chart with dozens of Action blocks, resembling tangled strands of spaghetti, impossible to memorize and follow, overloading your brain-level cache every time you look into it, is really, really sad to work with. Don’t punish your future self or your colleagues with it – just split it into some well-named Subflows. Like methods in code, a Subflow doesn’t need to be reusable by multiple Flows to justify its existence – it can simply be a tool to give a complicated subprocess a descriptive name.

A small caveat here – there are at least two valid reasons not to divide big Flows into smaller Subflows. The first one is working with Before-trigger Flows, as they don’t allow adding any Subflow block into the Flow.
Another reason is performance in bulk operations. Jan Śledziewski, in his blog post "Importance of Having a Single Flow Trigger", showed that an additional Subflow call takes around 2 extra milliseconds. For a 1000-record update, that adds up to 2 seconds taken from CPU time. That said, if your system regularly handles massive updates on hundreds of records, you should already consider going with Apex instead of Flows.

Only catching general Exception in try-catch

There are cases when different Exceptions should be handled differently. For example, when a User tries to insert a record during which some calculations are made, failing because a value of 0 was put into the divisor is not the same as failing due to a lack of field-level permissions or a missed required field.
Specifying which Exceptions should be handled forces the developer to think about what can fail, which in the long term gives a better understanding of the process. Not only that, it’s documentation for others, as it shouts "Hey! I can fail here, please be sure you know about this constraint! 📣"
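A minimal sketch of the idea – the variables and the Opportunity fields are made up for illustration:

Decimal totalAmount = 100;
Integer participantCount = 0;

try {
    Decimal share = totalAmount / participantCount; // throws System.MathException on divide-by-zero
    insert new Opportunity(
        Name = 'Split deal',
        StageName = 'Prospecting',
        CloseDate = Date.today(),
        Amount = share
    );
} catch (MathException e) {
    // invalid input – ask the User for a non-zero participant count
} catch (DmlException e) {
    // permissions or validation failed – surface the field-level error message
} catch (Exception e) {
    // truly unexpected – log it and rethrow
    throw e;
}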

Excessive lack of .startTest() and .stopTest()

These methods do two things: they reset the governor limits for the code placed between them, so the test setup doesn’t consume the limits of the tested logic, and Test.stopTest() forces all asynchronous jobs enqueued between the two calls to execute synchronously before it returns.

Executing asynchronous code should be a part of testing whenever we use any DML operation (making these more integration tests than unit tests, in my opinion), as DML implicitly calls the Triggers and Flows, which may include Future, Queueable, Batch calls or Flow’s Asynchronous Paths. Even if these are not present right now, they might be in the future. Our tests should be resilient, and executing the async code can detect possible side effects.
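A minimal sketch of the pattern – the test is hypothetical and assumes some async logic hangs off the Contact trigger or Flows:

@IsTest
private class ContactDomainTest {

    @IsTest
    static void shouldRunAsyncLogicOnInsert() {
        Contact c = new Contact(LastName = 'Doe');

        Test.startTest();
        insert c;        // may enqueue Future/Queueable/Batch jobs via Triggers or Flows
        Test.stopTest(); // every enqueued async job finishes here, synchronously

        // assert on what the async logic produced, not just on coverage
        Assert.areEqual(1, [SELECT COUNT() FROM Contact WHERE LastName = 'Doe']);
    }
}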

Hard-coded authentication instead of using Named Credentials

Named Credentials can be used in Apex code directly:

// Wrong ❌❌❌
HttpRequest req = new HttpRequest();
req.setEndpoint('https://api.paypal.com/v1/');
req.setHeader('Client-Id', '6779ef20e75817b79602');
req.setHeader('Client-Secret', 'jih0VuTzAVa4Mqm3auATbAQgCA0a');

// Good ✅✅✅
HttpRequest req = new HttpRequest();
req.setEndpoint('callout:My_Named_Credential/v1');

// Also good ✅✅✅
HttpRequest req = new HttpRequest();
req.setHeader('X-Username', '{!My_Named_Credential.Username}');
req.setHeader('X-Password', '{!My_Named_Credential.Password}');

Having secrets like clientId and clientSecret explicitly written in Apex code, which also lands in the Git repository, is a straightforward security violation.
Named Credentials not only hide the secrets, but also limit access to them through individual permissions in a Permission Set, so a person able to execute Anonymous Apex still won’t be able to use the credentials and make the authenticated callout.

Code in trigger instead of Trigger Handler

There should be only one trigger per object, and code in trigger files should be limited to invoking some handler framework. Why? Because putting logic directly in triggers makes it hard to test, reuse, or control execution order – avoid having hundreds of lines of code in one trigger file. A good, modular trigger handler is easy to adjust for new features or refactoring, and can also speed up your unit test execution – instead of doing DML calls in the tests, you can pass a record collection to a method called by the Handler, and check that exact place only, without going through all the insert/update processing.
Another big problem with not using a Trigger Handler appears when developers decide they need one – they may pick a different framework than their colleagues use on another project in the company, adding unnecessary complexity to future handovers. A better practice is to use the same or similar trigger frameworks across the organisation to reduce the scope of knowledge transfers, keeping that process focused on business needs and processes rather than the structure of the code.
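A minimal sketch of the one-trigger-plus-handler idea – ContactTriggerHandler and its methods are hypothetical, not tied to any specific framework:

// ContactTrigger.trigger – the only trigger on Contact, no logic inside
trigger ContactTrigger on Contact (before insert, after insert, before update, after update) {
    ContactTriggerHandler.handle(
        Trigger.operationType,
        (List<Contact>) Trigger.new,
        (Map<Id, Contact>) Trigger.oldMap
    );
}

// ContactTriggerHandler.cls – logic lives in a class, testable without DML
public with sharing class ContactTriggerHandler {
    public static void handle(TriggerOperation op, List<Contact> newRecords, Map<Id, Contact> oldMap) {
        switch on op {
            when BEFORE_INSERT {
                setDefaults(newRecords);
            }
            when AFTER_UPDATE {
                // e.g. sync related records using newRecords and oldMap
            }
            when else {
                // remaining events intentionally unhandled
            }
        }
    }

    // unit tests can call this directly with an in-memory collection – no insert needed
    @TestVisible
    private static void setDefaults(List<Contact> records) {
        for (Contact c : records) {
            if (c.LeadSource == null) {
                c.LeadSource = 'Web';
            }
        }
    }
}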

No assertions in Tests

I absolutely hate it 😡. It gives me a false sense of confidence that something works, while the only thing it actually proves is that the covered Apex code compiles.
I’d honestly rather deal with completely untested code that I need to figure out from scratch, than stare at those green coverage lines of lies, pretending everything’s fine.
I’d even rather see fake coverage with long, semi-empty methods than tests with no assertions, because fake coverage clearly indicates to me that something is wrong.

Let me quote Tomek Kaczanowski and his book about unit tests, "Bad Tests, Good Tests":

There is only one (evil!) reason to write tests without assertions: to inflate the code coverage metrics without doing anything useful!

The horror of this approach breaks the Geneva Conventions of software development.

If you spotted such assertion-free places in your code base, remember about the newer Assert class, which should be used in place of the older System.assert() methods. If you missed that Winter ’23 feature, Adam Osiecki made a nice comparison on the blog.
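A couple of examples of the newer class – the collection and field values are illustrative:

// legacy style
System.assertEquals(3, accounts.size(), 'All test Accounts should be inserted');

// Winter '23 Assert class – a dedicated, readable method per intent
Assert.areEqual(3, accounts.size(), 'All test Accounts should be inserted');
Assert.isNotNull(accounts[0].Id, 'Records should be inserted');
Assert.isTrue(accounts[0].Name.startsWith('Test'), 'Factory should prefix test data');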

Public Read/Write Org-Wide Defaults (OWDs)

There is absolutely no point in allowing your new junior Sales Representative to see a support Case record about a missing cutlery basket in an ordered dishwasher (unless business explicitly asks for it 🙃).
Having OWDs set to Private, you reduce the risk of mistaken data updates, or an unwanted pair of eyes seeing the personal data of your customers or key business partners 👀.
Salesforce sharing is permissive, not restrictive – it may not be possible to hide some records from some users when you need to, but we have a lot of tools to display hidden records to someone. More about this can be read in this Stack Exchange post or Salesforce’s Security Guide.

Getting IDs by loop instead of Map(records)

Sometimes the set of IDs needs to be extracted, for example to make some mappings from one object to another. Instead of iterating over a List, a Map constructor can be used:

// Not so good ❌❌❌ and older 👴👵
public static Set<Id> getContactIds(List<Contact> receivedContacts) {
    Set<Id> contactIds = new Set<Id>();
    for (Contact c : receivedContacts) {
        contactIds.add(c.Id);
    }
    return contactIds;
}

// Shorter, better ✅✅✅
public static Set<Id> getContactIds(List<Contact> receivedContacts) {
    return new Map<Id, Contact>(receivedContacts).keySet();
}

// SOQL version ✅
public static Set<Id> getTodaysContactIds() {
    return new Map<Id, Contact>([SELECT Id FROM Contact WHERE CreatedDate = TODAY]).keySet();
}

Dan Appleman claimed in his book, Advanced Apex Programming, that the Map implementation is up to 5x faster in CPU time. I used to get similar results two years ago; however, this no longer seems to be the case – collection iteration and the Map constructor give similar CPU times, although HeapSize can differ for some reason.
Nevertheless, the latter is shorter and requires fewer lines to read and cover with tests.

Hard-coded ID

Bonus points for IDs hidden in Custom Labels or Custom Settings to refer to a RecordType in a Flow. Hard-coding IDs needs to be avoided, because:

  • After deployment, the Production org will have different IDs for these records, and a future Sandbox refresh can desynchronize them again on the Sandbox side
  • Unit testing can be harder or straight-up impossible, because the IDs are different in the test context
  • a "minor" mistake of overwriting such a Custom Label with a RecordType Id can suddenly have gigantic consequences, and these can be hard to debug and fix quickly – not because they are complex by any means, but because a developer might not consider checking that at the beginning of an investigation, as there are other places more likely to fail.

Missing bulk tests in test classes

Developing in Apex means working around the governor limits; moreover, Triggers handle at most 200 records at a time. If you insert 201 records, the trigger fires twice: a first chunk of 200 records, then a second chunk of 1. Some bugs or invalid resource management become visible only when more than one record or chunk is processed in one transaction. I like to write at least one test involving DMLs with 251 records – this number covers two trigger chunks, stresses the governor limits, and gives a little extra workout to the CPU and Heap size limits.

Not testing it means that our system may not be prepared for bulk data loading. If an Admin is asked to do some record corrections with Data Loader, or some API service processes more records, we might end up with the disgusting problem of an unsynchronized database state, which can stay undetected for a long period.
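A minimal sketch of such a bulk test, following the 251-record rule of thumb described above:

@IsTest
static void shouldHandleBulkInsert() {
    List<Contact> contacts = new List<Contact>();
    for (Integer i = 0; i < 251; i++) {
        contacts.add(new Contact(LastName = 'Bulk ' + i));
    }

    Test.startTest();
    insert contacts; // the trigger fires twice: a chunk of 200, then a chunk of 51
    Test.stopTest();

    Assert.areEqual(251, [SELECT COUNT() FROM Contact WHERE LastName LIKE 'Bulk%']);
}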

SOQLs in loop

Common knowledge, but it still happens sometimes. The reason for avoiding it is the existence of governor limits – if we hit 100 SOQLs (200 for async), the transaction terminates, and an uncatchable LimitException is thrown. How to increase the chance of detecting these? See the point about bulk tests.
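A minimal before/after sketch, assuming a List<Account> accounts is in scope:

// ❌ one query per Account – the 100-SOQL limit is hit after 100 iterations
for (Account acc : accounts) {
    List<Contact> contacts = [SELECT Id FROM Contact WHERE AccountId = :acc.Id];
}

// ✅ one query for all Accounts, grouped in memory
Map<Id, List<Contact>> contactsByAccountId = new Map<Id, List<Contact>>();
for (Contact c : [SELECT Id, AccountId FROM Contact WHERE AccountId IN :accounts]) {
    if (!contactsByAccountId.containsKey(c.AccountId)) {
        contactsByAccountId.put(c.AccountId, new List<Contact>());
    }
    contactsByAccountId.get(c.AccountId).add(c);
}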

DMLs in loop

The same case as SOQLs, but even worse – the DML limit is higher (150 statements per transaction), but each database operation can invoke triggers, annihilating CPU time and other governor limits as well. The fix is the same: mutate records in the loop, then do one DML outside of it, as sketched below.
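A minimal sketch, assuming a List<Contact> contactsToFix is in scope:

// ❌ one DML statement (and a trigger run) per iteration
for (Contact c : contactsToFix) {
    c.Phone = '123-456-789';
    update c;
}

// ✅ mutate in the loop, update once
for (Contact c : contactsToFix) {
    c.Phone = '123-456-789';
}
update contactsToFix;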

Non-descriptive variable names in Flow

Sadly, there is no widely-approved Naming Convention for Flow elements yet (SFXD is trying with their described guideline), yet I have encountered names like "Loop 1", "variable", "Assignment" more than zero times, and it made my day sad. In the Clean Code book, Robert C. Martin wrote that every name should reveal intention, avoid disinformation, make meaningful distinctions, and a few more things. The name "Assignment" is meaningful only to the author, and only for a day.

References:

  1. Bob Buzzard – The Impact of System.debug() blog post
  2. Stack Exchange – Skipping DML on Empty Lists
  3. Jan Śledziewski – Importance of Having a Single Flow Trigger blog post
  4. Arcan – Architectural Smells blog post
  5. SOQL Lib – Apex library
  6. Tomek Kaczanowski – Bad Tests, Good Tests book
  7. Salesforce Assert Class – Salesforce documentation
  8. Adam Osiecki – System.Assert Class vs System.assertEquals() blog post
  9. Apex Hours – Test Data Factory blog post
  10. GitHub – TestDataFactory Apex library
  11. Piotr Gajek – Apex Test Data Factory blog post
  12. Apex Hours – Builder Pattern blog post
  13. Stack Exchange – Salesforce Sharing
  14. Salesforce Security Guide – Salesforce documentation
  15. Salesforce Asynchronous Paths – Salesforce documentation
  16. Salesforce Scheduled Paths – Salesforce documentation
  17. Dan Appleman – Advanced Apex Programming, Fifth Edition: 5 – Fun With Collections, Using Maps to Obtain Sets
  18. Trailhead – Fault Paths
  19. Paweł Dobrzyński – Error Handling in Flows (Polish) – conference video
  20. Salesforce Profiles EOL Postponed – Salesforce documentation
  21. User Access Policies – Salesforce documentation
  22. SFXD – Flow Naming Conventions – SFXD wiki post
  23. Robert C. Martin – Clean Code: Chapter 2 Meaningful Names
  24. Salesforce Named Credentials – Salesforce documentation
  25. Gherkin Reference – Gherkin Documentation
Stanisław Zań
Salesforce Developer
Salesforce Developer since 2020. An Apex and Flow enjoyer, taking advantage of everything the blue platform has to offer, delivering robust solutions on the beloved Custom Cloud.
