Beyond Best Practices – Why Following the Rules Can Break Your Code? – Polish Dreamin’ 26
Dreyfus Model of Skill Acquisition
Have you heard about the Dreyfus Model of Skill Acquisition?
The Dreyfus Model of Skill Acquisition describes how individuals move through different stages as they develop their skills. It helps us understand how team members grow over time – from novice to expert. In software development, we typically label these stages as Junior, Mid, and Senior developers. Each of these titles gives us a rough expectation of their level of competence.
As shown in the image above, each level in the model is associated with a different way of thinking and acting. We all aim to become experts – people who have a vision of what’s possible, an intuitive grasp of situations, and who transcend reliance on rules.
The problem appears when people who are considered Leads, Seniors, or Architects are still operating at an earlier stage of the Dreyfus model. They may hold an “expert” title, but their thinking is still at a “competent” level. This means they may blindly follow rules without fully understanding why they exist or when they should be broken. Following rules “just because” can be harmful to both the project and the team.
To simplify, in the context of this article, we can assume:
- Junior devs don’t know the rules yet.
- Intermediate devs know the rules and obsess over following them.
- Senior devs have internalized the rules and know when to ignore them.
I want to show that breaking the rules isn’t bad – as long as you understand why you’re doing it. Things we are usually told to avoid – hardcoding, duplication, etc. – can be perfectly acceptable if they make the code clearer or simpler.
During my research for this article, I came across a great quote:
Don’t try to be clever – be clear!
Clever code is expensive. It takes longer to write, longer to review, and you pay for that complexity throughout the life of the project. It’s more likely to introduce bugs and increases long-term maintenance costs. “Clever” often means strictly following rules – design patterns, generic solutions, flexibility, configurability, and so on.
I’ve worked on dozens of projects. Below are some of the rules that made my life harder rather than easier. This post is my personal vendetta 😄
Each point connects with the others, and you may see some ideas repeated in different contexts.
Let’s dive into it.
Rule 1: Code Should Be Configurable
Imagine you’ve written fully functional code that is ready for code review. Suddenly, a senior developer or architect steps in and says:
“Your code isn’t configurable enough – you should add Custom Metadata so we can change its behavior in the future.”
I’ve been in this situation multiple times in my career. The code was complete and working, but during review I was told it needed to be more configurable and dynamic.
Naturally, the question becomes: Why? And “just in case” is not a strong enough reason.
Why Do We Need Configurable Code?
First, let’s understand why we might need configurable code and what the most common motivations are:
- Flexible solutions
- Allowing admins and non-technical users to control system behavior
- Making quick changes without updating code (no deployment required)
- Easily enabling or disabling a feature
- Keeping configuration, business rules, and mappings in one place
All of this sounds great – and there’s a lot of truth in these arguments. But the devil is in the details. We need critical thinking and a solid understanding of the trade-offs before making design decisions. I want to show the dark side of configurable code: while it can be genuinely useful, in many cases solutions become too generic and too flexible, and over time they do more harm than good.
The most common ways developers try to make code configurable include:
- Custom Metadata
- Custom Settings
- Custom Labels – yes, even those
Even in the Salesforce documentation we can find that:
[…] you can use custom metadata types for these use cases.
- Mappings – Create associations between different objects, such as a custom metadata type that assigns cities, states, or provinces to particular regions in a country.
- Business rules – Combine configuration records with custom functionality. Use custom metadata types and some Apex code to route payments to the correct endpoint.
- Primary data – Let’s say that you use a standard accounting app. Create a custom metadata type that defines custom charges such as duties and VAT rates. If you include this type as part of an extension package, subscriber orgs can reference this primary data.
- Allowlists – Manage lists, such as approved donors and preapproved vendors.
In this section, I’ll try to convince you that you don’t always need configurable or dynamic code, and that over-configuration can lead to something I call “Custom Metadata hell.”
Note: By configurable code, I don’t mean code that is easy to extend, easy to use, or that follows SOLID principles and design patterns. I mean code that can be configured by admins or non-technical users through UI changes – for example, by modifying Custom Metadata records.
The Hidden Cost of Configurable Code
Configurable and flexible code is not inherently good – there is a hidden cost behind it.
Highly configurable code can lead to:
- Debugging nightmares
- Hidden bugs
- Reduced clarity and maintainability
- Increased testing complexity
- Longer development time
Below, I will provide more arguments that highlight the hidden costs of configurable code. I will focus on Custom Metadata, which is probably the most popular way to make code configurable.
Debugging Nightmare
Configurable solutions can easily become a debugging nightmare. One problematic pattern I’ve encountered across multiple projects is storing unstructured JSON inside Rich Text fields in Custom Metadata.
```json
{
    "title": "My Component",
    "actions": ["save", "delete"],
    "columns": ["name", "link", "createdDate"]
}
```
Each metadata record contains a different JSON structure, used by various LWC components or Apex classes. The metadata name is passed between components through an @api property.
As you can imagine, figuring out which metadata record is being used, or what is missing or misconfigured in the JSON, quickly becomes difficult.
Another issue is that Custom Metadata always introduces an additional step when trying to understand what the code is doing. Do you remember the quote from earlier?
Don’t try to be clever – be clear!
Heavy configuration often makes code clever – but rarely clear.
```apex
// Static version
public static List<Account> getMyAccounts() {
    return [
        SELECT Id
        FROM Account
        WHERE RecordType.DeveloperName = 'Partner'
    ];
}

// Dynamic version
public static List<Account> getMyAccounts() {
    String accountRecordType = AccountSettings__mdt.getInstance('RecordTypeSettings').Value__c;
    return [
        SELECT Id
        FROM Account
        WHERE RecordType.DeveloperName = :accountRecordType
    ];
}
```
In software engineering, we talk about Cognitive Complexity – a measure of how difficult a piece of code is to understand. Custom Metadata often increases the number of mental steps required to understand behavior.
The example above is still simple – so let’s look at a more realistic (and more problematic) case.
```apex
public static List<Account> getAccounts(String metadataConfig) {
    SOQLFilters__mdt filtersSettings = SOQLFilters__mdt.getInstance(metadataConfig);
    String query = 'SELECT Id, Name FROM Account';
    List<String> filters = new List<String>();
    if (String.isNotBlank(filtersSettings.RecordTypes__c)) {
        filters.add('RecordType.DeveloperName = \'' + filtersSettings.RecordTypes__c + '\'');
    }
    if (String.isNotBlank(filtersSettings.Industry__c)) {
        filters.add('Industry = \'' + filtersSettings.Industry__c + '\'');
    }
    if (!filters.isEmpty()) {
        query += ' WHERE ' + String.join(filters, ' AND ');
    }
    // etc.
    return Database.query(query);
}
```
Now the code is significantly harder to reason about.
To understand what’s happening, you must:
- Understand how SOQLFilters__mdt works
- Know the structure of the metadata
- Trace where metadataConfig is coming from
Imagine there are 50 metadata records that can be passed into this method – dynamically selected in LWC based on runtime criteria. Debugging becomes guesswork.
With Custom Metadata, understanding behavior often requires:
- Checking which configuration is used
- Verifying the execution path
- Determining why something was enabled or disabled
- Sometimes even asking someone who originally implemented it
You might hear: “But we don’t write code expecting to debug it – our code works.”
Let’s move to the next argument.
Configurable Code Breaks More Frequently
Cognitive Load (Again)
Configurable code often increases cognitive complexity. We have to hold more information in our heads just to understand how the logic behaves. During development, it becomes easier to miss something:
“Oh wait – it doesn’t work because there are 10 other configurations I forgot to update.”
The more steps required to understand how something works, the less control we truly have over it.
I really like the quote:
“The best code is no code at all.”
Each new abstraction layer (such as Custom Metadata) introduces additional maintenance, debugging, and testing effort.
I’ve encountered many unsuccessful solutions that relied heavily on Custom Metadata, where developers struggled to understand how the system worked. Over time, teams became afraid to change configuration records because they didn’t know what the impact would be.
This fear usually appears when the logic becomes too difficult to follow – often due to high cognitive complexity.
Configurable Code Is Untestable
How do you properly test configurable code when the configuration itself can change?
A paradox I’ve encountered is that we make our code dynamic and configurable, yet our tests rely directly on configuration records (Custom Metadata or Custom Settings). This means every configuration change can potentially break our tests.
```apex
public class DiscountService {
    public static Decimal applyDiscount(Decimal amount) {
        DiscountSettings__mdt config = DiscountSettings__mdt.getInstance('Standard');
        return amount - (amount * config.DiscountPercent__c);
    }
}

@IsTest
private class DiscountServiceTest {
    @IsTest
    static void shouldApply10PercentDiscount() {
        Test.startTest();
        Decimal result = DiscountService.applyDiscount(100);
        Test.stopTest();

        Assert.areEqual(90, result, 'The result should be 90.');
    }
}
```
This test works only because it assumes that the Standard discount is set to 10%.
If someone changes the discount value, the test will fail – even though the code itself has not changed.
If a test assumes that a Custom Metadata record contains a specific value, then the behavior is effectively static. Tests also cannot run reliably unless we are certain that the required configuration records exist in the org. If those records are missing or differ from what the test expects, everything fails. In such cases, the logic might as well be hardcoded instead of pretending to be configurable.
Truly configurable code should not rely on real configuration records in tests. By depending on actual metadata, we create a strong coupling between our tests and the environment – which is not far from using SeeAllData=true.
One way to address this is through proper mocking. Instead of relying on real Custom Metadata records, tests can use mocked configurations. However, very few teams actually mock metadata when testing solutions based on Custom Metadata.
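As a sketch of that mocking approach – the @TestVisible seam and the test method below are hypothetical, not part of the original example – the service can expose an override that lets tests inject a fake configuration record instead of reading the real Custom Metadata:

```apex
public class DiscountService {
    // Test seam: when set, this record is used instead of the real Custom Metadata.
    @TestVisible
    private static DiscountSettings__mdt configOverride;

    public static Decimal applyDiscount(Decimal amount) {
        DiscountSettings__mdt config = configOverride != null
            ? configOverride
            : DiscountSettings__mdt.getInstance('Standard');
        return amount - (amount * config.DiscountPercent__c);
    }
}

@IsTest
private class DiscountServiceTest {
    @IsTest
    static void shouldApplyConfiguredDiscount() {
        // The test controls the configuration, so changing the org's
        // metadata records can no longer break it.
        DiscountService.configOverride = new DiscountSettings__mdt(
            DiscountPercent__c = 0.10
        );

        Decimal result = DiscountService.applyDiscount(100);

        Assert.areEqual(90, result, 'A 10% discount on 100 should give 90.');
    }
}
```

Custom Metadata records can be constructed in memory in Apex (though not inserted via DML), which makes this kind of injection straightforward.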
The Impact
As mentioned earlier, one of the main reasons for using configurable code is to allow admins and non-technical users to manage configuration. While this sounds like a noble goal, we should first ask: _Why should admins and non-technical users be able to make configuration changes? Do they fully understand the potential impact of those changes?_
Let’s assume someone modifies a Custom Metadata record directly in production. Custom Metadata or Custom Settings may be used in:
- Apex code
- Unit tests (as already discussed)
- Formulas
- Validation rules
- Flows
- And more
Such a change can impact multiple parts of the system and potentially break existing functionality. In my opinion, changes should only be made by people who understand their impact.
CI/CD Is Enough
Using Custom Metadata often makes sense when the CI/CD process is slow and deployments take hours – or even days. However, this raises an important question: Do configuration changes really need to happen immediately?
There are only a few scenarios where instant updates are truly valuable. For example:
- Feature flags – when rolling out new functionality, you may want the ability to quickly disable it if something goes wrong.
- Temporary trigger control – when performing mass data operations, you may want to disable triggers during the process and re-enable them afterward.
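The feature-flag scenario above can be sketched in a few lines – note that FeatureFlags__mdt and its Enabled__c checkbox are invented names for illustration, not a standard object:

```apex
public class FeatureFlags {
    // Returns true when the flag record exists and is enabled.
    // Flipping the checkbox on the Custom Metadata record takes effect
    // immediately, without a deployment - one of the few cases where
    // configurability genuinely earns its keep.
    public static Boolean isEnabled(String flagName) {
        FeatureFlags__mdt flag = FeatureFlags__mdt.getInstance(flagName);
        return flag != null && flag.Enabled__c;
    }
}
```

A trigger handler could then guard new logic with `if (FeatureFlags.isEnabled('NewScoringLogic')) { ... }` and roll it back in seconds if something goes wrong.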
However, in the vast majority of cases, configuration changes do not need to take effect instantly. If your Custom Metadata stores business rules, mappings, or UI configuration, wouldn’t it be just as effective to deploy those changes through the CI/CD process? These changes must be aligned with the repository anyway, and if needed, they can be delivered quickly as a hotfix. If immediate updates are not required, then why make the behavior configurable instead of implementing it as static/hardcoded logic?
How to Decide If Code Should Be Configurable?
Ask yourself the following questions:
- Do I need the change immediately, or can it be deployed through the CI/CD process?
- If the change is not time-critical – Use CI/CD (no configuration needed)
- If the change must happen immediately – Consider making it configurable
- Can non-technical users make this change safely, without unintended impact?
- If yes – Configurable
- If no – Not configurable
- Do unit tests rely on these records and expect specific values?
- If yes – Not configurable
- If no – Configurable
If the behavior must remain stable for tests, clearly understood by developers, and safely deployed – Don’t configure it – use static code.
If the behavior needs to change quickly and safely by business users – Make it configurable.
Summary
How can we break this rule?
- Use static or hardcoded logic when appropriate.
- Avoid relying on Custom Metadata or Custom Settings by default.
There are many scenarios where configurable code is the right choice – there’s no doubt about that. My main point in this section is that you need configurable code less often than you might think. In most cases, static code is sufficient, and Custom Metadata or Custom Settings should be used only when truly justified. Building flexible solutions “just because” is rarely optimal. Flexibility should not be the default approach – it should be intentional and well considered.
Static (or “hardcoded”) code is often easier to understand, maintain, debug, and fix. When written well, it can also be easy to extend and resilient to change. Let’s dive into it!
Rule 2: Static/Hardcoded Code Is Bad
Hardcoded code embeds configuration directly in the source code of a program, rather than loading it from Custom Metadata, Custom Settings, or database records.
To better understand this, let’s look at a real example.
I worked on a project that included an integration with SharePoint. When a user clicked the “Create Folder in SharePoint” button on the Account page, a folder structure had to be created in SharePoint. The structure was hierarchical – each folder contained subfolders with their own configuration. It looked something like this:
```
folder name: Contracts
read access groups: [Global Sales, Legal Team]
edit access groups: [Legal Operations]
subfolders:
  - folder name: Master Agreement
    read access groups: [Legal Team]
    edit access groups: [Legal Operations]
  - folder name: NDA
    read access groups: [Legal Team, Sales]
    edit access groups: [Legal Operations]
```
The folder was created via a REST integration. The request body had to include a very long string representing the entire structure. In practice, the configuration shown above had to be translated into the required request format.
This led to an architectural decision: Where and how should we store the folder configuration?
Let’s look at the available options.
Just a Callout
This is the most straightforward approach: make a callout and hardcode the request body directly in Apex.
```apex
private class SharePointCallout {
    public static void createFolderStructure() {
        HttpRequest request = new HttpRequest();
        request.setEndpoint('callout:SharePoint' + '/_api/$batch');
        request.setMethod('POST');
        request.setHeader('Content-Type', 'multipart/mixed; boundary=batch_e3b6819b-13c3-43bb-85b2-24b14122fed1');
        request.setHeader('Accept', 'application/json;odata=verbose');

        String boundary = 'batch_e3b6819b-13c3-43bb-85b2-24b14122fed1';
        String body =
            '--' + boundary + '\r\n' +
            'Content-Type: application/http\r\n' +
            'Content-Transfer-Encoding: binary\r\n\r\n' +
            'GET https://fabrikam.sharepoint.com/_api/Web/lists/getbytitle(\'Composed%20Looks\')/items?$select=Title HTTP/1.1\r\n\r\n' +
            '--' + boundary + '\r\n' +
            'Content-Type: application/http\r\n' +
            'Content-Transfer-Encoding: binary\r\n\r\n' +
            'GET https://fabrikam.sharepoint.com/_api/Web/lists/getbytitle(\'User%20Information%20List\')/items?$select=Title HTTP/1.1\r\n\r\n' +
            '--' + boundary + '--';
        request.setBody(body);
        // etc.
    }
}
```
However, the SharePoint batch request format is complex and extremely difficult to read. To create folders with subfolders and apply read/edit permissions, the request body would easily grow to thousands of characters. At that point, even small future changes become painful to implement, and debugging becomes nearly impossible without deep SharePoint API knowledge.
I couldn’t justify doing it this way. So how can we make it better?
Custom Metadata
We’ve already discussed using Custom Metadata to create configurable solutions. One option would be to store the SharePoint folder structure in metadata records and use them to dynamically build the request body.
Parent Folder structure:
| Name | Read Access Groups | Edit Access Groups |
|---|---|---|
| Contracts | Global Sales, Legal Team | Legal Operations |
| Proposals | Sales Team, Pre-Sales | Sales Operations |
| Customers | Account Management, Support | Account Management |
| Vendors | Procurement, Finance | Procurement |
SubFolder structure:
| Name | Parent Folder | Read Access Groups | Edit Access Groups |
|---|---|---|---|
| Master Agreement | Contracts | Legal Team | Legal Operations |
| NDA | Contracts | Legal Team, Sales | Legal Operations |
| Pricing Breakdown | Proposals | Sales Team | Sales Operations |
| Implementation Plan | Customers | Support, Delivery | Delivery Team |
```apex
private class SharePointSettings {
    public static String getFolderStructure() {
        // query metadata
        // build the structure
        return null; // placeholder
    }
}

private class SharePointCallout {
    public static void createFolderStructure() {
        HttpRequest request = new HttpRequest();
        request.setEndpoint('callout:SharePoint' + '/_api/$batch');
        request.setMethod('POST');
        request.setHeader('Content-Type', 'multipart/mixed;');
        request.setHeader('Accept', 'application/json;odata=verbose');
        request.setBody(SharePointSettings.getFolderStructure());
    }
}
```
In our case, I would need not just one, but two Custom Metadata Types – one for parent folders and another for child folders. There were around 15 folders, 10 subfolders, and 20 groups. In total, this would result in approximately 25 metadata records. On top of that, group names would be manually copied and pasted, meaning even a small spelling mistake could break the solution.
At that point, I had to ask myself: do I really need this to be configurable? Of course not – this was a fixed structure. So why store it in Custom Metadata?
Hardcoded Structure
Flexibility should be intentional and justified – not the default approach.
In this case, I knew the folder structure was fixed and unlikely to change frequently. And even if it did, updates could be deployed through the CI/CD process. So did I really need metadata to store it? Did I want to manage an additional 25 records? Not at all.
The entire structure could simply be hardcoded. Yes – you read that correctly – I hardcoded it.
I introduced a FIXED_FOLDER_STRUCTURE variable that contains the full folder definition.
```apex
private static final List<Folder> FIXED_FOLDER_STRUCTURE = new List<Folder>{
    new Folder('Contracts')
        .addReadAccessToGroup(SharePointGroupType.GlobalSales)
        .addReadAccessToGroup(SharePointGroupType.LegalTeam)
        .addEditAccessToGroup(SharePointGroupType.LegalOperations)
        .addNestedFolder(
            new Folder('Master Agreement')
                .addReadAccessToGroup(SharePointGroupType.LegalTeam)
                .addEditAccessToGroup(SharePointGroupType.LegalOperations)
        )
        .addNestedFolder(
            new Folder('NDA')
                .addReadAccessToGroup(SharePointGroupType.LegalTeam)
                .addReadAccessToGroup(SharePointGroupType.SalesTeam)
                .addEditAccessToGroup(SharePointGroupType.LegalOperations)
        ),
    new Folder('Proposals')
        .addReadAccessToGroup(SharePointGroupType.SalesTeam)
        .addReadAccessToGroup(SharePointGroupType.PreSales)
        .addEditAccessToGroup(SharePointGroupType.SalesOperations)
        .addNestedFolder(
            new Folder('Pricing Breakdown')
                .addReadAccessToGroup(SharePointGroupType.SalesTeam)
                .addEditAccessToGroup(SharePointGroupType.SalesOperations)
        ),
    new Folder('Customers')
        .addReadAccessToGroup(SharePointGroupType.AccountManagement)
        .addReadAccessToGroup(SharePointGroupType.Support)
        .addEditAccessToGroup(SharePointGroupType.AccountManagement)
        .addNestedFolder(
            new Folder('Implementation Plan')
                .addReadAccessToGroup(SharePointGroupType.Support)
                .addReadAccessToGroup(SharePointGroupType.DeliveryTeam)
                .addEditAccessToGroup(SharePointGroupType.DeliveryTeam)
        ),
    new Folder('Vendors')
        .addReadAccessToGroup(SharePointGroupType.Procurement)
        .addReadAccessToGroup(SharePointGroupType.Finance)
        .addEditAccessToGroup(SharePointGroupType.Procurement)
};
```
What were the benefits?
- Zero cognitive complexity – You don’t need to navigate Custom Metadata relationships to understand the folder structure. Everything is defined in one place, written in plain English, and easy to follow. When the client asked about the folder structure, I simply shared a screenshot of the code – even non-technical stakeholders could understand it.
- Easy debugging and fixes – During testing, the client asked: “Why doesn’t the Support team have edit access to the ‘Vendors’ folder?” I had the answer in seconds: “Because they’re not included. Should I add them?” The fix required only a single line of code – no new metadata records.
- No duplication – Group types were defined as an enum (SharePointGroupType). This eliminated duplication and reduced the risk of errors when assigning groups.
- Flexibility – Hardcoded does not mean inflexible. The structure was easy to extend: new groups could be added, and the hierarchy could be deepened to as many levels as needed – something that would have been significantly harder with Custom Metadata.
As this example shows, static (hardcoded) code is not inherently a problem.
Rule 3: Low-Code Is The Way
In general, it’s best to consider declarative tool options before exploring custom code options. Automation created with declarative tools is usually easier to create and to support. From a people perspective, learning code takes more time and can often be more difficult, making people who code harder to find. Code-based projects are generally more expensive to build and maintain. ~ Salesforce [9]
Salesforce encourages us to adopt low-code solutions. We often hear that before writing any custom code, we should first consider using Flows – a point-and-click automation tool designed to help build business logic faster and at a lower cost.
I’ve already covered this topic in Salesforce Flow Considerations. In this section, I want to highlight the biggest challenges I’ve encountered when working with Flows, and explain why the common narrative – that code-based solutions are more expensive and harder to build – is becoming less relevant in the age of AI.
Here, I’ll focus specifically on Record-Triggered Flows (I do like Screen Flows).
Over the course of my career, I’ve worked on more than 30 projects of varying size. Perhaps I’ve been unlucky, but I’ve rarely seen Record-Triggered Flows implemented in a truly maintainable way. In fact, some projects were actively migrating logic from Flows back to Apex.
This doesn’t surprise me. In the long run, Record-Triggered Flows often become difficult to manage, hard to maintain, extremely challenging to extend, and adding new functionality can turn into a real nightmare.
I don’t want to repeat myself, as many of the points I want to discuss are already covered in Salesforce Flow Considerations.
The biggest problems with Flows are:
- They are not as easy to use as they appear
- They tend to grow into large, complex structures
- They have functional limitations
- They lack certain advanced features
- The UI can be difficult to work with
- They are hard to review in pull requests
- They often cause merge conflicts
- They are difficult to properly unit test
Cost of Code
The strongest argument for using Flows has always been that low-code solutions are faster than writing Apex – less code means less time, which in turn makes projects cheaper. This may have been true a few years ago, but in the AI era, that argument is becoming less relevant. Writing code is getting faster and cheaper, as AI can generate it in seconds. Additionally, Apex does not have many of the limitations that Flows do, which often makes it a better long-term choice.
You might argue that admins shouldn’t “vibe-code” Apex if they don’t fully understand it, especially since business logic can become complex. However, if the logic is complex, grows too large, or becomes difficult to follow, then a Flow is not the right place for it – you need Apex anyway. If the logic is simple and small, AI-assisted coding can make implementing it in Apex just as accessible.
Rule 4: Avoid Code Duplications
Why Avoid Code Duplication?
I’d like to revisit a quote I mentioned earlier: “The best code is no code at all.” Duplicated code increases the overall codebase – often repeating the same logic in multiple places. This can lead to higher maintenance costs, since updating behavior may require changes in several locations instead of just one. It also increases the risk of errors: if one instance is missed during an update, inconsistencies can appear. Over time, this can add complexity and make debugging more difficult. These are valid concerns, and the DRY (Don’t Repeat Yourself) principle exists for good reason.
```apex
// Before: the null check and scaling logic are duplicated
public class PriceService {
    public static Decimal calculateRegularPrice(Decimal amount) {
        if (amount == null) {
            return 0;
        }
        Decimal price = amount.setScale(2);
        return price * 1.23;
    }

    public static Decimal calculateDiscountPrice(Decimal amount) {
        if (amount == null) {
            return 0;
        }
        Decimal price = amount.setScale(2);
        return price * 0.90;
    }
}

// After: the shared logic is extracted
public class PriceService {
    public static Decimal calculateRegularPrice(Decimal amount) {
        Decimal price = normalizePrice(amount);
        return price * 1.23;
    }

    public static Decimal calculateDiscountPrice(Decimal amount) {
        Decimal price = normalizePrice(amount);
        return price * 0.90;
    }

    private static Decimal normalizePrice(Decimal amount) {
        if (amount == null) {
            return 0;
        }
        return amount.setScale(2);
    }
}
```
However, developers sometimes take this rule too far and attempt to extract or abstract duplicated logic at all costs. I want to show that duplication can sometimes make sense – and that eliminating it should not always be the default goal.
It’s Not That Bad
The DRY (Don’t Repeat Yourself) principle is often interpreted as “never duplicate code.”
However, that’s a misleading simplification. DRY is not really about eliminating code duplication – it’s about maintaining conceptual clarity. The actual idea behind DRY is: “Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.”
This definition says nothing about avoiding duplicated code. Instead, it focuses on avoiding duplicated knowledge or responsibilities. Responsibilities often manifest as code, but superficial duplication does not always need to be removed.
For example, I may need to call two different APIs using a REST client. At first glance, the calls might look similar enough to combine into a single function. A literal interpretation of DRY might suggest doing so. However, these are two distinct pieces of knowledge. If I later need to change how one API is used, it doesn’t necessarily follow that the other should change in the same way. By combining them too early, I introduce parameters and conditionals – which often adds unnecessary complexity. In this case, forcing consolidation actually harms the system by merging separate responsibilities into a single abstraction.
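A sketch of that situation (the service names and endpoints below are invented for illustration): the two callouts look nearly identical today, but they represent different knowledge, so they stay separate.

```apex
public class IntegrationClient {
    // Looks like a duplicate of getInvoices() - but payments and invoices
    // are separate pieces of knowledge that will diverge independently
    // (auth, retries, pagination), so the duplication is deliberate.
    public static HttpResponse getPayments() {
        HttpRequest request = new HttpRequest();
        request.setEndpoint('callout:PaymentsApi/v1/payments');
        request.setMethod('GET');
        return new Http().send(request);
    }

    public static HttpResponse getInvoices() {
        HttpRequest request = new HttpRequest();
        request.setEndpoint('callout:InvoicesApi/v2/invoices');
        request.setMethod('GET');
        return new Http().send(request);
    }
}
```

If the two APIs later converge for a structural reason, merging the methods then is cheap; un-merging a premature abstraction rarely is.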
My approach is to follow DRY – except in these situations:
- When the code looks similar but represents different intent, keeping it separate supports future divergence and easier maintenance.
- When abstraction reduces readability. In large codebases, most of the time is spent reading code, not writing it – so clarity should take precedence over strict reuse.
As programmers we are conditioned to despise copy-pasted code, but there’s always a trade-off as we refactor two methods into a shared abstraction. Even when the original code is nearly identical, the two methods may well model different aspects of the problem domain. When we refactor such code into a shared representation, we give that new method different reasons to change, and when that happens our shared abstraction breaks down in a heavy rain of control flags and Boolean parameters, which is a worse problem than the original duplication. ~ Software Design X-Rays, Adam Tornhill
```apex
// Duplicated Code
public class CaseNotifier {
    public static void notifySales(Case c) {
        Messaging.SingleEmailMessage email = new Messaging.SingleEmailMessage();
        email.setToAddresses(new String[] { 'sales@company.com' });
        email.setSubject('New high-priority case: ' + c.CaseNumber);
        email.setPlainTextBody('A new case requires your attention.');
        Messaging.sendEmail(new Messaging.SingleEmailMessage[] { email });
    }

    public static void notifySupport(Case c) {
        Messaging.SingleEmailMessage email = new Messaging.SingleEmailMessage();
        email.setToAddresses(new String[] { 'support@company.com' });
        email.setSubject('New high-priority case: ' + c.CaseNumber);
        email.setPlainTextBody('A new case requires your attention.');
        Messaging.sendEmail(new Messaging.SingleEmailMessage[] { email });
    }
}

// Common Abstraction
public class CaseNotifier {
    public static void notifyTeam(Case c, String emailAddress) {
        Messaging.SingleEmailMessage email = new Messaging.SingleEmailMessage();
        email.setToAddresses(new String[] { emailAddress });
        email.setSubject('New high-priority case: ' + c.CaseNumber);
        email.setPlainTextBody('A new case requires your attention.');
        Messaging.sendEmail(new Messaging.SingleEmailMessage[] { email });
    }

    public static void notifySales(Case c) {
        notifyTeam(c, 'sales@company.com');
    }

    public static void notifySupport(Case c) {
        notifyTeam(c, 'support@company.com');
    }
}

// Later
public class CaseNotifier {
    public static void notifyTeam(
        Case c,
        String emailAddress,
        String subject,
        String body,
        Boolean useHtml,
        Boolean includeCaseLink,
        Boolean onlyBusinessHours,
        Boolean includeOpportunityContext
    ) {
        if (onlyBusinessHours && !BusinessHours.isWithin(BusinessHours.getDefaultId(), System.now())) {
            return;
        }
        if (includeCaseLink) {
            body += '\nCase Link: /' + c.Id;
        }
        if (includeOpportunityContext) {
            body += '\nRelated opportunity details included.';
        }
        Messaging.SingleEmailMessage email = new Messaging.SingleEmailMessage();
        email.setToAddresses(new String[] { emailAddress });
        email.setSubject(subject);
        if (useHtml) {
            email.setHtmlBody(body);
        } else {
            email.setPlainTextBody(body);
        }
        Messaging.sendEmail(new Messaging.SingleEmailMessage[] { email });
    }

    public static void notifySales(Case c) {
        notifyTeam(
            c,
            'sales@company.com',
            'New high-priority case: ' + c.CaseNumber,
            'A new case requires sales attention.',
            false, false, true, true
        );
    }

    public static void notifySupport(Case c) {
        notifyTeam(
            c,
            'support@company.com',
            'New case: ' + c.CaseNumber,
            'A new case requires support attention.',
            true, true, false, false
        );
    }
}
```
Code can be duplicated when:
- The code looks similar, but the intent is different
- Each part may evolve in a different direction over time
- Keeping it separate makes the domain clearer
- Abstraction would require flags, booleans, or conditionals
- Sharing logic would give one method multiple reasons to change
- Duplication improves readability more than abstraction would
- The repeated code is small, simple, and easy to understand
- Removing duplication would introduce a misleading abstraction
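To make the first point concrete, here is a hypothetical sketch (CaseValidator, the field names, and the thresholds are invented for illustration): two checks that look identical today but encode different business rules, so merging them would couple unrelated domains.

```apex
// Hypothetical example: the two methods are textually identical,
// but they exist for different business reasons and may diverge -
// e.g., legal can change the escalation window without touching the SLA.
public class CaseValidator {
    // SLA rule: support must respond within 48 hours.
    public static Boolean isWithinSlaWindow(Case c) {
        return c.CreatedDate.addHours(48) >= System.now();
    }

    // Compliance rule: complaints must be escalated within 48 hours.
    // Same "48" today - a different reason to change tomorrow.
    public static Boolean isWithinEscalationWindow(Case c) {
        return c.CreatedDate.addHours(48) >= System.now();
    }
}
```

Collapsing these into a single isWithin48Hours method would hide the fact that the two windows are owned by different stakeholders, giving one method two reasons to change.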
No Magic Variables – Use Constants
Let’s start with an example.
// ❌
Decimal potentialEnergy(Decimal mass, Decimal height) {
    return mass * height * 9.81;
}

// ✅
// Option A.
static final Decimal GRAVITATIONAL_CONSTANT = 9.81;

Decimal potentialEnergy(Decimal mass, Decimal height) {
    return mass * height * GRAVITATIONAL_CONSTANT;
}

// Option B.
Decimal potentialEnergy(Decimal mass, Decimal height) {
    final Decimal GRAVITATIONAL_CONSTANT = 9.81;
    return mass * height * GRAVITATIONAL_CONSTANT;
}
The magic variables rule primarily focuses on numbers – a magic number is a numeric value with no obvious meaning. Avoiding them generally makes sense because they can make code harder to understand and refactor later.
However, this rule often gets extended to other values, especially strings. I’ve seen solutions where even ".", " ", or "," were stored as constants.
public class Consts {
    public static final String DOT = '.';
    public static final String SPACE = ' ';
    public static final String COMMA = ',';
    // etc.
}
Duplicating a string a few times in a class is not inherently a problem.
// ❌
@IsTest
private class MyTest {
    private static final String ASSERT_MESSAGE = 'The result should match expected value';

    @IsTest
    static void testMethod1() {
        // ...
        Assert.areEqual('1', result, ASSERT_MESSAGE);
    }

    @IsTest
    static void testMethod2() {
        // ...
        Assert.areEqual('2', result, ASSERT_MESSAGE);
    }

    @IsTest
    static void testMethod3() {
        // ...
        Assert.areEqual('3', result, ASSERT_MESSAGE);
    }

    @IsTest
    static void testMethod4() {
        // ...
        Assert.areEqual('4', result, ASSERT_MESSAGE);
    }
}

// ✅
@IsTest
private class MyTest {
    @IsTest
    static void testMethod1() {
        // ...
        Assert.areEqual('1', result, 'The result should match expected value');
    }

    @IsTest
    static void testMethod2() {
        // ...
        Assert.areEqual('2', result, 'The result should match expected value');
    }

    @IsTest
    static void testMethod3() {
        // ...
        Assert.areEqual('3', result, 'The result should match expected value');
    }

    @IsTest
    static void testMethod4() {
        // ...
        Assert.areEqual('4', result, 'The result should match expected value');
    }
}
Yes, we are technically duplicating the message “The result should match expected value”.
But the intent is not the same. The purpose of constants is to enable change in one place when the meaning is shared.
What if we later want: “The result should match ‘1’ because of fallback logic.”
We cannot safely update ASSERT_MESSAGE without breaking the meaning of other assertions.
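A sketch of how that divergence plays out in practice (hypothetical test, following the article's earlier example): as soon as one assertion needs a more specific message, the shared constant either breaks the meaning of the other tests or gets bypassed.

```apex
@IsTest
private class MyTest {
    // The shared constant no longer fits testMethod1...
    private static final String ASSERT_MESSAGE = 'The result should match expected value';

    @IsTest
    static void testMethod1() {
        // ...
        // ...so we bypass it here - and the "single source of truth" is gone.
        Assert.areEqual('1', result, 'The result should match \'1\' because of fallback logic.');
    }

    @IsTest
    static void testMethod2() {
        // ...
        Assert.areEqual('2', result, ASSERT_MESSAGE);
    }
}
```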
Another issue is readability. In a large test class (say, 500 lines), encountering:
Assert.areEqual('4', result, ASSERT_MESSAGE);
forces the reader to scroll back to find what ASSERT_MESSAGE means. This adds unnecessary cognitive load.
Sometimes duplication is clearer – and harmless.
// ✅
@IsTest
private class MyTest {
    @IsTest
    static void testMethod1() {
        // ...
        Assert.areEqual('1', result, 'The result should match expected value');
    }

    @IsTest
    static void testMethod2() {
        // ...
        Assert.areEqual('2', result, 'The result should match expected value');
    }

    @IsTest
    static void testMethod3() {
        // ...
        Assert.areEqual('3', result, 'The result should match expected value');
    }

    @IsTest
    static void testMethod4() {
        // ...
        Assert.areEqual('4', result, 'The result should match expected value');
    }
}
Concern #1: Not all strings need to be constants.
Another common mistake is defining public static final constants at the top of the class – even when they are used only once.
// ❌
public class MyCallout {
    private static final String POST = 'POST';
    private static final String PATCH = 'PATCH';

    public static void call() {
        HttpRequest request = new HttpRequest();
        request.setEndpoint(...);
        request.setMethod(POST);
        request.setHeader('Content-Type', 'application/json');
        request.setBody(getBody());
        new Http().send(request);
    }

    private static String getBody() {
        return JSON.serialize(new Map<String, Object>{
            'method' => PATCH,
            'url' => ...,
            'referenceId' => ...,
            'body' => new Map<String, Object>{ ... }
        });
    }
}

// ✅
public class MyCallout {
    public static void call() {
        HttpRequest request = new HttpRequest();
        request.setEndpoint(...);
        request.setMethod('POST');
        request.setHeader('Content-Type', 'application/json');
        request.setBody(getBody());
        new Http().send(request);
    }

    private static String getBody() {
        return JSON.serialize(new Map<String, Object>{
            'method' => 'PATCH',
            'url' => ...,
            'referenceId' => ...,
            'body' => new Map<String, Object>{ ... }
        });
    }
}
Using POST and PATCH directly is perfectly fine.
Why avoid unnecessary class-level constants?
- Memory allocation – class-level constants exist even if rarely used.
- Cognitive load – jumping between the method and the top of the class interrupts reading flow.
A good compromise is to keep constants at the method level when appropriate.
// ❌
Decimal potentialEnergy(Decimal mass, Decimal height) {
    return mass * height * 9.81;
}

// ✅
// Option A.
static final Decimal GRAVITATIONAL_CONSTANT = 9.81;

Decimal potentialEnergy(Decimal mass, Decimal height) {
    return mass * height * GRAVITATIONAL_CONSTANT;
}

// Option B - Const on method level.
Decimal potentialEnergy(Decimal mass, Decimal height) {
    final Decimal GRAVITATIONAL_CONSTANT = 9.81;
    return mass * height * GRAVITATIONAL_CONSTANT;
}
Concern #2: Constants don’t always need to be class properties.
They should be promoted only when shared across multiple parts of the logic.
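As a hypothetical sketch of that promotion rule (InterestCalculator and its values are invented for illustration): keep a value method-local while only one method uses it, and lift it to a class-level constant only once a second method needs the same meaning.

```apex
public class InterestCalculator {
    // Promoted to class level only because BOTH methods share the same meaning.
    private static final Decimal ANNUAL_RATE = 0.05;

    public static Decimal yearlyInterest(Decimal principal) {
        return principal * ANNUAL_RATE;
    }

    public static Decimal monthlyInterest(Decimal principal) {
        // Used only here, so it stays at the method level.
        final Integer MONTHS_PER_YEAR = 12;
        return principal * ANNUAL_RATE / MONTHS_PER_YEAR;
    }
}
```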
What should be stored as constants?
Good candidates include:
- Global picklist values
- SObject picklist values, record type names, etc.
- Profile names and Permission Set names
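A hypothetical sketch of those candidates in practice (the class name, API names, and values below are invented; adjust them to your org). These are values that cross-cut many classes and would be painful to hunt down as scattered string literals:

```apex
public class AccountConsts {
    // Hypothetical examples of good constant candidates:
    public static final String RECORD_TYPE_CUSTOMER = 'Customer_Account';
    public static final String TYPE_PROSPECT = 'Prospect';
    public static final String RATING_HOT = 'Hot';
    public static final String PERMISSION_SET_SALES_ADMIN = 'Sales_Admin';
}

// Usage:
Account acc = new Account(
    Name = 'Acme',
    Type = AccountConsts.TYPE_PROSPECT,
    Rating = AccountConsts.RATING_HOT
);
```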
Not every string qualifies as a constant. Constants are a useful tool – but not a universal rule.
How to organize constants?
I recommend using a structured approach like Apex Consts Lib.
It’s less a library and more a clean organizational pattern:
Account acc = new Account(
    Name = 'My Account',
    Type = Consts.ACCOUNT.TYPE.PROSPECT,
    Rating = Consts.ACCOUNT.RATING.HOT
);
Rule 5: ApexDoc – Code Is Not Enough
As we can read in the Salesforce documentation:
ApexDoc is a standardized comment format that makes it easier for humans, documentation generators, and AI agents to understand your codebase. We recommend using ApexDoc comments to facilitate code collaboration and increase long-term code maintainability.
~ Salesforce
ApexDoc includes the following tags: @author, @description, @example, @group, @param, @return, @version, and several others. The full list can be found here.
Below is an example copied directly from the documentation:
/**
 * Provides services for geolocation and address conversion.
 * @author Dennis Smith
 * @version 0.3.0
 * @since 0.1.0
 */
global with sharing class GeolocationService {
    /**
     * Represents geographic coordinates (latitude and longitude).
     */
    global class Coordinates {
        @AuraEnabled
        public Decimal latitude;
        @AuraEnabled
        public Decimal longitude;

        global Coordinates(Decimal lat, Decimal lon) {
            this.latitude = lat;
            this.longitude = lon;
        }
    }

    /**
     * Converts a full address string to approximate latitude
     * and longitude coordinates. This method is deprecated and should no
     * longer be used due to its reliance on an older, less accurate geocoding
     * service and simpler parsing logic. It may not handle all address formats
     * correctly and has a lower success rate.
     * @param fullAddress The complete address string
     * (e.g., "123 Main St, Anytown, CA 90210, USA").
     * @return A `Coordinates` object representing the approximate latitude and longitude.
     * @throws DeprecatedMethodCalledException If this method is invoked,
     * informing the user to migrate to the newer, more robust `geocodeAddress` method.
     * @deprecated in 0.2.0. Use {@link #geocodeAddress(
     * String street,
     * String city,
     * String state,
     * String postalCode,
     * String country)} instead.
     * @since 0.1.0
     */
    @Deprecated
    global static Coordinates convertAddressToCoordinates(String fullAddress) {
        throw new DeprecatedMethodCalledException(
            'The method `GeolocationService.convertAddressToCoordinates(String fullAddress)` is deprecated. ' +
            'Please use `GeolocationService.geocodeAddress(String street, String city, String state, String postalCode, String country)` ' +
            'for all new and existing address-to-coordinate conversions to ensure better accuracy and reliability.'
        );
    }

    /**
     * Geocodes a structured address into precise latitude and longitude coordinates
     * using a robust external geocoding service.
     * This method provides higher accuracy and better handling of diverse address formats.
     * @param street The street address (e.g., "123 Main St").
     * @param city The city (e.g., "Anytown").
     * @param state The state or province abbreviation (e.g., "CA").
     * @param postalCode The postal or ZIP code (e.g., "90210").
     * @param country The country name or code (e.g., "USA").
     * @return A Coordinates object containing the latitude and longitude.
     * @throws GeocodingException If the address cannot be geocoded,
     * if the external service is unavailable, or if required address
     * components are missing.
     * @example
     * {@code
     * try {
     *     GeolocationService.Coordinates coords = GeolocationService.geocodeAddress(
     *         '415 Mission St',
     *         'San Francisco',
     *         'CA',
     *         '94105',
     *         'USA'
     *     );
     * } catch (GeolocationService.GeocodingException e) {
     *     // handle failure
     * }
     * }
     * @since 0.2.0
     */
    global static Coordinates geocodeAddress(
        String street,
        String city,
        String state,
        String postalCode,
        String country
    ) {
        // Implement actual geocoding logic
        return new Coordinates(0, 0);
    }

    /**
     * Exception thrown when a deprecated method is called.
     * This indicates that the caller should migrate to the recommended alternative.
     */
    global class DeprecatedMethodCalledException extends Exception { }

    /**
     * Exception thrown when a geocoding operation fails.
     * This provides specific context for issues during address-to-coordinate conversion.
     */
    global class GeocodingException extends Exception { }
}
Even Apex PMD includes a rule recommending that code be documented using ApexDoc. Before adopting it blindly, it’s worth understanding why such documentation can be valuable and what purpose it serves.
ApexDoc can be useful because:
- It makes code easier to understand for developers, documentation tools, and AI-assisted systems
- It supports collaboration and improves long-term maintainability
- It provides a structured documentation approach, similar to JavaDoc in Java
Salesforce recommends using ApexDoc, and Apex PMD supports this practice – so there are clearly good reasons to adopt it. However, there are also valid reasons not to rely on it. Let’s explore those.
Comments Lie
You’ve probably heard the saying, “Comments lie.” It describes situations where the code has been updated, but the accompanying comment still refers to outdated logic. This can be confusing and misleading. Of course, it’s the developer’s responsibility to keep comments in sync with the code – but in practice, this doesn’t always happen. I’m a strong advocate of simplicity: no comments means nothing to forget to update – and no risk of them becoming misleading.
You don’t need them
Another example from Salesforce documentation:
public class OpportunityService {
    /**
     * Retrieves a list of open opportunities for a given account,
     * accessible from Lightning Web Components. If the set of open opportunities
     * can change during interaction with the component, the author will
     * need to use {@code refreshApex()}.
     * @param accountId The ID of the Account to retrieve opportunities for.
     * @return A List of open Opportunity records. Returns an empty list if no
     * open opportunities are found or if accountId is invalid.
     * @see OpportunitySelector
     */
    @AuraEnabled(cacheable=true)
    public static List<Opportunity> getOpenOpportunities(Id accountId) {
        List<Opportunity> result = new List<Opportunity>();
        result = new OpportunitySelector().byAccountId(accountId)...toList();
        return result;
    }
}
Oh, Isn’t It Obvious?
Let’s break it down:
- "Retrieves a list of open opportunities for a given account" – isn’t that obvious?
- "Accessible from Lightning Web Components" – we already see @AuraEnabled
- @param accountId The ID of the Account to retrieve opportunities for – also obvious
- @return A List of open Opportunity records… – again, visible from the method signature
Does this comment tell us anything we couldn’t learn directly from the code?
In practice, you still need to read the implementation – because, as mentioned earlier, comments can become outdated.
So now, instead of just reading the code, you:
- Read the comment
- Read the code
- Compare the two
That’s extra cognitive effort – for no real gain. Why make it more complicated than it needs to be?
Make the Code Better
If you feel the need to add comments, it often means the code itself doesn’t clearly express its purpose – and you’re trying to compensate with explanations outside the logic.
Instead, focus on improving the code:
- Use descriptive names for variables and methods
- Keep methods small and focused on a single responsibility
- Minimize the number of parameters
- Apply appropriate design patterns
In other words, follow clean code principles that make the implementation easy to understand on its own.
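For example (a hypothetical refactor, with invented names and thresholds), a comment that explains a condition can usually be replaced by a well-named method – and a method name, unlike a comment, is checked by the compiler every time the code is called:

```apex
// Before: the comment carries the meaning.
if (c.Priority == 'High' && c.CreatedDate.addHours(2) < System.now()) {
    // high-priority cases older than 2 hours must be escalated
    escalate(c);
}

// After: the method name carries the meaning.
if (requiresEscalation(c)) {
    escalate(c);
}

private static Boolean requiresEscalation(Case c) {
    return c.Priority == 'High'
        && c.CreatedDate.addHours(2) < System.now();
}
```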
You Know Git, Right?
/**
 * Provides services for geolocation and address conversion.
 * @author Dennis Smith
 * @version 0.3.0
 * @since 0.1.0
 */
global with sharing class GeolocationService {
    //..
}
Have you heard of version control systems? We’ve had them for years – and the most popular one is Git. Git provides powerful features such as version history and change tracking. It also records who made each change. That’s why information like @author, @since, or @version is already captured in Git history. There’s no need to manually maintain version details in code comments when you can simply review the commit history – and even revert changes if needed.
If you’re not using Git, you absolutely should. It’s one of the most fundamental tools in modern software engineering.
JavaDoc
Based on the JavaDoc standard, ApexDoc provides specifications, such as specialized tags and guidelines, that are tailored to Apex and the Salesforce ecosystem.
~ Salesforce
Salesforce highlights that ApexDoc is inspired by JavaDoc, a widely adopted documentation standard in the Java ecosystem. However, there are two important differences.
JavaDoc was created to support public APIs and open-source projects. Based on code comments, developers can automatically generate an HTML documentation site out of the box.
Apex code, on the other hand, is rarely used to build public APIs or open-source libraries. Most of the time, we write Apex to meet specific business requirements within a project.
Additionally, unlike JavaDoc, Salesforce does not provide a native solution for generating documentation from ApexDoc comments. In other words, Salesforce encourages developers to write structured documentation comments – but does not supply a built-in tool to transform them into actual documentation.
But What About AI Agents?
There is a common belief that AI agents understand code better when comments are present. The logic seems sound: comments provide more context, and more context should lead to better results.
This assumption is partially true – but only under certain conditions.
Not all comments are beneficial for AI assistants. Comments that simply restate the obvious, explain unclear code that should be refactored, or mark positions in source files create noise that can potentially confuse AI systems.
— The Importance of Code Comments for Modern AI Coding Assistants
Research consistently shows that AI benefits far more from comments explaining why something exists than from comments describing what the code does.
What Comments Should Be Avoided?
Perhaps unsurprisingly, the same rules that apply to humans apply to AI.
Avoid:
- Redundant “what” comments – AI can already infer this from the syntax. These comments add noise rather than clarity.
- Duplicated documentation – If the same “truth” exists in multiple places (e.g., README + code + comments), it will eventually drift. Maintain a single source of truth and reference it where needed.
- Out-of-sync comments – These can mislead both humans and AI. Research shows that inconsistencies between code and comments are a common maintenance challenge.
What Comments To Use?
“Why” Comments
Good comments should explain why, not what or how. As developers, we are expected to understand how code works – so repeating the implementation details adds little value. However, unexpected or non-obvious behavior should be documented, along with the reasoning behind it.
exportCsvData() {
    setTimeout(() => { // timeout to not block UI during export
        export(params)
            .catch(error => {})
            .finally(() => {});
    }, 0);
}
Documentation Links
Links to external documentation – especially for APIs – can be extremely helpful. They make it easier to understand request structures and how integrations should be implemented.
// SharePoint Endpoint:
// https://learn.microsoft.com/en-us/sharepoint/dev/sp-add-ins/make-batch-requests-with-the-rest-apis
public with sharing class SharePointFoldersCallout {
    // ...
}
“Eureka” Comments
Comments should highlight non-obvious constraints or implementation details – essentially another form of “why” comments.
public void count() {
    // COUNT() must be the only element in the SELECT list.
    this.clearAllFields();
    this.aggregateFields.add('COUNT()', '');
}
Summary
This post was intended to show that software development is about balancing rules. It’s rarely black and white. Even the most reasonable rules can be broken – when there is a clear justification. Many practices we follow blindly on a daily basis can sometimes do more harm than good.
Great software developers consistently ask “Why?” and remain curious. There is no place for blindly following rules, because rules only make sense within a specific context – not universally.
That’s why experienced developers often say, “It depends.” This reflects maturity and awareness, not uncertainty.
Everything described in this post was meant to highlight that even popular approaches should be thoughtfully justified. It doesn’t mean I avoid Custom Metadata, hardcode everything, never use Flows, or tolerate unnecessary duplication. I simply try to choose the best approach for the context I’m working in.
Also remember the Dreyfus Model of Skill Acquisition. If you follow rules because you don’t yet see how to break them, that’s completely fine – deeper understanding comes with experience. Just keep an open mind and focus on reasoning. The hardest people to work with are those who hold rigid views and never listen to others. Don’t be one of them – we can learn a great deal from different perspectives.
Note: This post was not written by AI. I only used AI to help refine my grammar because I am not a native speaker.
Resources
[1] When Code Duplication Is Acceptable
[2] Dreyfus model of skill acquisition – Wikipedia
[3] Dreyfus model of skill acquisition – Brainbok
[4] Replace Magic Number with Symbolic Constant
[5] Document Your Apex Code
[6] The Importance of Code Comments for Modern AI Coding Assistants
[7] Do Comments and Expertise Still Matter? An Experiment on Programmers’ Adoption of AI-Generated JavaScript Code
[8] Do not use Javadoc
[9] Go with the Flow
[10] Best practice for commenting code