Triggers – workflow – recursion control – callouts – allOrNone

An insidious set of circumstances:

Starting condition:
  • You have an after update trigger that, when some field changes from oldValue to newValue, requests a callout
  • You also have a workflow that updates the same object, and that workflow’s rule criteria are satisfied by the same update event

Now, you might not know this, but Workflow field updates will cause the trigger to re-execute AND the value of Trigger.old will be as it was when the record was initially updated. Here’s what the well-thumbed Triggers and Order of Execution section of the Apex Developer Guide says:

Trigger.old contains a version of the objects before the specific update that fired the trigger. However, there is an exception. When a record is updated and subsequently triggers a workflow rule field update, Trigger.old in the last update trigger won’t contain the version of the object immediately prior to the workflow update, but the object before the initial update was made

Thus, your future method will be called TWICE and, if it is doing a callout, will call out twice. Here’s a simple proof:

Apex trigger
trigger LeadTrigger on Lead (before insert, before update, after insert, after update) {

    if (Trigger.isAfter && Trigger.isUpdate)
        new LeadTriggerHandler().onAfterUpdate(Trigger.new,Trigger.oldMap);
}
Apex Trigger handler
public class LeadTriggerHandler {

    public void onAfterUpdate(Lead[] leads, map<ID,Lead> oldLeads) {
        for (Lead l: leads) { 
            Lead oldLead = oldLeads.get(l.Id);
            if (l.Company != oldLead.Company) {
                System.debug(LoggingLevel.INFO,'company has changed from ' + oldLead.Company   + 
                             'to ' + l.Company + ' .. request an @future to dowork');
                doCallout(l.Company);
            }
        }
    }
      
    @future
    static void doCallout(String company) {
        System.debug(LoggingLevel.INFO,'future method to do callout for ' + company);
        // .. callout details not important
    }
}
Workflow
  • Evaluation Criteria: Evaluate the rule when a record is created, and any time it’s edited to subsequently meet criteria
  • Rule Criteria: If Lead.Company contains ‘Changed’
  • Action: Field update Lead.Mobile to ‘650-555-1212’
Anonymous apex to demonstrate
Lead[] leads = new list<Lead> {
    new Lead(Company = 'Foo00', LastName = 'LName00'),
    new Lead(Company = 'Foo01', LastName = 'LName01')
    };

insert leads;
leads[0].Company = 'Foo00Changed';
leads[1].Company = 'Foo01Changed';
update leads; // force future to execute in handler
Debug log(s)

Workflow - Recursion - Trigger - Callout Ex 1

Ack! The future fired four (4) times! We should have had only two (2), as we updated only two records.

|USER_INFO|[EXTERNAL]|00540000000wbFS|cropredy@gmail.com|Pacific Standard Time|GMT-08:00
|EXECUTION_STARTED
//  lead inserted -- details omitted ...
//	Lead update event Lead.Company value changes
DML_BEGIN|[9]|Op:Update|Type:Lead|Rows:2
CODE_UNIT_STARTED|[EXTERNAL]|01q1W000000Tdah|LeadTrigger on Lead trigger event BeforeUpdate for [00Q1W00001Jh9VR, 00Q1W00001Jh9VS]
CODE_UNIT_FINISHED|LeadTrigger on Lead trigger event BeforeUpdate for [00Q1W00001Jh9VR, 00Q1W00001Jh9VS]
CODE_UNIT_STARTED|[EXTERNAL]|01q1W000000Tdah|LeadTrigger on Lead trigger event AfterUpdate for [00Q1W00001Jh9VR, 00Q1W00001Jh9VS]
USER_DEBUG|[7]|INFO|company has changed from Foo00to Foo00Changed .. request an @future to dowork
USER_DEBUG|[7]|INFO|company has changed from Foo01to Foo01Changed .. request an @future to dowork
CODE_UNIT_FINISHED|LeadTrigger on Lead trigger event AfterUpdate for [00Q1W00001Jh9VR, 00Q1W00001Jh9VS]
CODE_UNIT_STARTED|[EXTERNAL]|Workflow:Lead
WF_RULE_EVAL_BEGIN|Assignment
WF_SPOOL_ACTION_BEGIN|Assignment
WF_ACTION|.
WF_RULE_EVAL_END
WF_RULE_EVAL_BEGIN|Workflow
WF_CRITERIA_BEGIN|[Lead: LName00 00Q1W00001Jh9VR]|onUpdate - Set Field|01Q1W000000RGk3|ON_CREATE_OR_TRIGGERING_UPDATE|0
WF_RULE_FILTER|[Lead : Company contains Changed]
WF_RULE_EVAL_VALUE|Foo00Changed
WF_CRITERIA_END|true
WF_CRITERIA_BEGIN|[Lead: LName01 00Q1W00001Jh9VS]|onUpdate - Set Field|01Q1W000000RGk3|ON_CREATE_OR_TRIGGERING_UPDATE|0
WF_RULE_FILTER|[Lead : Company contains Changed]
WF_RULE_EVAL_VALUE|Foo01Changed
WF_CRITERIA_END|true
WF_SPOOL_ACTION_BEGIN|Workflow
WF_FIELD_UPDATE|[Lead: LName00 00Q1W00001Jh9VR]|Field:Lead: Mobile|Value:650-555-1212|Id=04Y1W000000PfJV|CurrentRule:onUpdate - Set Field (Id=01Q1W000000RGk3)
WF_FIELD_UPDATE|[Lead: LName01 00Q1W00001Jh9VS]|Field:Lead: Mobile|Value:650-555-1212|Id=04Y1W000000PfJV|CurrentRule:onUpdate - Set Field (Id=01Q1W000000RGk3)

 // Workflow updates the Leads with Field Update
WF_ACTION| Field Update: 2; 
WF_RULE_EVAL_END

// before/after triggers on Lead re-fire (expected)
CODE_UNIT_STARTED|[EXTERNAL]|01q1W000000Tdah|LeadTrigger on Lead trigger event BeforeUpdate for [00Q1W00001Jh9VR, 00Q1W00001Jh9VS]
CODE_UNIT_FINISHED|LeadTrigger on Lead trigger event BeforeUpdate for [00Q1W00001Jh9VR, 00Q1W00001Jh9VS]
CODE_UNIT_STARTED|[EXTERNAL]|01q1W000000Tdah|LeadTrigger on Lead trigger event AfterUpdate for [00Q1W00001Jh9VR, 00Q1W00001Jh9VS]

// uh-oh, Trigger.old has values prior to the initial update DML, 
// not the values as of the after update conclusion 
USER_DEBUG|[7]|INFO|company has changed from Foo00to Foo00Changed .. request an @future to dowork
USER_DEBUG|[7]|INFO|company has changed from Foo01to Foo01Changed .. request an @future to dowork
CODE_UNIT_FINISHED|LeadTrigger on Lead trigger event AfterUpdate for [00Q1W00001Jh9VR, 00Q1W00001Jh9VS]
WF_ACTIONS_END| Field Update: 2;
CODE_UNIT_FINISHED|Workflow:Lead
DML_END|[9]
CODE_UNIT_FINISHED|execute_anonymous_apex
EXECUTION_FINISHED

Solution 1 (sounds good)

Just add a static recursion control variable to your handler

public class LeadTriggerHandler {
    static set<ID> leadIdsAlreadySentToFuture = new set<ID>(); // recursion control
    public void onAfterUpdate(Lead[] leads, map<ID,Lead> oldLeads) {
        for (Lead l: leads) { 
            Lead oldLead = oldLeads.get(l.Id);
            if (l.Company != oldLead.Company && 
                !leadIdsAlreadySentToFuture.contains(l.Id)) { // have we already done this?
                System.debug(LoggingLevel.INFO,'company has changed from ' + oldLead.Company   + 
                             'to ' + l.Company + ' .. request an @future to dowork');
                doCallout(l.Company);
                leadIdsAlreadySentToFuture.add(l.Id);    
            }
        }
    }
      
    @future
    static void doCallout(String company) {
        System.debug(LoggingLevel.INFO,'future method to do callout for ' + company);
        // .. callout details not important
    }
}

This works as the debug log shows the future being called twice, once per Lead updated:
Workflow - Recursion - Trigger - Callout Ex 2

So, can I now work on my next JIRA ticket? Sorry ….

What if your Trigger/Handler is also invoked in a use case where partial success is allowed and one or more of the records fails to validate? AllOrNone = false can happen in many common use cases:

  • Data Loader
  • Any use of Apex Database.update(records, false); true also for the other Database.xxx methods
  • Bulk, SOAP, or REST APIs that either default AllOrNone to false or set it explicitly where available

Here, we run into another little-known SFDC feature of trigger retries in the AllOrNone = false (i.e. partial successes allowed) use case. This is documented in the Apex guide as:

AllOrNone doc

Going back to the Triggers and Order of Execution, there’s one last tidbit as to why you can’t use static variables for recursion control in an AllOrNone = false use case:

When a DML call is made with partial success allowed, more than one attempt can be made to save the successful records if the initial attempt results in errors for some records. For example, an error can occur for a record when a user-validation rule fails. Triggers are fired during the first attempt and are fired again during subsequent attempts. Because these trigger invocations are part of the same transaction, static class variables that are accessed by the trigger aren’t reset. DML calls allow partial success when you set the allOrNone parameter of a Database DML method to false or when you call the SOAP API with default settings. For more details, see Bulk DML Exception Handling.

So, if you do a bulk update of two records and one fails the validation rule, the static recursion control variable is set on the first attempt and any @future calls are rolled back. When SFDC makes the second attempt on the non-failed record, the recursion control prevents the callout from even happening, so you end up with no callouts!

Let’s look at a proof:

Add a validation rule:

Website = 'www.failme.com'

Execute this code:
Lead[] leads = new list<Lead> {
    new Lead(Company = 'Foo00', LastName = 'LName00'),
    new Lead(Company = 'Foo01', LastName = 'LName01')
    };

insert leads;

leads[0].Company = 'Foo00Changed';
leads[1].Company = 'Foo01Changed';
leads[1].Website = 'www.failme.com';  // force partial success by failing this in VR
Database.SaveResult[] results = Database.update(leads,false); // allow partial success
Get this debug log

Workflow - Recursion - Trigger - Callout Ex 3

No future logs! Future never happened!

CODE_UNIT_STARTED|[EXTERNAL]|execute_anonymous_apex
// 1st time trigger is executed - both Leads passed:
DML_BEGIN|[11]|Op:Update|Type:Lead|Rows:2
CODE_UNIT_STARTED|[EXTERNAL]|01q1W000000Tdah|LeadTrigger on Lead trigger event BeforeUpdate for [00Q1W00001Jh9Vv, 00Q1W00001Jh9Vw]
CODE_UNIT_FINISHED|LeadTrigger on Lead trigger event BeforeUpdate for [00Q1W00001Jh9Vv, 00Q1W00001Jh9Vw]
CODE_UNIT_STARTED|[EXTERNAL]|Validation:Lead:00Q1W00001Jh9Vv

// Validation rules execute
VALIDATION_RULE|03d1W000000Tdvy|Coerce_failure
VALIDATION_FORMULA|Website = 'www.failme.com'|Website=null
VALIDATION_PASS
CODE_UNIT_FINISHED|Validation:Lead:00Q1W00001Jh9Vv
CODE_UNIT_STARTED|[EXTERNAL]|Validation:Lead:00Q1W00001Jh9Vw
VALIDATION_RULE|03d1W000000Tdvy|Coerce_failure
VALIDATION_FORMULA|Website = 'www.failme.com'|Website=www.failme.com
// Fail the second Lead
VALIDATION_FAIL
CODE_UNIT_FINISHED|Validation:Lead:00Q1W00001Jh9Vw

// After update sees only the first, successful, Lead; future requested, static vbl set
CODE_UNIT_STARTED|[EXTERNAL]|01q1W000000Tdah|LeadTrigger on Lead trigger event AfterUpdate for [00Q1W00001Jh9Vv]
USER_DEBUG|[8]|INFO|company has changed from Foo00to Foo00Changed .. request an @future to dowork
CODE_UNIT_FINISHED|LeadTrigger on Lead trigger event AfterUpdate for [00Q1W00001Jh9Vv]
CODE_UNIT_STARTED|[EXTERNAL]|Workflow:Lead

// Workflow executes, causes field update on first Lead
WF_RULE_EVAL_BEGIN|Workflow
WF_CRITERIA_BEGIN|[Lead: LName00 00Q1W00001Jh9Vv]|onUpdate - Set Field|01Q1W000000RGk3|ON_CREATE_OR_TRIGGERING_UPDATE|0
WF_RULE_FILTER|[Lead : Company contains Changed]
WF_RULE_EVAL_VALUE|Foo00Changed
WF_CRITERIA_END|true
WF_SPOOL_ACTION_BEGIN|Workflow
WF_FIELD_UPDATE|[Lead: LName00 00Q1W00001Jh9Vv]|Field:Lead: Mobile|Value:650-555-1212|Id=04Y1W000000PfJV|CurrentRule:onUpdate - Set Field (Id=01Q1W000000RGk3)
WF_ACTION| Field Update: 1;
WF_RULE_EVAL_END

// WF field update causes after trigger to re-execute (as expected)
CODE_UNIT_STARTED|[EXTERNAL]|01q1W000000Tdah|LeadTrigger on Lead trigger event BeforeUpdate for [00Q1W00001Jh9Vv]
CODE_UNIT_FINISHED|LeadTrigger on Lead trigger event BeforeUpdate for [00Q1W00001Jh9Vv]
// after trigger is NOP as recursion vbl says do nothing
CODE_UNIT_STARTED|[EXTERNAL]|01q1W000000Tdah|LeadTrigger on Lead trigger event AfterUpdate for [00Q1W00001Jh9Vv]
CODE_UNIT_FINISHED|LeadTrigger on Lead trigger event AfterUpdate for [00Q1W00001Jh9Vv]
WF_ACTIONS_END| Field Update: 1;
CODE_UNIT_FINISHED|Workflow:Lead


// SFDC retries the first record because AllOrNone=false; governor limits reset
//	But static variables are not reset
CODE_UNIT_STARTED|[EXTERNAL]|01q1W000000Tdah|LeadTrigger on Lead trigger event BeforeUpdate for [00Q1W00001Jh9Vv]
CODE_UNIT_FINISHED|LeadTrigger on Lead trigger event BeforeUpdate for [00Q1W00001Jh9Vv]
CODE_UNIT_STARTED|[EXTERNAL]|Validation:Lead:00Q1W00001Jh9Vv

// WF fires again, updates the first Lead, but no callout is made as the recursion vbl prevents it
WF_RULE_EVAL_BEGIN|Workflow
WF_CRITERIA_BEGIN|[Lead: LName00 00Q1W00001Jh9Vv]|onUpdate - Set Field|01Q1W000000RGk3|ON_CREATE_OR_TRIGGERING_UPDATE|0
WF_RULE_FILTER|[Lead : Company contains Changed]
WF_RULE_EVAL_VALUE|Foo00Changed
WF_CRITERIA_END|true
WF_SPOOL_ACTION_BEGIN|Workflow
WF_FIELD_UPDATE|[Lead: LName00 00Q1W00001Jh9Vv]|Field:Lead: Mobile|Value:650-555-1212|Id=04Y1W000000PfJV|CurrentRule:onUpdate - Set Field (Id=01Q1W000000RGk3)
WF_ACTION| Field Update: 1;
WF_RULE_EVAL_END
CODE_UNIT_STARTED|[EXTERNAL]|01q1W000000Tdah|LeadTrigger on Lead trigger event BeforeUpdate for [00Q1W00001Jh9Vv]
CODE_UNIT_FINISHED|LeadTrigger on Lead trigger event BeforeUpdate for [00Q1W00001Jh9Vv]

// no callout request made in retry of first record
CODE_UNIT_STARTED|[EXTERNAL]|01q1W000000Tdah|LeadTrigger on Lead trigger event AfterUpdate for [00Q1W00001Jh9Vv]
CODE_UNIT_FINISHED|LeadTrigger on Lead trigger event AfterUpdate for [00Q1W00001Jh9Vv]
WF_ACTIONS_END| Field Update: 1;
CODE_UNIT_FINISHED|Workflow:Lead
DML_END|[11]
CODE_UNIT_FINISHED|execute_anonymous_apex
EXECUTION_FINISHED

So now what?

If we take the static recursion control variable away, the AllOrNone = false use case will still not pass – the future will get called twice on the successful record and never on the failed record.

... after the VR fails record[1] and before the WF executes 
USER_DEBUG|[8]|INFO|company has changed from Foo00 to Foo00Changed .. request an @future to dowork
... after the WF updates record[0] .. our original issue
USER_DEBUG|[8]|INFO|company has changed from Foo00to Foo00Changed .. request an @future to dowork
.. SFDC retries the successful records in trigger.new; skips the failed ones
.. trigger re-executes as if none of the above ever happened
USER_DEBUG|[8]|INFO|company has changed from Foo00 to Foo00Changed .. request an @future to dowork
... after the WF updates record[0] .. our original issue
USER_DEBUG|[8]|INFO|company has changed from Foo00to Foo00Changed .. request an @future to dowork

Workflow - Recursion - Trigger - Callout Ex 4

Clearly, static variables can’t be used to control redundant callouts when workflows and AllOrNone = false are combined in the same use case.

Solution 2 (better, but fragile)

Go through your workflows that could update the object whose trigger tests for make-a-callout conditions:

  • Move the field updates out and put them in the before insert/update triggers.
  • This way, the workflow will never force the trigger to re-execute with the original, start-of-transaction state of Trigger.old
  • Hence, your doCallout logic will only execute once in the transaction

This is fragile because you or a colleague could later add a new workflow + field update that causes the trigger’s callout-evaluating condition to be re-assessed, and you’ll be making duplicate callouts again.
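For the running example, this means replacing the workflow’s field update with equivalent logic in a before update handler. A minimal sketch (the contains('Changed') test stands in for the workflow’s rule criteria):

public void onBeforeUpdate(Lead[] leads, map<ID,Lead> oldLeads) {
    for (Lead l : leads) {
        // formerly the workflow field update 'onUpdate - Set Field';
        // before-trigger field assignments are saved with the record,
        // so no extra DML and no workflow-induced trigger re-execution
        if (l.Company != null && l.Company.contains('Changed')) {
            l.Mobile = '650-555-1212';
        }
    }
}

Because the assignment happens before the save, no workflow field update fires afterward, and the after update handler sees Trigger.old only once.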

Solution 3 – what could it be?

Clearly, you need to have state that:

  1. Persists across the trigger – workflow field update – trigger sequence
  2. Is rolled back when SFDC retries in the AllOrNone = false (partial success) use case

We’ve seen that static variables won’t work. Platform Cache won’t work either, as it isn’t rolled back in the AllOrNone = false scenario.

The only thing that meets both criteria is SObject data. A general approach:

  1. The trigger sets a field in Trigger.new (or updates some other SObject with a known key)
  2. The workflow field update is made and the trigger re-executes. Using the values in Trigger.new, it looks to see whether the work was already done and, if so, avoids asking for it again
  3. If the trigger is running in an AllOrNone = false use case and a record in the batch fails, the update made in step 1 is rolled back by SFDC. The trigger then re-requests the work and persists the request in an SObject; even though the workflow re-fires, the persisted SObject can be inspected on the second trigger re-fire and the dowork request skipped

Now, what would you save in that sobject field?

  1. Initially, I was tempted to save the sessionId (encrypted as a MAC digest) as a pseudo signal that the callout request was made. Being SObject data, it would be rolled back in the AllOrNone = false with-error use case. But when the Bulk API is used there is no sessionId; it is null.
  2. The next idea was to exploit LastModifiedDate and LastModifiedById, comparing the running user and current time to detect the trigger – workflow field update – trigger sequence as a sort of pseudo state variable. This is problematic for several reasons, such as time skew and concurrent transactions from the same user.
  3. Another idea was an unconditional workflow field update that sets a field called Is_WF_Update_in_Progress__c. The second time through the trigger, the code would inspect Is_WF_Update_in_Progress__c in Trigger.new, say “aha, I’m being called in a workflow field update-induced trigger invocation”, and bypass the callout request. But then the new field would have to be cleared (another DML), and we’d be unnecessarily forcing the trigger to re-execute even when no other workflow’s rule criteria were satisfied, which hurts performance. It is complicated and doesn’t scale well: every SObjectType involved in workflows plus triggers that compare Trigger.old to Trigger.new needs its own clone of this.

A workable, albeit somewhat heavyweight, solution

Create a Custom Object Transaction_State__c

Three relevant fields:

  • Name – This will be the transactionId
  • SObjectId__c (String) – Some Id visited in the transaction
  • Context__c (String) – Some text indicating the context for deciding whether the SObjectId has already been visited this transaction
1. Create a TransactionService class:
public class TransactionService {
    // Static, but we don't care that it isn't rolled back on an AllOrNone
    // retry, as it isn't SObjectId-based
    static String txnId {
        get {
            if (txnId == null) {
                txnId = String.valueOf(System.currentTimeMillis()) + '_' + UserInfo.getName();
            }
            return txnId;
        }
        private set;
    }

    // Returns the subset of proposedIds not yet visited in this context
    // during this transaction, and records them as now-visited.
    // Example: previous calls visited IDs 1, 3, 8; proposed IDs are 0, 3, 5.
    // Returns 0, 5 and also records 0, 5 in the database in case the
    // trigger is called a third time.
    public static set<ID> getVisitedIdsThisContext(String context, set<ID> proposedIds) {
        set<ID> unvisitedIds = new set<ID>(proposedIds);
        set<ID> visitedIds = new set<ID>();
        for (Transaction_State__c txnState : [select Id, Name, SObjectId__c, Context__c
                                              from Transaction_State__c
                                              where Name = :txnId and
                                                    Context__c = :context and
                                                    SObjectId__c IN :proposedIds]) {
            visitedIds.add((ID) txnState.SObjectId__c);
        }
        unvisitedIds.removeAll(visitedIds);
        Transaction_State__c[] newlyVisitedTxnStates = new list<Transaction_State__c>();
        for (ID unvisitedId : unvisitedIds) {
            newlyVisitedTxnStates.add(new Transaction_State__c(
                Name = txnId, Context__c = context, SObjectId__c = unvisitedId));
        }
        insert newlyVisitedTxnStates; // save that we've visited these ids in this context
        return unvisitedIds;
    }
}
      
2. Modify the trigger handler as follows:
public class LeadTriggerHandler {
    public void onAfterUpdate(Lead[] leads, map<ID,Lead> oldLeads) {

        // ids not yet processed in this context during this transaction
        set<ID> unvisitedIds = TransactionService.getVisitedIdsThisContext('LeadDoFuture', oldLeads.keySet());
        for (Lead l : leads) {
            Lead oldLead = oldLeads.get(l.Id);
            if (unvisitedIds.contains(l.Id) && l.Company != oldLead.Company) {
                System.debug(LoggingLevel.INFO,'company has changed from ' + oldLead.Company +
                             'to ' + l.Company + ' .. request an @future to dowork');
                doCallout(l.Company);
            }
        }
    }

    @future
    static void doCallout(String company) {
        System.debug(LoggingLevel.INFO,'future method to do callout for ' + company);
        // .. callout details not important
    }
}
      

The trigger handler asks the TransactionService for all unvisited Ids in some context scope. Behind the scenes, the TransactionService saves the Ids + context scope + transactionId in the database, thus creating a persistent store for the AllOrNone = true use case and a rollback-able store for the AllOrNone = false use case.

Now, if you run an AllOrNone = true use case:
      Lead[] leads = new list<Lead> {
          new Lead(Company = 'Foo00', LastName = 'LName00'),
          new Lead(Company = 'Foo01', LastName = 'LName01')
          };
      
      insert leads;
      leads[0].Company = 'Foo00Changed';
      leads[1].Company = 'Foo01Changed';
      update leads; // force future to execute in handler
      

You see the future is called twice, once per record:

      Workflow - Recursion - Trigger - Callout Ex 6 - allorNone true success

If you run an AllOrNone = false use case:
      Lead[] leads = new list<Lead> {
          new Lead(Company = 'Foo00', LastName = 'LName00'),
          new Lead(Company = 'Foo01', LastName = 'LName01')
          };
      
      insert leads;
      
      leads[0].Company = 'Foo00Changed';
      leads[1].Company = 'Foo01Changed';
      leads[1].Website = 'www.failme.com';  // force partial success by failing this in VR
      Database.SaveResult[] results = Database.update(leads,false); // allow partial success
      

You see the future is called only once, for the record that does not fail validation rules:
      Workflow - Recursion - Trigger - Callout Ex 5 - allorNone false success

ApexMocks, Selectors, and Formula Fields

This is part 3 of a series, see Part 1 and Part 2

If you’ve bought into mocking SObjects for use in unit tests, you’ve probably run into the roadblock that formula fields and system audit fields can’t be set when constructing SObjects in Apex. For example, these won’t compile:

 Account acct = new Account(LastModifiedDate = System.now(), Name = 'Foo');

 Opportunity oppo = new Opportunity(HasLineItems = true, ...);

An example
So, suppose you have a class method-under-test that wants to do work on Account Opportunities, where the work varies based on the value of Opportunity.FormulaField__c:

public void doSomeWork(set<ID> acctIds) {
   for (Account acct : [select Id,
                            (select Id, FormulaField__c from Opportunities)
                            from Account where Id IN: acctIds]) {
     for (Opportunity oppo : acct.Opportunities) {
        if (oppo.FormulaField__c == 'foo') doSomeFooWork();
        else doSomeBarWork();
     }
   }
}

As a good practitioner of Enterprise Patterns, you convert the code to use a Selector:

public void doSomeWork(set<ID> acctIds) {
   for (Account acct : AccountsSelector.newInstance()
                            .selectWithOpposById(acctIds)) {
     for (Opportunity oppo : acct.Opportunities) {
       if (oppo.FormulaField__c == 'foo') doSomeFooWork();
       else doSomeBarWork();
     }
   }
}

I’m assuming you know how to create selectors and the corresponding entry in Application.cls.
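For reference, a minimal Application.cls entry might look like the following. This is a sketch assuming the standard fflib_Application factory classes; your Application.cls likely has more entries:

public class Application {
    // Selector factory: maps each SObjectType to its selector class so that
    // Application.Selector.setMock(...) can inject a mock selector in tests
    public static final fflib_Application.SelectorFactory Selector =
        new fflib_Application.SelectorFactory(
            new Map<SObjectType, Type> {
                Account.SObjectType => AccountsSelector.class
            });
}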

The testmethod using ApexMocks

So, let’s set up a typical ApexMocks testmethod where we mock the AccountsSelector.

@isTest private static void testDoSomeWork() {
  fflib_ApexMocks mocks = new fflib_ApexMocks();

  // Given a mock selector
  AccountsSelector mockAcctsSelector = (AccountsSelector)
      mocks.mock(AccountsSelector.class);

  // Given a mock Selector with stubbed results for FormulaField__c
  mocks.startStubbing();
  mocks.when(mockAcctsSelector.sObjectType()).thenReturn(Account.SObjectType);
  mocks.when(mockAcctsSelector
      .selectWithOpposById((set<ID>) fflib_Match.anyObject())
      )
      .thenReturn(mockAcctsWithOppos);
  mocks.stopStubbing();

  // Given injected mocks
  Application.Selector.setMock(mockAcctsSelector);

  // When doSomeWork is invoked
  new MyClass().doSomeWork(new set<ID> {});  // don't care about real AccountIds

  // Then verify (not shown here;
  // perhaps verify uow.registerNew or uow.registerDirty)
}

So, the question is: how do we create mockAcctsWithOppos, since we need values for Opportunity.FormulaField__c?

There is only one way: create JSON and deserialize it into the Account SObject. I’ve used three ways to do this:

Mocking SObjects with Json – method 1 – hard-coded strings

Account[] mockAcctsWithOppos = 
   (Account[]) Json.deserialize(someJsonString,list<Account>.class);

where someJsonString looks like this (example is a single Account with two Opportunities):

[ {
  "attributes" : {
    "type" : "Account",
    "url" : "/services/data/v41.0/sobjects/Account/0013600001FGf1HAAT"
  },
  "Id" : "0013600001FGf1HAAT",
  "Opportunities" : {
    "totalSize" : 2,
    "done" : true,
    "records" : [ {
      "attributes" : {
        "type" : "Opportunity",
        "url" : "/services/data/v41.0/sobjects/Opportunity/0063600000PPLTPAA5"
      },
      "Id" : "0063600000PPLTPAA5",
      "FormulaField__c" : "foo"
    }, {
      "attributes" : {
        "type" : "Opportunity",
        "url" : "/services/data/v41.0/sobjects/Opportunity/0063600000PPLTPAA6"
      },
      "Id" : "0063600000PPLTPAA6",
      "FormulaField__c" : "bar"
    } ]
  }
} ]

You create this JSON by running the query in Workbench and pasting the results into either an Apex string constant or a Static Resource (trimming the outer QueryResult wrapper so the top level is a bare array, to match list<Account>.class).
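If you go the Static Resource route, loading and deserializing it is a couple of lines. A sketch, assuming a resource named MockAcctsWithOppos (a hypothetical name) containing the JSON above:

// fetch the JSON body stored in the Static Resource
StaticResource sr = [select Body from StaticResource
                     where Name = 'MockAcctsWithOppos' limit 1];
Account[] mockAcctsWithOppos =
    (Account[]) Json.deserialize(sr.Body.toString(), list<Account>.class);

Keeping the JSON in a Static Resource avoids escaping quotes inside an Apex string and lets several testmethods share one fixture.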

Mocking SObjects with Json – method 2 – fflib_ApexMocksUtils.makeRelationship

The ApexMocks package includes a utility method that can construct the deserialized Json without you having to create the actual string

// Let's mock two Accounts, one with two Oppos, the other with none
ID[] mockAcctIds = new list<ID>();
ID[] mockOppoIds = new list<ID>();
for (Integer i = 0; i < 2; i++) {
  mockAcctIds.add(fflib_IdGenerator.generate(Account.SObjectType));
  mockOppoIds.add(fflib_IdGenerator.generate(Opportunity.SObjectType));
}

Account[] mockAcctsWithOppos = (Account[]) fflib_ApexMocksUtils.makeRelationship(
   list<Account>.class,
   new list<Account> {
     new Account(Id = mockAcctIds[0], Name = '00Account'),
     new Account(Id = mockAcctIds[1], Name = '01Account')
   },
   Opportunity.AccountId,  // the relationship field
   new list<list<Opportunity>> {
     new list<Opportunity> { // Two Oppos for Account[0]
         new Opportunity(Id = mockOppoIds[0], AccountId = mockAcctIds[0],
                         FormulaField__c = 'foo'),
         new Opportunity(Id = mockOppoIds[1], AccountId = mockAcctIds[0],
                         FormulaField__c = 'bar')
     },
     new list<Opportunity>()  // no Oppos for Account[1]
   });

This is nice as it lets you do everything without messy string constants. But the utility is limited to a single child relationship, so you can’t use it to mock Accounts with both Cases and Opportunities. I also find the list<list<Opportunity>> fourth argument initially confusing to construct and get right.

Mocking SObjects with Json – method 3 – sfab_SObjectFabricator

Matt Addy has a nice GitHub package to construct Sobjects that is more descriptive and isn’t limited by the number of children. Here is how to use it:

// Let's mock two Accounts, one with two Oppos, the other with none
ID[] mockAcctIds = new list<ID>();
ID[] mockOppoIds = new list<ID>();
for (Integer i = 0; i < 2; i++) {
  mockAcctIds.add(fflib_IdGenerator.generate(Account.SObjectType));
  mockOppoIds.add(fflib_IdGenerator.generate(Opportunity.SObjectType));
}
Account[] mockAcctsWithOppos = new list<Account> {
  (Account) new sfab_FabricatedSObject(Account.class) // [0] has 2 Oppos
      .setField(Account.Id, mockAcctIds[0])
      .setField(Account.Name, '00Account')
      .setChildren('Opportunities', new List<sfab_FabricatedSObject> {
        new sfab_FabricatedSObject(Opportunity.class)
          .setField(Opportunity.Id, mockOppoIds[0])
          .setField(Opportunity.AccountId, mockAcctIds[0])
          .setField(Opportunity.FormulaField__c, 'foo'),
        new sfab_FabricatedSObject(Opportunity.class)
          .setField(Opportunity.Id, mockOppoIds[1])
          .setField(Opportunity.AccountId, mockAcctIds[0])
          .setField(Opportunity.FormulaField__c, 'bar')
      }).toSObject(),
  (Account) new sfab_FabricatedSObject(Account.class) // [1] has no Oppos
      .setField(Account.Id, mockAcctIds[1])
      .setField(Account.Name, '01Account')
      .toSObject()
};

I like the sfab_SObjectFabricator approach as it is clear where you are defining children (and you can also do parents with setParent()).

Some final remarks

  1. Don’t let the inability to construct SObjects with formula or audit fields get in the way of using ApexMocks to mock inputs to, or results from, your service and domain layers. Choose one of the approaches above, or roll your own to exploit Salesforce’s ability to populate any SObject’s fields via JSON deserialization.
  2. The examples above are probably more verbose than the code-under-test needs. Opportunity.AccountId, for example, need not be mocked if it is never referenced.
  3. Exploit the Unit of Work layer so you can use ApexMocks to verify that your DML (via registerXXX methods) is done as expected – without having to pay for the cost of real DML.
  4. You will still need what I call ‘end-to-end’ testing that doesn’t use mocks to verify that your selectors work against real data and return all the columns the code expects. You also need to verify that actual DML doesn’t run afoul of Validation Rules that otherwise aren’t executed when you mock the Unit Of Work.
  5. ApexMocks are a great way to explore in detail the unit test use cases by focusing the testing problem on the inputs and outputs of a given class/method.

ApexMocks and Email

Part two of a series. See Part 1 and Part 3

One of the problems in unit testing outbound email is that it is hard to verify that you constructed all the properties of the outbound email as expected. This is compounded by the fact that sandbox orgs default to email deliverability ‘off’, so any attempt to use Messaging.sendEmail(emails) will throw an exception and your test breaks.

The example’s premise

Suppose we have a simple class method that constructs and sends an email:

public void sendEmail() {
   Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
   mail.setToAddresses(new list<String> {'foo@bar.com'});
   mail.setSubject('Greetings, earthlings!');
   ...
   Messaging.sendEmail(new Messaging.SingleEmailMessage[] { mail });
}

How would you unit test that the outbound email was sent to ‘foo@bar.com’ ? Not so simple. Same for the other properties of the outbound email.

Enter Enterprise Patterns and ApexMocks

Rework the Apex class to use the fflib UnitOfWork:

public void sendEmail() {
   fflib_ISobjectOfWork uow = Application.UnitOfWork.newInstance();
   Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
   mail.setToAddresses(new List<String>{'foo@bar.com'});
   mail.setSubject('Greetings, earthlings!');
   ...
   uow.registerEmail(mail); // let UnitOfWork know mail is part of Txn
   uow.commitWork(); // send the mail
}

The fflib_SObjectUnitOfWork class considers outbound emails part of the transaction, hence the registerEmail(mail) method call.

Now, your testmethod looks like this:

@isTest private static void testSuccessPath() {
  fflib_ApexMocks mocks = new fflib_ApexMocks();
  // Given a mock UoW (injected)
  fflib_SobjectUnitOfWork mockUow = 
     (fflib_SobjectUnitOfWork) mocks.mock(fflib_SObjectUnitOfWork.class);
  Application.UnitOfWork.setMock(mockUow);

  // When the email method is invoked
  new MyClass().sendEmail();

  // Then verify that an email was constructed and sent
  ((fflib_SobjectUnitOfWork) mocks.verify(mockUow,
                                          mocks
                                           .times(1)
                                           .description('email sb constructed')))
    .registerEmail((Messaging.SingleEmailMessage) fflib_Match.anyObject());
  ((fflib_SobjectUnitOfWork) mocks.verify(mockUow,
                                          mocks
                                           .times(1)
                                           .description('email sb sent')))
    .commitWork();

  // Then verify that the email was constructed as expected
  // We use ArgumentCaptors for this. There are four (4) steps:

  fflib_ArgumentCaptor capturedEmailArg = 
        fflib_ArgumentCaptor.forClass(Messaging.SingleEmailMessage.class);
  ((fflib_SobjectUnitOfWork) mocks.verify(mockUow,1))
       .registerEmail((Messaging.SingleEmailMessage)capturedEmailArg.capture());

  Object actualEmailAsObject = capturedEmailArg.getValue();
  Messaging.SingleEmailMessage actualEmail = 
        (Messaging.SingleEmailMessage) actualEmailAsObject;

  System.assertEquals('Greetings, earthlings!', 
                       actualEmail.getSubject(),
                       'subject is from friendly aliens');
  System.assertEquals(new List<String>{'foo@bar.com'},
                       actualEmail.getToAddresses(),
                       'only @bar.com domains expected');
  ... other properties.
}

Let’s look at the ArgumentCaptor, one line at a time.

fflib_ArgumentCaptor capturedEmailArg = 
        fflib_ArgumentCaptor.forClass(Messaging.SingleEmailMessage.class);
        fflib_ArgumentCaptor.forClass(Messaging.SingleEmailMessage.class);

We declare a variable of type fflib_ArgumentCaptor and set it to be of the type we want to inspect.

((fflib_SobjectUnitOfWork) mocks.verify(mockUow,1))
       .registerEmail((Messaging.SingleEmailMessage)capturedEmailArg.capture());

We ask the mocking framework to capture the argument value whenever the mock UnitOfWork’s registerEmail method is called with a single argument of type Messaging.SingleEmailMessage. The capture() method of the ApexMocks library does this. Note that we use the mocks.verify(..) method here: instead of verifying the value passed to registerEmail, we are capturing that value for later inspection.

Object actualEmailAsObject = capturedEmailArg.getValue();
Messaging.SingleEmailMessage actualEmail = 
      (Messaging.SingleEmailMessage) actualEmailAsObject;

Our declared variable capturedEmailArg has a method getValue() provided by the fflib_ArgumentCaptor class. It returns an Object. There is also a getValues() method for collections. We cast this to the type we care about – Messaging.SingleEmailMessage.
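If the class-under-test registers more than one email, the same captor collects the argument from every invocation. A sketch, assuming the plural accessor is named getAllValues() as in current fflib-apex-mocks (the count of 2 is hypothetical):

// verify two registerEmail calls, capturing each argument in call order
((fflib_SobjectUnitOfWork) mocks.verify(mockUow, 2))
     .registerEmail((Messaging.SingleEmailMessage) capturedEmailArg.capture());

List<Object> capturedEmails = capturedEmailArg.getAllValues();
System.assertEquals(2, capturedEmails.size(), 'one captured value per call');

Each element is then cast to Messaging.SingleEmailMessage and asserted against, exactly as with the single-value getValue() case.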

Now, we can assert against the actual email that the class-under-test constructed and verify each property.

System.assertEquals('Greetings, earthlings!', 
                       actualEmail.getSubject(),
                       'subject is from friendly aliens');

So, why is this cool?

  1. We are immune from the sandbox having to be configured to send emails. Because fflib is already unit tested, calls to registerEmail(someEmail) followed by commitWork() will send emails in an org configured to send emails. We merely need to verify that registerEmail and commitWork got called. Since the UnitOfWork layer is mocked, we can use ApexMocks to verify that calls to a mockUow are as expected.
  2. The fflib_ArgumentCaptor feature of ApexMocks allows detailed inspection of arguments passed to any mockable class/method. In our example, it is a single argument of type Messaging.SingleEmailMessage, but it could be any arbitrary Apex type. Thus, your unit tests can be quite exhaustive about verifying that the contract between two objects is fulfilled without having to construct any real Sobjects, do any real DML, or query for the results.

Apex Mocks and Enterprise Patterns (First in a Series)

It took me a bit to get here but I can unequivocally state that Apex Mocks is well worth learning to improve your unit testing.

This blog post aims at the Apex developer who doesn’t have Java Mockito experience (which was me). Since most of the code at our org relies on the Apex Enterprise Patterns, the examples herein will exploit that. The most useful reference to me was chapter 11 – Unit Testing of Force.com Enterprise Architecture although there were some typos in the code examples in my printed copy.

See also Part 2 (Unit Testing email), and Part 3 (Selector mocking with formula fields)

The scenario
You have a CasesService with a method reopen(Set<Id> csIds).

Here’s the code: Note use of the Unit of Work Pattern

public virtual class CasesServiceImpl implements ICasesService {

    /**
    * reopen - reopen a set of Cases
    **/
    public virtual void reopen(Set<Id> csIds) {
        try {
            fflib_ISObjectUnitOfWork uow = Application.UnitOfWork.newInstance();
            reopen(uow, csIds);
            uow.commitWork();
        }
        catch (Exception e) {
            throw new CasesService.CaseReopenException(e.getTypeName() + ' ' +
                e.getMessage() + ' ' + e.getStackTraceString());
        }
    }

    public virtual void reopen(fflib_ISObjectUnitOfWork uow, Set<Id> csIds) {
        for (Id csId : csIds) {
            uow.registerDirty(new Case(Id = csId, Status = Cases.STATUS_NEW));
        }
    }
}

I’m presuming if you are reading this, that you already know the Apex Enterprise Pattern and have created the ICasesService.cls, CasesService.cls, and updated the Application.cls.
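For completeness, a minimal sketch of what those supporting classes might look like (names follow the Apex Enterprise Patterns conventions; the Application.cls factory wiring in your org may differ):

public interface ICasesService {
    void reopen(Set<Id> csIds);
}

public class CasesService {
    public class CaseReopenException extends Exception {}

    // Static facade delegating to the implementation bound in Application.cls
    public static void reopen(Set<Id> csIds) {
        service().reopen(csIds);
    }

    private static ICasesService service() {
        return (ICasesService) Application.Service.newInstance(ICasesService.class);
    }
}

The facade is what the test below invokes; the mock UnitOfWork is injected underneath it via Application.UnitOfWork.setMock().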

Unit testing the service
In “classic” Apex development, you would test the CasesServiceImpl.reopen(Set<Id> csIds) method by creating a Case record via DML (which might also require creating an Account and a Contact record). Sort of like this:

@isTest private static void testCaseReopen() {
   Account[] mockAccts = new list<Account> {
     new Account(...),
     new Account(...)
     ...
   };
   insert mockAccts;
   // and so on for the Contacts (mockContacts) and Cases (mockCases)

   Test.startTest(); 
   CasesService.reopen(new Set<Id>(new Map<Id,Case>(mockCases).keySet()));
   Test.stopTest();
   Case[] resultCases = [select Id, Status from Case];
   for (Integer i = 0; i < resultCases.size(); i++)
      System.assertEquals('New',resultCases[i].Status,
                          '['+i+'] Status should be New');
}

So boring. And so much DML that makes your tests run longer – especially if you have hundreds or thousands of tests. You have to query the database to test your asserts. Think about testing Opportunities – Accounts, Contacts, Opportunities, Pricebooks, Product2s, PricebookEntries, and OpportunityLineItems may all have to be inserted into the database.

And here’s the beauty
Because you are using Apex Enterprise Patterns, you recognize that the reopen method only needs to be passed in Case Ids and then verified that the Cases were registered as dirty and that commitWork was called.

  • You don’t need real Cases to generate valid Case Ids
  • You don’t need to query the updated records to see if the Status was changed. Instead, you just need to know that registerDirty() was called with the expected status of ‘New’. As long as commitWork() was called, you can rely on the updates being done (written to the database) because commitWork() is already unit tested by the fflib library.


So, what does the unit test look like?

I’ll start off by saying that the syntax takes a bit of time getting used to. I’ll try to comment each step

@isTest private static void testReopen() {
  /** variable mocks represents the mocking framework object. 
     You can call it anything **/
  
  fflib_ApexMocks mocks = new fflib_ApexMocks();  // required and first
 
  // Given Case Ids to reopen 
  /** We use fflib to generate valid-for-the-sobjectType IDs **/
  
  ID[] mockCaseIds = new list<ID>();
  for (Integer i = 0; i < 2; i++) // 2 is arbitrary but goal is bulk testing 	
    mockCaseIds.add(fflib_IdGenerator.generate(Case.SobjectType));

  // Given mock UnitOfWork
  fflib_SobjectUnitOfWork mockUow = (fflib_SobjectUnitOfWork) 
                                       mocks.mock(fflib_SObjectUnitOfWork.class);
  Application.UnitOfWork.setMock(mockUow); // inject the mock for fflib factories

  /** Wrap your service call in try catch so you can easily debug any errors **/
  // When service invoked
  try {
   CasesService.reopen(new set<ID>(mockCaseIds)); 
  }
  catch(Exception e) {
   System.assert(false,'snb, service method sb success; ' + showException(e));
  }

  // Then verify Case status is New and objects committed 
  /** mocks.verify takes two arguments:
       1 - the object being mocked - in this case, the UnitOfWork
       2 - an object of type fflib_VerificationMode. 
           This object supports a fluent pattern with methods:
            times(n) - how many times the mock object's method M is called
            atLeast(n) - method called at least n times
            atMost(n) - method called at most n times
            atLeastOnce() - method called at least one time
            between(m,n) - method called between m and n times
            never() - method never called
            calls(n) - method called n times w/ InOrder verifier (see doc)
            description(text) - equivalent to the third argument for System.assertEquals()
  
       Once cast to the type of the mock object, then verification is that the 
       registerDirty method was called (twice) with an Sobject of type Case
  **/         
  ((fflib_SobjectUnitOfWork) 
    mocks.verify(mockUow,mocks
                  .times(2)
                  .description('both Cases should be marked as New'))
  )
   .registerDirty(fflib_Match.sObjectOfType(Case.SObjectType));


  /** In this verify, assert registerDirty was called with an SObject matching 
      1 - Case.Id = the first mocked CaseId
      2 - Case.Status is 'New'

     This is done with a matcher (type = fflib_Match) argument. 
     The framework looks to see if the method (registerDirty) 
     was called with an argument that "matches" the matcher.

     There are a large number of matchers, see class
     fflib_Match (most of the Mockito matchers are mirrored). 
     One of the Apex-specific ones is sObjectWith that takes a map of
     SObjectField => Object
  **/

  ((fflib_SobjectUnitOfWork) 
    mocks.verify(mockUow,mocks
                  .times(1)
                  .description('Case[0] sb reopened'))
  )
   .registerDirty(fflib_Match.sObjectWith(new map<SObjectField,Object> {
        Case.Id => mockCaseIds[0],
        Case.Status => Cases.STATUS_NEW}));
		
  /** In this verify, assert the second Case ID also became status New **/
  ((fflib_SobjectUnitOfWork) 
    mocks.verify(mockUow,mocks
                  .times(1)
                  .description('Case[1] sb reopened'))
  )
   .registerDirty(fflib_Match.sObjectWith(new map<SObjectField,Object> {
        Case.Id => mockCaseIds[1],
        Case.Status => Cases.STATUS_NEW}));

  /** Finally, assert records were committed to Database. 
      As commitWork has no args there is no Matcher **/

  ((fflib_SobjectUnitOfWork) 
    mocks.verify(mockUow,mocks
                   .times(1)
                   .description('both Cases should be committed'))
  )
   .commitWork();

}

Some meta remarks

  1. Using ApexMocks requires some wordy syntax that is not immediately apparent when you first see it
  2. It is especially useful for unit testing smallish methods that have few dependencies (which is how they should be written in the first place). Doing unit testing improves your confidence that methods will do what they are supposed to do when orchestrated into larger transactions.
  3. You can mock only non-static methods of objects.
  4. You need to invest time in learning how Matchers work. The classes fflib_Match and fflib_ApexMocksTest are helpful, as are Mockito books and the Mockito web site.
  5. As we’ll see in subsequent posts, you can mock selectors including formula fields and system audit fields without having to do any DML or SOQL. You can also mock the domain layer and service layer, and for that matter any non-static object method (except async)
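As a teaser for point 5, stubbing a selector follows the standard ApexMocks startStubbing/stopStubbing pattern. A sketch, where CasesSelector and its selectById signature are hypothetical stand-ins for your org's selector layer:

fflib_ApexMocks mocks = new fflib_ApexMocks();
CasesSelector mockSelector = (CasesSelector) mocks.mock(CasesSelector.class);
Id mockCaseId = fflib_IdGenerator.generate(Case.SObjectType);

// Stubbing: tell the mock what to return, no SOQL and no real records needed
mocks.startStubbing();
mocks.when(mockSelector.sObjectType()).thenReturn(Case.SObjectType);
mocks.when(mockSelector.selectById(new Set<Id>{ mockCaseId }))
     .thenReturn(new List<Case>{ new Case(Id = mockCaseId, Status = 'New') });
mocks.stopStubbing();

Application.Selector.setMock(mockSelector);

Any code-under-test that resolves its selector through the Application factory now gets the canned Case list back.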

The goal of this post was to explain the syntax. Try it out on a small class and you’ll feel you are up’ing your Unit test game, hence leaving a stronger legacy of test code to yourself, your team members, and your successors.

Lead Conversion, Tasks, and Order of Execution

Starting condition

  • Suppose you have a Lead with one or more activities.
  • Suppose when you convert the Lead, on the OOTB Lead Convert page, you add a new activity.

What happens at Lead Conversion and in what order

  1. Before/after update trigger fires on the Tasks that existed on the Lead pre-conversion.
  2. Before/after insert trigger fires on the new Task specified in the Lead Convert UI

In both use cases, the Task is prepopulated with a WhatId of the parent Opportunity

And, for Task Validation Rules…

  • If the Task Validation Rule fails the Tasks existing pre-conversion, the Lead Conversion UI will surface the error
  • If the Task Validation Rule fails the Task specified on the conversion page, the Task is not inserted nor is an error surfaced

If you want your trigger to detect that a Task is participating in a Lead Conversion, there is no robust solution.

Hacky-solution:
If all Lead Converts go through the OOTB Lead Convert page, then System.URL.getCurrentRequestUrl().getPath().endsWith('leadconvert.jsp') will work
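A sketch of that hack inside a Task trigger (the handler logic is left as a comment; remember this only works when every conversion goes through the OOTB page):

trigger TaskTrigger on Task (before insert, before update) {
    Boolean isOotbLeadConvert = System.URL.getCurrentRequestUrl()
                                    .getPath()
                                    .endsWith('leadconvert.jsp');
    if (isOotbLeadConvert) {
        // e.g. relax validation, stamp a tracking field, skip other logic ...
    }
}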

Analytic Snapshot and Process Builder

Can the target object of an Analytic Snapshot be the target object of a Process Builder flow?

NO

The documentation states that workflows aren’t allowed but is silent on Process Builder.

Target object must not be included in a workflow.
The custom object in the Target Object field is included in a workflow. Choose a target object that is not included in a workflow.

I was trying to set a RecordTypeId after each snapshot record was created, using values in the SObject. The only way to do this (as triggers aren’t supported either) is via a scheduled Apex job.
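A sketch of that workaround: a Schedulable that stamps the record type shortly after the snapshot’s own run time. Snapshot__c, the ‘Monthly’ record type, and the cron time are all hypothetical names for illustration:

public class SnapshotRecordTypeFixer implements Schedulable {
    public void execute(SchedulableContext sc) {
        Id rtId = Schema.SObjectType.Snapshot__c
                     .getRecordTypeInfosByDeveloperName()
                     .get('Monthly').getRecordTypeId();
        // Real logic would derive the record type from field values on each row
        Snapshot__c[] snaps = [SELECT Id FROM Snapshot__c
                               WHERE RecordTypeId = null];
        for (Snapshot__c s : snaps) {
            s.RecordTypeId = rtId;
        }
        update snaps;
    }
}

// Schedule it to run just after the snapshot job, e.g. 02:30 daily:
// System.schedule('Fix snapshot RTs', '0 30 2 * * ?', new SnapshotRecordTypeFixer());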

HubSpot Synchronization and Deleted Leads

Don’t do this at home.

Our org had about 300,000 Salesforce Leads and around 200,000 HubSpot contacts. The Salesforce Leads had accumulated from 2006 and came from many sources, including previous Marketing Automation systems such as Marketo and Eloqua. For a variety of reasons, we decided to purge from Salesforce the obsolete Leads.

So, here’s what happened:

  • Run Data Loader mass delete job in Salesforce
  • Expect HubSpot to delete the corresponding contacts so the systems stay in sync
  • Discover over the course of a few months that HubSpot emails sent to reps on contact activity (like filling out a form) had broken links to the SFDC lead
  • Moderately annoying to the reps but at some point, sales insisted this be fixed

Analysis

The big reveal was that HubSpot doesn’t delete HubSpot contacts associated with SFDC leads if those contacts aren’t in the SFDC inclusion list!

Here’s an example:

  • Contact fred@foo.com comes into HubSpot from some form.
  • HubSpot adds contact to inclusion list (because form completed).
  • HubSpot contact syncs to Salesforce as a new Lead.
  • Sales marks lead with status Bogus Data.
  • HubSpot picks up change in status on next sync but ….
  • Because inclusion list rules say, exclude Status = Bogus Data, the contact is removed from the inclusion list. HubSpot maintains, as a property of the contact, the SFDC Lead Id in case the contact re-enters the inclusion list filter.
  • Salesforce mass delete removes the fred@foo.com Lead.
  • Because the SFDC Lead is no longer in the inclusion list, the delete event is not recognized by HubSpot and contact remains in HubSpot. The HubSpot contact is an orphan from the point of view of synchronization.

Remedies
We’re still exploring this but I believe the conceptual answer should be:

  1. If the HubSpot contact was ever sync’d to Salesforce, the contact should remain in the inclusion list. That way, deletes on the Salesforce side propagate to HubSpot.
  2. Use HubSpot smart lists to filter out contacts marked as disqualified or bogus data or otherwise not worth engaging in new campaigns.
  3. Run periodic (monthly) Salesforce batch jobs to delete Leads that are bogus/spam after x days of existence
  4. Don’t delete from HubSpot as a HubSpot delete won’t delete in Salesforce, leaving you with an unbalanced system
  5. Make your inclusion list rules succinct – they need to be readable on one page, without scrolling
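Remedy 3 can be sketched as a batch class that is also Schedulable, so one class covers both the monthly schedule and the deletion. The Status values and the 90-day window are assumptions to adjust for your org:

public class BogusLeadPurgeBatch implements Database.Batchable<SObject>, Schedulable {

    public Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator(
            [SELECT Id FROM Lead
             WHERE Status IN ('Bogus Data', 'Spam')
               AND IsConverted = false
               AND CreatedDate < :Date.today().addDays(-90)]);
    }

    public void execute(Database.BatchableContext bc, List<Lead> scope) {
        delete scope;  // deletes propagate to HubSpot only if remedy 1 is in place
    }

    public void finish(Database.BatchableContext bc) {}

    public void execute(SchedulableContext sc) {
        Database.executeBatch(new BogusLeadPurgeBatch());
    }
}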

Custom domain – Napili Community Template

This post written to address a confusing point in the documentation.

Let’s say you want to create a custom domain name for your Community based on the Napili template. You desire https://customers.foo.com.

Stumbling block number One

When enabling Communities in an org (and custom domains are only available in PROD), you get this screen:
communities-enablement

(screen shot is from Dev Edition so PROD will be slightly different in the default URL)

Your eyes feast on the “Important: The domain name will be used in all of your communities and can’t be changed after you save it”. But you want your community to be called customers.foo.com with no force.com in the domain at all. You get scared.

RELAX. Typically, you will use your company name, say foo, in the entry box. Think of this as the master URL for all your communities (up to 50) that Salesforce needs to host your communities. It isn’t until later that you bind your Community to its custom domain and then to the master URL.

Stumbling block number Two
When building a Community, say with the Napili template, where do you define the custom domain name?

ANSWER: You don’t. Just give your Community a good label to distinguish it from any other Communities you might create. The Community setup process prompts you for an optional suffix for your Community URL. And this URL uses a force.com domain as in foo.na2.force.com. Still not a custom domain. RELAX.

And here’s the secret sauce how it all comes together

Salesforce doesn’t make things easy with a wealth of terminology, some of which doesn’t seem to apply for the poor Napili template customizer. But the key thing to remember is that Napili-template Communities are Sites. That is, Sites with a capital S.

Steps 1-4 toward the custom domain are described in excellent detail in this Knowledge Article.

  1. Update your DNS Server With Your Custom Domain
  2. Create a Certificate Signing Request & Obtain an SSL Certificate for your domain
  3. Update your signed SSL certificate in Salesforce.
  4. Create a Custom Domain in Salesforce. Note that Communities has to be enabled in your PROD org to finish this step as you won’t be able to assign the certificate to the custom domain until Communities is enabled. Hence, previous comments above re: relaxing

At this point you are almost there. All that is left is binding your Community to the custom domain.

Go to Domain Management | Domains. The custom domain you created in step 4 above (customers.foo.com) will appear here with the attached certificate/key. Select the custom domain. Click New Custom URL.
custom-domain-custom-url

The domain field is prepopulated as you would expect, but what value goes into Site? Well, it is a lookup field, so click the spyglass and, YES, you will see your Community in the list of available Sites. Select it and save. Your Community is now bound to the custom domain, which in turn maps to the force.com domain where Salesforce hosts the Community.

So, what’s happening under the hood?

  1. Your Community is hosted at Salesforce hence it has a force.com domain
  2. You define an alias in DNS (the CNAME entry) between your custom domain name and a domain name that Salesforce works with that includes your orgId. In this example, customers.foo.com is aliased to customers.foo.com.yourorgId.live.siteforce.com. Full details on how CNAME works can be found in many places such as here.
  3. When your users visit customers.foo.com the actual request via DNS goes to customers.foo.com.yourorgId.live.siteforce.com. Salesforce uses the binding between your custom domain and the Site (i.e. your published/activated Community) to find and render the pages of your community – the one you maintain in Community Builder. But the URL shown on the browser is what you want.
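In DNS-zone-file terms, the alias from step 2 is a single CNAME record (yourorgid is a placeholder for your actual org Id):

customers.foo.com.    IN  CNAME  customers.foo.com.yourorgid.live.siteforce.com.

Everything after that record is Salesforce routing: siteforce.com terminates the request and uses the custom-domain-to-Site binding to pick your Community.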

Pro tip:

Turns out you can migrate Communities from sandboxes to PROD with Changesets. Not every setting is copied but your pages will be. In the Changeset list of components, select Sites.com. A list of Communities will appear to choose from. See, knowing that Communities are Sites comes in handy.

Schedulable Jobs – Constructors and execute()

This issue threw me for a loop for some time.

Suppose you have a Schedulable class and when it runs, the behavior of the execute() method doesn’t match what you expect. It looks like there’s memory of previous execute() calls.

Here is a specific example that distills the issue

public class MySchedulable implements Schedulable {

   Account[] accts = new List<Account>();

   public void execute(SchedulableContext sc) {
      accts.addAll([select id, name from Account where YTD_Total__c < 1000.0]);
      // do something interesting with this 
   }
}

You run the job weekly. On week n, where n > 1, you observe that Accounts are being processed that don’t currently have YTD_Total__c < 1000; Accounts with YTD_Total__c > 1000 are being processed (?!?)

Explanation

  • When the job is scheduled, the class constructor is called (here, the implicit constructor) and the object variable accts is initialized empty.
  • On week 1, when execute() is called, the accounts with YTD_Total__c < 1000 as of week 1 are added to list accts.
  • On week 2, when execute() is called, the accounts with YTD_Total__c < 1000 as of week 2 are added to list accts.

Note that the class constructor is not reinvoked for each execute(). Hence, the accts list grows and grows.

Solution
Reinitialize all object variables within the execute().

public class MySchedulable implements Schedulable {
 
   public void execute(SchedulableContext sc) {
      Account[] accts = new List<Account>();
      accts.addAll([select id, name from Account where YTD_Total__c < 1000.0]);
      // do something interesting with this 
   }
}
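For completeness, scheduling the corrected class weekly might look like this (the job name and run time are arbitrary; the cron expression uses Salesforce’s Seconds Minutes Hours Day_of_month Month Day_of_week format):

// Run every Monday at 02:00
System.schedule('Weekly low-YTD accounts', '0 0 2 ? * MON', new MySchedulable());

Because the constructor runs once at schedule time, any state you want fresh each week must be built inside execute(), as shown above.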

Email2Case with recordTypes

This was annoying and worth documenting

Starting conditions

  • Record types defined on Case.
  • Email2Case configured with email address support@foo.com using RecordType A and email address orders@foo.com using RecordType B.
  • RecordType A is the default record type for the automated case user (as configured in Case Settings).
  • Automated Case User’s profile has access to both RecordType A and RecordType B.

You start testing

Establishing the email addresses typically requires liaison with your mail server team: the addresses must be created, verified to Salesforce, and then auto-forwarded to the Salesforce email services address (e.g. verylongname@salesforce.com). So, during testing, you will be tempted to send emails directly to verylongname@salesforce.com, since this is what Salesforce actually receives.

You will be surprised.

If the email address you send to is configured for the non-default record type of the Automated Case User’s profile, then the record type assigned to the Case will be the default record type (!?!). This happens whether or not the Email-to-Case address is verified.

That is, in order to get the correct recordType assigned, you have to send the email to your company’s email domain and have that email forwarded to Salesforce.

Example:

  • RecordType A is default recordType for Automated Case User
  • Email-to-Case address orders@foo.com is configured to use RecordType B
  • Email-to-Case address orders@foo.com is associated to orders-verylongname@salesforce.com

Results:

  • Send email to orders-verylongname@salesforce.com. Result: Case created with recordType A ?!?
  • Verify orders@foo.com and send email to orders-verylongname@salesforce.com. Result: Case created with recordType A ?!?
  • Establish forwarding rules in your mail server that forward orders@foo.com to orders-verylongname@salesforce.com. Send email to orders@foo.com. Result: Case created with recordType B Hooray!