Migrating to Salesforce Integration User Licenses

If you are like me, you’ve been following best practices and assigning one user license to each of your third-party integrations – especially those that perform DML in your org. Having a separate user license per integration:

  • Allows for traceability in the Field History related list as well as the CreatedBy and LastModifiedBy fields on your records.
  • Facilitates assigning the integration user only the permissions it needs to do its job via a Permission Set.

Most AppExchange packages will include a Permission Set for what the package documentation typically calls a “service account user” dedicated to performing integration functions between the package and your org.

Of course, if you’ve done this, you are paying full license price per integration for a non-human user that probably never uses the UX. But after intense customer pressure (I was there at Dreamforce in 2022 when Bret Taylor, then co-CEO of Salesforce, agreed to provide lower-cost integration-only licenses, to jubilation from the crowd), Salesforce delivered five (5) free integration-only licenses to every Enterprise Edition org and offered additional integration-only licenses for the low, low cost of $10/month each (2024 pricing).

Since no one likes spending more Salesforce subscription money than they have to, I was tasked with converting thirteen (13) integration users to our five free + eight purchased (but cheap) Salesforce Integration licenses. I had to do the conversion before our annual subscription renewal so we could avoid paying for another year of thirteen full-price user seats (~$1800–2000 p.a. each).

Starting Conditions

Being a good boy, I had already set up many of my (full, expensive license) integration users with:

  • Profile = Minimum Access - Salesforce
  • Permission Set XXX-Integration with requisite CRUD, FLS, and other permissions

The oldest integrations were set up with:

  • Profile = MyCompany Integration User Profile
    • Modify All Data and FLS on every field
  • No Permission Set

Vendor Documentation Review

An advantage of doing this work in late 2023 was that many of our third-party vendors already had documentation of their support for the Salesforce Integration license. A survey of their Knowledge Bases yielded:

  • Supported with tips (a good example of this was OwnBackup)
  • Explicitly not supported (Talkdesk as of January 2024)
  • Silent

Those vendors that had nothing to say got a support case from me, so I didn’t waste time deploying a migrated user license only to have it fail outright – or work mostly, but not always.

Internal third parties were a bit easier, as I could ask the product owners or run some reports and SOQL inside the org to gain insight.

Some Doubts (Alleviated)

Could I change a user with a full license to a Salesforce Integration license without destroying any OAuth tokens, passwords, etc.?

A simple experiment proved that changing the Salesforce license of a user didn’t affect any authentication already established.

  • A user with a Salesforce Integration license can still log in to Salesforce (even though it does not have UX permissions)
  • That user can change their password
  • That user can even set up Two-Factor Authentication

The Nitty-Gritty

I started off in my dev environment.

Permission Sets

I developed a Permission Set that I called Universal API User System Permissions. This contained every System Permission that I wanted every integration user to have (like API Enabled, Access Activities, Edit Tasks, Edit Events). You’ll decide what makes sense for your org.
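
As a sketch, such a permission set’s metadata file might start like the fragment below. The userPermissions names are my best recollection of the Metadata API names for those settings – verify them against your own org before deploying:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Sketch: UniversalAPIUserSystemPermissions.permissionset-meta.xml (partial) -->
<PermissionSet xmlns="http://soap.sforce.com/2006/04/metadata">
    <label>Universal API User System Permissions</label>
    <!-- no <license> element: assignable to Salesforce Integration users -->
    <userPermissions>
        <enabled>true</enabled>
        <name>ApiEnabled</name>
    </userPermissions>
    <userPermissions>
        <enabled>true</enabled>
        <name>EditTask</name>
    </userPermissions>
    <userPermissions>
        <enabled>true</enabled>
        <name>EditEvent</name>
    </userPermissions>
</PermissionSet>
```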

For each integration user, I either reviewed the existing permission set or created a new one containing all the CRUD, FLS, and other permissions that that integration user needed. Notably, it cannot contain Visualforce page or Lightning Experience User permissions (among others that involve UX features).

If creating a new one, be sure not to set the Permission Set’s license to Salesforce. Instead, leave it blank.

User License

Then I changed the integration user’s license in Setup > Users from Salesforce to Salesforce Integration. This deletes all of the user’s permission set and permission set group assignments (but fortunately leaves public group memberships intact).

IMPORTANT – I then assigned the Salesforce API User Permission Set License to the user. You must do this before assigning Permission Sets. While it may have been obvious to the SFDC Product Manager, assigning a Permission Set License to a user is not typically done, as ordinary Enterprise Edition users get an implicit Permission Set License. If you forget this step, you’ll get an error when assigning Permission Sets to the user: permission not supported for the Salesforce Integration license.

Permission Set Assignment

I then assigned the two permission sets to the integration user.

  • Universal API User System Permissions
  • XXX-Integration (where XXX was the name of the integration, e.g. HubSpot, OwnBackup, etc.)

At this point you may encounter errors:

  • Your permission set includes a permission that isn’t supported by the Salesforce API User permission set license. Remove that permission.
  • Your permission set isn’t listed among the available permission sets to assign (!). This happens if you started with a permission set that was defined as available only for Salesforce licenses (rather than blank).
    • You can’t edit a Permission Set to change its license, and you can’t clone it in the UX and change the license field.
    • Your only option is to go to your IDE (or Workbench) and create a new .permissionset file with a new name, then
      • Copy-paste the old permission set XML into your new file
      • Change the value of the <label>...</label>
      • Delete the XML for <license>Salesforce</license>
      • Then save (deploy) the new .permissionset file to your org.
    • I also renamed the old, now-useless Permission Set with a suffix (retired). It would get deleted later, after final deployment and testing.
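
As a sketch, the recreated .permissionset file looks like the fragment below. The label and object permissions are illustrative; the point is the absent <license>Salesforce</license> element:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Sketch: XXXIntegration_v2.permissionset-meta.xml (names illustrative) -->
<PermissionSet xmlns="http://soap.sforce.com/2006/04/metadata">
    <label>XXX-Integration v2</label>
    <!-- the <license>Salesforce</license> element from the old file is deleted -->
    <objectPermissions>
        <object>Account</object>
        <allowRead>true</allowRead>
        <allowEdit>true</allowEdit>
        <allowCreate>false</allowCreate>
        <allowDelete>false</allowDelete>
        <viewAllRecords>false</viewAllRecords>
        <modifyAllRecords>false</modifyAllRecords>
    </objectPermissions>
</PermissionSet>
```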

Testing

In many orgs, including my own, you aren’t lucky enough to have staging versions of all (or any) of your third-party integrations.

By careful study of what I knew each third-party integration did (for example, HubSpot reads/edits Accounts, Contacts, Leads, Opportunities, and Tasks), I then:

  • Used Workbench (or Postman or equivalent) and logged into my org as the integration user.
  • Composed REST requests to query and DML records, simulating what the third party app does (to the best of my knowledge)
    • If you use Event Monitoring, you could examine the REST API logs to gain insight into the GETs, POSTs, PATCHes, and DELETEs used by the third party app.
  • In some cases, based on the tests, I needed to revise the Permission Set (usually because I had forgotten some CRUD or FLS permission).
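
To make the simulation concrete, here is a sketch of the kind of REST calls I replayed. It only prints the curl commands (a dry run); the instance URL, the SOQL, and the PATCH body are assumptions to be replaced with whatever your third-party app actually does:

```shell
# Dry run: build and print the REST calls to replay as the integration user.
# INSTANCE, the SOQL, and the PATCH body are placeholders, not a real app's traffic.
INSTANCE="https://yourInstance.my.salesforce.com"
API="services/data/v59.0"

# Read simulation: the query the third-party app is believed to run
SOQL='SELECT Id, Name FROM Account LIMIT 5'
QUERY=$(printf '%s' "$SOQL" | sed 's/ /+/g')   # encode spaces for the query string
echo "curl -H 'Authorization: Bearer \$TOKEN' '$INSTANCE/$API/query?q=$QUERY'"

# Write simulation: PATCH a record the way the app would
echo "curl -X PATCH -H 'Authorization: Bearer \$TOKEN' -H 'Content-Type: application/json' -d '{\"Rating\":\"Hot\"}' '$INSTANCE/$API/sobjects/Account/<recordId>'"
```

Run the printed commands (with a real session token obtained as the integration user) and you will quickly surface any missing CRUD or FLS permission.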

Deployment

Once dev testing was done, I used my DevOps tool (Gearset) to deploy the permission set(s) to the downstream branches and orgs in my pipeline.

  • In each deployed org, you have to manually swap the integration user’s license to Salesforce Integration, assign the Permission Set License, then assign the Permission Sets.
  • In some cases, I had staging third-party systems bound to my SFDC staging org. Here I could do true end-to-end testing.

Summary

This is tedious work, but rewarding once done: by converting thirteen full user licenses to Salesforce Integration licenses, I saved my org probably $25,000 p.a. in subscription costs. A bonus was that each integration user got a full review of its permissions, and where they were too permissive, I brought the hammer down, yielding a more secure configuration.

If you have an org where you’ve lumped all your integrations together under a common Salesforce user, this is a good opportunity to follow best practices and dedicate a free/cheap integration user license to each integration.

Disappointments

The disappointments were the vendors who didn’t support (as of January 2024) the Salesforce Integration user license:

  • Talkdesk – The integration user needs access to Visualforce pages and tabs.
  • Groove – For “reasons”

Special cases

Backup / restore integrations

If converting your integration user for a backup/restore vendor, you’ll need to create a Permission Set that includes CRUD on every object you want to back up/restore + FLS on every field that you back up/restore + record type access (all).

Platform Event Publishers

One of our integrations only published Platform Events (via REST calls) to our org. Since Platform Event triggers run by default as the Automated Process user (or can be configured to run as a specific Salesforce user), the integration user only needed CRUD on the XXX__e objects. No FLS is necessary to publish Platform Events.
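
A sketch of such a publish call, in the same dry-run style as before (it just prints the curl command). XXX__e is the placeholder event object from above, and Message__c is a hypothetical event field:

```shell
# Dry run: print the REST call that publishes a platform event as the integration user.
INSTANCE="https://yourInstance.my.salesforce.com"          # placeholder instance
ENDPOINT="$INSTANCE/services/data/v59.0/sobjects/XXX__e"   # XXX__e = your event object
PAYLOAD='{"Message__c":"hello"}'                           # Message__c is hypothetical

echo "curl -X POST -H 'Authorization: Bearer \$TOKEN' -H 'Content-Type: application/json' -d '$PAYLOAD' '$ENDPOINT'"
```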

Apex Mocks and Verifying Multiple Custom Type Arguments

Part five of a series.

Let’s say you have an AccountsService with a cancel method that takes two arguments: an fflib_ISObjectUnitOfWork and a Map<Id,CancelRequest>, where CancelRequest is a custom type. CancelRequest looks like this:

public class CancelRequest {
  Id id;
  Date cancelDate;
  String cancelReason;

  public CancelRequest(Id val) {this.id = val;}
  public CancelRequest withCancelDate(Date val) {this.cancelDate = val; return this;}
  public CancelRequest withCancelReason(String val) {this.cancelReason = val; return this;}
  public Date getCancelDate() {return this.cancelDate;}
  public String getCancelReason() {return this.cancelReason;}  
}

Now, let’s say you have some code that calls AccountsService.cancel (this could be a Visualforce controller, invocable method, domain class method, batchable execute(), Apex REST class, etc. – for the purposes of this example, it doesn’t matter).

public class MyClass {
  public void doStuff(Set<Id> accountIds, String cancelReason) {
    fflib_ISObjectUnitOfWork uow = Application.UnitOfWork.newInstance();
    Map<Id,CancelRequest> cancelRequests = new Map<Id,CancelRequest>();
    for (Id accountId: accountIds) {
      cancelRequests.put(accountId,new CancelRequest(accountId)
                                    .withCancelDate(Date.today())
                                    .withCancelReason(cancelReason) );
    }
    AccountsService.cancel(uow,cancelRequests);  // cancel accounts w/ date+reason
    uow.commitWork();
  }
}

To unit test MyClass.doStuff(..), you want to mock the AccountsService as all you’re really interested in is that it is called once and with the proper arguments. You have a separate unit test for the actual AccountsService.cancel that checks for the proper DML.

So, let’s build the testmethod…

@isTest
private static void givenAccountIdsAndReasonVerifyDelegationToAccountsServiceCancel() {
   fflib_ApexMocks mocks = new fflib_ApexMocks(); // framework
   
   // Given some accountIds
   Id[] mockAccountIds = new List<Id> {
          fflib_IdGenerator.generate(Account.SObjectType),
          fflib_IdGenerator.generate(Account.SObjectType)
  };

  // Given a cancel reason
  String cancelReason = 'foo';

  // Given a mockAccountsService (assumes standard naming conventions for service implementations)
  AccountsServiceImpl mockAccountsService = 
      (AccountsServiceImpl) mocks.mock(AccountsServiceImpl.class);
  Application.Service.setMock(IAccountsService.class,mockAccountsService);

  // Given a mock Unit of Work
  fflib_SObjectUnitOfWork mockUow = 
    (fflib_SObjectUnitOfWork) mocks.mock(fflib_SObjectUnitOfWork.class);
  Application.UnitOfWork.setMock(mockUow);  

  // When doStuff called
  Test.startTest();
  new MyClass().doStuff(new Set<Id> (mockAccountIds),cancelReason);
  Test.stopTest();

  // Then verify service was called only once
  ((AccountsServiceImpl) mocks.verify(mockAccountsService,mocks.times(1)
                            .description('AccountsService.cancel sb called')))
           .cancel((fflib_ISObjectUnitOfWork) fflib_Match.anyObject(),
                   (Map<Id,CancelRequest>) fflib_Match.anyObject()
                  );
  //  Then verify that the service was called with the expected arguments.
  
  // Because the arguments are Apex custom types (and don't implement
  // an equals() and hashcode() method), there is no way for the ApexMocks
  // matchers to verify on equality. So, we fallback to argumentcaptors

  // Set up the captors, one per arg to method AccountsService.cancel
  fflib_ArgumentCaptor capturedUowArg = // arg0
    fflib_ArgumentCaptor.forClass(fflib_ISObjectUnitOfWork.class);
  fflib_ArgumentCaptor capturedCancelRequestArg = // arg1
    fflib_ArgumentCaptor.forClass(Map<Id,CancelRequest>.class);

  // Capture the actual args used when the mock service was called
  ((AccountsServiceImpl) mocks.verify(mockAccountsService,1))
    .cancel((fflib_ISObjectUnitOfWork) capturedUowArg.capture(),
            (Map<Id,CancelRequest>) capturedCancelRequestArg.capture()
           );

  // Transform the capturedArgs (represented by type fflib_ArgumentCaptor)
  // into something we can inspect (using getValue() )
  fflib_ISObjectUnitOfWork actualUowArg =
    (fflib_ISObjectUnitOfWork) capturedUowArg.getValue();
 
  Map<Id,CancelRequest> actualCancelRequestArg =
    (Map<Id,CancelRequest>) capturedCancelRequestArg.getValue();
 
  // Now, whew, finally verify values
  System.assertEquals(mockAccountIds.size(),
          actualCancelRequestArg.size(),'all accts sb requested for cancel');

  for (Id accountId : actualCancelRequestArg.keySet()) {
    System.assertEquals(cancelReason,   // 'foo', as given above
                        actualCancelRequestArg.get(accountId).getCancelReason());
    System.assertEquals(Date.today(),
                        actualCancelRequestArg.get(accountId).getCancelDate());
  }
}

ApexMocks, Answers, and Void No-Argument Domain Methods

Part four of a series.

Let’s say you have an Opportunity domain method called applyDiscounts(). I’ve seen two approaches to coding (and invoking) this method:

Purist Approach

public void applyDiscounts() {
  for (Opportunity o : (Opportunity[]) Records) {
    // ... do work, modifying Records ...
  }
}

Invoke this by:

Opportunities.newInstance(myOppos).applyDiscounts();

Easier To Mock Approach

public void applyDiscounts(Opportunity[] oppos) {
  for (Opportunity o : oppos) {
    // ... do work, modifying the calling argument oppos ...
  }
}

Invoke this by:

Opportunities.newInstance().applyDiscounts(myOppos);

I don’t really like the second approach because it perverts the intention of the Domain class, which is to operate on a collection of SObjects provided through its constructor and available in the superclass variable Records. The Andrew Fawcett book on Enterprise Patterns illustrates a custom domain method using the first approach on page 184 (second edition).

If you are using the second approach, you can mock the results of the void method applyDiscounts(oppos) using fflib_Answer. You use Answers when the mocked method returns modified values through its arguments. Enzo Denti has an excellent blog post on how to do this, so I won’t repeat it here.

But let’s assume you are using the purist approach and need to mock the results of a void domain class method that modifies values passed to the domain class’s constructor. How would you do that?

Assume you have a class DoCoolOpportunityStuff that among other things, has a dependency on the Opportunities domain class and specifically the domain class’s applyDiscounts() method. Let’s set this up:

DoCoolOpportunityStuff (could easily be a service layer class)

public with sharing class DoCoolOpportunityStuff {
  public void doApplyDiscounts(Set<Id> oppoIds, fflib_ISObjectUnitOfWork uow) {
    // Dependency 1 - Opportunity SObjects
    Opportunity[] oppos = OpportunitiesSelector.newInstance().selectById(oppoIds);

    // Dependency 2 - Opportunity Domain
    Opportunities.newInstance(oppos).applyDiscounts();
    for (Opportunity o : oppos) {
      if (o.Amount < 0.00) {
        Messaging.SingleEmailMessage email = new Messaging.SingleEmailMessage();
        email.setSubject(o.Name + ' discounted more than 100%');
        uow.registerEmail(email);
      }
      uow.registerDirty(o);
    }
  }
}

The above method doApplyDiscounts fetches Opportunities based on a set of ids (Dependency 1) and then instantiates a domain class (Dependency 2) to calculate the discounts on the Opportunities. If the applied discount is more than 100%, it sends an email telling someone of the overly-zealous discount. Everything is done through the UnitOfWork layer so results can be easily tested using ApexMocks.

So, to test this with mocks, we need to mock the Selector (to return mock Oppos), but for good unit tests we also need to mock applyDiscounts to return changed Opportunities – some with Amounts greater than 0.00 and some with Amounts less than 0.00.

Here’s what we need to do in the test method:

@IsTest
private static void givenOpportunitiesVerifyApplyDiscounts() {

  Opportunity[] mockOppos = new List<Opportunity> {
    new Opportunity(Id=fflib_IDGenerator.generate(Opportunity.SObjectType),
		Amount=10.0),
    new Opportunity(Id=fflib_IDGenerator.generate(Opportunity.SObjectType),
		Amount=20.0)
  };
  Set<Id> mockOppoIds = new Map<Id,Opportunity>(mockOppos).keySet();

  fflib_ApexMocks mocks = new fflib_ApexMocks();

  //  Given mocks for each of the dependencies
  OpportunitiesSelector mockOpportunitiesSelector = (OpportunitiesSelector) mocks.mock(OpportunitiesSelector.class);
  Opportunities mockOpportunitiesDomain = (Opportunities) mocks.mock(Opportunities.class);
  fflib_SObjectUnitOfWork mockUow = 
     (fflib_SObjectUnitOfWork) mocks.mock(fflib_SObjectUnitOfWork.class);

  mocks.startStubbing();
  mocks.when(mockOpportunitiesSelector.SObjectType())
        .thenReturn(Opportunity.SObjectType);
  mocks.when(mockOpportunitiesSelector.selectById(mockOppoIds))
        .thenReturn(mockOppos);

  mocks.when(mockOpportunitiesDomain.sObjectType())
        .thenReturn(Opportunity.SObjectType);
  ((IOpportunities) mocks.doAnswer(
		new MyApplyDiscountAnswer(mockOppos),mockOpportunitiesDomain))
		.applyDiscounts();

  mocks.stopStubbing();

  // Given mocks injected
  Application.Selector.setMock(mockOpportunitiesSelector);
  Application.Domain.setMock(mockOpportunitiesDomain);
  Application.UnitOfWork.setMock(mockUow);


  //	when service invoked
  new DoCoolOpportunityStuff().doApplyDiscounts(mockOppoIds,mockUow);

  //	then verify oppos domain applyDiscounts called
  ((Opportunities)mocks.verify(mockOpportunitiesDomain,mocks.times(1)
		.description('domain applyDiscounts sb called once')))
	.applyDiscounts();

  //	then verify all mocked oppos registered dirty
  ((fflib_SObjectUnitOfWork)mocks.verify(mockUow,mocks.times(2)
		.description('registerDirty sb called')))
	.registerDirty(fflib_Match.sObjectOfType(Opportunity.SObjectType));
  
  //	then verify mocked Oppo[1] - mocked to discount more than 100%
  //	was noted in an email
  ((fflib_SObjectUnitOfWork)mocks.verify(mockUow,mocks.times(1)
		.description('send email for oppo[1]')))
	.registerEmail((Messaging.Email)fflib_Match.anyObject());

  //	then verify each Oppo (discounted) was dirtied with discount
  ((fflib_SObjectUnitOfWork)mocks.verify(mockUow,mocks.times(1)
		.description('Oppo[0] should have discount')))
	.registerDirty(fflib_Match.sObjectWith(
		new Map<SObjectField,Object> {
		  Opportunity.Id => mockOppos[0].Id,
		  Opportunity.Amount => 0.50 * 10.00
		}
  ));
  ((fflib_SObjectUnitOfWork)mocks.verify(mockUow,mocks.times(1)
	        .description('Oppo[1] should have > 100% discount')))
	.registerDirty(fflib_Match.sObjectWith(
		new Map<SObjectField,Object> {
		  Opportunity.Id => mockOppos[1].Id,
		  Opportunity.Amount => -10.00
		}
  ));
}

Let’s focus on the mock Answer for the applyDiscounts() method. Remember, it returns a value through the domain class’s Records variable (the Opportunities provided to the domain class’s constructor). The other mocks, for the selector and unit of work, are standard-issue mocks as described in earlier blog posts.

((IOpportunities) mocks.doAnswer(
        new MyApplyDiscountAnswer(mockOppos),mockOpportunitiesDomain))
        .applyDiscounts();

What do we have? We are telling ApexMocks that when the applyDiscounts method is called, answer with the side effects of class MyApplyDiscountAnswer.answer() using the mocked Opportunities (the ones the mocked selector returns). The class MyApplyDiscountAnswer looks like this:

class MyApplyDiscountAnswer implements fflib_Answer {

  private Opportunity[] oppos;

  public MyApplyDiscountAnswer(Opportunity[] oppos) {
    this.oppos = oppos;
  }

  public Object answer(fflib_InvocationOnMock invocation) {
    for (Integer i = 0; i < this.oppos.size(); i++) {
      Opportunity o = this.oppos[i];
      o.Amount = i == 1
        ? -10.00        // oppo 1 mocked to answer with a negative amount
        : 0.50 * o.Amount;
    }
    return null;  // answer must return something; null works for void methods
  }
}

You have to implement the answer method. Because we want to return some Opportunities with more than 100% discount applied, we need to dependency inject the Answer with the opportunities returned by the mocked selector. Meta-dependency-injecting! Fun!

So, what happens when the testmethod “when” executes?

new DoCoolOpportunityStuff().doApplyDiscounts(mockOppoIds,mockUow);
  • The code under test starts
  • The code under test fetches Opportunities using the supplied IDs
  • Since we mocked the selector, the code under test fetches our testmethod’s mockOppos
  • These mockOppos serve as the input to the Opportunities domain newInstance(..) method
  • Since we mocked the domain class too, when the applyDiscounts method is invoked, ApexMocks uses the custom Answer MyApplyDiscountAnswer to take the mockOppos and modify them with Amounts less than and greater than 0
  • The code under test decides that some of the Oppos require an email (because we answered with Amount values less than 0.00). These emails are registered to the unit of work.
  • The code under test finally takes each fetched Opportunity and registers it as dirty (with the discounted amount)

So, the rest of the testmethod simply verifies that the expected emails were registered to the unit of work and the expected SObjects were registered dirty. As with all comprehensive ApexMocks examples, no DML was required for setup and no DML was executed, making the test method lightning fast.

Now, it must be admitted that this Answer technique only works if you know, outside of the code under test, what objects will be provided to the domain class’s newInstance(..) method. It works in our example because we’re mocking the output of the selector as the input to the domain construction. And the output of the selector is under the control of the testmethod’s .thenReturn(mockOppos) in the mock stubbing section.

Another example – objects passed to domain constructor not knowable by testmethod

Let’s suppose we have a domain method that derives the ownerId for a collection of Opportunities. Further assume that this void assignOwners() method is really complicated and relies on a massive scaffolding of custom metadata and reference SObjects – way too much to conveniently prebuild in DML for the testmethod.

Further assume we have some service class method that exploits the domain class, but the Opportunities passed to the domain constructor are not known outside of the service method:

public void doAssignOwners1(fflib_ISObjectUnitOfWork uow) {

  Opportunity[] oppos = new List<Opportunity>();

  // Contrived .. construct random # of Oppos. Dependency 1
  for (Integer i = 0; i < Math.mod(System.currentTimeMillis(), 5); i++) {
    oppos.add(new Opportunity(Amount = i * 100.00));
  }

  // Dependency 2 - Opportunity Domain
  Opportunities.newInstance(oppos).assignOwners();
  for (Opportunity o : oppos) {
    uow.registerDirty(o);
  }
}

I contrived the method to randomly generate Opportunities, but the point is that doAssignOwners isn’t passed predictable Opportunities.

While you could mock the domain class, ApexMocks gives you no way of mocking the Opportunities passed to newInstance(), and furthermore, even if you did (via a subclass of your Opportunities domain class), those mocked Opportunities wouldn’t flow to the for loop that registers dirty the Opportunities that were randomly generated.

So, what to do?

You need to change the doAssignOwners method to have a mockable way of generating the opportunities for the domain class. Remember in our first example, the service method used a selector to generate the Opportunities for use in the domain class. So, we rewrite the service method to look like this:

public class DoCoolOpportunityStuff {
  private final IOpportunityGenerator oppoGenerator;

  public DoCoolOpportunityStuff() {
    this.oppoGenerator = new OpportunityGenerator();
  }

  @TestVisible private DoCoolOpportunityStuff(IOpportunityGenerator mockOppoGenerator) {
    this.oppoGenerator = mockOppoGenerator;
  }

  public void doAssignOwners2(fflib_ISObjectUnitOfWork uow) {

    // Contrived .. construct random # of Oppos. Dependency 1
    Opportunity[] oppos = this.oppoGenerator.generate();

    // Dependency 2 - Opportunity Domain
    Opportunities.newInstance(oppos).assignOwners();
    for (Opportunity o : oppos) {
      uow.registerDirty(o);
    }
  }
}

The Opportunities generated for use in the domain class come from another class and, because it is a top-level class, we can mock that as well. Note above that the production code of DoCoolOpportunityStuff instantiates an object variable (using the no-arg constructor) with the production version of the OpportunityGenerator (below). But we added a way for the testmethod to use dependency injection to insert a test (mocked) version of the OpportunityGenerator, so we can have predictability over the generated Oppos (and hence know which will be passed to the domain class’s newInstance(..) method).

public interface IOpportunityGenerator {
  Opportunity[] generate();
}

public class OpportunityGenerator implements IOpportunityGenerator {
  public Opportunity[] generate() {
    Opportunity[] oppos = new List<Opportunity>();
    for (Integer i = 0; i < Math.mod(System.currentTimeMillis(), 5); i++) {
      oppos.add(new Opportunity(Amount = i * 100.00));
    }
    return oppos;
  }
}

So, now we can test all this using everything we have learned as shown below:

@IsTest
private static void givenNothingVerifyAssignOwnersV2() {
  // Given mockOppos
  Opportunity[] mockOppos = new List<Opportunity> {
	new Opportunity(Id=fflib_IDGenerator.generate(Opportunity.SObjectType),
			Amount=10.0),
	new Opportunity(Id=fflib_IDGenerator.generate(Opportunity.SObjectType),
			Amount=20.0)
  };
  Set<Id> mockOppoIds = new Map<Id,Opportunity>(mockOppos).keySet();


  fflib_ApexMocks mocks = new fflib_ApexMocks();

  //	Given mockOpportunityGenerator
  IOpportunityGenerator mockOpportunityGenerator = 
    (IOpportunityGenerator) mocks.mock(OpportunityGenerator.class);

  //  Given mock domain and uow
  Opportunities mockOpportunitiesDomain = (Opportunities) mocks.mock(Opportunities.class);
  fflib_SObjectUnitOfWork mockUow = 
    (fflib_SObjectUnitOfWork) mocks.mock(fflib_SObjectUnitOfWork.class);

  mocks.startStubbing();

  mocks.when(mockOpportunityGenerator.generate()).thenReturn(mockOppos);

  mocks.when(mockOpportunitiesDomain.sObjectType())
    .thenReturn(Opportunity.SObjectType);
  ((IOpportunities) mocks.doAnswer(
	new MyAssignOwnerAnswer(mockOppos),mockOpportunitiesDomain))
	.assignOwners();
  mocks.stopStubbing();

  // Given mocks injected
  Application.Domain.setMock(mockOpportunitiesDomain);
  Application.UnitOfWork.setMock(mockUow);
  DoCoolOpportunityStuff coolStuff = new DoCoolOpportunityStuff(mockOpportunityGenerator);

  // when service method called
  coolStuff.doAssignOwners2(mockUow);	// assigns owners

  //	verify each oppo updated and w/ owner
  ((fflib_SObjectUnitOfWork)mocks.verify(mockUow,mocks.times(2)
         .description('2 recs sb modified')))
	 .registerDirty(fflib_Match.sObjectOfType(Opportunity.SObjectType));

  ((fflib_SObjectUnitOfWork)mocks.verify(mockUow,mocks.times(2)
	.description('each domain oppo should have ownerId as per our mocked Answer')))
	.registerDirty(fflib_Match.sObjectWith(
	  new Map<SObjectField,Object>{
	    Opportunity.OwnerId => UserInfo.getUserId()
		}));
}

  

These lines set up the mockOpportunityGenerator and then inject it into the code under test:

//	Given mockOpportunityGenerator
  IOpportunityGenerator mockOpportunityGenerator = 
    (IOpportunityGenerator) mocks.mock(OpportunityGenerator.class);
 ...
// inject via testVisible constructor
DoCoolOpportunityStuff coolStuff = new DoCoolOpportunityStuff(mockOpportunityGenerator);

And to mock the no-arg domain class method assignOwners(), we use the same Answer technique as in the first example:

((IOpportunities) mocks.doAnswer(
	new MyAssignOwnerAnswer(mockOppos),mockOpportunitiesDomain))
	.assignOwners();

Here we have a different custom fflib_Answer type (see below). This custom Answer assigns the owners to something predictable, like the running user’s ID.

class MyAssignOwnerAnswer implements fflib_Answer {

  private Opportunity[] oppos;

  public MyAssignOwnerAnswer(Opportunity[] oppos) {
    this.oppos = oppos;
  }

  public Object answer(fflib_InvocationOnMock invocation) {
    for (Integer i = 0; i < this.oppos.size(); i++) {
      Opportunity o = this.oppos[i];
      o.OwnerId = UserInfo.getUserId();
    }
    return null;  // answer must return something; null works for void methods
  }
}

Deleting Leads and Hubspot sync

Oh, the woes of SFDC and HubSpot lead/contact synchronization. There’s a lot going on in the HubSpot sync, and a careful reading of their documentation is always wise.

One thing I have discovered over the years is that the HubSpot and SFDC databases get out of sync. Here is how that typically happens:

  • A HubSpot contact reaches some state wherein it qualifies for the inclusion list.
  • The HubSpot contact syncs to SFDC as a new Lead.
  • Inclusion list rules are written in such a way that if the Lead becomes disqualified in Salesforce, the contact is removed from the inclusion list. Note that the Lead still exists in Salesforce.
  • Now, if the Salesforce Lead is deleted (and you have configured HubSpot to delete HubSpot contacts when the corresponding Salesforce Lead/Contact is deleted), the HubSpot contact won’t be deleted, because it is no longer in the inclusion list.

So, where’s the problem?

Given the above, if the HubSpot contact fills out a Contact Us form or does anything else that re-raises the HubSpot score to become marketing-qualified again, HubSpot will…

  • Add the contact to the inclusion list (because your rules enable it to enter)
  • Sync the HubSpot contact to Salesforce (HubSpot has remembered the corresponding Salesforce ID)
  • Discover that Salesforce deleted the Lead, and…
  • Because you have configured HubSpot to accept Salesforce deletions as HubSpot deletions…
  • Delete the HubSpot contact – even though the contact just filled out a form!

How to resolve

First of all, if you are just getting started, never ever allow a HubSpot contact to leave the inclusion list once it has sync’d to Salesforce. This way, if you delete a Lead or Contact in Salesforce, it will delete the corresponding HubSpot contact. Should that contact re-appear months later as marketing-qualified again, HubSpot will treat it as a new contact and Salesforce will see it as a new Lead. Just what you want.

A more usual scenario: you have (tens of, hundreds of) thousands of bogus/spammy/obsolete Salesforce Leads that some bright-eyed, bushy-tailed Salesforce admin decides to delete. Bam! They’re gone from Salesforce, and those still in the inclusion list are also vaporized from HubSpot. But the deleted Salesforce Leads not in the inclusion list won’t be deleted from HubSpot, and the problems described above ensue weeks or months later.

What you need to do is:

  • Before mass deleting in Salesforce, create a CSV with the email addresses of all the to-be-deleted Leads/Contacts
  • Delete from Salesforce
  • This will delete from HubSpot any HubSpot contacts in the inclusion list (good!)
  • Export from Hubspot a CSV with all HubSpot contacts with email address as a column. Be sure to do this after the Salesforce deletes are done and the HubSpot sync has run
  • Do an Excel VLOOKUP (actually, use the fast VLOOKUP technique unless you plan on staring at your computer screen when you have > 50,000 entries) to find all the entries in your HubSpot database that were deleted from Salesforce in the recent purge. These entries become a static list that you will use to delete from HubSpot. Do this operation as fast as you can.
  • Then fix your inclusion list rules to always include any HubSpot contact ever sync’d to Salesforce, regardless of HubSpot score or any other value. You can use “Salesforce Last Sync Date not equal null” as the filter.
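The VLOOKUP step is just a set intersection: emails you deleted from Salesforce that still exist in HubSpot. If you'd rather script it than wrangle Excel, here's a minimal Python sketch (file and column names are hypothetical; match them to your actual exports):

```python
import csv

def contacts_to_purge(deleted_sf_csv, hubspot_export_csv, email_col="email"):
    """Return HubSpot emails that were already deleted from Salesforce.

    deleted_sf_csv:     emails of to-be-deleted Leads/Contacts (captured BEFORE the mass delete)
    hubspot_export_csv: full HubSpot contact export (taken AFTER the delete + sync)
    """
    with open(deleted_sf_csv, newline="") as f:
        deleted = {row[email_col].strip().lower()
                   for row in csv.DictReader(f) if row[email_col]}
    with open(hubspot_export_csv, newline="") as f:
        hubspot = {row[email_col].strip().lower()
                   for row in csv.DictReader(f) if row[email_col]}
    # intersection = still in HubSpot but already purged from Salesforce
    return sorted(deleted & hubspot)
```

The returned emails become the static list you delete from HubSpot.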

Process Builder bulkification – Record Create

Some uncertainty has developed over the years as to how much bulkification actually exists in Process Builder.

Here’s a V42 assessment for a couple of simple use cases.

Starting condition:
A Process Builder flow on Opportunity with two decision blocks; each one with an Action Group that creates a Case. The first Action Group includes an “Evaluate Next Criteria” to allow the records that meet the first condition to also flow into the second condition for evaluation.

There is a trigger on Case.
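The Case trigger used in these experiments can be as simple as a single log line; here's a bare-bones sketch (observation only – the batch size it reports is all that matters here):

```apex
// Observation-only trigger: logs how many Cases arrive per trigger invocation
trigger CaseTrigger on Case (after insert) {
    System.debug(LoggingLevel.INFO,
        'CaseTrigger after insert: ' + Trigger.new.size() + ' record(s)');
}
```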

Use Case 1 – a batch of Opportunities – all of which meet condition-1 and none meet condition-2
RESULT: The newly-created Cases from the first Action Group are bulkified – any Apex trigger will see them all in a single trigger context

Use Case 2 — a batch of Opportunities – 20 meet condition 1 and a distinct set of 30 meet condition-2 (the second decision block)
RESULT:

  • The newly-created Cases from the first Action Group will be presented as a batch of 20 records to the Case trigger
  • The newly-created Cases from the second Action Group will be presented as a batch of 30 records to the Case trigger

Use Case 3 — a batch of Opportunities – 50 meet condition 1 and the same 50 records meet condition-2 (the second decision block)
RESULT:

  • The newly-created Cases from the first Action Group will be presented as a batch of 50 records to the Case trigger
  • The newly-created Cases from the second Action Group will be presented as a batch of 50 records to the Case trigger

Hence, your trigger on Case will execute twice. This might create a limits exposure and certainly affects CPU time. Even more importantly, if you are using simplistic static variables to control after insert trigger recursion, the second set of records may not fully execute as expected — be careful!

Note that if you are not using “Evaluate Next Criteria” as in the diagram below, the results above are the same.

Triggers – workflow – recursion control – callouts – allOrNone

An insidious set of circumstances:

Starting condition:
  • You have an afterUpdate trigger that requests a callout when a field changes from oldValue to newValue
  • You also have a workflow that updates the same object, and that workflow’s rule criteria are satisfied by the same update event

Now, you might not know this, but workflow field updates will cause the trigger to re-execute AND the value of Trigger.old will be as it was when the record was initially updated. Here’s what the well-thumbed Triggers and Order of Execution section of the Apex Developer Guide says:

Trigger.old contains a version of the objects before the specific update that fired the trigger. However, there is an exception. When a record is updated and subsequently triggers a workflow rule field update, Trigger.old in the last update trigger won’t contain the version of the object immediately prior to the workflow update, but the object before the initial update was made

Thus, your future method will be called TWICE and, if it is doing a callout, will call out twice. Here’s a simple proof:

Apex trigger
trigger LeadTrigger on Lead (before insert, before update, after insert, after update) {

    if (Trigger.isAfter && Trigger.isUpdate)
        new LeadTriggerHandler().onAfterUpdate(Trigger.new,Trigger.oldMap);
}
Apex Trigger handler
public class LeadTriggerHandler {

    public void onAfterUpdate(Lead[] leads, map<ID,Lead> oldLeads) {
        for (Lead l: leads) { 
            Lead oldLead = oldLeads.get(l.Id);
            if (l.Company != oldLead.Company) {
                System.debug(LoggingLevel.INFO,'company has changed from ' + oldLead.Company   + 
                             'to ' + l.Company + ' .. request an @future to dowork');
                doCallout(l.Company);
            }
        }
    }
      
    @future
    static void doCallout(String company) {
        System.debug(LoggingLevel.INFO,'future method to do callout for ' + company);
        // .. callout details not important
    }
}
Workflow
  • Evaluation Criteria: Evaluate the rule when a record is created, and any time it’s edited to subsequently meet criteria
  • Rule Criteria: If Lead.Company contains ‘Changed’
  • Action: Field update Lead.Mobile to ‘650-555-1212’
Anonymous apex to demonstrate
Lead[] leads = new list<Lead> {
    new Lead(Company = 'Foo00', LastName = 'LName00'),
    new Lead(Company = 'Foo01', LastName = 'LName01')
    };

insert leads;
leads[0].Company = 'Foo00Changed';
leads[1].Company = 'Foo01Changed';
update leads; // force future to execute in handler
Debug log(s)

Workflow - Recursion - Trigger - Callout Ex 1

Ack! The future fired four (4) times! We should only have had two (2), as we updated only two records.

|USER_INFO|[EXTERNAL]|00540000000wbFS|cropredy@gmail.com|Pacific Standard Time|GMT-08:00
|EXECUTION_STARTED
//  lead inserted -- details omitted ...
//	Lead update event Lead.Company value changes
DML_BEGIN|[9]|Op:Update|Type:Lead|Rows:2
CODE_UNIT_STARTED|[EXTERNAL]|01q1W000000Tdah|LeadTrigger on Lead trigger event BeforeUpdate for [00Q1W00001Jh9VR, 00Q1W00001Jh9VS]
CODE_UNIT_FINISHED|LeadTrigger on Lead trigger event BeforeUpdate for [00Q1W00001Jh9VR, 00Q1W00001Jh9VS]
CODE_UNIT_STARTED|[EXTERNAL]|01q1W000000Tdah|LeadTrigger on Lead trigger event AfterUpdate for [00Q1W00001Jh9VR, 00Q1W00001Jh9VS]
USER_DEBUG|[7]|INFO|company has changed from Foo00to Foo00Changed .. request an @future to dowork
USER_DEBUG|[7]|INFO|company has changed from Foo01to Foo01Changed .. request an @future to dowork
CODE_UNIT_FINISHED|LeadTrigger on Lead trigger event AfterUpdate for [00Q1W00001Jh9VR, 00Q1W00001Jh9VS]
CODE_UNIT_STARTED|[EXTERNAL]|Workflow:Lead
WF_RULE_EVAL_BEGIN|Assignment
WF_SPOOL_ACTION_BEGIN|Assignment
WF_ACTION|.
WF_RULE_EVAL_END
WF_RULE_EVAL_BEGIN|Workflow
WF_CRITERIA_BEGIN|[Lead: LName00 00Q1W00001Jh9VR]|onUpdate - Set Field|01Q1W000000RGk3|ON_CREATE_OR_TRIGGERING_UPDATE|0
WF_RULE_FILTER|[Lead : Company contains Changed]
WF_RULE_EVAL_VALUE|Foo00Changed
WF_CRITERIA_END|true
WF_CRITERIA_BEGIN|[Lead: LName01 00Q1W00001Jh9VS]|onUpdate - Set Field|01Q1W000000RGk3|ON_CREATE_OR_TRIGGERING_UPDATE|0
WF_RULE_FILTER|[Lead : Company contains Changed]
WF_RULE_EVAL_VALUE|Foo01Changed
WF_CRITERIA_END|true
WF_SPOOL_ACTION_BEGIN|Workflow
WF_FIELD_UPDATE|[Lead: LName00 00Q1W00001Jh9VR]|Field:Lead: Mobile|Value:650-555-1212|Id=04Y1W000000PfJV|CurrentRule:onUpdate - Set Field (Id=01Q1W000000RGk3)
WF_FIELD_UPDATE|[Lead: LName01 00Q1W00001Jh9VS]|Field:Lead: Mobile|Value:650-555-1212|Id=04Y1W000000PfJV|CurrentRule:onUpdate - Set Field (Id=01Q1W000000RGk3)

 // Workflow updates the Leads with Field Update
WF_ACTION| Field Update: 2; 
WF_RULE_EVAL_END

// before/after triggers on Lead re-fire (expected)
CODE_UNIT_STARTED|[EXTERNAL]|01q1W000000Tdah|LeadTrigger on Lead trigger event BeforeUpdate for [00Q1W00001Jh9VR, 00Q1W00001Jh9VS]
CODE_UNIT_FINISHED|LeadTrigger on Lead trigger event BeforeUpdate for [00Q1W00001Jh9VR, 00Q1W00001Jh9VS]
CODE_UNIT_STARTED|[EXTERNAL]|01q1W000000Tdah|LeadTrigger on Lead trigger event AfterUpdate for [00Q1W00001Jh9VR, 00Q1W00001Jh9VS]

// uh-oh, Trigger.old has values prior to the initial update DML, 
// not the values as of the after update conclusion 
USER_DEBUG|[7]|INFO|company has changed from Foo00to Foo00Changed .. request an @future to dowork
USER_DEBUG|[7]|INFO|company has changed from Foo01to Foo01Changed .. request an @future to dowork
CODE_UNIT_FINISHED|LeadTrigger on Lead trigger event AfterUpdate for [00Q1W00001Jh9VR, 00Q1W00001Jh9VS]
WF_ACTIONS_END| Field Update: 2;
CODE_UNIT_FINISHED|Workflow:Lead
DML_END|[9]
CODE_UNIT_FINISHED|execute_anonymous_apex
EXECUTION_FINISHED

Solution 1 (sounds good)

Just add a static recursion control variable to your handler

public class LeadTriggerHandler {
    static set<ID> leadIdsAlreadySentToFuture = new set<ID>(); // recursion control
    public void onAfterUpdate(Lead[] leads, map<ID,Lead> oldLeads) {
        for (Lead l: leads) { 
            Lead oldLead = oldLeads.get(l.Id);
            if (l.Company != oldLead.Company && 
                !leadIdsAlreadySentToFuture.contains(l.Id)) { // have we already done this?
                System.debug(LoggingLevel.INFO,'company has changed from ' + oldLead.Company   + 
                             'to ' + l.Company + ' .. request an @future to dowork');
                doCallout(l.Company);
                leadIdsAlreadySentToFuture.add(l.Id);    
            }
        }
    }
      
    @future
    static void doCallout(String company) {
        System.debug(LoggingLevel.INFO,'future method to do callout for ' + company);
        // .. callout details not important
    }
}

This works as the debug log shows the future being called twice, once per Lead updated:
Workflow - Recursion - Trigger - Callout Ex 2

So, can I now work on my next JIRA ticket? Sorry ….

What if your Trigger/Handler is also invoked in a use case where partial success is allowed and one or more of the records fails to validate? AllOrNone = false can happen in many common use cases:

  • Data Loader
  • Any use of Apex Database.update(records, false); the same applies to the other Database.xxx methods
  • Bulk, SOAP, or REST APIs that either default AllOrNone to false or allow it to be set explicitly
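As a reminder, when partial success is allowed, each record in the DML gets its own Database.SaveResult, and failed records carry one or more Database.Error entries. A quick sketch of inspecting them (assuming a `leads` list like the ones used in this post's examples):

```apex
Database.SaveResult[] results = Database.update(leads, false); // allow partial success
for (Integer i = 0; i < results.size(); i++) {
    if (!results[i].isSuccess()) {
        // each failed record carries one or more Database.Error entries
        for (Database.Error err : results[i].getErrors()) {
            System.debug(LoggingLevel.WARN, 'Lead ' + leads[i].Id + ' failed: ' +
                err.getStatusCode() + ' ' + err.getMessage());
        }
    }
}
```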

Here, we run into another little-known SFDC feature of trigger retries in the AllOrNone = false (i.e. partial successes allowed) use case. This is documented in the Apex guide as:

AllOrNone doc

Going back to the Triggers and Order of Execution, there’s one last tidbit as to why you can’t use static variables for recursion control in an AllOrNone = false use case:

When a DML call is made with partial success allowed, more than one attempt can be made to save the successful records if the initial attempt results in errors for some records. For example, an error can occur for a record when a user-validation rule fails. Triggers are fired during the first attempt and are fired again during subsequent attempts. Because these trigger invocations are part of the same transaction, static class variables that are accessed by the trigger aren’t reset. DML calls allow partial success when you set the allOrNone parameter of a Database DML method to false or when you call the SOAP API with default settings. For more details, see Bulk DML Exception Handling.

So, if you do a bulk update of two records and one fails the validation rule, the static recursion control variable is set on the first attempt and any @future calls are rolled back. When SFDC makes the second attempt on the non-failed record, the recursion control prevents the callout from even happening, so you end up with no callouts!

Let’s look at a proof:

Add a validation rule:

Website = 'www.failme.com'

Execute this code:
Lead[] leads = new list<Lead> {
    new Lead(Company = 'Foo00', LastName = 'LName00'),
    new Lead(Company = 'Foo01', LastName = 'LName01')
    };

insert leads;

leads[0].Company = 'Foo00Changed';
leads[1].Company = 'Foo01Changed';
leads[1].Website = 'www.failme.com';  // force partial success by failing this in VR
Database.SaveResult[] results = Database.update(leads,false); // allow partial success
Get this debug log

Workflow - Recursion - Trigger - Callout Ex 3

No future logs! Future never happened!

CODE_UNIT_STARTED|[EXTERNAL]|execute_anonymous_apex
// 1st time trigger is executed - both Leads passed:
DML_BEGIN|[11]|Op:Update|Type:Lead|Rows:2
CODE_UNIT_STARTED|[EXTERNAL]|01q1W000000Tdah|LeadTrigger on Lead trigger event BeforeUpdate for [00Q1W00001Jh9Vv, 00Q1W00001Jh9Vw]
CODE_UNIT_FINISHED|LeadTrigger on Lead trigger event BeforeUpdate for [00Q1W00001Jh9Vv, 00Q1W00001Jh9Vw]
CODE_UNIT_STARTED|[EXTERNAL]|Validation:Lead:00Q1W00001Jh9Vv

// Validation rules execute
VALIDATION_RULE|03d1W000000Tdvy|Coerce_failure
VALIDATION_FORMULA|Website = 'www.failme.com'|Website=null
VALIDATION_PASS
CODE_UNIT_FINISHED|Validation:Lead:00Q1W00001Jh9Vv
CODE_UNIT_STARTED|[EXTERNAL]|Validation:Lead:00Q1W00001Jh9Vw
VALIDATION_RULE|03d1W000000Tdvy|Coerce_failure
VALIDATION_FORMULA|Website = 'www.failme.com'|Website=www.failme.com
// Fail the second Lead
VALIDATION_FAIL
CODE_UNIT_FINISHED|Validation:Lead:00Q1W00001Jh9Vw

// After update sees only the first, successful, Lead; future requested, static vbl set
CODE_UNIT_STARTED|[EXTERNAL]|01q1W000000Tdah|LeadTrigger on Lead trigger event AfterUpdate for [00Q1W00001Jh9Vv]
USER_DEBUG|[8]|INFO|company has changed from Foo00to Foo00Changed .. request an @future to dowork
CODE_UNIT_FINISHED|LeadTrigger on Lead trigger event AfterUpdate for [00Q1W00001Jh9Vv]
CODE_UNIT_STARTED|[EXTERNAL]|Workflow:Lead

// Workflow executes , causes field update on first lead
WF_RULE_EVAL_BEGIN|Workflow
WF_CRITERIA_BEGIN|[Lead: LName00 00Q1W00001Jh9Vv]|onUpdate - Set Field|01Q1W000000RGk3|ON_CREATE_OR_TRIGGERING_UPDATE|0
WF_RULE_FILTER|[Lead : Company contains Changed]
WF_RULE_EVAL_VALUE|Foo00Changed
WF_CRITERIA_END|true
WF_SPOOL_ACTION_BEGIN|Workflow
WF_FIELD_UPDATE|[Lead: LName00 00Q1W00001Jh9Vv]|Field:Lead: Mobile|Value:650-555-1212|Id=04Y1W000000PfJV|CurrentRule:onUpdate - Set Field (Id=01Q1W000000RGk3)
WF_ACTION| Field Update: 1;
WF_RULE_EVAL_END

// WF field update causes after trigger to re-execute (as expected)
CODE_UNIT_STARTED|[EXTERNAL]|01q1W000000Tdah|LeadTrigger on Lead trigger event BeforeUpdate for [00Q1W00001Jh9Vv]
CODE_UNIT_FINISHED|LeadTrigger on Lead trigger event BeforeUpdate for [00Q1W00001Jh9Vv]
// after trigger is NOP as recursion vbl says do nothing
CODE_UNIT_STARTED|[EXTERNAL]|01q1W000000Tdah|LeadTrigger on Lead trigger event AfterUpdate for [00Q1W00001Jh9Vv]
CODE_UNIT_FINISHED|LeadTrigger on Lead trigger event AfterUpdate for [00Q1W00001Jh9Vv]
WF_ACTIONS_END| Field Update: 1;
CODE_UNIT_FINISHED|Workflow:Lead


// SFDC retries the first record because AllOrNone=false; governor limits reset
//	But static variables are not reset
CODE_UNIT_STARTED|[EXTERNAL]|01q1W000000Tdah|LeadTrigger on Lead trigger event BeforeUpdate for [00Q1W00001Jh9Vv]
CODE_UNIT_FINISHED|LeadTrigger on Lead trigger event BeforeUpdate for [00Q1W00001Jh9Vv]
CODE_UNIT_STARTED|[EXTERNAL]|Validation:Lead:00Q1W00001Jh9Vv

// WF fires again, updates the first Lead but no callout done  as recursion vbl prevents
WF_RULE_EVAL_BEGIN|Workflow
WF_CRITERIA_BEGIN|[Lead: LName00 00Q1W00001Jh9Vv]|onUpdate - Set Field|01Q1W000000RGk3|ON_CREATE_OR_TRIGGERING_UPDATE|0
WF_RULE_FILTER|[Lead : Company contains Changed]
WF_RULE_EVAL_VALUE|Foo00Changed
WF_CRITERIA_END|true
WF_SPOOL_ACTION_BEGIN|Workflow
WF_FIELD_UPDATE|[Lead: LName00 00Q1W00001Jh9Vv]|Field:Lead: Mobile|Value:650-555-1212|Id=04Y1W000000PfJV|CurrentRule:onUpdate - Set Field (Id=01Q1W000000RGk3)
WF_ACTION| Field Update: 1;
WF_RULE_EVAL_END
CODE_UNIT_STARTED|[EXTERNAL]|01q1W000000Tdah|LeadTrigger on Lead trigger event BeforeUpdate for [00Q1W00001Jh9Vv]
CODE_UNIT_FINISHED|LeadTrigger on Lead trigger event BeforeUpdate for [00Q1W00001Jh9Vv]

// no callout request made in retry of first record
CODE_UNIT_STARTED|[EXTERNAL]|01q1W000000Tdah|LeadTrigger on Lead trigger event AfterUpdate for [00Q1W00001Jh9Vv]
CODE_UNIT_FINISHED|LeadTrigger on Lead trigger event AfterUpdate for [00Q1W00001Jh9Vv]
WF_ACTIONS_END| Field Update: 1;
CODE_UNIT_FINISHED|Workflow:Lead
DML_END|[11]
CODE_UNIT_FINISHED|execute_anonymous_apex
EXECUTION_FINISHED

So now what?

If we take the static recursion variable away, the AllOrNone use case still won’t pass – the future gets called twice for the successful record and never for the failed record.

... after the VR fails record[1] and before the WF executes 
USER_DEBUG|[8]|INFO|company has changed from Foo00 to Foo00Changed .. request an @future to dowork
... after the WF updates record[0] .. our original issue
USER_DEBUG|[8]|INFO|company has changed from Foo00to Foo00Changed .. request an @future to dowork
.. SFDC retries the successful records in trigger.new; skips the failed ones
.. trigger re-executes as if none of the above ever happened
USER_DEBUG|[8]|INFO|company has changed from Foo00 to Foo00Changed .. request an @future to dowork
... after the WF updates record[0] .. our original issue
USER_DEBUG|[8]|INFO|company has changed from Foo00to Foo00Changed .. request an @future to dowork

Workflow - Recursion - Trigger - Callout Ex 4

Clearly, static variables can’t be used to control redundant callouts when workflows and AllOrNone = false are combined in the same use case.

Solution 2 (better, but fragile)

Go through your workflows that could update the object where you are testing for make-a-callout conditions:

  • Move the field updates out and put them in the before insert/update triggers.
  • This way, the workflow will never force the trigger to re-execute with the original, start-of-transaction state of Trigger.old
  • Hence, your doCallout logic will only execute once in the transaction

This is fragile because you or a colleague could later add a new workflow + field update that causes the trigger’s callout-evaluating condition to be re-assessed, and you’ll be making duplicate callouts again.

Solution 3 – what could it be?

Clearly, you need to have state that:

  1. Persists across the trigger – workflow field update – trigger sequence
  2. Is rolled back when SFDC retries in the AllOrNone = false (partial success) use case

We’ve seen that static variables won’t work. Platform Cache won’t work either, as it isn’t rolled back in the AllOrNone = false scenario.

The only thing that meets both criteria would be Sobject fields. A general approach …

  1. Trigger sets the field in Trigger.new (or updates some other Sobject with a known key)
  2. Workflow field update is made, trigger re-executes. Using the values in trigger.new, looks to see if the work was already done and if yes, avoids asking to do it again
  3. If trigger is running in a AllOrNone = false use case, and a record fails in the batch, the update made in step 1 is rolled back by SFDC. Thus, the trigger re-requests the work, persists the request in an sobject, and even though the workflow will re-fire, the persisted sobject can be inspected on the second trigger re-fire and the dowork request skipped

Now, what would you save in that sobject field?

  1. Initially, I was tempted to save the sessionId (encrypted as a MAC digest) as a pseudo signal that the callout request was made. As an sobject, it would be rolled back in the AllOrNone-false with error use case. But, when the Bulk API is used, there is no sessionId — it is null.
  2. Next idea was to exploit lastModifiedDate and lastModifiedById and compare the running user and current time to see if the trigger is being used in the trigger-workflow+field update-trigger use case as a sort of pseudo state variable. This seems problematic for several reasons like time skew and concurrent transactions coming from the same user
  3. Another idea was an unconditional workflow field update to set a field called Is_WF_Update_in_Progress__c. The second time through the trigger, the code would inspect Is_WF_Update_in_Progress__c in Trigger.new, say “ha, I’m being called in a workflow field update-induced trigger invocation,” and bypass the request to do a callout. But then the new field would have to be cleared (another DML), and we’d be unnecessarily forcing the trigger to re-execute even when no other workflow’s rule criteria were satisfied. This slows performance, is complicated, and doesn’t scale well: every SObjectType involved in workflows + triggers that compare Trigger.old to Trigger.new would need its own clone of this.

A workable, albeit somewhat heavyweight, solution

Create a Custom Object, Transaction_State__c, with one relevant field:

  • Name – this will be the transactionId

Then, create a Transaction Service class:
      public class TransactionService {
          
          /**
          *	Meat of the TransactionService
          *
          *		1 - Set/get of transactionId
          *		2 - Track visitedIds by scopeKey to avoid recursion
          *		3 - Track whether a "context" is enabled or disabled - especially useful for testmethods to switch off async handling
          **/
      
      	@TestVisible private static String transactionId;
      
      	/**
      	*	get(set)TransactionId - use to record some identifier for the transaction. Particularly useful for incoming REST calls
      	*							so methods can reference without having to pass around in arguments
      	**/
          
          public virtual String getTransactionId() {
      		return transactionId == null ? transactionId = String.valueOf(System.currentTimeMillis()) + '_' + UserInfo.getName() : transactionId;
          }
          
          public virtual void setTransactionId(String txnId) {
          	transactionId = txnId;
          }
      
      	public virtual Boolean hasTransactionId() {
      		return transactionId != null;
      	}
      
          private static map<String,Boolean>	enablementByContext	= new map<String,Boolean> ();
          
          /**
          *	isDisabled (String  context) - returns true if this context has been switched off
          *		future enhancement - read from custom metadata to allow external (dis)(en)ablement
          **/
          public virtual Boolean isDisabled(String context) {
          	return enablementByContext.containsKey(context)
          		? !enablementByContext.get(context)		// in map, return whether enabled or disabled
          		: false;  // no entry, hence enabled
          }
          /**
          *	isEnabled (String  context) - returns true if this context has been switched on or never entered
          *		future enhancement - read from custom metadata to allow external (dis)(en)ablement
          **/
          public virtual Boolean isEnabled(String context) {
          	return enablementByContext.containsKey(context)
          		? enablementByContext.get(context)		// in map, return whether enabled or disabled
          		: true;  // no entry, hence enabled
          }
          
          /**
          *	setEnablement(String context, Boolean isEnabled)
          **/
          public virtual void setEnablement(String context, Boolean isEnabled) {
          	if (isEnabled == null)
          		throw new TransactionService.TransactionServiceException('setEnablement isEnabled argument can not be null');
          	enablementByContext.put(context,isEnabled);
          }
          
          
          
          static ID txnStateIdAsProxyForStateTrust;	// beacon to tell us if we can trust static variables
          
          /**
          *	establishStateTrust - Transaction_State__c is an sobject
          *							1 - so, it is rolled back on allOrNone = false retry
          *							2 - hence we point at it with a static variable that isn't rolled back on retry
          *							3 - If the two don't agree, we know we are retrying and static map must be reset to empty
          **/
          private void establishStateTrust() {
              if (txnStateIdAsProxyForStateTrust == null) { // no trust yet setup
                  resetStateTrust();
              }
              else {
                  //	if we have an sobject, has it been rolled back because we are in an AllOrNone = false
                  //	(partial success) SFDC-initiated retry use case on the "successes"?
                  Transaction_State__c[] txnStates = [select Id, Name from Transaction_State__c where Id = : txnStateIdAsProxyForStateTrust];
                  if (txnStates.isEmpty()) { // static vbl points at sobject that has been rolled back
                      resetStateTrust();
                  }
                  else {}	// if the static variable we established points at an existing Transaction_State__c,
                  		// that means we are not in an AllOrNone = false retry step and the static variables
                  		// maintaining state can be relied on. Thus, triggers re-executed
                  		// as part of a workflow/Process Builder can avoid repeating logic
              }
          }
          
          private void resetStateTrust(){
             Transaction_State__c txnState = new Transaction_State__c(Name = transactionid);
             insert txnState;
             txnStateIdAsProxyForStateTrust = txnState.Id;
             clearVisitedCaches(); 
          }
          
          /**
          *	Map takes care of visited Ids by scopeKey and is valid up until the point that a retry
          *	is detected; then map is cleared and we start afresh
          **/    
      	static map<String,Set<ID>>	visitedIdsThisTxnByScopeKey = new map<String,set<ID>> ();        
              
          public virtual set<ID> getUnvisitedIdsThisTxn(String scopeKey, set<ID> proposedIds) {
              
              establishStateTrust();
              if (visitedIdsThisTxnByScopeKey.containsKey(scopeKey)) {
          		set<ID> unvisitedIds = new set<ID>(proposedIds);					// start with proposedIds as unvisited
          		unvisitedIds.removeAll(visitedIdsThisTxnByScopeKey.get(scopeKey));	// remove any Ids we've already seen
          		visitedIdsThisTxnByScopeKey.get(scopeKey).addAll(proposedIds);		// update visited set
          		return unvisitedIds;
          	}
          	else {																	// new scopeKey, hence all ids are unvisited
          		visitedIdsThisTxnByScopeKey.put(scopeKey,new set<ID>(proposedIds));
          		return proposedIds;
          	}
          }
          
          /**
          *	getVisitedIdsThisTxn - Inspect visited IDs this transaction without affecting the set (for a given scope key)
          **/
          public virtual set<ID> getVisitedIdsThisTxn(String scopeKey) {
        		return visitedIdsThisTxnByScopeKey.containsKey(scopeKey) ? visitedIdsThisTxnByScopeKey.get(scopeKey) : new set<ID>();
        	}
          
          /**
          *	getVisitedIdsThisTxn - Inspect visitedIDs this Transaction without affecting set (all scope keys)
          **/
          public virtual map<String,set<ID>> getVisitedIdsThisTxn() {
        		return visitedIdsThisTxnByScopeKey;
        	}
        	
        	
          /**
          *	clearVisitedCache()	- Clears specific visited ID cache
          **/
          public virtual void clearVisitedCache(String scopeKey) {
          	if (visitedIdsThisTxnByScopeKey.containsKey(scopekey))
          		visitedIdsThisTxnByScopeKey.get(scopekey).clear();
          	else
          		throw new TransactionService.TransactionServiceException('Invalid scopeKey: ' + scopeKey + ' for clearVisitedCaches');	
          }    
          /**
          *	clearVisitedCaches()	- Clears all visited ID caches; useful for testmethods
          **/
          public virtual void clearVisitedCaches() {
          	visitedIdsThisTxnByScopeKey.clear();
          }
          
      
      }
      
      Modify the trigger handler code as follows
      public class LeadTriggerHandler {
          public void onAfterUpdate(Lead[] leads, map<ID,Lead> oldLeads) {
          
              set<ID> unvisitedIds = new TransactionService()
                  .getUnvisitedIdsThisTxn('LeadDoFuture',oldLeads.keySet());
              for (Lead l: leads) { 
                  Lead oldLead = oldLeads.get(l.Id);
                  if (unvisitedIds.contains(l.Id) && l.Company != oldLead.Company) {
                      System.debug(LoggingLevel.INFO,'company has changed from ' + oldLead.Company   + 
                                   'to ' + l.Company + ' .. request an @future to dowork');
                      doCallout(l.Company);
                  }
              }
          }
            
          @future
          static void doCallout(String company) {
              System.debug(LoggingLevel.INFO,'future method to do callout for ' + company);
              // .. callout details not important
          }
      }
      

      The trigger handler asks the TransactionService to get all unvisited Ids for some context scope. Behind the scenes, the TransactionService saves the Ids + context scope + TransactionId in the database, thus creating a persistent store for the AllOrNone = true use case and a rollback-able store for the AllOrNone = false use case.

      Now, if you run an AllOrNone = true use case
      Lead[] leads = new list<Lead> {
          new Lead(Company = 'Foo00', LastName = 'LName00'),
          new Lead(Company = 'Foo01', LastName = 'LName01')
          };
      
      insert leads;
      leads[0].Company = 'Foo00Changed';
      leads[1].Company = 'Foo01Changed';
      update leads; // force future to execute in handler
      

      You see the future is called twice, once per record

      Workflow - Recursion - Trigger - Callout Ex 6 - allorNone true success

      If you run in an AllOrNone = false use case
      Lead[] leads = new list<Lead> {
          new Lead(Company = 'Foo00', LastName = 'LName00'),
          new Lead(Company = 'Foo01', LastName = 'LName01')
          };
      
      insert leads;
      
      leads[0].Company = 'Foo00Changed';
      leads[1].Company = 'Foo01Changed';
      leads[1].Website = 'www.failme.com';  // force partial success by failing this in VR
      Database.SaveResult[] results = Database.update(leads,false); // allow partial success
      

      You see the future is only called once for the record that does not fail validation rules
      Workflow - Recursion - Trigger - Callout Ex 5 - allorNone false success

ApexMocks, Selectors, and Formula Fields

Part three of a series.

If you’ve bought into mocking SObjects for use in unit tests, you’ve probably run into the roadblock wherein formula fields and system audit fields can’t be written when constructing SObjects in Apex. For example, these won’t compile:

 Account acct = new Account(LastModifiedDate = System.now(), Name = 'Foo');

 Opportunity oppo = new Opportunity(HasLineItems = true, ...);

An example
So, suppose you have a method-under-test that does work on an Account’s Opportunities, and the work varies based on the value of Opportunity.FormulaField__c:

public void doSomeWork(set<ID> acctIds) {
   for (Account acct : [select Id, 
                            (select Id, FormulaField__c from Opportunities)
                            from Account where Id IN: acctIds])
     for (Opportunity oppo : acct.Opportunities)
        if (oppo.FormulaField__c == 'foo') doSomeFooWork();
        else doSomeBarWork();
}

As a good practitioner of Enterprise Patterns, you convert the code to use a Selector:

public void doSomeWork(set<ID> acctIds) {
   for (Account acct : AccountsSelector.newInstance()
                            .selectWithOpposById(acctIds))
     for (Opportunity oppo : acct.Opportunities)
       if (oppo.FormulaField__c == 'foo') doSomeFooWork();
       else doSomeBarWork();
}

I’m assuming you know how to create selectors and the corresponding entry in Application.cls.

The testmethod using ApexMocks

So, let’s set up a typical ApexMocks testmethod where we mock the AccountsSelector.

@isTest private static void testDoSomeWork() {
  fflib_ApexMocks mocks = new fflib_ApexMocks();

  // Given a mock selector
  AccountsSelector mockAcctsSelector = (AccountsSelector)
      mocks.mock(AccountsSelector.class);

  // Given a mock Selector with stubbed results for FormulaField__c
  mocks.startStubbing();
  mocks.when(mockAcctsSelector.sObjectType()).thenReturn(Account.SObjectType);
  mocks.when(mockAcctsSelector
      .selectWithOpposById((set<ID>)fflib_Match.anyObject())
      )
      .thenReturn(mockAcctsWithOppos);
  mocks.stopStubbing();

  // Given injected mocks
  Application.Selector.setMock(mockAcctsSelector);

  // When doSomeWork is invoked
  new MyClass().doSomeWork(new set<ID> {});  // don't care about real AccountIds

  // Then verify (not shown here;
  // perhaps verify uow.registerNew or uow.registerDirty)
}

So, the question is: how do we create mockAcctsWithOppos, since we need values for Opportunity.FormulaField__c?

There is only one way: create JSON and deserialize it into the Account SObject. I’ve used three methods to do this:

Mocking SObjects with Json – method 1 – hard-coded strings

Account[] mockAcctsWithOppos = 
   (Account[]) Json.deserialize(someJsonString,list<Account>.class);

where someJsonString looks like this (example is a single Account with two Opportunities):

[ {
  "attributes" : {
    "type" : "Account",
    "url" : "/services/data/v41.0/sobjects/Account/0013600001FGf1HAAT"
  },
  "Id" : "0013600001FGf1HAAT",
  "Opportunities" : {
    "totalSize" : 2,
    "done" : true,
    "records" : [ {
      "attributes" : {
        "type" : "Opportunity",
        "url" : "/services/data/v41.0/sobjects/Opportunity/0063600000PPLTPAA5"
      },
      "Id" : "0063600000PPLTPAA5",
      "FormulaField__c" : "foo"
    }, {
      "attributes" : {
        "type" : "Opportunity",
        "url" : "/services/data/v41.0/sobjects/Opportunity/0063600000PPLTPAA6"
      },
      "Id" : "0063600000PPLTPAA6",
      "FormulaField__c" : "bar"
    } ]
  }
} ]

You create this Json by using Workbench to run the query, then pasting the results into either an Apex String constant or a StaticResource.
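If you go the StaticResource route, loading and deserializing might look like this sketch; the resource name MockAcctsWithOppos is hypothetical.

```apex
// Assumption: the Workbench query results were saved in a StaticResource
// named 'MockAcctsWithOppos' (hypothetical name).
String jsonBody = [SELECT Body FROM StaticResource
                   WHERE Name = 'MockAcctsWithOppos' LIMIT 1].Body.toString();
Account[] mockAcctsWithOppos =
    (Account[]) Json.deserialize(jsonBody, list<Account>.class);
```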

Mocking SObjects with Json – method 2 – fflib_ApexMocksUtils.makeRelationship

The ApexMocks package includes a utility method that constructs the deserialized Json without your having to create the actual string:

// Let's mock two Accounts, one with two Oppos, the other with none
ID[] mockAcctIds = new list<ID>();
ID[] mockOppoIds = new list<ID>();
for (Integer i = 0; i < 2; i++) {
  mockAcctIds.add(fflib_IdGenerator.generate(Account.SObjectType));
  mockOppoIds.add(fflib_IdGenerator.generate(Opportunity.SObjectType));
}

Account[] mockAcctsWithOppos = (Account[]) fflib_ApexMocksUtils.makeRelationship(
   list<Account>.class,
   new list<Account> {
     new Account(Id = mockAcctIds[0], Name = '00Account'),
     new Account(Id = mockAcctIds[1], Name = '01Account')
   },
   Opportunity.AccountId,  // the relationship field
   new list<list<Opportunity>> {
     new list<Opportunity> { // Two Oppos for Account[0]
         new Opportunity(Id = mockOppoIds[0], AccountId = mockAcctIds[0],
                         FormulaField__c = 'foo'),
         new Opportunity(Id = mockOppoIds[1], AccountId = mockAcctIds[0],
                         FormulaField__c = 'bar')
     },
     new list<Opportunity>() // no Oppos for Account[1]
   });

This is nice, as it lets you do everything without messy string constants. But the utility is limited to a single child relationship, so you can’t use it to mock Accounts with both Cases and Opportunities. I also find the list<list<Opportunity>> fourth argument confusing to construct correctly at first.

Mocking SObjects with Json – method 3 – sfab_SObjectFabricator

Matt Addy has a nice GitHub package, sfab_SObjectFabricator, to construct SObjects; it is more descriptive and isn’t limited in the number of child relationships. Here is how to use it:

// Let's mock two Accounts, one with two Oppos, the other with none
ID[] mockAcctIds = new list<ID>();
ID[] mockOppoIds = new list<ID>();
for (Integer i = 0; i < 2; i++) {
  mockAcctIds.add(fflib_IdGenerator.generate(Account.SObjectType));
  mockOppoIds.add(fflib_IdGenerator.generate(Opportunity.SObjectType));
}
Account[] mockAcctsWithOppos = new list<Account> {
  (Account) new sfab_FabricatedSObject(Account.class) // [0] has 2 Oppos
      .setField(Account.Id, mockAcctIds[0])
      .setField(Account.Name, '00Account')
      .setChildren('Opportunities', new List<sfab_FabricatedSObject> {
        new sfab_FabricatedSObject(Opportunity.class)
          .setField(Opportunity.Id, mockOppoIds[0])
          .setField(Opportunity.AccountId, mockAcctIds[0])
          .setField(Opportunity.FormulaField__c, 'foo'),
        new sfab_FabricatedSObject(Opportunity.class)
          .setField(Opportunity.Id, mockOppoIds[1])
          .setField(Opportunity.AccountId, mockAcctIds[0])
          .setField(Opportunity.FormulaField__c, 'bar')
      }).toSObject(),
  (Account) new sfab_FabricatedSObject(Account.class) // [1] has no Oppos
      .setField(Account.Id, mockAcctIds[1])
      .setField(Account.Name, '01Account')
      .toSObject()
  };

I like the sfab_SObjectFabricator approach as it is clear where you are defining children (and you can also do parents with setParent()).

Some final remarks

  1. Don’t let the inability to construct SObjects with formula or audit fields get in the way of using ApexMocks to mock inputs to, or results from, your service or domain layers. Choose one of the approaches above or roll your own to exploit Salesforce’s feature of constructing any SObject’s fields via Json deserialization.
  2. The examples above are probably more verbose than the code-under-test requires. Opportunity.AccountId, if never referenced, need not be mocked.
  3. Exploit the Unit of Work layer so you can use ApexMocks to verify that your DML (via registerXXX methods) is done as expected – without having to pay for the cost of real DML.
  4. You will still need what I call ‘end-to-end’ testing that doesn’t use mocks to verify that your selectors work against real data and return all the columns the code expects. You also need to verify that actual DML doesn’t run afoul of Validation Rules that otherwise aren’t executed when you mock the Unit Of Work.
  5. ApexMocks are a great way to explore in detail the unit test use cases by focusing the testing problem on the inputs and outputs of a given class/method.

ApexMocks and Email

Part two of a series. Posts include:

One of the problems in unit testing outbound email is that it is hard to verify that you constructed all of the properties of the outbound email as expected. This is compounded by the fact that sandbox orgs default with email deliverability ‘off’ so any attempt to use Messaging.sendEmail(emails) will throw an exception and your test breaks.

The example’s premise

Suppose we have a simple class method that constructs and sends an email:

public void sendEmail() {
   Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
   mail.setToAddresses(new list<String> {'foo@bar.com'});
   mail.setSubject('Greetings, earthlings!');
   ...
   Messaging.sendEmail(new Messaging.SingleEmailMessage[] { mail });
}

How would you unit test that the outbound email was sent to ‘foo@bar.com’ ? Not so simple. Same for the other properties of the outbound email.

Enter Enterprise Patterns and ApexMocks

Rework the Apex class to use the fflib UnitOfWork:

public void sendEmail() {
   fflib_ISObjectUnitOfWork uow = Application.UnitOfWork.newInstance();
   Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
   mail.setToAddresses(new list<String> {'foo@bar.com'});
   mail.setSubject('Greetings, earthlings!');
   ...
   uow.registerEmail(mail); // let UnitOfWork know mail is part of Txn
   uow.commitWork(); // send the mail
}

The fflib_SObjectUnitOfWork class considers outbound emails part of the transaction, hence the registerEmail(mail) method call.

Now, your testmethod looks like this:

@isTest private static void testSuccessPath() {
  fflib_ApexMocks mocks = new fflib_ApexMocks();
  // Given a mock UoW (injected)
  fflib_SobjectUnitOfWork mockUow = 
     (fflib_SobjectUnitOfWork) mocks.mock(fflib_SObjectUnitOfWork.class);
  Application.UnitOfWork.setMock(mockUow);

  // When the email method is invoked
  new MyClass().sendEmail();

  // Then verify that an email was constructed and sent
  ((fflib_SobjectUnitOfWork) mocks.verify(mockUow,
                                          mocks
                                           .times(1)
                                           .description('email sb constructed')))
    .registerEmail((Messaging.SingleEmailMessage) fflib_Match.anyObject());

  ((fflib_SobjectUnitOfWork) mocks.verify(mockUow,
                                          mocks
                                           .times(1)
                                           .description('email sb sent')))
    .commitWork();

  // Then verify that the email was constructed as expected
  // We use ArgumentCaptors for this. There are four (4) steps:

  fflib_ArgumentCaptor capturedEmailArg	= 
        fflib_ArgumentCaptor.forClass(Messaging.SingleEmailMessage.class);
  ((fflib_SobjectUnitOfWork) mocks.verify(mockUow,1))
       .registerEmail((Messaging.SingleEmailMessage)capturedEmailArg.capture());

  Object actualEmailAsObject = capturedEmailArg.getValue();
  Messaging.SingleEmailMessage actualEmail = 
        (Messaging.SingleEmailMessage) actualEmailAsObject;

  System.assertEquals('Greetings, earthlings!', 
                       actualEmail.getSubject(),
                       'subject is from friendly aliens');
  System.assertEquals(new list<String> {'foo@bar.com'},
                       actualEmail.getToAddresses(),
                       'only @bar.com domains expected');
  ... other properties.
}

Let’s look at the ArgumentCaptor, one line at a time.

fflib_ArgumentCaptor capturedEmailArg	= 
        fflib_ArgumentCaptor.forClass(Messaging.SingleEmailMessage.class);

We declare a variable of type fflib_ArgumentCaptor and set it to be of the type we want to inspect.

((fflib_SobjectUnitOfWork) mocks.verify(mockUow,1))
       .registerEmail((Messaging.SingleEmailMessage)capturedEmailArg.capture());

We ask the mocking framework to capture the argument value when the mock UnitOfWork’s registerEmail method is called with a single argument of type Messaging.SingleEmailMessage. The capture() method of the ApexMocks library does this. Note that we use the mocks.verify(..) method to do it: instead of verifying the value passed to registerEmail, we are capturing that value for later inspection.

Object actualEmailAsObject = capturedEmailArg.getValue();
Messaging.SingleEmailMessage actualEmail =
      (Messaging.SingleEmailMessage) actualEmailAsObject;

Our declared variable capturedEmailArg has a method getValue() provided by the fflib_ArgumentCaptor class. It returns an Object. There is also a getValues() method for collections. We cast this to the type we care about – Messaging.SingleEmailMessage.
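For instance, if registerEmail had been invoked more than once, a sketch of inspecting every captured argument might look like this (using the getValues() method mentioned above):

```apex
// getValues() returns each captured argument, in call order
for (Object o : capturedEmailArg.getValues()) {
    Messaging.SingleEmailMessage m = (Messaging.SingleEmailMessage) o;
    System.debug(m.getSubject());
}
```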

Now, we can assert against the actual email that the class-under-test constructed and verify each property.

System.assertEquals('Greetings, earthlings!', 
                       actualEmail.getSubject(),
                       'subject is from friendly aliens');

So, why is this cool?

  1. We are immune from the sandbox having to be configured to send emails. Because fflib is already unit tested, calls to registerEmail(someEmail) followed by commitWork() will send emails in an org configured to send them. We merely need to verify that registerEmail and commitWork were called. Since the UnitOfWork layer is mocked, we can use ApexMocks to verify that the calls to the mock UoW are as expected.
  2. The fflib_ArgumentCaptor feature of ApexMocks allows detailed inspection of arguments passed to any mockable class/method. In our example, it is a single argument of type Messaging.SingleEmailMessage, but it could be any arbitrary Apex type. Thus, your unit tests can be quite exhaustive about verifying that the contract between two objects is fulfilled without having to construct any real Sobjects, do any real DML, or query for the results.

Apex Mocks and Enterprise Patterns (First in a Series)

Part one of a series. Posts include:

It took me a bit to get here but I can unequivocally state that Apex Mocks is well worth learning to improve your unit testing.

This blog post is aimed at the Apex developer who doesn’t have Java Mockito experience (which was me). Since most of the code at our org relies on the Apex Enterprise Patterns, the examples herein exploit that. The most useful reference to me was chapter 11 (Unit Testing) of Force.com Enterprise Architecture, although there were some typos in the code examples in my printed copy.

See also Part 2 (Unit Testing email), and Part 3 (Selector mocking with formula fields)

The scenario
You have a CasesService with method reopen(set<ID> csIds).

Here’s the code; note the use of the Unit of Work pattern:

public virtual class CasesServiceImpl implements ICasesService {

    /**
    * reopen - reopen a set of Cases
    **/
    public virtual void reopen(set<ID> csIds) {
        try {
            fflib_ISObjectUnitOfWork uow = Application.UnitOfWork.newInstance();
            reopen(uow, csIds);
            uow.commitWork();
        }
        catch (Exception e) {
            throw new CasesService.CaseReopenException(e.getTypeName() + ' ' +
                e.getMessage() + ' ' + e.getStackTraceString());
        }
    }

    public virtual void reopen(fflib_ISObjectUnitOfWork uow, set<ID> csIds) {
        for (ID csId : csIds) {
            uow.registerDirty(new Case(Id = csId, Status = Cases.STATUS_NEW));
        }
    }
}

I’m presuming if you are reading this, that you already know the Apex Enterprise Pattern and have created the ICasesService.cls, CasesService.cls, and updated the Application.cls.
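For readers who want a refresher, a minimal sketch of the relevant Application.cls factory entries might look like this; the class names are assumed to mirror the examples above, not taken from any actual org.

```apex
// Sketch only, assuming fflib-apex-common factories.
public class Application {
    // UnitOfWork factory: list the SObjectTypes in dependency order
    public static final fflib_Application.UnitOfWorkFactory UnitOfWork =
        new fflib_Application.UnitOfWorkFactory(
            new List<SObjectType> { Case.SObjectType });

    // Service factory: interface => implementation
    public static final fflib_Application.ServiceFactory Service =
        new fflib_Application.ServiceFactory(
            new Map<Type, Type> {
                ICasesService.class => CasesServiceImpl.class });
}
```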

Unit testing the service
In “classic” Apex development, you would test the CasesServiceImpl.reopen(set<ID> csIds) method by creating a Case record via DML (this might also require creating an Account and Contact record). Sort of like this:

@isTest private static void testCaseReopen() {
   Account[] mockAccts = new list<Account> {
     new Account(...),
     new Account(...)
     ...
   };
   insert mockAccts;
   // and so on for the Contacts (mockContacts) and Cases (mockCases)

   Test.startTest(); 
   CasesService.reopen(new set<ID>(new map<ID,Case>(mockCases).keySet()));
   Test.stopTest();
   Case[] resultCases = [select Id, Status from Case];
   for (Integer i = 0; i < resultCases.size(); i++)
      System.assertEquals('New',resultCases[i].Status,
                          '['+i+'] Status should be New');
}

So boring. And so much DML that makes your tests run longer – especially if you have hundreds or thousands of tests. You have to query the database to test your asserts. Think about testing Opportunities – Accounts, Contacts, Opportunities, Pricebooks, Product2s, PricebookEntries, and OpportunityLineItems may all have to be inserted into the database.

And here’s the beauty
Because you are using Apex Enterprise Patterns, you recognize that the reopen method only needs to be passed Case Ids; you then verify that the Cases were registered as dirty and that commitWork was called.

  • You don’t need real Cases to generate valid Case Ids
  • You don’t need to query the updated records to see if the Status was changed. Instead, you just need to know that registerDirty() was called with the expected status of ‘New’. As long as commitWork() was called, you can rely on the updates being done (written to the database) because commitWork() is already unit tested by the fflib library.


So, what does the unit test look like?

I’ll start off by saying that the syntax takes a bit of getting used to. I’ll try to comment each step.

@isTest private static void testReopen() {
  /** variable mocks represents the mocking framework object. 
     You can call it anything **/
  
  fflib_ApexMocks mocks = new fflib_ApexMocks();  // required and first
 
  // Given Case Ids to reopen 
  /** We use fflib to generate valid-for-the-sobjectType IDs **/
  
  ID[] mockCaseIds = new list<ID>();
  for (Integer i = 0; i < 2; i++) // 2 is arbitrary but goal is bulk testing 	
    mockCaseIds.add(fflib_IdGenerator.generate(Case.SobjectType));

  // Given mock UnitOfWork
  fflib_SobjectUnitOfWork mockUow = (fflib_SobjectUnitOfWork) 
                                       mocks.mock(fflib_SObjectUnitOfWork.class);
  Application.UnitOfWork.setMock(mockUow); // inject the mock for fflib factories

  /** Wrap your service call in try catch so you can easily debug any errors **/
  // When service invoked
  try {
   CasesService.reopen(new set<ID>(mockCaseIds)); 
  }
  catch(Exception e) {
   System.assert(false,'snb, service method sb success; ' + showException(e));
  }

  // Then verify Case status is New and objects committed 
  /** mocks.verify takes two arguments:
       1 - the object being mocked - in this case, the UnitOfWork
       2 - an object of type fflib_VerificationMode. 
           This object supports a fluent pattern with methods:
            times(n) - how many times the mock object's method M is called
            atLeast(n) - method called at least n times
            atMost(n) - method called at most n times
            atLeastOnce() - method called at least one time
            between(m,n) - method called between m and n times
            never() - method never called
            calls(n) - method called n times w/ InOrder verifier (see doc)
            description(text) - equivalent to the third argument for System.assertEquals()
  
       Once cast to the type of the mock object, then verification is that the 
       registerDirty method was called (twice) with an Sobject of type Case
  **/         
  ((fflib_SobjectUnitOfWork) 
    mocks.verify(mockUow,mocks
                  .times(2)
                  .description('both Cases should be marked as New'))
  )
   .registerDirty(fflib_Match.sObjectOfType(Case.SObjectType));


  /** In this verify, assert registerDirty was called with an SObject matching 
      1 - Case.Id = the first mocked CaseId
      2 - Case.Status is 'New'

     This is done with a matcher (type = fflib_Match) argument. 
     The framework looks to see if the method (registerDirty) 
     was called with an argument that "matches" the matcher.

     There are a large number of matchers, see class
     fflib_Match (most of the Mockito matchers are mirrored). 
     One of the Apex-specific ones is sObjectWith that takes a map of
     SObjectField => Object
  **/

  ((fflib_SobjectUnitOfWork) 
    mocks.verify(mockUow,mocks
                  .times(1)
                  .description('Case[0] sb reopened'))
  )
   .registerDirty(fflib_Match.sObjectWith(new map<SObjectField,Object> {
        Case.Id => mockCaseIds[0],
        Case.Status => Cases.STATUS_NEW}));
		
  /** In this verify, assert the second Case ID also became status New **/
  ((fflib_SobjectUnitOfWork) 
    mocks.verify(mockUow,mocks
                  .times(1)
                  .description('Case[1] sb reopened'))
  )
   .registerDirty(fflib_Match.sObjectWith(new map<SObjectField,Object> {
        Case.Id => mockCaseIds[1],
        Case.Status => Cases.STATUS_NEW}));

  /** Finally, assert records were committed to Database. 
      As commitWork has no args there is no Matcher **/

  ((fflib_SobjectUnitOfWork) 
    mocks.verify(mockUow,mocks
                   .times(1)
                   .description('both Cases should be committed'))
  )
   .commitWork();

}

Some meta remarks

  1. Using ApexMocks requires some wordy syntax that is not immediately apparent when you first see it
  2. It is especially useful for unit testing smallish methods that have few dependencies (which is how they should be written in the first place). Doing unit testing improves your confidence that methods will do what they are supposed to do when orchestrated into larger transactions.
  3. You can mock only non-static methods of objects.
  4. You need to invest time in learning how Matchers work. The classes fflib_Match and fflib_ApexMocksTest are helpful, as are Mockito books and the Mockito web site.
  5. As we’ll see in subsequent posts, you can mock selectors, including formula fields and system audit fields, without having to do any DML or SOQL. You can also mock the domain layer and service layer, and for that matter any non-static object method (except async ones).

The goal of this post was to explain the syntax. Try it out on a small class and you’ll feel you are upping your unit test game, leaving a stronger legacy of test code to yourself, your team members, and your successors.

Lead Conversion, Tasks, and Order of Execution

Starting condition

  • Suppose you have a Lead with one or more activities.
  • Suppose when you convert the Lead, on the OOTB Lead Convert page, you add a new activity.

What happens at Lead Conversion and in what order

  1. Before/after update trigger fires on the Tasks that existed on the Lead pre-conversion.
  2. Before/after insert trigger fires on the new Task specified in the Lead Convert UI

In both use cases, the Task is prepopulated with a WhatId of the parent Opportunity

And, for Task Validation Rules…

  • If the Task Validation Rule fails the Tasks existing pre-conversion, the Lead Conversion UI will surface the error
  • If the Task Validation Rule fails the Task specified on the conversion page, the Task is not inserted nor is an error surfaced

If you want your trigger to detect that a Task is participating in a Lead Conversion, there is no robust solution.

Hacky solution:
If all Lead Converts go through the OOTB Lead Convert page, then System.URL.getCurrentRequestUrl().getPath().endsWith('leadconvert.jsp') will work.
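As a sketch, the heuristic could be applied in a Task trigger like this; the trigger name and what you do with the flag are up to you, and the check is fragile by the author’s own caveat.

```apex
trigger TaskTrigger on Task (before insert, before update) {
    // Fragile heuristic: valid only if every conversion uses the OOTB page
    Boolean inLeadConvertUi = System.URL.getCurrentRequestUrl()
        .getPath().endsWith('leadconvert.jsp');
    for (Task t : Trigger.new) {
        if (inLeadConvertUi) {
            // e.g., bypass automation that shouldn't run during Lead Conversion
        }
    }
}
```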