Author Archives: eric.kintzer@cropredy.com

Apex Mocks and Enterprise Patterns (First in a Series)

It took me a bit to get here but I can unequivocally state that Apex Mocks is well worth learning to improve your unit testing.

This blog post is aimed at the Apex developer who doesn’t have Java Mockito experience (which was me). Since most of the code at our org relies on the Apex Enterprise Patterns, the examples herein will exploit that. The most useful reference for me was chapter 11 (Unit Testing) of Force.com Enterprise Architecture, although there were some typos in the code examples in my printed copy.

The scenario
You have a CasesService with a method reopen(Set<Id> csIds).

Here’s the code: Note use of the Unit of Work Pattern

public virtual class CasesServiceImpl implements ICasesService {

    /**
    * reopen - reopen a set of Cases
    **/
    public virtual void reopen(Set<Id> csIds) {
        try {
            fflib_ISobjectUnitOfWork uow = Application.UnitOfWork.newInstance();
            reopen(uow, csIds);
            uow.commitWork();
        }
        catch (Exception e) {
            throw new CasesService.CaseReopenException(e.getTypeName() + ' ' + e.getMessage() + ' ' + e.getStackTraceString());
        }
    }

    public virtual void reopen(fflib_ISobjectUnitOfWork uow, Set<Id> csIds) {
        for (Id csId : csIds) {
            uow.registerDirty(new Case(Id = csId, Status = Cases.STATUS_NEW));
        }
    }
}

I’m presuming that if you are reading this, you already know the Apex Enterprise Patterns and have created ICasesService.cls and CasesService.cls, and updated Application.cls.
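For completeness, here is a minimal sketch of what that wiring might look like. Only CasesService, ICasesService, CasesServiceImpl, and CaseReopenException come from this post; the rest follows common fflib conventions and may differ in your org:

```apex
// Hypothetical sketch of the static service facade (CasesService.cls)
public class CasesService {
    public class CaseReopenException extends Exception {}

    // delegate to whatever implementation the Application factory resolves
    public static void reopen(Set<Id> csIds) {
        service().reopen(csIds);
    }

    private static ICasesService service() {
        return (ICasesService) Application.Service.newInstance(ICasesService.class);
    }
}

// In Application.cls (excerpt), the factory binds interface to implementation:
//   public static final fflib_Application.ServiceFactory Service =
//       new fflib_Application.ServiceFactory(
//           new Map<Type, Type> { ICasesService.class => CasesServiceImpl.class });
```

The facade keeps callers decoupled from CasesServiceImpl, which is what later lets the mocking framework substitute a mock implementation.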

Unit testing the service
In “classic” Apex development, you would test the CasesServiceImpl.reopen(Set<Id> csIds) method by creating a Case record via DML (which might also require creating an Account and a Contact record). Sort of like this:

@isTest private static void testCaseReopen() {
   Account[] mockAccts = new List<Account> {
     new Account(...),
     new Account(...)
     ...
   };
   insert mockAccts;
   // and so on for the Contacts (mockContacts) and Cases (mockCases)

   Test.startTest();
   CasesService.reopen(new Set<Id>(new Map<Id, Case>(mockCases).keySet()));
   Test.stopTest();
   Case[] resultCases = [select Id, Status from Case];
   for (Integer i = 0; i < resultCases.size(); i++)
      System.assertEquals('New', resultCases[i].Status,
                          '[' + i + '] Status should be New');
}

So boring. And all that DML makes your tests run longer – especially if you have hundreds or thousands of tests. You also have to query the database to verify your asserts. Think about testing Opportunities – Accounts, Contacts, Opportunities, Pricebooks, Product2s, PricebookEntries, and OpportunityLineItems may all have to be inserted into the database.

And here’s the beauty
Because you are using Apex Enterprise Patterns, you recognize that the reopen method only needs to be passed in Case Ids and then verified that the Cases were registered as dirty and that commitWork was called.

  • You don’t need real Cases to generate valid Case Ids
  • You don’t need to query the updated records to see if the Status was changed. Instead, you just need to know that registerDirty() was called with the expected status of ‘New’. As long as commitWork() was called, you can rely on the updates being done (written to the database) because commitWork() is already unit tested by the fflib library.


So, what does the unit test look like?

I’ll start off by saying that the syntax takes a bit of getting used to. I’ll try to comment each step.

@isTest private static void testReopen() {
  /** variable mocks represents the mocking framework object. 
     You can call it anything **/
  
  fflib_ApexMocks mocks = new fflib_ApexMocks();  // required and first
 
  // Given Case Ids to reopen 
  /** We use fflib to generate valid-for-the-sobjectType IDs **/
  
  ID[] mockCaseIds = new list<ID>();
  for (Integer i = 0; i < 2; i++) // 2 is arbitrary but goal is bulk testing 	
    mockCaseIds.add(fflib_IdGenerator.generate(Case.SobjectType));

  // Given mock UnitOfWork
  fflib_SobjectUnitOfWork mockUow = (fflib_SobjectUnitOfWork) 
                                       mocks.mock(fflib_SObjectUnitOfWork.class);
  Application.UnitOfWork.setMock(mockUow); // inject the mock for fflib factories

  /** Wrap your service call in try catch so you can easily debug any errors **/
  // When service invoked
  try {
   CasesService.reopen(new set<ID>(mockCaseIds)); 
  }
  catch(Exception e) {
   System.assert(false, 'service method should not have thrown an exception; ' + e.getMessage());
  }

  // Then verify Case status is New and objects committed 
  /** mocks.verify takes two arguments:
       1 - the object being mocked - in this case, the UnitOfWork
       2 - an object of type fflib_VerificationMode. 
           This object supports a fluent pattern with methods:
            times(n) - how many times the mock object's method M is called
            atLeast(n) - method called at least n times
            atMost(n) - method called at most n times
            atLeastOnce() - method called at least one time
            between(m,n) - method called between m and n times
            never() - method never called
            calls(n) - method called n times w/ InOrder verifier (see doc)
            description(text) - equivalent to the third argument for System.assertEquals()
  
       Once cast to the type of the mock object, then verification is that the 
       registerDirty method was called (twice) with an Sobject of type Case
  **/         
  ((fflib_SobjectUnitOfWork) 
    mocks.verify(mockUow,mocks
                  .times(2)
                  .description('both Cases should be marked as New'))
  )
   .registerDirty(fflib_Match.sObjectOfType(Case.SObjectType));


  /** In this verify, assert registerDirty was called with an SObject matching 
      1 - Case.Id = the first mocked CaseId
      2 - Case.Status is 'New'

     This is done with a matcher (type = fflib_Match) argument. 
     The framework looks to see if the method (registerDirty) 
     was called with an argument that "matches" the matcher.

     There are a large number of matchers, see class
     fflib_Match (most of the Mockito matchers are mirrored). 
     One of the Apex-specific ones is sObjectWith that takes a map of
     SObjectField => Object
  **/

  ((fflib_SobjectUnitOfWork) 
    mocks.verify(mockUow,mocks
                  .times(1)
                  .description('Case[0] sb reopened'))
  )
   .registerDirty(fflib_Match.sObjectWith(new map<SObjectField,Object> {
        Case.Id => mockCaseIds[0],
        Case.Status => Cases.STATUS_NEW}));
		
  /** In this verify, assert the second Case ID also became status New **/
  ((fflib_SobjectUnitOfWork) 
    mocks.verify(mockUow,mocks
                  .times(1)
                  .description('Case[1] sb reopened'))
  )
   .registerDirty(fflib_Match.sObjectWith(new map<SObjectField,Object> {
        Case.Id => mockCaseIds[1],
        Case.Status => Cases.STATUS_NEW}));

  /** Finally, assert records were committed to Database. 
      As commitWork has no args there is no Matcher **/

  ((fflib_SobjectUnitOfWork) 
    mocks.verify(mockUow,mocks
                   .times(1)
                   .description('both Cases should be committed'))
  )
   .commitWork();

}

Some meta remarks

  1. Using ApexMocks requires some wordy syntax that is not immediately apparent when you first see it
  2. It is especially useful for unit testing smallish methods that have few dependencies (which is how they should be written in the first place). Doing unit testing improves your confidence that methods will do what they are supposed to do when orchestrated into larger transactions.
  3. You can mock only non-static methods of objects.
  4. You need to invest time in learning how Matchers work. The class fflib_Match and fflib_ApexMocksTest are helpful. So are Mockito books and the Mockito web site
  5. As we’ll see in subsequent posts, you can mock selectors including formula fields and system audit fields without having to do any DML or SOQL. You can also mock the domain layer and service layer, and for that matter any non-static object method (except async)

The goal of this post was to explain the syntax. Try it out on a small class and you’ll feel you are up’ing your Unit test game, hence leaving a stronger legacy of test code to yourself, your team members, and your successors.

Lead Conversion, Tasks, and Order of Execution

Starting condition

  • Suppose you have a Lead with one or more activities.
  • Suppose when you convert the Lead, on the OOTB Lead Convert page, you add a new activity.

What happens at Lead Conversion and in what order

  1. Before/after update trigger fires on the Tasks that existed on the Lead pre-conversion.
  2. Before/after insert trigger fires on the new Task specified in the Lead Convert UI

In both use cases, the Task is prepopulated with a WhatId referencing the Opportunity created by the conversion.

And, for Task Validation Rules…

  • If the Task Validation Rule fails the Tasks existing pre-conversion, the Lead Conversion UI will surface the error
  • If the Task Validation Rule fails the Task specified on the conversion page, the Task is not inserted nor is an error surfaced

If you want your trigger to detect that a Task is participating in a Lead Conversion, there is no robust solution.

Hacky-solution:
If all Lead Converts go through the OOTB Lead Convert page, then System.URL.getCurrentRequestUrl().getPath().endsWith('leadconvert.jsp') will work
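If you do adopt the hack, here is a sketch of how it might sit in a Task trigger. The trigger and variable names are mine, not from any standard:

```apex
// Hedged sketch: detecting the OOTB lead-convert page in a Task trigger.
// Fragile by design -- it only works when every conversion goes through
// leadconvert.jsp, and getCurrentRequestUrl() is only meaningful in a
// synchronous UI request context.
trigger TaskTrigger on Task (before insert, before update) {
    Boolean isLeadConvertContext =
        System.URL.getCurrentRequestUrl().getPath().endsWith('leadconvert.jsp');

    for (Task t : Trigger.new) {
        if (isLeadConvertContext) {
            // special-case logic for Tasks touched by Lead Conversion,
            // e.g. bypass a validation that would otherwise block the convert
        }
    }
}
```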

Analytic Snapshot and Process Builder

Can the target object of an Analytic Snapshot be the target object of a Process Builder flow?

NO

The documentation states that workflows aren’t allowed but is silent on Process Builder:

    Target object must not be included in a workflow.
    The custom object in the Target Object field is included in a workflow. Choose a target object that is not included in a workflow.

I was trying to set a RecordTypeId after each snapshot record was created, using values in the SObject. The only way to do this (as triggers aren’t supported either) is via a scheduled Apex job.
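A minimal sketch of such a scheduled job, assuming a hypothetical snapshot target object Snapshot_Target__c and a hypothetical 'Monthly' record type (both names invented for illustration):

```apex
// Hypothetical sketch: stamp a record type on snapshot rows after each run.
// Snapshot_Target__c and the 'Monthly' record type are placeholder names;
// real logic would likely pick the record type from field values on the row.
public class SnapshotRecordTypeFixer implements Schedulable {
    public void execute(SchedulableContext sc) {
        Id targetRt = Schema.SObjectType.Snapshot_Target__c
            .getRecordTypeInfosByName().get('Monthly').getRecordTypeId();

        Snapshot_Target__c[] toFix =
            [select Id from Snapshot_Target__c where RecordTypeId != :targetRt];
        for (Snapshot_Target__c s : toFix) {
            s.RecordTypeId = targetRt;
        }
        update toFix;
    }
}
```

You would schedule it to run shortly after each snapshot, e.g. System.schedule('Stamp snapshot record types', '0 0 2 * * ?', new SnapshotRecordTypeFixer());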

HubSpot Synchronization and Deleted Leads

Don’t do this at home.

Our org had about 300,000 Salesforce Leads and around 200,000 HubSpot contacts. The Salesforce Leads had accumulated from 2006 and came from many sources, including previous Marketing Automation systems such as Marketo and Eloqua. For a variety of reasons, we decided to purge from Salesforce the obsolete Leads.

So, here’s what happened:

  • Run Data Loader mass delete job in Salesforce
  • Expect HubSpot to delete the corresponding contacts so the systems stay in sync
  • Discover over the course of a few months that HubSpot emails sent to reps on contact activity (like filling out a form) had broken links to the SFDC lead
  • Moderately annoying to the reps but at some point, sales insisted this be fixed

Analysis

The big reveal was that HubSpot doesn’t delete HubSpot contacts associated with SFDC leads if those contacts aren’t in the SFDC inclusion list!

Here’s an example:

  • Contact fred@foo.com comes into HubSpot from some form.
  • HubSpot adds contact to inclusion list (because form completed).
  • HubSpot contact syncs to Salesforce as a new Lead.
  • Sales marks lead with status Bogus Data.
  • HubSpot picks up change in status on next sync but ….
  • Because inclusion list rules say, exclude Status = Bogus Data, the contact is removed from the inclusion list. HubSpot maintains, as a property of the contact, the SFDC Lead Id in case the contact re-enters the inclusion list filter.
  • Salesforce mass delete removes the fred@foo.com Lead.
  • Because the SFDC Lead is no longer in the inclusion list, the delete event is not recognized by HubSpot and contact remains in HubSpot. The HubSpot contact is an orphan from the point of view of synchronization.

Remedies
We’re still exploring this but I believe the conceptual answer should be:

  1. If the HubSpot contact is ever sync’d to Salesforce, the contact should remain in the inclusion list. Thus, deletes from the Salesforce side will be deleted in HubSpot.
  2. Use HubSpot smart lists to filter out contacts marked as disqualified or bogus data or otherwise not worth engaging in new campaigns.
  3. Run periodic (monthly) Salesforce batch jobs to delete Leads that are bogus/spam after x days of existence
  4. Don’t delete from HubSpot as a HubSpot delete won’t delete in Salesforce, leaving you with an unbalanced system
  5. Make your inclusion list rules succinct – they need to be readable on one page, without scrolling
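Remedy 3 might look something like the following sketch – a batch class that is also Schedulable so it can run monthly. The Status values and the 90-day cutoff are assumptions for illustration:

```apex
// Hedged sketch of a monthly purge of bogus/spam Leads (remedy 3 above).
// Adjust the Status values and age cutoff to your org's conventions.
public class BogusLeadPurgeBatch implements Database.Batchable<SObject>, Schedulable {

    public Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator(
            [select Id from Lead
             where Status in ('Bogus Data', 'Spam')
               and CreatedDate < :Datetime.now().addDays(-90)]);
    }

    public void execute(Database.BatchableContext bc, List<Lead> scope) {
        delete scope;  // per the analysis, keep the matching HubSpot contact
    }

    public void finish(Database.BatchableContext bc) {}

    // allow System.schedule(...) to kick off the batch monthly
    public void execute(SchedulableContext sc) {
        Database.executeBatch(new BogusLeadPurgeBatch());
    }
}
```

Note that per remedy 1, the deletes will only propagate to HubSpot if the contacts are still in the inclusion list.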

Custom domain – Napili Community Template

This post written to address a confusing point in the documentation.

Let’s say you want to create a custom domain name for your Community based on the Napili template. You desire https://customers.foo.com.

Stumbling block number One

When enabling Communities in an org (and custom domains are only available in PROD), you get this screen:
communities-enablement

(screen shot is from Dev Edition so PROD will be slightly different in the default URL)

Your eyes feast on the “Important: The domain name will be used in all of your communities and can’t be changed after you save it”. But you want your community to be called customers.foo.com with no force.com in the domain at all. You get scared.

RELAX. Typically, you will use your company name, say foo, in the entry box. Think of this as the master URL for all your communities (up to 50) that Salesforce needs to host your communities. It isn’t until later that you will bind your Community to its custom domain and then to the master URL

Stumbling block number Two
When building a Community, say with the Napili template, where do you define the custom domain name?

ANSWER: You don’t. Just give your Community a good label to distinguish it from any other Communities you might create. The Community setup process prompts you for an optional suffix for your Community URL. And this URL uses a force.com domain as in foo.na2.force.com. Still not a custom domain. RELAX.

And here’s the secret sauce how it all comes together

Salesforce doesn’t make things easy with a wealth of terminology, some of which doesn’t seem to apply for the poor Napili template customizer. But the key thing to remember is that Napili-template Communities are Sites. That is, Sites with a capital S.

So step 1 to the Custom Domain (actually steps 1-4) are described in excellent detail in this Knowledge Article.

  1. Update your DNS Server With Your Custom Domain
  2. Create a Certificate Signing Request & Obtain an SSL Certificate for your domain
  3. Update your signed SSL certificate in Salesforce.
  4. Create a Custom Domain in Salesforce. Note that Communities has to be enabled in your PROD org to finish this step as you won’t be able to assign the certificate to the custom domain until Communities is enabled. Hence, previous comments above re: relaxing

At this point you are almost there. All that is left is binding your Community to the custom domain.

Go to Domains Management | Domains. The custom domain you created in step 4 above (customers.foo.com) will appear here with the attached certificate/key. Select the custom domain. Click New Custom URL.
custom-domain-custom-url

The domain field is prepopulated as you would expect, but what value goes into Site? Well, it is a lookup field, so click the spyglass and, YES, you will see your Community in the list of available Sites. Select it and save. Your Community is now bound to the custom domain, which in turn maps to the force.com domain name under which Salesforce hosts the Community.

So, what’s happening under the hood?

  1. Your Community is hosted at Salesforce hence it has a force.com domain
  2. You define an alias in DNS (the CNAME entry) between your custom domain name and a domain name that Salesforce works with that includes your orgId. In this example, customers.foo.com is aliased to customers.foo.com.yourorgId.live.siteforce.com. Full details on how CNAME works can be found in many places such as here.
  3. When your users visit customers.foo.com the actual request via DNS goes to customers.foo.com.yourorgId.live.siteforce.com. Salesforce uses the binding between your custom domain and the Site (i.e. your published/activated Community) to find and render the pages of your community – the one you maintain in Community Builder. But the URL shown on the browser is what you want.

Pro tip:

Turns out you can migrate Communities from sandboxes to PROD with Changesets. Not every setting is copied but your pages will be. In the Changeset list of components, select Sites.com. A list of Communities will appear to choose from. See, knowing that Communities are Sites comes in handy.

Schedulable Jobs – Constructors and execute()

This issue threw me for a loop for some time.

Suppose you have a Schedulable class and, when it runs, the behavior of the execute() method doesn’t match what you expect. It looks like there’s memory of previous execute() invocations.

Here is a specific example that distills the issue

public class MySchedulable implements Schedulable {

   Account[] accts = new List<Account>();

   public void execute(SchedulableContext sc) {
      accts.addAll([select Id, Name from Account where YTD_Total__c < 1000.0]);
      // do something interesting with this
   }
}

You run the job weekly. You observe in week n, where n > 1, that Accounts are being processed that don’t currently have YTD_Total__c < 1000 – Accounts with YTD_Total__c > 1000 are being processed (?!?)

Explanation

  • When the job is scheduled, the class constructor is called (here, the implicit constructor) and the object variable accts is initialized empty.
  • On week 1, when execute() is called, the accounts with YTD_Total__c < 1000 as of week 1 are added to list accts.
  • On week 2, when execute() is called, the accounts with YTD_Total__c < 1000 as of week 2 are added to list accts.

Note that the class constructor is not reinvoked for each execute(). Hence, the accts list grows and grows.

Solution
Reinitialize all object variables within the execute().

public class MySchedulable implements Schedulable {

   public void execute(SchedulableContext sc) {
      Account[] accts = new List<Account>();
      accts.addAll([select Id, Name from Account where YTD_Total__c < 1000.0]);
      // do something interesting with this
   }
}

Email2Case with recordTypes

This was annoying and worth documenting

Starting conditions

  • Record types defined on Case.
  • Email2Case configured with email address support@foo.com using RecordType A and email address orders@foo.com using RecordType B.
  • RecordType A is the default record type for the automated case user (as configured in Case Settings).
  • Automated Case User’s profile has access to both RecordType A and RecordType B.

You start testing

Since it typically requires liaison with your mail server team to establish the email addresses, verify-to-Salesforce their validity, and then auto-forward the email to the Salesforce email services address (e.g. verylongname@salesforce.com), you will be tempted during testing to try sending emails to verylongname@salesforce.com as this is what Salesforce actually would receive.

You will be surprised.

If the email address you send to maps to the non-default record type for the Automated Case User’s profile, then the record type assigned to the Case will be the default record type (!?!). This happens whether or not the Email-to-Case address is verified.

That is, in order to get the correct recordType assigned, you have to send the email to your company’s email domain and have that email forwarded to Salesforce.

Example:

  • RecordType A is default recordType for Automated Case User
  • Email-to-Case address orders@foo.com is configured to use RecordType B
  • Email-to-Case address orders@foo.com is associated to orders-verylongname@salesforce.com

Results:

  • Send email to orders-verylongname@salesforce.com. Result: Case created with recordType A ?!?
  • Verify orders@foo.com and send email to orders-verylongname@salesforce.com. Result: Case created with recordType A ?!?
  • Establish forwarding rules in your mail server that forward orders@foo.com to orders-verylongname@salesforce.com. Send email to orders@foo.com. Result: Case created with recordType B Hooray!

Email Services Forwarding Verification From Gmail

Minor trip up today

I set up an Inbound Email Service in PROD. The friendly email address was foo@bar.com; bar.com uses Gmail as its corporate email system. As you know, you need to forward the emails received at foo@bar.com to foo@verylongemailaddress.pod.apex.salesforce.com.

When the forwarding is done by Gmail, it sends a confirmation code requesting verification of the forwarding before Gmail will enable the forwarding rule.

Where do you see this confirmation code?

There’s no mailbox to look at in Salesforce. So, what I did was code a super class for my Inbound Email Handler class that had a method log(..):

    public virtual void log(Messaging.InboundEmail email, Messaging.InboundEnvelope env) {

        // headersToString is a helper (not shown) that flattens the header name/value pairs
        System.debug(LoggingLevel.INFO, '\n InboundEmail Envelope:\n' + 'from:' + env.fromAddress + ' to: ' + env.toAddress + '\n' +
            '\n  subject    :' + email.subject +
            '\n  ccAddresses:' + email.ccAddresses +
            '\n  toAddresses:' + email.toAddresses +
            '\n  fromAddress:' + email.fromAddress +
            '\n  fromName   :' + email.fromName +
            '\n  headers    :' + '\n' + headersToString(email.headers) +

            '\n  #binAtch   :' + (email.binaryAttachments != null ? email.binaryAttachments.size() : 0) +
            '\n  #textAtch  :' + (email.textAttachments != null ? email.textAttachments.size() : 0) +

            '\n  plainTextBody:' + email.plainTextBody +
            '\n  htmlBody     :' + email.htmlBody
        );
    }

I invoked this method in the InboundEmailHandler (called Email2Lead in this use case) so every message would be logged

public Email2Lead() {
    super();
}

public Messaging.InboundEmailResult handleInboundEmail(Messaging.InboundEmail email, Messaging.InboundEnvelope env) {
    Messaging.InboundEmailResult res = new Messaging.InboundEmailResult();
    log(email, env);
    // ... do work here ...
    res.success = true;
    return res;
}

So: turn on a debug log for the context user of the Inbound Email Service, use Gmail to send the confirmation request for the forwarding rule, and then look for the confirmation code in the plainTextBody value in your debug log.

What if you don’t get the confirmation code?

The most likely reason is that your Inbound Email Service is configured to only accept emails from certain domains. This list of domains needs to include google.com as that is the source of the confirmation code for the forwarding rule. The source is not your corporate mail system (e.g. bar.com)

SObject method isClone() Nuance

Discovered something doing a unit test today

SObject class method isClone() does not return true unless the clone source SObject exists in the database.

Account a = new Account(name = '00clonesrc');
insert a;
Account aClone = a.clone(false,true,false,false);
system.debug(LoggingLevel.Info,'isClone='+aClone.isClone());

The debug line shows as: isClone=true

but, don’t do the insert as in this example:

Account a = new Account(name = '00clonesrc');
Account aClone = a.clone(false,true,false,false);
system.debug(LoggingLevel.Info,'isClone='+aClone.isClone());

The debug line shows as: isClone=false

Normally, this might not be an issue but I was unit testing a service layer method and passing cloned sobjects in as arguments without doing the database operation in order to make the tests run faster. This is one place where the DML is required in order to get isClone() to work as expected.

Update 2016-10-04

Per suggestion by Adrian Larson, I retried using a dummy ID

Account a = new Account(id = '001000000000000000', name = '00clonesrc');
Account aClone = a.clone(false,true,false,false);
system.debug(LoggingLevel.Info,'isClone='+aClone.isClone());

The debug line shows as: isClone=true

Duplicate Rule Woes

A long time ago, I implemented duplicate checking in Apex for custom object Foo__c. I decided it was time to use SFDC’s out-of-the-box Duplicate Rules so I could move the logic into point-and-click configuration and clean up the code.

Good idea eh?

Well, sort of. There are some considerations before you jump into this.

Starting condition:
My existing Apex logic checked for duplicates intrabatch as well as extrabatch. Meaning, if two Foo__c‘s with the same key appeared in the same trigger set, they were both flagged as duplicate errors. Similarly, if any Foo__c within the batch matched an existing Foo__c outside of the batch, it would be flagged as an error.

Consideration (coding)

  • Unfortunately, SFDC duplicate rules won’t block intrabatch duplicates. This is documented in the Help
  • Doubly unfortunate, once you insert in a batch duplicate Foos, if you edit one of the duplicate Foos without changing the matching key field, SFDC won’t check it against the other Foo with the same key. For example, if you bulk uploaded in one batch two Foos, each with key ‘Bar’, SFDC doesn’t detect duplicates. When you go to edit one of the Foos with key ‘Bar’ and change any field other than the matching key, SFDC won’t tell you that Foo i with key ‘Bar’ is the same as existing Foo j with key ‘Bar’.

That said, you do get to eliminate any Apex code that does SOQL to check for duplicates extrabatch.

Workaround
If you really want to block Foos with the same key from getting into the database, you have to implement Apex verification in your domain layer (i.e. trigger). No SOQL is required because all the records that have to be checked will be in Trigger.new.
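A sketch of that domain-layer check, assuming a hypothetical matching-key field Foo__c.Key__c (your key field and error wording will differ):

```apex
// Hedged sketch: block intrabatch duplicates in the domain layer.
// Call from a before insert/update trigger:
//   FoosDomain.blockIntrabatchDuplicates(Trigger.new);
// addError only takes effect on records in Trigger.new.
public class FoosDomain {
    public static void blockIntrabatchDuplicates(List<Foo__c> newFoos) {
        Map<String, Foo__c> firstSeenByKey = new Map<String, Foo__c>();
        for (Foo__c f : newFoos) {
            if (f.Key__c == null) { continue; }
            Foo__c firstSeen = firstSeenByKey.get(f.Key__c);
            if (firstSeen != null) {
                // flag both members of the duplicate pair
                firstSeen.addError('Duplicate key within batch: ' + f.Key__c);
                f.addError('Duplicate key within batch: ' + f.Key__c);
            } else {
                firstSeenByKey.put(f.Key__c, f);
            }
        }
    }
}
```

Because the map is built from Trigger.new alone, no SOQL is consumed; the extrabatch case is left to the Duplicate Rules.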

Consideration (deployment)
As of V37.0 (Summer 16), there is no way to deploy Duplicate Rules through ant or Change Sets. You have to manually add the Duplicate Rules in your target orgs. You can deploy MatchingRules via ant or Change Sets but that won’t do you much good as they have to be bound to a Duplicate Rule. There is an Idea worth voting up.