
ApexMocks and Email

Part two of a series. See Part 1 and Part 3

One of the problems in unit testing outbound email is that it is hard to verify that you constructed all of the properties of the outbound email as expected. This is compounded by the fact that sandbox orgs default with email deliverability set to ‘off’, so any attempt to use Messaging.sendEmail(emails) will throw an exception and your test breaks.

The example’s premise

Suppose we have a simple class method that constructs and sends an email:

public void sendEmail() {
   Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
   mail.setToAddresses(new List<String> {'foo@bar.com'});
   mail.setSubject('Greetings, earthlings!');
   ...
   Messaging.sendEmail(new Messaging.SingleEmailMessage[] { mail });
}

How would you unit test that the outbound email was sent to ‘foo@bar.com’? Not so simple. The same goes for the other properties of the outbound email.

Enter Enterprise Patterns and ApexMocks

Rework the Apex class to use the fflib UnitOfWork:

public void sendEmail() {
   fflib_ISObjectUnitOfWork uow = Application.UnitOfWork.newInstance();
   Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
   mail.setToAddresses(new List<String> {'foo@bar.com'});
   mail.setSubject('Greetings, earthlings!');
   ...
   uow.registerEmail(mail); // let UnitOfWork know mail is part of Txn
   uow.commitWork(); // send the mail
}

The fflib_SObjectUnitOfWork class considers outbound emails part of the transaction, hence the registerEmail(mail) method call.
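For context, the Application.UnitOfWork factory referenced above comes from the fflib Apex Enterprise Patterns library. A minimal sketch of what that factory declaration might look like (the SObjectType list here is illustrative only):

public class Application {
    // Factory that returns a real UnitOfWork in production code,
    // or the mock injected via setMock() in tests (see below).
    public static final fflib_Application.UnitOfWorkFactory UnitOfWork =
        new fflib_Application.UnitOfWorkFactory(
            new List<SObjectType> { Account.SObjectType, Contact.SObjectType });
}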

Now, your testmethod looks like this:

@isTest private static void testSuccessPath() {
  fflib_ApexMocks mocks = new fflib_ApexMocks();
  // Given a mock UoW (injected)
  fflib_SObjectUnitOfWork mockUow =
     (fflib_SObjectUnitOfWork) mocks.mock(fflib_SObjectUnitOfWork.class);
  Application.UnitOfWork.setMock(mockUow);

  // When the email method is invoked
  new MyClass().sendEmail();

  // Then verify that an email was constructed and sent
  ((fflib_SObjectUnitOfWork) mocks.verify(mockUow,
                                          mocks
                                           .times(1)
                                           .description('email sb constructed')))
    .registerEmail((Messaging.SingleEmailMessage) fflib_Match.anyObject());

  ((fflib_SObjectUnitOfWork) mocks.verify(mockUow,
                                          mocks
                                           .times(1)
                                           .description('email sb sent')))
    .commitWork();

  // Then verify that the email was constructed as expected
  // We use ArgumentCaptors for this. There are four (4) steps:

  fflib_ArgumentCaptor capturedEmailArg =
        fflib_ArgumentCaptor.forClass(Messaging.SingleEmailMessage.class);
  ((fflib_SObjectUnitOfWork) mocks.verify(mockUow, 1))
       .registerEmail((Messaging.SingleEmailMessage) capturedEmailArg.capture());

  Object actualEmailAsObject = capturedEmailArg.getValue();
  Messaging.SingleEmailMessage actualEmail = 
        (Messaging.SingleEmailMessage) actualEmailAsObject;

  System.assertEquals('Greetings, earthlings!', 
                       actualEmail.getSubject(),
                       'subject is from friendly aliens');
  System.assertEquals(new List<String> {'foo@bar.com'},
                       actualEmail.getToAddresses(),
                       'only @bar.com domains expected');
  // ... other properties.
}

Let’s look at the ArgumentCaptor, one line at a time.

fflib_ArgumentCaptor capturedEmailArg =
        fflib_ArgumentCaptor.forClass(Messaging.SingleEmailMessage.class);

We declare a variable of type fflib_ArgumentCaptor and set it to be of the type we want to inspect.

((fflib_SObjectUnitOfWork) mocks.verify(mockUow, 1))
       .registerEmail((Messaging.SingleEmailMessage) capturedEmailArg.capture());

We ask the mocking framework to capture the value of the Messaging.SingleEmailMessage argument whenever the mock UnitOfWork’s registerEmail method is called. The capture() method of the ApexMocks library does this. Note that we still go through mocks.verify(..): instead of verifying the value passed to registerEmail, we are capturing that value for later inspection.

Object actualEmailAsObject = capturedEmailArg.getValue();
Messaging.SingleEmailMessage actualEmail =
      (Messaging.SingleEmailMessage) actualEmailAsObject;

Our declared variable capturedEmailArg has a getValue() method, provided by the fflib_ArgumentCaptor class, that returns an Object. (There is also a getValues() method for collections.) We cast this to the type we care about: Messaging.SingleEmailMessage.
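If the class under test had registered more than one email, the same captor could collect them all. A minimal sketch, assuming ApexMocks follows Mockito’s convention that getValue() returns the most recently captured argument:

// Hypothetical: suppose sendEmail() had registered two emails.
fflib_ArgumentCaptor captor =
      fflib_ArgumentCaptor.forClass(Messaging.SingleEmailMessage.class);
((fflib_SObjectUnitOfWork) mocks.verify(mockUow, 2))
     .registerEmail((Messaging.SingleEmailMessage) captor.capture());

// Assumption: getValue() yields the last captured email (Mockito
// semantics); earlier ones come back from the collection variant.
Messaging.SingleEmailMessage lastEmail =
      (Messaging.SingleEmailMessage) captor.getValue();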

Now, we can assert against the actual email that the class-under-test constructed and verify each property.

System.assertEquals('Greetings, earthlings!', 
                       actualEmail.getSubject(),
                       'subject is from friendly aliens');

So, why is this cool?

  1. We are immune from the sandbox having to be configured to send emails. Because fflib is already unit tested, calls to registerEmail(someEmail) followed by commitWork() will send emails in an org configured to send emails. We merely need to verify that registerEmail and commitWork got called. Since the UnitOfWork layer is mocked, we can use ApexMocks to verify that calls to the mock UoW are as expected.
  2. The fflib_ArgumentCaptor feature of ApexMocks allows detailed inspection of arguments passed to any mockable class/method. In our example, it is a single argument of type Messaging.SingleEmailMessage, but it could be any arbitrary Apex type. Thus, your unit tests can be quite exhaustive about verifying that the contract between two objects is fulfilled without having to construct any real Sobjects, do any real DML, or query for the results.

Rerender custom component after main page inlineEdit Save

Sounds simple. You have a Visualforce (VF) page with apex:detail and a custom component. You want to rerender the custom component after the inlineEdit Save command completes. The VF markup is trivial:

<apex:page standardController="Account">
  <apex:form >
      <c:MyComponent id="theComponent" someArg="false"/>  
      <apex:detail inlineEdit="true" subject="{!Account}" rerender="theComponent"/>
  </apex:form>
</apex:page>

So, this works great except… if someArg is used to conditionally render a table row or column, as in this example:

<apex:component >
    <apex:attribute name="someArg" type="Boolean" required="false" default="false" description="TRUE if component not part of export"/>
    <table>
      <thead>
        <apex:outputPanel rendered="{!NOT(someArg)}">
          <tr id="headerRow">
            <th><apex:outputText value="Column 1 Now: {!NOW()}"/></th>
            <th><apex:outputText value="Column 2 NotSomeArg"/></th>
          </tr> 
        </apex:outputPanel>    
        <apex:outputPanel rendered="{!someArg}">
          <tr>
            <th><apex:outputText value="Column 1 Now: {!NOW()}"/></th>
            <th><apex:outputText value="Column 2 someArg"/></th>    
          </tr>
        </apex:outputPanel>
      </thead>    
    </table>                                                                                           
</apex:component>

The problem
I had a pretty sophisticated component that sometimes rendered 9 columns and sometimes 11 columns in an HTML table, depending on the value of someArg. This works well on initial page load. But when the business requirements changed and I needed to rerender the component after inlineEdit Save, the rerender action failed. In fact, it failed hard, leaving blank spaces in the table header and no table rows.

After spending way too much time thinking it was something about inlineEdit and components, other arguments not being passed, or something in the component’s controller, I stumbled upon this sentence in the documentation (something I knew about already, but because the markup was buried inside a sophisticated component, I didn’t think it applied):

You cannot use the reRender attribute to update content in a table.

The solution?
No good answers here.

  1. You have to refactor your component so that you either have two components, one for x columns and one for y columns, sharing subcomponents to avoid duplication or …
  2. rethink your design or …
  3. after inlineEdit completes, reload the entire page with oncomplete="location.reload();" (sketched below). I tried this but the user experience was poor: first inlineEdit does its Ajax refresh and all looks fine, and then the page reloads. Since this is contrary to the way other pages in Salesforce refresh after inlineEdit Save, I eschewed this option.
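For completeness, here is a minimal sketch of option 3, applied to the page from the opening example (the oncomplete handler is the only change):

<apex:page standardController="Account">
  <apex:form >
      <c:MyComponent id="theComponent" someArg="false"/>
      <!-- After inlineEdit Save finishes its Ajax refresh, force a
           full page reload so the table-based component re-renders -->
      <apex:detail inlineEdit="true" subject="{!Account}"
                   oncomplete="location.reload();"/>
  </apex:form>
</apex:page>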

Side Note
Here’s a tip when you’re stuck with a complex piece of code that is no longer working for some inexplicable reason: rebuild the code from its most elementary pieces in your Dev Edition so you have a working version, then add in the suspect bits one at a time until it fails. Try to avoid your custom objects and customizations; reproduce using the OOB Account object or the like. A side benefit of doing this is that if you’re really stuck, you’ll have a short, self-contained, compilable example (SSCCE) – suitable for publishing on a forum for community assistance.

Testing a Batchable + Queueable + Schedulable

It isn’t super clear in the documentation (V36) what happens in a test method when a Batchable, Queueable, and Schedulable are all involved within the Test.startTest()...Test.stopTest() execution scope.

The system executes all asynchronous processes started in a test method synchronously after the Test.stopTest statement

So, I decided to do a simple experiment:

The class (acts as both batchable and queueable, and defines an inner schedulable)

public with sharing class FooBatchableQueueable
             implements Database.Batchable<SObject>, Queueable {
    
    
    public Database.QueryLocator start(Database.BatchableContext bc) {
    	System.debug(LoggingLevel.INFO,'Entered Batchable start()...');
    	return  Database.getQueryLocator([select id from Group 
                                              where DeveloperName = 'Foo']);
    }
    
    public void execute(Database.BatchableContext bc, List<Group> scope) {
    	System.debug(LoggingLevel.INFO,'Entered Batchable execute()...');
    	System.enqueueJob(new FooBatchableQueueable());
    	System.debug(LoggingLevel.INFO,'within Batchable execute(), after enqueuing the job...');
    }
    public void finish(Database.BatchableContext bc) {
    	System.debug(LoggingLevel.INFO,'Entered Batchable finish()...');
    	System.schedule('FooSchedulable','0 0 0 1 1 ?', new FooSchedulable());
    	System.debug(LoggingLevel.INFO,'within Batchable finish(), after scheduling');
    }
    
    public void execute(QueueableContext qc) {
    	System.debug(LoggingLevel.INFO,'reached Queueable execute()');
    }
    

    public class FooSchedulable implements Schedulable {
    	public void execute(SchedulableContext sc) {
    		System.debug(LoggingLevel.INFO,'reached Schedulable execute()');
    	}   	
    }
}

And the testmethod

@isTest
private with sharing class FooBatchableQueueableTest {
    
    @isTest private static void testBatchableQueueable() {
    	insert new Group(DeveloperName='Foo', name='Foo', type='Regular');
    	Test.startTest();
    	Database.executeBatch(new FooBatchableQueueable());
    	Test.stopTest();
    	//	Async batchable should execute, then queueable,  
        //      then schedulable. Or do they? See debug log
    }
}

And, what does happen?

  1. The batchable start() and execute() execute fine.
  2. The execute() calls System.enqueueJob(..).
  3. The Queueable job starts, and its execute() method is invoked. See the debug log.
  4. The batchable finish() method executes. It does a System.schedule() on a new object.
  5. The schedulable’s execute() does not start.

Debug log

Entered Batchable start()…
Entered Batchable execute()…
within Batchable execute(), after enqueuing the job…
reached Queueable execute()
Entered Batchable finish()…
within Batchable finish(), after scheduling

Conclusion

  • Both the batchable and the queueable, as async transactions, are executed “synchronously” once Test.stopTest() is reached in the testmethod.
  • You definitely cannot assume that the batch finish() will execute before the queueable execute().
  • The constructor for the schedulable class will get invoked, but not its execute() method; note that no debug log appears from within the schedulable’s execute().
  • You will need to explicitly test the schedulable by mocking the environment prior to its scheduling and then invoking it in a separate testmethod, as sketched below.
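A minimal sketch of such a testmethod, scheduling the inner class directly (the CronTrigger assertion is illustrative):

@isTest private static void testSchedulable() {
    Test.startTest();
    // Schedule the job ourselves rather than relying on the batch finish();
    // the schedulable's execute() then fires synchronously at stopTest().
    String cronExp = '0 0 0 1 1 ?';
    String jobId = System.schedule('FooSchedulableTest', cronExp,
                                   new FooBatchableQueueable.FooSchedulable());
    Test.stopTest();
    // Verify the job was scheduled with the expected cron expression
    CronTrigger ct = [select CronExpression from CronTrigger where Id = :jobId];
    System.assertEquals(cronExp, ct.CronExpression);
}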

Conga Composer – INVALID_FIELD: No such relation ‘Product2’ on entity ‘OpportunityLineItem’

I had built a Conga Query that exploited the Product2 relationship from the OpportunityLineItem. Yet, when I used this in a Conga Composer URL, I got this error:

INVALID_FIELD: No such relation 'Product2' on entity 'OpportunityLineItem'

I knew that the Product2 relationship on OpportunityLineItem was added by SFDC in V30,

yet my Conga Composer URL was V33:

&serverUrl=https://mydomain.my.salesforce.com/services/Soap/u/33.0/someId

So, what gives?

Per Conga Support, the serverUrl is used by Conga solely to determine the instance name; they smash the rest of the URL down to V29 (!%!).

Of course, this could change in the future so, hopefully, this post will become obsolete.

Workaround
Instead of using the OpportunityLineItem.Product2 relationship, you can fall back to the OpportunityLineItem.PricebookEntry.Product2 relationship, as shown below. I don’t know what I would do if I needed a Conga Query on Order – an SObject introduced in V30.
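For example, a Conga Query along these lines uses only pre-V30 relationship paths (the field list and the {pv0} Opportunity Id bind are illustrative):

select Id, Quantity, UnitPrice,
       PricebookEntry.Product2.Name,
       PricebookEntry.Product2.ProductCode
  from OpportunityLineItem
 where OpportunityId = '{pv0}'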

SOQL Query Optimization

Pardner, you sure got some ‘xplaining’ to do

In PROD, I had a daily REST query that executed against another SFDC org. It retrieved about 1000 or so Opportunities and had been executing just fine, night after night. But today, the Apex Jobs log showed ‘Read timed out’. In fact, three days’ worth – ouch.

First thought – timeout argument too low
Perhaps the timeout needed to be bumped up in the HTTPRequest call?

This wasn’t it; I was already using this Apex to set the timeout to its maximum:

hRqst.setTimeout(120000);  // max is 120 secs = 120000 ms

Second thought – the REST SOQL query needed optimization
Of course, I wanted to deny this possibility, as the REST query had been running along just fine for weeks. But what if the target SFDC org had gotten much bigger, causing the query to slow down? After all, the target org (not managed by me) actually has hundreds of thousands of Opportunities, but I thought I was using an index, so why would there be a problem?

Let’s take a look at the query (simplified) for discussion purposes:

   select id, stageName, 
      (select  unitPrice, totalPrice, pricebookEntry.name from OpportunityLineItems) 
     from Opportunity where closedate >= THIS_QUARTER and ID IN 
       (select opportunityId from opportunityLineItem 
          where pricebookEntry.name = 'FOO' OR 
                (bar__c = 'BAR' and is_Active__c = true))

You can see that the query does a semi-join – it is looking for Opportunities where at least one Opportunity Product meets some criteria.

Now, distant genetic memory kicked in and I remembered that SFDC introduced Query Plan explains back in V30. I hadn’t had a reason to look at this as I wasn’t experiencing any performance issues until now.

Using SFDC Workbench REST Explorer, I logged into the target SFDC org (the one where the query executes). I created this REST GET:

 
/services/data/v32.0/query?explain=select+id,+(select+unitPrice,+totalPrice,+pricebookEntry.name+from+OpportunityLineItems)+from+Opportunity+where+closedate+>=+THIS_QUARTER+and+ID+IN+(select+opportunityId+from+opportunityLineItem+where+pricebookEntry.name+=+'FOO'+OR+(bar__c+=+'BAR'+and+is_active__c+=+true))

And here was the result: three possible plans, the first of which has a relative cost of < 1.0, so it meets the ‘selective query definition’ and will use an index. (Screenshot: SoqlExplainInitial, the Query Plan response.)

So, although 0.826 isn’t great, there are over 500,000 records to consider. So why was the query timing out?

Third thought – The Explain feature doesn’t consider the semi-join portion
Since there are really two queries being executed and joined together, I hypothesized that the query on OpportunityLineItem wasn’t ‘explained’. So, I decided to see what that query would look like ‘explained’. Back to REST Explorer and I entered this:

/services/data/v32.0/query?explain=select+opportunityId+from+opportunityLineItem++where+(pricebookEntry.name+=+'FOO'+OR+(bar__c+=+'BAR'+and+is_active__c+=+true))

And the plan (only one) came back being a table scan of 2.1 million rows! Uh-oh. Here was the problem. As you might suspect, the culprit is the OR clause on non-indexed custom fields.
(Screenshot: SoqlExplainProblemQuery, the single table-scan plan.)

So, how to make this query use an index? I had naively thought (or perhaps it was epistemic arrogance) that SFDC would automatically exclude any OpportunityLineItem belonging to an Opportunity outside the current quarter, as specified by the semi-join’s left-hand query; and I knew closeDate was indexed. Wrong!

So, I added to the semi-join’s right-hand query a condition to include only OpportunityLineItems whose parent Opportunity’s closeDate was in the current quarter or beyond:

 
/services/data/v32.0/query?explain=select+opportunityId+from+opportunityLineItem++where+opportunity.closeDate+>=+THIS_QUARTER+and+(pricebookEntry.name+=+'FOO'+OR+(bar__c+=+'BAR'+and+is_active__c+=+true))

(Screenshot: SoqlExplainFinal, the Query Plan after optimization.)
The explain’s relative cost for this optimization now falls below 1.0, and hence it too will use an index. The semi-join will join the results of two index result sets.

Peace
I adjusted the daily job’s query to use the properly selective semi-join right-hand side and reran the daily job. No more timeout!
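For reference, here is the adjusted query in the same simplified form as before; the only change from the original is the added closeDate condition on the semi-join’s right-hand side:

   select id, stageName,
      (select unitPrice, totalPrice, pricebookEntry.name from OpportunityLineItems)
     from Opportunity where closeDate >= THIS_QUARTER and ID IN
       (select opportunityId from opportunityLineItem
          where opportunity.closeDate >= THIS_QUARTER and
                (pricebookEntry.name = 'FOO' OR
                 (bar__c = 'BAR' and is_Active__c = true)))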

Update! – after writing all this, I learned I could access the Query Plan from the Developer Console. I tend to avoid using that on orgs I don’t control for fear of writing some anonymous Apex by mistake or using DML through the Query Editor results.

Resources
SFDC KB article (using Developer Console)
Using REST explain