
Migrating to Salesforce Integration User Licenses

If you are like me, you’ve been following best practices and assigning one user license to each of your third party integrations – especially those that cause some DML to occur in your org. Having a separate user license per integration:

  • Allows for traceability in the Field History related list as well as the CreatedBy and LastModifiedBy fields on your records.
  • Facilitates assigning the integration user only those permissions it needs to do its job via a Permission Set

Most AppExchange packages will include a Permission Set for what the package documentation typically calls a “service account user” dedicated to performing integration functions between the package and your org.


Of course, if you’ve done this, you are paying full license price per integration for a non-human user that probably never uses the UX. But, after intense customer pressure (and I was there at Dreamforce in 2022 when Bret Taylor, then President of Salesforce, agreed to provide lower-cost integration-only licenses, to jubilation from the crowd), Salesforce delivered five (5) free integration-only licenses to every Enterprise Edition org and offered additional integration-only licenses for the low, low cost of $10/month each (2024 pricing).

Since no one likes spending more Salesforce subscription money than they have to, I was tasked with converting thirteen (13) integration users to our five free + eight (purchased, but cheap) Salesforce Integration licenses. I had to do the conversion before our annual subscription renewal so we could avoid paying for another year of thirteen full-price user seats (~$1,800-2,000 p.a. each).

Starting Conditions

Being a good boy, I already had many of my (full, expensive license) integration users set up with:

  • Profile = Minimum Access - Salesforce
  • Permission Set XXX-Integration with requisite CRUD, FLS, and other permissions

The oldest integrations were set up with:

  • Profile = MyCompany Integration User Profile
    • Modify All Data and FLS on every field
  • No Permission Set

Vendor Documentation Review

An advantage of doing this work in late 2023 was that many of our third party vendors already had documentation of their support for the Salesforce Integration license. A survey of their Knowledge Bases yielded:

  • Supported with tips (a good example of this was OwnBackup)
  • Explicitly not supported (Talkdesk as of January 2024)
  • Silent

Those vendors that had nothing to say got a support case filed by me so I didn’t waste time deploying a migrated user license only to have it fail, or work mostly but not always.

Internal third parties were a bit easier as I could ask the product owners or do some reports and SOQL inside the org to gain insight.

Some Doubts (Alleviated)

Could I change a user with a full license to a Salesforce Integration license without destroying any OAuth tokens, passwords, etc?

A simple experiment proved that changing the Salesforce license of a user didn’t affect any authentication already established.

  • A user with a Salesforce Integration license can still log in to Salesforce (even though it does not have UX permissions)
  • That user can change their password
  • That user can even set up Two-Factor Authentication

The Nitty-Gritty

I started off in my dev environment.

Permission Sets

I developed a Permission Set that I called Universal API User System Permissions. This contained every System Permission that I wanted every integration user to have (like API Enabled, Access Activities, Edit Tasks, Edit Events). You’ll decide what makes sense for your org.

For each integration user, I either reviewed the existing permission set or created a new one. This contained all the CRUD, FLS, and other permissions that that integration user needed. Notably, it cannot contain Visualforce Pages or Lightning Experience User permissions (among others that involve UX features).

If creating a new one, be sure not to set the Permission Set’s license to Salesforce. Instead, leave it blank.

User License

Then I changed the integration user’s license in Setup > Users from Salesforce to Salesforce Integration. This has the effect of deleting all of the user’s permission set and permission set group assignments (but fortunately leaves public group membership intact).

IMPORTANT – I then assigned the Salesforce API User Permission Set License to the user. You have to do this before assigning Permission Sets. While it may have been obvious to the SFDC Product Manager, assigning a Permission Set License to a user is not typically done, as ordinary Enterprise Edition users get an implicit Permission Set License. If you forget this step, you’ll get an error when assigning Permission Sets to the user – permission not supported for the Salesforce Integration license.
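If you have many users to flip, the Permission Set License and Permission Set assignments can also be scripted in anonymous Apex. Here is a minimal sketch – the username, permission set API names, and the PSL label filter are illustrative, so confirm the exact Permission Set License name under Setup in your org:

// Anonymous Apex sketch - names below are illustrative
User intUser = [SELECT Id FROM User
                WHERE Username = 'hubspot.integration@example.com' LIMIT 1];

// 1. Assign the Permission Set License FIRST (label may vary; confirm in Setup)
PermissionSetLicense psl = [SELECT Id FROM PermissionSetLicense
                            WHERE MasterLabel LIKE 'Salesforce API%' LIMIT 1];
insert new PermissionSetLicenseAssign(AssigneeId = intUser.Id,
                                      PermissionSetLicenseId = psl.Id);

// 2. Only then will the Permission Set assignments succeed
List<PermissionSetAssignment> psas = new List<PermissionSetAssignment>();
for (PermissionSet ps : [SELECT Id FROM PermissionSet
                         WHERE Name IN ('Universal_API_User_System_Permissions',
                                        'XXX_Integration')]) {
    psas.add(new PermissionSetAssignment(AssigneeId = intUser.Id,
                                         PermissionSetId = ps.Id));
}
insert psas;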

Permission Set Assignment

I then assigned the two permission sets to the integration user.

  • Universal API User System Permissions
  • XXX-Integration (where XXX was the name of the integration, e.g. Hubspot, OwnBackup, etc.)
At this point you may encounter errors:
  • Your permission set includes a permission that isn’t supported by the Salesforce API User permission set license. Remove that permission.
  • Your permission set isn’t listed among the available permission sets to assign(!). This happens if you started with a permission set that defined it as available only for Salesforce licenses (rather than blank) – see the sketch after this list for finding such permission sets.
    • You can’t edit a Permission Set to change its license, and you can’t clone it in the UX and change the license field.
    • Your only option is to go to your IDE (or Workbench) and create a new .permissionset file with a new name, then
      • Copy-paste the old permission set XML into your new file
      • Change the value of the <label>...</label>
      • Delete the XML for <license>Salesforce</license>
      • Then save (deploy) the new .permissionset file to your org.
    • I also renamed the old, now useless Permission Set with a “(retired)” suffix. It would get deleted later, after final deployment and testing.
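A quick way to find which of your existing custom permission sets are pinned to the Salesforce user license (and so will need to be recreated with a blank license) is a one-off anonymous Apex query – a sketch, assuming the standard UserLicense name:

// Sketch: list permission sets pinned to the 'Salesforce' user license
UserLicense salesforceLicense = [SELECT Id FROM UserLicense
                                 WHERE Name = 'Salesforce' LIMIT 1];
for (PermissionSet ps : [SELECT Name, Label FROM PermissionSet
                         WHERE LicenseId = :salesforceLicense.Id
                           AND IsOwnedByProfile = false]) {
    System.debug(ps.Label + ' (' + ps.Name + ')');
}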

Testing

In many orgs, including my own, you aren’t lucky enough to have staging versions for all (or any) of your third party integrations.

By careful study of what I knew the third party integration did (for example, Hubspot reads/edits Accounts, Contacts, Leads, Opportunities, and Tasks), I then

  • Used Workbench (or Postman or equivalent) and logged into my org as the integration user.
  • Composed REST requests to query and DML records, simulating what the third party app does (to the best of my knowledge)
    • If you use Event Monitoring, you could examine the REST API logs to gain insight into the GETs, POSTs, PATCHes, and DELETEs used by the third party app (see the sketch after this list).
  • In some cases, based on the tests, I needed to revise the Permission Set (usually because I had forgotten some CRUD or FLS permission).
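For the Event Monitoring tip above, something like the following anonymous Apex locates the most recent REST API event log files, whose contents show the verbs and endpoints the integration actually used. This is a sketch and assumes your org is licensed for Event Monitoring:

// Sketch: find recent REST API event log files for inspection
for (EventLogFile elf : [SELECT Id, LogDate, LogFileLength
                         FROM EventLogFile
                         WHERE EventType = 'RestApi'
                         ORDER BY LogDate DESC
                         LIMIT 5]) {
    // Each log body can be downloaded from
    // /services/data/vXX.0/sobjects/EventLogFile/<Id>/LogFile
    System.debug(elf.LogDate + ' (' + elf.LogFileLength + ' bytes) ' + elf.Id);
}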

Deployment

Once dev testing was done, I used my DevOps tool (Gearset) to deploy the permission set(s) to the downstream branches and orgs in my pipeline.

  • In each deployed org, you have to manually swap the integration user’s license to Salesforce Integration, assign the Permission Set License, then assign the Permission Sets.
  • In some cases, I had staging third party systems bound to my SFDC staging org. Here I could do true end-to-end testing.

Summary

This is just tedious work, but it is rewarding once done: by converting thirteen full user licenses to Salesforce Integration licenses, I saved my org roughly $25,000 p.a. in subscription costs. A bonus was that each integration user got a full review of its permissions, and where it was too permissive, I brought the hammer down, yielding a more secure configuration.

If you have an org where you’ve lumped all your integrations together under a common Salesforce user, this is a good opportunity to follow best practices and dedicate a free/cheap integration user license to each integration.

Disappointments

The disappointments were the vendors who didn’t support (as of January 2024) the Salesforce Integration user license:

  • Talkdesk – The integration user needs access to Visualforce pages and tabs.
  • Groove – For “reasons”

Special cases

Backup / restore integrations

If converting your integration user for backup/restore vendors, you’ll need to create a Permission Set that includes CRUD on every object you want to back up/restore, plus FLS on every field that you back up/restore, plus access to all record types.
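A describe-based starting point can help build that object list; here is a sketch (the filters are illustrative – prune the result and add FLS according to what your backup vendor actually copies):

// Sketch: enumerate candidate objects for a backup/restore permission set
List<String> objectNames = new List<String>();
for (Schema.SObjectType t : Schema.getGlobalDescribe().values()) {
    Schema.DescribeSObjectResult d = t.getDescribe();
    if (d.isQueryable() && d.isCreateable() && !d.isCustomSetting()) {
        objectNames.add(d.getName());
    }
}
objectNames.sort();
System.debug(String.join(objectNames, '\n'));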

Platform Event Publishers

One of our integrations only published Platform Events (via REST calls) to our org. Since Platform Event subscribers run by default as the Automated Process user (or can be configured to run as a specific Salesforce user), the integration user only needed CRUD on the XXX__e objects. No FLS is necessary to publish Platform Events.
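For illustration, publishing from Apex is equivalent to the REST publish: it is an object-level create of the event record. A sketch, using a hypothetical event XXX__e with a hypothetical field Payload__c:

// Sketch: the publishing user needs Create on XXX__e; per the note above,
// no field-level security is required to publish
XXX__e evt = new XXX__e(Payload__c = '{"hello":"world"}');
Database.SaveResult sr = EventBus.publish(evt);
System.debug('Published: ' + sr.isSuccess());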

Apex Mocks and Verifying Multiple Custom Type Arguments

Part five of a series.

Let’s say you have an AccountsService with a cancel method that takes two arguments: an fflib_ISObjectUnitOfWork and a custom type, Map<Id,CancelRequest>. CancelRequest looks like this:

public class CancelRequest {
  Id id;
  Date cancelDate;
  String cancelReason;

  public CancelRequest(Id val) {this.id = val;}
  public CancelRequest withCancelDate(Date val) {this.cancelDate = val; return this;}
  public CancelRequest withCancelReason(String val) {this.cancelReason = val; return this;}
  public Date getCancelDate() {return this.cancelDate;}
  public String getCancelReason() {return this.cancelReason;}  
}

Now, let’s say you have some code that calls the AccountsService.cancel (this could be a Visualforce controller, invocable method, domain class method, batchable execute(), Apex REST class, etc. – for purposes of this example, it doesn’t matter).

public class MyClass {
  public void doStuff(Set<Id> accountIds, String cancelReason) {
    fflib_ISobjectUnitOfWork uow = Application.UnitOfWork.newInstance();  
    Map<Id,CancelRequest> cancelRequests = new Map<Id,CancelRequest>();
    for (Id accountId: accountIds) {
      cancelRequests.put(accountId,new CancelRequest(accountId)
                                    .withCancelDate(Date.today())
                                    .withCancelReason(cancelReason) );
    }
    AccountsService.cancel(uow,cancelRequests);  // cancel accounts w/ date+reason
    uow.commitWork();
  }
}

To unit test MyClass.doStuff(..), you want to mock the AccountsService as all you’re really interested in is that it is called once and with the proper arguments. You have a separate unit test for the actual AccountsService.cancel that checks for the proper DML.

So, let’s build the testmethod…

@isTest
private static void givenAccountIdsAndReasonVerifyDelegationToAccountsServiceCancel() {
   fflib_ApexMocks mocks = new fflib_ApexMocks(); // framework
   
   // Given some accountIds
   Id[] mockAccountIds = new List<Id> {
          fflib_IdGenerator.generate(Account.SObjectType),
          fflib_IdGenerator.generate(Account.SObjectType)
  };

  // Given a cancel reason
  String cancelReason = 'foo';

  // Given a mockAccountsService (assumes standard naming conventions for service implementations)
  AccountsServiceImpl mockAccountsService = 
      (AccountsServiceImpl) mocks.mock(AccountsServiceImpl.class);
  Application.Service.setMock(IAccountsService.class,mockAccountsService);

  // Given a mock Unit of Work
  fflib_SObjectUnitOfWork mockUow = 
    (fflib_SObjectUnitOfWork) mocks.mock(fflib_SObjectUnitOfWork.class);
  Application.UnitOfWork.setMock(mockUow);  

  // When doStuff called
  Test.startTest();
  new MyClass().doStuff(new Set<Id> (mockAccountIds),cancelReason);
  Test.stoptest();

  // Then verify service was called only once
  ((AccountsServiceImpl) mocks.verify(mockAccountsService,mocks.times(1)
                            .description('AccountsService.cancel sb called')))
           .cancel((fflib_ISobjectUnitOfWork) fflib_Match.anyObject(),
                   (Map<Id,CancelRequest>) fflib_Match.anyObject()
                  );
  //  Then verify that the service was called with the expected arguments.
  
  // Because the arguments are Apex custom types (and don't implement
  // an equals() and hashcode() method), there is no way for the ApexMocks
  // matchers to verify on equality. So, we fall back to ArgumentCaptors.

  // Set up the captors, one per arg to method AccountsService.cancel
  fflib_ArgumentCaptor capturedUowArg = // arg0
    fflib_ArgumentCaptor.forClass(fflib_ISObjectUnitofWork.class);
  fflib_ArgumentCaptor capturedCancelRequestArg = // arg1
    fflib_ArgumentCaptor.forClass(Map<Id,CancelRequest>.class);

  // Capture the actual args used when the mock service was called
  ((AccountsServiceImpl) mocks.verify(mockAccountsService,1))
    .cancel((fflib_ISobjectUnitOfWork)capturedUowArg.capture(),
            (Map<Id,CancelRequest>)capturedCancelRequestArg.capture()
           );

  // Transform the capturedArgs (represented by type fflib_ArgumentCaptor)
  // into something we can inspect (using getValue() )
  fflib_ISobjectUnitOfWork actualUowArg =
    (fflib_ISobjectUnitOfWork) capturedUowArg.getValue();
 
  Map<Id,CancelRequest> actualCancelRequestArg =
    (Map<Id,CancelRequest>) capturedCancelRequestArg.getValue();
 
  // Now, whew, finally verify values
  System.assertEquals(mockAccountIds.size(),
          actualCancelRequestArg.size(),'all accts sb requested for cancel');

  for (Id accountId: actualCancelRequestArg.keySet() ) {
    System.assertEquals(cancelReason,
                        actualCancelRequestArg.get(accountId).getCancelReason());
    System.assertEquals(Date.today(),
                        actualCancelRequestArg.get(accountId).getCancelDate());
  }
}

Process Builder bulkification – Record Create

Some uncertainty has developed over the years as to how much bulkification actually exists in Process Builder.

Here’s a V42 assessment for a couple of simple use cases.

Starting condition:
A Process Builder flow on Opportunity with two decision blocks; each one with an Action Group that creates a Case. The first Action Group includes an “Evaluate Next Criteria” to allow the records that meet the first condition to also flow into the second condition for evaluation.

There is a trigger on Case.

Use Case 1 – a batch of Opportunities – all of which meet condition-1 and none meet condition-2
RESULT: The newly-created Cases from the first Action Group are bulkified – any Apex trigger will see them all in a single trigger context

Use Case 2 — a batch of Opportunities – 20 meet condition 1 and a distinct set of 30 meet condition-2 (the second decision block)
RESULT:

  • The newly-created Cases from the first Action Group will be presented as a batch of 20 records to the Case trigger
  • The newly-created Cases from the second Action Group will be presented as a batch of 30 records to the Case trigger

Use Case 3 — a batch of Opportunities – 50 meet condition 1 and the same 50 records meet condition-2 (the second decision block)
RESULT:

  • The newly-created Cases from the first Action Group will be presented as a batch of 50 records to the Case trigger
  • The newly-created Cases from the second Action Group will be presented as a batch of 50 records to the Case trigger

Hence, your trigger on Case will execute twice. This might create a limits exposure and certainly affects CPU time. Even more importantly, if you are using simplistic static variables to control after insert trigger recursion, the second set of records may not fully execute as expected — be careful!
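For example, a recursion guard like this (a common but simplistic pattern, sketched here) would process the first batch of 50 Cases and silently skip the second:

public class CaseTriggerHandler {
    // Simplistic recursion control - the source of the problem described above
    private static Boolean alreadyRan = false;

    public static void onAfterInsert(List<Case> newCases) {
        if (alreadyRan) {
            return; // the second 50-record batch from Process Builder exits here
        }
        alreadyRan = true;
        // ... real after-insert work ...
    }
}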

Note that if you are not using “Evaluate Next Criteria”, the results above are the same.

ApexMocks and Email

Part two of a series.

One of the problems in unit testing outbound email is that it is hard to verify that you constructed all of the properties of the outbound email as expected. This is compounded by the fact that sandbox orgs default with email deliverability ‘off’ so any attempt to use Messaging.sendEmail(emails) will throw an exception and your test breaks.

The example’s premise

Suppose we have a simple class method that constructs and sends an email:

public void sendEmail() {
   Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
   mail.setToAddresses(new List<String> {'foo@bar.com'});
   mail.setSubject('Greetings, earthlings!');
   ...
   Messaging.sendEmail(new Messaging.SingleEmailMessage[] { mail });
}

How would you unit test that the outbound email was sent to ‘foo@bar.com’ ? Not so simple. Same for the other properties of the outbound email.

Enter Enterprise Patterns and ApexMocks

Rework the Apex class to use the fflib UnitOfWork:

public void sendEmail() {
   fflib_ISObjectUnitOfWork uow = Application.UnitOfWork.newInstance();
   Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
   mail.setToAddresses(new List<String> {'foo@bar.com'});
   mail.setSubject('Greetings, earthlings!');
   ...
   uow.registerEmail(mail); // let UnitOfWork know mail is part of Txn
   uow.commitWork(); // send the mail
}

The fflib_SObjectUnitOfWork class considers outbound emails as part of the transaction, hence the registerEmail(mail) method call.

Now, your testmethod looks like this:

@isTest private static void testSuccessPath() {
  fflib_ApexMocks mocks = new fflib_ApexMocks();
  // Given a mock UoW (injected)
  fflib_SobjectUnitOfWork mockUow = 
     (fflib_SobjectUnitOfWork) mocks.mock(fflib_SObjectUnitOfWork.class);
  Application.UnitOfWork.setMock(mockUow);

  // When the email method is invoked
  new MyClass().sendEmail();

  // Then verify that an email was constructed and sent
  ((fflib_SobjectUnitOfWork) mocks.verify(mockUow,
                                          mocks
                                           .times(1)
                                           .description('email sb constructed')))
    .registerEmail((Messaging.SingleEmailMessage) fflib_Match.anyObject());
    									
    									
  ((fflib_SobjectUnitOfWork) mocks.verify(mockUow,
                                          mocks
                                           .times(1)
                                           .description('email sb sent')))
    .commitWork();

  // Then verify that the email was constructed as expected
  // We use ArgumentCaptors for this. There are four (4) steps:

  fflib_ArgumentCaptor capturedEmailArg	= 
        fflib_ArgumentCaptor.forClass(Messaging.SingleEmailMessage.class);
  ((fflib_SobjectUnitOfWork) mocks.verify(mockUow,1))
       .registerEmail((Messaging.SingleEmailMessage)capturedEmailArg.capture());

  Object actualEmailAsObject = capturedEmailArg.getValue();
  Messaging.SingleEmailMessage actualEmail = 
        (Messaging.SingleEmailMessage) actualEmailAsObject;

  System.assertEquals('Greetings, earthlings!', 
                       actualEmail.getSubject(),
                       'subject is from friendly aliens');
  System.assertEquals(new List<String> {'foo@bar.com'},
                       actualEmail.getToAddresses(),
                       'only @bar.com domains expected');
  ... other properties.
}

Let’s look at the argumentCaptor, one line at a time.

fflib_ArgumentCaptor capturedEmailArg	= 
        fflib_ArgumentCaptor.forClass(Messaging.SingleEmailMessage.class);

We declare a variable of type fflib_ArgumentCaptor and set it to be of the type we want to inspect.

((fflib_SobjectUnitOfWork) mocks.verify(mockUow,1))
       .registerEmail((Messaging.SingleEmailMessage)capturedEmailArg.capture());

We ask the mocking framework to capture the value of the Messaging.SingleEmailMessage argument whenever registerEmail is called on the mock UnitOfWork. The capture() method of the ApexMocks library does this. Note that we use the mocks.verify(..) method to do this; that is, instead of verifying the value passed to registerEmail, we are capturing that value for later inspection.

Object actualEmailAsObject = capturedEmailArg.getValue();
  Messaging.SingleEmailMessage actualEmail = 
        (Messaging.SingleEmailMessage) actualEmailAsObject;

Our declared variable capturedEmailArg has a method getValue() provided by the fflib_ArgumentCaptor class. It returns an Object. There is also a getValues() method for collections. We cast this to the type we care about – Messaging.SingleEmailMessage.

Now, we can assert against the actual email that the class-under-test constructed and verify each property.

System.assertEquals('Greetings, earthlings!', 
                       actualEmail.getSubject(),
                       'subject is from friendly aliens');

So, why is this cool?

  1. We are immune from the sandbox having to be configured to send emails. Because fflib is already unit tested, calls to registerEmail(someEmail) followed by commitWork() will send emails in an org configured to send emails. We merely need to verify that registerEmail and commitWork got called. Since the UnitOfWork layer is mocked, we can use ApexMocks to verify that calls to a mockUow are as expected.
  2. The fflib_ArgumentCaptor feature of ApexMocks allows detailed inspection of arguments passed to any mockable class/method. In our example, it is a single argument of type Messaging.SingleEmailMessage, but it could be any arbitrary Apex type. Thus, your unit tests can be quite exhaustive about verifying that the contract between two objects is fulfilled without having to construct any real Sobjects, do any real DML, or query for the results.

Rerender custom component after main page inlineEdit Save

Sounds simple. You have a Visualforce (VF) page with apex:detail and a custom component. You want to rerender the custom component after the inlineEdit Save command completes. The VF markup is trivial:

<apex:page standardController="Account">
  <apex:form >
      <c:MyComponent id="theComponent" someArg="false"/>  
      <apex:detail inlineEdit="true" subject="{!Account}" rerender="theComponent"/>
  </apex:form>
</apex:page>

So, this works great except …. if someArg is used to conditionally render a table row or column. Such as in this example:

<apex:component >
    <apex:attribute name="someArg" type="Boolean" required="false" default="false" description="TRUE if component not part of export"/>
    <table>
      <thead>
        <apex:outputPanel rendered="{!NOT(someArg)}">
          <tr id="headerRow">
            <th><apex:outputText value="Column 1 Now: {!NOW()}"/></th>
            <th><apex:outputText value="Column 2 NotSomeArg"/></th>
          </tr> 
        </apex:outputPanel>    
        <apex:outputPanel rendered="{!someArg}">
          <tr>
            <th><apex:outputText value="Column 1 Now: {!NOW()}"/></th>
            <th><apex:outputText value="Column 2 someArg"/></th>    
          </tr>
        </apex:outputPanel>
      </thead>    
    </table>                                                                                           
</apex:component>

The problem
I had a pretty sophisticated component that sometimes rendered 9 columns and sometimes 11 columns in an HTML table, depending on the value of someArg. This works well on initial page load. When the business requirements changed and I needed to rerender the component after inlineEdit Save, the rerender action failed. In fact, it failed hard – leaving blank spaces in the table header and no table rows.

After spending way too much time thinking it was something about inlineEdit and components, other arguments not being passed, or something in the component’s controller, I stumbled upon this sentence in the documentation (something I knew about already, but because the component was so sophisticated and was a component, I didn’t think it applied):

You cannot use the reRender attribute to update content in a table.

The solution?
No good answers here.

  1. You have to refactor your component so that you either have two components, one for x columns and one for y columns, sharing subcomponents to avoid duplication or …
  2. rethink your design or …
  3. after inlineEdit completes, reload the entire page. oncomplete="location.replace();". I tried this but the user experience was poor, as first inline edit does its Ajax refresh and all looks fine, and then the page reloads. Since this is contrary to the way other pages in Salesforce refresh after inlineEdit Save, I eschewed this option.

Side Note
Here’s a tip when you’re stuck with a complex piece of code that is no longer working for some inexplicable reason: rebuild the code from its most elementary pieces in your Dev Edition so you have a working version, then add in the suspect bits one at a time until it fails. Try to avoid your custom objects and customizations; reproduce using the OOB Account object or the like. A side benefit of doing this is that if you’re really stuck, you’ll have a short, self-contained, compilable example (SSCCE) – suitable for publishing on a forum for community assistance.

Testing a Batchable + Queueable + Schedulable

It isn’t super clear in the documentation (V36) what happens in a test method when a Batchable, Queueable, and Schedulable are involved within the Test.startTest()...Test.stopTest() execution scope.

The system executes all asynchronous processes started in a test method synchronously after the Test.stopTest statement

So, I decided to do a simple experiment:

The class (acts as a batchable, queueable, and schedulable)

public with sharing class FooBatchableQueueable 
             implements Database.Batchable<Sobject>, Queueable {
    
    
    public Database.QueryLocator start(Database.BatchableContext bc) {
    	System.debug(LoggingLevel.INFO,'Entered Batchable start()...');
    	return  Database.getQueryLocator([select id from Group 
                                              where DeveloperName = 'Foo']);
    }
    
    public void execute(Database.BatchableContext bc, List<Group> scope) {
    	System.debug(LoggingLevel.INFO,'Entered Batchable execute()...');
    	System.enqueueJob(new FooBatchableQueueable());
    	System.debug(LoggingLevel.INFO,'within Batchable execute(), after enqueuing the job...');
    }
    public void finish(Database.BatchableContext bc) {
    	System.debug(LoggingLevel.INFO,'Entered Batchable finish()...');
    	System.schedule('FooSchedulable','0 0 0 1 1 ?', new FooSchedulable());
    	System.debug(LoggingLevel.INFO,'within Batchable finish(), after scheduling');
    }
    
    public void execute(QueueableContext qc) {
    	System.debug(LoggingLevel.INFO,'reached Queueable execute()');
    }
    

    public class FooSchedulable implements Schedulable {
    	public void execute(SchedulableContext sc) {
    		System.debug(LoggingLevel.INFO,'reached Schedulable execute()');
    	}   	
    }
}

And the testmethod

@isTest
private with sharing class FooBatchableQueueableTest {
    
    @isTest private static void testBatchableQueueable() {
    	insert new Group(DeveloperName='Foo', name='Foo', type='Regular');
    	Test.startTest();
    	Database.executeBatch(new FooBatchableQueueable());
    	Test.stoptest();
    	//	Async batchable should execute, then queueable,  
        //      then schedulable. Or do they? See debug log
    }
}

And, what does happen?

  1. The batchable start() and execute() execute fine.
  2. The execute() calls System.enqueueJob(..)
  3. The Queueable job starts, and its execute() method is invoked. See the debug Log
  4. The batchable finish() method executes. It does a System.schedule() on a new object.
  5. The schedulable’s execute does not start.

Debug log

Entered Batchable start()…
Entered Batchable execute()…
within Batchable execute(), after enqueuing the job…
reached Queueable execute()
Entered Batchable finish()…
within Batchable finish(), after scheduling

Conclusion

  • Both the batchable and the queueable, as async transactions, are executed “synchronously” once Test.stopTest() is reached in the testmethod.
  • You definitely cannot assume that the batch finish() will execute before the queueable execute().
  • The constructor for the schedulable class will get invoked, but not its execute() method. You can see no debug log from within the schedulable’s execute().
  • You will need to explicitly test the schedulable by mocking the environment prior to its scheduling and then invoking it in a separate testmethod, as sketched below.
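A minimal sketch of such a testmethod – it registers the inner schedulable on its own and then drives its execute() directly; the job name and assertions are illustrative:

@isTest
private static void testSchedulableDirectly() {
    Test.startTest();
    String jobId = System.schedule('FooSchedulableTest', '0 0 0 1 1 ?',
                                   new FooBatchableQueueable.FooSchedulable());

    // The job is registered but has not yet fired inside the test block
    CronTrigger ct = [SELECT CronExpression, TimesTriggered
                      FROM CronTrigger WHERE Id = :jobId];
    System.assertEquals('0 0 0 1 1 ?', ct.CronExpression);
    System.assertEquals(0, ct.TimesTriggered);
    Test.stopTest();

    // Drive the execute() body directly so its logic is exercised/verifiable
    new FooBatchableQueueable.FooSchedulable().execute(null);
}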

Conga Composer – INVALID_FIELD: No such relation ‘Product2’ on entity ‘OpportunityLineItem’

I had built a Conga Query that exploited the Product2 relationship from the OpportunityLineItem. Yet, when I used this in a Conga Composer URL, I got this error:

INVALID_FIELD: No such relation 'Product2' on entity 'OpportunityLineItem'

I know that the Product2 relationship from OpportunityLineItem was added by SFDC in V30

yet my Conga Composer URL was V33:

&serverUrl=https://mydomain.my.salesforce.com/services/Soap/u/33.0/someId

So, what gives?

Per Conga Support, the serverUrl is used by Conga solely for determining the instance name and they smash the rest of the URL with V29 (!%!).

Of course, this could change in the future so, hopefully, this post will become obsolete.

Workaround
Instead of using the OpportunityLineItem.Product2 relationship, you can fall back to the OpportunityLineItem.PricebookEntry.Product2 relationship. I don’t know what I would do were I to need a Conga Query on Order – an SObject introduced in V30.

SOQL Query Optimization

Pardner, you sure got some ‘xplaining’ to do

In PROD, I had a daily REST query that executed against another SFDC org. It retrieved about 1000 or so Opportunities and had been executing just fine, night after night. But today, the Apex Jobs log showed ‘Read timed out’. In fact, three days’ worth – ouch.

First thought – timeout argument too low
Perhaps the timeout needed to be bumped up in the HTTPRequest call?

This wasn’t it; I was using this Apex to set the timeout:

hRqst.setTimeout(120000);  // max is 120 secs = 120000 ms

Second thought, the REST SOQL query needed optimization
Of course, I wanted to deny this possibility as the REST query had been running along just fine for weeks. But what if the target (of the query) SFDC org had gotten much bigger and this was causing the query to slow down? After all, the target org (not managed by me) actually has hundreds of thousands of Opportunities, but I thought I was using an index, so why would there be a problem?

Let’s take a look at the query (simplified) for discussion purposes:

   select id, stageName, 
      (select  unitPrice, totalPrice, pricebookEntry.name from OpportunityLineItems) 
     from Opportunity where closedate >= THIS_QUARTER and ID IN 
       (select opportunityId from opportunityLineItem 
          where pricebookEntry.name = 'FOO' OR 
                (bar__c = 'BAR' and is_Active__c = true))

You can see that the query does a semi-join – it is looking for Opportunities where at least one Opportunity Product meets some criteria.

Now, distant genetic memory kicked in and I remembered that SFDC introduced Query Plan explains back in V30. I hadn’t had a reason to look at this as I wasn’t experiencing any performance issues until now.

Using SFDC Workbench REST Explorer, I logged into the target SFDC org (the one where the query executes). I created this REST GET:

 
/services/data/v32.0/query?explain=select+id,+(select++unitPrice,+totalPrice,pricebookEntry.name+from+OpportunityLineItems)++from+Opportunity+where+closedate+>=+THIS_QUARTER+and+ID+IN+(select+opportunityId+from+opportunityLineItem++where+pricebookEntry.name+=+'FOO'+OR+(bar__c+=+'BAR'+and+is_active__c+=+true))

And here was the result – three possible plans; the first one has a relative cost of < 1.0, so it meets the ‘selective query definition’ and will use an index.

[Screenshot: initial Query Plan explain – relative cost 0.826]

So, although 0.826 isn’t great, there are over 500,000 records to consider. So why was the query timing out?

Third thought – The Explain feature doesn’t consider the semi-join portion
Since there are really two queries being executed and joined together, I hypothesized that the query on OpportunityLineItem wasn’t ‘explained’. So, I decided to see what that query would look like ‘explained’. Back to REST Explorer and I entered this:

/services/data/v32.0/query?explain=select+opportunityId+from+opportunityLineItem++where+(pricebookEntry.name+=+'FOO'+OR+(bar__c+=+'BAR'+and+is_active__c+=+true))

And the plan (only one) came back being a table scan of 2.1 million rows! Uh-oh. Here was the problem. As you might suspect, the culprit is the OR clause on non-indexed custom fields.
[Screenshot: Query Plan explain for the semi-join subquery – table scan of 2.1 million rows]

So, how to make this query use an index? I had naively thought (or perhaps it was epistemic arrogance) that SFDC would automatically exclude any OpportunityLineItem that was prior to the current quarter as specified by the semi-join’s left hand query; and I knew closeDate was indexed. Wrong!

So, I added into the semi-join’s right-hand query a condition to only include OpportunityLineItems whose parent Opportunity’s closeDate was in the current quarter or beyond:

 
/services/data/v32.0/query?explain=select+opportunityId+from+opportunityLineItem++where+opportunity.closeDate+>=+THIS_QUARTER+and+(pricebookEntry.name+=+'FOO'+OR+(bar__c+=+'BAR'+and+is_active__c+=+true))

[Screenshot: Query Plan explain for the optimized semi-join subquery – relative cost below 1.0]
The explain’s relative cost for this optimization now falls below 1.0, and hence it too will use an index. The semi-join will join the results of two indexed result sets.

Peace
I adjusted the daily job’s query using the properly selective semi-join right hand side and reran the daily job. No more timeout!
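For reference, here is the adjusted query in readable form (reassembled from the explain URL above):

   select id, stageName, 
      (select unitPrice, totalPrice, pricebookEntry.name from OpportunityLineItems) 
     from Opportunity where closedate >= THIS_QUARTER and ID IN 
       (select opportunityId from opportunityLineItem 
          where opportunity.closeDate >= THIS_QUARTER and
                (pricebookEntry.name = 'FOO' OR 
                 (bar__c = 'BAR' and is_Active__c = true)))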

Update! – after writing all this, I learned I could access the Query Plan from the Developer Console. I tend to avoid using the Developer Console on orgs I don’t control for fear of writing some anonymous Apex by mistake or doing DML through the Query Editor results.

Resources
SFDC KB article (using Developer Console)
Using REST explain