Category Archives: salesforce

Invalid Query Locator

One of my users reported today that they were unable to display a page for a specific ‘company’. Upon investigation, the error was ‘Invalid Query Locator’ and was in a place in the code where such an error was unexpected.

Here was the cause and solution

Foo__c is the parent object; Bar__c is the child.

Foo__c f = [select id, 
             (select id, name, fld1__c, fld2__c, fld3__c 
                  from Bars__r order by name asc)
            from Foo__c where id = :someId];
for (Bar__c b: f.bars__r) 
    res.add(new MyInnerClass(b));

After the 60th Bar__c, the code threw an exception ‘Invalid Query Locator’.

So, why did this happen?

  • I knew from experience that the record in question had more than 60 children and that I had successfully viewed them in the past.
  • So, it must have been caused by something introduced since then
  • I verified via SOQL in Workbench that there were in fact 70 children – and I could retrieve them all in Workbench

I noted that others had this issue and even SFDC Help had a specific knowledge article on this subject, although it didn’t seem to apply to my use case as I was neither taking too long nor issuing any intermediate queries whilst processing the list of child Bar__c records.

I recalled that I had added some additional fields to the subquery. And those fields were of type Text Area (Long) – in this case 32768 characters long, even though most rows had fewer than 25 characters and, in the use case in question, those fields were all null. I also noted that the class was written a long time ago and was at V16.0.

Looking for the easy way out, I tried updating the class version to V33.0. This had no effect.

Then, I tried to see if the field length of the Text Area (Long) fields made a difference, even though no data came anywhere close to the limits. So, I reduced the field sizes from 32768 down to 4096. This had some effect as I could process and display all 70 children without error. However, when I changed the test example to have 200 children, the Invalid Query Locator error recurred. Clearly, there is some calculation by SFDC as to when it will return the queryMore attribute based on the expected size of the SOQL result payload.

The fix

I rewrote the code to the following by eliminating the subquery:

Foo__c f = [select id from Foo__c where id = :someId];
for (Bar__c b: [select id, name, fld1__c, fld2__c, fld3__c 
                  from Bar__c where Foo__c = :f.id order by name asc]) 
    res.add(new MyInnerClass(b));

Now there is no subquery, and when tested with 2000 children there were no errors. Of course, it took two queries instead of one, but as this was a VF controller doing only those queries, it was an easy penalty to accept.

Gnarly SFDC Error Messages – Personal Experience

This post will be updated from time to time and serves as a knowledge base for how not-so-obvious SFDC error messages might be resolved. A running list based on personal experience follows.

INSUFFICIENT_ACCESS_ON_CROSS_REFERENCE_ENTITY

  1. Public read-only sharing model on Parent__c
  2. Parent inserted by User A
  3. User B inserts/updates a child record within a master-detail relationship to Parent__c
  4. No sharing rule extends write access on Parent__c to User B. For example, the testmethod user is not in a role that Parent__c is shared to.

or …

  1. Private OWD sharing model on Parent__c. Criteria-based sharing extends access on Parent__c to User B’s role
  2. Apex testmethod context
  3. User A inserts the Parent
  4. User B inserts a child record with a lookup to Parent__c
  5. Apex testmethods do not honor criteria-based sharing. The workaround is to insert Parent and Child as the same user (see the sketch below)

or …

  1. Object X has a lookup relationship to Object Y
  2. Object X is inserted or updated with X.lookup_y__c set to an id that is valid for Object Y’s type but does not exist in the database
  3. Typical examples occur when cloning X from something else or sync’ing X from a different SFDC org

or …

  1. Detail Object D is detail to master Object M
  2. The operation is via API or Apex
  3. Object D is inserted without a value for the master’s ID field (example: insert OrderItem without OrderItem.OrderId)
  4. A typical example is a misconfigured middleware operation where you expected to do an update but configured an insert. This will not be caught in the standard UI because the master field is required on the page layout.
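
For the second (testmethod) scenario above, a minimal sketch of the same-user workaround might look like this. Parent__c, Child__c, and the way the user is chosen are illustrative, not from the original post:

@isTest
private class ParentChildInsertTest {
    @isTest
    static void insertParentAndChildAsSameUser() {
        // Criteria-based sharing is not honored in test context, so have the
        // SAME user insert both the parent and the child.
        User u = [select Id from User where IsActive = true and Id != :UserInfo.getUserId() limit 1];
        System.runAs(u) {
            Parent__c p = new Parent__c(Name = 'parent');  // hypothetical parent object
            insert p;
            Child__c c = new Child__c(Parent__c = p.Id);   // hypothetical lookup to Parent__c
            insert c;  // no INSUFFICIENT_ACCESS_ON_CROSS_REFERENCE_ENTITY
        }
    }
}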

INVALID_CROSS_REFERENCE_KEY

  1. Insertion of an OpportunityLineItem, QuoteItem, or OrderItem
  2. PricebookEntryID references a PricebookEntry for Pricebook P
  3. Parent Opportunity, Quote, or Order has Pricebook2Id of Pricebook Q
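
For the mismatch above, a minimal pre-insert guard might look like this. The variables oppo (an Opportunity queried with its Pricebook2Id) and pbeId are illustrative:

// Make sure the PricebookEntry belongs to the same pricebook as the parent Opportunity
PricebookEntry pbe = [select Id, Product2Id, Pricebook2Id from PricebookEntry where Id = :pbeId];
if (pbe.Pricebook2Id != oppo.Pricebook2Id) {
    // Inserting a line item with this entry would throw INVALID_CROSS_REFERENCE_KEY;
    // swap in the entry for the same product from the Opportunity's pricebook (assumed to exist)
    pbe = [select Id from PricebookEntry
            where Product2Id = :pbe.Product2Id and Pricebook2Id = :oppo.Pricebook2Id limit 1];
}
insert new OpportunityLineItem(OpportunityId = oppo.Id, PricebookEntryId = pbe.Id,
                               Quantity = 1, UnitPrice = 0);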

INVALID_PARTNER_NETWORK_STATUS
Either:

  1. Active connection to partner org
  2. sObjects not published to partner org
  3. You insert those sObjects in your code/testmethods

or …

  1. Active connection to partner org
  2. sObjects published to partner org
  3. sObjects not subscribed within partner org
  4. You insert those sObjects in your code/testmethods

or …

  1. Active connection to partner org
  2. Sharing a child record (like an Opportunity)
  3. But, the parent record (like an Account) has not yet been accepted

INVALID_SSO_GATEWAY_URL

  1. Running user has no Federation Id and
  2. Running user’s profile is defined with ‘Is Single Signon Enabled’ as true

You cannot unset the role on the owner of a portal account which has partner users.

  1. You are trying to remove (not replace) the role from a User u
  2. That user is the Owner of one or more Accounts that have been enabled as Partner Accounts. (Using SFDC Partner Portal)

The solution is to find all such Accounts owned by User u that are Partner Accounts. You can use a List View to do this, filtering on Partner Account = true. In our use case, we were no longer using Partner Portal and were doing role cleanup so simply disabling each Account as a ‘partner account’ using the Manage External Accounts button on the Account detail page was all that was required. If said accounts are still valid as Partner Accounts, change the owner to some other owner that does have a role.

ANT build: MapHighlightAction is not a standard action and cannot be overridden.

  1. This is an object with address fields like Contact
  2. Your dev sandbox does not have Google Maps enabled (Setup | Customize | Maps and Locations | Settings), but your target deployment org does have Google Maps enabled.
  3. The metadata for the dev sandbox SObject (e.g. Contact) will include lines as shown below

     <actionOverrides>
        <actionName>MapHighlightAction</actionName>
        <type>Default</type>
    </actionOverrides>

What seemed to be happening is that when you deploy an SObject like Contact from dev sandbox without Google Maps enabled, if the target sandbox/prod org does have Google Maps enabled, the above lines generate the error MapHighlightAction is not a standard action and cannot be overridden.

If you enable Google Maps in your dev sandbox, and examine the Contact.obj metadata, the above lines are removed and, assuming your target deployment org also has Google Maps enabled, the deployment will succeed. We encountered this issue when we did our first dev sandbox to fullcopy sandbox deployment using an SObject (Contact) with address fields. The dev sandbox was created pre-V33 but upgraded to V34 whereas the fullcopy sandbox was created V33 and also upgraded to V34. V33 was when Google Maps was introduced.

ANT build: Required field is missing

  1. You have an Eclipse project with package.xml against your dev sandbox at Version x
  2. Your Ant build against the staging sandbox uses package.xml at Version x+n, n > 0
  3. You run ant from your local GIT repo (containing your metadata files, like xxx.workflow) against your staging sandbox, using the staging sandbox package.xml

What seems to happen is the package.xml file in your dev sandbox Eclipse is defaulted to the earliest version of any class in your project. In my case it was V15.0. However, the Ant build against Staging sandbox used package.xml at V30.0. When you get metadata from the dev sandbox into Eclipse, the metadata files are presented according to what is supported by the dev sandbox’s package.xml version #. Using V15.0, the metadata for xxx.workflow files is no longer compatible with V30.0 (email alerts need to have descriptions, tasks need to have subjects).

The solution is to update the package.xml file in the dev sandbox to the same version as used in the Ant build, then refresh the src tree in the Eclipse dev project.

There is already a Child Relationship named Foos on Bar

  1. You are doing a ChangeSet deployment of a lookup custom field Bar__c on Sobject X
  2. The parent SObject (Foo) of the lookup field Bar__c has a child relationship named Foos (Foos__r)
  3. You haven’t changed anything on X or changed anything about the lookup field or its child relationship name. In sandbox there are no duplicates. In PROD, the Sobject X doesn’t even exist.

A likely cause of this is that in PROD, you have a deleted SObject X with field Bar__c. But the deleted object is within the 45 day window before it is permanently deleted by Salesforce. Hence, when the changeset comes through, SFDC thinks there is already the child relationship (albeit in a deleted sobject) and considers your change to be a duplicate.

The solution is to Erase the deleted Sobject in the target org (PROD) and retry the deployment of the Change Set.

SOQL Query Optimization

Pardner, you sure got some ‘xplaining’ to do

In PROD, I had a daily REST query that executed against another SFDC org. It retrieved about 1000 or so Opportunities and had been executing just fine, night after night. But today, the Apex Jobs log showed ‘Read timed out’. In fact, three days’ worth – ouch.

First thought – timeout argument too low
Perhaps the timeout needed to be bumped up in the HTTPRequest call?

This wasn’t it; I was already using this Apex to set the timeout:

hRqst.setTimeout(120000);  // max is 120 secs = 120000 ms

Second thought – the REST SOQL query needed optimization
Of course, I wanted to deny this possibility as the REST query had been running along just fine for weeks. But what if the target SFDC org (the one being queried) had gotten much bigger and this was causing the query to slow down? After all, the target org (not managed by me) actually has hundreds of thousands of Opportunities, but I thought I was using an index, so why would there be a problem?

Let’s take a look at the query (simplified) for discussion purposes:

   select id, stageName, 
      (select  unitPrice, totalPrice, pricebookEntry.name from OpportunityLineItems) 
     from Opportunity where closedate >= THIS_QUARTER and ID IN 
       (select opportunityId from opportunityLineItem 
          where pricebookEntry.name = 'FOO' OR 
                (bar__c = 'BAR' and is_Active__c = true))

You can see that the query does a semi-join – it is looking for Opportunities where at least one Opportunity Product meets some criteria.

Now, distant genetic memory kicked in and I remembered that SFDC introduced Query Plan explains back in V30. I hadn’t had a reason to look at this as I wasn’t experiencing any performance issues until now.

Using SFDC Workbench REST Explorer, I logged into the target SFDC org (the one where the query executes). I created this REST GET:

 
/services/data/v32.0/query?explain=select+id,+(select++unitPrice,+totalPrice,pricebookEntry.name+from+OpportunityLineItems)++from+Opportunity+where+closedate+>=+THIS_QUARTER+and+ID+IN+(select+opportunityId+from+opportunityLineItem++where+pricebookEntry.name+=+'FOO'+OR+(bar__c+=+'BAR'+and+is_active__c+=+true))

And here was the result – three possible plans; the first one has a relative cost of < 1.0, so it meets the ‘selective query definition’ and will use an index.

[Screenshot: initial query plan (SoqlExplainInitial)]

So, although 0.826 isn’t great, there are over 500,000 records to consider. So why was the query timing out?

Third thought – The Explain feature doesn’t consider the semi-join portion
Since there are really two queries being executed and joined together, I hypothesized that the query on OpportunityLineItem wasn’t ‘explained’. So, I decided to see what that query would look like ‘explained’. Back to REST Explorer and I entered this:

/services/data/v32.0/query?explain=select+opportunityId+from+opportunityLineItem++where+(pricebookEntry.name+=+'FOO'+OR+(bar__c+=+'BAR'+and+is_active__c+=+true))

And the plan (only one) came back being a table scan of 2.1 million rows! Uh-oh. Here was the problem. As you might suspect, the culprit is the OR clause on non-indexed custom fields.
[Screenshot: query plan showing a full table scan (SoqlExplainProblemQuery)]

So, how to make this query use an index? I had naively thought (or perhaps it was epistemic arrogance) that SFDC would automatically exclude any OpportunityLineItem that was prior to the current quarter as specified by the semi-join’s left hand query; and I knew closeDate was indexed. Wrong!

So, I added to the semi-join’s right-hand query a condition to only include OpportunityLineItem records whose parent Opportunity’s closeDate was in the current quarter or beyond:

 
/services/data/v32.0/query?explain=select+opportunityId+from+opportunityLineItem++where+opportunity.closeDate+>=+THIS_QUARTER+and+(pricebookEntry.name+=+'FOO'+OR+(bar__c+=+'BAR'+and+is_active__c+=+true))

[Screenshot: optimized query plan (SoqlExplainFinal)]
The explain’s relative cost for this optimization now falls below 1.0 and hence it too will use an index. The semi join will join the results of two index result sets.

Peace
I adjusted the daily job’s query to use the properly selective semi-join right-hand side and reran it. No more timeouts!
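
Putting the pieces together, the adjusted query (simplified as before) would look something like this:

   select id, stageName, 
      (select unitPrice, totalPrice, pricebookEntry.name from OpportunityLineItems) 
     from Opportunity where closedate >= THIS_QUARTER and ID IN 
       (select opportunityId from opportunityLineItem 
          where opportunity.closeDate >= THIS_QUARTER and
                (pricebookEntry.name = 'FOO' OR 
                 (bar__c = 'BAR' and is_Active__c = true)))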

Update! – after writing all this, I learned I could access the Query Plan from the Developer Console. I tend to avoid using that on orgs I don’t control for fear of writing some anonymous Apex by mistake or using DML through the Query Editor results.

Resources
SFDC KB article (using Developer Console)
Using REST explain

S2S (Salesforce-to-Salesforce) Adventures

To paraphrase talk radio – “long-time thinking about, first-time actually doing it”.

An application came up to implement S2S between my org and another org at the same client. Something seemingly simple – just forward Accounts and Opportunities to the target org. The problem statement called for automatic forwarding / automatic acceptance, no user-initiated forwarding.

Several gotchas were uncovered and I’ll keep a running list here. Most resolved through Google searches and Developer Console investigation.

No upserting of PartnerNetworkRecordConnection (PNRC)

You might run across this blog post with a helpful code example. Unfortunately, the sample uses the upsert DML operation. This is great if the PNRC is being inserted but not if you are trying to update one. Stick to using DML insert and avoid trying to modify an existing PNRC. See SFDC doc.

Don’t try and share an Opportunity before sharing its Account

If you use APEX to share an Opportunity before the parent Account is shared, the DML insert of PartnerNetworkRecordConnection will fail with an exception ‘Invalid status for partner network operation’.

You can share an Opportunity without specifying a value for ParentRecordId. In this situation, the Opportunity will forward and be accepted as an orphan (no parent Account). Yes, the schema allows for this but most orgs probably have validation rules to prevent this from happening. This can be a workaround (with trigger on the target org) for the multiple Lookups on Account problem (see below).
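
For reference, a minimal sketch of sharing an Opportunity whose Account has already been shared and accepted (oppo is assumed to be an Opportunity queried with its AccountId; error handling omitted):

// The parent Account must already be shared (and accepted by the remote org) before this;
// otherwise the insert fails with 'Invalid status for partner network operation'.
PartnerNetworkConnection conn = [select Id from PartnerNetworkConnection
                                  where ConnectionStatus = 'Accepted' limit 1];
insert new PartnerNetworkRecordConnection(
    ConnectionId   = conn.Id,
    LocalRecordId  = oppo.Id,          // the Opportunity being forwarded
    ParentRecordId = oppo.AccountId);  // its already-shared parent Account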

Deleting a PNRC doesn’t actually delete it

If you go into Developer Console, query PartnerNetworkRecordConnection, and then delete selected records through the DC user interface, the records aren’t actually deleted; their status is changed to Inactive.

INVALID_PARTNER_NETWORK_STATUS invalid status for partner network operation
Causes:

  • Even if you have an active connection to the remote org, you need to publish the sObjects you insert in your code/testmethods and, the remote org has to subscribe to those sObjects.
  • If you are sharing a child record (like an Opportunity) and the parent record has not yet been accepted, this error ensues. In testmethods, unless using SeeAllData=true and a queryable parent record successfully shared, there is no way to get around this error except to ignore it.

It is impossible to auto-accept (without coding) an Opportunity if it has two or more lookup fields to Account

This is a weakness of the PNRC schema wherein you can’t specify the relationship name used for the ParentRecordId field. Since the target SFDC org has no way of knowing which relationship to instantiate, it tosses the decision into the target user’s lap. The pending record is locatable only through the Opportunity tab, “Opportunities from Connections” section:

[Screenshot: “Opportunities from Connections” section (OpposFromConnections)]

If you’ve overridden your Opportunities tab page (like I had in my dev org), then you’ll need to add a button or link to get you to https://yourinstance.salesforce.com/04L?fcf=000000000000000. Yep – the fcf parameter is all zeroes.

By the way, the SFDC Help on this is here.

The only way I could think of to resolve this problem was to do the following:

  1. Publish the Opportunity.accountId field
  2. On the Subscribing (target) org, map that field to a String field called Partner_Oppo_AccountId__c
  3. Don’t include the ParentRecordId field when, on the publishing org, you create a PNRC for the Opportunity
  4. On the subscribing (target) org, create a before insert trigger that examines the value of Partner_Oppo_AccountId__c and looks for the corresponding PNRC record with the same PartnerRecordId and whose ConnectionId matches the triggered object’s ConnectionReceivedId. From the row you find, take the LocalRecordId and assign it to the triggered Opportunity’s AccountId field. Then, when the trigger ends, the SObject has the proper relationship (a sketch follows this list). Note that I didn’t control the target org, so I haven’t had a chance to test this.
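
A sketch of what that trigger on the subscribing org might look like (untested, as noted; Partner_Oppo_AccountId__c is the mapped text field described above):

trigger S2SOpportunityAccountFixup on Opportunity (before insert) {
    // Collect the publishing org's Account ids carried over in the mapped field
    Set<String> remoteAcctIds = new Set<String>();
    for (Opportunity o : Trigger.new)
        if (o.ConnectionReceivedId != null && o.Partner_Oppo_AccountId__c != null)
            remoteAcctIds.add(o.Partner_Oppo_AccountId__c);

    if (!remoteAcctIds.isEmpty()) {
        // Map connectionId|remoteAccountId -> local Account id via the already-accepted Account PNRC
        Map<String, Id> remoteToLocal = new Map<String, Id>();
        for (PartnerNetworkRecordConnection pnrc : [select ConnectionId, PartnerRecordId, LocalRecordId
                                                      from PartnerNetworkRecordConnection
                                                     where PartnerRecordId in :remoteAcctIds])
            remoteToLocal.put(String.valueOf(pnrc.ConnectionId) + '|' + pnrc.PartnerRecordId, pnrc.LocalRecordId);

        for (Opportunity o : Trigger.new) {
            Id localAcctId = remoteToLocal.get(String.valueOf(o.ConnectionReceivedId) + '|' + o.Partner_Oppo_AccountId__c);
            if (localAcctId != null) o.AccountId = localAcctId;  // relate the forwarded Oppo to its local Account
        }
    }
}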

On the target org, the running user is called Connection User and it has some quirks

  • This user can’t be queried in the User sObject
  • If you need to look at a debug log, you’ll need to monitor Connection User but it can’t be selected in the lookup for Add Users to Debug Log page. To resolve this, see this tip
  • If you need to know in Apex if some user is the Connection User, use the Apex below. See also
   public Boolean isRunningUserConnectionUser() {
      return UserInfo.getUserType() == 'PartnerNetwork' && 
           UserInfo.getName() == 'Connection User'; // localize appropriately
   }

Don’t share a record back to the org that shared it to you

You’ll need to test to see if the object to be shared has the same ConnectionReceivedId as the connection you intend to share (forward) the record to. A good example of this is covered in this very helpful SFDC best practices guide.
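
A minimal guard, assuming you already have the candidate record (oppo) and the Id of the connection you intend to forward to (targetConnectionId):

// Don't forward a record back to the org that shared it to us
if (oppo.ConnectionReceivedId == null || oppo.ConnectionReceivedId != targetConnectionId) {
    insert new PartnerNetworkRecordConnection(
        ConnectionId   = targetConnectionId,
        LocalRecordId  = oppo.Id,
        ParentRecordId = oppo.AccountId);
}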

You can’t add the External Sharing relatedlist to a VF page

You might think you could with:

<apex:relatedList list="PartnerNetworkRecordConnection"/>

But you can’t. This special related list, like field histories, either has to be rebuilt with a custom component or you need to let the user escape to a standard page layout.

You can only establish S2S connections between sandbox-sandbox, PROD-PROD, Dev Edition-Dev Edition or PROD-Dev Edition
This makes deploying Apex code from your sandbox to Production tricky if you have testmethods that cover your S2S logic – they will require an active PartnerNetworkConnection in order to try and insert any PartnerNetworkRecordConnection. You’ll either have to:

  • Establish a PROD-PROD S2S connection with some friendly PROD org
  • Establish a PROD-DevEdition S2S connection with a captive Dev Edition org

Neither option is great – the remote PROD org might not be ready when you are, and it is not best practice to have a PROD<->Dev Edition connection, especially as it can share records!

See Also
S2S Stackexchange question on test isolation

Rerendering Visualforce apex:sectionHeader

You would think you could rerender a sectionHeader component by referring to its id, just like other Visualforce components. It turns out this is not true as of V32.

Here is a controller and VF page that tries to rerender an apex:sectionheader by inserting the selectedItem from an apex:selectlist.

public with sharing class FooController {
    // For demonstration of sectionHeader rerender issue
  public List<SelectOption> optionList {
        get { return new List<SelectOption> {
           new SelectOption('option1','option1'),
           new SelectOption('option2','option2')};}
        private set;                                                      
    }
  public String selectedOption {get; set;}
}
<apex:page controller="FooController">
    
    <apex:sectionHeader title="My Title: {!selectedOption}" 
          description="sectionheaderNotInDiv." id="sectionHdrNoDiv"/>
    <apex:outputPanel  layout="block" id="sectionHdrInDiv">
        <apex:sectionHeader title="My Title: {!selectedOption}" 
          description="sectionheaderInDiv."/>	
    </apex:outputPanel>
    <apex:form>
    	<apex:pageBlock>
            <apex:selectList label="choose something" 
                             value="{!selectedOption}" size="1">
               <apex:actionSupport event="onchange" 
                   rerender="sectionHdrNoDiv,sectionHdrInDiv"/>
               <apex:selectOptions value="{!optionList}"/>
            </apex:selectList>
        </apex:pageBlock>
    </apex:form>
    
</apex:page>

Initial page state
[Screenshot: initial page state (sectionHdrRerender1)]

Choose an option so onchange event occurs and rerender ensues
[Screenshot: selecting an option (sectionHdrRender2)]

And the result
Only the sectionheader inside a div is rerendered even though both ids are specified in the rerender
[Screenshot: only the sectionHeader inside the div is rerendered (sectionHdrRerender3)]

Analysis
Using Chrome Developer Tools, I inspected the HTML for the first apex:sectionHeader.

<div class="bPageTitle">
 <div class="ptBody secondaryPalette brandSecondaryBrd">
  <div class="content">
   <img src="/s.gif" class="pageTitleIcon" title="Home" alt="Home">
   <h1 class="pageType noSecondHeader">My Title: </h1>
   <div class="blank"></div>
  </div>
 </div>
</div>

And, what do you know, despite specifying an id on the first apex:sectionHeader, SFDC doesn’t add it to any generated HTML. No wonder it doesn’t rerender!

Versioning woes – ApexPages.addMessage

I recently updated my Util class from V26 to V31. Unfortunately, this broke some VF page test methods. After some experimentation, I traced the problem down to this:

The ApexPages.Message class has several constructor forms. The one I was using is:

public Message(ApexPages.Severity severity, String summary, String detail)

Using a pattern like this in some of my VF controllers that did some substantial DML:

public PageReference fooAction() {
  String step;
  try {
    step = '01 step';
    // do some work 'A' using various Util methods that can throw exceptions
    // do some other work 'B' using various Util methods that can throw exceptions
    step = '02 step';
    // do some more work using various Util methods that can throw exceptions
  }
  catch (exception e) {
   ApexPages.addMessage(new ApexPages.Message(ApexPages.Severity.ERROR,
                                              'some error at step:'+step,
                                              e.getMessage()));
  }
  return null; // stay on the page so the messages display
}

My testmethod was structured as shown below. I like this pattern because I can test all the error conditions (usually) in sequence within one testmethod rather than going through the overhead of multiple testmethods, one per error case.

 // ...
 Test.startTest();
 Test.setCurrentPage(Page.Foo);
 PageReference resPg;
 // set up conditions to test step 1 and verify exception for work 'A' occurs
 resPg = fooAction();
 System.assert(getMostRecentMsg().contains('expected error code'));
 // set up conditions to test step 1 and verify exception for work 'B' occurs
 resPg = fooAction();
 System.assert(getMostRecentMsg().contains('expected error code')); // fails at V31
 // set up conditions to test step 2 and verify exception occurs
 resPg = fooAction();
 System.assert(getMostRecentMsg().contains('expected error code'));
  // ...
 
  // utility method
  private static String getMostRecentMsg() {
     return ApexPages.hasMessages() 
	? ApexPages.getMessages()[ApexPages.getMessages().size()-1].getSummary() + 
          ' ' + 
          ApexPages.getMessages()[ApexPages.getMessages().size()-1].getDetail() 
	: null;
   }

But, when I upgraded the version of the Util worker code to V31 from V26, the testmethod stopped working for the second assert. Why?

To resolve, I broke the problem down to its essential elements using a separate test class:

@isTest
private static void testAddVFMsg() {    
    Test.startTest();
    Test.setCurrentPage(Page.Foo);
    ApexPages.addMessage(
      new ApexPages.Message(ApexPages.Severity.INFO,'MsgSummary','00detail'));
    System.assertEquals(1,ApexPages.getMessages().size());

    ApexPages.addMessage(
      new ApexPages.Message(ApexPages.Severity.INFO,'MsgSummary','01detail'));
     // passes at V28 and earlier, fails at V29+
    System.assertEquals(2,ApexPages.getMessages().size()); 

    Test.stopTest();
}

Note the arguments to the ApexPages.Message constructor between the two calls. Same summary (second argument), different detail (third argument). I then tried it at V31, V30, …until it passed.

Conclusion
ApexPages.addMessage() behavior varies by SFDC version when the (i+1)st message has the same summary as the ith message, even if the detail argument differs between them.

If the summary values are the same, the (i+1)st message won’t get added to the page messages. This change happened as of V29.

It seems to me that if you call addMessage(..), the resulting getMessages() should include the message you added.
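
One workaround (a sketch, not from the original post) is to make each summary unique, for example by folding the exception detail into it, so the V29+ de-duplication doesn’t swallow subsequent messages:

public PageReference fooAction() {
  String step;
  try {
    step = '01 step';
    // work that can throw ...
    step = '02 step';
    // more work that can throw ...
  }
  catch (Exception e) {
    // A distinct summary per failure means ApexPages.getMessages() keeps every message at V29+
    ApexPages.addMessage(new ApexPages.Message(ApexPages.Severity.ERROR,
                                               'some error at step:' + step + ' ' + e.getMessage(),
                                               e.getStackTraceString()));
  }
  return null;
}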

Versioning Woes – No such column – Field is valid

Here’s a tip to help resolve a System.QueryException ‘No such column’ when the field exists in the schema.

Turns out I was generating a SOQL Select in class A using Schema.Describe to get all fields from OpportunityLineItem resulting in:

Select createdbyid, hasrevenueschedule, 
    description, isdeleted, quantity, systemmodstamp, 
    totalprice, hasschedule, createddate, subtotal, discount, 
    lastmodifiedbyid, currencyisocode, pricebookentryid, 
    lastmodifieddate, id, unitprice, servicedate, 
    listprice, sortorder, hasquantityschedule 
from OpportunityLineItem

Yet, when this query was executed by class B, I got an error: System.QueryException ‘No such column subtotal’.

‘subtotal’ is not a field I normally use in OpportunityLineItem so I wondered if it had been removed or added in the most recent SFDC Version (31). Nope.

Here was the issue:

  • Class A, that generated the SOQL from the Schema.Describe was at V27.0, recently upgraded from V13.0
  • Class B, that executed the generated SOQL from the Schema.Describe was at V16.0 (!)
  • Test Class T, that executed Class A and B was at V27.0

So, the Schema.describe in class A found all the fields known to SFDC version 27.0 which included OpportunityLineItem.subtotal but when executed by a much older class B (version 16), the subtotal field wasn’t part of the schema at that point in history so QueryException ensued.

Updating class B to V27.0 resolved the issue.
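
For context, the Schema.Describe-driven generation in class A would have been along these lines (a minimal sketch, not the original code):

// Build a "select all fields" SOQL string for OpportunityLineItem.
// The field map returned here reflects the API version of the class doing the describe.
Map<String, Schema.SObjectField> fieldMap = Schema.SObjectType.OpportunityLineItem.fields.getMap();
String soql = 'select ' + String.join(new List<String>(fieldMap.keySet()), ', ') +
              ' from OpportunityLineItem';
// Executing the string from a class saved at an older API version can throw
// System.QueryException 'No such column' for fields added after that version.
List<OpportunityLineItem> rows = Database.query(soql);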

Object access from SOQL results

I was reading Dan Appleman’s 2nd edition on Advanced Apex Programming and looking at the code samples. In one sample, Dan showed me something I didn’t know you could do in Apex – something that had eluded me in 6 years – and so basic. I was so chagrined but, with shame, comes resolution to help others avoid the same fate.

Let’s look at a typical SOQL query on Opportunity that references both parent and child objects:

    Opportunity o = [select account.id, account.name,
                            id, name,
                            (select id, quantity
                              from OpportunityLineItems)
                      from Opportunity where id = :someId];

Well, we know we can reference the child objects easily with:

   List<OpportunityLineItem> oliList = o.opportunityLineItems;

But here is what I did not know – you can also reference the Account object:

   Account a = o.account;  //sheesh

This is handy if you need to update the looked-up (parent) object; you avoid an unnecessary SOQL query to fetch it.
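
For example, with the query above you can modify and update the parent directly, with no second query to fetch it (a small illustrative sketch):

   Account a = o.account;             // Id and Name were selected, so they are populated
   a.name = a.name + ' (reviewed)';   // hypothetical change
   update a;                          // no extra SOQL needed to fetch the Account first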

Batchable class – test data – callouts: How to test?

The shorthand title for this post might also be System.CalloutException: You have uncommitted work pending. Please commit or rollback before calling out.

Here’s the problem I recently faced:

  1. Batchable class that does callouts in the start() method.
  2. Testmethod needs to setup some test data before the batchable class executes

As you probably know if you are reading this ..

  • Testmethods can’t do callouts, you have to mock out the callout response using Test.setMock(..)
  • Async tasks like Database.executeBatch() or @future methods don’t execute in a test context until the Test.stopTest() method.

Naively, I constructed this basic testmethod:

insert new Account(name = '00test'); // create test data here
Test.startTest();
  Test.setMock(HttpCalloutMock.class, myMultiMockObject); // setup of multiMockObject omitted for clarity
  MyBatchableClass bClass = new MyBatchableClass();
  Database.executeBatch(bClass,2);  // scope is 2
Test.stopTest();  // SFDC executes the async task here
System.assert(...) //verify results

When you execute this test, you get System.CalloutException: You have uncommitted work pending. Please commit or rollback before calling out

I then tried to setup the test data in a different context like this:

System.runAs(someUserOtherThanRunningUser) {
   insert new Account(name = '00test'); // create test data here
}
Test.startTest();
  Test.setMock(HttpCalloutMock.class, myMultiMockObject); // setup of multiMockObject omitted for clarity
  MyBatchableClass bClass = new MyBatchableClass();
  Database.executeBatch(bClass,2);  // scope is 2
Test.stopTest();  // SFDC executes the async task here
System.assert(...) //verify results

When you execute this test, you get System.CalloutException: You have uncommitted work pending. Please commit or rollback before calling out

Aargh. SFDC doc as of V30 wasn’t much help here. Much Googling ensued. Thanks to tomlogic for the solution essence.

Here is what you have to do: avoid the use of Database.executeBatch() in the testmethod if one of the batchable class’s start(), execute(), or finish() methods does a callout. Instead, you have to invoke the start(), execute(), and finish() methods explicitly to verify your batchable class logic and get code coverage.

insert new Account(name = '00test'); // create test data here
Test.startTest();
  Test.setMock(HttpCalloutMock.class, myMultiMockObject); // setup of multiMockObject omitted for clarity
  Database.BatchableContext 	bc;
  MyBatchableClass bClass = new MyBatchableClass();
  
  // we execute the start() and prepare results for execute()
  // in my use case, start() does the callout; 
  // thus the testmethod mocks the results of the callout (assumed here to be accounts)
  // setup of custom Iterable and Iterator not shown
  MyIterable  itrbl = (MyIterable) bClass.start(bc);    // start() returns an Iterable (here, a custom one) and takes the BatchableContext. Note the casting
  MyIterator  itrator = (MyIterator) itrbl.iterator();  // continue simulation of start() by constructing the iterator
  List<Account> aScopeList = new List<Account>();       // create scope for execute() by iterating against the result of the mocked callout
  while (itrator.hasNext()) {
     aScopeList.add(itrator.next());
  }

  // Now invoke execute() w/ Account list built via mocked callout invoked by start()
  bClass.execute(bc,aScopeList);

  // Finally, invoke finish()
  bClass.finish(bc);
Test.stopTest();  // SFDC executes the async task here
System.assert(...) //verify results
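
The custom Iterable and Iterator whose setup was omitted above might be shaped roughly like this (a sketch; the real classes wrap whatever the mocked callout returns):

// A minimal custom Iterable/Iterator over Accounts, matching the shape used in the testmethod
public class MyIterable implements Iterable<Account> {
    private List<Account> accounts;
    public MyIterable(List<Account> accounts) { this.accounts = accounts; }
    public Iterator<Account> iterator() { return new MyIterator(accounts); }
}

public class MyIterator implements Iterator<Account> {
    private List<Account> accounts;
    private Integer idx = 0;
    public MyIterator(List<Account> accounts) { this.accounts = accounts; }
    public Boolean hasNext() { return idx < accounts.size(); }
    public Account next()    { return accounts[idx++]; }
}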

To recap, this approach accomplishes the following:

  1. It will test code coverage as start(), execute(), and finish() are all invoked as if SFDC were invoking them by Database.executeBatch().
  2. It avoids the System.CalloutException: You have uncommitted work pending. Please commit or rollback before calling out
  3. It allows you to test against the returned data from the mocked callout(s) thus isolating your testmethod to predictable responses.
  4. It tests your custom iterator, if you are using one
  5. It only requires a bit more setup to individually invoke start(), execute(), and finish() plus manual construction of what execute() would get if database.executeBatch() were called

Conga Composer – Excel – Array Formulas – aargh

We’re a steady user of Conga Composer to generate Excel reports for our users where we can’t get SFDC to format a report the way the users want. This week, an interesting problem arose.

Business problem
Display a count of distinct values in a set of rows matching some condition; some rows may have null values

Background
Without getting into too much detail, we needed to count unique accounts for various conditions in the Conga-generated, merged Excel template. I couldn’t coerce the well-known ‘Power of One’ solution to work for us in the actual SFDC report that serves as the Conga source data so the decision was made to do the counting in Excel using new cell ranges below the generated Conga output.

How do you count distinct values in Excel?
If you go to the Microsoft Excel Help, you get this page.

For example, one method is to use this excel formula: =SUM(IF(FREQUENCY(MATCH(B2:B10,B2:B10,0),MATCH(B2:B10,B2:B10,0))>0,1))

Now, if you coded up this formula in the Conga template, you couldn’t directly use the cell value B10 because Conga will generate a variable number of rows based on your source data. So, you would code this in Excel as:
=SUM(IF(FREQUENCY(MATCH(B2:B2,B2:B2,0),MATCH(B2:B2,B2:B2,0))>0,1)) and rely on Conga+Excel to expand the outer range of each cell range expression to B13 or B24 or B65 or whatever, depending on how many rows of data are generated (where column B holds the values to be counted and rows 2…n are what Conga inserts into the merged template). I’m presuming you understand Conga merge templates.

Unfortunately, this does not work: the formula =SUM(IF(FREQUENCY(MATCH(B2:B2,B2:B2,0),MATCH(B2:B2,B2:B2,0))>0,1)) has to be entered into Excel as an array formula, and Conga/Excel won’t expand an array formula by adjusting the outer range of the B2:B2 expressions.

So, now the problem is how to count distinct values without using array formulas.

The solution that worked for us (with limitations)
Some Google searching yielded this count distinct formula that does not rely on Excel array formulas:
=SUMPRODUCT(1/COUNTIF(List1, List1))

Geez, how does this work? I’m used to the second argument of COUNTIF being some expression that is tested against the elements of the first argument but here we are comparing the list to itself. The MSFT documentation of COUNTIF doesn’t have such an example.

Well, I turned to the handy Excel formula expression evaluator in my Windows Excel 2010. Take a look at the picture below (click to see full size so you can read it):
[Screenshot: count distinct formula evaluation]

As you can see from the picture and successive evaluations, the COUNTIF(list1,list1) looks at each element in the list and sees how many times it appears in itself, returning an array of the same size as List1. Then the reciprocal is taken of each resulting array element.

So, if a value X appears twice in the range, the resulting array will be {2,…,2,…} and thus the reciprocal array is {.5,…,.5,…}. If a value Y also appears in the range three times, the resulting array will be {2,3,3,…,2,…} (positions are of course arbitrary, as there is no assumption made about sorting) and the reciprocal array would be {.5,.333,.333,…,.5,…}.

Finally, SUMPRODUCT, because it has only one array, doesn’t multiply anything but does add up the values in the reciprocal array. Adding .5 twice gives 1; adding .333 three times gives 1. Summing 1 and 1 gives 2 – two unique elements.

As you can see from the picture, the result ‘4’ is the number of unique Account Names. Pretty cool, and no array formula required.

But what happens if you have nulls in the range of Account Names?
The expression will return #DIV/0!. Sigh.

The solution is to recognize that if the comparison array in COUNTIF is guaranteed to have a non-null value, then the counts will work. Look at this picture:

[Screenshot: count distinct with null formula evaluation]
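
Presumably, then, the formula shown in the screenshot is of the form =SUMPRODUCT(1/COUNTIF(List1, List1 & "")).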

Note the concatenation operator ‘&’. This appends a zero-length string to the second list in the COUNTIF. Thus your final result will include the count of distinct Account names plus 1 for the null cells. If you are guaranteed there will be null cells (and we were, due to inserted Conga subtotal lines), you just subtract one from the final result of the =SUMPRODUCT(…) formula.