Category Archives: salesforce

SObject method isClone() Nuance

I discovered something while writing a unit test today.

SObject class method isClone() does not return true unless the clone source SObject exists in the database.

Account a = new Account(name = '00clonesrc');
insert a;
Account aClone = a.clone(false,true,false,false);
system.debug(LoggingLevel.Info,'isClone='+aClone.isClone());

The debug line shows as: isClone=true

But don’t do the insert, as in this example:

Account a = new Account(name = '00clonesrc');
Account aClone = a.clone(false,true,false,false);
system.debug(LoggingLevel.Info,'isClone='+aClone.isClone());

The debug line shows as: isClone=false

Normally this might not be an issue, but I was unit testing a service layer method and passing cloned SObjects in as arguments without doing the database operation, in order to make the tests run faster. This is one place where the DML is required in order to get isClone() to work as expected.

Update 2016-10-04

Per a suggestion by Adrian Larson, I retried using a dummy Id:

Account a = new Account(id = '001000000000000000', name = '00clonesrc');
Account aClone = a.clone(false,true,false,false);
system.debug(LoggingLevel.Info,'isClone='+aClone.isClone());

The debug line shows as: isClone=true

Duplicate Rule Woes

A long time ago, I implemented duplicate checking in Apex for custom object Foo__c. I decided it was time to use SFDC’s out-of-the-box Duplicate Rules so I could move the logic into point-and-click configuration and clean up the code.

Good idea eh?

Well, sort of. There are some considerations before you jump into this.

Starting condition:
My existing Apex logic checked for duplicates both intrabatch and extrabatch. Meaning: if two Foo__c records with the same key appeared in the same trigger set, they were both flagged as duplicate errors. Similarly, if any Foo__c within the batch matched an existing Foo__c outside of the batch, it was flagged as an error.

Consideration (coding)

  • Unfortunately, SFDC Duplicate Rules won’t block intrabatch duplicates. This is documented in the Help.
  • Doubly unfortunate: once duplicate Foos are inserted in the same batch, if you edit one of them without changing the matching key field, SFDC won’t check it against the other Foo with the same key. For example, if you bulk upload in one batch two Foos, each with key ‘Bar’, SFDC doesn’t detect the duplicates. When you then edit one of those Foos and change any field other than the matching key, SFDC won’t tell you that Foo i with key ‘Bar’ is the same as existing Foo j with key ‘Bar’.

That said, you do get to eliminate any Apex code that does SOQL to check for duplicates extrabatch.

Workaround
If you really want to block Foos with the same key from getting into the database, you have to implement Apex verification in your domain layer (i.e. trigger). No SOQL is required to catch the intrabatch duplicates, because all the records that have to be checked will be in Trigger.new
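The workaround can be sketched as a before insert/update trigger. This is only a sketch: Foo__c and its matching key field Key__c are hypothetical names here, so adapt them to your object and matching field.

```apex
trigger FooDuplicateCheck on Foo__c (before insert, before update) {
    // Group the incoming records by their matching key
    // (Key__c is a hypothetical field; substitute your org's matching field)
    Map<String, List<Foo__c>> foosByKey = new Map<String, List<Foo__c>>();
    for (Foo__c foo : Trigger.new) {
        if (foo.Key__c == null) continue;
        if (!foosByKey.containsKey(foo.Key__c)) {
            foosByKey.put(foo.Key__c, new List<Foo__c>());
        }
        foosByKey.get(foo.Key__c).add(foo);
    }
    // Any key appearing more than once within the batch is a duplicate.
    // No SOQL needed: extrabatch duplicates are still caught by the Duplicate Rule.
    for (List<Foo__c> foos : foosByKey.values()) {
        if (foos.size() > 1) {
            for (Foo__c foo : foos) {
                foo.addError('Duplicate key within this batch: ' + foos[0].Key__c);
            }
        }
    }
}
```

addError() blocks only the offending records, so the rest of the batch still saves, mirroring the allOrNone=false behavior elsewhere in these posts.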

Consideration (deployment)
As of V37.0 (Summer 16), there is no way to deploy Duplicate Rules through ant or Change Sets. You have to manually add the Duplicate Rules in your target orgs. You can deploy MatchingRules via ant or Change Sets but that won’t do you much good as they have to be bound to a Duplicate Rule. There is an Idea worth voting up.

Rapid compare of two lists in Excel

The business problem was to mass delete Leads from Salesforce in order to cause a corresponding mass delete from HubSpot. The business wanted to know how many of the planned SFDC deletions were in the HubSpot SFDC sync inclusion list. The lists were very large (> 100,000 rows).

  1. Export SFDC leads to be deleted – with email column in export
  2. Export HubSpot contacts in the SFDC integration settings inclusion list
  3. Create a new Excel workbook and place the Hubspot emails into column A and the SFDC emails in column B
  4. In column C, write a VLOOKUP of the value in column B to see if it is in column A; if not, show an error
  5. Count the non-error cells in column C

Not so fast pard’ner!

Excel VLOOKUP exact match (4th argument set to FALSE) is really sloooooowwwwwww on large spreadsheets. So, instead, you have to use VLOOKUP twice, but with approximate matching on sorted lists.

Step 1 – sort (only) Column A, then sort (only) Column B
Step 2 – in cell C2, the Excel formula is (remember – HubSpot data is in column A, SFDC data in column B):

=IF(VLOOKUP(B2,$A:$A,1,TRUE)=B2, VLOOKUP(B2,$A:$A,1,TRUE), NA()) and then copy the formula down for all rows in column B (SFDC leads).

This runs lightning fast (the alternative VLOOKUP exact match would take minutes on a 64-bit quad-core 32 GB Windows 7 machine).

Why does this work so well?
When VLOOKUP uses approximate matching on a sorted list, it does a binary search and stops once it finds the match (or the largest value less than the lookup key). So the IF condition tests whether VLOOKUP returns the same value as the lookup key; if yes, the true branch simply returns the lookup result again, because we know it was an exact match. If not, return #N/A, because we know there was no exact match.

Now, I could have done this faster with the following:

=IF(VLOOKUP(B2,$A:$A,1,TRUE)=B2, B2, NA())

This works because the result column is the same as the lookup column. The second VLOOKUP can be used to return any column from the search array by varying the third argument, so I leave the two VLOOKUPs in to future-proof the general fast-VLOOKUP technique for other tables.

Rerender custom component after main page inlineEdit Save

Sounds simple. You have a Visualforce (VF) page with apex:detail and a custom component. You want to rerender the custom component after the inlineEdit Save command completes. The VF markup is trivial:

<apex:page standardController="Account">
  <apex:form >
      <c:MyComponent id="theComponent" someArg="false"/>  
      <apex:detail inlineEdit="true" subject="{!Account}" rerender="theComponent"/>
  </apex:form>
</apex:page>

So, this works great except …. if someArg is used to conditionally render a table row or column. Such as in this example:

<apex:component >
    <apex:attribute name="someArg" type="Boolean" required="false" default="false" description="TRUE if component not part of export"/>
    <table>
      <thead>
        <apex:outputPanel rendered="{!NOT(someArg)}">
          <tr id="headerRow">
            <th><apex:outputText value="Column 1 Now: {!NOW()}"/></th>
            <th><apex:outputText value="Column 2 NotSomeArg"/></th>
          </tr> 
        </apex:outputPanel>    
        <apex:outputPanel rendered="{!someArg}">
          <tr>
            <th><apex:outputText value="Column 1 Now: {!NOW()}"/></th>
            <th><apex:outputText value="Column 2 someArg"/></th>    
          </tr>
        </apex:outputPanel>
      </thead>    
    </table>                                                                                           
</apex:component>

The problem
I had a pretty sophisticated component that sometimes rendered 9 columns and sometimes 11 columns in an HTML table, depending on the value of someArg. This works well on initial page load. When the business requirements changed and I needed to rerender the component after inlineEdit Save, the rerender action failed. In fact, it failed hard, leaving blank spaces in the table header and no table rows.

After spending way too much time thinking it was something about inlineEdit and components, other arguments not being passed, or something in the component’s controller, I stumbled upon this sentence in the documentation (something I knew about already, but didn’t think applied because the markup was inside a sophisticated component):

You cannot use the reRender attribute to update content in a table.

The solution?
No good answers here.

  1. You have to refactor your component so that you either have two components, one for x columns and one for y columns, sharing subcomponents to avoid duplication or …
  2. rethink your design or …
  3. after inlineEdit completes, reload the entire page via oncomplete="location.reload();". I tried this, but the user experience was poor: first inlineEdit does its Ajax refresh and all looks fine, and then the page reloads. Since this is contrary to the way other pages in Salesforce refresh after inlineEdit Save, I eschewed this option.
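For completeness, option 3 applied to the original page markup would look something like this (this is the variant I tried and rejected):

```xml
<apex:page standardController="Account">
  <apex:form >
      <c:MyComponent id="theComponent" someArg="false"/>
      <!-- no rerender attribute; instead reload the whole page after Save -->
      <apex:detail inlineEdit="true" subject="{!Account}"
                   oncomplete="location.reload();"/>
  </apex:form>
</apex:page>
```

Because the whole page reloads, the component re-evaluates someArg and rebuilds its table from scratch, sidestepping the reRender-in-a-table restriction at the cost of the double refresh described above.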

Side Note
Here’s a tip when you’re stuck with a complex piece of code that no longer works for some inexplicable reason: rebuild the code from its most elementary pieces in your Dev Edition so you have a working version, then add in the suspect bits one at a time until it fails. Try to avoid your custom objects and customizations; reproduce using the OOB Account object or the like. A side benefit: if you’re really stuck, you’ll have a short, self-contained, compilable example (SSCCE), suitable for posting on a forum for community assistance.

Database allOrNone and exceptions

In my batch classes, I’m a big user of the allOrNone argument to Database.insert or Database.update. This is mostly because some orgs have dirty data and dealing with those as non-fatal exceptions is better for the business than rolling back all DML from a batch execute() that might be dealing with 199 good records and only one bad one.

So, the normal pattern would be

Database.SaveResult[] srList = Database.update(listOfSobjs, false);
for (Integer i = 0; i < listOfSobjs.size(); i++) {
  if (!srList[i].isSuccess()) {
      // log the error somewhere for later admin action - typically to a persistent sobj
  }
}

But what if you had this coding fragment where the allOrNone argument was a variable, sometimes true, sometimes false?

Database.SaveResult[] srList = Database.update(listOfSobjs, allOrNoneVbl);
for (Integer i = 0; i < listOfSobjs.size(); i++) {
  if (!srList[i].isSuccess()) {
      // log the error somewhere for later admin action - typically to a persistent sobj
  }
}

Well, and again the doc isn’t completely clear on this: if allOrNoneVbl is true and any record fails, no Database.SaveResults are returned; a DmlException is thrown instead. Here’s proof:

try {
   Database.SaveResult[] srList = Database.insert(new List<Account>{
                                                    new Account(),   // missing required Name
                                                    new Account()},
                                                 true);
   // never reached: the failed insert throws before this line
   system.assert(false,'allOrNothing=true does not throw exception');
}
catch (Exception e) {
    // this is the assert that fires - see the debug log below
    system.assert(false,'allOrNothing=true does throw exception');
}

Debug log:
DML_END|[2]
EXCEPTION_THROWN|[2]|System.DmlException: Insert failed. First exception on row 0;
first error: REQUIRED_FIELD_MISSING,
Required fields are missing: [Name]: [Name]
EXCEPTION_THROWN|[6]|System.AssertException: Assertion Failed: allOrNothing=true does throw exception
FATAL_ERROR|System.AssertException: Assertion Failed: allOrNothing=true does throw exception

Conclusion: If your batch execute() is intended to log errors and you sometimes use allOrNone as true and sometimes as false in the same execute() (because you are doing multiple DML operations), your logging code is more complex, as the error messages are found in different places (the Database.SaveResult method getErrors() versus the caught DmlException methods getDmlMessage() and getMessage()).
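To keep the logging in one place, one approach is to funnel both error sources through a single method. This is a sketch: logError() stands in for whatever persistent-object logging your org uses.

```apex
public static void updateWithLogging(List<SObject> recs, Boolean allOrNone) {
    try {
        Database.SaveResult[] srList = Database.update(recs, allOrNone);
        // allOrNone=false path: harvest per-record errors from the SaveResults
        for (Integer i = 0; i < recs.size(); i++) {
            if (!srList[i].isSuccess()) {
                for (Database.Error err : srList[i].getErrors()) {
                    logError(recs[i].Id, err.getMessage());
                }
            }
        }
    } catch (DmlException e) {
        // allOrNone=true path: the same errors arrive as a DmlException
        for (Integer i = 0; i < e.getNumDml(); i++) {
            logError(recs[e.getDmlIndex(i)].Id, e.getDmlMessage(i));
        }
    }
}
```

With this wrapper, the rest of the batch class never cares which allOrNone mode was in effect; note that in the true case the whole transaction is still rolled back, so the logError() implementation would need to survive the rollback (e.g. via Platform Events or a separate async log).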

Testing a Batchable + Queueable + Schedulable

It isn’t super clear in the documentation (V36) what happens in a test method when a Batchable, Queueable, and Schedulable are involved within the Test.startTest()...Test.stopTest() execution scope.

The system executes all asynchronous processes started in a test method synchronously after the Test.stopTest statement

So, I decided to do a simple experiment:

The class (acts as both a batchable and a queueable, and contains an inner schedulable)

public with sharing class FooBatchableQueueable
             implements Database.Batchable<SObject>, Queueable {

    public Database.QueryLocator start(Database.BatchableContext bc) {
        System.debug(LoggingLevel.INFO,'Entered Batchable start()...');
        return Database.getQueryLocator([select Id from Group
                                         where DeveloperName = 'Foo']);
    }

    public void execute(Database.BatchableContext bc, List<Group> scope) {
        System.debug(LoggingLevel.INFO,'Entered Batchable execute()...');
        System.enqueueJob(new FooBatchableQueueable());
        System.debug(LoggingLevel.INFO,'within Batchable execute(), after enqueuing the job...');
    }

    public void finish(Database.BatchableContext bc) {
        System.debug(LoggingLevel.INFO,'Entered Batchable finish()...');
        System.schedule('FooSchedulable','0 0 0 1 1 ?', new FooSchedulable());
        System.debug(LoggingLevel.INFO,'within Batchable finish(), after scheduling');
    }

    public void execute(QueueableContext qc) {
        System.debug(LoggingLevel.INFO,'reached Queueable execute()');
    }

    public class FooSchedulable implements Schedulable {
        public void execute(SchedulableContext sc) {
            System.debug(LoggingLevel.INFO,'reached Schedulable execute()');
        }
    }
}

And the testmethod

@isTest
private with sharing class FooBatchableQueueableTest {
    
    @isTest private static void testBatchableQueueable() {
        insert new Group(DeveloperName='Foo', Name='Foo', Type='Regular');
        Test.startTest();
        Database.executeBatch(new FooBatchableQueueable());
        Test.stopTest();
        // Async batchable should execute, then queueable,
        // then schedulable. Or do they? See debug log
    }
}

And, what does happen?

  1. The batchable start() and execute() execute fine.
  2. The execute() calls System.enqueueJob(..).
  3. The Queueable job starts, and its execute() method is invoked. See the debug log.
  4. The batchable finish() method executes. It does a System.schedule() on a new object.
  5. The schedulable’s execute() does not run.

Debug log

Entered Batchable start()…
Entered Batchable execute()…
within Batchable execute(), after enqueuing the job…
reached Queueable execute()
Entered Batchable finish()…
within Batchable finish(), after scheduling

Conclusion

  • Both the batchable and the queueable, as async transactions, are executed “synchronously” once Test.stopTest() is reached in the testmethod.
  • You definitely cannot assume that the batch finish() will execute before the queueable execute().
  • The constructor for the schedulable class will get invoked, but not its execute() method. You can see no debug log from within the schedulable’s execute().
  • You will need to explicitly test the schedulable by mocking the environment prior to its scheduling and then invoking in a separate testmethod.
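Such a separate testmethod might look like the sketch below; the asserts you add depend on what your schedulable’s execute() actually does, so the side-effect assert is left as a placeholder.

```apex
@isTest private static void testSchedulable() {
    Test.startTest();
    // Schedule the job directly from the test rather than from a batch finish();
    // scheduled this way, execute() should run synchronously at Test.stopTest()
    String jobId = System.schedule('FooSchedulableTest', '0 0 0 1 1 ?',
                                   new FooBatchableQueueable.FooSchedulable());
    Test.stopTest();
    // The CronTrigger row proves the job was registered with our schedule
    CronTrigger ct = [select CronExpression from CronTrigger where Id = :jobId];
    System.assertEquals('0 0 0 1 1 ?', ct.CronExpression);
    // ...then assert on the observable side effects of FooSchedulable.execute()
}
```

Because the schedulable is exercised on its own, this test stays green regardless of how the batchable/queueable chain behaves.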

Developer Console Inoperable Menus

A few times this calendar year, the menus in a newly-opened Developer Console have been unresponsive to mouse clicks. Nothing happens.

The IDE workspace has become corrupted. Here is how you can quickly resolve it:

  1. Go to Salesforce Workbench
  2. Go to the REST Explorer and enter this GET query: /services/data/v36.0/tooling/query/?q=select+id,lastModifiedBy.name,+lastmodifiedDate+from+IDEWorkspace+where+lastmodifiedBy.name='your name'+order+by+lastmodifiedDate+desc+limit+1
  3. Copy to the clipboard the value of the ID field in the Raw Response result
  4. Do a REST Delete that looks like: /services/data/v36.0/tooling/sobjects/IDEWorkspace/theIdeWorkspaceId

Close and reopen the Developer Console and you should be good to go.

Testing Viewstate in Apex testmethods

If you have a controller plus VF page that could be subject to viewstate issues and you want to avoid introducing viewstate issues once you have already tested some released version of your code, how should you go about doing this?

There is (as of V36.0) no PageReference method getViewStateSize(), which would be the obvious choice.

So, is there a way to approximate the viewstate size and verify it doesn’t grow as data volume increases?

First Principles: What is Viewstate comprised of?

Viewstate is used to preserve state across the stateless HTTP protocol as the page interacts with the controller via GETs and POSTs (at least until a client-side redirect occurs). From the Salesforce developer documentation, viewstate is made up of:

  • All non-transient data members in the associated controller (either standard or custom) and the controller extensions.
  • Objects that are reachable from a non-transient data member in a controller or controller extension.
  • The component tree for that page, which represents the page’s component structure and the associated state, which are the values applied to those components.
  • A small amount of data for Visualforce to do housekeeping.

So, from a testmethod point of view, we have access to the first two: non-transient data members and their reachable objects. The component tree will vary in size depending on how many components are rendered – and this typically varies with the number of list elements in apex:pageBlockTables, apex:dataTables, and apex:repeats. The Visualforce housekeeping space consumption is not available to us.

Steps to take
Assuming we have followed the excellent guidelines in Visualforce in Practice, Chapter 13 (“Visualforce Performance and Best Practices: Rapidfire Rendering of Visualforce Pages”), what remains is to use Apex testmethods as a regression suite that warns you early if you have introduced a viewstate regression, at least from the controller side.

Note, there are ways to avoid Viewstate altogether via client-side techniques such as Javascript Remoting but for purposes of this Blog post, let’s assume you have a traditional VF controller + VF markup.

Here’s a coding approach to use in your testmethod:

@isTest private static void testViewState() {
  // Mock n SObjects
  MyController ctlr = new MyController();  // same principle applies for extensions
  // exercise setters, getters, and ajax action methods that process all n records
  Integer ctlrSerializedSize = Json.serialize(ctlr).length();  // size of controller, serialized

  // mock another n SObjects so we have 2n total
  ctlr = new MyController();
  // exercise setters, getters, and ajax action methods that process all 2n records
  System.assertEquals(ctlrSerializedSize, Json.serialize(ctlr).length(),
                      'non transient size of controller should stay flat');
}

Caveats

  1. For Json.serialize to work, you can’t have any non-transient, unserializable variables like System.SelectOption. Use methods rather than getter properties. A list of restrictions can be found in the Apex Developer Doc – JSON Support. Some things may be hard to work around.
  2. The equality assert given in the code sample most likely works only if the same input conditions are used for the test on n records as is used on 2n records.
  3. I’ll say it again, this won’t test the number of VF apex:components generated by 2n versus n — that could blow up your view state. Hopefully you addressed this the first time with your design by following the aforementioned VF in Practice guidelines, Chapter 13.

Email2Case Tips

Herein is a set of tips for some non-obvious things encountered when setting up Email2Case.

Notifying the default case owner when assignment rules look at some field value in the Case like ‘Product’

Since Email2Case is basically just free text, your assignment rules won’t match on any Product field unless you have rather clever parsers and triggers. Assignment rules are more applicable to Web-to-Case, where you can put a form in front of the case submitter. So, if you want to notify the default case owner, you have two choices:

Choice A – Use a catchall assignment rule at the bottom of the list of rules. Be sure to include an email template.
[Screenshot: Case assignment rules]

If you choose this option, you can get a customized-by-you email template sent to the case owner (or members of the case owner queue)

Choice B – On Case Settings, use a default Case Owner and check the box ‘Notify Default Case Owner’

[Screenshot: Case Settings default owner]
If you choose this option, then there is no email template you can apply. The default case owner (a queue in this example) will receive this simple email format:

[Screenshot: Case Settings notify default owner email]

Sending workflow/process flow emails to customers who are not Contacts in the Case

When Email2Case creates a Case from an email sent by foo@bar.com, unless there is an existing Contact for foo@bar.com, SFDC will not set a Contact on the Case (nor, for that matter, an Account). That is, Case.ContactId will be null, as will Case.AccountId. If a Case Comment is made by the support agent and, via a Workflow/Process Flow, you want that comment to go to the only email you have on the Case, namely Case.SuppliedEmail, the following is observed to be true:

  • If you define an Email Alert on Case and have as recipients Related Contact, Contact Email, and Supplied Email (in my example, the first two fields will be null), then the email alert will be sent to SuppliedEmail.
  • If at a later point in time, the Contact is created and associated to the Case, and then you send another email due to a new Case Comment via the workflow’s Email Alert, will the recipient get three copies of the same email? Fortunately, the answer is NO. SFDC de-dups the recipients.
How to personalize the email ‘from’ field while ensuring that replies are sent back to the Email2Case routing address

Email gets sent from Cases to customers in many places:

  1. New Email button on the Email Messages related list
  2. Workflow/Process Flow Email Alerts
  3. Case Auto-Response Rules
  4. Enable Case Comment Notifications to Contacts

Let’s look at the first two in turn:

New Email button on the Email Messages related list

From SFDC’s point of view, this creates a Task of type Email with the standard email creation form. The email will be sent to the Case’s Contact.Email. So, if you haven’t converted some ‘new’ email address to a Contact, you won’t be able to use this option. Assuming you have, the from and replyTo values then have to be established. By default, the drop-down will be the running user’s SFDC email address, such as foo-the-agent@bar.com, as will the replyTo address. This is not what you want: while the outbound email will have the Case threadId, the customer’s reply will go to the personal inbox of foo-the-agent and not be automatically associated with the Case.

To resolve this, have each agent change My Settings | Email | My Email Address so that their Email Name is their name but the reply-to address is the Email2Case address. Here’s an example:
[Screenshot: Email Settings]

If you don’t do this, then the support agent needs to remember to choose the org-wide email address used for Email2Case routing.

Workflow/Process Flow Email Alerts

If the workflow email alert is due to a user action (like adding a Case Comment or updating the Case Status), then I recommend setting the Email Alert sender to Current User’s Email Address. This is easier to migrate, as the alternative, an org-wide email address, will be specific to your sandbox versus prod and cause issues when you deploy. Plus, if your agents can send email directly from the Email Messages related list and you chose the personalized option above, you’ll want workflows based on agent actions to behave the same way.

Org Wide Email Address Verification Not Received

On a new project with a new client, I asked the mail server admins to set up an org-wide email address with me and a few others as members:

sandbox-no-reply@foo.com

Once you set up an orgwide email address, SFDC sends a verification request to ensure you are authorized to use this email.

But – no verification message was received; not to me, not to any of the email group’s members.

  • Email deliverability in sandbox was set to ‘System email only’. A verification message should qualify as a system message
  • Email deliverability tests worked fine across all IP ranges
  • I could happily receive other SFDC messages such as security token requests

The solution …

  • Client’s email system was Google Apps for Work.
  • By default, the email group did not allow email from outside the foo.com domain.
  • Changing the group setting to ‘Public’ was step 1 of the solution, and maybe the only step you will need.
  • As I was doing sysadmin work via my company, my email address was in cropredy.com, not foo.com. A further change was needed in the group definition to make my email address an ‘owner’. This allows mail to be distributed outside of the domain, in this case to @cropredy.com.