
User Debug Logs Missing in Eclipse IDE – Why?

I recently upgraded my Eclipse IDE from V33 to V36. The Eclipse Workspace pointed to two different orgs:

  • Developer Edition
  • Sandbox for a client’s PROD org

When I ran a Test Configuration for the Developer Edition, the user debug logs appeared in the Apex Test Results pane. But, for the sandbox org, while the tests ran, the user and system debug log panes were empty. Nothing. Zippo. Bupkus. The logs appeared in the Developer Console but not within the Eclipse IDE.

Solution
Eclipse had prompted me to upgrade the project to V36, but I had ignored that message. Silly me. Once I upgraded the project to the same version as the Eclipse IDE and reran the tests, the debug logs appeared in the Eclipse Apex Test Results pane.

Eclipse system debug log

Eclipse user debug log

Migrating Attachments using FileExporter

I recently had the joyous job of migrating some of the attachments from one org to another. The use case was a business unit spin-off into a new SFDC org. As such, unlike many migrations, only a subset of the data from the source org had to be moved to the target org.

As the SFDC doc (V36) states, Data Loader cannot be used to export Attachment bodies. The SFDC suggestion is to use the Weekly Export feature. This suggestion was impractical: the org had tens of thousands of Attachments, if not more, and I had previously determined that only 8000 or so needed to be migrated. The Weekly Export would have produced an enormous CSV file that would be difficult to manipulate correctly in Excel.

So – my solution was to use the handy (free) tool, FileExporter, available on AppExchange.

Although AppExchange states a date of 2010, the tool has received updates from the developer – Ezra Konigsberg. Read the installation instructions carefully. Note that it is a Windows-only solution: you have to use an older version of Data Loader (V17) and update its security certificate. Ezra goes into great detail on how to do this in the FileExporter ReadMe.

Another useful summary of the attachment-migration process is this post from West Monroe Associates – a most concise description.

So, how did it go?

By default, FileExporter will download all of the org’s Attachments. Since I only wanted some Attachments, I edited the line in the beans.xml file, as Ezra suggests, to look like the following

<!ENTITY whereClause "CreatedDate>2014-09-09T00:00:00.000Z and parentId IN (select id from Account where type = 'Foo')">

to get the Account attachments I needed. Then, for the Case attachments, I did a second pass through FileExporter with this beans.xml file:

<!ENTITY whereClause "CreatedDate>2014-09-09T00:00:00.000Z and parentId IN (select id from Case where account.type = 'Foo')">

and so on for the Attachments on the other relevant SObjects (Opportunity, EmailMessage, etc)

FileExporter runs quite quickly. That said, when I did the Case Attachments (over 7000), it was still running when I went to bed; by the time I woke up, it had downloaded all the files (home DSL connection).

Two hiccups

  1. If the source file has an invalid Windows file-name character in the Attachment name (in my use case, a colon), FileExporter downloads the file with the name theSourceSfdcOrgAttachmentId and no file extension. The file can’t be opened in Windows without telling Windows which program to use. Fortunately, this was not an issue for me: since I was uploading the file into the target org, I only needed to adjust the value of the Body column to refer to the file name as exported, rather than one matching the normal pattern theSourceSfdcOrgAttachmentId#theSourceSFDCFilename.ext. I contacted Ezra and he said he would handle colons in a future release (he already handles other invalid Windows file-name characters, like ‘&’).
  2. During upload via Data Loader, I received a Java heap error. Fortunately, SFDC Help had the answer, with two workarounds: I simply set the Data Loader batch size to ‘1’, resubmitted the job, and went off to do other things (see the aside below).
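An aside: I changed the batch size via the Data Loader Settings dialog. If you drive Data Loader from the command line instead, the equivalent knob – to the best of my knowledge – is the sfdc.loadBatchSize property in the process configuration, e.g.:

# config.properties (command-line Data Loader)
sfdc.loadBatchSize=1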

Conga Composer Error Messages

Running list of Conga error messages and their resolution

Insufficient privileges to access the Conga Composer Query Repository. Please contact your administrator.

Either:

  • The ID field in &QVarxId (where x = 0, 1, or 2) is invalid – most likely caused by a copy-paste error

MALFORMED_QUERY: … no viable alternative …

Either:

  • Your Conga Composer URL is too long: too many reports and/or queries, or too many long strings in pvX params. You can verify this by rearranging your reports and queries in the Composer URL – when you look at the View Data workbook, reports/queries that failed with the MALFORMED error now work, and others that used to work now get the MALFORMED error

    The solution is to leverage QVarx and ReportVar, or to replace some queries and reports with Excel pivot tables rather than letting Salesforce do the aggregation and filtering

Problem: Invalid query record id. Required: 15- or 18-character Salesforce Record Id.

Either:

  • The obvious – you have the wrong Conga queryId
  • The not-so-obvious – you are improperly constructing the value of QVarx when used as a query pvX param.

    One cause of this is explained here – not using a pipe-delimited QVarxFormat value

    Another and more subtle cause is misspelling the QVarxFormat param as QVarxFmt. Conga will use the default QVar0Format and not pipe-delimit the results.

*** No data found in this data set ***

Either:

  • Running a report of type Users with Territory Management enabled

    If the reportId param does not include pv0=, the first filter in the report will be smashed by Composer to be the master object Id (in my case, the QuickMerge_Link SObject with the Launch button). Setting pv0= with no value tells Composer to honor the existing first filter in the underlying report

    Example (works): [AmerTerr]00O800000060WXj?pv0=~pv2=foo

    Example (fails): [AmerTerr]00O800000060WXj?pv2=foo

REST SOQL OFFSET and LIMIT – 2000 records

I recently ran into an issue with an Apex batchable class that executed a REST SOQL query from SFDC org A against SFDC org B.

The batch Apex code in org A was written to handle REST responses that looked like this:

{"totalSize":12,
"done":true | false,
"nextRecordsUrl" : "/services/data/v30.0/query/01g8000001J2eXvAAJ-2000",

"records":[
{"attributes":{
"type" : "the sobject",
"url" : "/services/data/v30.0/sobjects/the sobject/the id"
},
"field0 in query" : "value of field 0",
"field1 in query" : "value of field1",
...},
next record ...
]
}

If "done" : false was present, then, once the iterator reached the end of the retrieved SObjects, it would do an HTTP Request against the value in nextRecordsUrl .

Hence, the batch program could retrieve a large number of records.
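For illustration, here is a minimal sketch of that queryMore-style loop – the class and parameter names are mine, not the original batch code, and a real batchable making callouts would also need to implement Database.AllowsCallouts:

public with sharing class RestSoqlPager {

    // Retrieves every record for a SOQL query run via the REST API of
    // another org, following nextRecordsUrl until "done" is true.
    public static List<Object> queryAll(String instanceUrl, String accessToken, String soql) {
        List<Object> allRecords = new List<Object>();
        String path = '/services/data/v30.0/query/?q=' +
                      EncodingUtil.urlEncode(soql, 'UTF-8');
        Boolean done = false;
        while (!done) {
            HttpRequest req = new HttpRequest();
            req.setEndpoint(instanceUrl + path);
            req.setMethod('GET');
            req.setHeader('Authorization', 'Bearer ' + accessToken);
            HttpResponse res = new Http().send(req);

            Map<String, Object> body =
                (Map<String, Object>) JSON.deserializeUntyped(res.getBody());
            allRecords.addAll((List<Object>) body.get('records'));

            done = (Boolean) body.get('done');
            if (!done) {
                // Segment boundary: chase the next segment of records
                path = (String) body.get('nextRecordsUrl');
            }
        }
        return allRecords;
    }
}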

And your problem was what?

Turns out my query string used in the original REST GET was of this form:

select id, name from Opportunity where closeDate = THIS_QUARTER LIMIT 10000 OFFSET 0

Even though there were more than 2000 Opportunities in org B, and even though the batchable class’s iterator could handle the queryMore logic, the total number of records retrieved was exactly 2000.

Analysis

Turns out, if you code OFFSET 0, the maximum value returned in the totalSize JSON property of the REST response body is 2000 – regardless of there being more than 2000 rows in the queried org.

Solution
As there was already logic to deal with queryMore(), I simply removed the OFFSET clause. I verified there was no logic to increment OFFSET and repeat the query.
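With the OFFSET clause removed, the query became simply:

select id, name from Opportunity where closeDate = THIS_QUARTER LIMIT 10000

and the queryMore logic handled the rest.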

Why was there queryMore() logic in the code in the first place – why not implement LIMIT and OFFSET per SFDC spec?

Turns out, if the retrieved payload from a given query exceeds the maximum transfer size per REST response, SFDC breaks the query results into segments, indicated by the presence of "done": false. Each segment holds a maximum of 2000 records but can hold fewer (and in my case did, due to payload size). The queryMore logic had to be added anyway. Once it was present, the OFFSET portion of the SOQL became superfluous and should have been removed everywhere. Unfortunately, a vestigial OFFSET 0 remained, causing the REST query to retrieve only 2000 rows before saying – I’m done.

Visualforce onchange event not working?

Simple post here

I had a VF page where the onchange event within a pageBlockTable column cell wasn’t executing:

<apex:column headerValue="some hdr">
  <apex:actionRegion>
    <apex:inputField value="{!r.foo}">
      <apex:actionSupport event="onChange" rerender="thePageBlockTableId"/>
    </apex:inputField>
  </apex:actionRegion>
  <apex:inputField value="{!r.bar}" rendered="{!r.foo <> 'xyz'}"/>
</apex:column>

When I clicked on the cell for the foo column and changed the inputField, the pageBlockTable did not rerender.

Huh?

The problem is that, unlike most of Apex and Visualforce, event attribute values are case-sensitive.

Solution?

Replace event="onChange" with event="onchange".
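For completeness, the working column markup – identical except for the lowercase event value:

<apex:column headerValue="some hdr">
  <apex:actionRegion>
    <apex:inputField value="{!r.foo}">
      <apex:actionSupport event="onchange" rerender="thePageBlockTableId"/>
    </apex:inputField>
  </apex:actionRegion>
  <apex:inputField value="{!r.bar}" rendered="{!r.foo <> 'xyz'}"/>
</apex:column>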

Some additional information on case sensitivity in Apex and Visualforce can be found here

inputFile body not Transmitted to VF Controller

Here was the use case:

  1. Display step 1 of a VF page with a commandLink to jump to step 2
  2. Step 2 was initially unrendered
  3. Step 2 included an apex:inputFile component

When Step 2 was rendered, the inputFile component appeared.

But if you selected a file and clicked a button to upload it to the controller, the controller received the filename and content type but not the file body.

Huh?

The issue appears to be that an initially unrendered apex:inputFile component doesn’t bind as expected to a controller variable declared as transient (as it typically must be, to avoid Visualforce view state size issues). The upload action method executes, and all other non-transient attributes of inputFile transmit, but the file body does not.

Here’s proof

The controller

public with sharing class Foo {
    // Class to investigate rerender issue with inputFile
    public Boolean hasStep1Completed {                                // visibility control
        get { return this.hasStep1Completed == null ? false : this.hasStep1Completed; }
        set;
    }

    public transient Blob fileBody { get; set; }  // must be transient as view state can't handle > 128KB
    public String  fileDescription { get; set; }
    public String  fileName        { get; set; }
    public Integer fileSize        { get; set; }
    public String  fileType       	{ get; set; }

    public Boolean isFileBodyNullOnXmit { get; set; }  // VF feedback for example

    public Foo() {}

    public PageReference upload() {
        this.isFileBodyNullOnXmit = (this.fileBody == null);
        return null;
    }
}

This VF page does not work

<apex:page controller="Foo" tabStyle="Account">
 <apex:form id="form">
  <apex:outputPanel id="step1Op">
   <apex:pageBlock title="Step 1" id="step1Pb" rendered="{!NOT(hasStep1Completed)}">
    <apex:actionRegion>
     <apex:commandLink value="Go to step2" reRender="step1Op,step2Op">
      <apex:param name="hasStep1Completed" value="true" assignTo="{!hasStep1Completed}"/>
     </apex:commandLink>
    </apex:actionRegion>
   </apex:pageBlock>
  </apex:outputPanel>

  <!-- This doesn't work: an initially unrendered inputFile won't transmit the file body once rendered -->
  <apex:outputPanel id="step2Op">
   <apex:pageBlock title="Step2" id="step2Pb" rendered="{!hasStep1Completed}">
    <apex:pageBlockButtons location="top">
     <apex:commandButton value="Upload" action="{!upload}"/>
    </apex:pageBlockButtons>
    <apex:inputFile value="{!fileBody}" filename="{!fileName}" contentType="{!fileType}" id="file"/>
   </apex:pageBlock>
  </apex:outputPanel>

  <apex:outputPanel id="resultOp">
   <apex:pageBlock rendered="{!hasStep1Completed}">
    <apex:outputText value="is FileBody Null On Xmit? {!isFileBodyNullOnXmit}"/>
   </apex:pageBlock>
  </apex:outputPanel>

 </apex:form>
</apex:page>

The apex:inputFile is not initially rendered, as Step 1 hasn’t completed:
[Screenshot: FileXmitStep1]

Now, by clicking the commandLink, I render it, then choose a file and click Upload:
[Screenshot: FileXmitStep2]

The results outputPanel shows that no fileBody was transmitted!
[Screenshot: FileXmitResultnullBodyisTrue]

So, how to work around this?

The answer was provided on Salesforce StackExchange. Instead of using VF rendered= attributes, the inputFile must be rendered but made invisible via CSS. Here is the revised Visualforce page (no changes to the controller):

<apex:page controller="Foo" tabStyle="Account">
 <apex:form id="form">
  <apex:outputPanel id="step1Op">
   <apex:pageBlock title="Step 1" id="step1Pb" rendered="{!NOT(hasStep1Completed)}">
    <apex:actionRegion>
     <apex:commandLink value="Go to step2" reRender="step1Op,step2Op">
      <apex:param name="hasStep1Completed" value="true" assignTo="{!hasStep1Completed}"/>
     </apex:commandLink>
    </apex:actionRegion>
   </apex:pageBlock>
  </apex:outputPanel>

  <!-- This does work: an inputFile initially rendered but hidden by CSS will transmit the file body once made visible -->
  <apex:outputPanel id="step2Op" style="display: {!IF(hasStep1Completed, 'inline-block', 'none')};">
   <apex:pageBlock title="Step2" id="step2Pb">
    <apex:pageBlockButtons location="top">
     <apex:commandButton value="Upload" action="{!upload}"/>
    </apex:pageBlockButtons>
    <apex:inputFile value="{!fileBody}" filename="{!fileName}" contentType="{!fileType}" id="file"/>
   </apex:pageBlock>
  </apex:outputPanel>

  <apex:outputPanel id="resultOp">
   <apex:pageBlock rendered="{!hasStep1Completed}">
    <apex:outputText value="is FileBody Null On Xmit? {!isFileBodyNullOnXmit}"/>
   </apex:pageBlock>
  </apex:outputPanel>

 </apex:form>
</apex:page>

Only the second outputPanel was changed. Note the use of style="display: {!IF(hasStep1Completed, 'inline-block', 'none')};" to trick Visualforce into treating the inputFile as rendered while using display: none to keep it invisible.

The sequence of steps now is:
[Screenshot: FileXmitStep1]
[Screenshot: FileXmitStep2]

The results outputPanel shows that fileBody was transmitted! Problem solved.
[Screenshot: FileXmitResultnullBodyisFalse]

Conga Composer – INVALID_FIELD: No such relation ‘Product2’ on entity ‘OpportunityLineItem’

I had built a Conga Query that exploited the Product2 relationship from OpportunityLineItem. Yet when I used it in a Conga Composer URL, I got this error:

INVALID_FIELD: No such relation 'Product2' on entity 'OpportunityLineItem'

I know that the Product2 relationship on OpportunityLineItem was added by SFDC in V30,

yet my Conga Composer URL was V33:

&serverUrl=https://mydomain.my.salesforce.com/services/Soap/u/33.0/someId

So, what gives?

Per Conga Support, the serverUrl is used by Conga solely to determine the instance name; they smash the rest of the URL to V29 (!%!).

Of course, this could change in the future so, hopefully, this post will become obsolete.

Workaround
Instead of using the OpportunityLineItem.Product2 relationship, you can fall back to the OpportunityLineItem.PricebookEntry.Product2 relationship, as sketched below. I don’t know what I would do were I to need a Conga Query on Order – an SObject also introduced in V30.
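A minimal sketch of the fallback relationship path (the field list and the {pv0} Opportunity-Id binding are illustrative, not my actual Conga Query):

select quantity, pricebookEntry.product2.name, pricebookEntry.product2.productCode
from OpportunityLineItem
where opportunityId = '{pv0}'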

Conga Composer QVar used within QueryId pv

I was working on making my Conga Composer button more compact and easier to maintain by leveraging the &QVar0Id and &QVar0Fmt values to return a list of strings to be substituted into a Conga Query ...IN (...)... expression, but I couldn’t get it to work. With assistance from Conga Tech Support, here are the details of the problem and the solution:

Conga Query 1
select id, opportunity.owner.name, account.name
from Opportunity where closedate = {pv2} and owner.userrole.name IN({pv1})

Note the use of {pv1} within the IN expression. Since owner.userrole.name is a string, you would expect the value passed into {pv1} to look like 'Sales','Marketing' – that is, a comma-delimited list of strings.

So, I set up Conga Query 2 that would always return the Sales and Marketing roles as follows:
select name from userRole where name in ({pv1})

And finally, my Composer button looked like this:

https://www.appextremes.com/apps/Conga/Composer.aspx
?SessionId={!API.Session_ID}
&ServerUrl={!API.Partner_Server_URL_210}
&Id={!$User.ID__c}
&QVar0Id=secondQueryId?pv1='Sales'|'Marketing'
&QVar0Fmt=11000
&QueryId=
firstQueryId?pv1={QVar0}~pv2=THIS_QUARTER,
firstQueryId?pv1={QVar0}~pv2=NEXT_QUARTER,
firstQueryId?pv1={QVar0}~pv2=LAST_QUARTER,
...

Note that the result of executing the second Conga Query, that is, QVar0Id, is returned formatted per QVar0Fmt – in this case, 11000 means string-delimited, comma-separated, or 'Sales','Marketing'

So, I expected that when Conga resolved the first query, it would take 'Sales','Marketing' and inject it into the value of the first query’s {pv1}, generating this:

select id, opportunity.owner.name, account.name
from Opportunity where closedate = THIS_QUARTER and owner.userrole.name IN('Sales','Marketing')

Valid SOQL.

But this doesn’t work.

You get this error in Conga (on the View Data workbook)
MALFORMED_QUERY:
Marketing') and closeDate = {pv2}
^
ERROR at Row:1:Column:109
Bind variables only allowed in Apex code

The Solution
You can’t use QVar0Fmt to return the result comma-separated; it needs to be pipe-separated. In retrospect, this is not surprising, as comma-separated values passed to the pv1, pv2, … parameters of a report filter also need to be pipe-separated (as described in the Conga doc). I naively thought that anything injected into a Conga (hence SOQL) query should be valid SOQL syntax, but it is clear that Conga post-processes the pipes into commas before executing the SOQL.

Hence, QVar0Fmt needs to be 21000.

By changing the Composer button to:

https://www.appextremes.com/apps/Conga/Composer.aspx
?SessionId={!API.Session_ID}
&ServerUrl={!API.Partner_Server_URL_210}
&Id={!$User.ID__c}
&QVar0Id=secondQueryId?pv1='Sales'|'Marketing'
&QVar0Fmt=21000
&QueryId=
firstQueryId?pv1={QVar0}~pv2=THIS_QUARTER,
firstQueryId?pv1={QVar0}~pv2=NEXT_QUARTER,
firstQueryId?pv1={QVar0}~pv2=LAST_QUARTER,
...

The results of the second query come back as 'Sales'|'Marketing' and are injected into the first query, yielding this:
select id, opportunity.owner.name, account.name
from Opportunity where closedate = THIS_QUARTER and owner.userrole.name IN('Sales'|'Marketing')

but Conga cleans this up before executing the SOQL, and the Conga template is properly populated with data from the Opportunity.

Visualforce URLFOR Examples

Running blog post of VF examples for URLFOR that aren’t so obvious – more of an aide-memoire to myself.

Clone button

<!-- Note use of URL param clone='1' -->
<apex:commandButton value="Clone" 
   action="{!URLFOR($Action.Foo__c.edit, Foo__c.id, [clone='1'])}" id="clonebtn"/>

Custom Controller – Cancel button – avoid coding a custom Cancel action method

<!-- Goes to value of current page's RetURL parameter if any; if none goes to home page -->
<apex:commandButton value="Cancel" 
                    action="{!URLFOR('/' & $CurrentPage.parameters.retURL)}"/>

Fixed domain name plus merge fields

<!-- Goes to some rest resource on known domain; note use of SFDC formula expression to concatenate-->
<apex:commandButton value="Foo" 
                    action="{!URLFOR('https://www.foo.com/' & someControllerProperty)}"/>

Partial success and trigger firing

Yesterday, I was baffled by the following observed behavior:

  1. Insert six Contacts using Database.insert(cList, false); five had some validation error and one succeeded
  2. In the before insert trigger, the successful Contact derives the value of AccountId if it is null in the inserted record
  3. The System.assert verifying the defaulting of AccountId failed, even though the debug log clearly showed that it was set

Further investigation of the debug log showed the before insert trigger for the successful Contact executing twice; in the second iteration, the defaulting of the AccountId did not occur.

Before insert trigger executed twice? Huh?

After some reductionist testing, I verified that this only happened when using optAllOrNothing = false in Database.insert(cList, false) – that is, allowing partial success. According to the documentation (in a section I had never read or paid attention to):

When errors occur because of a bulk DML call that originates from the SOAP API with default settings, or if the allOrNone parameter of a Database DML method was specified as false, the runtime engine attempts at least a partial save:

(1) During the first attempt, the runtime engine processes all records. Any record that generates an error due to issues such as validation rules or unique index violations is set aside.

(2) If there were errors during the first attempt, the runtime engine makes a second attempt that includes only those records that did not generate errors. All records that didn’t generate an error during the first attempt are processed, and if any record generates an error (perhaps because of race conditions) it is also set aside.

(3) If there were additional errors during the second attempt, the runtime engine makes a third and final attempt which includes only those records that didn’t generate errors during the first and second attempts. If any record generates an error, the entire operation fails with the error message, “Too many batch retries in the presence of Apex triggers and partial failures.”

It is the second point that is interesting – “the runtime engine makes a second attempt that includes only those records that did not generate errors” – ahhh, that is why the debug log showed two executions of before insert on the same record.

Now, why did the second attempt not default the Contact’s accountId using my Apex logic?

Answer: at the end of my before insert trigger handler, I set a static variable to prevent the handler from re-executing (sort of a reflex action to avoid unnecessary SOQLs). Hence, when the second attempt at before insert was made, the static variable prevented the defaulting, and the record saved successfully – but with a null AccountId. Hence the assertion failed.

A different portion of the SFDC Apex doc states:

When a DML call is made with partial success allowed, more than one attempt can be made to save the successful records if the initial attempt results in errors for some records. For example, an error can occur for a record when a user-validation rule fails. Triggers are fired during the first attempt and are fired again during subsequent attempts. Because these trigger invocations are part of the same transaction, static class variables that are accessed by the trigger aren’t reset. DML calls allow partial success when you set the allOrNone parameter of a Database DML method to false or when you call the SOAP API with default settings. For more details, see Bulk DML Exception Handling.

Note the sentence: “Because these trigger invocations are part of the same transaction, static class variables that are accessed by the trigger aren’t reset”.

So, while governor limits are not affected by the retry, static variables persist, and hence the trigger handler’s derivation logic remained switched off once the initial before insert attempt had completed.

Removing the setting of the static variable solved the issue.
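To illustrate, here is a minimal sketch of the guard pattern that bit me – the class name, method names, and the default-Account lookup are hypothetical, not my actual handler:

public with sharing class ContactTriggerHandler {

    // Static guard: persists across the runtime engine's partial-success
    // retries because they run in the same transaction.
    private static Boolean hasRun = false;

    public static void onBeforeInsert(List<Contact> newContacts) {
        if (hasRun) {
            return; // the retry skips the defaulting – this was the bug
        }
        for (Contact c : newContacts) {
            if (c.AccountId == null) {
                c.AccountId = getDefaultAccountId();
            }
        }
        hasRun = true; // removing this guard solved the issue
    }

    // Hypothetical default-Account lookup, cached to avoid repeated SOQL
    private static Id cachedDefaultAccountId;
    private static Id getDefaultAccountId() {
        if (cachedDefaultAccountId == null) {
            cachedDefaultAccountId =
                [SELECT Id FROM Account WHERE Name = 'Default Account' LIMIT 1].Id;
        }
        return cachedDefaultAccountId;
    }
}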