
Database allOrNone and exceptions

In my batch classes, I’m a big user of the allOrNone argument to Database.insert or Database.update. This is mostly because some orgs have dirty data and dealing with those as non-fatal exceptions is better for the business than rolling back all DML from a batch execute() that might be dealing with 199 good records and only one bad one.

So, the normal pattern would be

Database.SaveResult[] srList = Database.update(listOfSobjs, false);
for (Integer i = 0; i < listOfSobjs.size(); i++) {
  if (!srList[i].isSuccess()) {
      // log the error somewhere for later admin action - typically to a persistent sobj
  }
}

But what if you had this coding fragment where the allOrNone argument was a variable, sometimes true, sometimes false?

Database.SaveResult[] srList = Database.update(listOfSobjs, allOrNoneVbl);
for (Integer i = 0; i < listOfSobjs.size(); i++) {
  if (!srList[i].isSuccess()) {
      // log the error somewhere for later admin action - typically to a persistent sobj
  }
}

Well, the doc isn’t completely clear on this: if allOrNoneVbl is true and any record fails, no Database.SaveResult list is returned; instead, an exception is thrown. Here’s proof:

try {
   Database.SaveResult[] srList = Database.insert(new List<Account>{
                                                    new Account(),   // missing required Name
                                                    new Account()},  // missing required Name
                                                  true);             // allOrNone = true
   // if we get here, no exception was thrown
   system.assert(false,'allOrNothing=true does not throw exception');
}
catch (Exception e) {
    // deliberately fail so the debug log proves the exception path was taken
    system.assert(false,'allOrNothing=true does throw exception');
}

Debug log:
DML_END|[2]
EXCEPTION_THROWN|[2]|System.DmlException: Insert failed. First exception on row 0;
first error: REQUIRED_FIELD_MISSING,
Required fields are missing: [Name]: [Name]
EXCEPTION_THROWN|[6]|System.AssertException: Assertion Failed: allOrNothing=true does throw exception
FATAL_ERROR|System.AssertException: Assertion Failed: allOrNothing=true does throw exception

Conclusion: If your batch execute() is intended to log errors and you sometimes pass allOrNone as true and sometimes as false in the same execute() (because you are doing multiple DML operations), your logging code becomes more complex because the source of the error message is found in different places (i.e. Database.SaveResult method getErrors() versus the caught DmlException’s getDmlMessage(i)).
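One way to tame this is to funnel both error sources through a single logging routine. Here is a minimal sketch; logError() is a hypothetical helper you would replace with your own persistent-SObject logging:

```apex
// Sketch only: normalizes both error sources into one logging path.
// logError(recordId, message) is a hypothetical helper - substitute your own.
public static void updateWithLogging(List<SObject> sobjs, Boolean allOrNone) {
    try {
        Database.SaveResult[] srList = Database.update(sobjs, allOrNone);
        // allOrNone=false path: inspect each SaveResult
        for (Integer i = 0; i < sobjs.size(); i++) {
            if (!srList[i].isSuccess()) {
                logError(sobjs[i].Id, srList[i].getErrors()[0].getMessage());
            }
        }
    }
    catch (DmlException e) {
        // allOrNone=true path: error details live on the exception
        for (Integer i = 0; i < e.getNumDml(); i++) {
            logError(sobjs[e.getDmlIndex(i)].Id, e.getDmlMessage(i));
        }
    }
}
```

Either way, downstream admin tooling sees one uniform log format regardless of the allOrNone setting.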

Testing a Batchable + Queueable + Schedulable

It isn’t super clear in the documentation (V36) what happens in a test method when a Batchable, Queueable, and Schedulable are all involved within the Test.startTest()...Test.stopTest() execution scope.

The system executes all asynchronous processes started in a test method synchronously after the Test.stopTest statement

So, I decided to do a simple experiment:

The class (acts as a Batchable and a Queueable, with an inner Schedulable)

public with sharing class FooBatchableQueueable 
             implements Database.Batchable<SObject>, Queueable {

    public Database.QueryLocator start(Database.BatchableContext bc) {
        System.debug(LoggingLevel.INFO,'Entered Batchable start()...');
        return Database.getQueryLocator([select Id from Group 
                                              where DeveloperName = 'Foo']);
    }

    public void execute(Database.BatchableContext bc, List<SObject> scope) {
        System.debug(LoggingLevel.INFO,'Entered Batchable execute()...');
        System.enqueueJob(new FooBatchableQueueable());
        System.debug(LoggingLevel.INFO,'within Batchable execute(), after enqueuing the job...');
    }

    public void finish(Database.BatchableContext bc) {
        System.debug(LoggingLevel.INFO,'Entered Batchable finish()...');
        System.schedule('FooSchedulable','0 0 0 1 1 ?', new FooSchedulable());
        System.debug(LoggingLevel.INFO,'within Batchable finish(), after scheduling');
    }

    public void execute(QueueableContext qc) {
        System.debug(LoggingLevel.INFO,'reached Queueable execute()');
    }

    public class FooSchedulable implements Schedulable {
        public void execute(SchedulableContext sc) {
            System.debug(LoggingLevel.INFO,'reached Schedulable execute()');
        }
    }
}

And the testmethod

@isTest
private class FooBatchableQueueableTest {

    @isTest private static void testBatchableQueueable() {
        insert new Group(DeveloperName='Foo', Name='Foo', Type='Regular');
        Test.startTest();
        Database.executeBatch(new FooBatchableQueueable());
        Test.stopTest();
        // Async batchable should execute, then queueable,
        // then schedulable. Or do they? See debug log
    }
}

And, what does happen?

  1. The batchable start() and execute() execute fine.
  2. The execute() calls System.enqueueJob(..)
  3. The Queueable job starts, and its execute() method is invoked. See the debug Log
  4. The batchable finish() method executes. It does a System.schedule() on a new object.
  5. The schedulable’s execute does not start.

Debug log

Entered Batchable start()…
Entered Batchable execute()…
within Batchable execute(), after enqueuing the job…
reached Queueable execute()
Entered Batchable finish()…
within Batchable finish(), after scheduling

Conclusion

  • Both the batchable and the queueable, as async transactions, are executed “synchronously” once Test.stopTest() is reached in the testmethod.
  • You definitely cannot assume that the batchable finish() will execute before the queueable execute().
  • The constructor for the schedulable class will get invoked, but not its execute() method. Note that no debug log appears from within the schedulable’s execute().
  • You will need to test the schedulable explicitly by mocking the environment prior to its scheduling and then invoking it in a separate testmethod.
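That separate testmethod might look like this sketch (the CronTrigger assertion is just one plausible thing to verify; adapt to your own schedulable’s logic):

```apex
@isTest private static void testSchedulable() {
    Test.startTest();
    // schedule the inner class directly
    String jobId = System.schedule('FooSchedulableTest','0 0 0 1 1 ?',
                                   new FooBatchableQueueable.FooSchedulable());
    Test.stopTest();
    // verify the job was scheduled with the expected cron expression
    CronTrigger ct = [select CronExpression from CronTrigger where Id = :jobId];
    System.assertEquals('0 0 0 1 1 ?', ct.CronExpression);
}
```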

Developer Console Inoperable Menus

A few times this calendar year, the menus in a newly-opened Developer Console are not responsive to mouse clicks. Nothing happens.

The IDE workspace has become corrupted. Here is how you can quickly resolve it:

  1. Go to Salesforce Workbench
  2. Go to the REST Explorer and enter this GET query: /services/data/v36.0/tooling/query/?q=select+id,lastModifiedBy.name,+lastmodifiedDate+from+IDEWorkspace+where+lastmodifiedBy.name='your name'+order+by+lastmodifiedDate+desc+limit+1
  3. Copy to the clipboard the value of the ID field in the Raw Response result
  4. Do a REST Delete that looks like: /services/data/v36.0/tooling/sobjects/IDEWorkspace/theIdeWorkspaceId

Close and reopen the Developer Console and you should be good to go.

Testing Viewstate in Apex testmethods

If you have a controller plus VF page that could be subject to viewstate issues and you want to avoid introducing viewstate issues once you have already tested some released version of your code, how should you go about doing this?

There is (as of V36.0) no PageReference method getViewStateSize(), which would be the obvious choice.

So, is there a way to approximate the viewstate and verify it doesn’t grow as data volume increases?

First Principles: What is Viewstate comprised of?

Viewstate is used to preserve state across the stateless HTTP protocol as the page interacts with the controller via GETs and POSTs (at least until a client-side redirect occurs). From the Salesforce Developer Doc, Viewstate is made up of:

  • All non-transient data members in the associated controller (either standard or custom) and the controller extensions.
  • Objects that are reachable from a non-transient data member in a controller or controller extension.
  • The component tree for that page, which represents the page’s component structure and the associated state, which are the values applied to those components.
  • A small amount of data for Visualforce to do housekeeping.

So, from a testmethod point of view, we have access to the first two: non-transient data members and their reachable objects. The component tree will vary in size depending on how many components are rendered – and this typically varies with the number of list elements in apex:pageBlockTables, apex:dataTables, and apex:repeats. The Visualforce housekeeping space consumption is not available to us.

Steps to take
Assuming we have followed the excellent guidelines in Visualforce in Practice, Chapter 13, “Visualforce Performance and Best Practices: Rapidfire Rendering of Visualforce Pages”, what remains is to use Apex testmethods as a regression suite to warn you early if you have introduced a viewstate regression, at least from the controller side.

Note, there are ways to avoid Viewstate altogether via client-side techniques such as Javascript Remoting but for purposes of this Blog post, let’s assume you have a traditional VF controller + VF markup.

Here’s a coding approach to use in your testmethod:

@isTest private static void testViewState() {
  // mock n SObjects
  MyController ctlr = new MyController();  // same principle applies for extensions
  // exercise setters, getters, and ajax action methods that process all n records
  Integer ctlrSerializedSize = Json.serialize(ctlr).length();  // size of controller, serialized

  // mock another n SObjects so we have 2n total
  ctlr = new MyController();
  // exercise setters, getters, and ajax action methods that process all 2n records
  System.assertEquals(ctlrSerializedSize, Json.serialize(ctlr).length(),
                      'non-transient size of controller should stay flat');
}

Caveats

  1. For Json.serialize to work, you can’t have any non-transient, unserializable variables like SelectOption. Use methods rather than getter properties. A list of restrictions can be found in the Apex Developer Doc, JSON Support. Some things may be hard to work around.
  2. The equality assert given in the code sample most likely works only if the same input conditions are used for the test on n records as is used on 2n records.
  3. I’ll say it again, this won’t test the number of VF apex:components generated by 2n versus n — that could blow up your view state. Hopefully you addressed this the first time with your design by following the aforementioned VF in Practice guidelines, Chapter 13.

Email2Case Tips

Herein is a set of tips for some non-obvious things when setting up Email2Case.

Notifying the default case owner when assignment rules look at some field value in the Case like ‘Product’

Since Email2Case is basically just free text, unless you have rather clever parsers and triggers, your assignment rules won’t match on any Product field. Assignment rules are more applicable to web-to-case, where you can put a form in front of the case submitter. So, if you want to notify the default case owner, you have two choices:

Choice A – Use a catchall assignment rule at the bottom of the list of rules. Be sure to include an email template.
Case assignment rules

If you choose this option, you can get a customized-by-you email template sent to the case owner (or members of the case owner queue)

Choice B – On Case Settings, use a default Case Owner and check the box ‘Notify Default Case Owner’

Case Settings Default Owner
If you choose this option, then there is no email template you can apply. The default case owner (a queue in this example) will receive this simple email format:

Case Settings notify default owner

Sending workflow/process flow emails to customers who are not Contacts in the Case

When Email2Case sends a Case from email foo@bar.com, unless there is an existing Contact for foo@bar.com, SFDC will not set a Contact on the Case (or, for that matter, an Account). That is, Case.ContactId will be null as will Case.AccountID. If a Case Comment is made by the support agent, and, via a Workflow/Process Flow, you want that comment to go to the only email you have on the Case, namely, Case.SuppliedEmail, the following is observed to be true:

  • If you define an Email Alert on Case and have as recipients Related Contact, Contact Email, and Supplied Email (in my example, the first two fields will be null), then the email alert will be sent to SuppliedEmail.
  • If at a later point in time, the Contact is created and associated to the Case, and then you send another email due to a new Case Comment via the workflow’s Email Alert, will the recipient get three copies of the same email? Fortunately, the answer is NO. SFDC de-dups the recipients.
How to personalize the email ‘from’ field while ensuring that replies are sent back to the Email2Case routing address

Email gets sent from Cases to customers in many places:

  1. New Email button on the Email Messages related list
  2. Workflow/Process Flow Email alerts
  3. Case Auto-Response Rules
  4. Enable Case Comment Notifications to Contacts

Let’s look at these in turn:

New Email button on the Email Messages related list

From SFDC’s point of view, this creates a Task of type Email with the standard email creation form. The Email will be sent to the Case’s Contact.Email. So, if you haven’t converted some ‘new’ email address to a Contact, you won’t be able to use this option. Assuming you have, the from and replyTo values then have to be established. By default, the drop-down will be the running user’s SFDC email address, such as foo-the-agent@bar.com, as will the replyTo address. This is not what you want: while the outbound email will have the Case threadId, the customer’s reply will go to the personal inbox of foo-the-agent and not be automatically associated to the Case.

To resolve this, have each agent change My Settings | Email | My Email Address so that their Email Name is their name but the reply-to address is the Email2Case address. Here’s an example:
EmailSettings

If you don’t do this, then the Support Agent needs to remember to choose the org-wide email address used for Email2Case routing.

Workflow/Process Flow Email alerts

If the workflow email alert is due to a user action (like adding a Case Comment or updating the Case Status), then I recommend setting the Email Alert sender to Current User’s Email Address. This is easier to migrate, as the alternative, an org-wide email address, will be specific to your sandbox versus prod and cause issues when you deploy. Plus, if your agents can send email directly from the Email Messages related list and you chose the personalized option above, you’ll want workflows based on agent actions to operate the same way.

Org Wide Email Address Verification Not Received

On a new project with a new client, I asked the mail server admins to set up an orgwide email address with me and a few others as members:

sandbox-no-reply@foo.com

Once you set up an orgwide email address, SFDC sends a verification request to ensure you are authorized to use this email.

But – no verification message was received; not to me, not to any of the email group’s members.

  • Email deliverability in sandbox was ‘System’. A verification message should qualify as a system message
  • Email deliverability tests worked fine across all IP ranges
  • I could happily receive other SFDC messages such as security token requests

The solution …

  • Client’s email system was Google Apps for Work.
  • The email group did not allow by default email from outside the domain of foo.com
  • Changing the group setting to ‘Public’ was step 1 of the solution and maybe the only step you will need.
  • As I was doing sysad via my company, my email address was in cropredy.com, not foo.com. A further change was needed in the Group definition to make my email address an ‘owner’. This allows mail to be distributed outside of the domain, in this case to @cropredy.com

Opportunity Historical Trend Reporting – Missing Opportunities

Here was a real conundrum. A client asked me to resolve why she (a sysad) couldn’t see any Opportunities in her Opportunity Historical Trending report when all her other Opportunity reports showed all Opportunities without issue.

I did the usual first principles checks:

  1. Was Opportunity Historical Trending even enabled? YES
  2. Were the right fields being tracked for trending? YES
  3. Was there actually Field History Tracking enabled for Opportunities? YES
  4. Were there actually rows in the Opportunity__hd Historical Tracking SObject? YES
  5. Could I reproduce the problem as a System Admin? YES

The usual rule of thumb when something isn’t visible in an Opportunity report is that it’s either something about the running user’s role or the sharing model. In this org, the sharing model was public read/write. The running user was a System Administrator with View All Data, but wait … the system administrator’s role was a leaf at the end of one of the role tree’s two main branches. The sysad’s branch was a sibling of the sales org’s branch. The org had no top-level role.

Org Roles

But why would the system administrator be able to see all Opportunities when running a standard Opportunities report if their role was buried deep inside a branch disconnected from the sales org’s branch?

The answer lies here in the SFDC Help. I quote (note, the org was not using Territory Management):

If the Organization has the Opportunity Organization-wide Defaults set to “Public Read/Only:”
A Standard Report Type will show all the Opportunities the Running User can see, and that meet the criteria.

A Custom Report Type will only show Opportunities owned by a User with the same Role as or a Role below them in the Hierarchy. In this case if the missing Opportunities are owned by a User with a Role higher in the Hierarchy, the Running User will need to click on that Role in the Hierarchy selector (this can be found in the report detail page under its name).

OK, so this explains why standard reports can show all data that the sysad can see, regardless of where their role sits. But isn’t an Opportunity Historical Trend report a standard report? Doesn’t it come out-of-the-box from Salesforce?

The answer is No, Historical Trend Reports are not standard reports, they are custom report types.

So, the verbiage in the SFDC Help re: Custom Report Types applies – the running user’s role matters and since their role in the hierarchy was in a sibling branch to the sales org’s branch, there was not even a way to click the Hierarchy chain to see the data.

Bottom line

For your average SMB org, the out-of-the-box Opportunity reports may never be supplemented with Custom Report Types, so it is easy to be lulled into thinking that any report you create for users on Opportunities will at least let you, the sysad, see all data. But that odd duck, Historical Trending Reports, is treated as a Custom Report Type, and, when used for Opportunities (a common use case), your placement in the role hierarchy matters. In this case, the sysads were not at a top level in the hierarchy (and there was no top level either), so the problem ensued.

Builder pattern – in testmethod asserts

A common problem in business applications is to test multiple field values set by some service in an SObject. Normally, you might start with this approach:

// ...code that updates some Opportunity

// now verify against expected values
Opportunity oActual = [select Amount, CloseDate, StageName from Opportunity where ...];
System.assertEquals(1000.0, oActual.Amount);
System.assertEquals(Date.newInstance(2020,1,1), oActual.CloseDate);
System.assertEquals('Closed Won', oActual.StageName);

But this has several problems.

  1. It is tedious to type
  2. The testmethod stops on the first error yet other errors may be lurking. This is especially true if you have to verify many SObject fields. So, you end up running a test, finding an error, then fixing, then rerunning, then exposing a new error. Rinse and repeat and most likely, your attention has wandered into social media before too long.

Thanks to my colleagues Adrian Larson, sfdcfox and Keith C on Salesforce Stackexchange, I was introduced to the Builder Pattern. I decided to apply it to this issue.

Objective
I was looking to get all field verify errors for an SObject exposed in a single System.assert. So, the goal was to code this:

System.assertEquals(SObjectVerify.NO_ERRORS, OpportunityVerify.getInstance(oActual)
  .withAmount(60000)
  .withStage('Closed Won')
  .withCloseDate(Date.newInstance(2020,1,1))
  .results());

that, if it doesn’t verify, displays the assertion failure as:

System.AssertException: Assertion Failed: Expected: , Actual:
AccountId Expected: 00540000000wbFSAAY vs. Actual: 0014000000HQGCMAA5
CloseDate Expected: null vs. Actual: 2006-10-14 00:00:00

So, here goes the code: one base (super) class that does all the work and one domain (SObject-specific) class per SObject.

The base (super) class

public abstract class SObjectVerify {
	
	//	-------------------------------------
	//	Inner Class to track Variances between 
	//	expected and actual values for a given field
	//	-------------------------------------
	private class Variance {
		Object				actVal;
		Object				expVal;
		
		private Variance(Object expVal, Object actVal) {
			this.expVal		= expVal;
			this.actVal		= actVal;
		}
	}
	private map<Schema.SobjectField,Variance> fldToVarianceMap = new map<Schema.SobjectField,Variance>();
	
	public static final String NO_ERRORS	= '';	// Used by caller in system.assert as expectedResult
	
    private Sobject	actSobj;	// the actual Sobject
    
    // If all expected values match all actuals, 
    //	return NO_ERRORS, otherwise, return a line-broken string of variances
    protected String getResults() {
    	String res 		= NO_ERRORS;
    	for (Schema.SobjectField fld : fldToVarianceMap.keySet())
    		res += '\n   ' + fld + ' Expected: ' + fldToVarianceMap.get(fld).expVal + 
    			     ' vs. Actual: ' + fldToVarianceMap.get(fld).actVal;
    	return res;	
    }

	//	(super) constructor
	public SobjectVerify(SObject actSobj) {this.actSobj = actSobj;}
	
	
	//	Builder pattern, returns ourselves after comparing 
	//	the actualFldVal vs expectedFldVal, stashing variances in a map
	protected SObjectVerify verify(Object expVal, Schema.SObjectField fld) {
		Object actVal	= this.actSobj.get(fld);
		if (expVal == null) {
			if (actVal != expVal)
				this.fldToVarianceMap.put(fld,new Variance(expVal,actVal));
		}
		else
		if (expVal instanceOf Blob) {
			if ((Blob) actVal != (Blob) expVal) 
				this.fldToVarianceMap.put(fld,new Variance(expVal,actVal));
		}
		else
		if (expVal instanceOf Boolean) {
			if ((Boolean) actVal != (Boolean) expVal) 
				this.fldToVarianceMap.put(fld,new Variance(expVal,actVal));
		}
		else
		if (expVal instanceOf Date) {
			if ((Date) actVal != (Date) expVal) 
				this.fldToVarianceMap.put(fld,new Variance(expVal,actVal));
		}
		else
		if (expVal instanceOf DateTime) {
			if ((DateTime) actVal != (DateTime) expVal) 
				this.fldToVarianceMap.put(fld,new Variance(expVal,actVal));
		}
		else
		if (expVal instanceOf Decimal) {
			if ((Decimal) actVal != (Decimal) expVal) 
				this.fldToVarianceMap.put(fld,new Variance(expVal,actVal));
		}
		else
		if (expVal instanceOf ID) {
			if ((ID) actVal != (ID) expVal) 
				this.fldToVarianceMap.put(fld,new Variance(expVal,actVal));
		}
		else
		if (expVal instanceOf Integer) {
			if ((Integer) actVal != (Integer) expVal) 
				this.fldToVarianceMap.put(fld,new Variance(expVal,actVal));
		}
		else
		if (expVal instanceOf String) {
			if ((String) actVal != (String) expVal) 
				this.fldToVarianceMap.put(fld,new Variance(expVal,actVal));
		}
		return this;												
	}
}

The domain (Sobject-specific) Class

public class OpportunityVerify extends SObjectVerify {
	
	//	Usage
	//	System.assertEquals(SObjectVerify.NO_ERRORS,OpportunityVerify.getInstance(someActualOpportunity)
	//							.withXXX(someExpectedValFldXXX)
	//							.withYYY(someExpectedValFldYYY)
	//							.results();
	
	//	If the assertion fails, System.assert displays for each field at variance (separated by \n):
	//
	//		fldXXX expected: .... vs. actual: ....
	//		fldYYY expected: .... vs. actual: ....
	
	
	public OpportunityVerify withAccountId(Object expVal) 	{
           return (OpportunityVerify) verify(expVal,Opportunity.AccountId);
        }
	public OpportunityVerify withAmount(Object expVal) 	{
           return (OpportunityVerify) verify(expVal,Opportunity.Amount);
        }
	public OpportunityVerify withCampaignId(Object expVal) 	{
           return (OpportunityVerify) verify(expVal,Opportunity.CampaignId);
        }
	public OpportunityVerify withCloseDate(Object expVal) 	{
           return (OpportunityVerify) verify(expVal,Opportunity.CloseDate);
        }
	public OpportunityVerify withHasOli(Object expVal) 	{
           return (OpportunityVerify) verify(expVal,Opportunity.HasOpportunityLineItem);
        }
	public OpportunityVerify withStage(Object expVal) 	{
           return (OpportunityVerify) verify(expVal,Opportunity.StageName);
        }
	
	public static OpportunityVerify	getInstance(SObject actSobj) {
		return new OpportunityVerify(actSobj);
	}
	
	public String results() {
		return super.getResults(); // super class returns either NO_ERRORS (empty string) or a single string of variances
	}
	
	public OpportunityVerify(SObject actualSobj) {
		super(actualSobj);
	}
}

Additional advantages

  1. If you need to incrementally add new field verifications, you only need to add a new withXXX method to the theDomainObjectVerify class.

Some possible extensions

  1. Verify a batch of records in a single assert, with the errors indexed by the position in the list
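As a sketch of that extension (hedged: assertOpportunities and the expectedAmounts fixture are hypothetical names, not part of the classes above), per-record results can be concatenated with a list index:

```apex
// Sketch: verify a list of records in one assert, errors indexed by position.
// 'actuals' and 'expectedAmounts' are hypothetical test fixtures.
static void assertOpportunities(List<Opportunity> actuals, List<Decimal> expectedAmounts) {
    String allErrors = SObjectVerify.NO_ERRORS;
    for (Integer i = 0; i < actuals.size(); i++) {
        String res = OpportunityVerify.getInstance(actuals[i])
            .withAmount(expectedAmounts[i])
            .results();
        if (res != SObjectVerify.NO_ERRORS) {
            allErrors += '\nRecord[' + i + ']:' + res;
        }
    }
    System.assertEquals(SObjectVerify.NO_ERRORS, allErrors,
                        'all records should match expected values');
}
```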

Batchables-Limits Exceptions

A bad thing happened the other day. Here was the sequence:

  1. Batch job started. start() method returned 5000 + Opportunity rows.
  2. Database.Stateful was used to record an internal log of activity for subsequent posting in the finish() method.
  3. Each batch of 200 was passed to execute() method.
  4. Execute method added bits to the stateful log (a string variable).
  5. Batch 21 (out of 28) blew up on a Limits Heap Size exception. The blowups continued on batches 22-28.
  6. The finish() method started, took the value of the Database.Stateful variable, and persisted it to Log__c SObject(s).
  7. AND HERE IS WHERE BAD THINGS HAPPENED: the finish() method started a “finalize” batch job, passing a list of SObject IDs that had exceptions in any previous batch execute(). The finalize batch job (i.e. the chained batch job) made updates to all Opportunities that weren’t selected by the previous batch’s start() method and weren’t already marked as exceptions. In my case, these Opportunities were marked as Closed Lost.

So, because the Opportunities in batches 21-28 were never processed and never marked with an exception (thanks to the uncatchable Limits exception), the chained (second) batch job blithely assumed that those Opportunities had never been fetched by the previous batch job’s start() method. Hence, perfectly good Opportunities got marked as Closed Lost.

Uh-oh.


So, what should I have done differently?

First, I wrongly assumed that a Limits exception would terminate the entire batch job, not just the currently running execute() batch.

And, because of this misconception, the finish() method unconditionally executed without knowing whether all of the batches completed without uncaught exceptions. Any work the finish() method performs that involves DML, including launching a subsequent chained job, may lead to incorrect behavior.

  1. The finish() method has access to the BatchableContext and can get, via getJobId(), the AsyncApexJob that represents the batch job.
  2. AsyncApexJob has a field NumberOfErrors that identifies how many batches are in error. If greater than zero, appropriate business logic should be applied.
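Putting those two points together, the guard might look like this sketch (startFinalizeJob() is a hypothetical stand-in for launching your chained batch job):

```apex
// Sketch: guard the finish() method on the job's error count before chaining.
// startFinalizeJob() is a hypothetical stand-in for your chained batch launch.
public void finish(Database.BatchableContext bc) {
    AsyncApexJob job = [select NumberOfErrors, JobItemsProcessed, TotalJobItems
                        from AsyncApexJob where Id = :bc.getJobId()];
    if (job.NumberOfErrors > 0) {
        // at least one execute() batch blew up; downstream assumptions are unsafe
        // e.g. notify an admin and skip the chained job
        return;
    }
    startFinalizeJob();  // every batch completed successfully
}
```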

Of course, the Limits Exception needs to be avoided in the first place by taking a different approach to stateful logging. I’ll investigate this in a subsequent post (but don’t stay up waiting for it!)

Download multiple ContentVersion files as zip

In our org, there is a relatedList underneath the Contract object showing the ContentVersions of the contract and any amendments. Users wanted to be able to download all (or download selected) files with one click.

This relatedList does not provide a download button. You need to select each file in turn, go to the Content page, and click Download. Tedious. Users hated it. I hated it.

Googling revealed an interesting post on downloading attachments into a zip using JSZip. However, we had files greater than 15MB and I worried about Heap Limits.

It turns out SFDC already provides a download-ContentVersions-as-zip on the Content tab (where neither our users nor I ever go, as the Contract relatedList is more convenient). So, I used the browser developer tools to see what URL SFDC constructed and saw that it exploited the undocumented servlet.shepherd.

The specific URL to use is:

{!URLFOR('/sfc/servlet.shepherd/version/download/' & delimitedSelectedIdList &'?')}

where delimitedSelectedIdList is a list of ContentVersion ids separated by forward slashes. From my understanding, ids separated by slashes are a sometimes-used REST convention to specify a list of resources.

Example (downloads 2 files into single zip)

{!URLFOR('/sfc/servlet.shepherd/version/download/068xxxxxxxxxxxxxxx/068yyyyyyyyyyyyyyy?')}

I tested this on large zip files (~400 MB) and had no issues.
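In a custom controller, the delimited id list can be built with String.join. A sketch, where the hard-coded ids are placeholders and getDownloadAllUrl() is a hypothetical getter you would bind to a button or link:

```apex
// Sketch: build the servlet.shepherd download URL from selected ContentVersion ids.
// The id values are placeholders - populate the list from your page's selection state.
public String getDownloadAllUrl() {
    List<String> selectedVersionIds = new List<String>{
        '068xxxxxxxxxxxxxxx', '068yyyyyyyyyyyyyyy'};
    return '/sfc/servlet.shepherd/version/download/' +
           String.join(selectedVersionIds, '/') + '?';
}
```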

Notes:

  1. servlet.shepherd is undocumented and hence unsupported, so this might not be the right answer for a mission-critical application.
  2. servlet.shepherd does not work on Attachments, only ContentVersion.