When DoorDash tried to back up their Salesforce environment, their vendor simply timed out. With 17 billion records to protect, they learned a hard lesson: not all backup solutions can handle enterprise scale.
The answer to "Does Salesforce backup my data?" is both yes and no. Salesforce backs up its infrastructure, but under the shared responsibility model, you're responsible for protecting your data from human error, integration failures, and other user-level risks.
Here's what you need to know.
Short answer: No, not in a way you can recover.
Salesforce performs continuous infrastructure backups using real-time replication between data centers. This protects against hardware failures and natural disasters, but it doesn't protect against the most common causes of data loss: human error, bad code and automation, integration failures, and insider threats.
According to a Stanford study, 88% of data breaches stem from human error—exactly the scenarios Salesforce's infrastructure backups don't cover.
Think of it like renting an apartment.
The landlord ensures the building doesn't collapse and the roof doesn't leak. But you're responsible for protecting your belongings inside. If you accidentally throw away important documents, the landlord's insurance won't recover them.
Similarly, Salesforce keeps the building (the platform) standing, but the belongings (your records) are yours to protect. What Salesforce does give you is a set of native export and backup options:
Data Export Service - Weekly or monthly CSV exports
Data Loader - Manual bulk export tool (a scripted equivalent is sketched below)
Salesforce Backup & Recovery - Salesforce's own add-on backup and restore service
For mid-size orgs, this is often sufficient. For enterprises with large data volumes or strict independence requirements, it may fall short.
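The export side, at least, is easy to script. Below is a minimal sketch, assuming the simple_salesforce Python library and placeholder credentials, that pulls one object into a dated CSV file, which is roughly what Data Loader does in export mode:

```python
import csv
from datetime import date

from simple_salesforce import Salesforce  # assumption: pip install simple-salesforce

# Placeholder credentials; use your org's real auth method (OAuth is preferable in production).
sf = Salesforce(username="admin@example.com",
                password="password",
                security_token="token")

# query_all pages through results automatically and returns a dict of records.
result = sf.query_all("SELECT Id, Name, LastModifiedDate FROM Account")

# Drop the Salesforce metadata key from each record and write a dated CSV snapshot.
rows = [{k: v for k, v in rec.items() if k != "attributes"}
        for rec in result["records"]]
if rows:
    with open(f"account_backup_{date.today()}.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)
print(f"Exported {len(rows)} Account records")
```

A script like this inherits the same constraints as the native tools: it works for a handful of objects, but it is an export, not a restore strategy.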
As a company generating hundreds of thousands of Salesforce records daily, DoorDash selected a reputable backup vendor. The result? According to Lead Salesforce Engineer Jeegar Brahmakshatriya, the tool "had limitations that caused it to just timeout, and not be able to backup the data."
The problem was scale. DoorDash needed to protect 17 billion records across 1,900 objects—volumes that broke their first vendor.
API Governor Limits - Salesforce enforces strict API limits. Standard tools consume APIs inefficiently, causing timeouts at large volumes (see the sketch after this list).
Backup Windows - At extreme scale, backups can take 20+ hours. No time to recover before the next cycle begins.
Performance Impact - Inefficient processes degrade production performance. DoorDash manages 80 million accounts and 2.7 TB of files—they needed a solution that wouldn't slow down their 1,000 call center agents.
Metadata Complexity - Tools must backup complex relationships and specialized content like Salesforce Knowledge articles (critical for companies like DoorDash's customer service operations).
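The API budget, at least, is easy to check before a run. Below is a minimal sketch (the instance URL, token, and 20% threshold are placeholders) that reads the standard /limits REST endpoint and refuses to start a backup that would starve production integrations:

```python
import requests

INSTANCE_URL = "https://yourcompany.my.salesforce.com"   # placeholder
ACCESS_TOKEN = "00D...session_token"                     # placeholder
API_VERSION = "v60.0"

# The /limits endpoint reports org-wide limits, including daily API requests.
resp = requests.get(
    f"{INSTANCE_URL}/services/data/{API_VERSION}/limits",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
resp.raise_for_status()

api = resp.json()["DailyApiRequests"]
remaining, maximum = api["Remaining"], api["Max"]
print(f"Daily API requests: {remaining}/{maximum} remaining")

# Illustrative guard: leave headroom for production integrations.
if remaining < 0.2 * maximum:
    raise SystemExit("Less than 20% of the daily API budget left; postpone the backup run")
```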
When evaluating third-party solutions, start with the two pillars of any comprehensive backup and restore offering: data security and restore-readiness. Then check for these critical capabilities:
Proven Scale Performance - Ask for specific examples at your data volume. Can they show successful backups at your scale?
API Efficiency - At large volumes, API consumption determines success or failure. Look for documented efficiency metrics and the ability to use every available API to support different backup requirements.
Complete Metadata Coverage - Must include custom objects, Knowledge articles, workflows, and all relationships. Incomplete metadata = impossible restoration.
Flexible Backup Frequency - Options from hourly to continuous, not just daily.
Granular Restore - Restore a single field, an entire object, or run a large-scale restoration spanning many objects with complex hierarchies and circular references (a minimal field-level sketch follows this list). Point-in-time recovery to any previous snapshot.
Platform Independence - Many security frameworks recommend storing backups separate from production platforms.
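To make granular restore concrete: the sketch below, assuming the simple_salesforce library and a backup CSV snapshot, restores a single field (Phone, chosen arbitrarily) only on records that have drifted from the backup. A real tool must also rebuild relationships and handle circular references, which this deliberately ignores.

```python
import csv

from simple_salesforce import Salesforce  # assumption: pip install simple-salesforce

sf = Salesforce(username="admin@example.com",
                password="password",
                security_token="token")  # placeholder credentials

FIELD = "Phone"  # illustrative: restore just this one field on Account

# Read the last known-good values from a backup snapshot (path is a placeholder).
with open("account_backup_2024-01-01.csv", newline="") as f:
    backup = {row["Id"]: row[FIELD] for row in csv.DictReader(f)}

# Compare against current production values and patch only the drifted records.
current = sf.query_all(f"SELECT Id, {FIELD} FROM Account")["records"]
for rec in current:
    good_value = backup.get(rec["Id"])
    if good_value is not None and rec[FIELD] != good_value:
        sf.Account.update(rec["Id"], {FIELD: good_value})
        print(f"Restored {FIELD} on {rec['Id']}")
```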
These are the user-level risks that actually cause Salesforce data loss:
Human error - Tools like Data Loader let admins modify thousands of records in seconds. One mistake in a CSV file can wipe data permanently.
Bad code and automation - Buggy Apex code or misconfigured flows can cause mass corruption. Without backups, you're rebuilding data manually.
Integration failures - Sync errors with external systems can overwrite correct data with incorrect information. Accor experienced this during a migration; their backup enabled same-day recovery.
Insider threats - Disgruntled employees or compromised credentials can deliberately delete or modify critical business data.
Salesforce releases - Salesforce ships updates three times yearly. While rare, compatibility issues can cause unexpected data loss.
Define two key metrics:
Recovery Time Objective (RTO): How much downtime can you tolerate?
Recovery Point Objective (RPO): How much data loss is acceptable?
For critical, fast-changing data, daily backups mean potential 24-hour data loss. DoorDash runs incremental backups every 4 hours to minimize risk.
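The arithmetic behind that choice is worth writing down. The record rate below is illustrative (hypothetical, not DoorDash's figure); the two intervals match the daily-versus-4-hour comparison above:

```python
# Worst-case records lost = creation rate * time since the last backup (the RPO).
records_per_day = 500_000          # illustrative: "hundreds of thousands" of records daily

for interval_hours in (24, 4):     # daily backup vs. a 4-hour incremental cycle
    at_risk = records_per_day / 24 * interval_hours
    print(f"Backup every {interval_hours:>2}h -> up to {at_risk:,.0f} records lost in the worst case")
```

Run the same calculation against your own record volumes to sanity-check a proposed RPO.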
The reality: Salesforce protects its infrastructure. You protect your data.
The challenge: Native tools have limitations—particularly around frequency, metadata coverage, and Large Data Volumes. When DoorDash's first vendor timed out on 17 billion records, they learned vendor selection matters.
The solution: Define your RTO and RPO. Understand your regulatory requirements. If you manage large volumes, evaluate solutions built for scale.
The critical step: Test your backup regularly. Don't discover during a disaster that your solution can't actually recover your data.
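One cheap, repeatable test is a record-count reconciliation: compare what the backup claims it captured with what the org actually holds. A minimal sketch, assuming simple_salesforce and a hypothetical per-object manifest from your backup tool:

```python
from simple_salesforce import Salesforce  # assumption: pip install simple-salesforce

sf = Salesforce(username="admin@example.com",
                password="password",
                security_token="token")  # placeholder credentials

# Hypothetical manifest: object name -> record count reported by the backup tool.
backup_counts = {"Account": 80_000_000, "Case": 120_000_000, "Knowledge__kav": 45_000}

for obj, backed_up in backup_counts.items():
    # COUNT() queries return no rows, only a totalSize.
    live = sf.query(f"SELECT COUNT() FROM {obj}")["totalSize"]
    status = "OK" if backed_up >= live else "GAP"
    print(f"{obj:<16} backup={backed_up:>12,} live={live:>12,} {status}")
```

Counts alone don't prove restorability; a periodic test restore into a sandbox is the stronger check, and for very large objects a filtered count (for example, records modified in the last week) avoids query timeouts. But even a reconciliation like this catches silent backup failures early.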
Your Salesforce data powers your business. Make sure your backup strategy can protect it.