Hardware Specifications for Deduplication Extended Mode

In the deduplication extended mode configuration, a MediaAgent hosts multiple deduplication databases (DDBs); hosting no more than two DDBs per MediaAgent is recommended.

You can use the extended mode in the following scenarios:


MediaAgent Hosting DDBs of Two Sites

The MediaAgent hosts the DDBs for the primary copy and the secondary copy, which belong to two different sites.

Data from both the primary and secondary copies at the different sites accesses the same DDB MediaAgent.

Long Term Retention

The MediaAgent hosts two DDBs, one for the primary copy and one for the secondary copy, with the following retention settings:

  • Primary Copy - 90-day retention.
  • Secondary Copy - 1-year to 5-year retention, with DDB sealing performed every year (see the sketch below).
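
Yearly sealing determines how many DDB generations (the active DDB plus sealed ones) coexist on the MediaAgent at any time. Below is a minimal sketch of that arithmetic, assuming a sealed DDB is retained until every job referencing it ages past the copy's retention; the function and variable names are illustrative, not product settings.

```python
import math

def estimate_ddb_generations(retention_years: float, sealing_interval_years: float = 1.0) -> int:
    """Rough count of DDBs kept on the MediaAgent: the active DDB plus the
    sealed DDBs that still hold unexpired jobs (assumption, not a product formula)."""
    sealed_still_retained = math.ceil(retention_years / sealing_interval_years)
    return sealed_still_retained + 1

# Secondary copy with 5-year retention and yearly DDB sealing:
print(estimate_ddb_generations(5))  # -> 6
# Secondary copy with 1-year retention and yearly DDB sealing:
print(estimate_ddb_generations(1))  # -> 2
```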

Two DDBs for Primary Copies per MediaAgent

Protection of large amounts of unstructured data with an incremental forever strategy.

In this scenario, the MediaAgent hosts two DDBs for primary copies with 90-day retention.

Two DDBs for Secondary Copies per MediaAgent

Fan-in target for secondary copies from two or more DDB MediaAgents managing primary copies.

In this scenario, the MediaAgent hosts two DDBs for secondary copies with 90-day retention.

The following specifications provide the hardware requirements for Extra Large and Large environments for deduplication extended mode. Extended deduplication mode is not recommended for Medium, Small, and Extra Small environments.

Important:

  • The following hardware requirements apply to MediaAgents with deduplication. They do not apply to tape libraries, to MediaAgents without deduplication, or to MediaAgents that use third-party deduplication applications.
  • The suggested workloads are not software limitations, but rather design guidelines for sizing under specific conditions.
  • Before configuring Large or Extra Large MediaAgents on VMs, contact the Products team for confirmation.

Components

Backend Size [1] [2]

  • Extra Large: Up to 400 TB
  • Large: Up to 300 TB

CPU/RAM

  • Extra Large: 16 CPU cores, 128 GB RAM
  • Large: 12 CPU cores, 64 GB RAM

Disk Layout

OS or Software Disk

  • Extra Large: 400 GB SSD class disk
  • Large: 400 GB usable disk, minimum 4 spindles of 15K RPM or higher, OR SSD class disk

DDB Volume 01 per MediaAgent

  • Extra Large: 2 TB SSD class disk/PCIe IO cards [3] with 2 GB controller cache memory
  • Large: 1.2 TB SSD class disk/PCIe IO cards [3] with 2 GB controller cache memory

DDB Volume 02 per MediaAgent

  • Extra Large: 2 TB SSD class disk/PCIe IO cards [3] with 2 GB controller cache memory
  • Large: 1.2 TB SSD class disk/PCIe IO cards [3] with 2 GB controller cache memory

Suggested IOPS for each DDB Disk

  • Extra Large: 20K dedicated random IOPS [4]
  • Large: 15K dedicated random IOPS [4]

Index Cache Disk [7]

  • Extra Large: 2 TB usable with 800+ IOPS [3] [5]
  • Large: 1.2 TB usable with 800+ IOPS [3]

Note: An SSD class disk/PCIe IO card is recommended for certain workloads [6]. For example, an extra large Exchange Mailbox Agent index server can contain 1 billion messages, and a large Exchange Mailbox Agent index server can contain 750 million messages. See Configurations for the Exchange Mailbox Agent Index Server.
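
Footnote [4] makes the IOPS figures per DDB rather than per volume, so if both DDB volumes end up on the same underlying SSD or SAN LUN, the storage must deliver the sum of the dedicated IOPS. A quick illustration of that arithmetic follows; the values simply restate the specifications above, and the function name is hypothetical.

```python
# Suggested dedicated random IOPS per DDB disk, from the specifications above.
SUGGESTED_DDB_IOPS = {"extra_large": 20_000, "large": 15_000}

def required_shared_volume_iops(size: str, ddbs_on_volume: int) -> int:
    """Total random IOPS the underlying storage must sustain when several
    DDBs share one volume (each DDB needs dedicated IOPS, per footnote [4])."""
    return SUGGESTED_DDB_IOPS[size] * ddbs_on_volume

# Extra Large MediaAgent with both DDBs placed on a single shared volume:
print(required_shared_volume_iops("extra_large", 2))  # -> 40000
```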

Suggested Workloads

Parallel Data Stream Transfers

  • Extra Large: 300
  • Large: 200

Laptop Clients

  • Extra Large: 5000
  • Large: 2500

Front End Terabytes (FET)

  • Extra Large:
    • Primary Copy Only - 150 TB to 200 TB
    • Secondary Copy Only - 150 TB to 200 TB
    • Mix of Primary and Secondary Copy:
      • 80 TB to 100 TB Primary Copy FET
      • 80 TB to 100 TB Secondary Copy FET
  • Large:
    • Primary Copy Only - 80 TB to 160 TB
    • Secondary Copy Only - 80 TB to 160 TB
    • Mix of Primary and Secondary Copy:
      • 40 TB to 60 TB Primary Copy FET
      • 40 TB to 60 TB Secondary Copy FET

Two DDBs for Primary Copies per MediaAgent (OR) Two DDBs for Secondary Copies per MediaAgent

  • Extra Large:
    • 200 TB FET of files (includes OnePass™ for files)
    • 120 TB FET of VM data (mix of VSA on VMs and on MediaAgent)
    • 140 TB FET of VM and file data (mix of files, VSA on VMs and on MediaAgent)
  • Large:
    • 160 TB FET of files (includes OnePass for files)
    • 100 TB FET of VM data (mix of VSA on VMs and on MediaAgent)
    • 120 TB FET of VM and file data (mix of files, VSA on VMs and on MediaAgent)

Notes:

  • Assumes an incremental forever strategy with periodic DASH fulls and staggered schedules.
  • The combination of the above data types must not exceed 120 TB to 140 TB FET (Extra Large) or 80 TB to 120 TB FET (Large) on the primary copies.
  • Do not use extended mode with two primary copies for application or proxy backups.

MediaAgent hosting one Primary Copy DDB and one Secondary Copy DDB

  • Extra Large:
    • Primary Copy:
      • 100 TB FET of files (includes OnePass for files)
      • 80 TB FET for VMs and files (mix of files with VSA on MediaAgent, and multiple VMs with VSA)
      • 60 TB FET for databases or applications
    • Secondary Copy:
      • 80 TB to 100 TB FET originating from the primary copy of another deduplication node
  • Large:
    • Primary Copy:
      • 80 TB FET of files (includes OnePass for files)
      • 60 TB FET for VMs and files (mix of files with VSA on MediaAgent, and multiple VMs with VSA)
      • 40 TB FET for databases or applications
    • Secondary Copy:
      • 40 TB to 60 TB FET originating from the primary copy of another deduplication node
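
As a rough illustration of how to apply the FET guidance above, the sketch below encodes the "Mix of Primary and Secondary Copy" upper bounds for both configurations and checks a proposed workload against them. The limits restate the figures above; the function and variable names are hypothetical, not a product API.

```python
# Upper bounds (in TB) of the "Mix of Primary and Secondary Copy" FET
# guidance, taken from the suggested workloads above.
FET_MIX_LIMITS_TB = {
    "extra_large": {"primary": 100, "secondary": 100},
    "large": {"primary": 60, "secondary": 60},
}

def fits_extended_mode(size: str, primary_fet_tb: float, secondary_fet_tb: float) -> bool:
    """Return True if a proposed primary/secondary FET mix stays within
    the suggested workload for the given MediaAgent size."""
    limits = FET_MIX_LIMITS_TB[size]
    return primary_fet_tb <= limits["primary"] and secondary_fet_tb <= limits["secondary"]

# 90 TB primary + 95 TB secondary fits an Extra Large extended-mode MediaAgent,
# but not a Large one.
print(fits_extended_mode("extra_large", 90, 95))  # -> True
print(fits_extended_mode("large", 90, 95))        # -> False
```
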
Supported Targets

Tape Drives

  • Extra Large: Not Recommended
  • Large: Not Recommended

Disk Storage without NetApp Deduplication

  • Extra Large: Not Recommended
  • Large: Not Recommended

Deduplication Disk Storage

  • Extra Large: Up to 400 TB, Direct Attached (OR) NAS
  • Large: Up to 300 TB, Direct Attached (OR) NAS

Third-Party Deduplication Appliances

  • Extra Large: Not Recommended
  • Large: Not Recommended

Cloud Storage

  • Extra Large: Yes, with the primary copy on disk and the secondary copy on cloud storage
  • Large: Yes, with the primary copy on disk and the secondary copy on cloud storage

Deploying MediaAgent on Cloud / Virtual Environments

  • Extra Large: NA
  • Large: NA

Note: The TB values are base-2.
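
Because the capacities above are base-2, each TB here is 2^40 bytes (a tebibyte), slightly more than a decimal terabyte of 10^12 bytes. A purely illustrative conversion:

```python
# 1 base-2 TB (TiB) = 2**40 bytes; 1 base-10 TB = 10**12 bytes.
def base2_tb_to_decimal_tb(tb_base2: float) -> float:
    return tb_base2 * 2**40 / 10**12

# The 400 TB Extra Large backend limit expressed in decimal terabytes:
print(round(base2_tb_to_decimal_tb(400), 1))  # -> 439.8
```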

  1. Maximum size per DDB.
  2. Assumes standard retention of up to 90 days. Larger retention might affect the FET managed by the specified configuration; the backend capacity remains the same.
  3. SSD class disk indicates PCIe-based cards or internal dedicated endurance value drives.
  4. When multiple DDBs are on the same volume, each DDB needs dedicated IOPS. IOPS might be limited by the SAN controller even when SSD drives are used.
  5. This recommendation is for unstructured data types such as files, VMs, and granular messages. Structured data types such as applications, databases, and so on need considerably less index cache.
  6. For the following data-intensive use cases, placing your index data on a solid-state drive (SSD) might provide better indexing performance:
    • Exchange Mailbox Agent
    • Virtual Server Agents
    • NAS filers running NDMP backups
    • Backing up large file servers
    • SharePoint Agents
    • Ensuring maximum performance whenever it is critical
  7. Assumes retention of up to 15 days.