Dell PERC13 Transforms NVMe Hardware RAID for the AI Era

March 12, 2026
Dell’s H975i, part of the PERC13 RAID controller series, represents the most significant advancement the company has made in hardware RAID in more than a decade. While Dell has rolled out regular updates to its PERC lineup over the years, these were mostly incremental—focused on controller optimization and enhanced bandwidth as PCIe generations evolved. However, the underlying architecture remained rooted in the SATA and SAS legacy that has shaped enterprise RAID for years. The PERC H975i breaks this cycle once and for all. Built on Broadcom’s SAS51xx chipset series, this controller signals a clear shift to a flash-first, NVMe-native design. By exclusively supporting NVMe drives and discontinuing compatibility with traditional HDDs and SATA technologies, the H975i adopts a forward-thinking approach to storage infrastructure, tailored to meet the high-performance, low-latency requirements of modern data-intensive and AI-centric workloads.
 

Key Takeaways

  • Flash-first NVMe RAID: PERC13 H975i moves off SAS/SATA entirely, built on Broadcom SAS51xx for an NVMe-native, AI-ready architecture.
  • Big generational jump: PCIe Gen5 x16 with up to 16 NVMe drives per controller (32 with two) delivered 52.5 GB/s and 12.5M IOPS per controller in testing, with gains vs PERC12 including +88% read bandwidth, +318% write bandwidth, +31% 4K read IOPS, and +466% 4K write IOPS.
  • AI server fit: Front-integrated design frees rear PCIe slots for GPUs, shortens MCIO runs, and enables a dedicated storage pipe per accelerator for steadier, more deterministic throughput with no CPU overhead.
  • Resiliency under stress: Supercapacitor-protected cache and faster rebuilds cut time to as low as 10 min/TiB while maintaining high performance during rebuilds (up to 53.7 GB/s reads, 68 GB/s writes, 17.3M/5.33M 4K IOPS).
  • End-to-end security: Hardware Root of Trust, SPDM device identity, and full-spectrum encryption that covers drives, in-flight data, and controller cache.
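The rebuild-rate takeaway translates into concrete service times. As a rough sketch, assuming the quoted best-case 10 min/TiB rate and the 3.2 TB drives used later in this review:

```python
# Best-case rebuild-time estimate from the quoted rate of 10 min/TiB.
TIB = 2**40  # bytes per TiB

def rebuild_minutes(drive_bytes: float, mins_per_tib: float = 10.0) -> float:
    """Estimated time to rebuild a single replaced drive at a fixed rate."""
    return drive_bytes / TIB * mins_per_tib

# A 3.2 TB (decimal) NVMe drive is ~2.91 TiB, so a best-case rebuild
# finishes in roughly half an hour.
print(round(rebuild_minutes(3.2e12)))  # → 29
```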

 

The PERC H975i delivers unparalleled performance and architectural innovations. Featuring a PCIe Gen 5 x16 host interface and support for up to 16 NVMe drives (32 NVMe drives per system when paired with two controllers), the H975i achieved impressive results in our testing—boasting a maximum throughput of 52.5 GB/s and 12.5 million IOPS per controller. This marks nearly a 2x performance improvement across all key metrics compared to the PERC12, which capped out at 6.9 million IOPS and 27 GB/s throughput. Beyond raw speed, the PERC13 series introduces a supercapacitor-based cache protection system, replacing traditional battery-backed solutions to ensure data integrity without sacrificing operational reliability. Building on the security foundations of its predecessor, the H975i now offers full-spectrum encryption capabilities, encrypting data within the cache and providing comprehensive protection for data both in transit and at rest.
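The "nearly 2x" claim can be checked directly from the peak figures quoted above:

```python
# Peak per-controller figures quoted in the text
perc12 = {"throughput_gbps": 27.0, "iops_m": 6.9}
perc13 = {"throughput_gbps": 52.5, "iops_m": 12.5}

bw_ratio = perc13["throughput_gbps"] / perc12["throughput_gbps"]
iops_ratio = perc13["iops_m"] / perc12["iops_m"]
print(f"{bw_ratio:.2f}x throughput, {iops_ratio:.2f}x IOPS")  # 1.94x / 1.81x
```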
The PERC H975i stands out as a purpose-built storage accelerator engineered to meet the unprecedented computational demands of AI workloads. It combines high density, exceptional performance, and low-latency storage—all without imposing CPU overhead. In practical applications, pairing a PCIe Gen5 RAID card capable of saturating a x16 interface with a Gen5 GPU creates a dedicated storage pipeline for each accelerator. This simplifies PCIe/NUMA topology, eliminates noisy-neighbor effects, and isolates rebuild or background tasks to that GPU’s I/O domain.
 
Scaling this setup to dual RAID cards paired with two GPUs preserves linear performance while avoiding contention on shared lanes or caches. The outcome is more consistent input bandwidth for data-intensive training and inference workloads—including large batches, rapid shuffles, and fast checkpoint reads—along with tighter latency distributions under load and during rebuild processes. This architecture doesn’t merely push higher peak performance numbers; it delivers more deterministic throughput, a critical requirement for multi-GPU AI servers aiming to maintain high utilization rates.
 

Dell PERC12 H965i and PERC13 H975i Specifications

| Feature | PERC12 H965i Front | PERC13 H975i Front |
|---|---|---|
| RAID Levels | 0, 1, 5, 6, 10, 50, 60 | 0, 1, 5, 6, 10, 50, 60 |
| Non-RAID (JBOD) | Yes | Yes |
| Host Bus Type | PCIe Gen4 x16 | PCIe Gen5 x16 |
| Side-band Management | I2C, PCIe VDM | I2C, PCIe VDM |
| Enclosures per Port | Not applicable | Not applicable |
| Processor / Chipset | Broadcom RAID-on-Chip, SAS4116W | Broadcom RAID-on-Chip, SAS5132W |
| Energy Pack / Power-backup | Battery | Supercapacitor |
| Local Key Management Security | Yes | Yes |
| Secure Enterprise Key Manager | Yes | Yes |
| Controller Queue Depth | 8,192 | 8,192 |
| Non-volatile Cache | Yes | Yes |
| Cache Memory | 8 GB DDR4 3200 MT/s | Integrated RAID cache |
| Cache Functions | Write-back, read-ahead, write-through, always write-back, no read-ahead | Write-back, write-through, always write-back, no read-ahead |
| Max Complex Virtual Disks | 64 | 16 |
| Max Simple Virtual Disks | 240 | 64 |
| Max Disk Groups | 64 | 32 |
| Max VDs per Disk Group | 16 | 8 |
| Max Hot-spare Devices | 64 | 8 |
| Hot-swap Devices Supported | Yes | Yes |
| Auto-Configure (Primary & Execute once) | Yes | Yes |
| Hardware XOR Engine | Yes | Yes |
| Online Capacity Expansion | Yes | Yes |
| Dedicated & Global Hot Spare | Yes | Yes |
| Supported Drive Types | NVMe Gen3 and Gen4 | NVMe Gen3, Gen4 and Gen5 |
| VD Strip Element Size | 64KB | 64KB |
| NVMe PCIe Support | Gen4 | Gen5 |
| Max NVMe Drives | 8 per controller | 16 per controller |
| Supported Sector Sizes | 512B, 512e, 4Kn | 512B, 512e, 4Kn |
| Storage Boot Support | UEFI-only | UEFI-only |

 

 

The PERC13 H975i Front controller in Dell PowerEdge servers is designed for seamless integration into the system architecture. Unlike traditional add-in cards that occupy rear PCIe slots, the H975i connects directly to the front drive backplane and interfaces with the front MCIO connectors on the motherboard through dedicated PCIe 5.0 interfaces. This integrated design preserves rear PCIe slots for high-performance GPUs and additional PCIe expansion, while significantly reducing the length of cables. This aids in maintaining signal integrity, making the system more reliable and easier to service. The result is a cleaner internal layout and improved airflow for dense, compute-intensive deployments.


The H975i implements a comprehensive security architecture that spans silicon-level hardware attestation through full-spectrum data encryption, including data at rest on SED drives. At its foundation, a Hardware Root of Trust establishes an immutable chain of cryptographic verification from the internal boot ROM through each firmware component, ensuring only authenticated, Dell-certified firmware can execute on the controller. This hardware-based security extends through the Security Protocol and Data Model (SPDM), where each controller carries a unique Device Identity certificate that enables iDRAC to perform real-time authentication. The controller also extends cryptographic protection beyond traditional data-at-rest scenarios to include cache memory, maintaining encryption keys in secure memory regions that are inaccessible to unauthorized firmware. As a result, sensitive data remains protected whether it resides on drives or is actively being processed in cache.


Power protection in the H975i is another significant evolution from traditional battery-backed systems through its integration of a supercapacitor. The supercapacitor provides instantaneous power delivery during unexpected power loss events, ensuring an encrypted and complete cache flush to non-volatile storage, where data remains protected indefinitely. In addition, unlike battery-based systems that require 4-8 hours for learn cycles, the H975i’s supercapacitor completes its Transparent Learn Cycle within 5-10 minutes without any performance degradation during calibration. This design eliminates the maintenance overhead and degradation concerns inherent in battery solutions, while providing superior reliability for mission-critical data protection.


Integrated Monitoring and Management

Dell’s PERC13 RAID controller, like many of Dell’s RAID solutions, can be managed and monitored in several ways: during platform boot via System Setup in the BIOS, through the iDRAC web GUI, with the PERC CLI utility, and through the Dell OpenManage UI and CLI.

iDRAC Controller Management

When viewing the iDRAC management interface, the controllers tab offers an overview of the server’s storage hardware. Alongside the BOSS card, you’ll see the dual PERC H975i controllers, complete with information on firmware versions, cache memory, and battery health. This summary enables you to quickly verify the controllers’ readiness and configuration without needing to access the BIOS or use CLI tools.


The Virtual Disks tab in iDRAC shows the storage arrays that have been created, including their RAID level, size, and caching policy. In this system, two RAID-10 groups are listed, both built on SSDs. From this view, administrators can confirm volumes are online, create new virtual disks, or use the Actions menu to adjust or delete existing configurations.


RAID Controller Configuration Utility


The image above shows an example of entering the PERC H975i Front Configuration Utility System Setup on the PowerEdge R7715 platform. From this interface, you can manage all key RAID controller settings, including Configuration Management, Controller Management, Device Management, and more. This utility provides a streamlined way to set up virtual disks and monitor hardware components directly during the platform boot process.


After selecting the RAID level, we move on to choosing physical disks for the array. In this example, all available NVMe SSDs are listed and marked as RAID-capable. We select multiple 3.2 TiB Dell DC NVMe drives from the unconfigured capacity pool. Filters like media type, interface, and logical sector size help narrow the selection. Once the desired drives are checked, we can proceed by clicking “OK” to finalize the disk selection and continue creating the Virtual Disk.


Before finalizing the virtual disk creation, the system displays a warning that confirms all data on the selected physical disks will be permanently deleted. To proceed, we check the “Confirm” box and select “Yes” to authorize the operation. This safeguard helps prevent accidental data loss during the RAID creation process.


Once the virtual disk is created, it appears under the “Virtual Disk Management” menu. In this example, our new RAID 5 virtual disk is listed with a capacity of 43.656 TiB and a status of “Ready.” With just a few simple steps, the storage is configured and ready for use.
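The 43.656 TiB figure is consistent with a sixteen-drive RAID 5 group built from the 3.2 TB (decimal) drives selected earlier; a quick check, assuming one drive's worth of capacity is consumed by parity:

```python
TIB = 2**40  # bytes per TiB

def raid5_usable_tib(num_drives: int, drive_bytes: float) -> float:
    """RAID 5 usable capacity: one drive's worth of space goes to parity."""
    return (num_drives - 1) * drive_bytes / TIB

# Sixteen 3.2 TB drives -> 15 x ~2.9104 TiB usable
print(round(raid5_usable_tib(16, 3.2e12), 3))  # → 43.656
```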


While the PERC BIOS Configuration Utility and iDRAC interface offer intuitive options for local and remote management, Dell also provides a powerful command-line tool called PERC CLI (perccli2). This utility supports Windows, Linux, and VMware, making it ideal for scripting, automation, or managing PERC controllers in headless environments. Dell also provides detailed documentation on installation and command usage for PERC CLI on their support site.

Dell PERC13 Performance Testing

Before diving into performance testing, we prepared our environment using the Dell PowerEdge R7715 platform configured with dual PERC H975i front controllers. These were paired with thirty-two 3.2TB Dell NVMe drives, each rated for up to 12,000 MB/s sequential reads and 5,500 MB/s sequential writes using 128 KiB block sizes. This high-performance foundation enables us to push the limits of the PERC13 controller’s throughput and evaluate RAID behavior at scale.

  • Platform: Dell PowerEdge R7715
  • CPU: AMD EPYC 9655P 96-Core Processor
  • RAM: 768GB (12 x 64GB) DDR5-5200 ECC
  • RAID Controller: 2 x PERC13 H975i
  • Storage: 32 x 3.2TB Dell CD8P NVMe Drives
  • PCIe Accelerators: 2 x NVIDIA H100 GPU
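One point worth noting about this configuration: raw media bandwidth far exceeds what the two controllers can move, so the controllers' PCIe Gen5 x16 links, not the flash, set the ceiling. A back-of-the-envelope check using the rated drive figures above:

```python
drives = 32
per_drive_read_gbps, per_drive_write_gbps = 12.0, 5.5  # rated sequential rates
controllers, ctrl_peak_gbps = 2, 52.5                  # measured peak per H975i

media_read_gbps = drives * per_drive_read_gbps    # 384 GB/s of raw media reads
media_write_gbps = drives * per_drive_write_gbps  # 176 GB/s of raw media writes
ctrl_ceiling_gbps = controllers * ctrl_peak_gbps  # 105 GB/s across both cards
print(media_read_gbps, media_write_gbps, ctrl_ceiling_gbps)
```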

NVIDIA Magnum IO GPU Direct Storage: AI Meets Storage

Modern AI pipelines are often I/O-bound, not compute-bound. Data batches, embeddings, and checkpoints must be transferred from storage to GPU memory quickly enough to keep accelerators busy. NVIDIA’s Magnum IO GDS (via cuFile) short-circuits the traditional “SSD → CPU DRAM → GPU” path and lets data DMA directly from NVMe to GPU memory. That removes CPU bounce-buffer overhead, lowers latency, and makes throughput more predictable under load, all of which translates to higher GPU utilization, shorter epoch times, and quicker checkpoint save/load cycles.

Our GDSIO test measures the storage-to-GPU data path itself, sweeping block sizes and thread counts to show how quickly a PERC13-backed NVMe array can stream into H100 memory. With each H975i on a PCIe 5.0 x16 link (theoretically ~64 GB/s per controller, unidirectional, before protocol overhead), two controllers set a practical aggregate ceiling near ~112 GB/s; where the curves plateau tells you whether you are link-limited or media-limited. For practitioners, read the charts as proxies for real workloads: large sequential reads map to dataset streaming and checkpoint restores; large sequential writes map to checkpoint saves; smaller transfers with concurrency reflect dataloader shuffles and prefetch. In short, strong GDSIO scaling means fewer GPU stalls and more consistent performance during both training and high-throughput inference.
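The aggregate ceiling follows from link arithmetic. A sketch of the math, where the ~12% protocol-overhead factor is a common rule of thumb rather than a measured value:

```python
# PCIe Gen5 x16: 32 GT/s per lane with 128b/130b encoding
gt_per_s, lanes = 32e9, 16
raw_gbps = gt_per_s * lanes * (128 / 130) / 8 / 1e9  # ~63 GB/s per direction

# TLP headers and flow-control traffic trim usable bandwidth further;
# the ~12% overhead here is an illustrative assumption, not a measurement.
effective_gbps = raw_gbps * 0.88
aggregate_gbps = 2 * effective_gbps  # two controllers
print(round(raw_gbps, 1), round(effective_gbps, 1), round(aggregate_gbps))
```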

GDSIO Read Sequential Throughput

Starting with sequential read, throughput began modestly at lower block sizes and thread counts, starting around 0.3 GiB/s at 8K blocks with a single thread. Performance scaled sharply between 16K and 512K blocks, particularly when increasing thread count from 4 to 16. The most substantial gains occurred at 1M, 5M, and 10M block sizes, where throughput jumped dramatically, peaking at 103 GiB/s at 10M block size with 256 threads. This progression shows that the PERC13 array benefits from larger block sizes and multithreaded parallelism, with optimal saturation around 64-128 threads, beyond which gains plateau.

GDSIO Read Sequential Throughput Differential

In sequential read testing across block sizes from 8K to 10M, the PERC13 (H975i) consistently outperformed the PERC12 (H965i), with percentage gains scaling dramatically at larger block sizes and higher thread counts.

At smaller block sizes (8K-16K), improvements were modest (typically ranging from 0-20%), and in some isolated cases the H975i trailed slightly due to test variability at low queue depths. By 32K-64K block sizes, the advantage became more consistent, with the H975i delivering 30-50% higher throughput across most thread counts.

The most significant differences were observed at larger block sizes (128K through 10M), where the PERC13 controller unlocked the full sequential read potential of the system. Here, the H975i demonstrated gains of 50-120% compared to the H965i. For example, at 1M block size with 8-16 threads, throughput was over 55 GiB/s higher, equating to roughly a 90% uplift. At block sizes of 5M and 10M, improvements regularly exceeded 100%, with some configurations showing nearly double the performance compared to the previous generation.

Overall, the PERC13 (H975i) established a commanding lead in sequential read workloads, especially as block size and thread count scaled. While smaller block sizes showed incremental improvement, at 256K and above, the newer controller consistently delivered 50-100%+ higher performance, clearly highlighting the architectural advancements in Dell’s latest RAID platform.

GDSIO Read Sequential Latency

As sequential read throughput increased, latency remained manageable at smaller block sizes and lower thread counts. For example, latency stayed below 100 µs up to 64K blocks and 16 threads, showing efficient handling of reads in that range. Once block sizes and thread counts scaled higher, especially at 5M and 10M with 64 or more threads, latency climbed rapidly, peaking at 211.8 ms at a 10M block size with 256 threads. This highlights how controller or queuing bottlenecks emerge under extreme workloads, even though throughput remains high.

The best balance of performance and efficiency was observed at the 1M block size with 8-16 threads, where the array sustained 87.5-93.7 GiB/s throughput while keeping latency between 179-334 µs. This zone represents the sweet spot for maximizing bandwidth while keeping delays well under a millisecond.
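Those sweet-spot numbers are internally consistent with Little's law: 16 outstanding 1 MiB reads completing in about 179 µs each sustain roughly the reported bandwidth. A quick check, assuming fully pipelined I/O:

```python
MIB, GIB = 2**20, 2**30
threads, block_bytes, latency_s = 16, 1 * MIB, 179e-6

# Little's law: sustained throughput = bytes in flight / completion latency
gib_per_s = threads * block_bytes / latency_s / GIB
print(round(gib_per_s, 1))  # ~87.3 GiB/s, in line with the reported 87.5
```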

GDSIO Write Sequential Throughput

Write performance showed strong early scaling as block sizes increased, with throughput climbing from 1.2 GiB/s at 8K and 1 thread to 13.9 GiB/s by 256K. The most substantial growth appeared between 128K and 1M block sizes, where throughput reached over 80 GiB/s at 8 to 16 threads. Peak performance came at the 5M and 10M block sizes, sustaining 100 to 101 GiB/s from 8 threads onward.

Performance flattened across 8 to 64 threads for these larger blocks, indicating the controllers reached saturation early in the scaling curve. At higher thread counts, especially 128 and 256 threads, throughput stability varied, holding steady at large 5M and 10M blocks at 101 GiB/s but declining for mid-range block sizes, such as 256K, falling from 61.2 GiB/s at 32 threads to 45.3 GiB/s at 256 threads.

GDSIO Write Sequential Throughput Differential

In sequential write testing, the PERC13 (H975i) delivered substantial gains over the PERC12 (H965i), particularly as block sizes and thread counts scaled. At small block sizes (8K-32K), improvements were modest, generally within 0-10%, with occasional test noise showing negligible differences.

From 64K onward, the advantage of the H975i became more pronounced. At 64K block size, improvements reached 40-70%, with throughput rising by more than 12-17 GiB/s compared to the H965i. At 128K-256K, the uplift grew stronger, where the H975i consistently delivered 50-70% higher throughput at moderate to high thread counts.

The most dramatic performance gap appeared at larger block sizes (512K through 10M). At 512K, the H975i achieved gains of +31 to +56 GiB/s, equating to a 60-80% improvement over the H965i. At 1M block size, the lead extended further, with throughput jumps of +40 to +68 GiB/s, representing 70-90% gains. Finally, at 5M and 10M block sizes, the PERC13 nearly doubled throughput compared to the PERC12, with deltas of +75 to +79 GiB/s, translating into roughly 100% improvement in some thread-rich scenarios.

Overall, the PERC13 controller showed a clear generational leap in sequential write performance. While differences are minor at the smallest block sizes, once workloads scale past 64K, the H975i consistently delivers 50-100% higher throughput, firmly establishing its superiority over the H965i in write-intensive sequential workloads.

GDSIO Write Sequential Latency

Latency during sequential writes remained impressively low at smaller block sizes and lower thread counts, often staying under 50 µs through 128K blocks with up to 8 threads. As thread counts increased, latency scaled more noticeably. For example, latency reached 392 µs at 512K with 32 threads and exceeded 1 ms at 1M block size with 64 threads.

Saturation effects became more evident at the largest block sizes and highest concurrency levels. Latency rose to 12.4 ms at 5M with 128 threads and peaked at 50.3 ms at 10M with 256 threads.

The most efficient operating point for sequential write workloads occurred at the 1M or 5M block sizes with 8 to 16 threads, where throughput reached 87.9 to 101.2 GiB/s while latency remained within 178 µs – 1.7 ms, providing strong sustained performance without triggering excessive write queue delays.

MLPerf Storage 2.0 Performance

To evaluate real-world performance in AI training environments, we utilized the MLPerf Storage 2.0 test suite. MLPerf Storage is specifically designed to test I/O patterns in real, simulated deep learning workloads. It provides insights into how storage systems handle challenges such as checkpointing and model training.

Checkpointing Benchmark

When training machine learning models, checkpoints are essential for periodically saving the model’s state. This helps prevent loss of progress due to interruptions, such as hardware failures, enables early stopping during training, and allows researchers to branch from various checkpoints for experiments and ablations.

The checkpoint save duration comparison revealed that Dell PERC13 consistently outperformed PERC12 across all model configurations. PERC13 achieved save times ranging from 7.61 to 10.17 seconds, while PERC12 required 10.41 to 20.67 seconds for the same operations. The performance gap was most pronounced with the 1T parameter model, where PERC13 completed saves in just over 10 seconds compared to PERC12’s 20+ seconds. This represents approximately a 50% reduction in save time for the largest models.
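The headline reduction follows directly from the 1T-model numbers:

```python
perc12_save_s, perc13_save_s = 20.67, 10.17  # 1T-parameter checkpoint saves
reduction = 1 - perc13_save_s / perc12_save_s
print(f"{reduction:.0%}")  # ~51%, i.e. roughly half the save time
```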


Examining the Save throughput results, the data showcases PERC13’s superior bandwidth utilization, consistently delivering higher data transfer rates. PERC13 achieves throughput between 11.46 and 14.81 GB/s, with peak performance on the 1T model. In contrast, PERC12 tops out at 9.49 GB/s and drops to 6.98 GB/s for the largest configuration. The newer controller maintains more stable performance across different model sizes, suggesting better optimization for handling large sequential writes typical of checkpoint operations.
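As a rough cross-check on these figures, save time multiplied by save throughput should imply a similar checkpoint size on both controllers for the 1T model, since the workload is fixed and only the storage changes:

```python
# Implied 1T-model checkpoint size (time x throughput), from quoted figures
perc13_gb = 10.17 * 14.81  # ~150.6 GB
perc12_gb = 20.67 * 6.98   # ~144.3 GB
# Both land near ~147 GB; the small gap reflects run-to-run variance.
print(round(perc13_gb, 1), round(perc12_gb, 1))
```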

Оставьте вашу заявку (0 / 3000)