800G OSFP Guide: IHS vs RHS Selection for AI Data Centers

Everything network architects need to know about 800G form factors — from physical architecture to deployment strategy. The decision you make here ripples through your entire infrastructure.

The 800G Inflection Point: Why This Decision Matters More Than You Think

We’re in the middle of the fastest networking transition the industry has ever seen. According to TrendForce, 800G transceiver shipments are projected to explode from 24 million units in 2025 to 63 million in 2026 — a 162% year-over-year surge driven almost entirely by AI infrastructure buildouts. Dell’Oro Group notes that 800G reached 20 million ports in just three years, compared to six or seven years for 400G to hit the same milestone.

Here’s what makes this transition different: you’re not just choosing a transceiver speed. You’re choosing between two fundamentally different physical architectures — OSFP-IHS (Integrated Heat Sink) and OSFP-RHS (Riding Heat Sink) — that determine which equipment you can use, how you cool your racks, and whether your infrastructure can scale to 1.6T without a forklift upgrade.

Get this wrong, and you’ll discover the modules you ordered won’t physically fit into your equipment. Get it right, and you’ve built a foundation that carries you through the next three years of AI infrastructure expansion.

This guide gives you everything you need: the physical architecture of each form factor, when to use which, compatible equipment across the NVIDIA ecosystem, breakout strategies, implementation frameworks, and a clear decision process. Let’s get into it.

Part 1: Understanding OSFP — The Foundation

Before diving into IHS versus RHS, let’s establish what OSFP actually is and why it became the dominant form factor for 800G.

What is OSFP?

OSFP (Octal Small Form-factor Pluggable) is a hot-pluggable transceiver form factor developed by the OSFP MSA (Multi-Source Agreement) consortium. The “Octal” refers to its eight electrical lanes, each capable of carrying 100Gbps using PAM4 modulation — delivering 800Gbps aggregate bandwidth.

Compared to QSFP-DD (Quad Small Form-factor Pluggable Double Density), OSFP offers:

  • 22% larger housing — more room for thermal management and optical components
  • Higher power capacity — up to 30W for standard OSFP800 modules (vs. 14W for QSFP-DD), with 1.6T modules targeting 40W+
  • Better thermal headroom — critical as power-per-port continues climbing
  • Superior signal integrity — wider pin spacing reduces crosstalk at 100G/lane speeds

The trade-off is port density. OSFP’s larger footprint means fewer ports per 1U switch compared to QSFP-DD. But for AI networking where thermal management and signal integrity are paramount, OSFP has become the clear winner for new deployments.

The Eight-Lane Architecture

Every 800G OSFP module uses eight electrical lanes, each running at 106.25 Gb/s (53.125 Gbaud with PAM4, 4-level Pulse Amplitude Modulation). This creates some interesting deployment flexibility:

  • 8×100G — Full 800G single-port operation
  • 2×4×100G — Twin-port operation (two independent 400G ports)
  • Various breakout configs — 2×400G, 4×200G, or 8×100G depending on cabling

This lane architecture is identical between IHS and RHS modules. The difference isn’t electrical — it’s entirely thermal and mechanical.
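
The lane arithmetic is easy to sanity-check in code. Here's a minimal sketch (illustrative Python, not vendor tooling) that enumerates the port modes above and confirms each one consumes exactly eight 100G lanes:

```python
# Illustrative model of the OSFP eight-lane electrical interface.
# The modes mirror the list above; this is a sketch, not vendor code.

LANES = 8
GBPS_PER_LANE = 100  # 100G effective per lane with PAM4

MODES = {
    "8x100G single-port": (1, 8),  # one 800G port
    "2x4x100G twin-port": (2, 4),  # two independent 400G ports
    "4x200G breakout":    (4, 2),
    "8x100G breakout":    (8, 1),
}

for name, (ports, lanes_per_port) in MODES.items():
    assert ports * lanes_per_port == LANES  # every mode uses all eight lanes
    print(f"{name}: {ports} x {lanes_per_port * GBPS_PER_LANE}G "
          f"= {ports * lanes_per_port * GBPS_PER_LANE}G aggregate")
```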

Part 2: OSFP-IHS — Integrated Heat Sink Architecture

Physical Structure of OSFP-IHS

OSFP-IHS modules integrate the heat sink directly into the transceiver housing. When you look at an IHS module, you see aluminum or copper fins rising from the top surface — this is the thermal solution, built into the module itself.

Key physical characteristics:

| Specification | Value |
| --- | --- |
| Total module height | 13-21mm (including fins) |
| Heat sink type | Integrated aluminum/copper fins |
| Thermal interface | Air-cooled, front-to-back airflow |
| Cage type | Standard OSFP cage |
| Also known as | Finned-top OSFP, closed-top OSFP |

The integrated fins interact directly with the airflow moving through your switch or chassis. When properly designed, this creates efficient convective heat transfer without requiring any thermal engineering on the host side — just ensure adequate airflow.

How IHS Thermal Management Works

The OSFP MSA Rev 5.22 specification defines airflow-impedance curves for IHS modules. Here’s the basic physics:

  1. Heat generation — The DSP (Digital Signal Processor), laser driver, and optical components generate 12-17W for typical gray optics, up to 30W for coherent modules
  2. Conductive transfer — Heat moves from components through the module housing to the integrated heat sink
  3. Convective dissipation — Front-to-back airflow (typically 200-400 LFM) carries heat away from the fins
  4. Exhaust — Hot air exits through the rear of the switch

The beauty of IHS is simplicity: the module handles its own thermal management. The host equipment just needs to provide airflow — no cold plates, no thermal interface materials, no riding heat sinks to align.

IHS Variants: Open-Top vs. Closed-Top

Within the IHS family, you’ll encounter two sub-variants:

Open-top IHS exposes the fins directly to airflow. Maximum thermal performance, but the fins are vulnerable to physical damage during handling.

Closed-top IHS encloses the heat sink with a smooth top surface while maintaining internal fin structures. Slightly reduced thermal performance but better mechanical protection. Most production modules use this design.

Both variants are mechanically compatible with standard OSFP cages.

[Figure: OSFP-IHS vs OSFP-RHS physical architecture, showing the integrated heat sink and riding heat sink cooling designs.]

Part 3: OSFP-RHS — Riding Heat Sink Architecture

Physical Structure of OSFP-RHS

OSFP-RHS takes the opposite approach: the module itself contains no integrated heat sink. Instead, it presents a flat top surface designed to mate with a host-provided thermal solution.

Key physical characteristics:

| Specification | Value |
| --- | --- |
| Module height | 9.5mm (standardized flat top) |
| Heat sink type | None integrated — host-provided |
| Thermal interface | TIM (Thermal Interface Material) to riding heat sink or cold plate |
| Cage type | OSFP-RHS specific cage |
| Also known as | Flat-top OSFP |

That 9.5mm standardized height is critical. It’s roughly half the height of IHS modules, enabling higher port density in space-constrained environments like PCIe adapter cards.

How RHS Thermal Management Works

RHS shifts thermal engineering responsibility from the module to the host system:

  1. Heat generation — Same as IHS: 12-17W typical, up to 30W for high-power modules
  2. Conductive transfer — Heat moves to the flat top surface of the module
  3. TIM interface — Thermal interface material (paste, pad, or phase-change) bridges the module top to the riding heat sink or cold plate
  4. Host-side dissipation — The riding heat sink handles convective cooling (air) or the cold plate handles liquid cooling

This architecture enables two significant advantages:

Liquid cooling compatibility — The flat top surface creates perfect contact for cold plates. As data center liquid cooling adoption accelerates (the market is projected to reach $17.77 billion by 2030, growing at 21.6% CAGR according to Grand View Research), RHS becomes increasingly strategic.

Unified thermal management — In systems like NICs and DPUs, the transceiver shares a thermal solution with the host silicon. One heat sink or cold plate covers everything, reducing complexity and enabling integrated thermal design.

The RHS Cage Difference

Here’s the critical compatibility point: RHS modules require RHS-specific cages. The OSFP MSA specifies different positive stop geometries between standard OSFP (IHS) and OSFP-RHS cages. These mechanical keys physically prevent cross-insertion:

  • IHS modules are too tall to fit in RHS cages
  • RHS modules would have inadequate thermal contact in standard cages (no riding heat sink)

This is intentional. The specification designers knew that mixing form factors would create thermal failures, so they made the mistake mechanically impossible.

Part 4: IHS vs RHS — The Complete Comparison

[Figure: OSFP-IHS vs OSFP-RHS comparison covering cooling, height, power, and use cases.]

Let’s put everything side-by-side:

| Characteristic | OSFP-IHS | OSFP-RHS |
| --- | --- | --- |
| Module height | Variable (integrated fins) | 9.5mm |
| Heat sink | Integrated fins | Host-provided |
| Cooling method | Air-cooled (front-to-back) | Air or liquid (host-dependent) |
| Cage compatibility | Standard OSFP | OSFP-RHS only |
| Primary use case | Switches | NICs, DPUs, liquid-cooled systems |
| Port density | Lower (taller modules) | Higher (shorter modules) |
| Thermal engineering | Module handles it | Host handles it |
| Liquid cooling ready | No (fins block contact) | Yes (flat top for cold plates) |
| Max power (current) | 33W (OSFP800) | 33W (OSFP800) |
| Max power (1.6T) | 42.9W | 42.9W |

Electrical and Optical: Identical

Everything below the heat sink is the same:

  • Bandwidth: 800Gbps (8×100G PAM4)
  • Modulation: PAM4 at 53.125 Gbaud (106.25 Gb/s) per lane
  • Standards: IEEE 802.3df (800GBASE-SR8, 800GBASE-DR8), with IEEE 802.3ck 100G-per-lane electrical interfaces
  • Management: CMIS 5.x (Common Management Interface Specification)
  • Connector: Same electrical connector and pinout

You can think of IHS and RHS as the same transceiver with different hats. The optical and electrical performance is identical — only the thermal architecture differs.

Part 5: Equipment Compatibility for 800G OSFP IHS vs RHS — What Works With What

This is where theory meets reality. Let’s map form factors to actual equipment.

NVIDIA Switches: IHS Required

Spectrum-4 Ethernet Switches (SN5600 Series)

The SN5600, SN5600D, and SN5610 deliver 51.2 Tbps using 64 twin-port OSFP cages. These switches exclusively accept IHS (finned-top) transceivers.

Compatible modules:

  • MMA4Z00-NS: 800G 2×SR4/SR8, 50m reach on OM4 MMF
  • MMS4X00-NM: 800G 2×DR4, 500m reach on SMF
  • MMS4X00-NS400: 800G 2×FR4, 2km reach on SMF

Power consumption: 15-17W per twin-port transceiver. The twin-port configuration doubles effective density — each OSFP cage delivers two independent 400G ports.

Quantum-X800 InfiniBand Switches (QM3400, QM3200)

Supporting XDR 800Gb/s InfiniBand, these switches also require IHS transceivers:

  • QM3400: 72 twin-port OSFP cages, 115.2 Tb/s aggregate
  • QM3200: 64 twin-port OSFP cages, 102.4 Tb/s aggregate

Compatible XDR transceivers:

  • MMS4A00-XM: 800G twin-port 2×DR4 (1.6T aggregate per cage), IHS

NVIDIA NICs and DPUs: RHS Required

ConnectX-7 OSFP

Single-port OSFP cage supporting 400G Ethernet or NDR InfiniBand. Requires RHS (flat-top) modules.

ConnectX-8 SuperNIC (C8180)

NVIDIA’s latest NIC supports 800G XDR InfiniBand or 2×400GbE through a single OSFP-RHS cage.

Compatible modules:

  • MMS4A20-XM800: 800G single-port DR4, 500m SMF, RHS
  • MMA4A00-XS800: 800G SR8, 50m MMF, RHS

The PCIe card form factor cannot accommodate the height of IHS modules — RHS is mandatory.

BlueField-3 DPUs

Note: BlueField-3 uses QSFP112 form factor, not OSFP. This maintains compatibility with existing 400G NDR infrastructure while supporting the DPU’s integrated processing architecture.

Quick Reference Matrix

| Equipment | Form Factor | Cage Type | Notes |
| --- | --- | --- | --- |
| Spectrum-4 SN5600 | IHS only | Standard OSFP | Twin-port, air-cooled |
| Spectrum-4 SN5610 | IHS only | Standard OSFP | Twin-port, air-cooled |
| Quantum-X800 QM3400 | IHS only | Standard OSFP | 72 twin-port cages |
| Quantum-X800 QM3200 | IHS only | Standard OSFP | 64 twin-port cages |
| ConnectX-7 OSFP | RHS only | OSFP-RHS | Single-port |
| ConnectX-8 SuperNIC | RHS only | OSFP-RHS | Single-port, 800G |
| BlueField-3 DPU | N/A | QSFP112 | Not OSFP |
[Figure: NVIDIA 800G OSFP equipment compatibility matrix showing IHS vs RHS requirements.]

Part 6: 800G Module Types and Reach Specifications

Choosing between IHS and RHS is just the first decision. Next: which optical variant do you need?

The 800G Module Family

| Module Type | Reach | Fiber | Connector | Power | IHS/RHS | Primary Use Case |
| --- | --- | --- | --- | --- | --- | --- |
| SR8 | 50-100m | MMF (OM4/OM5) | Dual MPO-12/APC | 12-16W | Both | Intra-rack, ToR |
| DR8 | 500m | SMF (G.652) | Dual MPO-12/APC | 14-17W | Both | Campus backbone, building interconnect |
| 2×FR4 | 2km | SMF | Dual LC Duplex | 13-14.5W | Both | Building interconnect, campus |
| 2×LR4 | 10km | SMF | Dual LC Duplex | 15-18W | IHS preferred | Metro, campus backbone |
| ZR | 80km | SMF (coherent) | LC Duplex | 25-28W | IHS only | DCI, metro DWDM |
| ZR+ | 120km+ | SMF (coherent) | LC Duplex | 27-30W | IHS only | Extended DCI, regional |

Understanding the Naming Convention

The module names encode key information:

  • SR = Short Reach (multimode fiber)
  • DR = Data center Reach (500m single-mode)
  • FR = Fiber Reach (2km)
  • LR = Long Reach (10km)
  • ZR = “Z” Reach (coherent, 80km+)

The number indicates lanes: 8 means 8×100G (800G aggregate), 4 means 4×100G per port (twin-port modules use 2×FR4 or 2×LR4 naming for two 400G ports).
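
If it helps to see the convention mechanically, here's a small decoder (our own sketch; the reach classes and lane math come from the rules above, but the parser itself is hypothetical, using ASCII "2x" for the twin-port prefix):

```python
# Hypothetical decoder for the naming convention described above.
# Reach classes and lane counts are from this guide; the parsing
# logic is illustrative, not an industry utility.

REACH_CLASSES = {
    "SR": "Short Reach (multimode fiber)",
    "DR": "Data center Reach (500m single-mode)",
    "FR": "Fiber Reach (2km)",
    "LR": "Long Reach (10km)",
}

def decode(name: str) -> str:
    twin = name.startswith("2x")   # twin-port names: two independent ports
    body = name[2:] if twin else name
    reach, lanes = body[:2], int(body[2:])
    ports = 2 if twin else 1
    gbps = ports * lanes * 100     # 100G per lane
    return f"{name}: {REACH_CLASSES[reach]}, {ports} port(s) x {lanes} lanes = {gbps}G"

print(decode("DR8"))    # 1 port x 8 lanes = 800G
print(decode("2xFR4"))  # 2 ports x 4 lanes = 800G
```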

Connector Types Explained

Dual MPO-12/APC (SR8, DR8): Uses two 12-fiber MPO connectors with Angled Physical Contact polish. Eight fibers active per direction (8 Tx, 8 Rx).

Dual LC Duplex (FR4, LR4): Uses two standard LC duplex connectors. Each carries a 400G signal via CWDM4 wavelengths internally. (Coherent ZR and ZR+ modules use a single LC duplex port, per the table above.)

The connector choice affects cabling infrastructure. MPO requires structured cabling with MPO trunk cables and cassettes; LC duplex uses traditional patch panels and fiber management.

[Figure: 800G OSFP module reach guide showing distance ranges from DAC to ZR+ coherent optics.]

Part 7: Breakout Configurations and Cabling Strategies

One of 800G OSFP’s most powerful features is breakout flexibility. A single 800G module can connect to multiple lower-speed ports, enabling gradual infrastructure migration.

Twin-Port vs Single-Port Modules

Twin-port modules (like 2×DR4) present two independent 400G ports through one OSFP cage. Each port has its own MAC address and can connect to separate destinations. This effectively doubles switch port count without increasing cage count.

Single-port modules (like DR8) present one 800G port. These support breakout through cabling, not through the module itself.

800G Breakout Options

| Source Module | Breakout Config | Cable Type | Target Ports |
| --- | --- | --- | --- |
| 800G DR8 | 2×400G-DR4 | MPO-16 to 2× MPO-12 | Two 400G switches |
| 800G DR8 | 8×100G-DR1 | MPO-16 to 8× LC duplex | Eight 100G ports |
| 800G SR8 | 2×400G-SR4 | MPO-16 to 2× MPO-12 | Two 400G SR4 ports |
| 800G SR8 | 8×100G-SR1 | MPO-16 to 8× LC duplex | Eight 100G ports |
| 800G 2×FR4 | N/A (already twin-port) | Direct LC duplex | Two 400G FR4 ports |

When to Breakout 800G

Migration scenarios: You’re upgrading spine switches to 800G but leaf switches remain at 400G. Use 800G DR8 modules with breakout cables to connect one spine port to two leaf switches.

Mixed-generation infrastructure: Your existing 100G servers need connectivity to new 800G fabric. 8×100G breakout from a single 800G port serves eight servers.

Density optimization: Rather than dedicating separate 400G ports, use twin-port 800G modules to double effective port count per switch.

Cabling Best Practices

  1. Plan for structured cabling — MPO trunk cables with modular cassettes provide flexibility for breakout configurations
  2. Mind polarity — MPO has specific polarity requirements (Type-A, Type-B, Type-C). Mismatched polarity causes link failures
  3. Label everything — Breakout cables create many-to-one relationships that become confusing without clear labeling (see the sketch after this list)
  4. Consider bend radius — MPO cables have larger minimum bend radius than LC duplex; plan cable routing accordingly
  5. Test before deployment — Validate breakout configurations in lab before production rollout
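
To make best practice #3 concrete: breakout fan-outs are where labeling pays off. Here's a minimal sketch of a label generator, assuming a hypothetical `spine:cage/lane -> destination` naming scheme (adapt the format to your own labeling standard):

```python
# Illustrative helper (not vendor tooling): generate one label per
# fan-out leg of an 8x100G breakout, so every cable end maps back to
# its source cage and lane. The naming scheme is an assumption.

def breakout_labels(spine: str, cage: int, targets: list[str]) -> list[str]:
    labels = []
    for lane, target in enumerate(targets, start=1):
        labels.append(f"{spine}:OSFP{cage}/L{lane} -> {target}")
    return labels

# Example: one spine port serving eight hypothetical servers in rack 12.
servers = [f"rack12-srv{n}:eth0" for n in range(1, 9)]
for label in breakout_labels("spine01", 3, servers):
    print(label)
```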

Part 8: Linear Pluggable Optics (LPO) — The Power Efficiency Revolution

LPO represents the most significant efficiency improvement in transceiver technology since PAM4 modulation. Understanding LPO is essential for forward-looking 800G deployments.

What LPO Changes

Traditional 800G modules include a DSP (Digital Signal Processor) that handles:

  • Signal retiming and equalization
  • Forward Error Correction (FEC) encoding/decoding
  • Chromatic dispersion compensation
  • Channel conditioning

This DSP consumes 6-8W — roughly half the module’s total power budget. LPO eliminates the DSP entirely, shifting signal conditioning to the host switch’s SerDes. The module retains only analog components: TIA (Trans-Impedance Amplifier) with CTLE (Continuous Time Linear Equalization) and linear drivers.

The Numbers

| Metric | DSP Module | LPO Module | Improvement |
| --- | --- | --- | --- |
| Power consumption | 14-17W | 7-8.5W | 40-50% reduction |
| Latency | 8-10ns | <3ns | 5-7ns reduction |
| Component count | Higher (DSP silicon) | Lower (analog only) | Simplified |
| Heat dissipation | Higher | Lower | Easier thermal management |
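
A back-of-envelope calculation shows what those per-module numbers mean at fabric scale. This sketch uses midpoints of the ranges in the table above; the 64-cage switch is an assumed example, not a specific product:

```python
# Rough fabric-level estimate of LPO power savings (illustrative only;
# watt figures are midpoints of the ranges quoted in the table above).

CAGES = 64          # assumed: a 51.2T switch with 64 OSFP cages
DSP_WATTS = 15.5    # midpoint of 14-17W
LPO_WATTS = 7.75    # midpoint of 7-8.5W

saved_per_switch = CAGES * (DSP_WATTS - LPO_WATTS)
print(f"Per fully populated switch: ~{saved_per_switch:.0f}W saved")   # ~496W
print(f"Across 100 switches: ~{saved_per_switch * 100 / 1000:.1f}kW saved")
```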

LPO Requirements and Limitations

Host compatibility: LPO requires switch silicon with advanced SerDes capable of handling raw optical signals. Compatible platforms include:

  • Broadcom Tomahawk 5 (51.2 Tbps) — LPO ready
  • Broadcom Tomahawk 6 (102.4 Tbps) — Full LPO support
  • NVIDIA Spectrum-4 — LPO compatible (varies by module)
  • NVIDIA Spectrum-5 (expected) — Enhanced LPO support

Reach limitations: Without DSP compensation for chromatic dispersion, LPO performs best at reaches under 2km. For longer reaches (LR4, ZR), DSP modules remain necessary.

LRO hybrid: Linear Receive Optics (LRO) offers a middle ground — DSP on transmit, linear on receive. This provides ~25% power savings with better interoperability characteristics.

LPO in IHS and RHS

LPO is available in both form factors. The power reduction benefits both:

  • IHS: Lower power means less heat to dissipate through fins, reduced airflow requirements
  • RHS: Lower power reduces cold plate capacity requirements, enables higher density

For new AI data center deployments with short-reach requirements (ToR to leaf, leaf to spine within building), LPO should be the default choice regardless of form factor.

Part 9: The Decision Framework — Selecting Your Form Factor

Let’s synthesize everything into a practical decision process.

Step 1: Identify Your Equipment Category

The first question is binary:

Are you populating switches? → IHS is almost certainly required. Verify with the switch datasheet, but air-cooled spine/leaf switches universally use standard OSFP cages designed for IHS modules.

Are you populating NICs, DPUs, or adapter cards? → RHS is almost certainly required. The PCIe card form factor cannot accommodate IHS module height.

Step 2: Verify Cage Specifications

Don’t assume — confirm. Check the equipment documentation for explicit cage type:

  • “OSFP cage” or “Standard OSFP” → IHS compatible
  • “OSFP-RHS cage” or “Flat-top OSFP” → RHS required

When in doubt, contact the equipment vendor. A five-minute clarification prevents a five-week procurement delay.
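
Steps 1 and 2 reduce to a simple rule you can encode. A minimal sketch follows (illustrative only; when a cage description doesn't match cleanly, defer to the datasheet or vendor):

```python
# Illustrative encoding of Steps 1-2: map a documented cage type to the
# required module form factor. Not a substitute for the datasheet.

def required_form_factor(cage_description: str) -> str:
    cage = cage_description.lower()
    if "rhs" in cage or "flat-top" in cage:
        return "RHS"    # OSFP-RHS cage: NICs, DPUs, adapter cards
    if "osfp" in cage:
        return "IHS"    # standard OSFP cage: air-cooled switches
    raise ValueError(f"Ambiguous cage type {cage_description!r}: ask the vendor")

print(required_form_factor("Standard OSFP"))   # IHS
print(required_form_factor("OSFP-RHS cage"))   # RHS
```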

Step 3: Assess Current and Future Cooling Infrastructure

If you’re committed to air cooling through 2027, IHS in switches and RHS in NICs is your path. Ensure adequate front-to-back airflow for switch deployments — plan for 15-17W per 800G port, scaling to 25-30W at 1.6T.

If liquid cooling is in your roadmap, weight your infrastructure decisions toward RHS compatibility where possible. Cold plate integration is becoming standard in high-density AI deployments; RHS modules are designed for it.

Step 4: Determine Reach Requirements

Map your connectivity needs to module types:

| Connection Type | Typical Distance | Recommended Module |
| --- | --- | --- |
| ToR to server | <3m | DAC (zero power) |
| Cross-rack | 3-10m | AEC (6-12W, linear) |
| ToR to leaf | 10-100m | SR8 or AOC |
| Leaf to spine (same building) | 100-500m | DR8 |
| Building to building | 500m-2km | FR4 |
| Campus backbone | 2-10km | LR4 |
| Metro DCI | 10-80km | ZR (coherent) |
| Regional DCI | 80-120km+ | ZR+ (coherent) |
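
The same table works as a lookup. Here's a sketch (distance thresholds taken directly from the table above; boundary cases deserve engineering judgment rather than blind automation):

```python
# Illustrative reach-to-media lookup based on the table above.
# Thresholds are upper bounds in meters for each recommendation.

RECOMMENDATIONS = [
    (3, "DAC (zero power)"),
    (10, "AEC (6-12W, linear)"),
    (100, "SR8 or AOC"),
    (500, "DR8"),
    (2_000, "FR4"),
    (10_000, "LR4"),
    (80_000, "ZR (coherent)"),
    (120_000, "ZR+ (coherent)"),
]

def recommend(distance_m: float) -> str:
    for limit, module in RECOMMENDATIONS:
        if distance_m <= limit:
            return module
    return "Beyond ZR+ reach: consider a DWDM line system"

print(recommend(50))      # SR8 or AOC
print(recommend(1_500))   # FR4
```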

Step 5: Evaluate LPO Feasibility

For reaches under 2km with LPO-compatible host equipment:

  • Specify LPO modules (not DSP)
  • Plan for 7-8.5W power instead of 14-17W
  • Benefit from reduced latency (<3ns vs 8-10ns)

For reaches over 2km or non-LPO hosts:

  • Specify standard DSP modules
  • Plan thermal budget for full power consumption

Step 6: Consider 1.6T Upgrade Path

Both IHS and RHS support the 1.6T roadmap:

  • OSFP1600: 8×200G PAM4, backward compatible with current OSFP cages
  • OSFP-XD: 16×100G PAM4, new cage design (not backward compatible)

If you're building infrastructure today that must support 1.6T by 2027:

  • Standard OSFP cages (IHS) will accept OSFP1600 modules
  • OSFP-RHS cages will accept RHS variants of OSFP1600
  • OSFP-XD requires new cage hardware regardless

[Figure: The 800G OSFP deployment decision matrix.]

Part 10: Implementation Best Practices

Pre-Deployment Checklist

Infrastructure verification:

  [ ] Confirm cage types in all target equipment
  [ ] Validate airflow capacity (IHS) or cold plate specifications (RHS)
  [ ] Verify fiber type matches module requirements (MMF vs SMF)
  [ ] Confirm connector compatibility (MPO vs LC)
  [ ] Check switch SerDes compatibility for LPO modules

Procurement:

  [ ] Specify exact part numbers including form factor (IHS/RHS)
  [ ] Verify TAA/NDAA compliance if required for your deployment
  [ ] Confirm lead times and order accordingly (plan for 12-24 weeks typical)
  [ ] Order 10-15% spares for failure replacement

Testing:

  [ ] Lab validation before production deployment
  [ ] Verify optical power levels and link integrity
  [ ] Test breakout configurations if planned
  [ ] Validate CMIS management interface access (see the sketch below)
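
For that last testing item, a minimal CMIS sanity check can look like the sketch below. How you reach module memory is platform-specific (BMC, I2C bus, or a switch NOS utility), so `read_bytes` is a hypothetical placeholder; the byte offsets follow the CMIS lower-page layout (byte 0 identifier, bytes 14-15 temperature), but verify them against your module's CMIS revision:

```python
# Sketch of a CMIS sanity check. `read_bytes` is a hypothetical
# placeholder for your platform's module-memory access method.

def read_bytes(offset: int, length: int) -> bytes:
    raise NotImplementedError("wire this to your platform's I2C/NOS access")

def cmis_sanity_check() -> None:
    ident = read_bytes(0, 1)[0]        # CMIS lower page, byte 0: Identifier
    assert ident == 0x19, f"not an OSFP module (identifier=0x{ident:02x})"
    msb, lsb = read_bytes(14, 2)       # bytes 14-15: module temperature monitor
    temp_c = int.from_bytes(bytes((msb, lsb)), "big", signed=True) / 256
    print(f"OSFP module present, temperature {temp_c:.1f} C")
```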

Common Mistakes to Avoid

  1. Ordering IHS for NIC deployments — The most common error. The modules physically won’t fit.
  2. Ignoring thermal budgets — Calculating aggregate rack power without accounting for transceiver heat contribution. Each 800G port adds 12-17W.
  3. Mixing fiber types — DR8 modules require single-mode fiber. Connecting to multimode infrastructure will not work.
  4. Skipping interoperability testing — MSA defines compatibility, but implementations vary. Test specific module/switch combinations before production.
  5. Underestimating lead times — Major vendors quote 12-24+ weeks. Plan procurement accordingly.
  6. Forgetting polarity — MPO cables have polarity requirements. Mismatched polarity causes link failures.

Thermal Budget Planning

For a typical 1U 32-port 800G switch:
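
Using this guide's own per-port figures as a rough sketch: 32 DSP-based modules at 12-17W each contribute roughly 384-544W of transceiver heat before you count the switch ASIC, fans, and power-conversion losses; with LPO modules at 7-8.5W, the same faceplate drops to roughly 224-272W. Budget airflow for the worst case, not the average, and leave headroom for the 1.6T modules that will eventually ride in the same cages.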
