The High-Speed Interconnect Platform for the Future
Unmatched throughput with 64 GT/s scalable lane architectures
Standards-compliant PHY + Controller + CXL 3.0 support
Portable, modular IP optimized for FPGA prototyping and ASIC transition
ARCHITECTURE OVERVIEW
A Complete, Modular Interconnect Architecture
Our PCIe & CXL IP suite is engineered as a modular, standards-aligned architecture that supports fast integration, portable scaling, and deterministic verification across FPGA and ASIC platforms.
Building Blocks
- PCIe 6/7 Controller IP — Transaction, Data Link, and physical-coding layers
- PCIe 6/7 PHY IP — PAM4 encoding, equalization, link training, LTSSM
- CXL 3.0 Controller IP — CXL.io, CXL.cache, CXL.mem protocols
- Subsystem Utilities — DMA engines, BAR management, lane aggregation, error handling
- Verification Environment — Compliance testbench, coverage agents, protocol monitors
- Reference Platforms — FPGA test designs for AMD, Intel, Lattice
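The PHY block above includes the LTSSM (Link Training and Status State Machine), which sequences a link from electrical detection up to active operation. As an illustrative sketch only, the state names below follow the PCIe base specification, but the event names and transition table are hypothetical simplifications, not the product's actual RTL or API:

```python
# Simplified LTSSM sketch: a few spec-defined states driven by a
# hypothetical event -> next-state table (happy path + recovery loop).
from enum import Enum, auto

class LtssmState(Enum):
    DETECT = auto()          # look for a receiver on the link
    POLLING = auto()         # exchange training sequences
    CONFIGURATION = auto()   # negotiate lane/link numbers
    L0 = auto()              # normal, fully trained operation
    RECOVERY = auto()        # retrain after a link error

TRANSITIONS = {
    (LtssmState.DETECT, "receiver_detected"): LtssmState.POLLING,
    (LtssmState.POLLING, "ts_exchange_done"): LtssmState.CONFIGURATION,
    (LtssmState.CONFIGURATION, "link_configured"): LtssmState.L0,
    (LtssmState.L0, "link_error"): LtssmState.RECOVERY,
    (LtssmState.RECOVERY, "retrain_done"): LtssmState.L0,
}

def step(state: LtssmState, event: str) -> LtssmState:
    """Advance the LTSSM; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

# Walk a link-up sequence, then an error/recovery cycle back to L0.
state = LtssmState.DETECT
for ev in ["receiver_detected", "ts_exchange_done",
           "link_configured", "link_error", "retrain_done"]:
    state = step(state, ev)
print(state.name)  # L0
```

The table-driven form mirrors how an LTSSM is typically specified: states and transition triggers are data, so adding substates or timeout arcs does not change the stepping logic.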
Verified for Standards Compliance & Interoperability
Our verification methodology integrates constrained-random testing, coverage-driven validation, and automated YAML-based test flows to ensure protocol correctness and interoperability across diverse platforms.
Full PCI-SIG Compliance
Validated for LTSSM transitions, link training, equalization, encoding, and error recovery.
CXL 3.0 Protocol Coverage
Verified for CXL.io, cache/mem flows, coherency, ordering, and transaction-level correctness.
FPGA-Proven Across Vendors
Tested on multiple FPGA families for portability and predictable ASIC migration.
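The constrained-random, coverage-driven flow described above can be sketched in a few lines. This is an illustrative toy, not the product's testbench: the TLP field names, constraints, and coverage goal are hypothetical simplifications of a PCIe transaction-layer header:

```python
# Constrained-random stimulus with simple coverage bins: randomize a
# TLP descriptor under constraints, then count which request types
# were exercised to check a coverage goal.
import random
from collections import Counter

random.seed(0)  # reproducible stimulus

TLP_TYPES = ["MRd", "MWr", "CfgRd", "CfgWr"]

def random_tlp():
    """Generate one randomized TLP descriptor under simple constraints."""
    tlp_type = random.choice(TLP_TYPES)
    # Constraint: config requests carry a single dword; memory
    # requests may carry up to 1024 dwords.
    length = 1 if tlp_type.startswith("Cfg") else random.randint(1, 1024)
    return {"type": tlp_type, "length": length}

# Coverage bins: how often each request type appeared in 1000 items.
coverage = Counter(random_tlp()["type"] for _ in range(1000))

# Coverage goal: every request type hit at least once.
assert set(coverage) == set(TLP_TYPES)
print(dict(coverage))
```

Real flows (e.g. UVM with SystemVerilog covergroups) add cross-coverage and closure automation, but the loop is the same: constrain, randomize, sample bins, iterate until the goals close.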
REFERENCE DESIGNS & TOOL FLOWS
Accelerate Prototyping with Ready Reference Designs & Tool Flows
JESD204C @ 32 Gbps per lane
Deterministic latency with subclass support.
100G UDP Stack
Line-rate packet processing with FPGA-optimized microarchitectures.
ARINC 818-2 / 818-3
Verified video transport up to 12 Gbps with programmable VCID mapping.
USE CASES
Built for High-Performance Systems
Data Center & HPC
Ultra-low-latency interconnects for high-performance compute clusters.
AI Accelerators & ML Engines
High-bandwidth link interfaces for GPU/TPU-class architectures.
SmartNICs & DPUs
Offload engines for programmable networking devices using PCIe/CXL links.
Storage & Memory Expansion
CXL-based pooling, tiered memory, and composable disaggregated architectures.