5G OAI Neural Receiver Testbed with USRP X410
Application Note Number and Authors
AN-829
Authors
Bharat Agarwal and Neel Pandeya
Executive Summary
Overview
This Application Note presents a practical, system-level benchmarking platform that leverages NI USRP software-defined radios (SDRs) and the OpenAirInterface (OAI) 5G/NR stack to evaluate AI-enhanced wireless receivers in real time. It addresses one of the key challenges in deploying AI/ML at the physical layer: ensuring reliable system performance under real-time constraints.
Motivation and Context
AI and ML techniques hold promise for improving both wireless and non-wireless KPIs across the stack, from core-level optimizations (e.g., load balancing, power savings) to tightly timed PHY/MAC innovations such as:
- ML-based digital predistortion to improve power efficiency.
- Neural receivers for channel estimation and symbol detection with improved SNR tolerance.
- Intelligent beam and positioning prediction, even under fast channel dynamics.
Standards bodies and consortia such as 3GPP (Releases 18 and 19) and the O-RAN Alliance are actively defining how AI/ML can be incorporated into future cellular network standards.
Neural Receiver Model
We demonstrate a real-time implementation of a neural receiver based on a published model architecture called DeepRX, which replaces the traditional OFDM receiver blocks (channel estimation, interpolation, equalization, and detection) with a single neural network that treats the received time-frequency grid as image-like input. Model training and validation are performed using the NVIDIA Sionna link-level simulator, and training data is stored in the open SigMF format for reproducibility.
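To make this style of architecture concrete, below is a minimal TensorFlow/Keras sketch of a DeepRX-style residual convolutional network. It is illustrative only, not the trained model used in this testbed: the layer count, filter width, and input dimensions (14 OFDM symbols by 48 subcarriers, with real/imaginary parts and pilot information stacked as input channels) are assumptions chosen for readability.

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters=32):
    # Two dilated convolutions over the time-frequency grid plus a skip connection.
    y = layers.Conv2D(filters, 3, padding="same", dilation_rate=2)(x)
    y = layers.ReLU()(y)
    y = layers.Conv2D(filters, 3, padding="same", dilation_rate=2)(y)
    return layers.ReLU()(layers.Add()([x, y]))

def build_neural_receiver(num_symbols=14, num_subcarriers=48,
                          in_channels=4, bits_per_symbol=4,
                          num_blocks=4, filters=32):
    # Map the received resource grid directly to one LLR per transmitted bit
    # on each resource element, replacing estimation/equalization/detection.
    grid = tf.keras.Input(shape=(num_symbols, num_subcarriers, in_channels))
    x = layers.Conv2D(filters, 3, padding="same")(grid)
    for _ in range(num_blocks):
        x = residual_block(x, filters)
    llrs = layers.Conv2D(bits_per_symbol, 3, padding="same")(x)
    return tf.keras.Model(grid, llrs)

model = build_neural_receiver()
model.summary()
```

As in the published DeepRX work, the network outputs log-likelihood ratios (LLRs) that feed the standard channel decoder, so only the inner receiver is replaced.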
More information about the SigMF file format can be found on the project website, on the Wikipedia page, and on the GitHub page.
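As an illustration of how captures can be stored in SigMF, the sketch below writes an IQ recording as a raw data file plus a JSON metadata file. All field values (sample rate, carrier frequency, description) are placeholders, not the recordings used for training.

```python
import json
import numpy as np

# Placeholder capture: 4096 complex baseband samples (random, not testbed data).
samples = (np.random.randn(4096) + 1j * np.random.randn(4096)).astype(np.complex64)
samples.tofile("capture.sigmf-data")  # interleaved float32 I/Q

metadata = {
    "global": {
        "core:datatype": "cf32_le",   # complex float32, little-endian
        "core:sample_rate": 30.72e6,  # placeholder sample rate
        "core:version": "1.0.0",
        "core:description": "Uplink capture for neural receiver training",
    },
    "captures": [
        {"core:sample_start": 0, "core:frequency": 3.6e9}  # placeholder carrier
    ],
    "annotations": [],
}

with open("capture.sigmf-meta", "w") as f:
    json.dump(metadata, f, indent=4)
```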
Real-Time Benchmarking Platform
To validate the performance of the neural receiver on real hardware, the prototype integrates:
- The OAI real-time 5G protocol stack (complete core, RAN, and UE) running on commodity CPUs.
- NI USRP SDR hardware as the RF front-end.
- Optional integration with an O-RAN Near-RT RIC (via FlexRIC).
- Neural receiver inference performed on a GPU (e.g., NVIDIA A100, RTX 4070, or RTX 4090), accessed through the TensorFlow RT C-API for seamless integration within OAI (see the inference sketch below).
This setup enables a direct comparison between the traditional receiver baseline and the neural receiver in an end-to-end real-time system.
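In the testbed itself the trained model is invoked from the OAI C code through the C-level API mentioned above; the Python sketch below only illustrates the per-slot inference-and-timing step, reusing build_neural_receiver() from the earlier sketch. The 100-iteration averaging and the slot-shaped dummy input are assumptions for illustration.

```python
import time
import numpy as np
import tensorflow as tf

# Reuses build_neural_receiver() from the earlier sketch (illustrative model).
model = build_neural_receiver()

# One uplink slot: batch of 1, 14 symbols x 48 subcarriers x 4 channels (assumed).
grid = tf.constant(np.zeros((1, 14, 48, 4), dtype=np.float32))

model(grid)  # warm-up call so one-time graph tracing is excluded from timing
start = time.perf_counter()
for _ in range(100):
    model(grid)
elapsed_us = (time.perf_counter() - start) / 100 * 1e6
print(f"mean inference latency: {elapsed_us:.0f} us")  # must fit the 500 us slot
```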
Benchmarking Results
Initial testing focuses on uplink performance across several MCS levels (MCS-11, MCS-15, and MCS-20 are highlighted in this document) and an SNR range of 5 dB to 18 dB, under a realistic fading channel profile (urban micro, 2 m/s mobility, 45 ns delay spread). Each measurement is averaged over 300 transport blocks.
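As a minimal illustration of this averaging procedure, the sketch below computes a mean BER over a set of transport blocks; the block count matches the text, but the block size and the synthetic bit errors are placeholders.

```python
import numpy as np

def average_ber(tx_blocks, rx_blocks):
    # Total bit errors divided by total bits, across all transport blocks.
    errors = sum(np.count_nonzero(tx != rx) for tx, rx in zip(tx_blocks, rx_blocks))
    total = sum(tx.size for tx in tx_blocks)
    return errors / total

# 300 transport blocks of 8448 bits each (the block size is an assumption).
rng = np.random.default_rng(0)
tx = [rng.integers(0, 2, 8448) for _ in range(300)]
rx = [np.where(rng.random(8448) < 1e-3, 1 - b, b) for b in tx]  # ~1e-3 flip rate
print(f"BER = {average_ber(tx, rx):.2e}")
```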
Some of the key findings are listed below.
- The neural receiver shows a clear Bit Error Rate (BER) advantage at lower MCS and lower SNR.
- At higher MCS levels, the performance gap narrows (a trade-off that merits further analysis).
- A reduced uplink bandwidth was used to meet the strict real-time latency requirement imposed by the 500 μs slot duration at 30 kHz SCS (the numerology is worked out after this list).
- The neural receiver model complexity was reduced by a factor of 15 (from about 700K to 47K parameters) to achieve real-time GPU inference.
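For reference, the 500 μs slot duration follows directly from the 5G NR numerology, in which the subcarrier spacing and slot length scale together:

```latex
\Delta f = 2^{\mu} \cdot 15\,\text{kHz}, \qquad
T_{\text{slot}} = \frac{1\,\text{ms}}{2^{\mu}}
\;\;\Rightarrow\;\;
\mu = 1:\ \Delta f = 30\,\text{kHz},\quad T_{\text{slot}} = 500\,\mu\text{s}
```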
These results underscore the critical balance among model complexity, latency, and performance in AI-enhanced wireless physical-layer deployments.