AI-Based Spectrum Sensing with Nvidia Jetson and USRP

From Ettus Knowledge Base
Revision as of 06:09, 5 November 2025 by NeelPandeya (Installing and Configuring the UHD Software)


Application Note Number and Authors

AN-811

Authors

Bharat Agarwal and Neel Pandeya

Executive Summary

This application note presents a complete framework for real-time spectrum sensing using NI Universal Software Radio Peripheral (USRP) Software-Defined Radios (SDRs) and NVIDIA Jetson or standard x86 compute platforms. The framework is not limited to a single USRP model—the X410, X310, and B2xx series (e.g., B206) can all be used as transmitters or receivers depending on the deployment scenario. The solution leverages the NI-RF Data Recording API to enable scalable RF data acquisition, SigMF-compliant metadata tagging, and seamless integration with machine learning workflows.

The document outlines three core usage scenarios:

  1. x86-Based Development Workflow: Using a workstation or server-class x86 machine, paired with high-end USRPs such as the X410 or X310, the system supports wideband spectrum sensing (up to 400 MHz instantaneous bandwidth per channel). This configuration is ideal for laboratory development, algorithm training, and high-throughput dataset generation.
  2. Jetson-Based Embedded Sensing (Primary Use Case): Using an NVIDIA Jetson platform as the host (e.g., AGX Orin) with a compact B206 SDR as receiver and an X410 as transmitter, the system delivers efficient edge inference with GPU acceleration. Although the B206 limits the instantaneous bandwidth to 56 MHz, this configuration emphasizes portability, low power, and real-time embedded operation.
  3. User-Defined Dataset Integration: In addition to live spectrum sensing, the framework supports integration and generation of user-defined datasets. This functionality extends the applicability of the system beyond real-time capture, enabling flexible experimentation, reproducibility, and seamless AI/ML dataset preparation. Two complementary capabilities are supported:
    1. SigMF Dataset Recording
      • All captured RF data is stored in the Signal Metadata Format (SigMF).
      • SigMF pairs raw IQ samples (.sigmf-data) with a corresponding metadata file (.sigmf-meta) in JSON format.
      • Metadata describes acquisition parameters such as frequency, bandwidth, gain, device type, timestamps, and scenario context.
      • Being human-readable and portable, SigMF datasets can be used across a wide range of software environments, making them ideal for wireless research, spectrum monitoring, AI/ML training for 6G, and regulatory validation.
      • Example: A spectrum sensing session at 3.5 GHz, 20 MHz bandwidth, and 10-second duration will result in a SigMF-compliant dataset ready for further processing or ML-based classification.
    2. Continuous Waveform Playback with User-Defined Files
      • The platform supports continuous transmission and replay of user-defined waveforms in TDMS or MATLAB (.mat) formats.
      • This allows testing with standard-compliant signals such as 5G NR, LTE, Radar, or Wi-Fi, or custom-designed waveforms.
      • By replaying predefined waveforms, researchers can benchmark algorithms, validate coexistence scenarios, and reproduce experiments consistently across testbeds.
      • Example: A MATLAB-generated LTE downlink frame can be continuously transmitted via an X410 while a B206 or X310 records the received signal in SigMF format for classification.
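To make the SigMF pairing concrete, a .sigmf-meta file for the 3.5 GHz / 20 MHz session mentioned above might look roughly like the following. The core field names come from the SigMF 1.0 specification; the specific values and optional fields shown here are illustrative, and the exact metadata written by the API may differ.

```json
{
  "global": {
    "core:datatype": "cf32_le",
    "core:sample_rate": 20000000.0,
    "core:version": "1.0.0",
    "core:hw": "NI USRP B206"
  },
  "captures": [
    {
      "core:sample_start": 0,
      "core:frequency": 3500000000.0,
      "core:datetime": "2025-01-01T00:00:00Z"
    }
  ],
  "annotations": []
}
```

The companion .sigmf-data file would then hold the raw interleaved IQ samples in the little-endian complex float32 layout declared by "core:datatype".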

Together, these capabilities ensure that the NI-RF Data Recording API can handle both dataset creation (SigMF-based recording) and waveform-driven experimentation (TDMS/MAT playback), thereby covering the entire pipeline from signal generation to ML-ready dataset production.

By combining NI's reliable SDR hardware with NVIDIA's efficient edge compute platforms and a unified data interface, this solution supports a wide range of spectrum intelligence applications—from interference detection and dynamic spectrum access to embedded RF analytics. The methodology enables scalable deployment from lab to field, supporting real-time insights and long-term data collection in a streamlined, modular pipeline.


USRP B206 Overview

NI USRP B206 Software Defined Radio

The USRP B206 is a compact, low-cost SDR developed by NI / Ettus Research. It supports full-duplex operation with one transmit and one receive channel, making it ideal for a variety of wireless communication and sensing applications. The B206 covers a wide RF frequency range from 70 MHz to 6 GHz and supports up to 56 MHz of instantaneous bandwidth. This makes it suitable for applications such as spectrum sensing, dynamic spectrum access, and cognitive radio.

The device connects to a host system via a high-speed USB 3.0 interface, which enables data rates sufficient for wideband real-time signal acquisition and transmission. It also supports USB 2.0 with reduced performance. The B206 includes a Xilinx Spartan-6 FPGA for onboard signal processing and is powered either through USB or an external DC supply, the latter being preferred for optimal RF performance.
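A quick back-of-envelope calculation shows why USB 3.0 is needed at the full 56 MS/s rate. The sketch below assumes the common sc16 over-the-wire format (16-bit I plus 16-bit Q, 4 bytes per sample) and uses rough, assumed figures for practical USB throughput.

```python
# Back-of-envelope link budget for streaming IQ samples over USB.
# Assumes the sc16 wire format (16-bit I + 16-bit Q = 4 bytes/sample);
# the practical USB throughput numbers are rough rules of thumb, not specs.

def stream_rate_bytes_per_s(sample_rate_sps: float, bytes_per_sample: int = 4) -> float:
    """Sustained byte rate needed to stream complex samples at the given rate."""
    return sample_rate_sps * bytes_per_sample

USB3_PRACTICAL = 400e6   # ~400 MB/s achievable on a good USB 3.0 link (assumption)
USB2_PRACTICAL = 35e6    # ~35 MB/s achievable on USB 2.0 (assumption)

full_rate = stream_rate_bytes_per_s(56e6)      # B206 at its maximum 56 MS/s
print(f"56 MS/s sc16 stream: {full_rate / 1e6:.0f} MB/s")
print(f"fits USB 3.0: {full_rate <= USB3_PRACTICAL}")
print(f"fits USB 2.0: {full_rate <= USB2_PRACTICAL}")
```

At 224 MB/s the full-rate stream fits comfortably within USB 3.0 but far exceeds USB 2.0, which is why USB 2.0 operation is limited to reduced sample rates.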

The USRP B206 is compatible with both x86 and ARM-based hosts, including embedded platforms like the NVIDIA Jetson series. This enables portable and energy-efficient deployment of spectrum sensing pipelines at the network edge. It is fully supported by the open-source UHD and integrates with popular SDR development tools such as GNU Radio, MATLAB, and LabVIEW.

Typical use cases for the B206 include real-time spectrum monitoring, wireless signal classification using machine learning, prototyping of 4G/5G systems, and SDR education and training. Its compact size and flexible software support make it an excellent choice for both laboratory research and embedded field deployments.

Key Features of the USRP B206:

  • RF Capabilities: 1 TX, 1 RX, independently tunable, RF transceiver, 70 MHz to 6 GHz, up to 56 MHz bandwidth
  • Programmable Logic: FPGA: Xilinx Spartan-6 XC6SLX150
  • Software: UHD 4.9 or later, GNU Radio, C/C++ and Python
  • Synchronization: REF (external 10 MHz or PPS reference)
  • Digital Interfaces: USB 3.0, GPIO (8 I/O lines with 3.3 V I/O voltage), and JTAG
  • Power, form factor: 5 V DC, 0.9 A maximum; Board-only: 84.3 mm × 51.0 mm × 8.7 mm; Enclosed: 84.9 mm × 55.7 mm × 19.8 mm

NI-RF Data Recording API Overview

The NI-RF Data Recording API is an open-source, Python-based framework developed by National Instruments (NI) in collaboration with the Genesys Lab at Northeastern University. It is designed to streamline RF data collection using NI USRP SDRs, with support for structured metadata via the Signal Metadata Format (SigMF).

Purpose and Scope

This API enables efficient recording, labeling, and replay of real-world RF signals. It is particularly suited for generating datasets used in AI/ML workflows, wireless research, and spectrum monitoring. The framework abstracts low-level UHD interactions, allowing users to define RF parameters through JSON or YAML configuration files.
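As a sketch of what such a configuration might contain, the YAML fragment below uses the receive parameters documented in the Key Configuration Parameters section later in this note. The authoritative schema is defined by the configuration templates shipped with the repository, so treat this only as an orientation aid.

```yaml
# Hypothetical receive-side configuration sketch; consult the templates
# bundled with the NI-RF Data Recording API for the authoritative schema.
freq: 3.6e9                                 # center frequency (Hz)
rate: 50e6                                  # IQ sample rate (Sps)
bandwidth: 20e6                             # analog bandwidth (Hz)
gain: 40                                    # RX gain (dB)
duration: 0.04                              # capture duration (s)
nrecords: 10                                # number of snapshots
antenna: "TX/RX"
clock_reference: "internal"
rx_recorded_data_path: "datasets/records/"
captured_data_file_name: "rx-waveform-td-rec-"
```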

Key Features

  • Support for both signal transmission and reception using NI USRP hardware.
  • Native recording in SigMF format, capturing both IQ samples and rich metadata.
  • Python-based, modular architecture supporting custom extensions and automation.
  • Multi-SDR support via coordinated configuration files.
  • Sample waveform libraries included (e.g., LTE, NR, radar, Wi-Fi) in TDMS/MAT formats.
  • Utility scripts for standalone use: transmit, receive, replay, or continuous capture.

System Requirements

The API has been validated on Ubuntu 22.04 systems with the following dependencies:

  • At least one compatible NI USRP device (e.g., B206, X310, X410).
  • Installed UHD drivers with Python bindings.
  • Python 3.x and required libraries (e.g., NumPy, PyYAML).
  • Optional Docker environment for containerized deployment.

Relevance to Our Use Cases

In this application note, we explore three deployment scenarios of the NI-RF Data Recording API:

  1. x86-based Spectrum Sensing: Using the API on a desktop or server-class system, the USRP B206 is configured to perform spectrum capture, and the data is saved in SigMF format. This setup is optimal for high-throughput and lab-based development environments.
  2. Embedded Jetson Platform: The API is deployed on an NVIDIA Jetson device interfaced with the USRP B206 over USB 3.0. This enables compact, power-efficient, and real-time spectrum sensing at the edge. Onboard GPU resources are leveraged for FFT computation and ML inference.
  3. User-Defined Dataset Integration: The API provides flexible support for user-defined datasets through two complementary capabilities:
    1. Importing Pre-Generated Data: Users can seamlessly import and tag custom IQ recordings (e.g., SigMF-compliant files or previously captured spectrum data) into the repository. This enables integration of external datasets for benchmarking, anomaly detection, or reproducible research.
    2. Data Lake Storage for AI/ML Pipelines: All captured and imported datasets can be stored in a structured data lake, significantly simplifying automated dataset selection, management, and preprocessing. This facilitates streamlined workflows for AI/ML model design, training, and validation in spectrum sensing and 6G wireless research.

The NI-RF Data Recording API provides a flexible, hardware-agnostic foundation for both live RF capture and offline dataset handling, making it central to spectrum intelligence and edge-aware signal processing workflows.


Reference Architecture for Spectrum Sensing

To support flexible and scalable RF data collection workflows, we propose a dual-mode reference architecture that demonstrates spectrum sensing using NI USRP hardware with two compute platforms: a high-performance x86 host and an embedded NVIDIA Jetson device. Both configurations utilize the NI-RF Data Recording API to capture, store, and manage RF data in SigMF format. The hardware setup supports real-time signal acquisition, tagging, and streaming for downstream machine learning or signal intelligence tasks.

x86-Based High-Performance Architecture

x86-based spectrum sensing architecture using NI USRP B206 and X410

In this high-performance lab-based deployment, a desktop-class x86 host system is used. The USRP X410 (or alternatively the X310) serves as the receiver, connected to the workstation via a 10 GbE Ethernet interface to support high-throughput data streaming. The transmitter is also an NI USRP X410, connected through a 10 GbE link via a network switch. A 30 dB attenuator is inserted between the TX and RX paths to protect the RF front-end from saturation during close-proximity transmission.

This configuration demonstrates the full high-performance capability of the platform, enabling wideband spectrum sensing and scalable data capture.

Host System Specifications:

  • Operating System: Ubuntu 22.04
  • UHD Compatibility: The NI-RF Data Recording API supports UHD ≥ 4.2. Most devices, such as the X410 and X310, work with these older versions, but the B206 requires UHD ≥ 4.9.
  • Processor: Intel Xeon w7-2495X (24 cores, 2.5 GHz)
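Because the minimum UHD version depends on the device, it can be useful to check the locally installed UHD before starting a session. The helper below applies the per-device minimums stated above; the version string is assumed to come from `uhd_config_info --version` or the Python binding's `uhd.__version__`, and the exact format of that string ("UHD 4.9.0.0", "4.9.0.0-release", etc.) is an assumption about your installation.

```python
# Check an installed UHD version string against per-device minimums.
# Minimums taken from this note: X-Series >= 4.2, B206 >= 4.9.
import re

MIN_UHD = {"x410": (4, 2), "x310": (4, 2), "b206": (4, 9)}

def parse_uhd_version(text: str) -> tuple:
    """Extract (major, minor) from a UHD version string such as 'UHD 4.9.0.0'."""
    m = re.search(r"(\d+)\.(\d+)", text)
    if not m:
        raise ValueError(f"no version found in {text!r}")
    return (int(m.group(1)), int(m.group(2)))

def uhd_supports(device: str, version_text: str) -> bool:
    """True if the reported UHD version meets the device's minimum."""
    return parse_uhd_version(version_text) >= MIN_UHD[device.lower()]

print(uhd_supports("b206", "UHD 4.9.0.0"))   # True
print(uhd_supports("b206", "4.4.0.0"))       # False
print(uhd_supports("x410", "4.4.0.0"))       # True
```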

This setup is suited for high-throughput spectrum recording, algorithm development, and dataset generation in a lab environment. It offers large storage capacity, stable power, and CPU-intensive post-processing capabilities.

Jetson-Based Embedded Architecture

Jetson-based spectrum sensing architecture using NI USRP B2x0 and X410

In this configuration, an NVIDIA Jetson module serves as the edge processing unit. The Jetson connects to a USRP B2x0 (e.g., B206) over a USB 3.0 interface, acting as the spectrum sensor (receiver). A USRP X410 acts as the transmitter, linked via a LAN switch. A 30 dB attenuator is used between the TX and RX paths to prevent RF front-end saturation during close-proximity transmission.

The Jetson executes the RF data acquisition pipeline and leverages onboard GPU resources to perform high-speed FFTs, signal classification, and real-time metadata tagging. A display, keyboard, and mouse connect directly for standalone operation.

Jetson System Specifications:

  • Operating System: Ubuntu 22.04 with JetPack 6.2.1
  • UHD Version: 4.9
  • Processor: NVIDIA Jetson AGX Orin 64 GB

This configuration is ideal for low-power, field-deployable sensing nodes where edge inference, minimal latency, and portability are required. The NI-RF Data Recording API runs natively on ARM-based Jetson, ensuring consistent data acquisition across architectures.

Common Features Across Architectures

Both architectures support:

  • Real-time IQ sample recording and metadata tagging using NI-RF Data Recording API
  • Integration with SigMF-compliant datasets
  • Wideband RF capture across 70 MHz–6 GHz (with B206)
  • Configurable gain, center frequency, bandwidth, and LO offsets via JSON/YAML files

The dual-platform design allows researchers to prototype, validate, and deploy spectrum sensing pipelines in a variety of scenarios—from power-constrained edge sensing to scalable, cloud-connected research environments.

Bill of Materials

This section lists the hardware and software components required to replicate the spectrum sensing setup described in the reference architectures.

Jetson-Based Embedded Spectrum Sensing Setup

  • NI USRP B206 SDR (Receiver)
    • Frequency Range: 70 MHz – 6 GHz
    • Bandwidth: up to 56 MHz
    • Interface: USB 3.0
  • NI USRP X410 SDR (Transmitter)
    • Frequency Range: up to 7.2 GHz
    • Bandwidth: up to 400 MHz per channel
    • Interface: 10 GbE (SFP+)
  • NVIDIA Jetson AGX Orin 64 GB Developer Kit (Edge Host)
    • GPU: 2048-core Ampere GPU
    • Interfaces: USB 3.0, 10 Gb Ethernet
    • OS: Ubuntu 22.04 (ARM64) with JetPack 6.2.1
  • Display and Input Devices
    • Monitor (DisplayPort or HDMI)
    • USB Keyboard and Mouse
  • 30 dB RF Attenuator
    • Protects RX frontend during loopback or close-range transmission
  • Network Switch (Gigabit)
    • Routes LAN traffic between Jetson and X410
  • RF Cables and Antennas or Dummy Load
  • Power Supply for USRP X410 and Jetson
  • USB 3.0 Cable (for Jetson–B206 interface)


x86-Based Spectrum Sensing Setup

  • NI USRP B206 SDR (Receiver)
  • NI USRP X410 SDR (Transmitter)
  • x86 Workstation or Server (Host PC)
    • CPU: Intel Xeon w7-2495X, 24 cores, 2.5 GHz
    • OS: Ubuntu 22.04 LTS
    • UHD: Version 4.9 or newer (required for the B206)
    • RAM: Minimum 32 GB recommended
    • Storage: SSD for high-speed IQ data logging
  • Display and Input Devices
    • Monitor (DisplayPort or HDMI)
    • USB Keyboard and Mouse
  • 30 dB RF Attenuator
  • USB 3.0 Cable (PC–B206 interface)
  • Ethernet Cables (PC and X410 to switch)
  • Network Switch (Gigabit or 10 GbE)
  • Coaxial Cable (RF connection between TX and RX)


Software Requirements (Common)

  • UHD (USRP Hardware Driver)
    • Version 4.9 recommended
    • Installed natively
  • Python 3.x Environment
    • Required packages: numpy, pyyaml, sigmf, uhd, etc.


Hardware Requirements

To implement the proposed spectrum sensing architecture, the following hardware components are required. The selected devices are chosen for their compatibility with the NI-RF Data Recording API, support for UHD drivers, and ability to perform high-speed RF acquisition and processing.

NI USRP B206 (Receiver SDR)

The USRP B206 is a low-cost, full-duplex software-defined radio with wide RF coverage and USB 3.0 connectivity, making it ideal for spectrum sensing tasks.


NI USRP X410 (Transmitter SDR)

The USRP X410 is a high-performance, 4-channel SDR capable of wideband signal transmission and reception. It supports 10 GbE connectivity and real-time FPGA processing.


NVIDIA Jetson AGX Orin 64 GB Developer Kit (Edge Host)

The Jetson AGX Orin series provides a powerful embedded GPU platform for edge AI and RF signal processing.

  • GPU: NVIDIA Ampere architecture
  • RAM: 32 GB / 64 GB LPDDR5
  • Connectivity: USB 3.0, Ethernet, GPIO
  • OS Support: Ubuntu 20.04 / 22.04 (ARM64)
  • Purchase Link: https://store.nvidia.com/jetson/store/


Network Switch (Gigabit or 10 GbE)

A managed or unmanaged Ethernet switch is required to route LAN traffic between the Jetson or x86 host and the USRP X410.


High-Performance x86 Host (Optional for Lab Use)

An x86 workstation is recommended for development, high-throughput data collection, or as an alternative to Jetson in a lab environment.

  • Processor: Minimum specification of an 8-core CPU at 3 GHz or higher (e.g., Intel Xeon or equivalent). Higher core counts (e.g., 24-core Xeon W7-2495X) can improve throughput and parallel data processing but are not mandatory.
  • RAM: 64 GB or more
  • Storage: NVMe SSD for high-speed data logging
  • Operating System: Ubuntu 22.04 LTS
  • Form Factor: Tower workstation or server


RF Accessories

  • RF Coaxial Cables (SMA)
  • 30 dB Attenuator – Protects RX during close TX–RX loopback tests
  • Antennas (Wideband or band-specific)
  • Dummy Load (for isolated lab TX tests)
  • USB 3.0 Cable (for USRP B206)
  • Ethernet Cables (Cat 6 or SFP+ DAC for X410)
  • Power Supplies:
    • Jetson: 19 V / 4.74 A adapter (usually included)
    • X410: External DC or rack supply per specifications


Software Requirements

This section outlines the required software components for enabling spectrum sensing using the NI-RF Data Recording API with USRP B206/X410 and NVIDIA Jetson or x86 hosts. These tools are compatible across both embedded and desktop-class platforms and support real-time signal acquisition and metadata tagging in SigMF format.

Ubuntu Operating System


NI-RF Data Recording API

  • Description: Open-source Python API developed by NI and the Genesys Lab (Northeastern University) for recording and labeling RF data in SigMF format using USRP devices.
  • Features: Configurable YAML/JSON setups, multi-SDR coordination, SigMF conversion, supports transmission/reception workflows.
  • Repository: https://github.com/ni/ni-rf-data-recording-api


UHD – USRP Hardware Driver

  • Description: The official driver and API library for controlling and interfacing with all NI/Ettus USRP SDR hardware. Required for low-level communication between Python and the hardware.
  • Version: The NI-RF Data Recording API requires a UHD version that supports the selected USRP device:
    • For X410 (and other X-Series): UHD ≥ 4.2 (UHD 4.4 or later recommended)
    • For B206: UHD ≥ 4.9 required
  • Repository: https://github.com/EttusResearch/uhd
  • Install Guide: UHD and GNU Radio Install Guide


Python Environment

  • Version: Python 3.10.12 or newer
  • Required Packages:
    • numpy, pyyaml, sigmf, uhd, scipy, matplotlib, etc.
  • Package Manager: pip or conda
  • Recommended Setup: Create a Python virtual environment for isolation and reproducibility.
  • Download Link: https://www.python.org/


SigMF Library (Python)

  • Description: Used for generating and parsing metadata in the Signal Metadata Format (SigMF), enabling dataset interoperability and ML dataset labeling.
  • Supported Version: Validated with SigMF 1.0.0 (later versions such as 1.1.x or 1.2.x introduce major changes and have not been validated).
  • Repository: https://github.com/sigmf/sigmf-python
  • Installation Command: pip install sigmf==1.0.0
  • Reference: For more details, see the NI RF Data Recording API Getting Started Guide.
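To illustrate what the SigMF library produces, the standard-library-only sketch below writes a minimal data/meta pair by hand. This is not the sigmf package API (which handles all of this for you); the field names are core fields from the SigMF 1.0 specification, and the file names and sample values are hypothetical.

```python
# Minimal illustration of the SigMF data/meta pairing using only the
# standard library. The real pipeline uses the sigmf package instead.
import json
import struct

def write_sigmf_pair(basename: str, iq: list, sample_rate: float, freq: float) -> None:
    """Write interleaved little-endian float32 IQ plus a JSON metadata file."""
    interleaved = [c for s in iq for c in (s.real, s.imag)]
    with open(basename + ".sigmf-data", "wb") as f:
        f.write(struct.pack(f"<{len(interleaved)}f", *interleaved))
    meta = {
        "global": {
            "core:datatype": "cf32_le",       # matches the float32 layout above
            "core:sample_rate": sample_rate,
            "core:version": "1.0.0",
        },
        "captures": [{"core:sample_start": 0, "core:frequency": freq}],
        "annotations": [],
    }
    with open(basename + ".sigmf-meta", "w") as f:
        json.dump(meta, f, indent=2)

# Two hypothetical samples at a 20 MS/s rate, 3.5 GHz center frequency.
write_sigmf_pair("demo", [1 + 1j, 0.5 - 0.25j], sample_rate=20e6, freq=3.5e9)
with open("demo.sigmf-meta") as f:
    reloaded = json.load(f)
print(reloaded["captures"][0]["core:frequency"])
```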


Installing and Configuring the UHD Software

This section explains how to build and install the USRP Hardware Driver (UHD) from source code. At the time of this writing, the recommended version is UHD 4.9.

  • It is strongly recommended to build UHD from source rather than installing from binary packages to ensure compatibility and access to the latest updates.
  • For x86/64 Ubuntu systems with released UHD versions available, you may install via APT Debian packages.
  • For Jetson (ARM64) systems, UHD must be built from source since no binary packages are provided.


Before building UHD, install all required dependencies (Ubuntu 22.04):

Note: If your system already has another UHD version installed, remove it first:

sudo apt remove libuhd* uhd-host
sudo rm -rf /usr/lib/uhd /usr/include/uhd /usr/local/lib/uhd /usr/local/include/uhd

Then install build dependencies:

sudo apt update && sudo apt install -y \
  cmake g++ libboost-all-dev libusb-1.0-0-dev \
  python3 python3-mako python3-numpy \
  python3-requests python3-ruamel.yaml libfftw3-dev \
  libqt5opengl5-dev qtbase5-dev qtchooser qt5-qmake \
  qtbase5-dev-tools doxygen

Clone the UHD repository and check out version v4.9.0.0:

git clone https://github.com/EttusResearch/uhd.git
cd uhd
git checkout v4.9.0.0

Build and install UHD:

cd host
mkdir build && cd build
cmake ..
make -j$(nproc)
sudo make install
sudo ldconfig
export LD_LIBRARY_PATH=/usr/local/lib
sudo uhd_images_downloader

Verify the installation:

uhd_usrp_probe
uhd_find_devices

For more details, see the official UHD GitHub page: https://github.com/EttusResearch/uhd

uhd_usrp_probe output for B206
uhd_find_devices output for N310 and X410

Figure: Examples of UHD utilities used for USRP probing and device discovery.


Post UHD Installation Tasks

  1. Download USRP images:
sudo /usr/local/bin/uhd_images_downloader
  2. Add a USB udev rule (can be limited to specific vendor/device IDs):
sudo nano /etc/udev/rules.d/99-usb.rules
# Add this line:
# SUBSYSTEM=="usb",MODE="0666"
sudo udevadm control --reload-rules
sudo udevadm trigger
  3. Unplug and replug the USB device if it was already connected.
  4. Enable Python UHD API visibility:
echo "/usr/local/lib/python3.10/site-packages" | \
sudo tee /usr/local/lib/python3.10/dist-packages/local-site-packages.pth


UHD Installation Verification

  1. Find connected USRP devices:
uhd_find_devices
  2. Run the throughput benchmark on the B2x0 device:
/usr/local/lib/uhd/examples/benchmark_rate --args "type=b200" --rx_rate 10e6
  3. Run the Python throughput benchmark:
python3.10 /usr/local/lib/uhd/examples/python/benchmark_rate.py --args "type=b200" --rx_rate 10e6

Installing and Configuring the USRP X410 Radio

For detailed documentation, see the official Ettus manual: https://files.ettus.com/manual/page_usrp_x4xx.html


Connecting to the X410

You can connect to the USRP X410 using either of the following interfaces:

  • Ethernet (RJ45)
  • USB-C JTAG Console

If you cannot connect to the X410 (e.g., because it has a static IP address):

  1. Connect the USRP to your PC using a USB-C ↔ USB cable.
 See the Serial connection section in the Ettus manual: [1]
  2. Change the static IP to a DHCP-assigned IP.
 See the Network interfaces section: [2]


Updating the Filesystem

For full details, refer to the Ettus manual section: Updating Filesystems.

The easiest method is to perform the update directly on the X410 using the built-in usrp_update_fs utility:

# Login to the USRP
ssh root@usrp_ip

# Update filesystem to UHD 4.9
usrp_update_fs -t UHD-4.9

# Or install the UHD master version
usrp_update_fs -t master

# Reboot the USRP
reboot

# If the reboot works and the device is functional, commit the changes
mender commit


Updating the FPGA Image

For details, see: Updating the FPGA.

You can verify and benchmark the X410 performance using the UHD example utility:

./benchmark_rate --args="mgmt_addr=10.89.12.177,addr=192.168.10.2,\
second_addr=192.168.11.2,clock_source=internal,time_source=internal" \
--rx_rate 200e6 --channels 0 --rx_channels 0


Installing Spectrum Sensing Example on x86 Architecture

This section provides a detailed procedure for installing and running the Spectrum Sensing example on an x86 architecture. The spectrum_sensing folder within the NI-RF Data Recording API repository provides a ready-to-run demonstration of live RF spectrum sensing using a single receiver (e.g., USRP B206/X410) connected to an x86 host.


Setup Instructions

  1. Clone the repository:
git clone https://github.com/ni/ni-rf-data-recording-api.git
  2. Clone the YOLOv5 repository:
git clone https://github.com/ultralytics/yolov5
  3. Install dependencies for the NI RF Data Recording API:
cd ni-rf-data-recording-api
pip install -r requirements.txt


Python Package Dependencies

The following Python packages are required to run the spectrum sensing pipeline using the NI-RF Data Recording API.

  • termcolor – Prints colored text in terminal for log readability.
  • numpy (>=1.23.5, <2.0.0) – Core numerical library for IQ array operations, FFTs, and signal processing.
  • scipy (>=1.4.1) – Used for filtering, spectral analysis, and mathematical routines.
  • matplotlib (>=3.3) – Generates spectrum plots, spectrograms, and PSD visualizations.
  • pandas (>=1.1.4) – Handles RF metadata and experiment logs.
  • pyyaml (>=5.3.1) – Loads YAML configuration files for USRP setup parameters.
  • nptdms – Enables reading and writing NI TDMS waveform files.
  • sigmf – Implements Signal Metadata Format for storing IQ recordings with metadata.


  4. Install dependencies for the example:
cd ni-rf-data-recording-api/examples/spectrum_sensing
pip install -r requirements.txt


Advanced Python Dependencies for Spectrum Sensing and ML Integration

These additional libraries enable advanced visualization, AI/ML inference, and web dashboard integration.

  • dash – Web-based dashboard framework for real-time spectrum visualization.
  • dash-daq – Adds instrumentation UI components for live control.
  • dash-bootstrap-components – Provides responsive Bootstrap layouts for Dash apps.
  • pillow (>=10.3.0) – Handles image saving and processing of spectrograms.
  • torch (>=1.8.0) – PyTorch deep learning framework for inference/training.
  • torchvision (>=0.9.0) – Vision utilities for preprocessing spectrograms.
  • ultralytics (>=8.2.34) – YOLOv8 utilities for signal classification.
  • gitpython (>=3.1.30) – Enables automated Git repository handling.
  • opencv-python (>=4.1.1) – Performs spectrogram image manipulation.
  • seaborn (>=0.11.0) – Provides data visualization and heatmaps.
  • tqdm (>=4.66.3) – Adds progress bars during capture or inference.
  • requests (>=2.32.2) – Handles model downloads and HTTP requests.
  • setuptools (>=70.0.0, <80.9.0) – Ensures consistent Python packaging.


Configuring the Example

The configuration files are located at: ~/ni-rf-data-recording-api/examples/spectrum_sensing/config/

They define:

  • RF parameters: center frequency, gain, bandwidth, sample rate
  • Device type (e.g., B206) and connection interface
  • Capture duration and number of records
  • Output directory and naming conventions


Key Configuration Parameters

Parameter                      Description                        Example
"rx_recorded_data_path"        Path to store captured IQ data     "datasets/records/"
"nrecords"                     Number of snapshots to capture     10
"freq"                         Center frequency (Hz)              3.6e9
"rate"                         IQ sample rate (Sps)               50e6
"bandwidth"                    Analog bandwidth (Hz)              20e6
"gain"                         RX gain (dB)                       40
"duration"                     Recording duration (s)             0.04
"rate_source"                  Sample rate mode                   "user_defined"
"captured_data_file_name"      Prefix for SigMF files             "rx-waveform-td-rec-"
"antenna"                      Antenna port                       "TX/RX"
"clock_reference"              Reference clock                    "internal"


Execution

(a) Run the UI application:

cd ~/ni-rf-data-recording-api/examples/spectrum_sensing
python spectrum_sensing.py

After launching, you’ll see the message "Dash is running on http://127.0.0.1:8050/". Open this link in your browser to access the dashboard.

  • Load a configuration file from the dashboard.
  • Click Start to begin sensing.
  • IQ samples will be captured and saved in SigMF format at:

~/ni-rf-data-recording-api/examples/spectrum_sensing/datasets/records

AI-based spectrum sensing dashboard using NI USRP SDRs and NI RF Data Recording API

Figure: AI-based spectrum sensing system using NI USRP SDRs, the NI RF Data Recording API, and a web-based control dashboard.

System Workflow Description

The figure above shows the end-to-end architecture for AI-driven spectrum sensing with NI USRPs.

  • TX Configuration: User selects the waveform to transmit; it can be sent over-the-air or via RF cable.
  • Start/Stop Control: Clicking Start launches the sensing and recording pipeline with live indicators for SDR initialization and capture status.


(b) Run the inference script:

cd ~/ni-rf-data-recording-api/examples/spectrum_sensing
python inference.py

The inference script processes IQ recordings stored in: ~/ni-rf-data-recording-api/examples/spectrum_sensing/datasets/records

It converts each dataset into a spectrogram image saved at: ~/ni-rf-data-recording-api/examples/spectrum_sensing/datasets/images

The spectrograms are passed to a pre-trained YOLOv5 model for signal classification.
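Conceptually, the spectrogram step frames the IQ record into short segments and takes a frequency transform of each, producing the time-frequency image the classifier consumes. The sketch below is a deliberately naive, standard-library-only DFT version to illustrate the idea; it is O(N²) per frame, whereas the actual script would use an FFT (e.g., via NumPy) and render the result as an image.

```python
# Naive, stdlib-only spectrogram sketch: frame complex IQ samples and take
# a DFT of each frame. Illustrative only -- not the inference.py implementation.
import cmath

def dft(frame):
    """Magnitude of the discrete Fourier transform of one frame."""
    n = len(frame)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                    for i, x in enumerate(frame)))
            for k in range(n)]

def spectrogram(iq, frame_len=64):
    """One DFT magnitude row per non-overlapping frame."""
    frames = [iq[i:i + frame_len]
              for i in range(0, len(iq) - frame_len + 1, frame_len)]
    return [dft(f) for f in frames]

# A pure complex tone at DFT bin 5 should peak in bin 5 of every frame.
tone = [cmath.exp(2j * cmath.pi * 5 * n / 64) for n in range(128)]
spec = spectrogram(tone)
print([row.index(max(row)) for row in spec])  # [5, 5]
```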

Figure: Real-time inference output showing successful detection of a 5G NR waveform using a pre-trained YOLOv5 model.


Live Inference Visualization

After IQ samples are captured and stored, inference.py generates spectrograms and classifies signals. In the shown example, the YOLOv5 model identifies a 5G NR waveform (5GNR) with a confidence score of 0.96. Detected signals show high classification accuracy and clear time-frequency boundaries.


Spectrum Sensing Application with NI USRP and NVIDIA Jetson

This section summarizes the official documentation for running the spectrum_sensing application using NI USRP SDR hardware on NVIDIA Jetson platforms.

Overview

The application demonstrates real-time spectrum sensing by interfacing a NI USRP SDR (e.g., B206) with an NVIDIA Jetson device over USB 3.0. The Jetson hosts the NI RF Data Recording API and executes the entire data acquisition pipeline — including RF configuration, signal capture, visualization, and data formatting into SigMF files. Because Jetson devices are ARM-based, a Jetson-specific PyTorch package is available from NVIDIA, while TorchVision must be built from source.


PyTorch Installation

  1. Install required dependencies:
sudo apt update
sudo apt install python3-pip libopenblas-base libopenmpi-dev
sudo pip3 install --upgrade pip
  2. For Python 3.10 and JetPack 6.2.1, install PyTorch 2.5:
wget https://developer.download.nvidia.com/compute/redist/jp/v61/pytorch/torch-2.5.0a0+872d972e41.nv24.08.17622132-cp310-cp310-linux_aarch64.whl
pip3 install torch-2.5.0a0+872d972e41.nv24.08.17622132-cp310-cp310-linux_aarch64.whl
  3. Fix libcusparse-related errors (if any):
mkdir -p ~/tmp_cusparselt && cd ~/tmp_cusparselt
wget https://developer.download.nvidia.com/compute/cusparselt/redist/libcusparse_lt/linux-aarch64/libcusparse_lt-linux-aarch64-0.7.0.0-archive.tar.xz

tar xf *.tar.xz
sudo cp -a libcusparse_lt-linux-aarch64-0.7.0.0-archive/include/* /usr/local/cuda/include/
sudo cp -a libcusparse_lt-linux-aarch64-0.7.0.0-archive/lib/* /usr/local/cuda/lib64/
sudo ldconfig
cd ~ && rm -rf ~/tmp_cusparselt

# Verify installation
python3 -c "import torch; print(torch.__version__); print(torch.cuda.is_available())"
# Output should show:
# 2.5.0a0+872 and True


PyTorch Vision (torchvision)

  1. Install dependencies:
sudo apt-get install libjpeg-dev zlib1g-dev libpython3-dev libopenblas-dev \
libavcodec-dev libavformat-dev libswscale-dev
  2. Clone the source repository:
git clone --branch v0.20.0 https://github.com/pytorch/vision.git
  3. Build and install:
cd vision
export BUILD_VERSION=0.20.0
python3 setup.py build
python3 setup.py install


Python Virtual Environment (Optional)

To isolate the working environment from the system:

  1. Install venv support:
sudo apt-get install python3.10-venv
  2. Create and activate environment:
python3.10 -m venv .venv --system-site-packages --prompt demo
source .venv/bin/activate


Python Requirements

  1. Install SigMF:
pip install sigmf
  1. Install npTDMS:
pip install npTDMS
  1. Colored terminal output:
pip install colored termcolor
  1. Dash and dashboard components:
pip install dash dash_daq dash_bootstrap_components
  5. Install the YOLOv5 pre-requirements:
pip install -U "gitpython>=3.1.30" "matplotlib>=3.3" "numpy>=1.23.5" \
"opencv-python>=4.1.1" "pillow>=10.3.0" psutil "PyYAML>=5.3.1" \
"requests>=2.32.2" "scipy>=1.4.1" "thop>=0.1.1" "tqdm>=4.66.3" \
"ultralytics>=8.2.34" "setuptools>=70.0.0" "seaborn>=0.11.0"


YOLOv5 Model

The demo application uses the YOLOv5 object-detection model from Ultralytics (AGPL-3.0 license).

git clone https://github.com/ultralytics/yolov5


Demo Code and Data Recording API

Clone the NI RF Data Recording API repository:

git clone https://github.com/ni/ni-rf-data-recording-api.git

After all dependencies are installed, the Spectrum Sensing use case can be executed on Jetson following the same procedure as described for the x86 system.


Waveform Creation and Signal Recording Pipeline

This section outlines the process for generating waveforms, capturing RF data using the NI-RF Data Recording API, and producing spectrogram images for machine learning applications.

Waveform Repository

The src/waveforms/ directory contains all pre-generated test signals used with the NI RF Data Recording API. It includes four subfolders: 5G NR, LTE, Wi-Fi, and Radar.

Each waveform consists of:

  • IQ Data File (.tdms or .mat) — contains complex baseband samples.
  • Configuration File (.rfws, .yaml, or .csv) — describes waveform parameters such as bandwidth and sampling rate.

Examples:

  • LTE: LTE_TDD_DL_20MHz_....tdms + ...rfws
  • Radar: Radar_Waveform_BW_2M.mat + Radar_Waveform_BW_2M.yaml

Figure: Waveform repository structure showing pre-generated 5G NR, LTE, Wi-Fi, and Radar signals mapped through the Wireless Link Parameter Dictionary.


Waveform Sources

  • RFmx Waveform Creator: Used for generating 5G NR and LTE waveforms (.tdms + .rfws).
  • IEEE MATLAB Wi-Fi Generator: Used for Wi-Fi test signals (.mat + .csv).
  • Simulated Radar Generator (MATLAB): Used for radar signals (.mat + .yaml).


Usage in the API

During recording, JSON/YAML configuration files in src/config/ reference these waveform paths. The wireless_link_parameter_map.yaml dictionary maps waveform configuration fields (e.g., bandwidth, sampling rate, standard) to the SigMF metadata format — ensuring standardized dataset descriptions.
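As an illustration, a parameter-map entry pairs a waveform-configuration field name with its SigMF counterpart. The field names below are hypothetical and only convey the idea; consult wireless_link_parameter_map.yaml in the repository for the actual schema:

```yaml
# Hypothetical sketch only -- see wireless_link_parameter_map.yaml for the real schema
bandwidth:
  waveform_config_field: "Bandwidth (Hz)"
  sigmf_field: "signal:bandwidth"
sampling_rate:
  waveform_config_field: "Sample Rate"
  sigmf_field: "core:sample_rate"
```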


Recording IQ Data and Metadata via API

Once waveforms are prepared:

  1. Edit the configuration file (YAML/JSON) with your TX/RX parameters such as frequency, gain, and waveform paths.
  2. Run the recording:
python3 main_rf_data_recording_api.py --config path/to/your_config.yaml
  3. The API maps parameters to SigMF metadata, controls USRP Tx/Rx via UHD, and writes:
    • .sigmf-data (binary IQ samples)
    • .sigmf-meta (JSON metadata)


Spectrogram Image Generation via Preprocessing

After dataset generation:

  • Run preprocessing scripts (e.g., rf_data_pre_processing_plot.py) to visualize or convert SigMF recordings into time/frequency plots.
  • Generate and crop spectrograms, partitioning them into training and validation sets for ML workflows.
  • The structured image datasets form the foundation for AI-based spectrum classification and detection.

This end-to-end pipeline — from waveform generation to SigMF-formatted capture and spectrogram creation — enables reproducible, metadata-rich dataset production for AI-driven spectrum sensing research.
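To make the spectrogram step concrete, the following minimal sketch computes an STFT magnitude spectrogram from complex IQ samples using only the Python standard library. It is an illustration of the underlying transform, not the API's preprocessing code, which applies its own windowing, scaling, and plotting parameters:

```python
import cmath
import math

def stft_magnitude(iq, frame_len=64, hop=32):
    """Magnitude spectrogram of complex samples via a windowed sliding DFT."""
    # Hann window to suppress spectral leakage between frames
    window = [0.5 - 0.5 * math.cos(2 * math.pi * n / (frame_len - 1))
              for n in range(frame_len)]
    frames = []
    for start in range(0, len(iq) - frame_len + 1, hop):
        seg = [iq[start + n] * window[n] for n in range(frame_len)]
        # Direct DFT (O(N^2)); real pipelines use an FFT
        spectrum = [abs(sum(seg[n] * cmath.exp(-2j * math.pi * k * n / frame_len)
                            for n in range(frame_len)))
                    for k in range(frame_len)]
        frames.append(spectrum)
    return frames  # shape: (num_frames, frame_len)

# Synthetic complex tone landing exactly on DFT bin 8 of a 64-point frame
tone = [cmath.exp(2j * math.pi * 8 * n / 64) for n in range(256)]
spec = stft_magnitude(tone)
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])
print(len(spec), peak_bin)  # prints: 7 8
```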


Using the RF Data Recording API with a User-Defined Dataset

To use the NI RF Data Recording API with a user-defined dataset for training and inference using YOLOv8, follow this multi-step process covering signal generation, data preprocessing, model training, and inference.

---

SigMF Data and Metadata Generation

Once the transmission signal is configured, stream IQ samples and record them in SigMF format by running data_recording.py:

  • Location of the script:
/ni-rf-data-recording-api/examples/spectrum_sensing
  • SigMF outputs:
    • .sigmf-data: Binary file with raw IQ samples.
    • .sigmf-meta: JSON metadata (frequency, sample rate, gain, antenna, timestamps, etc.).

The script uses your YAML/JSON control file for parameters (center frequency, sample rate, bandwidth, gain, capture duration, number of records).

  • Output directory:
/ni-rf-data-recording-api/examples/spectrum_sensing/datasets/records

These SigMF files become the primary dataset for later analysis, visualization, and ML-based classification (e.g., spectrogram-based YOLO).

---

Spectrogram Generation and Dataset Preprocessing

Convert SigMF recordings into labeled spectrogram images using pre-processing.py. It orchestrates:

  1. spectrogram_creator.py – Reads .sigmf-data, applies STFT, saves spectrogram images (e.g., in datasets/images).
  2. image_cropper.py – Removes non-signal plot artifacts (axes, labels, borders) to produce clean images for detection models.
  3. dataset_partitioner.py – Splits dataset into train/val (e.g., 80/20) with balanced classes.
  4. label_maker.py – Creates YOLO-compatible label files for each image, one line per bounding box, with all coordinates normalized to the range [0, 1]:
<class_id> <x_center> <y_center> <width> <height>

Resulting structure:

  • Cleaned spectrogram images: datasets/images
  • YOLO labels: datasets/labels
  • Splits: datasets/train, datasets/val

This pipeline yields a model-ready dataset for accurate training and inference.
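As a concrete illustration of the YOLO label format, the helper below (hypothetical, not part of label_maker.py) converts a pixel-space bounding box into a normalized label line:

```python
def to_yolo_label(class_id, box, img_w, img_h):
    """Convert a pixel-space box (x_min, y_min, x_max, y_max) to a YOLO label line."""
    x_min, y_min, x_max, y_max = box
    # Center coordinates and box size, each normalized by the image dimensions
    x_c = (x_min + x_max) / 2 / img_w
    y_c = (y_min + y_max) / 2 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return f"{class_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"

# A burst occupying pixels (100, 200)-(300, 400) in a 640x640 spectrogram
print(to_yolo_label(1, (100, 200, 300, 400), 640, 640))
# prints: 1 0.312500 0.468750 0.312500 0.312500
```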

---

Dataset Configuration: data.yaml for YOLO Training

Fields:

  • train – Path to training images
  • val – Path to validation images
  • nc – Number of classes
  • names – List of class names in class-id order

Example:

train: datasets/train/images
val: datasets/val/images

nc: 3
names: ['5gnr', 'wifi', 'lte']

Use this file with YOLOv5/YOLOv8 training commands. Store it in the project root or inside the dataset folder.

---

Model Training Using YOLOv8 (Example)

Cloning YOLOv8 from Source

# Clone Ultralytics YOLOv8
git clone https://github.com/ultralytics/ultralytics.git
cd ultralytics

# (Optional) Virtual environment
python3 -m venv .venv
source .venv/bin/activate   # Linux/macOS
# .venv\Scripts\activate    # Windows PowerShell

# Install in editable mode
pip install --upgrade pip
pip install -e .

Verify:

yolo help

YOLOv8 Training Command

Train the nano model on your spectrogram dataset:

yolo detect train \
  model=yolov8n.pt \
  data=/content/dataset/data.yaml \
  epochs=50 \
  imgsz=640 \
  batch=16 \
  project=burst_train \
  name=yolov8n_spectrogram

Parameter notes:

  • model=yolov8n.pt – Base architecture (nano).
  • data=... – Path to data.yaml.
  • epochs=50 – Training epochs.
  • imgsz=640 – Input resolution.
  • batch=16 – Batch size.
  • project/name – Output directories for logs/artifacts.

Outputs:

burst_train/yolov8n_spectrogram

(Weights, logs, confusion matrices, PR curves, etc.)


Conclusion

The NI RF Data Recording API provides a powerful and flexible framework for real-time spectrum sensing, dataset generation, and AI-driven signal classification across both x86 and embedded platforms such as NVIDIA Jetson. By leveraging standardized formats like SigMF and integrating deep learning models such as YOLOv8, the framework enables a complete end-to-end workflow—from RF signal acquisition and metadata tagging to spectrogram creation, training, and live inference.

This modular approach allows researchers and engineers to rapidly prototype, evaluate, and deploy intelligent wireless sensing systems that bridge the gap between traditional SDR experimentation and modern AI-based spectrum analytics. The same unified methodology can be extended to multi-band sensing, interference detection, cognitive radio, and 6G spectrum intelligence research, ensuring scalability and reproducibility in both laboratory and field environments.