O-RAN O-DU Low

User Guide, June 2022

O-DU Low Project Introduction

The O-DU Low project focuses on the baseband PHY reference design, which runs on Intel® Xeon® series processors with Intel Architecture. This 5G NR reference PHY consists of an L1 binary and three kinds of interfaces, validated on Intel® Xeon® SkyLake and CascadeLake platforms, and demonstrates the capabilities of the software running different 5G NR L1 features. It implements the relevant functions described in 3GPP TS 38.211, 212, 213, 214, and 215.

The L1 has three interfaces to communicate with other network functions, as described below:

  • Interface between L1 and the Front Haul: it adopts the WG4 specification for CUS-plane communication.

  • Interface between O-DU Low and O-DU High: it adopts the FAPI interface according to the WG8 specification.

  • Interface between O-DU Low and the accelerator: DPDK BBDev was adopted as the original contribution; it will follow the WG6 definition after the WG6 specification is finalized.

The following figure shows the O-RAN O-CU, O-DU, and O-RU blocks for a gNB implementation. The O-DU Low project implements the FAPI interface through a 5G FAPI TM module and the OFH-U and OFH-C interfaces by means of the FHI Library; the functionality of the High-PHY and a test MAC are available through GitHub in the form of a binary blob for the current release. For details, refer to the Running L1 and Testmac section of this document.

Figure 1. O-RAN O-CU, O-DU and O-RU Block Diagram

Scope

This O-DU Low document describes how to build the modules supporting each interface, how to run the L1 and its associated components, and the architecture of each interface implementation; it also provides release notes describing the details of each component release.

Intended Audience

The intended audience for this document are software engineers and system architects who design and develop
5G systems using the O-RAN Specifications.

Terminology

Table 1. Terminology

5G NR: Fifth Generation New Radio
ACS: Access Control System
API: Application Programming Interface
BOM: Bill of Materials
CP: Cyclic Prefix
DDP: Dynamic Device Personalization
DPDK: Data Plane Development Kit
eAxC: Extended Antenna Carrier
eCPRI: Enhanced Common Public Radio Interface
eNB: Evolved NodeB (eNodeB)
ETH: Ethernet
FCS: Frame Check Sequence
FEC: Forward Error Correction
FFT: Fast Fourier Transform
FH: Front Haul
gNB: Next-Generation NodeB, also referred to as the base station
GNSS: Global Navigation Satellite System
GPS: Global Positioning System
HARQ: Hybrid Automatic Repeat Request
HW: Hardware
IFG: Interframe Gap
IFFT: Inverse Fast Fourier Transform
IoT: Inter-Operability Testing
IQ: In-phase and Quadrature
LAA: License Assisted Access
LTE: Long Term Evolution
MAC: Media Access Control
MEC: Mobile Edge Computing
M-Plane: Management Plane
mmWave: Millimeter Wave
NIC: Network Interface Controller
O-DU: O-RAN Distributed Unit: a logical node hosting RLC/MAC/High-PHY layers based on a lower layer functional split.
O-RU: O-RAN Radio Unit: a logical node hosting the Low-PHY layer and RF processing based on a lower layer functional split. This is similar to 3GPP's "TRP" or "RRH" but more specific in including the Low-PHY layer (FFT/IFFT, PRACH extraction).
OWD: One Way Delay
PDCCH: Physical Downlink Control Channel
PDSCH: Physical Downlink Shared Channel
PHC: PTP Hardware Clock
PHP: Hypertext Preprocessor
PMD: Poll Mode Driver
POSIX: Portable Operating System Interface
PRACH: Physical Random Access Channel
PRB: Physical Resource Block
PRTC: Primary Reference Time Clock
PUCCH: Physical Uplink Control Channel
PUSCH: Physical Uplink Shared Channel
PTP: Precision Time Protocol
RA: Random Access
RAN: Radio Access Network
RB: Resource Block
RE: Resource Element
RLC: Radio Link Control
RoE: Radio over Ethernet
RT: Real Time
RTE: Run-Time Environment
RSS: Receive Side Scaling
RU: Radio Unit
SR-IOV: Single Root Input/Output Virtualization
SW: Software
SyncE: Synchronous Ethernet
TDD: Time Division Duplex
ToS: Top of the Second
TSC: Time Stamp Counter
TTI: Transmission Time Interval
UE: User Equipment
UL: Uplink
VF: Virtual Function
VIM: Virtual Infrastructure Manager
VLAN: Virtual Local Area Network
VM: Virtual Machine
WLS: Wireless Subsystem Interface
xRAN: Extensible Radio Access Network

Reference Documents

Table 2. Reference Documents

FlexRAN Reference Solution Software Release Notes: 575822
FlexRAN Reference Solution L1 XML Configuration User Guide: 571741
FlexRAN Reference Solution LTE eNB L2-L1 Application Programming Interface (API) Specification: 571742
FlexRAN Reference Solution L2-L1 nFAPI Specification: 576423
FlexRAN and Mobile Edge Compute (MEC) Platform Setup Guide: 575891
FlexRAN 5G NR Reference Solution RefPHY (Doxygen): 603577
Intel® Ethernet Controller E810 Dynamic Device Personalization (DDP) Technology Guide: 617015
3GPP* specification series: https://www.3gpp.org/specifications
Wolf Pass Server Documentation
Intel® C++ Compiler in Intel® Parallel Studio XE
DPDK documentation: https://core.dpdk.org/doc/
O-RAN Fronthaul Working Group Control, User and Synchronization Plane Specification (ORAN-WG4.CUS.0-v04.00): https://www.o-ran.org/specifications
ORAN Specifications: https://www.o-ran.org/specifications
IEEE 1588-2008 IEEE Standard for a Precision Clock Synchronization Protocol for Networked Measurement and Control Systems: https://standards.ieee.org/standard/1588-2008.html
eCPRI Specification V2.0 Interface Specification: post/2019/10/05/capturing-packets-through-ecpri-v20-which-enables-5g

Assumptions, Dependencies, and Constraints

This chapter contains limitations on the scope of the document.

Assumptions

An L1 with a proprietary interface, and a testmac supporting the FAPI interface, are available in binary blob form through the Open Source Community (OSC) GitHub, together with the reference files that support the tests required for the current O-RAN release. The header files needed to build the 5G FAPI TM, run validation tests, and integrate with the O-DU High to check network functionality are available from the same site. The L1 App and Testmac repository is at https://github.com/intel/FlexRAN/

Requirements

  • Only Intel® Xeon® series processors with Intel Architecture are supported; the platform should be either Intel® Xeon® SkyLake or CascadeLake with at least a 2.0 GHz core frequency.

  • An FPGA/ASIC card for FEC acceleration that is compliant with the BBDev framework and interface. It is only needed to run high-throughput cases with HW FEC card assistance.

  • BIOS setup steps and options may differ between platforms; however, your BIOS settings should at least match those described in the BIOS settings section of the README.md file available at https://github.com/intel/FlexRAN.

  • Running with FH requires PTP for Linux* version 2.0 (or later) to be installed to provide IEEE 1588 synchronization.

Dependencies

The O-RAN library implementation depends on the Data Plane Development Kit (DPDK v20.11.3).

DPDK v20.11.3 should be patched with the corresponding DPDK patch provided with the FlexRAN release (see Table 2, FlexRAN Reference Solution Software Release Notes).
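As a sketch of the patching step, a small helper like the one below can be used; the patch file name in the example is a placeholder, so substitute the patch actually shipped with your FlexRAN release:

```shell
# Apply a FlexRAN-provided patch to an unpacked DPDK source tree.
apply_dpdk_patch() {
    # $1 = DPDK source directory, $2 = patch file (as shipped with the FlexRAN release)
    ( cd "$1" && patch -p1 < "$2" )
}
# Example invocation (both paths are illustrative placeholders):
#   apply_dpdk_patch Installation_DIR/dpdk-20.11.3 dpdk-20.11.3-oran-fhi.patch
```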

The Intel® oneAPI DPC++/C++ Compiler, version 2022.0.0 or newer, is used.

The Intel® C++ Compiler v19.0.3 can also be used, but it has not been verified with the F release.

  • Optionally Octave v3.8.2 can be used to generate reference IQ samples (octave-3.8.2-20.el7.x86_64).

Constraints

This release has been designed and implemented to support the following numerologies defined in the 3GPP specifications for LTE and 5GNR (refer to Table 2):

5G NR

Category A support:

  • Numerology 0 with bandwidth 5/10/20 MHz with up to 12 cells in 2x2 antenna configuration

  • Numerology 0 with bandwidth 40 MHz with 1 cell.

  • Numerology 1 with bandwidth 20/40 MHz with 1 cell and URLLC use cases for 40 MHz

  • Numerology 1 with bandwidth 100 MHz with up to 16 cells

  • Numerology 3 with bandwidth 100 MHz with up to 3 cells

Category B support:

Numerology 1 with bandwidth 100 MHz where the antenna panel is up to 64T64R with up to 3 cells.

LTE

Category A support:

Bandwidth 5/10/20 MHz with up to 12 cells

Category B support:

Bandwidth 5/10/20 MHz for 1 cell

The feature set of O-RAN protocol should be aligned with Radio Unit (O-RU) implementation. Inter-operability testing (IoT) is required to confirm the correctness of functionality on both sides. The exact feature set supported is described in Chapter 4.0 Transport Layer and O-RAN Fronthaul Protocol Implementation of this document.

Build Prerequisite

This section describes how to install and build the required components needed to build the FHI Library, the WLS Library, and the 5G FAPI TM modules. For the F release, the ICC compiler is optional, and its support will be discontinued in future releases.

Download and Install oneAPI

Download and install the Intel® oneAPI Base Toolkit by issuing the following commands from your Linux console:

    wget https://registrationcenter-download.intel.com/akdlm/irc_nas/18673/l_BaseKit_p_2022.2.0.262_offline.sh
    sudo sh ./l_BaseKit_p_2022.2.0.262_offline.sh

Then follow the instructions in the installer. Additional information is available from

https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit-download.html?operatingsystem=linux&distributions=webdownload&options=offline
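As a convenience (not part of the official procedure), a quick post-install check confirms that the oneAPI C/C++ compiler driver icx is reachable from your shell:

```shell
# Record whether the oneAPI compiler driver is on PATH; if not, remind how to load it.
if command -v icx >/dev/null 2>&1; then
    ONEAPI_STATUS="found at $(command -v icx)"
else
    ONEAPI_STATUS="not on PATH - run: source /opt/intel/oneapi/setvars.sh"
fi
echo "icx: $ONEAPI_STATUS"
```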

Install ICC and System Studio

The Intel® C++ Compiler and System Studio v19.0.3 are used for the test application and system integration with L1, available from the following link: https://registrationcenter-download.intel.com/akdlm/irc_nas/emb/15322/system_studio_2019_update_3_composer_edition_offline.tar.gz

The Intel® C++ Compiler can be used with a community license generated from the text below; save it to a file named license.lic.

PACKAGE IF1C22FFF INTEL 2023.0331 4F89A5B28D2F COMPONENTS="CCompL

Comp-CA Comp-CL Comp-OpenMP Comp-PointerChecker MKernL PerfPrimL ThreadBB" OPTIONS=SUITE ck=209 SIGN=9661868C5C80

INCREMENT IF1C22FFF INTEL 2023.0331 31-mar-2023 uncounted

90F19E3889A3 VENDOR_STRING="SUPPORT=COM https://registrationcenter.intel.com" HOSTID=ID=07472690 PLATFORMS="i86_n i86_r i86_re amd64_re x64_n" ck=175 SN=SMSAJ7B6G8WS TS_OK SIGN=920966B67D16

PACKAGE IF1C22FFF INTEL 2023.0331 4F89A5B28D2F COMPONENTS="CCompL

Comp-CA Comp-CL Comp-OpenMP Comp-PointerChecker MKernL PerfPrimL ThreadBB" OPTIONS=SUITE ck=209 SIGN=9661868C5C80

INCREMENT IF1C22FFF INTEL 2023.0331 31-mar-2023 uncounted

32225D03FBAA VENDOR_STRING="SUPPORT=COM https://registrationcenter.intel.com" HOSTID=ID=07472690 PLATFORMS="i86_n i86_r i86_re amd64_re x64_n" ck=80 SN=SMSAJ7B6G8WS SIGN=2577A4F65138

PACKAGE IF1C22FFF INTEL 2023.0331 4F89A5B28D2F COMPONENTS="CCompL

Comp-CA Comp-CL Comp-OpenMP Comp-PointerChecker MKernL PerfPrimL ThreadBB" OPTIONS=SUITE ck=209 SIGN=9661868C5C80

INCREMENT IF1C22FFF INTEL 2023.0331 31-mar-2023 uncounted

90F19E3889A3 VENDOR_STRING="SUPPORT=COM https://registrationcenter.intel.com" HOSTID=ID=07472690 PLATFORMS="i86_n i86_r i86_re amd64_re x64_n" ck=175 SN=SMSAJ7B6G8WS TS_OK SIGN=920966B67D16

PACKAGE IF1C22FFF INTEL 2023.0331 4F89A5B28D2F COMPONENTS="CCompL

Comp-CA Comp-CL Comp-OpenMP Comp-PointerChecker MKernL PerfPrimL ThreadBB" OPTIONS=SUITE ck=209 SIGN=9661868C5C80

INCREMENT IF1C22FFF INTEL 2023.0331 31-mar-2023 uncounted

32225D03FBAA VENDOR_STRING="SUPPORT=COM https://registrationcenter.intel.com" HOSTID=ID=07472690 PLATFORMS="i86_n i86_r i86_re amd64_re x64_n" ck=80 SN=SMSAJ7B6G8WS SIGN=2577A4F65138

PACKAGE I4BB00C7C INTEL 2023.0331 8D6186E5077C COMPONENTS="Comp-CA

Comp-CL Comp-OpenMP Comp-PointerChecker MKernL PerfPrimL ThreadBB" OPTIONS=SUITE ck=131 SIGN=7BB6EE06F9A6

INCREMENT I4BB00C7C INTEL 2023.0331 31-mar-2023 uncounted

F1765BD5FCB4 VENDOR_STRING="SUPPORT=COM https://registrationcenter.intel.com" HOSTID=ID=07472690 PLATFORMS="i86_mac x64_mac" ck=114 SN=SMSAJ7B6G8WS SIGN=4EC364AC3576

Then copy the license file to the build directory as license.lic:

cp license.lic $BUILD_DIR/license.lic

Note: Use serial number CG7X-J7B6G8WS

You can follow the installation guide on the website above to download and install Intel System Studio. Intel® Math Kernel Library, Intel® Integrated Performance Primitives, and Intel® C++ Compiler are mandatory components. Here we use the Linux* host, Linux* target, and standalone installer as one example; the link below might need updating based on the website.

#wget https://registrationcenter-download.intel.com/akdlm/irc_nas/emb/15322/system_studio_2019_update_3_composer_edition_offline.tar.gz
#cd /opt && mkdir intel && cp $BUILD_DIR/license.lic intel/license.lic
#tar -zxvf $BUILD_DIR/system_studio_2019_update_3_composer_edition_offline.tar.gz

Edit system_studio_2019_update_3_composer_edition_offline/silent.cfg to accept the EULA, as in the example below:

ACCEPT_EULA=accept
PSET_INSTALL_DIR=/opt/intel
ACTIVATION_LICENSE_FILE=/opt/intel/license.lic
ACTIVATION_TYPE=license_file

Silent installation:

#./install.sh -s silent.cfg

Set the environment for oneAPI or ICC:

Check your installation path. The following is an example for ICC.

#source /opt/intel_2019/system_studio_2019/compiler_and_libraries_2019.3.206/linux/bin/iccvars.sh intel64
#export PATH=/opt/intel_2019/system_studio_2019/compiler_and_libraries_2019.3.206/linux/bin/:$PATH

Download and Build DPDK

  • Download DPDK:

    #wget http://static.dpdk.org/rel/dpdk-20.11.3.tar.xz
    #tar -xf dpdk-20.11.3.tar.xz
    #export RTE_TARGET=x86_64-native-linuxapp-icc
    #export RTE_SDK=Installation_DIR/dpdk-20.11.3
    
  • Patch DPDK for the O-RAN FHI lib. This patch is specific to the O-RAN FHI lib and reduces the data transmission latency of Intel NICs. It may not be needed for some NICs; please refer to O-RAN FHI Lib Introduction -> Setup Configuration -> A.2 Prerequisites.

  • SW FEC is enabled by default. To enable HW FEC with a specific accelerator card, you need to obtain the associated driver and build steps from the accelerator card vendor.

  • build DPDK

    This release uses DPDK version 20.11.3, so the build procedure for DPDK is the following.

    Setup the compiler environment:

    if [ $oneapi -eq 1 ]; then
        export RTE_TARGET=x86_64-native-linuxapp-icx
        export WIRELESS_SDK_TOOLCHAIN=icx
        export SDK_BUILD=build-${WIRELESS_SDK_TARGET_ISA}-icc
        source /opt/intel/oneapi/setvars.sh
        export PATH=$PATH:/opt/intel/oneapi/compiler/2022.0.1/linux/bin-llvm/
        echo "Changing the toolchain to GCC 8.3.1 20190311 (Red Hat 8.3.1-3)"
        source /opt/rh/devtoolset-8/enable
    else
        export RTE_TARGET=x86_64-native-linuxapp-icc
        export WIRELESS_SDK_TOOLCHAIN=icc
        export SDK_BUILD=build-${WIRELESS_SDK_TARGET_ISA}-icc
        source /opt/intel/system_studio_2019/bin/iccvars.sh intel64 -platform linux
    fi

    The build procedure uses meson and ninja, so if they are not present on your system, please install them before the next step.

    Then, at the root of the DPDK folder, issue:

    meson build
    cd build
    ninja
    
  • set DPDK path

    The DPDK path is needed when building and running the lib/app:

    #export RTE_SDK=Installation_DIR/dpdk-20.11.3
    #export DESTDIR=Installation_DIR/dpdk-20.11.3
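The two exports above can be wrapped in a small guard so a wrong path is caught early; a sketch with a placeholder default install location:

```shell
# Set and sanity-check the DPDK paths used by the lib/app builds.
DPDK_VERSION=20.11.3
export RTE_SDK="${RTE_SDK:-$HOME/dpdk-$DPDK_VERSION}"   # placeholder default location
export DESTDIR="$RTE_SDK"
if [ ! -d "$RTE_SDK" ]; then
    echo "warning: RTE_SDK=$RTE_SDK does not exist yet; unpack DPDK there first"
fi
echo "RTE_SDK=$RTE_SDK"
```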
    

Install google test

Download google test from https://github.com/google/googletest/releases
  • Example build and installation commands:

    #tar -xvf googletest-release-1.7.0.tar.gz
    #mv googletest-release-1.7.0 gtest-1.7.0
    #export GTEST_DIR=YOUR_DIR/gtest-1.7.0
    #export GTEST_ROOT=$GTEST_DIR
    #cd ${GTEST_DIR}
    #g++ -isystem ${GTEST_DIR}/include -I${GTEST_DIR} -pthread -c ${GTEST_DIR}/src/gtest-all.cc
    #ar -rv libgtest.a gtest-all.o
    #cd ${GTEST_DIR}/build-aux
    #cmake ${GTEST_DIR}
    #make
    #cd ${GTEST_DIR}
    #ln -s build-aux/libgtest_main.a libgtest_main.a
    
  • Set the google test Path

    This variable must always be set when you build and run the O-RAN FH lib unit tests:

    #export DIR_ROOT_GTEST="your google test path"
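A small helper (an illustrative sketch, not part of the official build) can verify the google test environment before you attempt to build the unit tests:

```shell
# Return 0 if the google test environment looks usable, 1 otherwise.
check_gtest_env() {
    if [ -z "${DIR_ROOT_GTEST:-}" ]; then
        echo "DIR_ROOT_GTEST is not set"
        return 1
    fi
    if [ ! -f "$DIR_ROOT_GTEST/libgtest.a" ]; then
        echo "libgtest.a not found under $DIR_ROOT_GTEST"
        return 1
    fi
    echo "google test environment OK"
}
```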
    

Configure FEC card

For the F release, either a SW FEC, an FPGA FEC (Vista Creek N3000), or an ASIC FEC (Mount Bryce ACC100) can be used. The procedure to configure the HW-based FECs is explained below.

Customize a setup environment shell script

Use the provided phy/setupenv.sh script as a starting point and customize it to provide the paths to the tools and libraries used to build and run the code. You can, for example, add the following entries based on your particular installation; the following illustration is just an example (use icx instead of icc for oneAPI):

- export DIR_ROOT=/home/
- #set the L1 binary root DIR
- export DIR_ROOT_L1_BIN=$DIR_ROOT/FlexRAN
- #set the phy root DIR
- export DIR_ROOT_PHY=$DIR_ROOT/phy
- #set the DPDK root DIR
- #export DIR_ROOT_DPDK=/home/dpdk-20.11.3
- #set the GTEST root DIR
- #export DIR_ROOT_GTEST=/home/gtest/gtest-1.7.0
- export DIR_WIRELESS_TEST_5G=$DIR_ROOT_L1_BIN/testcase
- export DIR_WIRELESS_SDK=$DIR_ROOT_L1_BIN/sdk/build-avx512-icc
- export DIR_WIRELESS_TABLE_5G=$DIR_ROOT_L1_BIN/l1/bin/nr5g/gnb/l1/table
- #source /opt/intel/system_studio_2019/bin/iccvars.sh intel64 -platform linux
- export XRAN_DIR=$DIR_ROOT_PHY/fhi_lib
- export XRAN_LIB_SO=true
- export RTE_TARGET=x86_64-native-linuxapp-icc
- #export RTE_SDK=$DIR_ROOT_DPDK
- #export DESTDIR=""
- #export GTEST_ROOT=$DIR_ROOT_GTEST
- export ORAN_5G_FAPI=true
- export DIR_WIRELESS_WLS=$DIR_ROOT_PHY/wls_lib
- export DEBUG_MODE=true
- export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$DIR_WIRELESS_WLS:$XRAN_DIR/lib/build
- export DIR_WIRELESS=$DIR_ROOT_L1_BIN/l1
- export DIR_WIRELESS_ORAN_5G_FAPI=$DIR_ROOT_PHY/fapi_5g
- export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$DIR_ROOT_L1_BIN/libs/cpa/bin

Then issue:

- source ./setupenv.sh

This sets up the correct environment to build the code.

Then build the wls_lib, FHI Lib, and 5G FAPI TM before running the code, following the steps described in the Run L1 section.
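Before building, a quick check that setupenv.sh actually exported what the build expects can save debugging time; a hedged sketch (the variable names are taken from the example script above):

```shell
# List which of the given environment variables are still unset.
check_env() {
    missing=""
    for v in "$@"; do
        eval "val=\${$v:-}"
        if [ -z "$val" ]; then
            missing="$missing $v"
        fi
    done
    echo "missing:$missing"
}
# Example: check the variables the wls_lib / FHI Lib / 5G FAPI TM builds rely on.
check_env XRAN_DIR DIR_WIRELESS_WLS DIR_WIRELESS_ORAN_5G_FAPI
```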

O-DU Low Project Release Notes

O-DU Low oran_f_release_v1.0, June 2022

  • Enhanced features and optimizations in support of MMIMO and URLLC.

  • Enhanced test coverage prior to the community release.

  • Incorporation of bug fixes for E2E connectivity from the commercial product.

  • Additional information is available in the component-level release notes.

  • Support for oneAPI compiler.

O-DU Low oran_e_maintenance_release_v1.0, Mar 2022

  • Enhanced features for MMIMO, URLLC and additional bandwidth use cases.

  • Enhanced test coverage by multiple new test cases added to the release.

  • Bug fixes from O-RAN 2021 Plugfest integration with multiple equipment vendors.

  • Refer to the individual components release notes for additional information.

O-DU Low Bronze Release V 1.1, Aug 2020

  • Enhanced feature set for O-RAN FrontHaul compliant Radio<-> L1 interface, FAPI compliant L1<->L2 interfaces, and a shared memory and buffer management library for efficient L1<->L2 communication

  • Enhanced the code coverage test through more test cases.

  • Bug fixes according to unit test and integration test with third party.

  • Please refer to the FH oran_release_bronze_v1.1, FAPI TM oran_release_bronze_v1.1, and WLS oran_release_bronze_v1.1 release notes for additional details.

O-DU Low Bronze Release V 1.0, May 2020

The O-DU Low Bronze release includes:

  • An ORAN WG8/WG4 Software Specification compliant DU Low implementation, including an O-RAN FrontHaul compliant Radio <-> L1 interface, FAPI-compliant L1 <-> L2 interfaces, and a shared memory and buffer management library for efficient L1 <-> L2 communication

  • The ability to link in a high-performance L1 stack application with advanced 5G NR features (including 3GPP TS 38.211, 212, 213, 214 and 215), running on Intel Xeon processor based O-DU hardware, and packaged with a comprehensive functional and performance evaluation framework

  • Please refer to the FH oran_release_bronze_v1.0, FAPI TM oran_release_bronze_v1.0, and WLS oran_release_bronze_v1.0 release notes for detailed features

O-DU Low Amber Release V 1.0, 1 Nov 2019

The O-DU Low Amber release includes:

  • An ORAN WG4 Software Specification compliant O-FH lib implementation

  • Please refer to the FH oran_release_amber_v1.0 release notes for detailed features

FHI Library

Front Haul Interface Library Overview

The O-RAN FHI Lib is built on top of DPDK to perform U-plane and C-plane functions according to the O-RAN Fronthaul Interface specification between the O-DU and O-RU. S-Plane support requires PTP for Linux version 2.0 or later. The Management plane is outside the scope of this library implementation.

Project Resources

The source code is available from the Linux Foundation Gerrit server:

https://gerrit.o-ran-sc.org/r/gitweb?p=o-du%2Fphy.git;a=summary

The build (CI) jobs will be in the Linux Foundation Jenkins server:

https://jenkins.o-ran-sc.org

Issues are tracked in the Linux Foundation Jira server:

https://jira.o-ran-sc.org/secure/Dashboard.jspa

Project information is available in the Linux Foundation Wiki:

https://wiki.o-ran-sc.org

ODULOW

The ODULOW uses the FHI library to access the C-Plane and U-Plane interfaces to the O-RU. The FHI Lib interface is defined to communicate TTI events, symbol time, and C-plane information, as well as IQ sample data.

DPDK

DPDK is used by the FHI Library to interface to an Ethernet port. The FHI Library is built on top of DPDK to perform U-plane and C-plane functions per the ORAN Front Haul specifications.

Linux PTP

Linux PTP is used to synchronize the system timer to GPS time.

O-RAN FH Lib Introduction

Architecture Overview

This section provides an overview of the O-RAN architecture.

Introduction

The front haul interface, according to the O-RAN Fronthaul specification, is part of the 5G NR L1 reference implementation provided with the FlexRAN software package. It performs communication between O-RAN Distributed Unit (O-DU) and O-RAN Radio Unit (O-RU) and consists of multiple HW and SW components.

The logical representation of HW and SW components is shown in Figure 1.

The same architecture design is applicable for LTE; however, the FH library is not integrated with the PHY pipeline for FlexRAN LTE.

Figure 1. Architecture Block Diagram


From the hardware perspective, two networking ports are used to communicate to the Front Haul and Back (Mid) Haul network as well as to receive PTP synchronization. The system timer is used to provide a “sense” of time to the gNB application.

From the software perspective, the following components are used:

  • Linux* PTP provides synchronization of the system timer to GPS time:

    - ptp4l is used to synchronize the oscillator on the Network Interface Controller (NIC) to the PTP GM.

    - phc2sys is used to synchronize the system timer to the oscillator on the NIC.

  • DPDK provides the interface to the Ethernet port.

  • O-RAN library is built on top of DPDK to perform U-plane and C-plane functionality according to the O-RAN Fronthaul specification.

  • 5GNR reference PHY uses the O-RAN library to access interface to O-RU. The interface between the library and PHY is defined to communicate TTI event, symbol time, C-plane information as well as IQ sample data.

  • 5G NR PHY communicates with the L2 application using the set of MAC/PHY APIs and the shared memory interface defined as WLS.

  • L2, in turn, can use Back (Mid) Haul networking port to connect to the CU unit in the context of 3GPP specification.
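The ptp4l/phc2sys pair mentioned above is typically driven from a small configuration file. The fragment below is an illustrative sketch only; the interface name and domain number are assumptions, and Appendix B of this document holds the validated configuration:

```shell
# Write a minimal ptp4l configuration (values are illustrative, not the validated ones).
cat > ptp4l.conf <<'EOF'
[global]
slaveOnly          1
domainNumber       24
network_transport  L2
EOF
# Typical invocations on the target system (run as root; eth0 is an assumed NIC name):
#   ptp4l -f ptp4l.conf -i eth0 -m
#   phc2sys -s eth0 -w -m
```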

In this document, we focus on details of the design and implementation of the O-RAN library for providing Front Haul functionality for both mmWave and Sub-6 scenarios as well as LTE.

The O-RAN M-plane is not implemented and is outside of the scope of this description. Configuration files are used to specify selected M-plane level parameters.

5G NR L1 Application Threads

The specifics of the L1 application design and configuration for a given scenario can be found in document 603577, FlexRAN 5G NR Reference Solution RefPHY (Doxygen); refer to Table 2. Only information relevant to the front haul is presented in this section.

Figure 2 illustrates the configuration of the l1app, acting as an O-DU, with the O-RAN interface for Front Haul.

Figure 2. 5G NR L1app Threads


In this configuration of L1app, the base architecture of 5G NR L1 is not changed. The original Front Haul FPGA interface was updated with the O-RAN fronthaul interface abstracted via the O-RAN library.

The O-RAN FH thread performs:

  • Generation of symbol-based "time events" to the rest of the system, based on the system clock synchronized to GPS time via PTP

  • Baseline polling mode driver performing TX and RX of Ethernet packets

  • Most of the packet processing, such as the transport header, application header, and data section header, and interactions with the rest of the PHY processing pipeline

  • Polling of BBDev for FEC on the PAC N3000 acceleration card

The other threads are standard for the l1app and are created independently of the use of O-RAN as the interface to the radio.

Communication between L1 and the O-RAN layer is performed using a set of callback functions: L1 assigns the callbacks, and the O-RAN layer executes them at particular events or time instants. Detailed information on callback function options and settings, as well as the design, can be found in the sections below.

The design and installation of the l1app do not depend on the host, VM, or container environment and are the same for all cases.

Sample Application Thread Model

The configuration of a sample application for both the O-DU and O-RU follows the model of the 5G NR l1app application in Figure 2, but no BBU or FEC related threads are needed, as only minimal O-RAN FH functionality is used.

Figure 3. Sample Application Threads


In this scenario, the main thread is used only for initializing and closing the application. No execution happens on core 0 during run time.

Functional Split

Figure 1 corresponds to the O-RU part of the O-RAN split. Implementation of the RU side of the O-RAN protocol is not covered in this document.

Figure 4. eNB/gNB Architecture with O-DU and RU


More than one RU can be supported with the same implementation of the O-RAN library, depending on the configuration of the gNB in general. In this document, we address the details of implementation for a single O-DU to O-RU connection.

The O-RAN Fronthaul specification provides two categories of the split of Layer 1 functionality between O-DU and O-RU: Category A and Category B.

Figure 5. Functional Split


Data Flow

Table 3 lists the data flows supported for a single RU with a single Component Carrier.

Table 3. Supported Data Flow

Plane     ID   Name                                                Contents                                                                               Periodicity
U-Plane   1a   DL Frequency Domain IQ Data                         DL user data (PDSCH), control channel data (PDCCH, etc.)                               symbol
U-Plane   1b   UL Frequency Domain IQ Data                         UL user data (PUSCH), control channel data (PUCCH, etc.)                               symbol
U-Plane   1c   PRACH Frequency Domain IQ Data                      UL PRACH data                                                                          slot or symbol
C-Plane   2a   Scheduling Commands (Beamforming is not supported)  Scheduling information, FFT size, CP length, Subcarrier spacing, UL PRACH scheduling   ~ slot
S-Plane   S    Timing and Synchronization                          IEEE 1588 PTP packets

Figure 6. Data Flows

Information on specific features of the C-Plane and U-Plane is provided in the Sample Application section. The configuration of the S-Plane used in the test setup for simulation is provided in Appendix 2.

Data flow separation is based on VLANs (applicable when layer 2 or layer 3 is used for the C/U-plane transport):

  • The mechanism for assigning VLAN IDs to the U-Plane and C-Plane is assumed to be via the M-Plane.

  • The VLAN tag is configurable via the standard Linux IP tool; refer to Appendix A, Setup Configuration.

  • No Quality of Service (QoS) is implemented as part of the O-RAN library. The standard functionality of the ETH port can be used to implement QoS.
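As an illustration of VLAN configuration with the standard Linux IP tool, the sketch below only prints the commands; the interface name and VLAN IDs are assumptions, since in a real deployment the IDs come from the M-Plane:

```shell
IFACE=eth1    # assumed front haul port name
CP_VLAN=1     # assumed C-Plane VLAN ID
UP_VLAN=2     # assumed U-Plane VLAN ID
# Print, rather than execute, the commands; run them as root on the target system.
for VID in "$CP_VLAN" "$UP_VLAN"; do
    echo "ip link add link $IFACE name $IFACE.$VID type vlan id $VID"
    echo "ip link set $IFACE.$VID up"
done
```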

Figure 7. C-plane and U-plane Packet Exchange


Timing, Latency, and Synchronization to GPS

The O-RAN Fronthaul specification defines the latency model of the front haul interface and the interaction between the O-DU and O-RU. This implementation of the O-RAN library supports only the category with fixed timing advance and defined transport methods. It determines the O-DU transmit and receive windows based on pre-defined transport network characteristics and the delay characteristics of the O-RUs within the timing domain.

Table 4 below provides the default values used for the O-DU to O-RU simulation in the mmWave scenario. Table 5 and Table 6 below provide the default values used for numerology 0 and numerology 1 in Sub6 scenarios. The configuration can be adjusted via the configuration files for the sample application and reference PHY.

However, simulation across the full range of settings was not performed; additional implementation changes might be required, as well as testing with an actual O-RU. The parameters for the front haul network are out of scope, as a direct connection between the O-DU and O-RU is used for simulation.

Table 4. Front Haul Interface Latency (numerology 3 - mmWave)

Model Parameter   C-Plane DL             C-Plane UL             U-Plane DL          U-Plane UL

O-RU:
  T2amin          T2a_min_cp_dl = 50     T2a_min_cp_ul = 50     T2a_min_up = 25     NA
  T2amax          T2a_max_cp_dl = 140    T2a_max_cp_ul = 140    T2a_max_up = 140    NA
  Tadv_cp_dl      NA                     NA                     NA
  Ta3min          NA                     NA                     NA                  Ta3_min = 20
  Ta3max          NA                     NA                     NA                  Ta3_max = 32

O-DU:
  T1amin          T1a_min_cp_dl = 70     T1a_min_cp_ul = 60     T1a_min_up = 35     NA
  T1amax          T1a_max_cp_dl = 100    T1a_max_cp_ul = 70     T1a_max_up = 50     NA
  Ta4min          NA                     NA                     NA                  Ta4_min = 0
  Ta4max          NA                     NA                     NA                  Ta4_max = 45

Table 5. Front Haul Interface Latency (numerology 0 - Sub6)

Model Parameter   C-Plane DL             C-Plane UL             U-Plane DL          U-Plane UL

O-RU:
  T2amin          T2a_min_cp_dl = 400    T2a_min_cp_ul = 400    T2a_min_up = 200    NA
  T2amax          T2a_max_cp_dl = 1120   T2a_max_cp_ul = 1120   T2a_max_up = 1120   NA
  Tadv_cp_dl      NA                     NA                     NA
  Ta3min          NA                     NA                     NA                  Ta3_min = 160
  Ta3max          NA                     NA                     NA                  Ta3_max = 256

O-DU:
  T1amin          T1a_min_cp_dl = 560    T1a_min_cp_ul = 480    T1a_min_up = 280    NA
  T1amax          T1a_max_cp_dl = 800    T1a_max_cp_ul = 560    T1a_max_up = 400    NA
  Ta4min          NA                     NA                     NA                  Ta4_min = 0
  Ta4max          NA                     NA                     NA                  Ta4_max = 360

Table 6. Front Haul Interface Latency (numerology 1 - Sub6)

Model Parameter   C-Plane DL             C-Plane UL             U-Plane DL          U-Plane UL

O-RU:
  T2amin          T2a_min_cp_dl = 285    T2a_min_cp_ul = 285    T2a_min_up = 71     NA
  T2amax          T2a_max_cp_dl = 429    T2a_max_cp_ul = 429    T2a_max_up = 428    NA
  Tadv_cp_dl      NA                     NA                     NA
  Ta3min          NA                     NA                     NA                  Ta3_min = 20
  Ta3max          NA                     NA                     NA                  Ta3_max = 32

O-DU:
  T1amin          T1a_min_cp_dl = 285    T1a_min_cp_ul = 285    T1a_min_up = 96     NA
  T1amax          T1a_max_cp_dl = 429    T1a_max_cp_ul = 300    T1a_max_up = 196    NA
  Ta4min          NA                     NA                     NA                  Ta4_min = 0
  Ta4max          NA                     NA                     NA                  Ta4_max = 75

The IEEE 1588 protocol and PTP for Linux* implementations are used to synchronize local time to GPS time. Details of the configuration used are provided in Appendix B, PTP Configuration. Local time is used to get the Top of the Second (ToS) as a 1 PPS event for the SW implementation. The timing event is obtained by polling local time using clock_gettime(CLOCK_REALTIME, ...).

All time intervals are specified with respect to GPS time, which corresponds to OTA time.

Virtualization and Container-Based Usage

The O-RAN implementation is deployment agnostic and does not require special changes to be used in virtualized or container-based deployment options. The only requirement is to provide one SR-IOV based virtual port for C-plane traffic and one port for U-plane traffic per O-DU instance. This can be achieved with a default Virtual Infrastructure Manager (VIM) as well as with standard container networking.

To configure the networking ports, refer to the FlexRAN and Mobile Edge Compute (MEC) Platform Setup Guide (Table 2) and readme.md in O-RAN library or Appendix A.
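The SR-IOV virtual functions themselves are typically created through sysfs. The sketch below only prints the command; the PF name and VF count are assumptions, and the Setup Guide referenced above holds the validated procedure:

```shell
PF=eth1     # assumed physical function carrying front haul traffic
NUM_VFS=2   # one VF for C-plane, one for U-plane
VFS_PATH="/sys/class/net/$PF/device/sriov_numvfs"
# Print the command instead of executing it; run it as root on the target system.
echo "echo $NUM_VFS > $VFS_PATH"
```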

Transport Layer and O-RAN Fronthaul Protocol Implementation

This chapter describes how the transport layer and O-RAN Fronthaul protocol are implemented.

Introduction

The following figure presents an overview of the O-RAN Fronthaul process.

Figure 8. O-RAN Fronthaul Process

The O-RAN library provides support for transporting In-phase and Quadrature (IQ) samples between the O-DU and O-RU within the O-RAN architecture based on functional split 7.2x. The library defines the O-RAN packet formats to be used to transport radio samples within Front Haul according to the O-RAN Fronthaul specification; refer to Table 2. It provides functionality for generating O-RAN packets, appending IQ samples in the packet payload, and extracting IQ samples from O-RAN packets.

Note: The F release version of the library supports U-plane and C-plane only. M-plane is not supported. It is ready to be used in a PTP synchronized environment.

Note: Regarding the clock model and synchronization topology, configurations C1 and C3 of the connection between O-DU and O-RU are the only configurations supported in this release of the O-RAN implementation.

Note: Quality of PTP synchronization with respect to the S-plane of O-RAN Fronthaul requirements as defined for the O-RU is out of the scope of this document. The PTP primary and PTP secondary configuration are expected to satisfy only the O-DU side of the requirements and provide a "best-effort" PTP primary for the O-RU. This may or may not be sufficient for achieving the end-to-end system requirements of the S-plane. A specialized dedicated NIC card with additional HW functionality might be required to achieve PTP primary functionality that satisfies O-RU precision requirements for RAN deployment scenarios.

Figure 9. Configuration C1

Figure 10. Configuration C3

Supported Feature Set

The O-RAN Fronthaul specification defines a list of mandatory functionalities.

Note: Not all features defined as Mandatory for the O-DU are currently supported to their full extent. The following tables contain information on what is available and the level of validation performed for this release.

Note: Cells with a red background are listed as mandatory in the specification but are not supported in this implementation of O-RAN.

Table 7. O-RAN Mandatory and Optional Feature Support

| Category | Feature | O-DU Support | Support |
|----------|---------|--------------|---------|
| RU Category | Support for CAT-A RU (up to 8 spatial streams) | Mandatory | Y |
| | Support for CAT-A RU (> 8 spatial streams) | | Y |
| | Support for CAT-B RU (precoding in RU) | Mandatory | Y |
| Beamforming | Beam Index based | Mandatory | Y |
| | Real-time BF Weights | Mandatory | Y |
| | Real-Time Beamforming Attributes | | N |
| | UE Channel Info | | N |
| Bandwidth Saving | Programmable static-bit-width Fixed Point IQ | Mandatory | Y |
| | Real-time variable-bit-width | | Y |
| | Compressed IQ | | Y |
| | Block floating point compression | | Y |
| | Block scaling compression | | N |
| | u-law compression | | N |
| | Modulation compression | | Y |
| | Beamspace compression | | Y |
| | Variable Bit Width per Channel (per data section) | | Y |
| | Static configuration of U-Plane IQ format and compression header | | N |
| | Use of symInc flag to allow multiple symbols in a C-Plane section | | N |
| Energy Saving | Transmission blanking | | N |
| O-DU - RU Timing | Pre-configured Transport Delay Method | Mandatory | Y |
| | Measured Transport Method (eCPRI Msg 5) | | N |
| Synchronization | G.8275.1 | Mandatory | Y (C3 only) |
| | G.8275.2 | | N |
| | GNSS based sync | | N |
| | SyncE | | N |
| Transport Features | L2: Ethernet | Mandatory | Y |
| | L3: IPv4, IPv6 (CUS Plane) | | N |
| | QoS over Fronthaul | Mandatory | Y |
| | Prioritization of different U-plane traffic types | | N |
| | Support of Jumbo Ethernet frames | | N |
| | eCPRI | Mandatory | Y |
| | Support of eCPRI concatenation | | N |
| | IEEE 1914.3 | | N |
| | Application fragmentation | Mandatory | Y |
| | Transport fragmentation | | N |
| Other | LAA LBT O-DU Congestion Window mgmt | | N |
| | LAA LBT RU Congestion Window mgmt | | N |

Details on the subset of O-RAN functionality implemented are shown in Table 8.

Level of Validation Specified as:

  • C: Completed code implementation for O-RAN Library

  • I: Integrated into Intel FlexRAN PHY

  • T: Tested end to end with O-RU

Table 8. Levels of Validation

| Category | Item | Status | C | I | T |
|----------|------|--------|---|---|---|
| General | Radio access technology (LTE / NR) | NR/LTE | N/A | N/A | N/A |
| | Nominal sub-carrier spacing | 15/30/120 kHz | Y | Y | N |
| | FFT size | 512/1024/2048/4096 | Y | Y | N |
| | Channel bandwidth | 5/10/20/100 MHz | Y | Y | N |
| | Number of Cells (Component Carriers) | 12 | Y | Y | N |
| | RU category | A, B | Y | Y | N |
| | TDD Config | Supported/Flexible | Y | Y | N |
| | FDD Support | Supported | Y | Y | N |
| | Tx/Rx switching based on 'dataDirection' field of C-plane message | Supported | Y | Y | N |
| | IP version for Management traffic at fronthaul network | N/A | N/A | N/A | N/A |
| PRACH | One Type 3 message for all repeated PRACH preambles | Supported | Y | Y | N |
| | Type 3 message per repeated PRACH preambles | 1 | Y | Y | N |
| | timeOffset including cpLength | Supported | Y | Y | N |
| | PRACH preamble format/index number (number of occasions) | Supported | Y | Y | N |
| Delay management | Network delay determination | Supported | Y | Y | N |
| | lls-CU timing advance type | Supported | Y | Y | N |
| | Non-delay managed U-plane traffic | Not supported | N | N | N |
| C/U-plane Transport | Transport encapsulation (Ethernet/IP) | Ethernet | Y | Y | N |
| | Jumbo frames | Supported | Y | Y | N |
| | Transport header (eCPRI/RoE) | eCPRI | Y | Y | N |
| | IP version when Transport header is IP/UDP | N/A | N/A | N/A | N/A |
| | eCPRI Concatenation when Transport header is eCPRI | Not supported | N | N | N |
| | eAxC ID CU_Port_ID bitwidth | 4 * | Y | Y | N |
| | eAxC ID BandSector_ID bitwidth | 4 * | Y | Y | N |
| | eAxC ID CC_ID bitwidth | 4 * | Y | Y | N |
| | eAxC ID RU_Port_ID bitwidth | 4 * | Y | Y | N |
| | Fragmentation | Supported | Y | Y | N |
| | Transport prioritization within U-plane | N/A | N | N | N |
| | Separation of C/U-plane and M-plane | Supported | Y | Y | N |
| | Separation of C-plane and U-plane | VLAN ID | Y | Y | N |
| | Max number of VLANs per physical port | 16 | Y | Y | N |
| Reception Window Monitoring (Counters) | Rx_on_time | Supported | Y | Y | N |
| | Rx_early | Supported | N | N | N |
| | Rx_late | Supported | N | N | N |
| | Rx_corrupt | Supported | N | N | N |
| | Rx_pkt_dupl | Supported | N | N | N |
| | Total_msgs_rcvd | Supported | Y | N | N |
| Beamforming | RU beamforming type | Index and weights | Y | Y | N |
| | Beamforming control method | C-plane | Y | N | N |
| | Number of beams | No restrictions | Y | Y | N |
| IQ compression | U-plane data compression method | Supported | Y | Y | Y |
| | U-plane data IQ bitwidth (before/after compression) | BFP: 8, 9, 12, 14 bits; modulation compression: 1, 2, 3, 4 bits | Y | Y | Y |
| | Static configuration of U-plane IQ format and compression header | Supported | N | N | N |
| eCPRI Header Format | ecpriVersion | 001b | Y | Y | Y |
| | ecpriReserved | Supported | Y | Y | Y |
| | ecpriConcatenation | Not supported | N | N | N |
| | ecpriMessage: U-plane | Supported | Y | Y | Y |
| | ecpriMessage: C-plane | Supported | Y | Y | Y |
| | ecpriMessage: Delay measurement | Supported | Y | Y | Y |
| | ecpriPayload (payload size in bytes) | Supported | Y | Y | Y |
| | ecpriRtcid/ecpriPcid | Supported | Y | Y | Y |
| | ecpriSeqid: Sequence ID | Supported | Y | Y | Y |
| | ecpriSeqid: E bit | Supported | Y | Y | Y |
| | ecpriSeqid: Subsequence ID | Not supported | N | N | N |
| C-plane Type | Section Type 0 | Not supported | N | N | N |
| | Section Type 1 | Supported | Y | Y | Y |
| | Section Type 3 | Supported | Y | Y | Y |
| | Section Type 5 | Not supported | N | N | N |
| | Section Type 6 | Not supported | N | N | N |
| | Section Type 7 | Not supported | N | N | N |
| C-plane Packet Format: Coding of Information Elements, Application Layer, Common | dataDirection (data direction (gNB Tx/Rx)) | Supported | Y | Y | N |
| | payloadVersion (payload version) | 001b | Y | Y | N |
| | filterIndex (filter index) | Supported | Y | Y | N |
| | frameId (frame identifier) | Supported | Y | Y | N |
| | subframeId (subframe identifier) | Supported | Y | Y | N |
| | slotId (slot identifier) | Supported | Y | Y | N |
| | startSymbolid (start symbol identifier) | Supported | Y | Y | N |
| | numberOfsections (number of sections) | Up to the maximum number of PRBs | Y | Y | N |
| | sectionType (section type) | 1 and 3 | Y | Y | N |
| | udCompHdr (user data compression header) | Supported | Y | Y | N |
| | numberOfUEs (number of UEs) | Not supported | N | N | N |
| | timeOffset (time offset) | Supported | Y | Y | N |
| | frameStructure (frame structure) | mu=0,1,3 | Y | Y | N |
| | cpLength (cyclic prefix length) | Supported | Y | Y | N |
| Coding of Information Elements, Application Layer, Sections | sectionId (section identifier) | Supported | Y | Y | N |
| | rb (resource block indicator) | 0 | Y | Y | N |
| | symInc (symbol number increment command) | 0 or 1 | Y | Y | N |
| | startPrbc (starting PRB of control section) | Supported | Y | Y | N |
| | reMask (resource element mask) | Supported | Y | Y | N |
| | numPrbc (number of contiguous PRBs per control section) | Supported | Y | Y | N |
| | numSymbol (number of symbols) | Supported | Y | Y | N |
| | ef (extension flag) | Supported | Y | Y | N |
| | beamId (beam identifier) | Supported | Y | Y | N |
| | ueId (UE identifier) | Not supported | N | N | N |
| | freqOffset (frequency offset) | Supported | Y | Y | N |
| | regularizationFactor (regularization factor) | Not supported | N | N | N |
| | ciIsample, ciQsample (channel information I and Q values) | Not supported | N | N | N |
| | laaMsgType (LAA message type) | Not supported | N | N | N |
| | laaMsgLen (LAA message length) | Not supported | N | N | N |
| | lbtHandle | Not supported | N | N | N |
| | lbtDeferFactor (listen before talk defer factor) | Not supported | N | N | N |
| | lbtBackoffCounter (listen before talk backoff counter) | Not supported | N | N | N |
| | lbtOffset (listen before talk offset) | Not supported | N | N | N |
| | MCOT (maximum channel occupancy time) | Not supported | N | N | N |
| | lbtMode (LBT Mode) | Not supported | N | N | N |
| | lbtPdschRes (LBT PDSCH Result) | Not supported | N | N | N |
| | sfStatus (subframe status) | Not supported | N | N | N |
| | lbtDrsRes (LBT DRS Result) | Not supported | N | N | N |
| | initialPartialSF (initial partial SF) | Not supported | N | N | N |
| | lbtBufErr (LBT Buffer Error) | Not supported | N | N | N |
| | sfnSf (SFN/SF End) | Not supported | N | N | N |
| | lbtCWConfig_H (HARQ parameters for Congestion Window management) | Not supported | N | N | N |
| | lbtCWConfig_T (TB parameters for Congestion Window management) | Not supported | N | N | N |
| | lbtTrafficClass (traffic class priority for Congestion Window management) | Not supported | N | N | N |
| | lbtCWR_Rst (notification about packet reception successful or not) | Not supported | N | N | N |
| | reserved (reserved for future use) | 0 | N | N | N |
| Section Extension Commands | extType (extension type) | Supported | Y | Y | N |
| | ef (extension flag) | Supported | Y | Y | N |
| | extLen (extension length) | Supported | Y | Y | N |
| Coding of Information Elements, Application Layer, Section Extensions | ExtType=1: Beamforming Weights Extension Type | | | | |
| | bfwCompHdr (beamforming weight compression header) | Supported | Y | Y | N |
| | bfwCompParam (beamforming weight compression parameter) | Supported | Y | Y | N |
| | bfwI (beamforming weight in-phase value) | Supported | Y | Y | N |
| | bfwQ (beamforming weight quadrature value) | Supported | Y | Y | N |
| | ExtType=2: Beamforming Attributes Extension Type | | | | |
| | bfaCompHdr (beamforming attributes compression header) | Supported | Y | N | N |
| | bfAzPt (beamforming azimuth pointing parameter) | Supported | Y | N | N |
| | bfZePt (beamforming zenith pointing parameter) | Supported | Y | N | N |
| | bfAz3dd (beamforming azimuth beamwidth parameter) | Supported | Y | N | N |
| | bfZe3dd (beamforming zenith beamwidth parameter) | Supported | Y | N | N |
| | bfAzSl (beamforming azimuth sidelobe parameter) | Supported | Y | N | N |
| | bfZeSl (beamforming zenith sidelobe parameter) | Supported | Y | N | N |
| | zero-padding | Supported | Y | N | N |
| | ExtType=3: DL Precoding Extension Type | | | | |
| | codebookIndex (precoder codebook used for transmission) | Supported | Y | N | N |
| | layerID (layer ID for DL transmission) | Supported | Y | N | N |
| | txScheme (transmission scheme) | Supported | Y | N | N |
| | numLayers (number of layers used for DL transmission) | Supported | Y | N | N |
| | crsReMask (CRS resource element mask) | Supported | Y | N | N |
| | crsSymNum (CRS symbol number indication) | Supported | Y | N | N |
| | crsShift (crsShift used for DL transmission) | Supported | Y | N | N |
| | beamIdAP1 (beam id to be used for antenna port 1) | Supported | Y | N | N |
| | beamIdAP2 (beam id to be used for antenna port 2) | Supported | Y | N | N |
| | beamIdAP3 (beam id to be used for antenna port 3) | Supported | Y | N | N |
| | ExtType=4: Modulation Compression Parameters Extension Type | | | | |
| | csf (constellation shift flag) | Supported | Y | Y | N |
| | modCompScaler (modulation compression scaler value) | Supported | Y | Y | N |
| | ExtType=5: Modulation Compression Additional Parameters Extension Type | | | | |
| | mcScaleReMask (modulation compression power RE mask) | Supported | Y | N | N |
| | csf (constellation shift flag) | Supported | Y | N | N |
| | mcScaleOffset (scaling value for modulation compression) | Supported | Y | N | N |
| | ExtType=6: Non-contiguous PRB allocation in time and frequency domain | | | | |
| | rbgSize (resource block group size) | Supported | Y | N | N |
| | rbgMask (resource block group bit mask) | Supported | Y | N | N |
| | symbolMask (symbol bit mask) | Supported | Y | N | N |
| | ExtType=10: Section description for group configuration of multiple ports | | | | |
| | beamGroupType | Supported | Y | N | N |
| | numPortc | Supported | Y | N | N |
| | ExtType=11: Flexible Beamforming Weights Extension Type | | | | |
| | bfwCompHdr (beamforming weight compression header) | Supported | Y | Y | N |
| | bfwCompParam for PRB bundle x (beamforming weight compression parameter) | Supported | Y | Y | N |
| | numBundPrb (number of bundled PRBs per beamforming weights) | Supported | Y | Y | N |
| | bfwI (beamforming weight in-phase value) | Supported | Y | Y | N |
| | bfwQ (beamforming weight quadrature value) | Supported | Y | Y | N |
| | disableBFWs (disable beamforming weights) | Supported | Y | Y | N |
| | RAD (Reset After PRB Discontinuity) | Supported | Y | Y | N |
| U-plane Packet Format | dataDirection (data direction (gNB Tx/Rx)) | Supported | Y | Y | Y |
| | payloadVersion (payload version) | 001b | Y | Y | Y |
| | filterIndex (filter index) | Supported | Y | Y | Y |
| | frameId (frame identifier) | Supported | Y | Y | Y |
| | subframeId (subframe identifier) | Supported | Y | Y | Y |
| | slotId (slot identifier) | Supported | Y | Y | Y |
| | symbolId (symbol identifier) | Supported | Y | Y | Y |
| | sectionId (section identifier) | Supported | Y | Y | Y |
| | rb (resource block indicator) | 0 | Y | Y | Y |
| | symInc (symbol number increment command) | 0 | Y | Y | Y |
| | startPrbu (starting PRB of user plane section) | Supported | Y | Y | Y |
| | numPrbu (number of PRBs per user plane section) | Supported | Y | Y | Y |
| | udCompHdr (user data compression header) | Supported | Y | Y | N |
| | reserved (reserved for future use) | 0 | Y | Y | Y |
| | udCompParam (user data compression parameter) | Supported | Y | Y | N |
| | iSample (in-phase sample) | 16 | Y | Y | Y |
| | qSample (quadrature sample) | 16 | Y | Y | Y |
| S-plane | Topology configuration: C1 | Supported | N | N | N |
| | Topology configuration: C2 | Supported | N | N | N |
| | Topology configuration: C3 | Supported | Y | Y | Y |
| | Topology configuration: C4 | Supported | N | N | N |
| PTP | Full Timing Support (G.8275.1) | Supported | Y | Y | N |
| M-plane | | Not supported | N | N | N |

* The bit width of each component in the eAxC ID is configurable.

Transport Layer

O-RAN Fronthaul data can be transported over Ethernet or IPv4/IPv6. In the current implementation, the O-RAN library supports only Ethernet with VLAN.

Figure 11. Native Ethernet Frame with VLAN

Standard DPDK routines are used to perform Transport Layer functionality.

VLAN tag functionality is offloaded to NIC as per the configuration of VF (refer to Appendix A, Setup Configuration).

The transport header is defined in the O-RAN Fronthaul specification based on the eCPRI specification; refer to Table 2.

Figure 12. eCPRI Header Field Definitions

Only the ECPRI_IQ_DATA = 0x00, ECPRI_RT_CONTROL_DATA = 0x02, and ECPRI_DELAY_MEASUREMENT message types are supported.
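
These message types can be represented as a small dispatch helper. This is a sketch, not the library's header: the values for ECPRI_IQ_DATA and ECPRI_RT_CONTROL_DATA are given above, while 0x05 for ECPRI_DELAY_MEASUREMENT is assumed from the eCPRI specification (verify against lib/api/xran_pkt.h).

```c
/* Message type values from the eCPRI common header (sketch; the delay
 * measurement value 0x05 is an assumption taken from the eCPRI spec). */
enum ecpri_msg_type {
    ECPRI_IQ_DATA           = 0x00, /* U-plane IQ samples */
    ECPRI_RT_CONTROL_DATA   = 0x02, /* C-plane real-time control */
    ECPRI_DELAY_MEASUREMENT = 0x05  /* one-way delay measurement */
};

/* Hypothetical dispatch helper: returns 1 for supported types, 0 otherwise. */
static int ecpri_msg_supported(unsigned char msg_type)
{
    switch (msg_type) {
    case ECPRI_IQ_DATA:
    case ECPRI_RT_CONTROL_DATA:
    case ECPRI_DELAY_MEASUREMENT:
        return 1;
    default:
        return 0; /* all other eCPRI message types are dropped */
    }
}
```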

For one-way delay measurements, the eCPRI Header Field Definitions are the same as above up to the ecpriPayload. The one-way delay measurement message format is shown in the next figure.

Figure 13. eCPRI one-way delay measurement message

In addition, for the eCPRI one-way delay measurement message, dummy bytes must be inserted so that the overall Ethernet frame is at least 64 bytes.
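
The padding rule can be expressed as a one-line helper (illustrative only; `padding_needed` is not a library function):

```c
/* Number of dummy bytes to append so the Ethernet frame reaches the
 * 64-byte minimum required for the one-way delay measurement message. */
static unsigned padding_needed(unsigned frame_len)
{
    return frame_len < 64 ? 64 - frame_len : 0;
}
```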

The measurement ID is a one-byte value used by the sender of the request to distinguish between responses belonging to different measurements.

The action type is a one-byte value defined in Table 8 of the eCPRI Specification V2.0.

  • Action Type 0x00 corresponds to a Request.

  • Action Type 0x01 corresponds to a Request with Follow Up.

Both values are used by an eCPRI node to initiate a one-way delay measurement in the direction from its own node to another node.

  • Action Type 0x02 corresponds to a Response.

  • Action Type 0x03 is a Remote Request.

  • Action Type 0x04 is a Remote Request with Follow Up.

Values 0x03 and 0x04 are used when an eCPRI node needs to know the one-way delay from another node to itself.

  • Action Type 0x05 is the Follow_Up message.

The timestamp uses the IEEE-1588 Timestamp format with 8 bytes for the seconds part and 4 bytes for the nanoseconds part. The timestamp is a positive time with respect to the epoch.

The compensation value is used with Action Types 0x00 (Request), 0x02 (Response), or 0x05 (Follow_Up); for all others this field contains zeros. This value is the compensation time measured in nanoseconds and multiplied by 2^16, and it follows the format for the correctionField in the common message header specified by IEEE 1588-2008 clause 13.3.
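
A sketch of the conversion between nanoseconds and the correctionField format, plus a receiver-side delay computation under these definitions. The exact delay formula used by the library is not reproduced here, so treat `one_way_delay_ns` as an assumed simplification.

```c
#include <stdint.h>

/* correctionField format (IEEE 1588-2008, clause 13.3): nanoseconds
 * multiplied by 2^16, i.e. a fixed-point nanosecond value. */
static int64_t ns_to_compensation(int64_t ns)   { return ns << 16; }
static int64_t compensation_to_ns(int64_t comp) { return comp >> 16; }

/* Assumed receiver-side computation: t1 = sender timestamp (ns), tcv =
 * sender compensation (correctionField units), t2 = local receive time. */
static int64_t one_way_delay_ns(int64_t t1_ns, int64_t tcv, int64_t t2_ns)
{
    return t2_ns - (t1_ns + compensation_to_ns(tcv));
}
```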

The bit field sizes of ecpriRtcid/ecpriPcid are configurable and can be defined at the initialization stage of the O-RAN library.

Figure 14. Bit Allocations of ecpriRtcid/ecpriPcid

For ecpriSeqid, only support for the sequence number is implemented; the subsequence number is not supported.

Comments in the source code can be used to see more information on the implementation specifics of handling this field.
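A hypothetical packer illustrates how the 16-bit eAxC ID is split into CU_Port_ID, BandSector_ID, CC_ID and RU_Port_ID with configurable sub-field widths. This is a sketch, not the library's transport API; the widths are parameters rather than fixed values.

```c
#include <stdint.h>

/* Pack the 16-bit ecpriRtcid/ecpriPcid (eAxC ID) from its four sub-fields.
 * bs_bits/cc_bits/ru_bits are the configured widths of BandSector_ID,
 * CC_ID and RU_Port_ID; CU_Port_ID takes the remaining high bits. */
static uint16_t eaxc_id_pack(uint16_t cu_port, uint16_t band_sector,
                             uint16_t cc, uint16_t ru_port,
                             int bs_bits, int cc_bits, int ru_bits)
{
    return (uint16_t)((cu_port << (bs_bits + cc_bits + ru_bits)) |
                      (band_sector << (cc_bits + ru_bits)) |
                      (cc << ru_bits) |
                      ru_port);
}
```

With the default 4/4/4/4 split shown in Figure 14, `eaxc_id_pack(1, 2, 3, 4, 4, 4, 4)` yields 0x1234.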

U-plane

The following diagrams show O-RAN packet protocols’ headers and data arrangement with and without compression support.

An O-RAN packet meant for traffic with compression enabled has the Compression Header added after each Application Header. According to the O-RAN Fronthaul specification (refer to Table 2), the Compression Header is part of a repeated Section Application Header. In the O-RAN library implementation, the header is implemented as a separate structure following the Application Section Header. As a result, the Compression Header is not included in the O-RAN packet if compression is not used.

Figure 15 shows the components of an O-RAN packet.

Figure 15. O-RAN Packet Components

Radio Application Header

The next header is a common header used for time reference.

Figure 16. Radio Application Header

The radio application header specific field values are implemented as follows:

  • filterIndex = 0

  • frameId = [0:99]

  • subframeId = [0:9]

  • slotId = [0:7]

  • symbolId = [0:13]
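
A sketch of validating these field ranges (illustrative helper, not a library function):

```c
#include <stdbool.h>

/* Check the radio application header field ranges listed above:
 * filterIndex fixed to 0, frameId 0..99, subframeId 0..9,
 * slotId 0..7, symbolId 0..13. */
static bool radio_app_hdr_valid(int filterIndex, int frameId,
                                int subframeId, int slotId, int symbolId)
{
    return filterIndex == 0 &&
           frameId >= 0 && frameId <= 99 &&
           subframeId >= 0 && subframeId <= 9 &&
           slotId >= 0 && slotId <= 7 &&
           symbolId >= 0 && symbolId <= 13;
}
```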

Data Section Application Data Header

The Common Radio Application Header is followed by the Application Header that is repeated for each Data Section within the eCPRI message. The relevant section of the O-RAN packet is shown in color.

Figure 17. Data Section Application Data Header

A single section is used per Ethernet packet with IQ samples; startPrbu is set to 0 and numPrbu equals the number of RBs used:

  • The rb field is not used (value 0).

  • symInc is not used (value 0).

Data Payload

An O-RAN packet data payload contains several PRBs. Each PRB is built of 12 IQ samples. Flexible IQ bit width is supported. If compression is enabled, udCompParam is included in the data payload. The data section is shown in color.
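
Under these rules, the payload size of one data section can be estimated as follows. The one-byte size of udCompParam is an assumption for BFP compression; this is a rough sketch, not the library's buffer sizing code.

```c
/* Rough payload-size estimate for one data section: each PRB carries
 * 12 IQ samples (12 I + 12 Q values) at iq_width bits each; when
 * compression is enabled, one udCompParam byte (assumed) is added per PRB. */
static unsigned payload_bytes(unsigned num_prb, unsigned iq_width,
                              int compressed)
{
    unsigned per_prb = (12 * 2 * iq_width) / 8; /* bits -> bytes */
    if (compressed)
        per_prb += 1;                           /* udCompParam */
    return num_prb * per_prb;
}
```

For example, one PRB of uncompressed 16-bit IQ occupies 48 bytes, while one PRB of 9-bit BFP-compressed IQ occupies 28 bytes including the exponent byte.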

Figure 18. Data Payload

C-plane

C-Plane messages are encapsulated using a two-layered header approach. The first layer consists of an eCPRI standard header, including corresponding fields used to indicate the message type, while the second layer is an application layer, including necessary fields for control and synchronization. Within the application layer, a “section” defines the characteristics of U-plane data to be transferred or received from a beam with one pattern id. In general, the transport header, application header, and sections are all intended to be aligned on 4-byte boundaries and are transmitted in “network byte order” meaning the most significant byte of a multi-byte parameter is transmitted first.

Table 9 is a list of sections currently supported.

Table 9. Section Types

| Section Type | Target Scenario | Remarks |
|--------------|-----------------|---------|
| 0 | Unused Resource Blocks or symbols in Downlink or Uplink | Not supported |
| 1 | Most DL/UL radio channels | Supported |
| 2 | Reserved for future use | N/A |
| 3 | PRACH and mixed-numerology channels | Only PRACH is supported; mixed numerology is not supported |
| 4 | Reserved for future use | Not supported |
| 5 | UE scheduling information (UE-ID assignment to section) | Not supported |
| 6 | Channel information | Not supported |
| 7 | LAA | Not supported |
| 8-255 | Reserved for future use | N/A |

Section extensions are not supported in this release.

The definition of the C-Plane packet can be found in lib/api/xran_pkt_cp.h, and the fields are appropriately re-ordered in order to apply the conversion to network byte order after setting values. The comments in the source code of the O-RAN lib can be used to see more information on the implementation specifics of handling sections as well as particular fields. Additional changes may be needed on the C-plane to perform IOT with an O-RU depending on the scenario.

Ethernet Header

Refer to Figure 11.

eCPRI Header

Refer to Figure 12.

This header is defined as the structure of xran_ecpri_hdr in lib/api/xran_pkt.h.

Radio Application Common Header

The Radio Application Common Header is used for time reference. Its structure is shown in Figure 19.

Figure 19. Radio Application Common Header

This header is defined as the structure of xran_cp_radioapp_common_header in lib/api/xran_pkt_cp.h.

Note: The payload version in this header is fixed to XRAN_PAYLOAD_VER (defined as 1) in this release.

Section Type 0 Structure

Figure 20 describes the structure of Section Type 0.

Figure 20. Section Type 0 Structure

In Figure 19 through Figure 23, the color yellow means it is a transport header; the color pink is the radio application header; others are repeated sections.

Section Type 1 Structure

Figure 21 describes the structure of Section Type 1.

Figure 21. Section Type 1 Structure

The Section Type 1 message has two additional parameters in addition to the radio application common header:

  • udCompHdr : defined as the structure of xran_radioapp_udComp_header

  • reserved : fixed by zero

Section type 1 is defined as the structure of xran_cp_radioapp_section1, and this part can be repeated to have multiple sections.

The whole Section Type 1 message can be described in this summary:

xran_cp_radioapp_common_header

xran_cp_radioapp_section1_header

xran_cp_radioapp_section1

……

xran_cp_radioapp_section1

Note: Even though the API function can support composing multiple sections in a C-Plane message, the current implementation is limited to composing a single section per C-Plane message.

Section Type 3 Structure

Figure 22 describes the structure of Section Type 3.

Figure 22. Section Type 3 Structure

The Section Type 3 message has the following four additional parameters in addition to the radio application common header.

  • timeOffset

  • frameStructure: defined as the structure of xran_cp_radioapp_frameStructure

  • cpLength

  • udCompHdr: defined as the structure of xran_radioapp_udComp_header

Section Type 3 is defined as the structure of xran_cp_radioapp_section3 and this part can be repeated to have multiple sections.

The whole Section Type 3 message can be described in this summary:

xran_cp_radioapp_common_header

xran_cp_radioapp_section3_header

xran_cp_radioapp_section3

……

xran_cp_radioapp_section3

Section Type 5 Structure

Figure 23 describes the structure of Section Type 5.

Figure 23. Section Type 5 Structure

Section Type 6 Structure

Figure 24 describes the structure of Section Type 6.

Figure 24. Section Type 6 Structure

O-RAN Library Design

The O-RAN Library consists of multiple modules where different functionality is encapsulated. The complete list of all *.c and *.h files, as well as the Makefile, for the O-RAN (aka FHI Lib) F release is:

├── app

│ ├── dpdk.sh

│ ├── gen_test.m

│ ├── Makefile

│ ├── ifft_in.txt

│ ├── run_o_du.sh

│ ├── run_o_ru.sh

│ ├── src

│ │ ├── app_bbu_main.c

│ │ ├── app_bbu_pool.c

│ │ ├── app_bbu_pool.h

│ │ ├── app_dl_bbu_pool_tasks.c

│ │ ├── app_io_fh_xran.c

│ │ ├── app_io_fh_xran.h

│ │ ├── app_profile_xran.c

│ │ ├── app_profile_xran.h

│ │ ├── app_ul_bbu_pool_tasks.c

│ │ ├── aux_line.c

│ │ ├── aux_line.h

│ │ ├── common.c

│ │ ├── common.h

│ │ ├── config.c

│ │ ├── config.h

│ │ ├── debug.h

│ │ ├── ebbu_pool_cfg.c

│ │ ├── ebbu_pool_cfg.h

│ │ ├── sample-app.c

│ │ └── xran_mlog_task_id.h

│ └── usecase

│ ├── cat_a

│ ├── cat_b

│ ├── dss

│ ├── lte_a

│ ├── lte_b

├── build.sh

├── lib

│ ├── api

│ │ ├── xran_compression.h

│ │ ├── xran_compression.hpp

│ │ ├── xran_cp_api.h

│ │ ├── xran_ecpri_owd_measurements.h

│ │ ├── xran_fh_o_du.h

│ │ ├── xran_fh_o_ru.h

│ │ ├── xran_lib_mlog_tasks_id.h

│ │ ├── xran_mlog_lnx.h

│ │ ├── xran_pkt_cp.h

│ │ ├── xran_pkt.h

│ │ ├── xran_pkt_up.h

│ │ ├── xran_sync_api.h

│ │ ├── xran_timer.h

│ │ ├── xran_transport.h

│ │ └── xran_up_api.h

│ ├── ethernet

│ │ ├── ethdi.c

│ │ ├── ethdi.h

│ │ ├── ethernet.c

│ │ └── ethernet.h

│ ├── Makefile

│ └── src

│ ├── xran_bfp_byte_packing_utils.hpp

│ ├── xran_bfp_cplane8.cpp

│ ├── xran_bfp_cplane8_snc.cpp

│ ├── xran_bfp_cplane16.cpp

│ ├── xran_bfp_cplane16_snc.cpp

│ ├── xran_bfp_cplane32.cpp

│ ├── xran_bfp_cplane32_snc.cpp

│ ├── xran_bfp_cplane64.cpp

│ ├── xran_bfp_cplane64_snc.cpp

│ ├── xran_bfp_ref.cpp

│ ├── xran_bfp_uplane.cpp

│ ├── xran_bfp_uplane_9b16rb.cpp

│ ├── xran_bfp_uplane_snc.cpp

│ ├── xran_bfp_utils.hpp

│ ├── xran_cb_proc.c

│ ├── xran_cb_proc.h

│ ├── xran_common.c

│ ├── xran_common.h

│ ├── xran_compression.cpp

│ ├── xran_compression_snc.cpp

│ ├── xran_cp_api.c

│ ├── xran_cp_proc.c

│ ├── xran_cp_proc.h

│ ├── xran_delay_measurement.c

│ ├── xran_dev.c

│ ├── xran_dev.h

│ ├── xran_frame_struct.c

│ ├── xran_frame_struct.h

│ ├── xran_main.c

│ ├── xran_main.h

│ ├── xran_mem_mgr.c

│ ├── xran_mem_mgr.h

│ ├── xran_mod_compression.cpp

│ ├── xran_mod_compression.h

│ ├── xran_prach_cfg.h

│ ├── xran_printf.h

│ ├── xran_rx_proc.c

│ ├── xran_rx_proc.h

│ ├── xran_sync_api.c

│ ├── xran_timer.c

│ ├── xran_transport.c

│ ├── xran_tx_proc.c

│ ├── xran_tx_proc.h

│ ├── xran_ul_tables.c

│ └── xran_up_api.c

└── test

├── common

│ ├── common.cpp

│ ├── common.hpp

│ ├── common_typedef_xran.h

│ ├── json.hpp

│ ├── MIT_License.txt

│ ├── xranlib_unit_test_main.cc

│ └── xran_lib_wrap.hpp

├── master.py

├── readme.txt

└── test_xran

├── c_plane_tests.cc

├── chain_tests.cc

├── compander_functional.cc

├── conf.json

├── init_sys_functional.cc

├── Makefile

├── mod_compression_unit_test.cc

├── prach_functional.cc

├── prach_performance.cc

├── unittests.cc

├── u_plane_functional.cc

└── u_plane_performance.cc

General Introduction

The O-RAN FHI Library functionality is broken down into two main sections:

  • O-RAN specific packet handling (src)

  • Ethernet and supporting functionality (ethernet)

External functions and structures are available via a set of header files in the API folder.

This library depends on DPDK primitives to perform Ethernet networking in user space, including initialization and control of Ethernet ports. Ethernet ports are expected to be SR-IOV virtual functions (VF) but can also be physical functions (PF).

This library is expected to be included in the project via xran_fh_o_du.h, statically compiled and linked with the L1 application as well as DPDK libraries. The O-RAN packet processing-specific functionality is encapsulated into this library and not exposed to the rest of the 5G NR pipeline.

This way, O-RAN specific changes are decoupled from the L1 pipeline. As a result, the design and implementation of the 5G L1 pipeline code and O-RAN FHI library can be done in parallel, provided the defined interface is not modified.

Ethernet consists of two modules:

  • Ethernet implements O-RAN specific HW Ethernet initialization, close, send and receive

  • ethdi provides Ethernet level software primitives to handle O-RAN packet exchange

The O-RAN layer implements the next set of functionalities:

  • Common code specific for both C-plane and U-plane as well as TX and RX

  • Implementation of C-plane API available within the library and externally

  • The primary function where general library initialization and configuration are performed

  • Module to provide the status of PTP synchronization

  • Timing module where system time is polled

  • eCPRI specific transport layer functions

  • APIs to handle U-plane packets

  • A set of utility modules for debugging (printf) and data tables are included as well.

Figure 25. Illustration of O-RAN Sublayers

A detailed description of functions and input/output arguments, as well as key data structures, can be found in the Doxygen file for the FlexRAN 5G NR release; refer to Table 2. In this document, supplemental information is provided for the overall design and implementation assumptions. (Available only outside of the Community Version.)

Initialization and Close

An example of the initialization sequence can be found in the sample application code. It consists of the following steps:

1. Set up the structure struct xran_fh_init according to the configuration.

2. Call xran_init() to instantiate the O-RAN lib memory model and threads. The function returns a pointer to the O-RAN handle, which is used for subsequent configuration functions.

3. Initialize the memory buffers used for L1 and O-RAN exchange of information.

4. Assign callback functions for (one) TTI event and for the reception of half a slot of symbols (7 symbols) and a full slot of symbols (14 symbols).

5. Call xran_open() to initialize the PRACH configuration, initialize DPDK, and launch the xRAN timing thread.

6. Call xran_start() to start processing O-RAN packets for DL and UL.

After this is complete, the 5G L1 runs with the O-RAN Fronthaul interface. During run time, the corresponding callback is called for every TTI event and for packet reception in the UL direction. OTA time information such as frame id, subframe id, and slot id can be obtained as a result of the synchronization of the L1 pipeline to GPS time.

To stop and close the interface, perform this sequence of steps:

1. Call xran_stop() to stop the processing of DL and UL.

2. Call xran_close() to remove usage of xRAN resources.

3. Call xran_mm_destroy() to destroy the memory management subsystem.

After this session is complete, a restart of the full L1 application is required. The current version of the library does not support multiple sessions without a restart of the full L1 application.

Configuration

The O-RAN library configuration is provided in a set of structures, such as struct xran_fh_init and struct xran_fh_config. The sample application gives an example of a test configuration used for LTE and 5G NR mmWave and Sub 6. The sample application folder /app/usecase/ contains a set of examples for different Radio Access technologies (LTE|5G NR), different categories (A|B), numerologies (0, 1, 3), and bandwidths (5, 10, 20, 100 MHz).

Note: Some configuration options are not used in the F release and are reserved for future use.

The following options are available:

Structure struct xran_fh_init:

  • Number of CC and corresponding settings for each

  • Core allocation for O-RAN

  • Ethernet port allocation

  • O-DU and RU Ethernet Mac address

  • Timing constraints of O-DU and O-RU

  • Debug features

Structure struct xran_fh_config:

  • Number of eAxC

  • TTI Callback function and parameters

  • PRACH 5G NR specific settings

  • TDD frame configuration

  • BBU specific configuration

  • RU specific configuration

From an implementation perspective:

xran_init() performs initialization of the O-RAN FHI library and interface according to the struct xran_fh_init information supplied at application start-up:

  • Init DPDK with corresponding networking ports and core assignment

  • Init mbuf pools

  • Init DPDK timers and DPDK rings for internal packet processing

  • Instantiates the O-RAN FH thread performing:

    • Timing processing (xran_timing_source_thread())

    • ETH PMD (process_dpdk_io())

    • IO XRAN-PHY exchange (ring_processing_func())

xran_open() performs additional configuration as per run scenario:

  • PRACH configuration

  • C-plane initialization

The function xran_close() frees resources and allows a potential restart of the front haul interface with a different scenario.

Start/Stop

The functions xran_start()/xran_stop() enable/disable packet processing for both DL and UL. This triggers execution of callbacks into the L1 application.

Data Exchange

Exchange of IQ samples, as well as C-plane specific information, is performed using a set of buffers allocated by the xRAN library from DPDK memory and shared with the L1 application. Buffers are allocated as standard mbuf structures, and DPDK pools are used to manage allocation and freeing of resources. Shared buffers are allocated at the init stage and are expected to be reused within 80 TTIs (10 ms).

The O-RAN protocol requires U-plane IQ data to be transferred in network byte order, and the L1 application handles IQ sample data in CPU byte order, requiring a swap. The PHY BBU pooling tasks perform copy and byte order swap during packet processing.
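The byte-order conversion described above can be sketched as a stand-alone routine; this is an illustration of the concept only, not the library's optimized implementation (which performs the swap inside the BBU pooling tasks, typically with SIMD):

```c
#include <stdint.h>
#include <stddef.h>
#include <arpa/inet.h>

/* Convert a buffer of 16-bit I/Q words between CPU byte order and
 * network (big-endian) byte order in place. Each complex sample is a
 * pair of int16_t values (I, Q), so nSamples complex samples occupy
 * 2 * nSamples 16-bit words. */
static void iq_swap_16(uint16_t *iq, size_t nWords)
{
    for (size_t i = 0; i < nWords; i++)
        iq[i] = htons(iq[i]); /* htons() is its own inverse, so the same
                                 routine serves both TX and RX directions */
}
```

Because the swap is an involution, applying it twice restores the original CPU-order samples, which is a convenient property for loopback testing.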

C-plane Information Settings

The interface between the O-RAN library and the PHY is defined via struct xran_prb_map and is similar to the data plane: the same mbuf memory is used to allocate a memory map of PRBs for each TTI:

/** Beamforming weights for a single stream for each PRB, given the number of
antenna elements */
struct xran_cp_bf_weight {
    int16_t nAntElmTRx;        /**< num TRX for this allocation */
    int16_t ext_section_sz;    /**< extType section size */
    int8_t  *p_ext_start;      /**< pointer to start of buffer for full C-plane packet */
    int8_t  *p_ext_section;    /**< pointer to form extType */

    /* For ext 11 */
    uint8_t bfwCompMeth;       /* Compression method for BFW */
    uint8_t bfwIqWidth;        /* Bit width of BFW */
    uint8_t numSetBFWs;        /* Total number of beamforming weight sets (L) */
    uint8_t numBundPrb;        /* Number of bundled PRBs, 0 means to use ext1 */
    uint8_t RAD;
    uint8_t disableBFWs;
    int16_t maxExtBufSize;     /* Maximum space of external buffer */
    struct xran_ext11_bfw_info bfw[XRAN_MAX_SET_BFWS];
};

/** PRB element structure */
struct xran_prb_elm {
    int16_t nRBStart;          /**< start RB of RB allocation */
    int16_t nRBSize;           /**< number of RBs used */
    int16_t nStartSymb;        /**< start symbol ID */
    int16_t numSymb;           /**< number of symbols */
    int16_t nBeamIndex;        /**< beam index for given PRB */
    int16_t bf_weight_update;  /**< need to update beam weights or not */
    int16_t compMethod;        /**< compression index for given PRB */
    int16_t iqWidth;           /**< compression bit width for given PRB */
    uint16_t ScaleFactor;      /**< scale factor for modulation compression */
    int16_t reMask;            /**< 12-bit RE mask for modulation compression */
    int16_t BeamFormingType;   /**< index-based, weights-based, or attribute-based beamforming */
    int16_t nSecDesc[XRAN_NUM_OF_SYMBOL_PER_SLOT]; /**< number of section descriptors per symbol */
    struct xran_section_desc *p_sec_desc[XRAN_NUM_OF_SYMBOL_PER_SLOT][XRAN_MAX_FRAGMENT];
                               /**< section descriptors to U-plane data for given RBs */
    struct xran_cp_bf_weight bf_weight; /**< beamforming information relevant for given RBs */
    union {
        struct xran_cp_bf_attribute bf_attribute;
        struct xran_cp_bf_precoding bf_precoding;
    };
};

/** PRB map structure */
struct xran_prb_map {
    uint8_t dir;           /**< DL or UL direction */
    uint8_t xran_port;     /**< O-RAN id of given RU [0-(XRAN_PORTS_NUM-1)] */
    uint16_t band_id;      /**< O-RAN band id */
    uint16_t cc_id;        /**< component carrier id [0-(XRAN_MAX_SECTOR_NR-1)] */
    uint16_t ru_port_id;   /**< RU device antenna port id [0-(XRAN_MAX_ANTENNA_NR-1)] */
    uint16_t tti_id;       /**< O-RAN slot id [0-(max tti-1)] */
    uint8_t start_sym_id;  /**< start symbol id [0-13] */
    uint32_t nPrbElm;      /**< total number of PRB elements for given map [0-(XRAN_MAX_SECTIONS_PER_SLOT-1)] */
    struct xran_prb_elm prbMap[XRAN_MAX_SECTIONS_PER_SLOT];
};

C-plane sections are expected to be provided by the L1 pipeline. If 100% of the RBs are used, they are allocated as a single-element RB map that spans all symbols. Dynamic RB allocation is performed based on the C-plane configuration.

The O-RAN library requires the content of the PRB map to be sorted in increasing order of PRBs first and then symbols.
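The required ordering can be produced with a standard sort. The sketch below uses a simplified local stand-in for struct xran_prb_elm (only the two sort-key fields) to illustrate a suitable comparator; it is not the library's code:

```c
#include <stdint.h>
#include <stdlib.h>

/* Simplified stand-in for struct xran_prb_elm: only the sort keys. */
struct prb_elm_key {
    int16_t nRBStart;   /* start RB of the allocation */
    int16_t nStartSymb; /* start symbol ID */
};

/* Order by PRB start first, then by start symbol, both increasing. */
static int prb_elm_cmp(const void *a, const void *b)
{
    const struct prb_elm_key *x = a;
    const struct prb_elm_key *y = b;
    if (x->nRBStart != y->nRBStart)
        return x->nRBStart - y->nRBStart;
    return x->nStartSymb - y->nStartSymb;
}

/* Usage: qsort(map, nPrbElm, sizeof(map[0]), prb_elm_cmp); */
```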

Memory Management

Memory used for the exchange of IQ data as well as control information is controlled by the O-RAN library. The L1 application at the init stage performs:

  • init memory management subsystem

  • init buffer management subsystem (via DPDK pools)

  • allocate buffers (mbuf) for each CC, antenna, symbol, and direction (DL, UL, PRACH) for XRAN_N_FE_BUF_LEN TTIs.

  • buffers are reused for every XRAN_N_FE_BUF_LEN TTIs
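The buffer reuse above amounts to indexing the per-TTI buffer arrays modulo XRAN_N_FE_BUF_LEN. A minimal sketch of this circular indexing (the value 20 is only an illustrative assumption; the real constant comes from the library headers):

```c
#include <stdint.h>

#define XRAN_N_FE_BUF_LEN 20 /* assumed depth for illustration only */

/* Map an ever-increasing TTI counter onto the circular buffer slot,
 * so the same XRAN_N_FE_BUF_LEN buffer sets are reused indefinitely. */
static inline uint32_t tti_to_buf_idx(uint32_t tti)
{
    return tti % XRAN_N_FE_BUF_LEN;
}
```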

After the session is completed, the application can free buffers and destroy the memory management subsystem.

From an implementation perspective, the O-RAN library uses a standard mbuf primitive and allocates a pool of buffers for each sector. This function is performed using rte_pktmbuf_pool_create(), rte_pktmbuf_alloc(), and rte_pktmbuf_append() to allocate one buffer per symbol for the mmWave case. More information on mbuf and DPDK pools can be found in the DPDK documentation.

In the current implementation, the number of mbuf buffers shared with the L1 application is the same as the number of buffers used to send to and receive from the Ethernet port. Memory copy operations are not required if the packet size is smaller than or equal to the MTU. Future versions of the O-RAN library are expected to remove the memory copy requirement for packets whose size is larger than the MTU.
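Whether a symbol's worth of IQ data fits into a single Ethernet frame can be estimated from the PRB count and IQ bit width. The sketch below ignores Ethernet/eCPRI/application-header overhead and compression parameter bytes, so it is an approximation for illustration only:

```c
#include <stdint.h>
#include <stdbool.h>

/* Approximate U-plane IQ payload size in bytes for one symbol:
 * nPrb PRBs * 12 REs * 2 (I and Q) * iqWidth bits, rounded up to bytes.
 * Header overhead and compression parameters are not included. */
static uint32_t uplane_iq_bytes(uint32_t nPrb, uint32_t iqWidth)
{
    return (nPrb * 12u * 2u * iqWidth + 7u) / 8u;
}

static bool fits_in_mtu(uint32_t nPrb, uint32_t iqWidth, uint32_t mtu)
{
    return uplane_iq_bytes(nPrb, iqWidth) <= mtu;
}
```

For example, 66 PRBs of uncompressed 16-bit samples (the mmWave 100 MHz case) occupy 3168 bytes of IQ payload, which exceeds a standard 1500-byte MTU but fits comfortably into a jumbo frame.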

External Interface Memory

The O-RAN library header file defines a set of structures to simplify access to memory buffers used for IQ data::

struct xran_flat_buffer {
    uint32_t nElementLenInBytes;
    uint32_t nNumberOfElements;
    uint32_t nOffsetInBytes;
    uint32_t nIsPhyAddr;
    uint8_t *pData;
    void *pCtrl;
};

struct xran_buffer_list {
    uint32_t nNumBuffers;
    struct xran_flat_buffer *pBuffers;
    void *pUserData;
    void *pPrivateMetaData;
};

struct xran_io_buf_ctrl {
    /* -1 - this subframe is not used in the current frame format
        0 - this subframe can be transmitted, i.e., data is ready
        1 - this subframe is waiting transmission, i.e., data is not ready
       10 - DL transmission missed its deadline; when the FE needs this
            subframe data but bValid is still 1, set bValid to 10. */
    int32_t bValid;          // for UL rx, it is the subframe index
    int32_t nSegToBeGen;
    int32_t nSegGenerated;   // how many data segments are generated by DL LTE
                             // processing or received from FE;
                             // -1 means the DL packet to be transmitted is not ready in BS
    int32_t nSegTransferred; // number of data segments transmitted or received
    struct rte_mbuf *pData[N_MAX_BUFFER_SEGMENT]; // points to DPDK-allocated memory pool
    struct xran_buffer_list sBufferList;
};

There is no explicit requirement for the user to organize the set of buffers in this particular way. From a compatibility perspective, it is useful to follow the existing design of the 5G NR l1app used for the Front Haul FPGA and define the structures shared between the L1 and the O-RAN library as shown::

struct bbu_xran_io_if {
    void *nInstanceHandle[XRAN_PORTS_NUM][XRAN_MAX_SECTOR_NR]; /**< instance per O-RAN port per CC */
    uint32_t nBufPoolIndex[XRAN_PORTS_NUM][XRAN_MAX_SECTOR_NR][MAX_SW_XRAN_INTERFACE_NUM]; /**< unique buffer pool */
    uint16_t nInstanceNum[XRAN_PORTS_NUM]; /**< instance is equivalent to CC */
    uint16_t DynamicSectionEna;
    uint32_t nPhaseCompFlag;
    int32_t num_o_ru;
    int32_t num_cc_per_port[XRAN_PORTS_NUM];
    int32_t map_cell_id2port[XRAN_PORTS_NUM][XRAN_MAX_SECTOR_NR];
    struct xran_io_shared_ctrl ioCtrl[XRAN_PORTS_NUM]; /**< for each O-RU port */
    struct xran_cb_tag RxCbTag[XRAN_PORTS_NUM][XRAN_MAX_SECTOR_NR];
    struct xran_cb_tag PrachCbTag[XRAN_PORTS_NUM][XRAN_MAX_SECTOR_NR];
    struct xran_cb_tag SrsCbTag[XRAN_PORTS_NUM][XRAN_MAX_SECTOR_NR];
};

struct xran_io_shared_ctrl {
    /* io struct */
    struct xran_io_buf_ctrl sFrontHaulTxBbuIoBufCtrl[XRAN_N_FE_BUF_LEN][XRAN_MAX_SECTOR_NR][XRAN_MAX_ANTENNA_NR];
    struct xran_io_buf_ctrl sFrontHaulTxPrbMapBbuIoBufCtrl[XRAN_N_FE_BUF_LEN][XRAN_MAX_SECTOR_NR][XRAN_MAX_ANTENNA_NR];
    struct xran_io_buf_ctrl sFrontHaulRxBbuIoBufCtrl[XRAN_N_FE_BUF_LEN][XRAN_MAX_SECTOR_NR][XRAN_MAX_ANTENNA_NR];
    struct xran_io_buf_ctrl sFrontHaulRxPrbMapBbuIoBufCtrl[XRAN_N_FE_BUF_LEN][XRAN_MAX_SECTOR_NR][XRAN_MAX_ANTENNA_NR];
    struct xran_io_buf_ctrl sFHPrachRxBbuIoBufCtrl[XRAN_N_FE_BUF_LEN][XRAN_MAX_SECTOR_NR][XRAN_MAX_ANTENNA_NR];

    /* Cat B */
    struct xran_io_buf_ctrl sFHSrsRxBbuIoBufCtrl[XRAN_N_FE_BUF_LEN][XRAN_MAX_SECTOR_NR][XRAN_MAX_ANT_ARRAY_ELM_NR];
    struct xran_io_buf_ctrl sFHSrsRxPrbMapBbuIoBufCtrl[XRAN_N_FE_BUF_LEN][XRAN_MAX_SECTOR_NR][XRAN_MAX_ANT_ARRAY_ELM_NR];

    /* buffer lists */
    struct xran_flat_buffer sFrontHaulTxBuffers[XRAN_N_FE_BUF_LEN][XRAN_MAX_SECTOR_NR][XRAN_MAX_ANTENNA_NR][XRAN_NUM_OF_SYMBOL_PER_SLOT];
    struct xran_flat_buffer sFrontHaulTxPrbMapBuffers[XRAN_N_FE_BUF_LEN][XRAN_MAX_SECTOR_NR][XRAN_MAX_ANTENNA_NR];
    struct xran_flat_buffer sFrontHaulRxBuffers[XRAN_N_FE_BUF_LEN][XRAN_MAX_SECTOR_NR][XRAN_MAX_ANTENNA_NR][XRAN_NUM_OF_SYMBOL_PER_SLOT];
    struct xran_flat_buffer sFrontHaulRxPrbMapBuffers[XRAN_N_FE_BUF_LEN][XRAN_MAX_SECTOR_NR][XRAN_MAX_ANTENNA_NR];
    struct xran_flat_buffer sFHPrachRxBuffers[XRAN_N_FE_BUF_LEN][XRAN_MAX_SECTOR_NR][XRAN_MAX_ANTENNA_NR][XRAN_NUM_OF_SYMBOL_PER_SLOT];

    /* Cat B SRS buffers */
    struct xran_flat_buffer sFHSrsRxBuffers[XRAN_N_FE_BUF_LEN][XRAN_MAX_SECTOR_NR][XRAN_MAX_ANT_ARRAY_ELM_NR][XRAN_MAX_NUM_OF_SRS_SYMBOL_PER_SLOT];
    struct xran_flat_buffer sFHSrsRxPrbMapBuffers[XRAN_N_FE_BUF_LEN][XRAN_MAX_SECTOR_NR][XRAN_MAX_ANT_ARRAY_ELM_NR];
};

The Doxygen file and xran_fh_o_du.h provide more details on the definition and usage of these structures. Refer to Table 2 for the FlexRAN 5G NR Reference Solution RefPHY (Doxygen) (not available in the community version).

O-RAN Specific Functionality

Front haul interface implementation in the general case is abstracted away using the interface defined in xran_fh_o_du.h

The L1 application is not required to access O-RAN protocol primitives (eCPRI header, application header, and others) directly. It is recommended to use the interface to remove dependencies between different software modules such as the L1 pipeline and the O-RAN library.

External API

The U-plane and C-plane APIs can be used directly from the application if such an option is required. The following set of header files can be exported and called directly:

xran_fh_o_du.h – O-RAN main header file for O-DU scenario

xran_cp_api.h – Control plane functions

xran_pkt_cp.h – O-RAN control plane packet definition

xran_pkt.h – O-RAN packet definition

xran_pkt_up.h – O-RAN User plane packet definition

xran_sync_api.h – API functions to check PTP status

xran_timer.h – API for timing

xran_transport.h – eCPRI transport layer definition and API

xran_up_api.h – user plane functions and definitions

xran_compression.h – interface to compression/decompression functions

Source code comments can provide more details on functions and structures available.

C-plane

Implementation of the C-plane set of functions is defined in xran_cp_api.c and is used to prepare the content of C-plane packets according to the given configuration. Users can enable/disable generation of C-plane messages using the enableCP field in the struct xran_fh_init structure during initialization of the O-RAN front haul. Generation of C-plane messages for DL and UL is slot-based, and timing can be controlled using the O-DU settings according to Table 4.

The C-plane module contains:

  • initialization of C-plane database to keep track of allocation of resources

  • Code to prepare a C-plane packet for TX (O-DU):

    • append eCPRI header

    • append radio application header

    • append control section header

    • append control section

  • Parser of C-plane packet for RX (O-RU emulation)

  • parses and checks Section 1 and Section 3 packet content

Sending and receiving packets is performed using O-RAN ethdi sublayer functions.

More information on function arguments and parameters can be found in the comments for the corresponding source code.

Creating a C-Plane Packet
  1. API and Data Structures

A C-Plane message can be composed using the following API::

int xran_prepare_ctrl_pkt(struct rte_mbuf *mbuf,
        struct xran_cp_gen_params *params,
        uint8_t CC_ID, uint8_t Ant_ID, uint8_t seq_id);

mbuf is a pointer to a DPDK packet buffer, which is allocated by the caller.

params is a pointer to the structure that holds the parameters to create the message.

CC_ID specifies the component carrier index, and Ant_ID specifies the antenna port index (RU port index).

seq_id is the sequence index for the message.

The parameters to create a C-Plane message are defined in the structure xran_cp_gen_params, shown below::

struct xran_cp_gen_params {
    uint8_t dir;
    uint8_t sectionType;
    uint16_t numSections;
    struct xran_cp_header_params hdr;
    struct xran_section_gen_info *sections;
};

dir is the direction of the C-Plane message to be generated. Available parameters are defined as XRAN_DIR_UL and XRAN_DIR_DL.

sectionType is the section type of the C-Plane message to generate; the O-RAN specification defines that all sections in a C-Plane message shall have the same section type. If different section types are required, they shall be sent in separate C-Plane messages. Available types of sections are defined as XRAN_CP_SECTIONTYPE_x. Refer to Table 2, O-RAN Specification, Table 5-2 Section Types.

numSections is the total number of sections to generate, i.e., the number of entries in the sections array (struct xran_section_gen_info).

hdr is the structure holding the information to generate the radio application and section headers of the C-Plane message. It is defined as the structure xran_cp_header_params. Not all parameters in this structure are used for generation; the required parameters differ slightly by section type, as described in Tables 10 and 11. References in the remarks column are the corresponding chapter numbers in the O-RAN Fronthaul Working Group Control, User, and Synchronization Plane Specification in Table 2.

Table 10. struct xran_cp_header_params – Common Radio Application Header

| Parameter  | Description                                                            | Remarks |
|------------|------------------------------------------------------------------------|---------|
| filterIdx  | Filter Index. Available values are defined as XRAN_FILTERINDEX_xxxxx.  | 5.4.4.3 |
| frameId    | Frame Index. It is modulo 256 of the frame number.                     | 5.4.4.4 |
| subframeId | Sub-frame Index.                                                       | 5.4.4.5 |
| slotId     | Slot Index. The maximum number is 15, as defined in the specification. | 5.4.4.6 |
| startSymId | Start Symbol Index.                                                    | 5.4.4.7 |

Table 11. struct xran_cp_header_params – Section Specific Parameters

| Parameter  | Description                                                                                                     | Applicable Section Types | Remarks            |
|------------|-----------------------------------------------------------------------------------------------------------------|--------------------------|--------------------|
| fftSize    | FFT size in the frame structure. Available values are defined as XRAN_FFTSIZE_xxxx.                             | 0, 3                     | 5.4.4.13           |
| Scs        | Subcarrier spacing in the frame structure. Available values are defined as XRAN_SCS_xxxx.                       | 0, 3                     | 5.4.4.13           |
| iqWidth    | I/Q bit width in the user data compression header. Should be set to zero for 16 bits.                           | 1, 3, 5                  | 5.4.4.10, 6.3.3.13 |
| compMeth   | Compression method in the user data compression header. Available values are defined as XRAN_COMPMETHOD_xxxx.   | 1, 3, 5                  | 5.4.4.10, 6.3.3.13 |
| numUEs     | Number of UEs. Applies to section type 6 and is not supported in this release.                                  | 6                        | 5.4.4.11           |
| timeOffset | Time Offset. Time offset from the start of the slot to the start of the Cyclic Prefix.                          | 0, 3                     | 5.4.4.12           |
| cpLength   | Cyclic Prefix Length.                                                                                           | 0, 3                     | 5.4.4.14           |

Note:

1.Only section types 1 and 3 are supported in the current release.

2.References in the remarks column are corresponding Chapter numbers in the O-RAN Fronthaul Working Group Control, User, and Synchronization Plane Specification in Table 2.

sections is a pointer to the array of structures holding the parameters for the section(s), and it is defined as below::

struct xran_section_gen_info {
    struct xran_section_info info;
    uint32_t exDataSize;
    struct {
        uint16_t type;
        uint16_t len;
        void *data;
    } exData[XRAN_MAX_NUM_EXTENSIONS];
};

info is the structure holding the information to generate a section, defined as the structure xran_section_info. As with xran_cp_header_params, not all parameters are required to generate a section; Table 12 describes which parameters are required for each section type.

Table 12. Parameters for Sections

| Parameter  | Description                                                                                                                           | Applicable Section Types | Remarks  |
|------------|---------------------------------------------------------------------------------------------------------------------------------------|--------------------------|----------|
| Id         | Section Identifier.                                                                                                                   | 0, 1, 3, 5, 6            | 5.4.5.1  |
| Rb         | Resource Block Indicator. Available values are defined as XRAN_RBIND_xxxx.                                                            | 0, 1, 3, 5, 6            | 5.4.5.2  |
| symInc     | Symbol number increment command. Available values are defined as XRAN_SYMBOLNUMBER_xxxx.                                              | 0, 1, 3, 5, 6            | 5.4.5.3  |
| startPrbc  | Starting PRB of the data section description.                                                                                         | 0, 1, 3, 5, 6            | 5.4.5.4  |
| numPrbc    | The number of contiguous PRBs per data section description. When numPrbc is greater than 255, it is converted to zero by the macro XRAN_CONVERT_NUMPRBC. | 0, 1, 3, 5, 6            | 5.4.5.6  |
| reMask     | Resource Element Mask.                                                                                                                | 1, 3, 5, 6               | 5.4.5.5  |
| numSymbol  | Number of Symbols.                                                                                                                    | 1, 3, 5, 6               | 5.4.5.7  |
| beamId     | Beam Identifier.                                                                                                                      | 1, 3                     | 5.4.5.9  |
| freqOffset | Frequency Offset.                                                                                                                     | 3                        | 5.4.5.11 |
| ueId       | UE Identifier. Not supported in this release.                                                                                         | 5, 6                     | 5.4.5.10 |
| regFactor  | Regularization Factor. Not supported in this release.                                                                                 | 6                        | 5.4.5.12 |
| Ef         | Extension Flag. Not supported in this release.                                                                                        | 1, 3, 5, 6               | 5.4.5.8  |

Note:

1.Only section types 1 and 3 are supported in the current release.

2.References in the remarks column are corresponding Chapter numbers in the O-RAN FrontHaul Working Group Control, User, and Synchronization Plane Specification in Table 2.

Note: xran_section_info has more parameters – type, startSymId, iqWidth, compMeth. These are the same parameters as those of the radio application or section header but need to be copied into this structure again for the section database.

exDataSize and exData are used to add section extensions for the section.

exDataSize is the number of elements in the exData array. The maximum number of elements is defined as XRAN_MAX_NUM_EXTENSIONS, which is set to four in this release on the assumption that four different types of section extensions can be added to a section (section extension type 3 is excluded since it is not supported). exData.type is the type of section extension, and exData.len is the length of the section extension parameter structure in exData.data. exData.data is the pointer to the structure of the section extension; different structures are used depending on the type of section extension, as shown below::

struct xran_sectionext1_info {
    uint16_t rbNumber;   /* number of RBs in ext1 chain */
    uint16_t bfwNumber;  /* number of BF weights in this section */
    uint8_t bfwiqWidth;
    uint8_t bfwCompMeth;
    int16_t *p_bfwIQ;    /* pointer to formed section extension */
    int16_t bfwIQ_sz;    /* size of buffer with section extension information */
    union {
        uint8_t exponent;
        uint8_t blockScaler;
        uint8_t compBitWidthShift;
        uint8_t activeBeamspaceCoeffMask[XRAN_MAX_BFW_N]; /* ceil(N/8)*8, should be a multiple of 8 */
    } bfwCompParam;
};

For section extension type 1, the structure of xran_sectionext1_info is used.

Note: The O-RAN library uses the beamforming weights (bfwIQ) as-is, i.e., the O-RAN library does not perform the compression, so the user should provide proper data in bfwIQ.

struct xran_sectionext2_info {

   uint8_t bfAzPtWidth;

   uint8_t bfAzPt;

   uint8_t bfZePtWidth;

   uint8_t bfZePt;

   uint8_t bfAz3ddWidth;

   uint8_t bfAz3dd;

   uint8_t bfZe3ddWidth;

   uint8_t bfZe3dd;

   uint8_t bfAzSI;

   uint8_t bfZeSI;

};

For section extension type 2, the structure of xran_sectionext2_info is used. Each parameter will be packed with the specified bit width.

struct xran_sectionext3_info {

   uint8_t codebookIdx;

   uint8_t layerId;

   uint8_t numLayers;

   uint8_t txScheme;

   uint16_t crsReMask;

   uint8_t crsShift;

   uint8_t crsSymNum;

   uint16_t numAntPort;

   uint16_t beamIdAP1;

   uint16_t beamIdAP2;

   uint16_t beamIdAP3;

};

For section extension type 3, the structure of xran_sectionext3_info is used.

struct xran_sectionext4_info {

   uint8_t csf;

   uint8_t pad0;

   uint16_t modCompScaler;

};

For section extension type 4, the structure of xran_sectionext4_info is used.

struct xran_sectionext5_info {

   uint8_t num_sets;

   struct {

   uint16_t csf;

   uint16_t mcScaleReMask;

   uint16_t mcScaleOffset;

   } mc[XRAN_MAX_MODCOMP_ADDPARMS];

};

For section extension type 5, the structure of xran_sectionext5_info is used.

Note: The current implementation supports a maximum of two sets of additional parameters.

struct xran_sectionext6_info {

   uint8_t rbgSize;

   uint8_t pad;

   uint16_t symbolMask;

   uint32_t rbgMask;

};

For section extension type 6, the structure of xran_sectionext6_info is used.

struct xran_sectionext10_info {

   uint8_t numPortc;

   uint8_t beamGrpType;

   uint16_t beamID[XRAN_MAX_NUMPORTC_EXT10];

};

For section extension type 10, the structure of xran_sectionext10_info is used.

struct xran_sectionext11_info {
    uint8_t RAD;
    uint8_t disableBFWs;
    uint8_t numBundPrb;
    uint8_t numSetBFWs;  /* Total number of beamforming weight sets (L) */
    uint8_t bfwCompMeth;
    uint8_t bfwIqWidth;
    int totalBfwIQLen;
    int maxExtBufSize;   /* Maximum space of external buffer */
    uint8_t *pExtBuf;    /* pointer to start of external buffer */
    void *pExtBufShinfo; /* pointer to rte_mbuf_ext_shared_info */
};

For section extension type 11, the structure of xran_sectionext11_info is used.

To minimize memory copies of beamforming weights, when section extension 11 is required to send beamforming weights (BFWs), an external flat buffer is used in the current release. If extension 11 is used, pre-allocated external buffers in which the BFWs have already been prepared are attached to indirect mbufs instead of allocating direct mbufs. BFWs can be prepared by xran_cp_prepare_ext11_bfws(), and example usage can be found in app_init_xran_iq_content() in sample-app.c.
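The exData mechanism described above can be sketched as follows, using section extension type 4 (whose structure is shown earlier). The structures here are simplified local mirrors of the library's definitions, and the constant name MAX_NUM_EXTENSIONS stands in for XRAN_MAX_NUM_EXTENSIONS; treat this purely as an illustration:

```c
#include <stdint.h>
#include <stddef.h>

#define MAX_NUM_EXTENSIONS 4 /* mirrors XRAN_MAX_NUM_EXTENSIONS in this release */

/* Local mirror of struct xran_sectionext4_info (modulation compression). */
struct sectionext4_sketch {
    uint8_t  csf;
    uint8_t  pad0;
    uint16_t modCompScaler;
};

/* Local mirror of the exData bookkeeping in struct xran_section_gen_info. */
struct section_gen_sketch {
    uint32_t exDataSize; /* number of valid entries in exData[] */
    struct {
        uint16_t type; /* section extension type */
        uint16_t len;  /* size of the parameter structure */
        void    *data; /* pointer to the parameter structure */
    } exData[MAX_NUM_EXTENSIONS];
};

/* Append one extension type 4 descriptor; returns 0 on success, -1 if full. */
static int add_ext4(struct section_gen_sketch *s, struct sectionext4_sketch *e)
{
    if (s->exDataSize >= MAX_NUM_EXTENSIONS)
        return -1;
    s->exData[s->exDataSize].type = 4; /* section extension type 4 */
    s->exData[s->exDataSize].len  = (uint16_t)sizeof(*e);
    s->exData[s->exDataSize].data = e;
    s->exDataSize++;
    return 0;
}
```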

Detail Procedures in API

The xran_prepare_ctrl_pkt() has several procedures to compose a C-Plane packet.

  1. Append transport header:

  • Reserve eCPRI header space in the packet buffer

  • eCPRI version is fixed by XRAN_ECPRI_VER (0x0001)

  • Concatenation and transport layer fragmentation are not supported.

    ecpri_concat=0, ecpri_seq_id.sub_seq_id=0 and ecpri_seq_id.e_bit=1

  • The caller needs to provide a component carrier index, antenna index, and message identifier through function arguments.

    CC_ID, Ant_ID and seq_id

  • ecpriRtcid (ecpri_xtc_id) is composed with CC_ID and Ant_ID by xran_compose_cid.

  • DU port ID and band sector ID are fixed by zero in this release.

  • The output of xran_compose_cid is stored in network byte order.

  • The length of the payload is initialized by zero.
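The eAxC identifier composition performed by xran_compose_cid() can be sketched as below. The 4+4+4+4 bit split between DU port, band sector, CC, and RU port IDs is an assumption for illustration only (the library derives the actual field widths from configuration):

```c
#include <stdint.h>
#include <arpa/inet.h>

/* Compose a 16-bit eAxC ID from its four sub-fields and return it in
 * network byte order, with DU port ID and band sector ID fixed to zero
 * as in this release. The 4-bit field widths are assumed for illustration. */
static uint16_t compose_cid_sketch(uint8_t cc_id, uint8_t ant_id)
{
    uint16_t cid = ((uint16_t)0 << 12)               /* DU port ID (fixed 0)     */
                 | ((uint16_t)0 << 8)                /* band sector ID (fixed 0) */
                 | ((uint16_t)(cc_id  & 0xF) << 4)   /* component carrier ID     */
                 | ((uint16_t)(ant_id & 0xF));       /* RU port (antenna) ID     */
    return htons(cid); /* stored in network byte order, as the steps above note */
}
```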

  2.Append radio application header:

  • The xran_append_radioapp_header() checks the type of section through params->sectionType and determines proper function to append remaining header components.

  • Only section types 1 and 3 are supported; XRAN_STATUS_INVALID_PARAM is returned for other types.

  • Each section uses a different function to compose the remaining header and size to calculate the total length in the transport header.

  • For section type 1, xran_prepare_section1_hdr() and sizeof(struct xran_cp_radioapp_section1_header)

  • For section type 3, xran_prepare_section3_hdr() and sizeof(struct xran_cp_radioapp_section3_header)

  • Reserves the space of common radio application header and composes header by xran_prepare_radioapp_common_header().

    The header is stored in network byte order.

  • Appends remaining header components by the selected function above

    The header is stored in network byte order

  3.Append section header and section:

  • The xran_append_control_section() determines proper size and function to append section header and contents.

  • For section type 1, xran_prepare_section1() and sizeof(struct xran_cp_radioapp_section1)

  • For section type 3, xran_prepare_section3() and sizeof(struct xran_cp_radioapp_section3)

  • Appends section header and section(s) by selected function above.

  • If multiple sections are configured, then those will be added.

  • Since fragmentation is not considered in this implementation, the total length of a single C-Plane message shall not exceed MTU size.

  • The header and section(s) are stored in network byte order.

  • Appends section extensions if it is set (ef=1)

  • The xran_append_section_extensions() adds all configured extensions by its type.

  • The xran_prepare_sectionext_x() (x = 1, 2, 4, 5) will be called according to the extension type, and these functions will create the extension fields.

Example Usage of API

There are two reference usages of API to generate C-Plane messages:

  • xran_cp_create_and_send_section() in xran_main.c

  • generate_cpmsg_prach() in xran_common.c

The xran_cp_create_and_send_section() is to generate the C-Plane message with section type 1 for DL or UL symbol data scheduling.

This function has hardcoded values for some parameters such as:

  • The filter index is fixed to XRAN_FILTERINDEX_STANDARD.

  • RB indicator is fixed to XRAN_RBIND_EVERY.

  • Symbol increment is not used (XRAN_SYMBOLNUMBER_NOTINC)

  • Resource Element Mask is fixed to 0xfff

If the section extensions include extension 1 or 11, a direct mbuf will not be allocated; instead, a pre-allocated flat buffer is attached to an indirect mbuf. This external buffer is used to compose the C-Plane message and should already contain the BFWs prepared by xran_cp_populate_section_ext_1() or xran_cp_prepare_ext11_bfws().

Since the current implementation uses a single section per C-Plane message, if multiple sections are present this function generates one C-Plane message per section.

After C-Plane message generation, the function sends the generated packet to the TX ring after adding an Ethernet header, and it also adds the section information of the generated C-Plane packet to the section database, which is used to generate the U-plane message according to the C-Plane configuration.

The generate_cpmsg_prach() is to generate the C-Plane message with section type 3 for PRACH scheduling.

This function also has hardcoded values for the following parameters:

  • RB indicator is fixed to XRAN_RBIND_EVERY.

  • Symbol increment is not used (XRAN_SYMBOLNUMBER_NOTINC).

  • Resource Element Mask is fixed to 0xfff.

This function does not send the generated packet; send_cpmsg() should be called after this function. An example can be found in tx_cp_ul_cb() in xran_main.c. Checking and parsing received PRACH symbol data against the section information from the C-Plane is not implemented in this release.

Example Configuration of C-Plane Messages

C-Plane messages can be composed through the API, and the sample application shows several reference usages of the configuration for different numerologies.

Below are examples of C-Plane message configuration with the sample application for mmWave – numerology 3, 100 MHz bandwidth, TDD (DDDS).

C-Plane Message – downlink symbol data for a downlink slot

  • Single CP message with the single section of section type 1

  • Configures single CP message for all consecutive downlink symbols

  • Configures whole RBs (66) for a symbol

  • Compression and beamforming are not used

Common Header Fields:

- dataDirection = XRAN_DIR_DL
- payloadVersion = XRAN_PAYLOAD_VER
- filterIndex = XRAN_FILTERINDEX_STANDARD
- frameId = [0..99]
- subframeId = [0..9]
- slotID = [0..9]
- startSymbolid = 0
- numberOfsections = 1
- sectionType = XRAN_CP_SECTIONTYPE_1
- udCompHdr.idIqWidth = 0
- udCompHdr.udCompMeth = XRAN_COMPMETHOD_NONE
- reserved = 0

Section Fields:

- sectionId = [0..4095]
- rb = XRAN_RBIND_EVERY
- symInc = XRAN_SYMBOLNUMBER_NOTINC
- startPrbc = 0
- numPrbc = 66
- reMask = 0xfff
- numSymbol = 14
- ef = 0
- beamId = 0
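The downlink configuration above maps onto the API structures roughly as follows. The struct definitions here are simplified local mirrors containing only fields discussed in this document, and the XRAN_* constants are replaced by illustrative numeric values, so treat this purely as a sketch of how the fields are populated:

```c
#include <stdint.h>

/* Simplified mirrors of the library structures (subset of fields only). */
struct cp_header_params_sketch {
    uint8_t filterIdx, frameId, subframeId, slotId, startSymId;
    uint8_t iqWidth, compMeth;
};

struct section_info_sketch {
    uint16_t id, startPrbc, numPrbc, reMask, beamId;
    uint8_t  numSymbol, ef;
};

struct cp_gen_params_sketch {
    uint8_t  dir;         /* 1 stands in for XRAN_DIR_DL here */
    uint8_t  sectionType; /* 1 stands in for XRAN_CP_SECTIONTYPE_1 */
    uint16_t numSections;
    struct cp_header_params_sketch hdr;
    struct section_info_sketch *sections;
};

/* Fill the DL-slot example: one section type 1 message covering all
 * 66 PRBs across all 14 symbols, no compression, no beamforming. */
static void fill_dl_slot_cp(struct cp_gen_params_sketch *p,
                            struct section_info_sketch *sec,
                            uint8_t frame, uint8_t subframe, uint8_t slot)
{
    p->dir = 1;               /* XRAN_DIR_DL */
    p->sectionType = 1;       /* XRAN_CP_SECTIONTYPE_1 */
    p->numSections = 1;
    p->hdr.filterIdx = 0;     /* XRAN_FILTERINDEX_STANDARD */
    p->hdr.frameId = frame;
    p->hdr.subframeId = subframe;
    p->hdr.slotId = slot;
    p->hdr.startSymId = 0;
    p->hdr.iqWidth = 0;       /* zero means 16-bit samples */
    p->hdr.compMeth = 0;      /* XRAN_COMPMETHOD_NONE */
    sec->id = 0;
    sec->startPrbc = 0;
    sec->numPrbc = 66;        /* whole 100 MHz bandwidth at numerology 3 */
    sec->reMask = 0xfff;
    sec->numSymbol = 14;
    sec->ef = 0;
    sec->beamId = 0;
    p->sections = sec;
}
```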

C-Plane Message – uplink symbol data for uplink slot

  • Single CP message with the single section of section type 1

  • Configures single CP message for all consecutive uplink symbols (UL symbol starts from 3)

  • Configures whole RBs (66) for a symbol

  • Compression and beamforming are not used

Common Header Fields:

- dataDirection = XRAN_DIR_UL
- payloadVersion = XRAN_PAYLOAD_VER
- filterIndex = XRAN_FILTERINDEX_STANDARD
- frameId = [0..99]
- subframeId = [0..9]
- slotID = [0..9]
- startSymbolid = 3
- numberOfsections = 1
- sectionType = XRAN_CP_SECTIONTYPE_1
- udCompHdr.idIqWidth = 0
- udCompHdr.udCompMeth = XRAN_COMPMETHOD_NONE
- reserved = 0

Section Fields:

- sectionId = [0..4095]
- rb = XRAN_RBIND_EVERY
- symInc = XRAN_SYMBOLNUMBER_NOTINC
- startPrbc = 0
- numPrbc = 66
- reMask = 0xfff
- numSymbol = 11
- ef = 0
- beamId = 0

C-Plane Message – PRACH

  • Single CP message with the single section of section type 3 including repetition

  • Configures PRACH format A3, configuration index 81; the detailed parameters are:

  • Filter Index : 3

  • CP length : 0

  • Time offset : 2026

  • FFT size : 1024

  • Subcarrier spacing : 120KHz

  • Start symbol index : 7

  • Number of symbols : 6

  • Number of PRBCs : 12

  • Frequency offset : -792

  • Compression and beamforming are not used

Common Header Fields:

-  dataDirection = XRAN_DIR_UL
-  payloadVersion = XRAN_PAYLOAD_VER
-  filterIndex = XRAN_FILTERINDEX_PRACH_ABC
-  frameId = [0..99]
-  subframeId = [0..3]
-  slotID = 3 or 7
-  startSymbolid = 7
-  numberOfSections = 1
-  sectionType = XRAN_CP_SECTIONTYPE_3
-  timeOffset = 2026
-  frameStructure.FFTSize = XRAN_FFTSIZE_1024
-  frameStructure.u = XRAN_SCS_120KHZ
-  cpLength = 0
-  udCompHdr.idIqWidth = 0
-  udCompHdr.udCompMeth = XRAN_COMPMETHOD_NONE

Section Fields:

- sectionId = [0..4095]
- rb = XRAN_RBIND_EVERY
- symInc = XRAN_SYMBOLNUMBER_NOTINC
- startPrbc = 0
- numPrbc = 12
- reMask = 0xfff
- numSymbol = 6
- ef = 0
- beamId = 0
- frequencyOffset = -792
- reserved

Functions to Store/Retrieve Section Information

There are several functions to store/retrieve section information of C-Plane messages. Since U-plane messages must be generated from the information in the sections of a C-Plane message, it is required to store and retrieve section information.

APIs and Data Structure

APIs for initialization and release storage are:

  • int xran_cp_init_sectiondb(void *pHandle);

  • int xran_cp_free_sectiondb(void *pHandle);

APIs to store and retrieve section information are:

  • int xran_cp_add_section_info(void *pHandle, uint8_t dir, uint8_t cc_id, uint8_t ruport_id, uint8_t ctx_id, struct xran_section_info *info);

  • int xran_cp_add_multisection_info(void *pHandle, uint8_t cc_id, uint8_t ruport_id, uint8_t ctx_id, struct xran_cp_gen_params *gen_info);

  • struct xran_section_info *xran_cp_find_section_info(void *pHandle, uint8_t dir, uint8_t cc_id, uint8_t ruport_id, uint8_t ctx_id, uint16_t section_id);

  • struct xran_section_info *xran_cp_iterate_section_info(void *pHandle, uint8_t dir, uint8_t cc_id, uint8_t ruport_id, uint8_t ctx_id, uint32_t *next);

  • int xran_cp_getsize_section_info(void *pHandle, uint8_t dir, uint8_t cc_id, uint8_t ruport_id, uint8_t ctx_id);

API to reset the storage for a new slot:

  • int xran_cp_reset_section_info(void *pHandle, uint8_t dir, uint8_t cc_id, uint8_t ruport_id, uint8_t ctx_id);
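A minimal sketch of the store/find/reset pattern behind these APIs, using simplified stand-ins for xran_section_info and xran_sectioninfo_db (the real structures, sizes, and function signatures are those in the library headers; db_add, db_find, and db_reset are illustrative names):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_SECTIONS 16   /* stand-in for XRAN_MAX_NUM_SECTIONS */

/* Simplified stand-in for struct xran_section_info */
struct section_info {
    uint16_t id;
    uint16_t startPrbc;
    uint16_t numPrbc;
};

/* Simplified stand-in for struct xran_sectioninfo_db */
struct sectioninfo_db {
    uint32_t cur_index;                      /* number of stored entries */
    struct section_info list[MAX_SECTIONS];  /* stored section information */
};

/* Store one section (cf. xran_cp_add_section_info) */
static int db_add(struct sectioninfo_db *db, const struct section_info *info)
{
    if (db->cur_index >= MAX_SECTIONS)
        return -1;                /* storage full */
    db->list[db->cur_index++] = *info;
    return 0;
}

/* Look up a section by id (cf. xran_cp_find_section_info) */
static struct section_info *db_find(struct sectioninfo_db *db, uint16_t id)
{
    for (uint32_t i = 0; i < db->cur_index; i++)
        if (db->list[i].id == id)
            return &db->list[i];
    return NULL;
}

/* Reset for a new slot (cf. xran_cp_reset_section_info) */
static void db_reset(struct sectioninfo_db *db)
{
    db->cur_index = 0;
}

static int db_selftest(void)
{
    struct sectioninfo_db db = {0};
    struct section_info s = { .id = 7, .startPrbc = 0, .numPrbc = 12 };

    if (db_add(&db, &s) != 0)
        return 0;
    struct section_info *f = db_find(&db, 7);
    int ok = (f != NULL && f->numPrbc == 12 && db_find(&db, 8) == NULL);
    db_reset(&db);
    return ok && db.cur_index == 0;
}
```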

The structure of xran_section_info is used to store/retrieve information. This is the same structure used to generate a C-Plane message. Refer to Section 1, API and Data Structures for more details.

The storage for section information is a multi-dimensional array, declared as a local static variable to limit direct access. Each item is defined by the structure xran_sectioninfo_db, which holds the number of stored section information items (cur_index) and the array of the information itself (list), as shown below.

/*
 * This structure stores the section information of C-Plane
 * in order to generate and parse the corresponding U-Plane
 */
struct xran_sectioninfo_db {
    uint32_t cur_index;  /* Current index to store for this eAxC */
    struct xran_section_info list[XRAN_MAX_NUM_SECTIONS];  /* The array of section information */
};

static struct xran_sectioninfo_db sectiondb[XRAN_MAX_SECTIONDB_CTX][XRAN_DIR_MAX][XRAN_COMPONENT_CARRIERS_MAX][XRAN_MAX_ANTENNA_NR*2 + XRAN_MAX_ANT_ARRAY_ELM_NR];

The maximum size of the array can be adjusted if required by the system configuration. Since the transmission and reception window of the U-Plane can overlap with the start of a new C-Plane for the next slot, the functions take a context index to identify and protect the information. Currently the maximum number of contexts is defined as two; it can be adjusted if needed.

Note: Since the context index is not managed by the library and the APIs expect it from the caller as a parameter, the caller shall manage it properly to avoid corruption. The current reference implementation derives the context index from the slot and subframe index.
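A hedged sketch of such a derivation, alternating between the two contexts based on frame timing (get_ctx_id is an illustrative helper; the exact formula in xran_main.c may differ):

```c
#include <assert.h>
#include <stdint.h>

#define XRAN_MAX_SECTIONDB_CTX 2   /* two contexts, per the text above */

/* Derive the context index from the subframe and slot so that C-Plane
 * storage for the next slot does not collide with U-Plane processing
 * of the current one. Illustrative only. */
static uint8_t get_ctx_id(uint32_t subframe_id, uint32_t slot_id,
                          uint32_t slots_per_subframe)
{
    uint32_t tti = subframe_id * slots_per_subframe + slot_id;
    return (uint8_t)(tti % XRAN_MAX_SECTIONDB_CTX);
}
```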

Example Usage of APIs

The following references show how the APIs are used:

  • Initialization and release:

    xran_cp_init_sectiondb(): xran_open() in lib/src/xran_main.c
    
    xran_cp_free_sectiondb(): xran_close() in lib/src/xran_main.c
    
  • Store section information:

    xran_cp_add_section_info(): send_cpmsg_dlul() and
    send_cpmsg_prach() in lib/src/xran_main.c
    
  • Retrieve section information:

    xran_cp_iterate_section_info(): xran_process_tx_sym() in
    lib/src/xran_main.c
    
    xran_cp_getsize_section_info(): xran_process_tx_sym() in
    lib/src/xran_main.c
    
  • Reset the storage for a new slot:

    xran_cp_reset_section_info(): tx_cp_dl_cb() and tx_cp_ul_cb() in
    lib/src/xran_main.c
    
Function for RU emulation and Debug

xran_parse_cp_pkt() is a function that can be utilized for RU emulation or debugging. It is defined below:

int xran_parse_cp_pkt(struct rte_mbuf *mbuf,
  struct xran_cp_recv_params *result,
  struct xran_recv_packet_info *pkt_info);

It parses a received C-Plane packet and retrieves the information from its headers and sections.

The retrieved information is stored in the structures:

struct xran_cp_recv_params: section information from received C-Plane packet

struct xran_recv_packet_info: transport layer header information (eCPRI header)

This function can be utilized for debug or RU emulation purposes.

U-plane

Single Section is the default mode of O-RAN packet creation. It assumes that there is only one section per packet, and all IQ samples are attached to it. Compression is not supported.

A message is built in the mbuf space given as a parameter. The library builds the eCPRI header, filling the structure fields from the IQ sample size and populating the packet length and sequence number.
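The eCPRI header layout for an IQ data message can be sketched as follows. This is an illustrative packer following the eCPRI v2.0 field order (revision/C bit, message type, payload size, then the eAxC id and sequence id for message type 0); build_ecpri_iq_hdr is a hypothetical helper, not the library's actual builder in xran_up_api.c.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define ECPRI_REV      1      /* eCPRI protocol revision */
#define ECPRI_IQ_DATA  0x00   /* eCPRI message type 0: IQ data */

/* Pack the 8-byte header of an eCPRI type-0 (IQ data) message:
 * common header followed by PC_ID (eAxC id) and SEQ_ID, all in
 * network byte order. Illustrative only. */
static size_t build_ecpri_iq_hdr(uint8_t *buf, uint16_t payload_size,
                                 uint16_t pc_id, uint16_t seq_id)
{
    buf[0] = (uint8_t)(ECPRI_REV << 4);     /* revision in upper nibble, C bit = 0 */
    buf[1] = ECPRI_IQ_DATA;                 /* message type */
    buf[2] = (uint8_t)(payload_size >> 8);  /* payload size, big endian */
    buf[3] = (uint8_t)(payload_size & 0xff);
    buf[4] = (uint8_t)(pc_id >> 8);         /* eAxC identifier */
    buf[5] = (uint8_t)(pc_id & 0xff);
    buf[6] = (uint8_t)(seq_id >> 8);        /* sequence id */
    buf[7] = (uint8_t)(seq_id & 0xff);
    return 8;
}

static int ecpri_hdr_selftest(void)
{
    uint8_t buf[8];
    size_t n = build_ecpri_iq_hdr(buf, 0x0120, 0x0001, 0x00ff);
    return n == 8 && buf[0] == 0x10 && buf[1] == 0x00 &&
           buf[2] == 0x01 && buf[3] == 0x20 && buf[7] == 0xff;
}
```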

With block floating point compression, the supported IQ bit widths are 8, 9, 10, 12, and 14. With modulation compression, the supported IQ bit widths are defined according to the modulation order, as in section A.5 of the O-RAN specification.

The set of U-plane functions is implemented in xran_up_api.c and is used to prepare U-plane packet content according to the given configuration.

The following list of functions is implemented for U-plane:

  • Build eCPRI header

  • Build application header

  • Build section header

  • Append IQ samples to packet

  • Prepare full symbol of O-RAN data for single eAxC

  • Process RX packet per symbol.

The generation time of a U-plane message for DL and UL is “symbol-based” and can be controlled using O-DU (or O-RU) settings, according to Table 4.

For more information on function arguments and parameters refer to the corresponding source code.

Supporting Code

The O-RAN library includes a set of supporting functions for packet processing and data exchange that are not directly part of O-RAN packet processing.

Timing

The sense of time for the O-RAN protocol is obtained from system time, where the system timer is synchronized to GPS time via the PTP protocol using the Linux PTP package. On the software side, a simple polling loop is utilized to get time with nanosecond precision, and particular packet processing jobs are scheduled via the DPDK timer.

long poll_next_tick(int interval)
{
    struct timespec start_time;
    struct timespec cur_time;
    long target_time;
    long delta;

    clock_gettime(CLOCK_REALTIME, &start_time);
    target_time = (start_time.tv_sec * NSEC_PER_SEC + start_time.tv_nsec +
        interval * NSEC_PER_USEC) / (interval * NSEC_PER_USEC) * interval;

    while(1)
    {
        clock_gettime(CLOCK_REALTIME, &cur_time);
        delta = (cur_time.tv_sec * NSEC_PER_SEC + cur_time.tv_nsec) -
            target_time * NSEC_PER_USEC;
        if(delta > 0 || (delta < 0 && abs(delta) < THRESHOLD))
        {
            break;
        }
    }
    return delta;
}

Polling is used to achieve the required precision of symbol time. For example, in the mmWave scenario the symbol time is 125 µs / 14 ≈ 8.9 µs. Small deterministic tasks can be executed within the polling interval, provided it is smaller than the symbol interval.

The current O-RAN library supports multiple O-RUs with different numerologies; the sense of timing is therefore based on the O-RU with the highest numerology (smallest symbol time). O-RU0 must be configured with the highest numerology in the O-RAN configuration.
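The symbol interval driving the polling deadline can be computed per numerology. A sketch under the usual assumptions (sym_interval_ns is an illustrative helper, not a library function; it uses 14 symbols per slot and 2^u slots per 1 ms subframe, ignoring the slightly longer CP of the first symbol per half-subframe):

```c
#include <assert.h>
#include <stdint.h>

/* Nominal OFDM symbol interval in nanoseconds for numerology u. */
static uint32_t sym_interval_ns(uint32_t u)
{
    uint32_t slot_ns = 1000000u >> u;   /* 1 ms subframe / 2^u slots */
    return slot_ns / 14;                /* 14 symbols per slot */
}
```

For numerology 3 (mmWave) this gives about 8.9 µs, matching the example above.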

DPDK Timers

DPDK provides primitives (struct rte_timer) and functions (rte_timer_reset_sync(), rte_timer_manage()) to schedule execution of a function as a timer. The timer is based on the TSC clock and is not synchronized to PTP time. As a result, it cannot be used as a periodic timer, because the TSC clock can drift substantially relative to the system timer, which in turn is synchronized to PTP (GPS).

Only single-shot timers are used to schedule processing based on events such as symbol time. The packet processing function calls rte_timer_manage() in its loop, and the timer function executes right after the timer is “armed”.

O-RAN Ethernet

The xran_init_port() function initializes the DPDK Ethernet port. The standard port configuration is used, as per the reference example from DPDK.

Jumbo frames are used by default. The mbuf size is extended to support 9600-byte packets.

Configurable MTU size is supported starting from E release.

MAC address and VLAN tag are expected to be configured by Infrastructure software. Refer to A.4, Install and Configure Sample Application.

From an implementation perspective, modules provide functions to handle:

  • Ethernet headers

  • VLAN tag

  • Send and Receive mbuf.

xRAN Ethdi

Ethdi provides functionality to work with the content of an Ethernet packet and dispatch processing to/from the xRAN layer. Ethdi instantiates a main PMD driver thread and dispatches packets between the ring and RX/TX using rte_eth_rx_burst() and rte_eth_tx_burst() DPDK functions.

For received packets, it maintains a set of ethertype handlers. The xRAN layer registers the O-RAN ethertype 0xAEFE, so packets with this ethertype are routed to the xRAN processing function. This function checks the message type of the eCPRI header and dispatches the packet to either C-plane or U-plane processing.
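The registration/dispatch pattern described here can be sketched as follows. All names (register_ethertype_handler, dispatch, ecpri_handler, MAX_HANDLERS) are illustrative stand-ins, not the library's actual code:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define ETHER_TYPE_ECPRI 0xAEFE   /* O-RAN ethertype registered by the xRAN layer */
#define MAX_HANDLERS 4

typedef int (*ethertype_handler)(const uint8_t *pkt, size_t len);

static struct {
    uint16_t ethertype;
    ethertype_handler fn;
} handlers[MAX_HANDLERS];
static int n_handlers;

/* Register a handler for one ethertype (illustrative) */
static int register_ethertype_handler(uint16_t ethertype, ethertype_handler fn)
{
    if (n_handlers >= MAX_HANDLERS)
        return -1;
    handlers[n_handlers].ethertype = ethertype;
    handlers[n_handlers].fn = fn;
    n_handlers++;
    return 0;
}

/* Route a received packet to the handler for its ethertype */
static int dispatch(uint16_t ethertype, const uint8_t *pkt, size_t len)
{
    for (int i = 0; i < n_handlers; i++)
        if (handlers[i].ethertype == ethertype)
            return handlers[i].fn(pkt, len);
    return -1;   /* no handler registered: drop */
}

/* Demo handler standing in for the xRAN eCPRI processing function */
static int ecpri_handler(const uint8_t *pkt, size_t len)
{
    (void)pkt;
    return (int)len;
}

static int ethdi_selftest(void)
{
    uint8_t pkt[64] = {0};
    register_ethertype_handler(ETHER_TYPE_ECPRI, ecpri_handler);
    return dispatch(ETHER_TYPE_ECPRI, pkt, sizeof pkt) == 64 &&
           dispatch(0x0800, pkt, sizeof pkt) == -1;
}
```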

Initialization of memory pools, allocation, and freeing of the mbuf for Ethernet packets occur in this layer.

O-RAN One Way Delay Measurements

Support for the eCPRI one-way delay measurements, which O-RAN specifies for use with Measured Transport support per Section 2.3.3.3 of the O-RAN-WG4.CUS.0-v4.00 specification and Section 3.2.4.6 of the eCPRI v2.0 specification, is implemented in the file xran_delay_measurement.c. Structure definitions used by the owd measurement functions are in the file xran_fh_o_du.h, both for common data and for port-specific variables and parameters.

The implementation assumes that the requestor is the O-DU and the recipient is the O-RU. All of the action types per eCPRI 2.0 have been implemented. In the current version the timestamps are obtained using the Linux function clock_gettime() with CLOCK_REALTIME as the clock_id argument.

The implementation supports both the O-RU and the O-DU side in order to do the unit test in loopback mode.

The one-way delay measurements are enabled at configuration time and run right after the xran_start() function is executed. The total number of consecutive measurements per port should be a power of 2, and to minimize system startup time it is advisable to keep it at 16 or below.
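A small hedged sketch of validating that constraint (owd_num_samps_ok is a hypothetical helper; the 16-or-below limit is only advisory, so it is not enforced here):

```c
#include <assert.h>

/* Check the power-of-2 requirement on the configured number of
 * consecutive owd samples per port. Illustrative only. */
static int owd_num_samps_ok(unsigned int n)
{
    return n != 0 && (n & (n - 1)) == 0;
}
```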

The following functions can be found in the xran_delay_measurement.c:

xran_ecpri_one_way_delay_measurement_transmitter() is invoked from the process_dpdk_io() function if the one-way delay measurements are enabled. It is the main function for the owd transmitter.

xran_generate_delay_meas() is a general function used by the transmitter to send the appropriate messages based on actionType, filling in all the details for the Ethernet and eCPRI layers.

Process_delay_meas() is invoked from the handle_ecpri_ethertype() function when the eCPRI message type is ECPRI_DELAY_MEASUREMENT. This is the main owd receiver function.

Depending on the message received, Process_delay_meas() executes one of the following functions:

xran_process_delmeas_request() if a request message was received.

xran_process_delmeas_request_w_fup() if a request with follow-up message was received.

xran_process_delmeas_response() if a response message was received.

xran_process_delmeas_rem_request() if a remote request message was received.

xran_delmeas_rem_request_w_fup() if a remote request with follow-up message was received.

All of the receiver functions can also generate the appropriate response message using the DPDK function rte_eth_tx_burst() to minimize the response delay.

Additional utility functions used by the owd implementation for managing timestamps and time measurements are:

xran_ptp_ts_to_ns() takes a TimeStamp argument from a received owd eCPRI packet, converts it to host byte order, and returns the value in nanoseconds.

xran_timespec_to_ns() takes an argument in timespec format (like the return value of the Linux function clock_gettime()) and returns a value in nanoseconds.

xran_ns_to_timespec() takes an argument in nanoseconds and returns a value by reference in timespec format.
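Hedged sketches of these two conversion helpers (the library versions in xran_delay_measurement.c may differ in detail; ns_roundtrip is only a demonstration helper):

```c
#include <assert.h>
#include <stdint.h>
#include <time.h>

#define NSEC_PER_SEC 1000000000ULL

/* Sketch of xran_timespec_to_ns(): timespec -> nanoseconds */
static uint64_t timespec_to_ns(const struct timespec *ts)
{
    return (uint64_t)ts->tv_sec * NSEC_PER_SEC + (uint64_t)ts->tv_nsec;
}

/* Sketch of xran_ns_to_timespec(): nanoseconds -> timespec by reference */
static void ns_to_timespec(uint64_t ns, struct timespec *ts)
{
    ts->tv_sec  = (time_t)(ns / NSEC_PER_SEC);
    ts->tv_nsec = (long)(ns % NSEC_PER_SEC);
}

static uint64_t ns_roundtrip(uint64_t ns)
{
    struct timespec ts;
    ns_to_timespec(ns, &ts);
    return timespec_to_ns(&ts);
}
```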

xran_compute_and_report_delay_estimate() takes an average of the computed one-way delay measurements and prints the average value to the console, expressed in nanoseconds. Currently the first 2 measurements are excluded from the average.
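A sketch of that averaging step, excluding the first two measurements as described (owd_avg_ns is a hypothetical helper, not the library function):

```c
#include <assert.h>
#include <stdint.h>

/* Average the one-way delay samples, skipping the first two. */
static uint64_t owd_avg_ns(const uint64_t *meas, int n)
{
    uint64_t sum = 0;
    int cnt = 0;

    for (int i = 2; i < n; i++) {
        sum += meas[i];
        cnt++;
    }
    return cnt > 0 ? sum / cnt : 0;
}
```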

Utility functions supporting the owd eCPRI packet formulation are:

xran_build_owd_meas_ecpri_hdr() builds the eCPRI header with message type ECPRI_DELAY_MEASUREMENT and writes the payload size in network byte order.

xran_add_at_and_measId_to_header() writes the action type and measurement ID to the eCPRI owd header.

The current implementation of the one-way delay measurements supports only a fixed message size. The message is defined in xran_pkt.h in the structure xran_ecpri_delay_meas_pl.

The one-way delay measurements have been tested with the sample-app for the Front Haul Interface Library and have not yet been integrated with the L1 Layer functions.

Sample Application

Figure 26 illustrates a sample xRAN application.

Figure 26. Sample Application

The sample application was created to execute test scenarios with features of the O-RAN library and to test the external API as well as timing. The sample application is named sample-app and, depending on configuration file settings, can act as an O-DU or as a simplified simulation of an O-RU. One instance runs on the machine that acts as the O-DU and a second on the machine that acts as the O-RU; both machines are connected via Ethernet. The sample application on both sides executes using a constant configuration according to the settings in the corresponding config files (./app/usecase/mu0_10mhz/config_file_o_du.dat and ./app/usecase/mu0_10mhz/config_file_o_ru.dat) and uses binary files (ant.bin) with IQ samples as input. Multiple use cases for different numerologies and different bandwidths are available as examples. The configuration files describe each parameter; in general these correspond to M-plane level settings as per the O-RAN Fronthaul specification, refer to Table 2.

From the start of the process, the application (O-DU) sends DL packets for the U-plane and C-plane and receives U-plane UL packets. Synchronization of O-DU and O-RU sides is achieved via IEEE 1588.

U-plane packets for UL and DL direction are constructed the same way except for the direction field.

Examples of default configurations used with the sample application for the v20.04 release are provided below:

1 Cell mmWave 100MHz TDD DDDS:

  • Numerology 3 (mmWave)

  • TTI period 125 µs

  • 100 MHz Bandwidth: 792 subcarriers (all 66 RB utilized at all times)

  • 4x4 MIMO

  • No beamforming

  • 1 Component carrier

  • Jumbo Frame for Ethernet (up to 9728 bytes)

  • Front haul throughput ~11.5 Gbps.

12 Cells Sub6 10MHz FDD:

  • Numerology 0 (Sub-6)

  • TTI period 1000 µs

  • 10 MHz Bandwidth: 624 subcarriers (all 52 RB utilized at all times)

  • 4x4 MIMO

  • No beamforming

  • 12 Component carriers

  • Jumbo Frame for Ethernet (up to 9728 bytes)

  • Front haul throughput ~13.7Gbps.

1 Cell Sub6 100 MHz TDD

  • Numerology 1 (Sub-6)

  • TTI period 500 µs

  • 100 MHz Bandwidth: 3276 subcarriers (all 273 RB utilized at all times)

  • 4x4 MIMO

  • No beamforming

  • 1 Component carrier

  • Jumbo Frame for Ethernet (up to 9728 bytes)

  • Front haul throughput ~11.7 Gbps.

1 Cell Sub6 100 MHz TDD (Category B):

  • Numerology 1 (Sub-6)

  • TTI period 500 µs

  • 100 MHz Bandwidth: 3276 subcarriers (all 273 RB utilized at all times). 8 UEs per TTI per layer

  • 8DL /4UL MIMO Layers

  • Digital beamforming with 32T32R

  • 1 Component carrier

  • Jumbo Frame for Ethernet (up to 9728 bytes)

  • Front haul throughput ~23.5 Gbps.

3 Cell Sub6 100MHz TDD Massive MIMO (Category B):

  • Numerology 1 (Sub-6)

  • TTI period 500 µs

  • 100 MHz Bandwidth: 3276 subcarriers (all 273 RB utilized at all times). 8 UEs per TTI per layer

  • 16DL /8UL MIMO Layers

  • Digital beamforming with 64T64R

  • 1 Component carrier for each Cell

  • Jumbo Frame for Ethernet (up to 9728 bytes)

  • Front haul throughput ~44 Gbps.

Other configurations can be constructed by modifying the config files (see app/usecase/).

One-way Delay Measurements:

There are 4 use cases defined, based on Category A, numerology 0, and 20 MHz bandwidth:

Common to all cases, the following parameters are needed in the usecase_xu.cfg files, where x=r for the O-RU and x=d for the O-DU.

oXuOwdmNumSamps=8 # Run 8 samples per port

oXuOwdmFltrType=0 # Simple average

oXuOwdmRespTimeOut=10000000 # 10 ms expressed in ns (Currently not enforced)

oXuOwdmMeasState=0 # Measurement state is INIT

oXuOwdmMeasId=0 # Measurement Id seed

oXuOwdmEnabled=1 # Measurements are enabled

oXuOwdmPlLength= n # with 40 <= n <= 1400 bytes

For the ORU

oXuOwdmInitEn=0 #O-RU is always the recipient

For the ODU

oXuOwdmInitEn=1 #O-DU is always initiator

Use case 20 corresponds to the Request/Response use case with payload size 40 bytes:

oXuOwdmMeasMeth=0 # Measurement Method REQUEST

Use case 21 corresponds to the Remote Request use case with payload size 512 bytes:

oXuOwdmMeasMeth=1 # Measurement Method REM_REQ

Use case 22 corresponds to the Request with Follow Up use case with payload size 1024 bytes:

oXuOwdmMeasMeth=2 # Measurement Method REQUESTwFUP

Use case 23 corresponds to the Remote Request with Follow Up use case with the default payload size:

oXuOwdmMeasMeth=3 # Measurement Method REM_REQ_WFUP

Setup Configuration

A.1 Setup Configuration

Figure 27 shows how to set up a test environment to execute O-RAN scenarios where the O-DU and O-RU are simulated using the sample application. This setup allows development and prototyping as well as testing of O-RAN specific functionality. The O-DU side can also be instantiated with a full 5G NR L1 reference. The configuration differences for the 5G NR l1app configuration are provided below. The steps for running the sample application on the O-DU side and the O-RU side are the same, except that the configuration file options may differ.

Figure 27. Setup for O-RAN Testing

Figure 28. Setup for O-RAN Testing with PHY and Configuration C3

Figure 29. Setup for O-RAN Testing with PHY and Configuration C3 for Massive MIMO

A.2 Prerequisites

Each server in Figure 27 requires the following:

  • Wolfpass server according to recommended BOM for FlexRAN such as Intel® Xeon® Skylake Gold 6148 FC-LGA3647 2.4 GHz 27.5 MB 150W 20 cores (two sockets) or higher

  • Wilson City or Coyote Pass server with Intel® Xeon® Icelake CPU for Massive-MIMO with L1 pipeline testing

  • BIOS settings:

    • Intel® Virtualization Technology Enabled

    • Intel® VT for Directed I/O - Enabled

    • ACS Control - Enabled

    • Coherency Support - Disabled

  • Front Haul networking cards:

    • Intel® Ethernet Converged Network Adapter XL710-QDA2

    • Intel® Ethernet Converged Network Adapter XXV710-DA2

    • Intel® Ethernet Converged Network Adapter E810-CQDA2

    • Intel® FPGA Programmable Acceleration Card (Intel® FPGA PAC) N3000

  • Back (Mid) Haul networking card can be either:

    • Intel® Ethernet Connection X722 for 10GBASE-T

    • Intel® 82599ES 10-Gigabit SFI/SFP+ Network Connection

    • Other networking cards capable of HW timestamping for PTP synchronization.

    • Both Back (mid) Haul and Front Haul NIC require support for PTP HW timestamping.

The recommended configuration for NICs is:

ethtool -i enp33s0f0
driver: i40e
version: 2.14.13
firmware-version: 8.20 0x80009bd4 1.2879.0
expansion-rom-version:
bus-info: 0000:21:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes
ethtool -T enp33s0f0
Time stamping parameters for enp33s0f0:
Capabilities:
    hardware-transmit (SOF_TIMESTAMPING_TX_HARDWARE)
    software-transmit (SOF_TIMESTAMPING_TX_SOFTWARE)
    hardware-receive (SOF_TIMESTAMPING_RX_HARDWARE)
    software-receive (SOF_TIMESTAMPING_RX_SOFTWARE)
    software-system-clock (SOF_TIMESTAMPING_SOFTWARE)
    hardware-raw-clock (SOF_TIMESTAMPING_RAW_HARDWARE)
PTP Hardware Clock: 4
Hardware Transmit Timestamp Modes:
    off (HWTSTAMP_TX_OFF)
    on (HWTSTAMP_TX_ON)
Hardware Receive Filter Modes:
    none (HWTSTAMP_FILTER_NONE)
    ptpv1-l4-sync (HWTSTAMP_FILTER_PTP_V1_L4_SYNC)
    ptpv1-l4-delay-req (HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ)
    ptpv2-l4-event (HWTSTAMP_FILTER_PTP_V2_L4_EVENT)
    ptpv2-l4-sync (HWTSTAMP_FILTER_PTP_V2_L4_SYNC)
    ptpv2-l4-delay-req (HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ)
    ptpv2-l2-event (HWTSTAMP_FILTER_PTP_V2_L2_EVENT)
    ptpv2-l2-sync (HWTSTAMP_FILTER_PTP_V2_L2_SYNC)
    ptpv2-l2-delay-req (HWTSTAMP_FILTER_PTP_V2_L2_DELAY_REQ)
    ptpv2-event (HWTSTAMP_FILTER_PTP_V2_EVENT)
    ptpv2-sync (HWTSTAMP_FILTER_PTP_V2_SYNC)
    ptpv2-delay-req (HWTSTAMP_FILTER_PTP_V2_DELAY_REQ)

The recommended configuration for Columbiaville NICs (based on the Intel® Ethernet 800 Series (Columbiaville) CVL 2.3 release) is:

ethtool -i enp81s0f0
driver: ice
version: 1.3.2
firmware-version: 2.3 0x80005D18
expansion-rom-version:
bus-info: 0000:51:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes
ethtool -T enp81s0f0
Time stamping parameters for enp81s0f0:
Capabilities:
    hardware-transmit (SOF_TIMESTAMPING_TX_HARDWARE)
    software-transmit (SOF_TIMESTAMPING_TX_SOFTWARE)
    hardware-receive (SOF_TIMESTAMPING_RX_HARDWARE)
    software-receive (SOF_TIMESTAMPING_RX_SOFTWARE)
    software-system-clock (SOF_TIMESTAMPING_SOFTWARE)
    hardware-raw-clock (SOF_TIMESTAMPING_RAW_HARDWARE)
PTP Hardware Clock: 1
Hardware Transmit Timestamp Modes:
    off (HWTSTAMP_TX_OFF)
    on (HWTSTAMP_TX_ON)
Hardware Receive Filter Modes:
    none (HWTSTAMP_FILTER_NONE)
    all (HWTSTAMP_FILTER_ALL)

Recommended versions:
iavf driver 4.0.2
ICE COMMS Package 1.3.24.0

Note: If your firmware version does not match the versions shown above, you can download the correct version from the Intel Download Center, Intel's repository for the latest software and drivers for Intel products. The NVM Update Packages for Windows*, Linux*, ESX*, FreeBSD*, and EFI/EFI2 are located at:

https://downloadcenter.intel.com/download/24769 (700 series)

https://downloadcenter.intel.com/download/29736 (E810 series)

PTP Grand Master is required to be available in the network to provide synchronization of both O-DU and RU to GPS time.

The software package includes Linux* CentOS* operating system and RT patch according to FlexRAN Reference Solution Cloud-Native Setup document (refer to Table 2). Only real-time HOST is required.

  1. Install Intel® C++ Compiler v19.0.3 or OneAPI compiler (preferred)

  2. Download DPDK v20.11.3

  3. Patch DPDK with FlexRAN BBDev patch as per given release.

  4. Double check that the FlexRAN DPDK patch includes the changes below relevant to the O-RAN Front Haul:

For Fortville:
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 85a6a86..236fbe0 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -2207,7 +2207,7 @@ void i40e_flex_payload_reg_set_default(struct i40e_hw *hw)
    /* Map queues with MSIX interrupt */
    main_vsi->nb_used_qps = dev->data->nb_rx_queues -
        pf->nb_cfg_vmdq_vsi * RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM;
-       i40e_vsi_queues_bind_intr(main_vsi, I40E_ITR_INDEX_DEFAULT);
+       i40e_vsi_queues_bind_intr(main_vsi, I40E_ITR_INDEX_NONE);
    i40e_vsi_enable_queues_intr(main_vsi);

    /* Map VMDQ VSI queues with MSIX interrupt */
@@ -2218,6 +2218,10 @@ void i40e_flex_payload_reg_set_default(struct i40e_hw *hw)
        i40e_vsi_enable_queues_intr(pf->vmdq[i].vsi);
    }
+       i40e_aq_debug_write_global_register(hw,
+                                       0x0012A504,
+                                       0, NULL);
+
    /* enable FDIR MSIX interrupt */
    if (pf->fdir.fdir_vsi) {
        i40e_vsi_queues_bind_intr(pf->fdir.fdir_vsi,
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index 001c301..6f9ffdb 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -640,7 +640,7 @@ struct rte_i40evf_xstats_name_off {

    map_info = (struct virtchnl_irq_map_info *)cmd_buffer;
    map_info->num_vectors = 1;
-       map_info->vecmap[0].rxitr_idx = I40E_ITR_INDEX_DEFAULT;
+       map_info->vecmap[0].rxitr_idx = I40E_ITR_INDEX_NONE;
    map_info->vecmap[0].vsi_id = vf->vsi_res->vsi_id;
    /* Alway use default dynamic MSIX interrupt */
    map_info->vecmap[0].vector_id = vector_id;
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 26b1927..018eb8f 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -3705,7 +3705,7 @@ static int ixgbevf_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
        * except for 82598EB, which remains constant.
        */
        if (dev_conf->txmode.mq_mode == ETH_MQ_TX_NONE &&
-                               hw->mac.type != ixgbe_mac_82598EB)
+                               hw->mac.type != ixgbe_mac_82598EB && hw->mac.type != ixgbe_mac_82599EB)
            dev_info->max_tx_queues = IXGBE_NONE_MODE_TX_NB_QUEUES;
    }
    dev_info->min_rx_bufsize = 1024; /* cf BSIZEPACKET in SRRCTL register */
diff --git a/lib/librte_eal/common/include/rte_dev.h b/lib/librte_eal/common/include/rte_dev.h
old mode 100644
new mode 100755

For Columbiaville:
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index de189daba..d9aff341c 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -2604,8 +2604,13 @@ __vsi_queues_bind_intr(struct ice_vsi *vsi, uint16_t msix_vect,

                PMD_DRV_LOG(INFO, "queue %d is binding to vect %d",
                            base_queue + i, msix_vect);
-               /* set ITR0 value */
-               ICE_WRITE_REG(hw, GLINT_ITR(0, msix_vect), 0x10);
+               /* set ITR0 value
+                * Empirical configuration for optimal real time latency
+                * reduced interrupt throttling to 2 ms
+                * Columbiaville pre-PRQ : local patch subject to change
+                */
+               ICE_WRITE_REG(hw, GLINT_ITR(0, msix_vect), 0x1);
+               ICE_WRITE_REG(hw, QRX_ITR(base_queue + i), QRX_ITR_NO_EXPR_M);
                ICE_WRITE_REG(hw, QINT_RQCTL(base_queue + i), val);
                ICE_WRITE_REG(hw, QINT_TQCTL(base_queue + i), val_tx);
        }

  5. Build and install the DPDK:

See https://doc.dpdk.org/guides/prog_guide/build-sdk-meson.html

  6. Make the file changes below in DPDK to ensure the i40e driver achieves the best packet-processing latency:

--- i40e.h      2018-11-30 11:27:00.000000000 +0000
+++ i40e_patched.h      2019-03-06 15:49:06.877522427 +0000
@@ -451,7 +451,7 @@

#define I40E_QINT_RQCTL_VAL(qp, vector, nextq_type) \
    (I40E_QINT_RQCTL_CAUSE_ENA_MASK | \
-       (I40E_RX_ITR << I40E_QINT_RQCTL_ITR_INDX_SHIFT) | \
+       (I40E_ITR_NONE << I40E_QINT_RQCTL_ITR_INDX_SHIFT) | \
    ((vector) << I40E_QINT_RQCTL_MSIX_INDX_SHIFT) | \
    ((qp) << I40E_QINT_RQCTL_NEXTQ_INDX_SHIFT) | \
    (I40E_QUEUE_TYPE_##nextq_type << I40E_QINT_RQCTL_NEXTQ_TYPE_SHIFT))

--- i40e_main.c 2018-11-30 11:27:00.000000000 +0000
+++ i40e_main_patched.c 2019-03-06 15:46:13.521518062 +0000
@@ -15296,6 +15296,9 @@
        pf->hw_features |= I40E_HW_HAVE_CRT_RETIMER;
    /* print a string summarizing features */
    i40e_print_features(pf);
+
+       /* write to this register to clear rx descriptor */
+       i40e_aq_debug_write_register(hw, 0x0012A504, 0, NULL);

    return 0;

A.3 Configuration of System

  1. Boot Linux with the following arguments:

cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-3.10.0-1062.12.1.rt56.1042.el7.x86_64 root=/dev/mapper/centos-root ro
crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap intel_iommu=on iommu=pt
usbcore.autosuspend=-1 selinux=0 enforcing=0 nmi_watchdog=0 softlockup_panic=0 audit=0
intel_pstate=disable cgroup_memory=1 cgroup_enable=memory mce=off idle=poll
hugepagesz=1G hugepages=16 hugepagesz=2M hugepages=0 default_hugepagesz=1G
isolcpus=1-19,21-39 rcu_nocbs=1-19,21-39 kthread_cpus=0,20 irqaffinity=0,20
nohz_full=1-19,21-39
  2. Boot Linux with the following arguments for the Icelake CPU:

    cat /proc/cmdline
    BOOT_IMAGE=/vmlinuz-3.10.0-957.10.1.rt56.921.el7.x86_64
    root=/dev/mapper/centos-root ro crashkernel=auto rd.lvm.lv=centos/root
    rd.lvm.lv=centos/swap rhgb quiet intel_iommu=off usbcore.autosuspend=-1
    selinux=0 enforcing=0 nmi_watchdog=0 softlockup_panic=0 audit=0
    intel_pstate=disable cgroup_disable=memory mce=off hugepagesz=1G
    hugepages=40 hugepagesz=2M hugepages=0 default_hugepagesz=1G
    isolcpus=1-23,25-47 rcu_nocbs=1-23,25-47 kthread_cpus=0 irqaffinity=0
    nohz_full=1-23,25-47
    

3. Download from the Intel website and install an updated version of the i40e driver if needed. The current recommended version of i40e is 2.14.13; however, any recent version of i40e after 2.9.21 is expected to be functional for O-RAN FH.

4. For Columbiaville, download the Intel® Ethernet 800 Series (Columbiaville) CVL 2.3 B0/C0 Sampling Sample Validation Kit (SVK) from the Intel Customer Content Library. The current recommended version of the ICE driver is 1.3.2 with ICE COMMS Package version 1.3.24.0. The recommended IAVF version is 4.0.2.

  5. Identify the PCIe bus address of the Front Haul NIC (Fortville):

    lspci|grep Eth
    86:00.0 Ethernet controller: Intel Corporation Ethernet Controller XXV710 for 25GbE SFP28 (rev 02)
    86:00.1 Ethernet controller: Intel Corporation Ethernet Controller XXV710 for 25GbE SFP28 (rev 02)
    88:00.0 Ethernet controller: Intel Corporation Ethernet Controller XXV710 for 25GbE SFP28 (rev 02)
    88:00.1 Ethernet controller: Intel Corporation Ethernet Controller XXV710 for 25GbE SFP28 (rev 02)
    
  6. Identify the PCIe bus address of the Front Haul NIC (Columbiaville):

    lspci |grep Eth
    18:00.0 Ethernet controller: Intel Corporation Device 1593 (rev 02)
    18:00.1 Ethernet controller: Intel Corporation Device 1593 (rev 02)
    18:00.2 Ethernet controller: Intel Corporation Device 1593 (rev 02)
    18:00.3 Ethernet controller: Intel Corporation Device 1593 (rev 02)
    51:00.0 Ethernet controller: Intel Corporation Device 1593 (rev 02)
    51:00.1 Ethernet controller: Intel Corporation Device 1593 (rev 02)
    51:00.2 Ethernet controller: Intel Corporation Device 1593 (rev 02)
    51:00.3 Ethernet controller: Intel Corporation Device 1593 (rev 02)
    
  7. Identify the Ethernet device name:

    ethtool -i enp33s0f0
    driver: i40e
    version: 2.14.13
    firmware-version: 8.20 0x80009bd4 1.2879.0
    expansion-rom-version:
    bus-info: 0000:21:00.0
    supports-statistics: yes
    supports-test: yes
    supports-eeprom-access: yes
    supports-register-dump: yes
    supports-priv-flags: yes
    

or

ethtool -i enp81s0f0
driver: ice
version: 1.3.2
firmware-version: 2.3 0x80005D18
expansion-rom-version:
bus-info: 0000:51:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes

8. Enable 3 virtual functions (VFs) on each of the two ports of each NIC:

#!/bin/bash

echo 0 > /sys/bus/pci/devices/0000\:88\:00.0/sriov_numvfs
echo 0 > /sys/bus/pci/devices/0000\:88\:00.1/sriov_numvfs

echo 0 > /sys/bus/pci/devices/0000\:86\:00.0/sriov_numvfs
echo 0 > /sys/bus/pci/devices/0000\:86\:00.1/sriov_numvfs

modprobe -r iavf
modprobe iavf

echo 3 > /sys/bus/pci/devices/0000\:88\:00.0/sriov_numvfs
echo 3 > /sys/bus/pci/devices/0000\:88\:00.1/sriov_numvfs

echo 3 > /sys/bus/pci/devices/0000\:86\:00.0/sriov_numvfs
echo 3 > /sys/bus/pci/devices/0000\:86\:00.1/sriov_numvfs

a=8

if [ -z "$1" ]
then
b=0
elif [ $1 -lt $a ]
then
b=$1
else
echo " Usage $0 qos with 0<= qos <= 7 with 0 as a default if no qos is provided"
exit 1
fi

#O-DU
ip link set enp136s0f0 vf 0 mac 00:11:22:33:00:00 vlan 1 qos $b
ip link set enp136s0f1 vf 0 mac 00:11:22:33:00:10 vlan 1 qos $b

ip link set enp136s0f0 vf 1 mac 00:11:22:33:01:00 vlan 2 qos $b
ip link set enp136s0f1 vf 1 mac 00:11:22:33:01:10 vlan 2 qos $b

ip link set enp136s0f0 vf 2 mac 00:11:22:33:02:00 vlan 3 qos $b
ip link set enp136s0f1 vf 2 mac 00:11:22:33:02:10 vlan 3 qos $b

#O-RU
ip link set enp134s0f0 vf 0 mac 00:11:22:33:00:01 vlan 1 qos $b
ip link set enp134s0f1 vf 0 mac 00:11:22:33:00:11 vlan 1 qos $b

ip link set enp134s0f0 vf 1 mac 00:11:22:33:01:01 vlan 2 qos $b
ip link set enp134s0f1 vf 1 mac 00:11:22:33:01:11 vlan 2 qos $b

ip link set enp134s0f0 vf 2 mac 00:11:22:33:02:01 vlan 3 qos $b
ip link set enp134s0f1 vf 2 mac 00:11:22:33:02:11 vlan 3 qos $b

After running the script, ip link show reports the new VFs:

ip link show
...
9: enp134s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 3c:fd:fe:b9:f9:60 brd ff:ff:ff:ff:ff:ff
    vf 0 MAC 00:11:22:33:00:01, vlan 1, spoof checking on, link-state auto, trust off
    vf 1 MAC 00:11:22:33:01:01, vlan 2, spoof checking on, link-state auto, trust off
    vf 2 MAC 00:11:22:33:02:01, vlan 3, spoof checking on, link-state auto, trust off
11: enp134s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 3c:fd:fe:b9:f9:61 brd ff:ff:ff:ff:ff:ff
    vf 0 MAC 00:11:22:33:00:11, vlan 1, spoof checking on, link-state auto, trust off
    vf 1 MAC 00:11:22:33:01:11, vlan 2, spoof checking on, link-state auto, trust off
    vf 2 MAC 00:11:22:33:02:11, vlan 3, spoof checking on, link-state auto, trust off
12: enp136s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 3c:fd:fe:b9:f8:b4 brd ff:ff:ff:ff:ff:ff
    vf 0 MAC 00:11:22:33:00:00, vlan 1, spoof checking on, link-state auto, trust off
    vf 1 MAC 00:11:22:33:01:00, vlan 2, spoof checking on, link-state auto, trust off
    vf 2 MAC 00:11:22:33:02:00, vlan 3, spoof checking on, link-state auto, trust off
14: enp136s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 3c:fd:fe:b9:f8:b5 brd ff:ff:ff:ff:ff:ff
    vf 0 MAC 00:11:22:33:00:10, vlan 1, spoof checking on, link-state auto, trust off
    vf 1 MAC 00:11:22:33:01:10, vlan 2, spoof checking on, link-state auto, trust off
    vf 2 MAC 00:11:22:33:02:10, vlan 3, spoof checking on, link-state auto, trust off
...

More information about VFs supported by Intel NICs can be found at https://doc.dpdk.org/guides/nics/intel_vf.html.
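The MAC and VLAN assignments in the script above follow a simple convention: the fifth MAC octet is the VF index, the last octet encodes the port and the side (0 for O-DU, 1 for O-RU), and the VLAN ID is the VF index plus one. The convention can be sketched as below; `vf_mac` and `vf_vlan` are illustrative helpers, not part of the release:

```shell
# Reproduce the MAC/VLAN numbering convention used by the script above:
# octet 5 is the VF index, the last octet is <port><side>,
# with side 0 = O-DU and side 1 = O-RU.
vf_mac() {
  local vf=$1 port=$2 side=$3
  printf '00:11:22:33:%02d:%d%d\n' "$vf" "$port" "$side"
}
# The VLAN ID assigned to each VF is simply its index plus one.
vf_vlan() { echo $(( $1 + 1 )); }

vf_mac 2 0 1   # O-RU port 0, VF 2 -> 00:11:22:33:02:01
vf_vlan 2      # -> VLAN 3
```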

The resulting configuration can look like the listing below, where three new VFs were added on each O-DU and O-RU port::

lspci|grep Eth
86:00.0 Ethernet controller: Intel Corporation Ethernet Controller XXV710 for 25GbE SFP28 (rev 02)
86:00.1 Ethernet controller: Intel Corporation Ethernet Controller XXV710 for 25GbE SFP28 (rev 02)
86:02.0 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
86:02.1 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
86:02.2 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
86:0a.0 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
86:0a.1 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
86:0a.2 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
88:00.0 Ethernet controller: Intel Corporation Ethernet Controller XXV710 for 25GbE SFP28 (rev 02)
88:00.1 Ethernet controller: Intel Corporation Ethernet Controller XXV710 for 25GbE SFP28 (rev 02)
88:02.0 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
88:02.1 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
88:02.2 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
88:0a.0 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
88:0a.1 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
88:0a.2 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
  1. Example where the O-DU and O-RU simulations run on the same system:

O-DU::

cat ./run_o_du.sh
#! /bin/bash

ulimit -c unlimited
echo 1 > /proc/sys/kernel/core_uses_pid

./build/sample-app --usecasefile ./usecase/cat_b/mu1_100mhz/301/usecase_du.cfg --num_eth_vfs 6 \
--vf_addr_o_xu_a "0000:88:02.0,0000:88:0a.0" \
--vf_addr_o_xu_b "0000:88:02.1,0000:88:0a.1" \
--vf_addr_o_xu_c "0000:88:02.2,0000:88:0a.2"

O-RU::

cat ./run_o_ru.sh
#! /bin/bash
ulimit -c unlimited
echo 1 > /proc/sys/kernel/core_uses_pid

./build/sample-app --usecasefile ./usecase/cat_b/mu1_100mhz/301/usecase_ru.cfg --num_eth_vfs 6 \
--vf_addr_o_xu_a "0000:86:02.0,0000:86:0a.0" \
--vf_addr_o_xu_b "0000:86:02.1,0000:86:0a.1" \
--vf_addr_o_xu_c "0000:86:02.2,0000:86:0a.2"

Install and Configure Sample Application

To install and configure the sample application:

  1. Set up the environment (shown for icc; adjust for icx as needed):

    For Skylake and Cascadelake:
    export GTEST_ROOT=`pwd`/gtest-1.7.0
    export RTE_SDK=`pwd`/dpdk-20.11.3
    export RTE_TARGET=x86_64-native-linuxapp-icc
    export DIR_WIRELESS_SDK_ROOT=`pwd`/wireless_sdk
    export WIRELESS_SDK_TARGET_ISA=avx512
    export SDK_BUILD=build-${WIRELESS_SDK_TARGET_ISA}-icc
    export DIR_WIRELESS_SDK=${DIR_WIRELESS_SDK_ROOT}/${SDK_BUILD}
    export MLOG_DIR=`pwd`/flexran_l1_sw/libs/mlog
    export XRAN_DIR=`pwd`/flexran_xran
    
    For Icelake:
    export GTEST_ROOT=`pwd`/gtest-1.7.0
    export RTE_SDK=`pwd`/dpdk-20.11
    export RTE_TARGET=x86_64-native-linuxapp-icc
    export DIR_WIRELESS_SDK_ROOT=`pwd`/wireless_sdk
    export WIRELESS_SDK_TARGET_ISA=snc
    export SDK_BUILD=build-${WIRELESS_SDK_TARGET_ISA}-icc
    export DIR_WIRELESS_SDK=${DIR_WIRELESS_SDK_ROOT}/${SDK_BUILD}
    export MLOG_DIR=`pwd`/flexran_l1_sw/libs/mlog
    export XRAN_DIR=`pwd`/flexran_xran
    
    export FLEXRAN_SDK=${DIR_WIRELESS_SDK}/install

  2. Compile the mlog library:

    [turner@xran home]$ cd $MLOG_DIR
    [turner@xran xran]$ ./build.sh
    
  3. Compile the O-RAN library and the test application:

    [turner@xran home]$ cd $XRAN_DIR
    [turner@xran xran]$ ./build.sh
    
  4. Configure the sample app.

IQ samples can be generated using Octave* and the script libs/xran/app/gen_test.m. (CentOS* provides octave-3.8.2-20.el7.x86_64, which is compatible with gen_test.m.)

Other IQ sample test vectors can be used as well. The format of IQ samples is binary int16_t I and Q for N slots of the OTA RF signal. For example, for mmWave this corresponds to 792 REs * 4 bytes (int16 I + int16 Q) * 14 symbols * 8 slots * 10 subframes (10 ms) = 3548160 bytes per antenna. Refer to the comments in gen_test.m to correctly specify the configuration for IQ test vector generation.
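The byte count quoted above can be rechecked with shell arithmetic, assuming 4 bytes per RE for the int16 I/Q pair:

```shell
# mmWave per-antenna buffer size:
# 792 REs x 4 bytes (int16 I + int16 Q) x 14 symbols x 8 slots
# x 10 subframes in 10 ms.
bytes_per_antenna=$(( 792 * 4 * 14 * 8 * 10 ))
echo "$bytes_per_antenna"   # 3548160
```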

Update usecase_du.cfg (or usecase_ru.cfg) with a suitable configuration for your scenario.

Update config_file_o_du.dat (or config_file_o_ru.dat) with a suitable configuration for your scenario.

Update run_o_du.sh (run_o_ru.sh) with the PCIe bus addresses of VF0 and VF1, used for the U-plane and C-plane respectively.

  5. Run the application using run_o_du.sh (run_o_ru.sh).

Install and Configure FlexRAN 5G NR L1 Application

The 5G NR Layer 1 application can be used to execute the mmWave scenario either with the RU sample application or with just the O-DU side. The current release supports a constant configuration of the slot pattern and RB allocation on the PHY side. The build process follows the same basic steps as for the sample application above and is similar to compiling the 5G NR l1app for mmWave with the Front Haul FPGA. Please follow the general build process in the FlexRAN 5G NR Reference Solution L1 User Guide (refer to Table 2). (For information only, as a FlexRAN binary blob is delivered to the community.)

  1. The O-RAN library is enabled by default in the l1 application.

  2. Get the FlexRAN L1 binary from https://github.com/intel/FlexRAN. Look in the l1/bin/nr5g/gnb/l1 folder for the l1app binary and the corresponding phycfg and xrancfg files.

  3. Configure the L1app using bin/nr5g/gnb/l1/phycfg_xran.xml and xrancfg_sub6.xml (or the corresponding xml files for mmWave or massive MIMO).

<XranConfig>
<version>oran_f_release_v1.0</version>
<!-- number of O-RUs connected to the O-DU. All O-RUs have the same
capabilities. Max O-RUs is per XRAN_PORTS_NUM, i.e. 4 -->
<oRuNum>1</oRuNum>
<!-- # 10G,25G,40G,100G speed of Physical connection on O-RU -->
<oRuEthLinkSpeed>25</oRuEthLinkSpeed>
<!-- # 1, 2, 3 total number of links per O-RU (Fronthaul Ethernet link
in IOT spec) -->
<oRuLinesNumber>1</oRuLinesNumber>

<!-- O-RU 0 -->
<PciBusAddoRu0Vf0>0000:51:01.0</PciBusAddoRu0Vf0>
<PciBusAddoRu0Vf1>0000:51:01.1</PciBusAddoRu0Vf1>
<PciBusAddoRu0Vf2>0000:51:01.2</PciBusAddoRu0Vf2>
<PciBusAddoRu0Vf3>0000:51:01.3</PciBusAddoRu0Vf3>

<!-- O-RU 1 -->
<PciBusAddoRu1Vf0>0000:51:01.4</PciBusAddoRu1Vf0>
<PciBusAddoRu1Vf1>0000:51:01.5</PciBusAddoRu1Vf1>
<PciBusAddoRu1Vf2>0000:51:01.6</PciBusAddoRu1Vf2>
<PciBusAddoRu1Vf3>0000:51:01.7</PciBusAddoRu1Vf3>

<!-- O-RU 2 -->
<PciBusAddoRu2Vf0>0000:51:02.0</PciBusAddoRu2Vf0>
<PciBusAddoRu2Vf1>0000:51:02.1</PciBusAddoRu2Vf1>
<PciBusAddoRu2Vf2>0000:51:02.2</PciBusAddoRu2Vf2>
<PciBusAddoRu2Vf3>0000:51:02.3</PciBusAddoRu2Vf3>

<!-- O-RU 3 -->
<PciBusAddoRu3Vf0>0000:00:00.0</PciBusAddoRu3Vf0>
<PciBusAddoRu3Vf1>0000:00:00.0</PciBusAddoRu3Vf1>
<PciBusAddoRu3Vf2>0000:00:00.0</PciBusAddoRu3Vf2>
<PciBusAddoRu3Vf3>0000:00:00.0</PciBusAddoRu3Vf3>

<!-- remote O-RU 0 Eth Link 0 VF0, VF1 -->
<oRuRem0Mac0>00:11:22:33:00:01</oRuRem0Mac0>
<oRuRem0Mac1>00:11:22:33:00:11</oRuRem0Mac1>
<!-- remote O-RU 0 Eth Link 1 VF2, VF3 -->
<oRuRem0Mac2>00:11:22:33:00:21</oRuRem0Mac2>
<oRuRem0Mac3>00:11:22:33:00:31</oRuRem0Mac3>

<!-- remote O-RU 1 Eth Link 0 VF4, VF5 -->
<oRuRem1Mac0>00:11:22:33:01:01</oRuRem1Mac0>
<oRuRem1Mac1>00:11:22:33:01:11</oRuRem1Mac1>
<!-- remote O-RU 1 Eth Link 1 VF6, VF7 -->
<oRuRem1Mac2>00:11:22:33:01:21</oRuRem1Mac2>
<oRuRem1Mac3>00:11:22:33:01:31</oRuRem1Mac3>

<!-- remote O-RU 2 Eth Link 0 VF8, VF9 -->
<oRuRem2Mac0>00:11:22:33:02:01</oRuRem2Mac0>
<oRuRem2Mac1>00:11:22:33:02:11</oRuRem2Mac1>
<!-- remote O-RU 2 Eth Link 1 VF10, VF11 -->
<oRuRem2Mac2>00:11:22:33:02:21</oRuRem2Mac2>
<oRuRem2Mac3>00:11:22:33:02:31</oRuRem2Mac3>

<!-- remote O-RU 3 Eth Link 0 VF12, VF13 -->
<oRuRem3Mac0>00:11:22:33:03:01</oRuRem3Mac0>
<oRuRem3Mac1>00:11:22:33:03:11</oRuRem3Mac1>
<!-- remote O-RU 3 Eth Link 1 VF14, VF15 -->
<oRuRem3Mac2>00:11:22:33:03:21</oRuRem3Mac2>
<oRuRem3Mac3>00:11:22:33:03:31</oRuRem3Mac3>

<!--  Number of cells (CCs) running on this O-RU  [1 - Cell , 2 - Cells, 3 - Cells , 4 - Cells ] -->
<oRu0NumCc>12</oRu0NumCc>
<!-- First Phy instance ID mapped to this O-RU CC0  -->
<oRu0Cc0PhyId>0</oRu0Cc0PhyId>
<!-- Second Phy instance ID mapped to this O-RU CC1 -->
<oRu0Cc1PhyId>1</oRu0Cc1PhyId>
<!-- Third Phy instance ID mapped to this O-RU CC2  -->
<oRu0Cc2PhyId>2</oRu0Cc2PhyId>
<!-- Fourth Phy instance ID mapped to this O-RU CC3  -->
<oRu0Cc3PhyId>3</oRu0Cc3PhyId>
<!-- Fifth Phy instance ID mapped to this O-RU CC4  -->
<oRu0Cc4PhyId>4</oRu0Cc4PhyId>
<!-- Sixth Phy instance ID mapped to this O-RU CC5 -->
<oRu0Cc5PhyId>5</oRu0Cc5PhyId>
<!-- Seventh Phy instance ID mapped to this O-RU CC6  -->
<oRu0Cc6PhyId>6</oRu0Cc6PhyId>
<!-- Eighth Phy instance ID mapped to this O-RU CC7  -->
<oRu0Cc7PhyId>7</oRu0Cc7PhyId>
<!-- Ninth Phy instance ID mapped to this O-RU CC8  -->
<oRu0Cc8PhyId>8</oRu0Cc8PhyId>
<!-- Tenth Phy instance ID mapped to this O-RU CC9 -->
<oRu0Cc9PhyId>9</oRu0Cc9PhyId>
<!-- Eleventh Phy instance ID mapped to this O-RU CC10  -->
<oRu0Cc10PhyId>10</oRu0Cc10PhyId>
<!-- Twelfth Phy instance ID mapped to this O-RU CC11  -->
<oRu0Cc11PhyId>11</oRu0Cc11PhyId>

<!--  Number of cells (CCs) running on this O-RU  [1 - Cell , 2 - Cells, 3 - Cells , 4 - Cells ] -->
<oRu1NumCc>1</oRu1NumCc>
<!-- First Phy instance ID mapped to this O-RU CC0  -->
<oRu1Cc0PhyId>1</oRu1Cc0PhyId>
<!-- Second Phy instance ID mapped to this O-RU CC1 -->
<oRu1Cc1PhyId>1</oRu1Cc1PhyId>
<!-- Third Phy instance ID mapped to this O-RU CC2  -->
<oRu1Cc2PhyId>2</oRu1Cc2PhyId>
<!-- Fourth Phy instance ID mapped to this O-RU CC3  -->
<oRu1Cc3PhyId>3</oRu1Cc3PhyId>

<!--  Number of cells (CCs) running on this O-RU  [1 - Cell , 2 - Cells, 3 - Cells , 4 - Cells ] -->
<oRu2NumCc>1</oRu2NumCc>
<!-- First Phy instance ID mapped to this O-RU CC0  -->
<oRu2Cc0PhyId>2</oRu2Cc0PhyId>
<!-- Second Phy instance ID mapped to this O-RU CC1 -->
<oRu2Cc1PhyId>1</oRu2Cc1PhyId>
<!-- Third Phy instance ID mapped to this O-RU CC2  -->
<oRu2Cc2PhyId>2</oRu2Cc2PhyId>
<!-- Fourth Phy instance ID mapped to this O-RU CC3  -->
<oRu2Cc3PhyId>3</oRu2Cc3PhyId>

<!-- XRAN Thread (core where the XRAN polling function is pinned: Core, priority, Policy [0: SCHED_FIFO 1: SCHED_RR] -->
<xRANThread>19, 96, 0</xRANThread>

<!-- core mask for XRAN Packets Worker (core where the XRAN packet processing is pinned): Core, priority, Policy [0: SCHED_FIFO 1: SCHED_RR] -->
<xRANWorker>0x8000000000, 96, 0</xRANWorker>
<xRANWorker_64_127>0x0000000000, 96, 0</xRANWorker_64_127>
<!-- XRAN: Category of O-RU 0 - Category A, 1 - Category B -->
<Category>0</Category>
<!-- Slot setup processing offload to pipeline BBU cores: [0: USE XRAN CORES 1: USE BBU CORES] -->
<xRANOffload>0</xRANOffload>
<!-- XRAN MLOG: [0: DISABLE 1: ENABLE] -->
<xRANMLog>0</xRANMLog>

<!-- XRAN: enable sleep on PMD cores -->
<xranPmdSleep>0</xranPmdSleep>

<!-- RU Settings -->
<Tadv_cp_dl>25</Tadv_cp_dl>
<!-- Reception Window C-plane DL-->
<T2a_min_cp_dl>285</T2a_min_cp_dl>
<T2a_max_cp_dl>429</T2a_max_cp_dl>
<!-- Reception Window C-plane UL-->
<T2a_min_cp_ul>285</T2a_min_cp_ul>
<T2a_max_cp_ul>429</T2a_max_cp_ul>
<!-- Reception Window U-plane -->
<T2a_min_up>71</T2a_min_up>
<T2a_max_up>428</T2a_max_up>
<!-- Transmission Window U-plane -->
<Ta3_min>20</Ta3_min>
<Ta3_max>32</Ta3_max>

<!-- O-DU Settings -->
<!-- MTU size -->
<MTU>9600</MTU>
<!-- VLAN Tag used for C-Plane -->
<c_plane_vlan_tag>1</c_plane_vlan_tag>
<u_plane_vlan_tag>2</u_plane_vlan_tag>

<!-- Transmission Window Fast C-plane DL -->
<T1a_min_cp_dl>258</T1a_min_cp_dl>
<T1a_max_cp_dl>470</T1a_max_cp_dl>
<!-- Transmission Window Fast C-plane UL -->
<T1a_min_cp_ul>285</T1a_min_cp_ul>
<T1a_max_cp_ul>429</T1a_max_cp_ul>
<!-- Transmission Window U-plane -->
<T1a_min_up>50</T1a_min_up>
<T1a_max_up>196</T1a_max_up>
<!-- Reception Window U-Plane-->
<Ta4_min>0</Ta4_min>
<Ta4_max>75</Ta4_max>

<!-- Enable Control Plane -->
<EnableCp>1</EnableCp>

<DynamicSectionEna>0</DynamicSectionEna>
<!-- Enable Dynamic section allocation for UL -->
<DynamicSectionEnaUL>0</DynamicSectionEnaUL>
<!-- Enable muti section for C-Plane -->
<DynamicMultiSectionEna>0</DynamicMultiSectionEna>

<xRANSFNWrap>1</xRANSFNWrap>
<!-- Total Number of DL PRBs per symbol (starting from RB 0) that is transmitted (used for testing. If 0, then value is used from PHY_CONFIG_API) -->
<xRANNumDLPRBs>0</xRANNumDLPRBs>
<!-- Total Number of UL PRBs per symbol (starting from RB 0) that is received (used for testing. If 0, then value is used from PHY_CONFIG_API) -->
<xRANNumULPRBs>0</xRANNumULPRBs>
<!-- refer to alpha as defined in section 9.7.2 of ORAN spec. this value should be alpha*(1/1.2288ns), range 0 - 1e7 (ns) -->
<Gps_Alpha>0</Gps_Alpha>
<!-- beta value as defined in section 9.7.2 of ORAN spec. range -32767 ~ +32767 -->
<Gps_Beta>0</Gps_Beta>

<!-- XRAN: Compression mode on O-DU <-> O-RU 0 - no comp 1 - BFP -->
<xranCompMethod>1</xranCompMethod>
<!-- XRAN: Uplane Compression Header type 0 - dynamic 1 - static -->
<xranCompHdrType>0</xranCompHdrType>
<!-- XRAN: iqWidth when DynamicSectionEna and BFP Compression enabled -->
<xraniqWidth>9</xraniqWidth>
<!-- Whether Modulation Compression mode is enabled or not for DL only -->
<xranModCompEna>0</xranModCompEna>
<!-- XRAN: Prach Compression mode on O-DU <-> O-RU 0 - no comp 1 - BFP -->
<xranPrachCompMethod>0</xranPrachCompMethod>
<!-- Whether Prach iqWidth when DynamicSectionEna and BFP Compression enabled -->
<xranPrachiqWidth>16</xranPrachiqWidth>

<oRu0MaxSectionsPerSlot>6</oRu0MaxSectionsPerSlot>
<oRu0MaxSectionsPerSymbol>6</oRu0MaxSectionsPerSymbol>
<oRu0nPrbElemDl>1</oRu0nPrbElemDl>
<!--nRBStart, nRBSize, nStartSymb, numSymb, nBeamIndex, bf_weight_update, compMethod, iqWidth, BeamFormingType, Scalefactor, REMask -->
<!-- weight base beams -->
<oRu0PrbElemDl0>0,273,0,14,0,0,1,8,0,0,0</oRu0PrbElemDl0>
<oRu0PrbElemDl1>50,25,0,14,1,1,0,16,1,0,0</oRu0PrbElemDl1>
<oRu0PrbElemDl2>72,36,0,14,3,1,1,9,1,0,0</oRu0PrbElemDl2>
<oRu0PrbElemDl3>144,48,0,14,4,1,1,9,1,0,0</oRu0PrbElemDl3>
<oRu0PrbElemDl4>144,36,0,14,5,1,1,9,1,0,0</oRu0PrbElemDl4>
<oRu0PrbElemDl5>180,36,0,14,6,1,1,9,1,0,0</oRu0PrbElemDl5>
<oRu0PrbElemDl6>216,36,0,14,7,1,1,9,1,0,0</oRu0PrbElemDl6>
<oRu0PrbElemDl7>252,21,0,14,8,1,1,9,1,0,0</oRu0PrbElemDl7>


<oRu0nPrbElemUl>1</oRu0nPrbElemUl>
<!--nRBStart, nRBSize, nStartSymb, numSymb, nBeamIndex, bf_weight_update, compMethod, iqWidth, BeamFormingType, Scalefactor, REMask-->
<!-- weight base beams -->
<oRu0PrbElemUl0>0,273,0,14,0,0,1,8,0,0,0</oRu0PrbElemUl0>
<oRu0PrbElemUl1>0,273,0,14,0,0,1,8,0,0,0</oRu0PrbElemUl1>
<oRu0PrbElemUl2>72,36,0,14,3,1,1,9,1,0,0</oRu0PrbElemUl2>
<oRu0PrbElemUl3>108,36,0,14,4,1,1,9,1,0,0</oRu0PrbElemUl3>
<oRu0PrbElemUl4>144,36,0,14,5,1,1,9,1,0,0</oRu0PrbElemUl4>
<oRu0PrbElemUl5>180,36,0,14,6,1,1,9,1,0,0</oRu0PrbElemUl5>
<oRu0PrbElemUl6>216,36,0,14,7,1,1,9,1,0,0</oRu0PrbElemUl6>
<oRu0PrbElemUl7>252,21,0,14,8,1,1,9,1,0,0</oRu0PrbElemUl7>


<oRu1MaxSectionsPerSlot>6</oRu1MaxSectionsPerSlot>
<oRu1MaxSectionsPerSymbol>6</oRu1MaxSectionsPerSymbol>
<oRu1nPrbElemDl>1</oRu1nPrbElemDl>
<oRu1PrbElemDl0>0,273,0,14,0,0,1,8,0,0,0</oRu1PrbElemDl0>
<oRu1PrbElemDl1>53,53,0,14,2,1,1,8,1,0,0</oRu1PrbElemDl1>
<oRu1nPrbElemUl>1</oRu1nPrbElemUl>
<oRu1PrbElemUl0>0,273,0,14,0,0,1,8,0,0,0</oRu1PrbElemUl0>
<oRu1PrbElemUl1>53,53,0,14,2,1,1,8,1,0,0</oRu1PrbElemUl1>

<oRu2MaxSectionsPerSlot>6</oRu2MaxSectionsPerSlot>
<oRu2MaxSectionsPerSymbol>6</oRu2MaxSectionsPerSymbol>
<oRu2nPrbElemDl>1</oRu2nPrbElemDl>
<oRu2PrbElemDl0>0,273,0,14,0,0,1,8,0,0,0</oRu2PrbElemDl0>
<oRu2PrbElemDl1>53,53,0,14,2,1,1,8,1,0,0</oRu2PrbElemDl1>
<oRu2nPrbElemUl>1</oRu2nPrbElemUl>
<oRu2PrbElemUl0>0,273,0,14,0,0,1,8,0,0,0</oRu2PrbElemUl0>
<oRu2PrbElemUl1>53,53,0,14,2,1,1,8,1,0,0</oRu2PrbElemUl1>


</XranConfig>
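The oRu*PrbElem* entries in the configuration above are flat comma-separated lists whose field order is given only in the adjacent comment. A hedged sketch of how such an entry could be split into named fields, for example in a validation script; `parse_prb_elem` is an illustrative helper, not part of the release:

```shell
# Split a PrbElem value into named fields, following the field order stated
# in the config comment: nRBStart, nRBSize, nStartSymb, numSymb, nBeamIndex,
# bf_weight_update, compMethod, iqWidth, BeamFormingType, Scalefactor, REMask.
parse_prb_elem() {
  IFS=',' read -r nRBStart nRBSize nStartSymb numSymb nBeamIndex \
      bf_weight_update compMethod iqWidth BeamFormingType Scalefactor REMask <<EOF
$1
EOF
  echo "start=$nRBStart size=$nRBSize symb=$nStartSymb..$(( nStartSymb + numSymb - 1 )) iqWidth=$iqWidth"
}

parse_prb_elem "0,273,0,14,0,0,1,8,0,0,0"   # start=0 size=273 symb=0..13 iqWidth=8
```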
  4. Modify l1/bin/nr5g/gnb/l1/dpdk.sh (change the PCIe addresses to those of your VFs).

    $RTE_SDK/usertools/dpdk-devbind.py --bind=vfio-pci 0000:21:02.0
    $RTE_SDK/usertools/dpdk-devbind.py --bind=vfio-pci 0000:21:02.1
    
  5. Use the configuration of the test mac per:

    l1/bin/nr5g/gnb/testmac/cascade_lake-sp/csxsp_mu1_100mhz_mmimo_hton_xran.cfg (info only, not available)
    phystart 4 0 40200
    <!--   mmWave mu 3 100MHz                -->
    TEST_FD, 1002, 1, fd/mu3_100mhz/2/fd_testconfig_tst2.cfg
    
  6. To execute the l1app with O-DU functionality according to the O-RAN Fronthaul specification, enter:

    [root@xran flexran]# cd ./l1/bin/nr5g/gnb/l1
    [root@xran l1]# ./l1.sh --xran
    
  7. To execute the testmac with O-DU functionality according to the O-RAN Fronthaul specification, enter:

    [root@xran flexran]# cd ./l1/bin/nr5g/gnb/testmac
    
  8. To execute the test case, type (info only, as the file is not available):

    ./l2.sh
    --testfile=./cascade_lake-sp/csxsp_mu1_100mhz_mmimo_hton_xran.cfg
    

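The xRANThread and xRANWorker settings in the configuration above pin threads by core number and by hexadecimal core mask, respectively. A small sketch for decoding such a core mask into core numbers; `mask_to_cores` is an illustrative helper, not part of the release:

```shell
# Decode a CPU core mask (as used by <xRANWorker>) into core numbers.
mask_to_cores() {
  local mask=$(( $1 )) core=0 out=""
  while [ "$mask" -ne 0 ]; do
    if [ $(( mask & 1 )) -ne 0 ]; then out="$out $core"; fi
    mask=$(( mask >> 1 ))
    core=$(( core + 1 ))
  done
  echo "${out# }"
}

mask_to_cores 0x8000000000   # -> 39 (the xRANWorker mask above)
mask_to_cores 0x3800000      # -> 23 24 25
```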
Configure FlexRAN 5G NR L1 Application for multiple O-RUs with multiple numerologies

The 5G NR Layer 1 application can be used to execute scenarios with multiple cells running multiple numerologies. The current release supports a constant configuration of different numerologies on different O-RU ports. The first O-RU (O-RU0) must be configured with the highest numerology. The configuration procedure is similar to that described in the section above. Please refer to the configuration file located in bin/nr5g/gnb/l1/orancfg/sub3_mu0_20mhz_sub6_mu1_100mhz_4x4/gnb/xrancfg_sub6_oru.xml.

Install and Configure FlexRAN 5G NR L1 Application for Massive - MIMO

The 5G NR Layer 1 application can be used to execute the Massive-MIMO scenario either with the RU sample application or with just the O-DU side. A 3-cell scenario with 64T64R Massive MIMO is targeted for an Icelake system with a Columbiaville NIC. The current release supports a constant configuration of the slot pattern and RB allocation on the PHY side. Please follow the general build process in the FlexRAN 5G NR Reference Solution L1 User Guide (refer to Table 2).

  1. The O-RAN library is enabled by default in the l1 application.

  2. The 5G NR L1 application is available from https://github.com/intel/FlexRAN. Look in the l1/bin/nr5g/gnb/l1 folder for the l1app binary and the corresponding phycfg and xrancfg files.

  3. Configure the L1app using bin/nr5g/gnb/l1/xrancfg_sub6_mmimo.xml:

    <XranConfig>
    <version>oran_f_release_v1.0</version>
    <!-- number of O-RUs connected to the O-DU. All O-RUs have the same capabilities. Max O-RUs is per XRAN_PORTS_NUM, i.e. 4 -->
    <oRuNum>3</oRuNum>
    <!--  # 10G,25G,40G,100G speed of Physical connection on O-RU -->
    <oRuEthLinkSpeed>25</oRuEthLinkSpeed>
    <!--  # 1, 2, 3 total number of links per O-RU (Fronthaul Ethernet link in IOT spec) -->
    <oRuLinesNumber>2</oRuLinesNumber>
    <!--  (1) - C- plane and U-plane on the same set of VFs. (0) - C-plane and U-Plane use dedicated VFs -->
    <oRuCUon1Vf>1</oRuCUon1Vf>
    
    <!-- O-RU 0 -->
    <PciBusAddoRu0Vf0>0000:51:01.0</PciBusAddoRu0Vf0>
    <PciBusAddoRu0Vf1>0000:51:09.0</PciBusAddoRu0Vf1>
    <PciBusAddoRu0Vf2>0000:51:01.2</PciBusAddoRu0Vf2>
    <PciBusAddoRu0Vf3>0000:51:01.3</PciBusAddoRu0Vf3>
    
    <!-- O-RU 1 -->
    <PciBusAddoRu1Vf0>0000:51:11.0</PciBusAddoRu1Vf0>
    <PciBusAddoRu1Vf1>0000:51:19.0</PciBusAddoRu1Vf1>
    <PciBusAddoRu1Vf2>0000:51:01.6</PciBusAddoRu1Vf2>
    <PciBusAddoRu1Vf3>0000:51:01.7</PciBusAddoRu1Vf3>
    
    <!-- O-RU 2 -->
    <PciBusAddoRu2Vf0>0000:18:01.0</PciBusAddoRu2Vf0>
    <PciBusAddoRu2Vf1>0000:18:09.0</PciBusAddoRu2Vf1>
    <PciBusAddoRu2Vf2>0000:51:02.2</PciBusAddoRu2Vf2>
    <PciBusAddoRu2Vf3>0000:51:02.3</PciBusAddoRu2Vf3>
    
    <!-- O-RU 3 -->
    <PciBusAddoRu3Vf0>0000:00:00.0</PciBusAddoRu3Vf0>
    <PciBusAddoRu3Vf1>0000:00:00.0</PciBusAddoRu3Vf1>
    <PciBusAddoRu3Vf2>0000:00:00.0</PciBusAddoRu3Vf2>
    <PciBusAddoRu3Vf3>0000:00:00.0</PciBusAddoRu3Vf3>
    
    <!-- remote O-RU 0 Eth Link 0 VF0, VF1 -->
    <oRuRem0Mac0>00:11:22:33:00:01</oRuRem0Mac0>
    <oRuRem0Mac1>00:11:22:33:00:11</oRuRem0Mac1>
    <!-- remote O-RU 0 Eth Link 1 VF2, VF3 -->
    <oRuRem0Mac2>00:11:22:33:00:21</oRuRem0Mac2>
    <oRuRem0Mac3>00:11:22:33:00:31</oRuRem0Mac3>
    
    <!-- remote O-RU 1 Eth Link 0 VF4, VF5 -->
    <oRuRem1Mac0>00:11:22:33:01:01</oRuRem1Mac0>
    <oRuRem1Mac1>00:11:22:33:01:11</oRuRem1Mac1>
    <!-- remote O-RU 1 Eth Link 1 VF6, VF7 -->
    <oRuRem1Mac2>00:11:22:33:01:21</oRuRem1Mac2>
    <oRuRem1Mac3>00:11:22:33:01:31</oRuRem1Mac3>
    
    <!-- remote O-RU 2 Eth Link 0 VF8, VF9 -->
    <oRuRem2Mac0>00:11:22:33:02:01</oRuRem2Mac0>
    <oRuRem2Mac1>00:11:22:33:02:11</oRuRem2Mac1>
    <!-- remote O-RU 2 Eth Link 1 VF10, VF11 -->
    <oRuRem2Mac2>00:11:22:33:02:21</oRuRem2Mac2>
    <oRuRem2Mac3>00:11:22:33:02:31</oRuRem2Mac3>
    
    <!-- remote O-RU 3 Eth Link 0 VF12, VF13 -->
    <oRuRem3Mac0>00:11:22:33:03:01</oRuRem3Mac0>
    <oRuRem3Mac1>00:11:22:33:03:11</oRuRem3Mac1>
    <!-- remote O-RU 3 Eth Link 1 VF14, VF15 -->
    <oRuRem3Mac2>00:11:22:33:03:21</oRuRem3Mac2>
    <oRuRem3Mac3>00:11:22:33:03:31</oRuRem3Mac3>
    
    <!--  Number of cells (CCs) running on this O-RU  [1 - Cell , 2 - Cells, 3 - Cells , 4 - Cells ] -->
    <oRu0NumCc>1</oRu0NumCc>
    <!-- First Phy instance ID mapped to this O-RU CC0  -->
    <oRu0Cc0PhyId>0</oRu0Cc0PhyId>
    <!-- Second Phy instance ID mapped to this O-RU CC1 -->
    <oRu0Cc1PhyId>1</oRu0Cc1PhyId>
    <!-- Third Phy instance ID mapped to this O-RU CC2  -->
    <oRu0Cc2PhyId>2</oRu0Cc2PhyId>
    <!-- Fourth Phy instance ID mapped to this O-RU CC3  -->
    <oRu0Cc3PhyId>3</oRu0Cc3PhyId>
    
    <!--  Number of cells (CCs) running on this O-RU  [1 - Cell , 2 - Cells, 3 - Cells , 4 - Cells ] -->
    <oRu1NumCc>1</oRu1NumCc>
    <!-- First Phy instance ID mapped to this O-RU CC0  -->
    <oRu1Cc0PhyId>1</oRu1Cc0PhyId>
    <!-- Second Phy instance ID mapped to this O-RU CC1 -->
    <oRu1Cc1PhyId>1</oRu1Cc1PhyId>
    <!-- Third Phy instance ID mapped to this O-RU CC2  -->
    <oRu1Cc2PhyId>2</oRu1Cc2PhyId>
    <!-- Fourth Phy instance ID mapped to this O-RU CC3  -->
    <oRu1Cc3PhyId>3</oRu1Cc3PhyId>
    
    <!--  Number of cells (CCs) running on this O-RU  [1 - Cell , 2 - Cells, 3 - Cells , 4 - Cells ] -->
    <oRu2NumCc>1</oRu2NumCc>
    <!-- First Phy instance ID mapped to this O-RU CC0  -->
    <oRu2Cc0PhyId>2</oRu2Cc0PhyId>
    <!-- Second Phy instance ID mapped to this O-RU CC1 -->
    <oRu2Cc1PhyId>1</oRu2Cc1PhyId>
    <!-- Third Phy instance ID mapped to this O-RU CC2  -->
    <oRu2Cc2PhyId>2</oRu2Cc2PhyId>
    <!-- Fourth Phy instance ID mapped to this O-RU CC3  -->
    <oRu2Cc3PhyId>3</oRu2Cc3PhyId>
    
    <!-- XRAN Thread (core where the XRAN polling function is pinned: Core, priority, Policy [0: SCHED_FIFO 1: SCHED_RR] -->
    <xRANThread>22, 96, 0</xRANThread>
    
    <!-- core mask for XRAN Packets Worker (core where the XRAN packet processing is pinned): Core, priority, Policy [0: SCHED_FIFO 1: SCHED_RR] -->
    <xRANWorker>0x3800000, 96, 0</xRANWorker>
    <!-- XRAN: Category of O-RU 0 - Category A, 1 - Category B -->
    <Category>1</Category>
    
    <!-- XRAN: enable sleep on PMD cores -->
    <xranPmdSleep>0</xranPmdSleep>
    
    <!-- RU Settings -->
    <Tadv_cp_dl>25</Tadv_cp_dl>
    <!-- Reception Window C-plane DL-->
    <T2a_min_cp_dl>285</T2a_min_cp_dl>
    <T2a_max_cp_dl>429</T2a_max_cp_dl>
    <!-- Reception Window C-plane UL-->
    <T2a_min_cp_ul>285</T2a_min_cp_ul>
    <T2a_max_cp_ul>429</T2a_max_cp_ul>
    <!-- Reception Window U-plane -->
    <T2a_min_up>71</T2a_min_up>
    <T2a_max_up>428</T2a_max_up>
    <!-- Transmission Window U-plane -->
    <Ta3_min>20</Ta3_min>
    <Ta3_max>32</Ta3_max>
    
    <!-- O-DU Settings -->
    <!-- MTU size -->
    <MTU>9600</MTU>
    <!-- VLAN Tag used for C-Plane -->
    <c_plane_vlan_tag>1</c_plane_vlan_tag>
    <u_plane_vlan_tag>2</u_plane_vlan_tag>
    
    <!-- Transmission Window Fast C-plane DL -->
    <T1a_min_cp_dl>258</T1a_min_cp_dl>
    <T1a_max_cp_dl>429</T1a_max_cp_dl>
    <!-- Transmission Window Fast C-plane UL -->
    <T1a_min_cp_ul>285</T1a_min_cp_ul>
    <T1a_max_cp_ul>300</T1a_max_cp_ul>
    <!-- Transmission Window U-plane -->
    <T1a_min_up>96</T1a_min_up>
    <T1a_max_up>196</T1a_max_up>
    <!-- Reception Window U-Plane-->
    <Ta4_min>0</Ta4_min>
    <Ta4_max>75</Ta4_max>
    
    <!-- Enable Control Plane -->
    <EnableCp>1</EnableCp>
    
    <DynamicSectionEna>0</DynamicSectionEna>
    <!-- Enable Dynamic section allocation for UL -->
    <DynamicSectionEnaUL>0</DynamicSectionEnaUL>
    <xRANSFNWrap>1</xRANSFNWrap>
    <!-- Total Number of DL PRBs per symbol (starting from RB 0) that is transmitted (used for testing. If 0, then value is used from PHY_CONFIG_API) -->
    <xRANNumDLPRBs>0</xRANNumDLPRBs>
    <!-- Total Number of UL PRBs per symbol (starting from RB 0) that is received (used for testing. If 0, then value is used from PHY_CONFIG_API) -->
    <xRANNumULPRBs>0</xRANNumULPRBs>
    <!-- refer to alpha as defined in section 9.7.2 of ORAN spec. this value should be alpha*(1/1.2288ns), range 0 - 1e7 (ns) -->
    <Gps_Alpha>0</Gps_Alpha>
    <!-- beta value as defined in section 9.7.2 of ORAN spec. range -32767 ~ +32767 -->
    <Gps_Beta>0</Gps_Beta>
    
    <!-- XRAN: Compression mode on O-DU <-> O-RU 0 - no comp 1 - BFP -->
    <xranCompMethod>1</xranCompMethod>
    <!-- XRAN: iqWidth when DynamicSectionEna and BFP Compression enabled -->
    <xraniqWidth>9</xraniqWidth>
    
    <!-- M-plane values of O-RU configuration  -->
    <oRu0MaxSectionsPerSlot>6</oRu0MaxSectionsPerSlot>
    <oRu0MaxSectionsPerSymbol>6</oRu0MaxSectionsPerSymbol>
    
    <oRu0nPrbElemDl>6</oRu0nPrbElemDl>
    <!--nRBStart, nRBSize, nStartSymb, numSymb, nBeamIndex, bf_weight_update, compMethod, iqWidth, BeamFormingType, Scalefactor, REMask -->
    <!-- weight base beams -->
    <oRu0PrbElemDl0>0,48,0,14,1,1,1,9,1,0,0</oRu0PrbElemDl0>
    <oRu0PrbElemDl1>48,48,0,14,2,1,1,9,1,0,0</oRu0PrbElemDl1>
    <oRu0PrbElemDl2>96,48,0,14,2,1,1,9,1,0,0</oRu0PrbElemDl2>
    <oRu0PrbElemDl3>144,48,0,14,4,1,1,9,1,0,0</oRu0PrbElemDl3>
    <oRu0PrbElemDl4>192,48,0,14,5,1,1,9,1,0,0</oRu0PrbElemDl4>
    <oRu0PrbElemDl5>240,33,0,14,6,1,1,9,1,0,0</oRu0PrbElemDl5>
    <oRu0PrbElemDl6>240,33,0,14,7,1,1,9,1,0,0</oRu0PrbElemDl6>
    <oRu0PrbElemDl7>252,21,0,14,8,1,1,9,1,0,0</oRu0PrbElemDl7>
    
    <!-- extType = 11 -->
    <oRu0ExtBfwDl0>2,24,0,0,9,1</oRu0ExtBfwDl0>
    <oRu0ExtBfwDl1>2,24,0,0,9,1</oRu0ExtBfwDl1>
    <oRu0ExtBfwDl2>2,24,0,0,9,1</oRu0ExtBfwDl2>
    <oRu0ExtBfwDl3>2,24,0,0,9,1</oRu0ExtBfwDl3>
    <oRu0ExtBfwDl4>2,24,0,0,9,1</oRu0ExtBfwDl4>
    <oRu0ExtBfwDl5>2,17,0,0,9,1</oRu0ExtBfwDl5>
    
    <oRu0nPrbElemUl>6</oRu0nPrbElemUl>
    <!--nRBStart, nRBSize, nStartSymb, numSymb, nBeamIndex, bf_weight_update, compMethod, iqWidth, BeamFormingType, Scalefactor, REMask -->
    <!-- weight base beams -->
    <oRu0PrbElemUl0>0,48,0,14,1,1,1,9,1,0,0</oRu0PrbElemUl0>
    <oRu0PrbElemUl1>48,48,0,14,2,1,1,9,1,0,0</oRu0PrbElemUl1>
    <oRu0PrbElemUl2>96,48,0,14,2,1,1,9,1,0,0</oRu0PrbElemUl2>
    <oRu0PrbElemUl3>144,48,0,14,4,1,1,9,1,0,0</oRu0PrbElemUl3>
    <oRu0PrbElemUl4>192,48,0,14,5,1,1,9,1,0,0</oRu0PrbElemUl4>
    <oRu0PrbElemUl5>240,33,0,14,6,1,1,9,1,0,0</oRu0PrbElemUl5>
    <oRu0PrbElemUl6>240,33,0,14,7,1,1,9,1,0,0</oRu0PrbElemUl6>
    <oRu0PrbElemUl7>252,21,0,14,8,1,1,9,1,0,0</oRu0PrbElemUl7>
    
    <!-- extType = 11 -->
    <oRu0ExtBfwUl0>2,24,0,0,9,1</oRu0ExtBfwUl0>
    <oRu0ExtBfwUl1>2,24,0,0,9,1</oRu0ExtBfwUl1>
    <oRu0ExtBfwUl2>2,24,0,0,9,1</oRu0ExtBfwUl2>
    <oRu0ExtBfwUl3>2,24,0,0,9,1</oRu0ExtBfwUl3>
    <oRu0ExtBfwUl4>2,24,0,0,9,1</oRu0ExtBfwUl4>
    <oRu0ExtBfwUl5>2,17,0,0,9,1</oRu0ExtBfwUl5>
    
    <oRu0nPrbElemSrs>1</oRu0nPrbElemSrs>
    <!--nRBStart, nRBSize, nStartSymb, numSymb, nBeamIndex, bf_weight_update, compMethod, iqWidth, BeamFormingType, Scalefactor, REMask -->
    <!-- weight base beams -->
    <oRu0PrbElemSrs0>0,273,0,14,1,1,1,9,1,0,0</oRu0PrbElemSrs0>
    <oRu0PrbElemSrs1>0,273,0,14,1,1,1,9,1,0,0</oRu0PrbElemSrs1>
    
    <!-- M-plane values of O-RU configuration  -->
    <oRu1MaxSectionsPerSlot>6</oRu1MaxSectionsPerSlot>
    <oRu1MaxSectionsPerSymbol>6</oRu1MaxSectionsPerSymbol>
    
    <oRu1nPrbElemDl>2</oRu1nPrbElemDl>
    <!--nRBStart, nRBSize, nStartSymb, numSymb, nBeamIndex, bf_weight_update, compMethod, iqWidth, BeamFormingType, Scalefactor, REMask -->
    <!-- weight base beams -->
    <oRu1PrbElemDl0>0,48,0,14,0,1,1,9,1,0,0</oRu1PrbElemDl0>
    <oRu1PrbElemDl1>48,48,0,14,2,1,1,9,1,0,0</oRu1PrbElemDl1>
    <oRu1PrbElemDl2>96,48,0,14,3,1,1,9,1,0,0</oRu1PrbElemDl2>
    <oRu1PrbElemDl3>144,48,0,14,4,1,1,9,1,0,0</oRu1PrbElemDl3>
    <oRu1PrbElemDl4>144,36,0,14,5,1,1,9,1,0,0</oRu1PrbElemDl4>
    <oRu1PrbElemDl5>180,36,0,14,6,1,1,9,1,0,0</oRu1PrbElemDl5>
    <oRu1PrbElemDl6>216,36,0,14,7,1,1,9,1,0,0</oRu1PrbElemDl6>
    <oRu1PrbElemDl7>252,21,0,14,8,1,1,9,1,0,0</oRu1PrbElemDl7>
    
    <!-- extType = 11 -->
    <oRu1ExtBfwDl0>2,24,0,0,9,1</oRu1ExtBfwDl0>
    <oRu1ExtBfwDl1>2,24,0,0,9,1</oRu1ExtBfwDl1>
    
    <oRu1nPrbElemUl>2</oRu1nPrbElemUl>
    <!--nRBStart, nRBSize, nStartSymb, numSymb, nBeamIndex, bf_weight_update, compMethod, iqWidth, BeamFormingType, Scalefactor, REMask -->
    <!-- weight base beams -->
    <oRu1PrbElemUl0>0,48,0,14,1,1,1,9,1,0,0</oRu1PrbElemUl0>
    <oRu1PrbElemUl1>48,48,0,14,2,1,1,9,1,0,0</oRu1PrbElemUl1>
    <oRu1PrbElemUl2>72,36,0,14,3,1,1,9,1,0,0</oRu1PrbElemUl2>
    <oRu1PrbElemUl3>108,36,0,14,4,1,1,9,1,0,0</oRu1PrbElemUl3>
    <oRu1PrbElemUl4>144,36,0,14,5,1,1,9,1,0,0</oRu1PrbElemUl4>
    <oRu1PrbElemUl5>180,36,0,14,6,1,1,9,1,0,0</oRu1PrbElemUl5>
    <oRu1PrbElemUl6>216,36,0,14,7,1,1,9,1,0,0</oRu1PrbElemUl6>
    <oRu1PrbElemUl7>252,21,0,14,8,1,1,9,1,0,0</oRu1PrbElemUl7>
    
    <!-- extType = 11 -->
    <oRu1ExtBfwUl0>2,24,0,0,9,1</oRu1ExtBfwUl0>
    <oRu1ExtBfwUl1>2,24,0,0,9,1</oRu1ExtBfwUl1>
    
    <oRu1nPrbElemSrs>1</oRu1nPrbElemSrs>
    <!--nRBStart, nRBSize, nStartSymb, numSymb, nBeamIndex, bf_weight_update, compMethod, iqWidth, BeamFormingType, Scalefactor, REMask -->
    <!-- weight base beams -->
    <oRu1PrbElemSrs0>0,273,0,14,1,1,1,9,1,0,0</oRu1PrbElemSrs0>
    <oRu1PrbElemSrs1>0,273,0,14,1,1,1,9,1,0,0</oRu1PrbElemSrs1>
    
    <!-- M-plane values of O-RU configuration  -->
    <oRu2MaxSectionsPerSlot>6</oRu2MaxSectionsPerSlot>
    <oRu2MaxSectionsPerSymbol>6</oRu2MaxSectionsPerSymbol>
    
    <oRu2nPrbElemDl>2</oRu2nPrbElemDl>
    <!--nRBStart, nRBSize, nStartSymb, numSymb, nBeamIndex, bf_weight_update, compMethod, iqWidth, BeamFormingType, Scalefactor, REMask -->
    <!-- weight base beams -->
    <oRu2PrbElemDl0>0,48,0,14,1,1,1,9,1,0,0</oRu2PrbElemDl0>
    <oRu2PrbElemDl1>48,48,0,14,2,1,1,9,1,0,0</oRu2PrbElemDl1>
    <oRu2PrbElemDl2>96,48,0,14,3,1,1,9,1,0,0</oRu2PrbElemDl2>
    <oRu2PrbElemDl3>144,48,0,14,4,1,1,9,1,0,0</oRu2PrbElemDl3>
    <oRu2PrbElemDl4>144,36,0,14,5,1,1,9,1,0,0</oRu2PrbElemDl4>
    <oRu2PrbElemDl5>180,36,0,14,6,1,1,9,1,0,0</oRu2PrbElemDl5>
    <oRu2PrbElemDl6>216,36,0,14,7,1,1,9,1,0,0</oRu2PrbElemDl6>
    <oRu2PrbElemDl7>252,21,0,14,8,1,1,9,1,0,0</oRu2PrbElemDl7>
    
    <!-- extType = 11 -->
    <oRu2ExtBfwDl0>2,24,0,0,9,1</oRu2ExtBfwDl0>
    <oRu2ExtBfwDl1>2,24,0,0,9,1</oRu2ExtBfwDl1>
    
    <oRu2nPrbElemUl>2</oRu2nPrbElemUl>
    <!--nRBStart, nRBSize, nStartSymb, numSymb, nBeamIndex, bf_weight_update, compMethod, iqWidth, BeamFormingType, Scalefactor, REMask -->
    <!-- weight base beams -->
    <oRu2PrbElemUl0>0,48,0,14,1,1,1,9,1,0,0</oRu2PrbElemUl0>
    <oRu2PrbElemUl1>48,48,0,14,2,1,1,9,1,0,0</oRu2PrbElemUl1>
    <oRu2PrbElemUl2>72,36,0,14,3,1,1,9,1,0,0</oRu2PrbElemUl2>
    <oRu2PrbElemUl3>108,36,0,14,4,1,1,9,1,0,0</oRu2PrbElemUl3>
    <oRu2PrbElemUl4>144,36,0,14,5,1,1,9,1,0,0</oRu2PrbElemUl4>
    <oRu2PrbElemUl5>180,36,0,14,6,1,1,9,1,0,0</oRu2PrbElemUl5>
    <oRu2PrbElemUl6>216,36,0,14,7,1,1,9,1,0,0</oRu2PrbElemUl6>
    <oRu2PrbElemUl7>252,21,0,14,8,1,1,9,1,0,0</oRu2PrbElemUl7>
    
    <!-- extType = 11 -->
    <oRu2ExtBfwUl0>2,24,0,0,9,1</oRu2ExtBfwUl0>
    <oRu2ExtBfwUl1>2,24,0,0,9,1</oRu2ExtBfwUl1>
    
    <oRu2nPrbElemSrs>1</oRu2nPrbElemSrs>
    <!--nRBStart, nRBSize, nStartSymb, numSymb, nBeamIndex, bf_weight_update, compMethod, iqWidth, BeamFormingType, Scalefactor, REMask -->
    <!-- weight base beams -->
    <oRu2PrbElemSrs0>0,273,0,14,1,1,1,9,1,0,0</oRu2PrbElemSrs0>
    <oRu2PrbElemSrs1>0,273,0,14,1,1,1,9,1,0,0</oRu2PrbElemSrs1>
    
    </XranConfig>
    
  4. Modify ./bin/nr5g/gnb/l1/dpdk.sh (change the PCIe addresses to match your VFs).

    ethDevice0=0000:51:01.0
    ethDevice1=0000:51:01.1
    ethDevice2=0000:51:01.2
    ethDevice3=0000:51:01.3
    ethDevice4=0000:51:01.4
    ethDevice5=0000:51:01.5
    ethDevice6=
    ethDevice7=
    ethDevice8=
    ethDevice9=
    ethDevice10=
    ethDevice11=
    fecDevice0=0000:92:00.0
    
  5. Use the test MAC configuration per:

    (Info only, as these files are not available)
    /bin/nr5g/gnb/testmac/icelake-sp/icxsp_mu1_100mhz_mmimo_64x64_hton_xran.cfg
    phystart 4 0 100200
    TEST_FD, 3370, 3, fd/mu1_100mhz/376/fd_testconfig_tst376.cfg,
    fd/mu1_100mhz/377/fd_testconfig_tst377.cfg,
    fd/mu1_100mhz/377/fd_testconfig_tst377.cfg
    
  6. To execute the l1app with O-DU functionality according to the O-RAN Fronthaul specification, enter:

    [root@xran flexran] cd ./l1/bin/nr5g/gnb/l1
    ./l1.sh -xranmmimo
    Radio mode with XRAN - Sub6 100Mhz Massive-MIMO (CatB)
    
  7. To execute the testmac with O-DU functionality according to the O-RAN Fronthaul specification, enter:

    [root@xran flexran] cd ./l1/bin/nr5g/gnb/testmac
    
  8. To execute a test case, type:

    (Info only as file not available)
    ./l2.sh --testfile=./cascade_lake-sp/csxsp_mu1_100mhz_mmimo_hton_xran.cfg
    

PTP Configuration

PTP Synchronization

Precision Time Protocol (PTP) provides an efficient way to synchronize time on network nodes. The protocol uses a Master-Slave architecture: the Grandmaster Clock (Master) is the reference clock for the other nodes, which adapt their clocks to the master.

Using the Physical Hardware Clock (PHC) from the Grandmaster Clock, precision timestamp packets can be served from the NIC port to other network nodes. Slave nodes adjust their PHC to the master following the IEEE 1588 specification.

There are existing implementations of the PTP protocol that are widely used in industry. One of them is PTP for Linux, a set of tools providing the necessary PTP functionality. There is no need to re-implement the 1588 protocol because PTP for Linux is precise and efficient enough to be used out of the box.

To meet O-RAN requirements, two tools from PTP for Linux package are required: ptp4l and phc2sys.

PTP for Linux* Requirements

PTP for Linux* introduces some software and hardware requirements. The machine on which the tools will run needs at least kernel version 3.10 (built-in PTP support). Several options need to be enabled in the Kernel configuration:

  • CONFIG_PPS

  • CONFIG_NETWORK_PHY_TIMESTAMPING

  • PTP_1588_CLOCK

Be sure that the Kernel is compiled with these options.

For the best precision, PTP uses hardware timestamping. The NIC has its own clock, called the Physical Hardware Clock (PHC), which is read just before a packet is sent to minimize the delays added by the Kernel processing the packet. Not every NIC supports this feature. To confirm that the currently attached NIC supports hardware timestamps, use ethtool with the command:

ethtool -T eth0

Where the eth0 is the potential PHC port. The output from the command should say that there is Hardware Timestamps support.
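As a sketch, this check can also be scripted; the sample output string below is illustrative and stands in for a live `ethtool -T eth0` call, so the logic is self-contained:

```shell
# Illustrative check for the hardware timestamping capabilities PTP needs.
# A live check would capture:  sample=$(ethtool -T eth0)
sample="Time stamping parameters for eth0:
Capabilities:
        hardware-transmit
        hardware-receive
        hardware-raw-clock
PTP Hardware Clock: 0"
if echo "$sample" | grep -q "hardware-transmit" && \
   echo "$sample" | grep -q "hardware-receive"; then
    hw_ts=yes
else
    hw_ts=no
fi
echo "hardware timestamping: $hw_ts"
```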

To set up PTP for Linux*:

1. Download the source code:

git clone http://git.code.sf.net/p/linuxptp/code linuxptp
git checkout v2.0

Note: Apply the following patch (required to work around an issue with some of the GM PTP packet sizes):

diff --git a/msg.c b/msg.c
old mode 100644
new mode 100755
index d1619d4..40d1538
--- a/msg.c
+++ b/msg.c
@@ -399,9 +399,11 @@ int msg_post_recv(struct ptp_message *m, int cnt)
port_id_post_recv(&m->pdelay_resp.requestingPortIdentity);
break;
case FOLLOW_UP:
+ cnt -= 4;
timestamp_post_recv(m, &m->follow_up.preciseOriginTimestamp);
break;
case DELAY_RESP:
+ cnt -= 4;
timestamp_post_recv(m, &m->delay_resp.receiveTimestamp);
port_id_post_recv(&m->delay_resp.requestingPortIdentity);
break;
  2. Build and install ptp4l.

    # make && make install
    

3. Modify configs/default.cfg to set the Sync message interval to 0.0625 s:

logSyncInterval -4
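logSyncInterval is the base-2 logarithm of the Sync message period in seconds, so -4 corresponds to a period of 2^-4 = 0.0625 s (16 Sync messages per second); a quick check:

```shell
# logSyncInterval -4  =>  Sync period of 2^-4 seconds
awk 'BEGIN { printf "sync period = %.4f s\n", 2^-4 }'
# prints: sync period = 0.0625 s
```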

ptp4l

This tool handles all PTP traffic on the provided NIC port and updates the PHC. It also determines the Grandmaster Clock and tracks synchronization status. The tool can be run as a daemon or as a regular Linux* application. When synchronization is reached, it prints precision-tracking output on the screen. The ptp4l configuration file contains many options that can be tuned for the best synchronization precision; however, even with default.cfg the synchronization quality is excellent.

To start the synchronization process run:

cd linuxptp
./ptp4l -f ./configs/default.cfg -2 -i <if_name> -m

The output below shows what a non-master node should print when synchronization starts. It means that the PHC on this machine is synchronized to the master PHC.

ptp4l[1434165.358]: port 1: INITIALIZING to LISTENING on INIT_COMPLETE
ptp4l[1434165.358]: port 0: INITIALIZING to LISTENING on INIT_COMPLETE
ptp4l[1434166.384]: port 1: new foreign master fcaf6a.fffe.029708-1
ptp4l[1434170.352]: selected best master clock fcaf6a.fffe.029708
ptp4l[1434170.352]: updating UTC offset to 37
ptp4l[1434170.352]: port 1: LISTENING to UNCALIBRATED on RS_SLAVE
ptp4l[1434171.763]: master offset -5873 s0 freq -18397 path delay 2778
ptp4l[1434172.763]: master offset -6088 s2 freq -18612 path delay 2778
ptp4l[1434172.763]: port 1: UNCALIBRATED to SLAVE on MASTER_CLOCK_SELECTED
ptp4l[1434173.763]: master offset -5886 s2 freq -24498 path delay 2732
ptp4l[1434174.763]: master offset 221 s2 freq -20157 path delay 2728
ptp4l[1434175.763]: master offset 1911 s2 freq -18401 path delay 2724
ptp4l[1434176.763]: master offset 1774 s2 freq -17964 path delay 2728
ptp4l[1434177.763]: master offset 1198 s2 freq -18008 path delay 2728
ptp4l[1434178.763]: master offset 746 s2 freq -18101 path delay 2755
ptp4l[1434179.763]: master offset 218 s2 freq -18405 path delay 2792
ptp4l[1434180.763]: master offset 103 s2 freq -18454 path delay 2792
ptp4l[1434181.763]: master offset -13 s2 freq -18540 path delay 2813
ptp4l[1434182.763]: master offset 9 s2 freq -18521 path delay 2813
ptp4l[1434183.763]: master offset 11 s2 freq -18517 path delay 2813
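For monitoring convergence, the "master offset" field (in nanoseconds) can be extracted from such log lines. The snippet below is an illustrative sketch operating on one captured line, with an assumed +/-50 ns lock window:

```shell
# Illustrative: pull the "master offset" value (ns) out of a ptp4l log line
# and flag whether it is within an assumed +/-50 ns lock window.
line='ptp4l[1434183.763]: master offset -13 s2 freq -18540 path delay 2813'
offset=$(echo "$line" | awk '{ for (i = 1; i < NF; i++) if ($i == "offset") print $(i + 1) }')
abs=${offset#-}                     # magnitude, sign stripped
if [ "$abs" -le 50 ]; then
    echo "locked (offset ${offset} ns)"
else
    echo "still converging (offset ${offset} ns)"
fi
```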

phc2sys

The PHC clock is independent from the system clock. Synchronizing only PHC does not make the system clock exactly the same as the master. The xRAN library requires use of the system clock to determine a common point in time on two machines (O-DU and RU) to start transmission at the same moment and keep time frames defined by O-RAN Fronthaul specification.

This application keeps the system clock synchronized to the PHC, which makes it possible to use POSIX timers as a time reference in the xRAN application.

Run phc2sys with the command:

cd linuxptp
./phc2sys -s enp25s0f0 -w -m -R 8

Command output will look like:

ptp4l[1434165.342]: selected /dev/ptp4 as PTP
phc2sys[1434344.651]: CLOCK_REALTIME phc offset       450 s2 freq  -39119 delay   1354
phc2sys[1434344.776]: CLOCK_REALTIME phc offset       499 s2 freq  -38620 delay   1344
phc2sys[1434344.902]: CLOCK_REALTIME phc offset       485 s2 freq  -38484 delay   1347
phc2sys[1434345.027]: CLOCK_REALTIME phc offset       476 s2 freq  -38348 delay   1346
phc2sys[1434345.153]: CLOCK_REALTIME phc offset       392 s2 freq  -38289 delay   1340
phc2sys[1434345.278]: CLOCK_REALTIME phc offset       319 s2 freq  -38244 delay   1340
phc2sys[1434345.404]: CLOCK_REALTIME phc offset       278 s2 freq  -38190 delay   1349
phc2sys[1434345.529]: CLOCK_REALTIME phc offset       221 s2 freq  -38163 delay   1343
phc2sys[1434345.654]: CLOCK_REALTIME phc offset        97 s2 freq  -38221 delay   1342
phc2sys[1434345.780]: CLOCK_REALTIME phc offset        67 s2 freq  -38222 delay   1344
phc2sys[1434345.905]: CLOCK_REALTIME phc offset        68 s2 freq  -38201 delay   1341
phc2sys[1434346.031]: CLOCK_REALTIME phc offset       104 s2 freq  -38144 delay   1340
phc2sys[1434346.156]: CLOCK_REALTIME phc offset        58 s2 freq  -38159 delay   1340
phc2sys[1434346.281]: CLOCK_REALTIME phc offset        12 s2 freq  -38188 delay   1343
phc2sys[1434346.407]: CLOCK_REALTIME phc offset       -36 s2 freq  -38232 delay   1342
phc2sys[1434346.532]: CLOCK_REALTIME phc offset      -103 s2 freq  -38310 delay   1348

Configuration C3

Configuration C3 can be simulated for the O-DU using a separate server acting as the Fronthaul network and the O-RU at the same time. The O-RU server can be configured to relay PTP and act as the PTP master for the O-DU. The settings below can be used to instantiate this scenario. The difference is that on the O-DU side, the Fronthaul port can be used as the source of PTP as well as for U-plane and C-plane traffic.

1. Follow the steps in Appendix B.1.1, PTP for Linux* Requirements to install PTP on the O-RU server.

2. Copy configs/default.cfg to configs/default_slave.cfg and modify the copied file as below:

diff --git a/configs/default.cfg b/configs/default.cfg
old mode 100644
new mode 100755
index e23dfd7..f1ecaf1
--- a/configs/default.cfg
+++ b/configs/default.cfg
@@ -3,26 +3,26 @@
# Default Data Set
#
twoStepFlag 1
-slaveOnly 0
+slaveOnly 1
priority1 128
-priority2 128
+priority2 255
domainNumber 0
#utc_offset 37
-clockClass 248
+clockClass 255
clockAccuracy 0xFE
offsetScaledLogVariance 0xFFFF
free_running 0
freq_est_interval 1
dscp_event 0
dscp_general 0
-dataset_comparison ieee1588
+dataset_comparison G.8275.x
G.8275.defaultDS.localPriority 128
maxStepsRemoved 255
#
# Port Data Set
#
logAnnounceInterval 1
-logSyncInterval 0
+logSyncInterval -4
operLogSyncInterval 0
logMinDelayReqInterval 0
logMinPdelayReqInterval 0
@@ -37,7 +37,7 @@ G.8275.portDS.localPriority 128
asCapable auto
BMCA ptp
inhibit_announce 0
-inhibit_pdelay_req 0
+#inhibit_pdelay_req 0
ignore_source_id 0
#
# Run time options
  3. Start the slave port toward the PTP GM:

    ./ptp4l -f ./configs/default_slave.cfg -2 -i enp25s0f0 –m
    

Example of output:

./ptp4l -f ./configs/default_slave.cfg -2 -i enp25s0f0 -m
ptp4l[3904470.256]: selected /dev/ptp6 as PTP clock
ptp4l[3904470.274]: port 1: INITIALIZING to LISTENING on INIT_COMPLETE
ptp4l[3904470.275]: port 0: INITIALIZING to LISTENING on INIT_COMPLETE
ptp4l[3904471.085]: port 1: new foreign master fcaf6a.fffe.029708-1
ptp4l[3904475.053]: selected best master clock fcaf6a.fffe.029708
ptp4l[3904475.053]: updating UTC offset to 37
ptp4l[3904475.053]: port 1: LISTENING to UNCALIBRATED on RS_SLAVE
ptp4l[3904477.029]: master offset        196 s0 freq  -18570 path delay      1109
ptp4l[3904478.029]: master offset        212 s2 freq  -18554 path delay      1109
ptp4l[3904478.029]: port 1: UNCALIBRATED to SLAVE on MASTER_CLOCK_SELECTED
ptp4l[3904479.029]: master offset         86 s2 freq  -18468 path delay      1109
ptp4l[3904480.029]: master offset         23 s2 freq  -18505 path delay      1124
ptp4l[3904481.029]: master offset          3 s2 freq  -18518 path delay      1132
ptp4l[3904482.029]: master offset       -169 s2 freq  -18689 path delay      1141
  4. Synchronize the local timer clock on the O-RU for the sample application:

    ./phc2sys -s enp25s0f0 -w -m -R 8
    

Example of output:

./phc2sys -s enp25s0f0 -w -m -R 8
 phc2sys[3904510.892]: CLOCK_REALTIME phc offset   343 s0 freq  -38967 delay   1530
 phc2sys[3904511.017]: CLOCK_REALTIME phc offset   368 s2 freq  -38767 delay   1537
 phc2sys[3904511.142]: CLOCK_REALTIME phc offset   339 s2 freq  -38428 delay   1534
 phc2sys[3904511.267]: CLOCK_REALTIME phc offset   298 s2 freq  -38368 delay   1532
 phc2sys[3904511.392]: CLOCK_REALTIME phc offset   239 s2 freq  -38337 delay   1534
 phc2sys[3904511.518]: CLOCK_REALTIME phc offset   145 s2 freq  -38360 delay   1530
 phc2sys[3904511.643]: CLOCK_REALTIME phc offset   106 s2 freq  -38355 delay   1527
 phc2sys[3904511.768]: CLOCK_REALTIME phc offset   -30 s2 freq  -38459 delay   1534
 phc2sys[3904511.893]: CLOCK_REALTIME phc offset   -92 s2 freq  -38530 delay   1530
 phc2sys[3904512.018]: CLOCK_REALTIME phc offset  -173 s2 freq  -38639 delay   1528
 phc2sys[3904512.143]: CLOCK_REALTIME phc offset  -246 s2 freq  -38764 delay   1530
 phc2sys[3904512.268]: CLOCK_REALTIME phc offset  -300 s2 freq  -38892 delay   1532
  5. Modify configs/default.cfg as shown below to run the PTP master on the Fronthaul port of the O-RU.

    diff --git a/configs/default.cfg b/configs/default.cfg
    old mode 100644
    new mode 100755
    index e23dfd7..c9e9d4c
    --- a/configs/default.cfg
    +++ b/configs/default.cfg
    @@ -15,14 +15,14 @@ free_running 0
    freq_est_interval 1
    dscp_event 0
    dscp_general 0
    -dataset_comparison ieee1588
    +dataset_comparison G.8275.x
    G.8275.defaultDS.localPriority 128
    maxStepsRemoved 255
    #
    # Port Data Set
    #
    logAnnounceInterval 1
    -logSyncInterval 0
    +logSyncInterval -4
    operLogSyncInterval 0
    logMinDelayReqInterval 0
    logMinPdelayReqInterval 0
    @@ -37,7 +37,7 @@ G.8275.portDS.localPriority 128
    asCapable auto
    BMCA ptp
    inhibit_announce 0
    -inhibit_pdelay_req 0
    +#inhibit_pdelay_req 0
    ignore_source_id 0
    #
    # Run time options
    
  6. Start the PTP master toward the O-DU:

    ./ptp4l -f ./configs/default.cfg -2 -i enp175s0f1 –m
    

Example of output:

./ptp4l -f ./configs/default.cfg -2 -i enp175s0f1 -m
 ptp4l[3903857.249]: selected /dev/ptp3 as PTP clock
 ptp4l[3903857.266]: port 1: INITIALIZING to LISTENING on INIT_COMPLETE
 ptp4l[3903857.267]: port 0: INITIALIZING to LISTENING on INIT_COMPLETE
 ptp4l[3903863.734]: port 1: LISTENING to MASTER on ANNOUNCE_RECEIPT_TIMEOUT_EXPIRES
 ptp4l[3903863.734]: selected local clock 3cfdfe.fffe.bd005d as best master
 ptp4l[3903863.734]: assuming the grand master role

7. Synchronize the local NIC PTP master clock to the local NIC PTP slave clock.

./phc2sys -c enp175s0f1 -s enp25s0f0 -w -m -R 8

Example of output:

./phc2sys -c enp175s0f1 -s enp25s0f0 -w -m -R 8
phc2sys[3904600.332]: enp175s0f1 phc offset      2042 s0 freq   -2445 delay   4525
phc2sys[3904600.458]: enp175s0f1 phc offset      2070 s2 freq   -2223 delay   4506
phc2sys[3904600.584]: enp175s0f1 phc offset      2125 s2 freq     -98 delay   4505
phc2sys[3904600.710]: enp175s0f1 phc offset      1847 s2 freq    +262 delay   4518
phc2sys[3904600.836]: enp175s0f1 phc offset      1500 s2 freq    +469 delay   4515
phc2sys[3904600.961]: enp175s0f1 phc offset      1146 s2 freq    +565 delay   4547
phc2sys[3904601.086]: enp175s0f1 phc offset       877 s2 freq    +640 delay   4542
phc2sys[3904601.212]: enp175s0f1 phc offset       517 s2 freq    +543 delay   4517
phc2sys[3904601.337]: enp175s0f1 phc offset       189 s2 freq    +370 delay   4510
phc2sys[3904601.462]: enp175s0f1 phc offset      -125 s2 freq    +113 delay   4554
phc2sys[3904601.587]: enp175s0f1 phc offset      -412 s2 freq    -212 delay   4513
phc2sys[3904601.712]: enp175s0f1 phc offset      -693 s2 freq    -617 delay   4519
phc2sys[3904601.837]: enp175s0f1 phc offset      -878 s2 freq   -1009 delay   4515
phc2sys[3904601.962]: enp175s0f1 phc offset      -965 s2 freq   -1360 delay   4518
phc2sys[3904602.088]: enp175s0f1 phc offset     -1048 s2 freq   -1732 delay   4510
phc2sys[3904602.213]: enp175s0f1 phc offset     -1087 s2 freq   -2086 delay   4531
phc2sys[3904602.338]: enp175s0f1 phc offset     -1014 s2 freq   -2339 delay   4528
phc2sys[3904602.463]: enp175s0f1 phc offset     -1009 s2 freq   -2638 delay   4531

8. On the O-DU, install the PTP for Linux tools from source code the same way as on the O-RU above; there is no need to apply the msg.c patch.

9. Start slave port toward PTP master from O-RU using the same default_slave.cfg as on O-RU (see above):

./ptp4l -f ./configs/default_slave.cfg -2 -i enp181s0f0 –m

Example of output:

./ptp4l -f ./configs/default_slave.cfg -2 -i enp181s0f0 -m
ptp4l[809092.918]: selected /dev/ptp6 as PTP clock
ptp4l[809092.934]: port 1: INITIALIZING to LISTENING on INIT_COMPLETE
ptp4l[809092.934]: port 0: INITIALIZING to LISTENING on INIT_COMPLETE
ptp4l[809092.949]: port 1: new foreign master 3cfdfe.fffe.bd005d-1
ptp4l[809096.949]: selected best master clock 3cfdfe.fffe.bd005d
ptp4l[809096.950]: port 1: LISTENING to UNCALIBRATED on RS_SLAVE
ptp4l[809098.363]: port 1: UNCALIBRATED to SLAVE on MASTER_CLOCK_SELECTED
ptp4l[809099.051]: rms 38643 max 77557 freq   +719 +/- 1326 delay  1905 +/-   0
ptp4l[809100.051]: rms  1134 max  1935 freq   -103 +/-  680 delay  1891 +/-   4
ptp4l[809101.051]: rms   453 max   855 freq   +341 +/-  642 delay  1888 +/-   0
ptp4l[809102.052]: rms   491 max   772 freq  +1120 +/-  752 delay  1702 +/-   0
ptp4l[809103.052]: rms   423 max   654 freq  +1352 +/-  653 delay  1888 +/-   0
ptp4l[809104.052]: rms   412 max   579 freq  +1001 +/-  672 delay  1702 +/-   0
ptp4l[809105.053]: rms   441 max   672 freq   +807 +/-  709 delay  1826 +/-  88
ptp4l[809106.053]: rms   422 max   607 freq  +1353 +/-  636 delay  1702 +/-   0
ptp4l[809107.054]: rms   401 max   466 freq   +946 +/-  646 delay  1702 +/-   0
ptp4l[809108.055]: rms   401 max   502 freq   +912 +/-  659

10. Synchronize the local clock on the O-DU for the sample application or l1 application.

./phc2sys -s enp181s0f0 -w -m -R 8

Example of output:

./phc2sys -s enp181s0f0 -w -m -R 8
 phc2sys[809127.123]: CLOCK_REALTIME phc offset    675 s0 freq  -37379 delay   1646
 phc2sys[809127.249]: CLOCK_REALTIME phc offset    696 s2 freq  -37212 delay   1654
 phc2sys[809127.374]: CLOCK_REALTIME phc offset    630 s2 freq  -36582 delay   1648
 phc2sys[809127.500]: CLOCK_REALTIME phc offset    461 s2 freq  -36562 delay   1642
 phc2sys[809127.625]: CLOCK_REALTIME phc offset    374 s2 freq  -36510 delay   1643
 phc2sys[809127.751]: CLOCK_REALTIME phc offset    122 s2 freq  -36650 delay   1649
 phc2sys[809127.876]: CLOCK_REALTIME phc offset     34 s2 freq  -36702 delay   1650
 phc2sys[809128.002]: CLOCK_REALTIME phc offset   -112 s2 freq  -36837 delay   1645
 phc2sys[809128.127]: CLOCK_REALTIME phc offset   -160 s2 freq  -36919 delay   1643
 phc2sys[809128.252]: CLOCK_REALTIME phc offset   -270 s2 freq  -37077 delay   1657
 phc2sys[809128.378]: CLOCK_REALTIME phc offset   -285 s2 freq  -37173 delay   1644
 phc2sys[809128.503]: CLOCK_REALTIME phc offset   -349 s2 freq  -37322 delay   1644
 phc2sys[809128.629]: CLOCK_REALTIME phc offset   -402 s2 freq  -37480 delay   1641
 phc2sys[809128.754]: CLOCK_REALTIME phc offset   -377 s2 freq  -37576 delay   1648
 phc2sys[809128.879]: CLOCK_REALTIME phc offset   -467 s2 freq  -37779 delay   1650
 phc2sys[809129.005]: CLOCK_REALTIME phc offset   -408 s2 freq  -37860 delay   1648
 phc2sys[809129.130]: CLOCK_REALTIME phc offset   -480 s2 freq  -38054 delay   1655
 phc2sys[809129.256]: CLOCK_REALTIME phc offset   -350 s2 freq  -38068 delay   1650

Support in xRAN Library

The xRAN library provides an API to check whether PTP for Linux is running correctly: the function xran_is_synchronized(). It checks that ptp4l and phc2sys are running on the system by making PMC tool requests for the current port state and comparing it with the expected value. This verification should be done before initialization.

  • “SLAVE” is the only expected value in this release; only a non-master scenario is supported currently.
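A shell-level sketch of the same check can be made with the linuxptp PMC tool. The captured reply line below is illustrative and stands in for a live `pmc -u -b 0 'GET PORT_DATA_SET'` query against a running ptp4l:

```shell
# Illustrative port-state check similar to what xran_is_synchronized() does.
# Live query (assumed environment with ptp4l running):
#   pmc -u -b 0 'GET PORT_DATA_SET'
reply='    portState               SLAVE'
state=$(echo "$reply" | awk '/portState/ { print $2 }')
if [ "$state" = "SLAVE" ]; then
    echo "PTP state OK (SLAVE)"
else
    echo "unexpected PTP state: $state"
fi
```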

eCPRI DDP Profile for Columbiaville (Experimental Feature)

Introduction

The Intel® Ethernet 800 Series is the next generation of Intel® Ethernet Controllers and Network Adapters. The Intel® Ethernet 800 Series is designed with an enhanced programmable pipeline, allowing deeper and more diverse protocol header processing. This on-chip capability is called Dynamic Device Personalization (DDP). In the Intel® Ethernet 800 Series, a DDP profile is loaded dynamically on driver load per device.

A general-purpose DDP package is automatically installed with all supported Intel® Ethernet 800 Series drivers on Windows*, ESX*, FreeBSD*, and Linux* operating systems, including those provided by the Data Plane Development Kit (DPDK). This general-purpose DDP package is known as the OS-default package.

For more information on DDP technology in the Intel® Ethernet 800 Series products and the OS-default package, refer to the Intel® Ethernet Controller E810 Dynamic Device Personalization (DDP) Technology Guide, published here: https://cdrdv2.intel.com/v1/dl/getContent/617015.

This document describes an optional DDP package targeted towards the needs of Wireless and Edge (Wireless Edge) customers. This Wireless Edge DDP package (v1.3.22.101) adds support for eCPRI protocols in addition to the protocols in the OS-default package. The Wireless Edge DDP package is supported by DPDK starting from DPDK 21.02, and in the future will also be supported by the Intel® Ethernet 800 Series ice driver on Linux operating systems. The Wireless Edge DDP package can be loaded on all Intel® Ethernet 800 Series devices, or different packages can be selected per device via serial number.

Software/Firmware Requirements

The specific DDP package requires certain firmware, DPDK, and Intel® Ethernet 800 Series firmware/NVM versions. Support for the eCPRI DDP profile is included starting from the Columbiaville (CVL) release 2.4 or later. This section is for general information purposes, as the binaries provided for this FlexRAN release on github.com are built with DPDK 20.11.3, and mixing and matching of binaries is not supported. The required DPDK version contains the support for loading the specific Wireless Edge DDP package.

  • Intel® Ethernet 800 Series Linux Driver (ice) — 1.4.0 (or later)

  • Wireless Edge DDP Package version (ice_wireless_edge) — 1.3.22.101

  • Intel® Ethernet 800 Series firmware version — 1.5.4.2 (or later)

  • Intel® Ethernet 800 Series NVM version — 2.4 (or later)

  • DPDK version— 21.02 (or later)

  • For FlexRAN release oran_f_release_v1.0, corresponding support of CVL 2.4 driver pack and DPDK 21.02 is “experimental” and subject to additional testing and potential changes.

DDP Package Setup

The Intel® Ethernet 800 Series Wireless Edge DDP package currently supports only Linux-based operating systems.

Currently, the eCPRI is fully supported only by DPDK 21.02. It can be loaded either by DPDK or the Intel® Ethernet 800 Series Linux base driver.

Wireless Edge DDP Package

For details on how to set up DPDK, refer to Intel® Ethernet Controller E810 Data Plane Development Kit (DPDK) Configuration Guide (Doc ID: 633514).

There are two methods by which the DDP package can be loaded and used under DPDK (see the two options below). For both methods, the user must obtain ice_wireless_edge-1.3.22.101.pkg or later from Intel (please contact your Intel representative for more information).

Option 1: ice Linux Base Driver

The first option is to have the ice Linux base driver load the package.

The ice Linux base driver looks for the symbolic link intel/ice/ddp/ice.pkg under the default firmware search path, checking the following folders in order:

  • /lib/firmware/updates/

  • /lib/firmware/

To install the Wireless Edge package, copy the extracted .pkg file and its symbolic link to /lib/firmware/updates/intel/ice/ddp as follows, and reload the ice driver:

# cp /usr/tmp/ice_wireless_edge-1.3.22.101.pkg /lib/firmware/updates/intel/ice/ddp/
# ln -sf /lib/firmware/updates/intel/ice/ddp/ice_wireless_edge-1.3.22.101.pkg /lib/firmware/updates/intel/ice/ddp/ice.pkg
# modprobe -r irdma
# modprobe -r ice
# modprobe ice

The kernel message log (dmesg) indicates the status of package loading in the system. If the driver successfully finds and loads the DDP package, dmesg indicates that the DDP package was successfully loaded. If not, the driver transitions to safe mode.
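A minimal scripted version of this check can be sketched as below; the captured dmesg line is illustrative and stands in for a live `dmesg` pipe:

```shell
# Illustrative: detect successful DDP load vs. safe mode fallback.
# A live check would use:  dmesg | grep -i ddp
msg='ice 0000:18:00.0: The DDP package was successfully loaded: ICE Wireless Edge Package version 1.3.22.101'
if echo "$msg" | grep -q 'successfully loaded'; then
    ddp_ok=yes
else
    ddp_ok=no
fi
echo "DDP package loaded: $ddp_ok"
```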

Once the driver loads the package, the user can unbind the ice driver from a desired port on the device so that DPDK can utilize the port.

The following example unbinds Port 0 and Port 1 of the device on Bus 6, Device 0. Then, the ports are bound to either igb_uio or vfio-pci.

# ifdown <interface>
# dpdk-devbind -u 06:00.0
# dpdk-devbind -u 06:00.1
# dpdk-devbind -b igb_uio 06:00.0 06:00.1

Option 2: DPDK Driver Only

The second method applies if the system does not have the ice driver installed. In this case, the user can download the DDP package from the Intel download center and extract the zip file to obtain the package (.pkg) file. Similar to the Linux base driver, the DPDK driver looks for the intel/ice/ddp/ice.pkg symbolic link in the kernel default firmware search paths /lib/firmware/updates/ and /lib/firmware/.

Copy the extracted DDP .pkg file and its symbolic link to /lib/firmware/intel/ice/ddp, as follows.

# cp /usr/tmp/ice_wireless_edge-1.3.22.101.pkg /lib/firmware/intel/ice/ddp/
# cp /usr/tmp/ice.pkg /lib/firmware/intel/ice/ddp/

When the DPDK driver loads, it looks for ice.pkg. If the file exists, the driver downloads it into the device. If not, the driver transitions into safe mode.

Loading DDP Package to a Specific Intel® Ethernet 800 Series Device

On a host system running with multiple Intel® Ethernet 800 Series devices, there is sometimes a need to load a specific DDP package on a selected device while loading a different package on the remaining devices.

The Intel® Ethernet 800 Series Linux base driver and DPDK driver can both load a specific DDP package to a selected adapter based on the device’s serial number. The driver does this by looking for a specific symbolic link package filename containing the selected device’s serial number.

The following example illustrates how a user can load a specific package (e.g., ice_wireless_edge-1.3.22.101) on the device of Bus 6.

  1. Find device serial number.

To view the bus, device, and function of all Intel® Ethernet 800 Series Network Adapters in the system:

# lspci | grep -i Ethernet | grep -i Intel
06:00.0 Ethernet controller: Intel Corporation Ethernet Controller E810-C for QSFP (rev 01)
06:00.1 Ethernet controller: Intel Corporation Ethernet Controller E810-C for QSFP (rev 01)
82:00.0 Ethernet controller: Intel Corporation Ethernet Controller E810-C for SFP (rev 01)
82:00.1 Ethernet controller: Intel Corporation Ethernet Controller E810-C for SFP (rev 01)
82:00.2 Ethernet controller: Intel Corporation Ethernet Controller E810-C for SFP (rev 01)
82:00.3 Ethernet controller: Intel Corporation Ethernet Controller E810-C for SFP (rev 01)

Use the lspci command to obtain the selected device serial number:

# lspci -vv -s 06:00.0 | grep -i Serial
Capabilities: [150 v1] Device Serial Number 35-11-a0-ff-ff-ca-05-68

Or, fully parsed without punctuation:

# lspci -vv -s 06:00.0 | grep Serial | awk '{print $7}' | sed s/-//g
3511a0ffffca0568
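The transformation from the dashed lspci form to the link-name form can be sketched in shell; the sample serial below is the one from the output above:

```shell
# Illustrative: dashed lspci serial -> lowercase, dash-free form used in
# the ice-<serial>.pkg symbolic-link name.
raw='35-11-a0-ff-ff-ca-05-68'
serial=$(echo "$raw" | tr -d '-' | tr 'A-F' 'a-f')
echo "ice-${serial}.pkg"
# prints: ice-3511a0ffffca0568.pkg
```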
  2. Rename the package file with the device serial number in the name.

Copy the specific package over to /lib/firmware/updates/intel/ice/ddp (or /lib/firmware/intel/ice/ddp) and create a symbolic link with the serial number linking to the package, as shown. The specific symbolic link filename starts with “ice-” followed by the device serial number in lower case without dashes (‘-’).

# ln -s \
    /lib/firmware/updates/intel/ice/ddp/ice_wireless_edge-1.3.22.101.pkg \
    /lib/firmware/updates/intel/ice/ddp/ice-3511a0ffffca0568.pkg

3. If using the Linux kernel driver (ice), reload the base driver (not required if using only the DPDK driver).

# rmmod ice
# modprobe ice

The driver loads the specific package to the selected device and the OS-default package to the remaining Intel® Ethernet 800 Series devices in the system.

  4. Verify.

For kernel driver:

Example of output of successful load of Wireless Edge Package to all devices:

# dmesg | grep -i "ddp \| safe"
[606960.921404] ice 0000:18:00.0: The DDP package was successfully loaded: ICE Wireless Edge Package version 1.3.22.101
[606961.672999] ice 0000:18:00.1: DDP package already present on device: ICE Wireless Edge Package version 1.3.22.101
[606962.439067] ice 0000:18:00.2: DDP package already present on device: ICE Wireless Edge Package version 1.3.22.101
[606963.198305] ice 0000:18:00.3: DDP package already present on device: ICE Wireless Edge Package version 1.3.22.101
[606964.252076] ice 0000:51:00.0: The DDP package was successfully loaded: ICE Wireless Edge Package version 1.3.22.101
[606965.017082] ice 0000:51:00.1: DDP package already present on device: ICE Wireless Edge Package version 1.3.22.101
[606965.802115] ice 0000:51:00.2: DDP package already present on device: ICE Wireless Edge Package version 1.3.22.101
[606966.576517] ice 0000:51:00.3: DDP package already present on device: ICE Wireless Edge Package version 1.3.22.101

If using only DPDK driver:

Use DPDK’s testpmd application to verify the status and version of the loaded DDP package.

Example of eCPRI config with dpdk-testpmd

16 O-RAN eCPRI IQ streams mapped to 16 independent HW queues:

#./dpdk-testpmd -l 22-25 -n 4 -a 0000:af:01.0 -- -i  --rxq=16 --txq=16 --cmdline-file=/home/flexran_xran/ddp.txt

cat /home/flexran_xran/ddp.txt
port stop 0
port config mtu 0 9600
port config 0 rx_offload vlan_strip on
port start 0
flow create 0 ingress pattern eth / ecpri common type iq_data pc_id is 0x0000 / end actions queue index 0 / mark / end
flow create 0 ingress pattern eth / ecpri common type iq_data pc_id is 0x0001 / end actions queue index 1 / mark / end
flow create 0 ingress pattern eth / ecpri common type iq_data pc_id is 0x0002 / end actions queue index 2 / mark / end
flow create 0 ingress pattern eth / ecpri common type iq_data pc_id is 0x0003 / end actions queue index 3 / mark / end
flow create 0 ingress pattern eth / ecpri common type iq_data pc_id is 0x0004 / end actions queue index 4 / mark / end
flow create 0 ingress pattern eth / ecpri common type iq_data pc_id is 0x0005 / end actions queue index 5 / mark / end
flow create 0 ingress pattern eth / ecpri common type iq_data pc_id is 0x0006 / end actions queue index 6 / mark / end
flow create 0 ingress pattern eth / ecpri common type iq_data pc_id is 0x0007 / end actions queue index 7 / mark / end
flow create 0 ingress pattern eth / ecpri common type iq_data pc_id is 0x0008 / end actions queue index 8 / mark / end
flow create 0 ingress pattern eth / ecpri common type iq_data pc_id is 0x0009 / end actions queue index 9 / mark / end
flow create 0 ingress pattern eth / ecpri common type iq_data pc_id is 0x000a / end actions queue index 10 / mark / end
flow create 0 ingress pattern eth / ecpri common type iq_data pc_id is 0x000b / end actions queue index 11 / mark / end
flow create 0 ingress pattern eth / ecpri common type iq_data pc_id is 0x000c / end actions queue index 12 / mark / end
flow create 0 ingress pattern eth / ecpri common type iq_data pc_id is 0x000d / end actions queue index 13 / mark / end
flow create 0 ingress pattern eth / ecpri common type iq_data pc_id is 0x000e / end actions queue index 14 / mark / end
flow create 0 ingress pattern eth / ecpri common type iq_data pc_id is 0x000f / end actions queue index 15 / mark / end
set fwd rxonly
start
show fwd stats all

O-RAN Front haul eCPRI

Intel® Ethernet 800 Series DDP capabilities support several functionalities important for the O-RAN FH.

  • RSS for packet steering based on ecpriMessage

  • RSS for packet steering based on ecpriRtcid/ecpriPcid

  • Queue mapping based on ecpriRtcid/ecpriPcid

  • Queue mapping based on ecpriMessage

Figure 30. O-RAN FH VNF

Table 13. Patterns & Input Sets for Flow Director and RSS (DPDK 21.02)

Pattern: ETH / VLAN / eCPRI
Input Set: ecpriMessage | ecpriRtcid/ecpriPcid

Pattern: ETH / VLAN / IPv4(6) / UDP / eCPRI
Input Set: ecpriMessage | ecpriRtcid/ecpriPcid (*)

Note: (*) IP/UDP is not used with FlexRAN

Limitations

DPDK 21.02 allows up to 1024 queues per VF and RSS across up to 64 receive queues.

RTE Flow API

The DPDK Generic flow API (rte_flow) is used to configure the Intel® Ethernet 800 Series to match specific ingress traffic and forward it to specified queues.

For further information, please refer to the Generic Flow API section of the DPDK Programmer's Guide <https://doc.dpdk.org/guides/prog_guide/rte_flow.html>.

The specific ingress traffic is identified by a matching pattern which is composed of one or more Pattern items (represented by struct rte_flow_item). Once a match has been determined one or more associated Actions (represented by struct rte_flow_action) will be performed.

A number of flow rules can be combined such that one rule directs traffic to a queue group based on ecpriMessage/ ecpriRtcid/ecpriPcid etc. and a second rule distributes matching packets within that queue group using RSS.

The following subset of the RTE Flow API functions can be used to validate, create and destroy RTE Flow rules.

RTE Flow Rule Validation

An RTE Flow rule can be checked via a call to the function rte_flow_validate. This verifies the rule for correctness and whether it would be accepted by the device given sufficient resources:

int rte_flow_validate(uint16_t port_id,
      const struct rte_flow_attr *attr,
      const struct rte_flow_item pattern[],
      const struct rte_flow_action actions[],
      struct rte_flow_error *error);

port_id : port identifier of Ethernet device

attr : flow rule attributes(ingress/egress)

pattern : pattern specification (list terminated by the END pattern item).

action : associated actions (list terminated by the END action).

error : perform verbose error reporting if not NULL.

0 is returned upon success, negative errno otherwise.

RTE Flow Rule Creation

An RTE Flow rule is created via a call to the function rte_flow_create:

struct rte_flow *rte_flow_create(uint16_t port_id,
        const struct rte_flow_attr *attr,
        const struct rte_flow_item pattern[],
        const struct rte_flow_action actions[],
        struct rte_flow_error *error);

port_id : port identifier of Ethernet device

attr : flow rule attributes(ingress/egress)

pattern : pattern specification (list terminated by the END pattern item).

action : associated actions (list terminated by the END action).

error : perform verbose error reporting if not NULL.

A valid handle is returned upon success, NULL otherwise.

RTE Flow Rule Destruction

An RTE Flow rule is destroyed via a call to the function rte_flow_destroy:

int rte_flow_destroy(uint16_t port_id,
  struct rte_flow *flow,
  struct rte_flow_error *error);

port_id : port identifier of Ethernet device

flow : flow rule handle to destroy.

error : perform verbose error reporting if not NULL.

0 is returned upon success, negative errno otherwise.

RTE Flow Flush

All flow rule handles associated with a port can be released using rte_flow_flush. They are released as with successive calls to rte_flow_destroy:

int rte_flow_flush(uint16_t port_id,
  struct rte_flow_error *error);

port_id : port identifier of Ethernet device

error : perform verbose error reporting if not NULL.

0 is returned upon success, negative errno otherwise.

RTE Flow Query

An RTE Flow rule is queried via a call to the function rte_flow_query:

int rte_flow_query(uint16_t port_id,
                struct rte_flow *flow,
                const struct rte_flow_action *action,
                void *data,
                struct rte_flow_error *error);

port_id : port identifier of Ethernet device

flow : flow rule handle to query

action : action to query, this must match prototype from flow rule.

data : pointer to storage for the associated query data type

error : perform verbose error reporting if not NULL.

0 is returned upon success, negative errno otherwise.

RTE Flow Rules

A flow rule is the combination of attributes with a matching pattern and a list of actions. Each flow rule consists of:

  • Attributes (represented by struct rte_flow_attr): properties of a flow rule such as its direction (ingress or egress) and priority.

  • Pattern Items (represented by struct rte_flow_item): is part of a matching pattern that either matches specific packet data or traffic properties.

  • Matching pattern: traffic properties to look for, a combination of any number of items.

  • Actions (represented by struct rte_flow_action): operations to perform whenever a packet is matched by a pattern.

Attributes

Flow rule patterns apply to inbound and/or outbound traffic. For the purposes described in later sections, the rules apply to ingress only. For further information, please refer to the Generic Flow API section of the DPDK Programmer's Guide <https://doc.dpdk.org/guides/prog_guide/rte_flow.html>:

struct rte_flow_attr {
        uint32_t group;
        uint32_t priority;
        uint32_t ingress:1;
        uint32_t egress:1;
        uint32_t transfer:1;
        uint32_t reserved:29;
};

Pattern items

For the purposes described in later sections, Pattern Items are primarily used for matching protocol headers and packet data, and are usually associated with a specification structure. These must be stacked in the same order as the protocol layers to match inside packets, starting from the lowest.

Item specification structures are used to match specific values among protocol fields (or item properties).

Up to three structures of the same type can be set for a given item:

  • spec: values to match (e.g. a given IPv4 address).

  • last: upper bound for an inclusive range with corresponding fields in spec.

  • mask: bit-mask applied to both spec and last whose purpose is to distinguish the values to take into account and/or partially mask them out (e.g. in order to match an IPv4 address prefix).

Table 14. Example RTE FLOW Item Types

Item Type   Description                             Specification Structure
END         End marker for item lists               None
VOID        Used as a placeholder for convenience   None
ETH         Matches an Ethernet header              rte_flow_item_eth
VLAN        Matches an 802.1Q/ad VLAN tag           rte_flow_item_vlan
IPV4        Matches an IPv4 header                  rte_flow_item_ipv4
IPV6        Matches an IPv6 header                  rte_flow_item_ipv6
ICMP        Matches an ICMP header                  rte_flow_item_icmp
UDP         Matches a UDP header                    rte_flow_item_udp
TCP         Matches a TCP header                    rte_flow_item_tcp
SCTP        Matches an SCTP header                  rte_flow_item_sctp
VXLAN       Matches a VXLAN header                  rte_flow_item_vxlan
NVGRE       Matches an NVGRE header                 rte_flow_item_nvgre
ECPRI       Matches an eCPRI header                 rte_flow_item_ecpri

RTE_FLOW_ITEM_TYPE_ETH

struct rte_flow_item_eth {
        struct rte_ether_addr dst; /**< Destination MAC. */
        struct rte_ether_addr src; /**< Source MAC. */
        rte_be16_t type; /**< EtherType or TPID. */
};

struct rte_ether_addr {
        uint8_t addr_bytes[RTE_ETHER_ADDR_LEN]; /**< Addr bytes in tx order */
};

RTE_FLOW_ITEM_TYPE_IPV4

struct rte_flow_item_ipv4 {
        struct rte_ipv4_hdr hdr; /**< IPv4 header definition. */
};

struct rte_ipv4_hdr {
        uint8_t  version_ihl;           /**< version and header length */
        uint8_t  type_of_service;       /**< type of service */
        rte_be16_t total_length;        /**< length of packet */
        rte_be16_t packet_id;           /**< packet ID */
        rte_be16_t fragment_offset;     /**< fragmentation offset */
        uint8_t  time_to_live;          /**< time to live */
        uint8_t  next_proto_id;         /**< protocol ID */
        rte_be16_t hdr_checksum;        /**< header checksum */
        rte_be32_t src_addr;            /**< source address */
        rte_be32_t dst_addr;            /**< destination address */
};

RTE_FLOW_ITEM_TYPE_UDP

struct rte_flow_item_udp {
        struct rte_udp_hdr hdr; /**< UDP header definition. */
};

struct rte_udp_hdr {
        rte_be16_t src_port;    /**< UDP source port. */
        rte_be16_t dst_port;    /**< UDP destination port. */
        rte_be16_t dgram_len;   /**< UDP datagram length */
        rte_be16_t dgram_cksum; /**< UDP datagram checksum */
};

RTE_FLOW_ITEM_TYPE_ECPRI

struct rte_flow_item_ecpri {
  struct rte_ecpri_combined_msg_hdr hdr;
};

struct rte_ecpri_combined_msg_hdr {
  struct rte_ecpri_common_hdr common;
  union {
    struct rte_ecpri_msg_iq_data type0;
    struct rte_ecpri_msg_bit_seq type1;
    struct rte_ecpri_msg_rtc_ctrl type2;
    struct rte_ecpri_msg_bit_seq type3;
    struct rte_ecpri_msg_rm_access type4;
    struct rte_ecpri_msg_delay_measure type5;
    struct rte_ecpri_msg_remote_reset type6;
    struct rte_ecpri_msg_event_ind type7;
    rte_be32_t dummy[3];
  };
};
struct rte_ecpri_common_hdr {
  union {
    rte_be32_t u32;   /**< 4B common header in BE */
    struct {
#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
      uint32_t size:16; /**< Payload Size */
      uint32_t type:8; /**< Message Type */
      uint32_t c:1; /**< Concatenation Indicator */
      uint32_t res:3; /**< Reserved */
      uint32_t revision:4; /**< Protocol Revision */
#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN
      uint32_t revision:4; /**< Protocol Revision */
      uint32_t res:3; /**< Reserved */
      uint32_t c:1;  /**< Concatenation Indicator */
      uint32_t type:8; /**< Message Type */
      uint32_t size:16; /**< Payload Size */
#endif
    };
  };
};
/**
* eCPRI Message Header of Type #0: IQ Data
*/
struct rte_ecpri_msg_iq_data {
  rte_be16_t pc_id;           /**< Physical channel ID */
  rte_be16_t seq_id;          /**< Sequence ID */
};

/**
* eCPRI Message Header of Type #1: Bit Sequence
*/
struct rte_ecpri_msg_bit_seq {
  rte_be16_t pc_id;           /**< Physical channel ID */
  rte_be16_t seq_id;          /**< Sequence ID */
};

/**
* eCPRI Message Header of Type #2: Real-Time Control Data
*/
struct rte_ecpri_msg_rtc_ctrl {
  rte_be16_t rtc_id;          /**< Real-Time Control Data ID */
  rte_be16_t seq_id;          /**< Sequence ID */
};

/**
* eCPRI Message Header of Type #3: Generic Data Transfer
*/
struct rte_ecpri_msg_gen_data {
  rte_be32_t pc_id;           /**< Physical channel ID */
  rte_be32_t seq_id;          /**< Sequence ID */
};

/**
* eCPRI Message Header of Type #4: Remote Memory Access
*/
RTE_STD_C11
struct rte_ecpri_msg_rm_access {
#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
  uint32_t ele_id:16;         /**< Element ID */
  uint32_t rr:4;                      /**< Req/Resp */
  uint32_t rw:4;                      /**< Read/Write */
  uint32_t rma_id:8;          /**< Remote Memory Access ID */
#elif RTE_BYTE_ORDER == RTE_BIG_ENDIAN
  uint32_t rma_id:8;          /**< Remote Memory Access ID */
  uint32_t rw:4;                      /**< Read/Write */
  uint32_t rr:4;                      /**< Req/Resp */
  uint32_t ele_id:16;         /**< Element ID */
#endif
  uint8_t addr[6];            /**< 48-bits address */
  rte_be16_t length;          /**< number of bytes */
};

/**
* eCPRI Message Header of Type #5: One-Way Delay Measurement
*/
struct rte_ecpri_msg_delay_measure {
  uint8_t msr_id;                     /**< Measurement ID */
  uint8_t act_type;           /**< Action Type */
};

/**
* eCPRI Message Header of Type #6: Remote Reset
*/
struct rte_ecpri_msg_remote_reset {
  rte_be16_t rst_id;          /**< Reset ID */
  uint8_t rst_op;                     /**< Reset Code Op */
};

/**
* eCPRI Message Header of Type #7: Event Indication
*/
struct rte_ecpri_msg_event_ind {
  uint8_t evt_id;                     /**< Event ID */
  uint8_t evt_type;           /**< Event Type */
  uint8_t seq;                        /**< Sequence Number */
  uint8_t number;                     /**< Number of Faults/Notif */
};

Matching Patterns

A matching pattern is formed by stacking items starting from the lowest protocol layer to match. Patterns are terminated by END pattern item.

Actions

Each possible action is represented by a type. An action can have an associated configuration object. Actions are terminated by the END action.

Table 15. RTE FLOW Actions

Action     Description                                                 Configuration Structure
END        End marker for action lists                                 none
VOID       Used as a placeholder for convenience                       none
PASSTHRU   Leaves traffic up for additional processing by subsequent   none
           flow rules; makes a flow rule non-terminating
MARK       Attaches an integer value to packets and sets the           rte_flow_action_mark
           PKT_RX_FDIR and PKT_RX_FDIR_ID mbuf flags
QUEUE      Assigns packets to a given queue index                      rte_flow_action_queue
DROP       Drops packets                                               none
COUNT      Enables counters for this flow rule                         rte_flow_action_count
RSS        Similar to QUEUE, except RSS is additionally performed on   rte_flow_action_rss
           packets to spread them among several queues according to
           the provided parameters
VF         Directs matching traffic to a given virtual function of     rte_flow_action_vf
           the current device

Route to specific Queue id based on ecpriRtcid/ecpriPcid

An RTE Flow rule will be created to match an eCPRI packet with a specific pc_id value and route it to a specified queue.

Pattern Items

Table 16. Pattern Items to Match an eCPRI Packet with a Specific Physical Channel ID (pc_id)

Index 0: Ethernet
  Spec: 0
  Mask: 0

Index 1: eCPRI
  Spec: hdr.common.type = RTE_ECPRI_MSG_TYPE_IQ_DATA; hdr.type0.pc_id = pc_id;
  Mask: hdr.common.type = 0xff; hdr.type0.pc_id = 0xffff;

Index 2: END
  Spec: 0
  Mask: 0

The following code sets up the RTE_FLOW_ITEM_TYPE_ETH and RTE_FLOW_ITEM_TYPE_ECPRI Pattern Items.

The RTE_FLOW_ITEM_TYPE_ECPRI Pattern is configured to match on the pc_id value (in this case 8, converted to big-endian byte order).

uint16_t pc_id_be = 0x0800;

#define MAX_PATTERN_NUM 3

struct rte_flow_item pattern[MAX_PATTERN_NUM];
struct rte_flow_item_ecpri ecpri_spec;
struct rte_flow_item_ecpri ecpri_mask;

/* Ethernet */
pattern[0].type = RTE_FLOW_ITEM_TYPE_ETH;
pattern[0].spec = 0;
pattern[0].mask = 0;

/* ECPRI */
ecpri_spec.hdr.common.type = RTE_ECPRI_MSG_TYPE_IQ_DATA;
ecpri_spec.hdr.type0.pc_id = pc_id_be;
ecpri_mask.hdr.common.type = 0xff;
ecpri_mask.hdr.type0.pc_id = 0xffff;
ecpri_spec.hdr.common.u32 = rte_cpu_to_be_32(ecpri_spec.hdr.common.u32);

pattern[1].type = RTE_FLOW_ITEM_TYPE_ECPRI;
pattern[1].spec = &ecpri_spec;
pattern[1].mask = &ecpri_mask;

/* END the pattern array */
pattern[2].type = RTE_FLOW_ITEM_TYPE_END;

Action

Table 17. QUEUE Action for a Given Queue ID

Index 0: QUEUE
  Field: index (queue index to use; must be 0, 1, 2, 3, etc.)

Index 1: END

The following code sets up the action RTE_FLOW_ACTION_TYPE_QUEUE and calls the rte_flow_create function to create the RTE Flow rule.

#define MAX_ACTION_NUM 2

uint16_t rx_q = 4;

struct rte_flow_action_queue queue = { .index = rx_q };
struct rte_flow *handle;
struct rte_flow_error err;
struct rte_flow_action action[MAX_ACTION_NUM];
struct rte_flow_attr attributes = { .ingress = 1 };

action[0].type = RTE_FLOW_ACTION_TYPE_QUEUE;
action[0].conf = &queue;
action[1].type = RTE_FLOW_ACTION_TYPE_END;

handle = rte_flow_create(port_id, &attributes, pattern, action, &err);

Front Haul Interface Library Release Notes

Version FH oran_f_release_v1.0, June 2022

  • Update to DPDK 20.11.3

  • oneAPI compiler support

  • Core optimizations for massive MIMO scenarios

Version FH oran_e_maintenance_release_v1.0, March 2022

  • Update to DPDK 20.11.1.

  • Static Compression support which reduces overhead in user plane packets.

  • QoS support per configuration table 3-7.

  • DDP Profile for O-RAN FH.

  • VPP like vectorization of packet handling.

  • C-plane update.

  • Support for Measurement of dummy payloads in the range of 40 to 1400 bytes per user.

  • CVL(Columbiaville) measured transport implementation including timing parameters adjustment prior to C/U plane traffic.

Version FH oran_release_bronze_v1.1, Aug 2020

  • Add profile and support for LTE.

  • Add support for the 32x32 massive MIMO scenario. A demo of up to 2 cells was shown with the testmac

  • mmWave RRH integration. Address regression from the previous release.

  • Integrate block floating-point compression/decompression.

  • Enhance C-plane for the Category B scenario.

Version FH oran_release_bronze_v1.0, May 2020

  • Integration and optimization of block floating point compression and decompression.

  • Category B support

  • Add support for alpha and beta value when calculating SFN based on GPS time.

  • Support End to End integration with commercial UE with xRAN/ORAN RRU for both mmWave and sub-6 scenarios

Version FH oran_release_amber_v1.0, 1 Nov 2019

  • Second version released to ORAN in support of Release A

  • Incorporates support for 5G NR sub 6 and mmWave

  • Support for Application fragmentation under Transport features was added

  • This release has been designed and implemented to support the following numerologies defined in the 3GPP
    specification:

    • Numerology 0 with bandwidth 5/10/20 MHz with up to 12 cells

    • Numerology 1 with bandwidth 100 MHz with up to 1 cell

    • Numerology 3 with bandwidth 100 MHz with up to 1 cell

  • The feature set of xRAN protocol should be aligned with Radio Unit (O-RU) implementation

  • Inter-operability testing (IOT) is required to confirm correctness of functionality on both sides

  • The following mandatory features of the ORAN FH interface are not yet supported in this release

  • RU Category Support of CAT-B RU (i.e. precoding in RU)

  • Beamforming Beam Index Based and Real Time BF weights

  • Transport Features QoS over FrontHaul

Version FH seedcode_v1.0, 25 Jul 2019

  • This first version supports only mmWave per 5G NR and it is not yet optimized

  • It is a first draft prior to the November 2019 Release A

  • The following mandatory features of the ORAN FH interface are not yet supported in this initial release

  • RU Category Support of CAT-B RU (i.e. precoding in RU)

  • Beamforming Beam Index Based and Real Time BF weights

  • Transport Features QoS over FrontHaul and Application Fragmentation

WLS Library

Wls Lib Overview

The Wls_lib is a Wireless Service library that supports the shared memory and buffer management used by applications implementing a gNB or eNB. This library uses DPDK, libhugetlbfs and pthreads to provide memcpy-less data exchange between an L2 application, an API Translator Module and an L1 application by sharing the same memory zone from the DPDK perspective.

Project Resources

The source code is available from the Linux Foundation Gerrit server:

https://gerrit.o-ran-sc.org/r/gitweb?p=o-du%2Fphy.git;a=summary

The build (CI) jobs will be in the Linux Foundation Jenkins server:

https://jenkins.o-ran-sc.org

Issues are tracked in the Linux Foundation Jira server:

https://jira.o-ran-sc.org/secure/Dashboard.jspa

Project information is available in the Linux Foundation Wiki:

https://wiki.o-ran-sc.org

Library Functions

  • WLS_Open() and WLS_Open_Dual() that open a single or dual wls instance interface and registers the instance with the kernel space driver.

  • WLS_Close(), WLS_Close1() closes the wls instance and deregisters it from the kernel space driver.

  • WLS_Ready(), WLS_Ready1() checks state of remote peer of WLS interface and returns 1 if remote peer is available.

  • WLS_Alloc() allocates a memory block for data exchange shared memory. This block uses hugepages.

  • WLS_Free() frees memory block for data exchange shared memory.

  • WLS_Put(), WLS_Put1() puts memory block (or group of blocks) allocated from WLS memory into the interface for transfer to remote peer.

  • WLS_Check(), WLS_Check1() checks if there are memory blocks with data from remote peer and returns number of blocks available for “get” operation.

  • WLS_Get(), WLS_Get1() gets memory block from interface received from remote peer. Function is a non-blocking operation and returns NULL if no blocks available.

  • WLS_Wait(), WLS_Wait1() waits for new memory block from remote peer. This Function is a blocking call and returns number of blocks received.

  • WLS_WakeUp(), WLS_WakeUp1() performs “wakeup” notification to remote peer to unblock “wait” operations pending.

  • WLS_WGet(), WLS_WGet1() gets a memory block from the interface received from the remote peer. This function is a blocking operation and waits until the next memory block arrives from the remote peer.

  • WLS_VA2PA() converts virtual address (VA) to physical address (PA).

  • WLS_PA2VA() converts physical address (PA) to virtual address (VA).

  • WLS_EnqueueBlock(), WLS_EnqueueBlock1() This function is used by a master or secondary master to provide memory blocks to a slave for next slave to master (sec master) transfer of data.

  • WLS_NumBlocks() returns number of current available blocks provided by master for a new transfer of data from the slave.

The _1() functions are only needed when using WLS_Open_Dual().
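As an illustration only, a typical exchange using the functions above might flow as follows. The argument lists in this sketch are deliberately simplified and do not reflect the exact wls_lib prototypes; consult wls_lib.h for the real signatures.

```
h   = WLS_Open("wls0", mode)                 // register the instance
wait until WLS_Ready(h) == 1                 // remote peer is available
buf = WLS_Alloc(h, size)                     // hugepage-backed shared block
... fill buf with an API message ...
WLS_Put(h, WLS_VA2PA(h, buf), size, flags)   // hand the block to the peer
n = WLS_Wait(h)                              // block until the peer responds
while WLS_Check(h) > 0:
    pa  = WLS_Get(h, &size, &flags)          // non-blocking get
    msg = WLS_PA2VA(h, pa)                   // convert back to a virtual address
WLS_Free(h, buf)
WLS_Close(h)
```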

The source code and documentation will be updated in the next release to use inclusive engineering terms.

Wls Lib Installation Guide

The wls library uses DPDK as the basis for its shared memory operations and requires that DPDK be installed on the system, since the makefile uses the RTE_SDK environment variable when building the library.
The current release was tested using DPDK version 20.11.3, but that doesn't preclude the use of newer releases. However, since the L1 binaries are built with DPDK 20.11.3, the ODULOW as a whole is limited to this version of DPDK. The library also uses the Intel compiler that is defined as part of the ODULOW documentation.

Contents

  • Overview

  • Building and Installation

  • Command Line Parameters

  • Known Issues/Troubleshooting

  • License


Overview

This document describes the wls DPDK-based library for ODULOW to ODUHIGH communication as part of the
ORAN Reference Architecture, where an intermediate shim layer can be present between these components.


Building and Installation

Retrieve the source files from the Linux Foundation Gerrit server:

https://gerrit.o-ran-sc.org/r/gitweb?p=o-du%2Fphy.git;a=summary

  1. cd wls_lib

  2. wls_lib$ ./build.sh xclean

  3. wls_lib$ ./build.sh

The shared library is available at wls_lib/lib

This library is used by the ODUHIGH, by the shim layer implementing a 5G FAPI to IAPI translator, and by the ODULOW components.

Please define an environment variable DIR_WIRELESS_WLS with the path to the root folder of the wls_lib as it is needed for the fapi_5g build process.

Unit Test building and validation

In order to build the unit test do the following steps:

  1. cd test

  2. ./build.sh xclean

  3. ./build.sh

  4. Create an SSH session into the target and change directory to wls_lib/bin/phy

  5. issue ./phy.sh

  6. Create a second SSH session into the target and change directory to wls_lib/bin/fapi

  7. issue ./fapi.sh

  8. Create a third SSH session into the target and change directory to wls_lib/bin/mac

  9. issue ./mac.sh

After the test run you should see from the displayed status messages that each module sent and received 16 messages.


Known Issues/Troubleshooting

No known issues. For troubleshooting, use the unit test application.


License

Please see License.txt at the root of the phy repository for license information details

WLS Library Release Notes

Version WLS oran_f_release_v1.0, June 2022

  • Support for oneAPI compiler

  • Added flag in WLS_Put for LTE support

Version WLS oran_e_maintenance_release_v1.0, Mar 2022

  • Minor changes to deal with initial shared memory management

Version WLS oran_release_bronze_v1.1, Aug 2020

  • Second release of this library aligned with FlexRAN 20.04

  • No changes to external interfaces,

Version WLS oran_release_bronze_v1.0, May 2020

  • First release of the wls library to ORAN in support of the Bronze Release

  • This version supports both single and dual instances using a single Open function

FAPI 5G TM

O-RAN FAPI 5G TM Introduction

The ORAN FAPI 5G Translator Module (TM) is a standalone application that communicates with the ODU-High using the 5G FAPI protocol defined by the Small Cell Forum, and with the ODU-Low using the Intel L2-L1 API, relying on the Wireless Subsystem Interface Library (WLS) to handle the shared memory and buffer management required by these interfaces. In addition, the ORAN 5G FAPI TM requires the Data Plane Development Kit (DPDK), which is integrated with the WLS Library.

Table 1. Terminology

Term

Description

API

Application Platform Interface

BBU

Baseband Unit

CORESET

Control Resource Set

DOS

Denial of Service Attack

DPDK

Data Plane Development Kit

eNb

Evolved Node B (eNodeB)

EPC

Evolved Packet Core

EVM

Error Vector Magnitude

FAPI

Functional Application Platform Interface

gNB

Next generation eNB or g Node B

IQ

In-phase and in-quadrature

ISA

Instruction Set Architecture, i.e. AVX2, AVX512

MAC

Medium Access Control

MIB

Master Information Block

nFAPI

Network FAPI (Between VNF(L2/L3) and PNF(L1))

PDU

Protocol Data Unit

PHY

Physical Layer Processing

PMD

Poll Mode Driver

PNF

Physical Network Function

PSS

Primary Synchronization Signal

QPSK

Quadrature Phase Shift Keying

RAN

Radio Access Network

RE

Radio Equipment

REC

Radio Equipment Control

ROE

Radio Over Ethernet

RX or Rx

Receive

SCF

Small Cell Forum

SFN

System Frame Number ∈ {0,…,1023}

SIB

System Information Block

SSS

Secondary Synchronization Signal

TLV

Type Length Value

TX or Tx

Transmit

U-Plane

User Plane

URLLC

Ultra Reliable Low Latency Coding

VNF

Virtual Network Function

WLS

Wireless Subsystem Interface

WLS_DPDK

WLS that uses DPDK functions instead of accessing kernel functions

Reference Documents

Table 2. Reference Documents

Document

Document No./Location

1) FlexRAN 5G New Radio Reference
Solution L1-L2 API Specification
Rev 10.00 March 2021

CDI 603575 Intel Corp.

2) 5G FAPI:PHY API Specification,
Version 2.0.0, March 2020

222.10.02/ smallcellforum.org

Translator Module Top Level Design

The following diagram illustrates the different functions and components used by this module and how it interconnects to the L2 and L1 layers.

Figure 1. ORAN 5G FAPI Translator Module Top Level Architecture


The Translator Module consists of the following functions:

  • A 5G FAPI Parser facing the L2 interface.

  • An Inter API Mapper and Logic.

  • An Intel API Parser facing the L1 interface.

  • A WLS DPDK-based library supporting 2 instances.

ORAN 5G FAPI TM Installation Guide

The 5G FAPI TM uses the wls library, which uses DPDK as the basis for its shared memory operations and requires that DPDK be installed on the system, since the makefile uses the RTE_SDK environment variable when building the library.
The current release was tested using DPDK version 20.11.3, but that doesn't preclude the use of newer releases.
The 5G FAPI TM also currently uses the Intel compiler that is defined as part of the ODULOW documentation.

Contents

  • Overview

  • Building and Installation

  • Command Line Parameters

  • Known Issues/Troubleshooting

  • License


Overview

This document describes how to install and build the 5G FAPI TM for ODULOW to ODUHIGH communication as part of the
ORAN Reference Architecture.


Building and Installation

Retrieve the source files from the Linux Foundation Gerrit server:

https://gerrit.o-ran-sc.org/r/gitweb?p=o-du%2Fphy.git;a=summary

  1. Make sure that the following environment variables are defined: DIR_WIRELESS_WLS for the wls_lib and RTE_SDK for the DPDK

  2. cd fapi_5g/build

  3. $ ./build.sh xclean // Force full rebuild

  4. $ ./build.sh // Build the 5G FAPI TM

The executable is available at fapi_5g/bin and is called oran_5g_fapi.

Unit Test and validation

The unit test for the ORAN 5G FAPI TM requires the testmac and L1 binaries that are described in a later section. For the current O-RAN release it consists of a suite of tests in timer mode where the DL, UL and FD paths are exercised for different channel types and numerologies 0, 1 and 2.

  1. Open an SSH session and cd l1/bin/nr5g/gnb/l1
  2. Issue ./l1.sh
  3. Open a second SSH session and cd fapi_5g/bin
  4. Issue ./oran_5g_fapi.sh --cfg oran_5g_fapi.cfg
  5. Open a third SSH session and cd l1/bin/nr5g/gnb/testmac
  6. Issue ./l2.sh
  7. From the testmac command prompt (i.e. the l2 executable) issue: run Direction Numerology Bandwidth TestCase, where Direction is 0 (DL), 1 (UL) or 2 (FD); Numerology is 0 (15 kHz), 1 (30 kHz), 2 (60 kHz), etc.; Bandwidth is 5, 10, 20 or 100; and TestCase is defined from the set supported in this release. In general, issue only the cases provided with this release that have the full set of supporting files required.
  8. Observe the PASS/FAIL status in the SSH session associated with the testmac. All of the reference cases pass.

Testmac cases used for 5g FAPI TM

The following DL, UL and PRACH test cases are used for validation.

Full Duplex Sub6 Test Case [mu=0 (15 kHz) and 20 MHz]
  1. Test case 1018 4 Antennas, 4 PDSCH and 8 PDCCH in D Slots and 1 SSB, 4 PUSCH and 58 PUCCH in U Slots Spatial Multiplexing, 40 D slots, 40 U Slots QAM16,16 RBs

Full Duplex Sub6 Test Cases [mu = 1 (30 kHz) and 100 MHz]
  1. Test Case 1300 4 Antennas, 20 Slots, 16 PDSCH {QAM256, mcs28, 272rbs, 12symbols, 4Layers, 16UE/TTI}, 16 PUSCH {QAM64, mcs28, 248rbs, 14symbols, 2Layers, 16UE/TTI}, 16 PDCCH, 189 PUCCH and PRACH

  2. Test Case 1301 4 Antennas, 20 Slots, 16 PDSCH {QAM64, mcs16, 272rbs, 12symbols, 4Layers, 16UE/TTI}, 16 PUSCH {QAM16, mcs16, 248rbs, 14symbols, 2Layers, 16UE/TTI}, 16 PDCCH, 189 PUCCH.

  3. Test Case 1302 4 Antennas, 20 Slots, 16 PDSCH {QAM16, mcs9, 272rbs, 12symbols, 4Layers, 16UE/TTI}, 16 PUSCH {QPSK, mcs9, 248rbs, 14symbols, 2Layers, 16UE/TTI}, 16 PDCCH, 189 PUCCH.

  4. Test Case 1303 4 Antennas, 20 Slots, 16 PDSCH {QAM256, mcs28, 190rbs, 12symbols, 4Layers, 16UE/TTI}, 16 PUSCH {QAM64, mcs28, 190rbs, 14symbols, 2Layers, 16UE/TTI}, 16 PDCCH, 189 PUCCH.

  5. Test Case 1304 4 Antennas, 20 Slots, 16 PDSCH {QAM64, mcs16, 190rbs, 12symbols, 4Layers, 16UE/TTI}, 16 PUSCH {QAM16, mcs16, 190rbs, 14symbols, 2Layers, 16UE/TTI}, 16 PDCCH, 189 PUCCH.

  6. Test Case 1305 4 Antennas, 20 Slots, 16 PDSCH {QAM16, mcs9, 190rbs, 12symbols, 4Layers, 16UE/TTI}, 16 PUSCH {QPSK, mcs9, 190rbs, 14symbols, 2Layers, 16UE/TTI},16 PDCCH, 189 PUCCH.

  7. Test Case 1306 4 Antennas, 20 Slots, 16 PDSCH {QAM256, mcs28, 96rbs, 12symbols, 4Layers, 16UE/TTI}, 16 PUSCH {QAM64, mcs28, 96rbs, 14symbols, 2Layers, 16UE/TTI}, 16 PDCCH, 189 PUCCH.

  8. Test Case 1307 4 Antennas, 20 Slots, 16 PDSCH {QAM64, mcs16, 96rbs, 12symbols, 4Layers, 16UE/TTI}, 16 PUSCH {QAM16, mcs16, 96rbs, 14symbols, 2Layers, 16UE/TTI}, 16 PDCCH, 189 PUCCH.

  9. Test Case 1308 4 Antennas, 20 Slots, 16 PDSCH {QAM16, mcs9, 96rbs, 12symbols, 4Layers, 16UE/TTI}, 16 PUSCH {QPSK, mcs9, 96rbs, 14symbols, 2Layers, 16UE/TTI}, 16 PDCCH, 189 PUCCH.

  10. Test Case 1004 2 antennas, 1 Slot, URLLC test case with URLLC in D slot starting at Sym0,3 and in U Slot at sym8,11

  11. Test Case 1350 32 Antennas, 20 Slots, 16 PDSCH {QAM256, mcs27, 32rbs,12/10symbols, 4Layers}, 16 PUSCH {QAM64, mcs28, 32rbs, 13 symbols, 2Layers}, 16 PDCCH, 189 PUCCH, PRACH, SRS.

Full Duplex mmWave Test Case [mu = 3 (120 kHz) and 100 MHz]
  1. Test Case 1001 2 Antennas, 80 Slots, 1 PDSCH {QAM64, mcs19, 66rbs, 2Layers}, 1 PUSCH {QAM64, mcs19, 2Layers}.

ORAN 5G FAPI TM Release Notes

Version FAPI TM oran_f_release_v1.0, June 2022

  • Support of oneAPI compiler

  • Support for additional features not properly defined in the SCF 5G FAPI 2.0 specs has been added by means of vendor specific fields. (FlexRAN 22.11 compatible).

Version FAPI TM oran_e_maintenance_release_v1.0, Mar 2022

  • Increased test coverage. Now DL, UL, FD, URLLC and Massive MIMO use cases are supported.

  • Support for features not properly defined in the SCF 5G FAPI 2.0 specs has been added by means of vendor specific fields.

Version FAPI TM oran_release_bronze_v1.1, Aug 2020

  • Increased test coverage. All supported DL, UL and FD standard MIMO cases are validated

  • Support for carrier aggregation

  • Support for API ordering

  • Support for handling Intel proprietary PHY shutdown message in radio mode

  • FAPI TM latency measurement

  • Bug fixes

  • Feedback provided to SCF on parameter gaps identified in SCF 5G FAPI specification dated March 2020

  • This version of the 5G FAPI TM incorporates the changes that were provided to the SCF.

Version FAPI TM oran_release_bronze_v1.0, May 2020

  • First release of the 5G FAPI TM to ORAN in support of the Bronze Release

  • This version supports 5G only

  • PARAM.config and PARAM.resp are not supported

  • ERROR.ind relies on the L1 support for error detection, as the 5G FAPI TM only enforces security and
    integrity checks to avoid DoS attacks; it does not perform full validation of the input parameters for compliance with the standard

  • Deviations from the March version of the SCF 5G FAPI document have been implemented in order to deal with
    limitations and omissions found in the current SCF document; these differences are being provided to the SCF for the next document update. The 5G FAPI implementation is defined in the file fapi_interface.h

  • Multi-user MIMO, Beamforming, Precoding and URLLC are not supported in the current implementation as they
    require additional alignment between SCF P19 and O-RAN

  • The option for the MAC layer doing the full generation of the PBCH payload is not supported in this release; it will be added in the maintenance release cycle.

Running L1 and TESTMAC

Run L1 and Testmac

Before you run the L1, make sure that the FH, WLS, and FAPI TM libraries have been built according to the corresponding chapters above, or use the quick build commands below to create these libraries.

Build FH

under folder phy/fhi_lib:

#./build.sh

Build WLS

under folder phy/wls_lib:

#./build.sh

Build FAPI TM

under folder phy/fapi_5g/build:

#./build.sh

For the current O-RAN release, the L1 is available only as a binary image, as is the testmac, which is an L2 test application. Details of the L1 and testmac applications are available at https://github.com/intel/FlexRAN

Download L1 and testmac

Download L1 and testmac through https://github.com/intel/FlexRAN

CheckList Before Running the code

Before running the L1 and Testmac code make sure that you have built the wls_lib, FHI_lib and 5G_FAPI_TM using the instructions provided earlier in this document and in the order specified in this documentation.

Run L1 with testmac

Three console windows are needed: one for the L1 app, one for the FAPI TM, and one for the testmac. They need to be run in the following order: L1 -> FAPI TM -> testmac. In each console window, the environment needs to be set using a shell script under folder phy/, for example:

#source ./setupenv.sh

  • Run L1 under folder FlexRAN/l1/bin/nr5g/gnb/l1 in timer mode using:

    #l1.sh -e
    
Note that the markups dpdkBasebandFecMode and dpdkBasebandDevice need to be adjusted in the relevant phycfg.xml under folder
FlexRAN/l1/bin/nr5g/gnb/l1 before starting the L1:
dpdkBasebandFecMode = 0 for LDPC Encoder/Decoder in software.
dpdkBasebandFecMode = 1 for LDPC Encoder/Decoder in FPGA.
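As an illustration, the markups might appear in phycfg.xml as element values like the following. This fragment is hypothetical: only the markup names come from this guide, and the surrounding structure and the device address are placeholders, so check the actual file shipped with the release for the exact layout.

```xml
<!-- Hypothetical phycfg.xml fragment: element names taken from the
     markups mentioned above; values and layout are illustrative. -->
<dpdkBasebandFecMode>1</dpdkBasebandFecMode>   <!-- 0 = software LDPC, 1 = FPGA LDPC -->
<dpdkBasebandDevice>0000:1f:00.0</dpdkBasebandDevice>   <!-- placeholder accelerator PCI address -->
```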

  • Run FAPI TM under folder phy/fapi_5g/bin:

    #./oran_5g_fapi.sh --cfg=oran_5g_fapi.cfg
    
  • Run testmac under folder FlexRAN/l1/bin/nr5g/gnb/testmac:

    #./l2.sh
    

Once the application comes up, you will see a <TESTMAC> prompt. The same unit tests can be run using the command:

  • run testtype numerology bandwidth testnum, where

  • testtype is 0 (DL), 1 (UL) or 2 (FD)

  • numerology is [0 -> 4]: 0=15 kHz, 1=30 kHz, 2=60 kHz, 3=120 kHz, 4=240 kHz

  • bandwidth is 5, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100, 200, 400 (in MHz)

  • testnum is the Bit Exact TestNum [1001 and above]. If this is left blank, then all tests under type testtype are run

testnum is always a 4 digit number; the first digit represents the number of carriers to run. For example, to run Test Case 5 for Uplink Rx mu=3, 100 MHz for 1 carrier, the command would be: run 1 3 100 1005

All the pre-defined test cases for the current O-RAN Release are defined in the Test Cases section in https://github.com/intel/FlexRAN and also in the Test Cases section of this document. If the user wants to run more slots (than specified in the test config file), change the mode, or change the TTI interval of the test, then the command phystart can be used as follows:
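The testnum encoding described above (first digit = carrier count, remaining digits = base test case) can be sketched as a small helper; the function name is made up for illustration.

```shell
# Build a 4-digit testnum: first digit is the carrier count (1-9),
# the last three digits are the base test case (illustrative helper).
make_testnum() {
  carriers="$1"; case_id="$2"
  [ "$carriers" -ge 1 ] && [ "$carriers" -le 9 ] || return 1
  printf '%d%03d\n' "$carriers" "$case_id"
}
```

So make_testnum 1 5 prints 1005, matching the run 1 3 100 1005 example above.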

  • phystart mode interval num_tti

  • mode is 4 (ORAN compatible Radio) or 1 (Timer)

  • interval is the TTI duration scaled as per Numerology (only used in timer mode).

    • So if Numerology is 3 and this parameter is set to 1, then the interval will be programmed to 0.125 ms.

    • If this is set to 10, then the interval is programmed to 1.25 ms

  • num_tti is the total number of TTIs to run the test.

    • If 0, then the test config file defines how many slots to run.

    • If a non zero number, then the test is run for that many slots.

    • If the num_tti is more than the number of slots in the config file, then the configuration is repeated until the end of the test.

    • So if num_tti=200 and num_slot from config file is 10, then the 10 slot configs are repeated 20 times in a cyclic fashion.

  • The default mode set at the start of the testmac is (phystart 1 10 0), i.e. timer mode at 10 ms TTI intervals running for the duration specified in each test config file

  • Once the user manually types the phystart command on the l2 console, all subsequent tests will use this phystart config until the user changes it or the testmac is restarted.

  • If the user wants to run a set of tests which are programmed in a cfg file (for example tests_customer.cfg):

    ./l2.sh --testfile=tests_customer.cfg

    example:

    #./l2.sh --testfile=oran_bronze_rel_fec_sw.cfg
    
  • This will run all the tests that are listed in the config file. Please see the tests_customer.cfg file present in the release for an example of how to program the tests
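The phystart interval scaling described above (slot duration of 1 ms divided by 2^numerology, multiplied by the interval parameter) can be checked with a one-line helper; the function name is illustrative.

```shell
# TTI interval in ms = interval parameter * (1 ms / 2^numerology),
# matching the phystart examples above (mu=3, interval=1 -> 0.125 ms).
tti_interval_ms() {
  awk -v mu="$1" -v iv="$2" 'BEGIN { printf "%g\n", iv / (2 ^ mu) }'
}
```

For numerology 3, tti_interval_ms 3 1 prints 0.125 and tti_interval_ms 3 10 prints 1.25, as in the examples above.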

Test Cases for the F Release

Test Cases

This section describes the downlink, uplink and full duplex bit exact test cases that are present as part of the F
Release. All the test config files, IQ samples and reference inputs are placed under the FlexRAN/testcase folder. These test config files are used for the testmac.

There are 3 kinds of tests: dl, ul, and fd. The following test cases are part of the F Release and reside in the github repo mentioned earlier in this document.

The following DL, UL and PRACH test cases are used for validation.

Full Duplex Sub6 Test Case [mu=0 (15 kHz) and 20 MHz]

  1. Test case 1018 4 Antennas, 4 PDSCH and 8 PDCCH in D Slots and 1 SSB, 4 PUSCH and 58 PUCCH in U Slots Spatial Multiplexing, 40 D slots, 40 U Slots, QAM16, 16 RBs

Full Duplex Sub6 Test Cases [mu = 1 (30 kHz) and 100 MHz]

  1. Test Case 1300 4 Antennas, 20 Slots, 16 PDSCH {QAM256, mcs28, 272rbs, 12symbols, 4Layers, 16UE/TTI}, 16 PUSCH {QAM64, mcs28, 248rbs, 14symbols, 2Layers, 16UE/TTI}, 16 PDCCH, 189 PUCCH and PRACH

  2. Test Case 1301 4 Antennas, 20 Slots, 16 PDSCH {QAM64, mcs16, 272rbs, 12symbols, 4Layers, 16UE/TTI}, 16 PUSCH {QAM16, mcs16, 248rbs, 14symbols, 2Layers, 16UE/TTI}, 16 PDCCH, 189 PUCCH.

  3. Test Case 1302 4 Antennas, 20 Slots, 16 PDSCH {QAM16, mcs9, 272rbs, 12symbols, 4Layers, 16UE/TTI}, 16 PUSCH {QPSK, mcs9, 248rbs, 14symbols, 2Layers, 16UE/TTI}, 16 PDCCH, 189 PUCCH.

  4. Test Case 1303 4 Antennas, 20 Slots, 16 PDSCH {QAM256, mcs28, 190rbs, 12symbols, 4Layers, 16UE/TTI}, 16 PUSCH {QAM64, mcs28, 190rbs, 14symbols, 2Layers, 16UE/TTI}, 16 PDCCH, 189 PUCCH.

  5. Test Case 1304 4 Antennas, 20 Slots, 16 PDSCH {QAM64, mcs16, 190rbs, 12symbols, 4Layers, 16UE/TTI}, 16 PUSCH {QAM16, mcs16, 190rbs, 14symbols, 2Layers, 16UE/TTI}, 16 PDCCH, 189 PUCCH.

  6. Test Case 1305 4 Antennas, 20 Slots, 16 PDSCH {QAM16, mcs9, 190rbs, 12symbols, 4Layers, 16UE/TTI}, 16 PUSCH {QPSK, mcs9, 190rbs, 14symbols, 2Layers, 16UE/TTI},16 PDCCH, 189 PUCCH.

  7. Test Case 1306 4 Antennas, 20 Slots, 16 PDSCH {QAM256, mcs28, 96rbs, 12symbols, 4Layers, 16UE/TTI}, 16 PUSCH {QAM64, mcs28, 96rbs, 14symbols, 2Layers, 16UE/TTI}, 16 PDCCH, 189 PUCCH.

  8. Test Case 1307 4 Antennas, 20 Slots, 16 PDSCH {QAM64, mcs16, 96rbs, 12symbols, 4Layers, 16UE/TTI}, 16 PUSCH {QAM16, mcs16, 96rbs, 14symbols, 2Layers, 16UE/TTI}, 16 PDCCH, 189 PUCCH.

  9. Test Case 1308 4 Antennas, 20 Slots, 16 PDSCH {QAM16, mcs9, 96rbs, 12symbols, 4Layers, 16UE/TTI}, 16 PUSCH {QPSK, mcs9, 96rbs, 14symbols, 2Layers, 16UE/TTI}, 16 PDCCH, 189 PUCCH.

  10. Test Case 1004 2 antennas, 1 Slot, URLLC test case with URLLC in D slot starting at Sym0,3 and in U Slot at sym8,11

  11. Test Case 1350 32 Antennas, 20 Slots, 16 PDSCH {QAM256, mcs27, 32rbs,12/10symbols, 4Layers}, 16 PUSCH {QAM64, mcs28, 32rbs, 13 symbols, 2Layers}, 16 PDCCH, 189 PUCCH, PRACH, SRS.

Full Duplex mmWave Test Case [mu = 3 (120 kHz) and 100 MHz]

  1. Test Case 1001 2 Antennas, 80 Slots, 1 PDSCH {QAM64, mcs19, 66rbs, 2Layers}, 1 PUSCH {QAM64, mcs19, 2Layers}.