Welcome to the Infrastructure (INF) documentation

Infrastructure Overview (INF)

This project is a reference implementation of the O-Cloud infrastructure: it implements a real-time platform (RTP) to deploy the O-CU and O-DU.

In the O-RAN architecture, the O-DU and O-CU can be deployed in different scenarios: they can be container based or VM based, and both are supported in this release. In general, the performance-sensitive parts of the 5G stack require a real-time platform. This is especially true for the O-DU, where L1 and L2 require real-time behavior, so the platform should support the Preemptive Scheduling feature.

The following requirements address the container-based solution:

  1. Support the real time kernel

  2. Support Node Feature Discovery

  3. Support CPU Affinity and Isolation

  4. Support Dynamic HugePages Allocation
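
The first and fourth requirements can be checked from a shell on the target node. This is a minimal sketch (the `is_rt_kernel` helper is illustrative; the `/proc/meminfo` fields are standard Linux, but exact output varies by distribution):

```shell
# Helper takes the `uname -v` string as an argument so it is easy to test;
# PREEMPT_RT kernels normally advertise themselves there.
is_rt_kernel() {
  case "$1" in
    *PREEMPT_RT*|*"PREEMPT RT"*) return 0 ;;
    *) return 1 ;;
  esac
}

if is_rt_kernel "$(uname -v)"; then
  echo "real-time (PREEMPT_RT) kernel detected"
else
  echo "standard (non-RT) kernel"
fi

# Hugepage pools configured on this node (0 means none allocated yet):
grep -E 'HugePages_(Total|Free)' /proc/meminfo || echo "no hugepage info available"
```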

And for the network requirements, the following should be supported:

  1. Multiple networking interfaces

  2. High-performance data plane, including a DPDK-based vSwitch and PCI pass-through/SR-IOV.
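
As a concrete illustration of the SR-IOV side, virtual functions on a capable NIC are enabled through sysfs. This is a hedged sketch: the interface name `ens1f0` and the `clamp_vfs` helper are illustrative, not part of the project.

```shell
# Never request more VFs than the device advertises in sriov_totalvfs.
clamp_vfs() {
  # $1: requested VF count, $2: NIC's sriov_totalvfs
  if [ "$1" -gt "$2" ]; then echo "$2"; else echo "$1"; fi
}

# On a real node (requires root and an SR-IOV-capable NIC):
#   total=$(cat /sys/class/net/ens1f0/device/sriov_totalvfs)
#   clamp_vfs 8 "$total" > /sys/class/net/ens1f0/device/sriov_numvfs
echo "VFs to enable: $(clamp_vfs 8 64)"
```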

O-Cloud Components

In this project, the following O-Cloud components and services are enabled:

  1. Fault Management

    • Framework for infrastructure services to raise and persist alarm and event data.

      • Set, clear and query customer alarms

      • Generate customer logs for significant events

    • Maintains an Active Alarm List

    • Provides REST API to query alarms and events, also available through SNMP traps

    • Support for alarm suppression

    • Operator alarms

      • On platform nodes and resources

      • On hosted virtual resources

    • Operator logs - Event List

      • Logging of sets/clears of alarms

      • Related to platform nodes and resources

      • Related to hosted virtual resources

  2. Configuration Management

    • Manages Installation and Commissioning

      • Auto-discovery of new nodes

      • Full Infrastructure management

      • Manage installation parameters (e.g., console, root disks)

    • Nodal Configuration

      • Node role, role profiles

      • Core, memory (including huge page) assignments

      • Network Interfaces and storage assignments

    • Hardware Discovery

      • CPU/cores, SMT, processors, memory, huge pages

      • Storage, ports

      • GPUs, storage, Crypto/compression H/W

  3. Software Management

    • Manages software updates (patches) of the infrastructure software

      • Automated, rolling installation of updates across the hosts of a cloud

      • Supports in-service and reboot-required updates

    • Manages software upgrades of the platform from one release to the next

  4. Host Management

    • Full life-cycle and availability management of the physical hosts

    • Detects and automatically handles host failures and initiates recovery

    • Monitoring and fault reporting for:

      • Cluster connectivity

      • Critical process failures

      • Resource utilization thresholds, interface states

      • H/W fault / sensors, host watchdog

      • Activity progress reporting

    • Interfaces with board management (BMC)

      • For out of band reset

      • Power-on/off

      • H/W sensor monitoring

  5. Service Management

    • Manages high availability of critical infrastructure and cluster services

      • Supports multiple redundancy models: N and N+M

      • Active or passive monitoring of services

      • Allows for specifying the impact of a service failure and escalation policy

      • Automatically recovers failed services

    • Uses multiple messaging paths to avoid split-brain communication failures

      • Up to 3 independent communication paths

      • LAG can also be configured for multi-link protection of each path

      • Messages are authenticated using HMAC

      • SHA-512, if configured/enabled, on an interface-by-interface basis

  6. Support for Ansible bootstrap to implement zero-touch provisioning

Ansible configuration functions are enabled for the infrastructure itself, including image installation and service configuration.
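
As a sketch of what the bootstrap step looks like: StarlingX-style deployments are bootstrapped with an Ansible playbook driven by a small overrides file. The playbook path follows the StarlingX convention, and the override values below are placeholders only, not a working configuration.

```shell
# Write an example overrides file (values are illustrative placeholders).
OVERRIDES="$HOME/localhost.yml"
PLAYBOOK="/usr/share/ansible/stx-ansible/playbooks/bootstrap.yml"

cat > "$OVERRIDES" <<'EOF'
system_mode: simplex
external_oam_subnet: 10.10.10.0/24
external_oam_floating_address: 10.10.10.2
EOF

# On a freshly installed controller you would then run:
echo "ansible-playbook $PLAYBOOK -e @$OVERRIDES"
```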

NOTE: These features leverage StarlingX (www.starlingx.io). In the current release, these features are only available for the IA platform.

Multi OS and Deployment Configurations

  • The INF project supports multiple OSes; currently the following are supported:

    • Debian 11 (bullseye)

    • CentOS 7

    • Yocto 2.7 (warrior)

A variety of deployment configuration options are supported:

  1. All-in-one Simplex

A single physical server providing all three cloud functions (controller, worker and storage).

  2. All-in-one Duplex

Two HA-protected physical servers, both running all three cloud functions (controller, worker and storage).

  3. All-in-one Duplex + up to 50 worker nodes

Two HA-protected physical servers, both running all three cloud functions (controller, worker and storage), plus up to 50 worker nodes added to the cluster.

  4. Standard with Storage Cluster on Controller Nodes

A two-node HA controller + storage node cluster, managing up to 200 worker nodes.

  5. Standard with Storage Cluster on dedicated Storage Nodes

A two-node HA controller node cluster with a 2-9 node Ceph storage cluster, managing up to 200 worker nodes.

  6. Distributed Cloud

The Distributed Cloud configuration supports an edge computing solution by providing central management and orchestration for a geographically distributed network of StarlingX systems.

NOTE:

  • For the Debian- and CentOS-based images, all of the above deployment configurations are supported.

  • For the Yocto-based image, only deployments 1 - 3 are supported, and only the container-based solution is supported; the VM-based solution is not supported yet.

About Yocto and OpenEmbedded

The Yocto Project is an open source collaboration project that provides templates, tools and methods to help you create custom Linux-based systems for embedded and IoT products, regardless of the hardware architecture.

OpenEmbedded is a build automation framework and cross-compile environment used to create Linux distributions for embedded devices. The OpenEmbedded framework is developed by the OpenEmbedded community, which was formally established in 2003. OpenEmbedded is the recommended build system of the Yocto Project, which is a Linux Foundation workgroup that assists commercial companies in the development of Linux-based systems for embedded products.

About StarlingX

StarlingX is a complete cloud infrastructure software stack for the edge, used by the most demanding applications in industrial IoT, telecom, video delivery and other ultra-low-latency use cases. With the deterministic low latency required by edge applications, and tools that make the distributed edge manageable, StarlingX provides a container-based infrastructure for edge implementations in scalable solutions that are ready for production now.

Contact info

If you need support or want to add new features/components, please feel free to contact the following:

INF Release Notes

This document provides the release notes for G-Release (7.0.0) of INF RTP.

Version history

  Date         Ver.    Author          Comment
  ----------   -----   -------------   -------------------------------
  2019-11-02   1.0.0   Jackie Huang    Initial Version (Amber Release)
  2020-06-14   2.0.0   Xiaohua Zhang   Bronze Release
  2020-11-23   3.0.0   Xiaohua Zhang   Cherry Release
  2021-06-29   4.0.0   Xiaohua Zhang   Dawn Release
  2021-12-15   5.0.0   Jackie Huang    E Release
  2022-06-15   6.0.0   Jackie Huang    F Release
  2022-12-15   7.0.0   Jackie Huang    G Release

Version 7.0.0, 2022-12-15

  1. Seventh version (G release)

  2. INF MultiOS support:

    • Add support for Debian as the base OS

    • Three images will be provided:

      • Yocto based image

      • CentOS based image

      • Debian based image

  3. Enable three deployment modes on Yocto based image:

    • AIO simplex mode

    • AIO duplex mode (2 servers with High Availability)

    • AIO duplex mode (2 servers with High Availability) with additional worker node

  4. Enable four deployment modes on CentOS based image:

    • AIO simplex mode

    • AIO duplex mode (2 servers with High Availability)

    • AIO duplex mode (2 servers with High Availability) with additional worker node

    • Distributed Cloud

  5. Enable four deployment modes on Debian based image:

    • AIO simplex mode

    • AIO duplex mode (2 servers with High Availability)

    • AIO duplex mode (2 servers with High Availability) with additional worker node

    • Distributed Cloud

Version 6.0.0, 2022-06-15

  1. Sixth version (F release)

  2. INF MultiOS support:

    • Add support for CentOS as the base OS

    • Two images will be provided:

      • Yocto based image

      • CentOS based image

  3. Enable three deployment modes on Yocto based image:

    • AIO simplex mode

    • AIO duplex mode (2 servers with High Availability)

    • AIO duplex mode (2 servers with High Availability) with additional worker node

  4. Enable four deployment modes on CentOS based image:

    • AIO simplex mode

    • AIO duplex mode (2 servers with High Availability)

    • AIO duplex mode (2 servers with High Availability) with additional worker node

    • Distributed Cloud

Version 5.0.0, 2021-12-15

  1. Fifth version (E release)

  2. Upgrade most components to align with StarlingX 5.0

  3. Enable three deployment modes:

    • AIO simplex mode

    • AIO duplex mode (2 servers with High Availability)

    • AIO duplex mode (2 servers with High Availability) with additional worker node

Version 4.0.0, 2021-06-29

  1. Fourth version (D release)

  2. Enable the AIO duplex mode (2 servers with High Availability) with additional worker node.

  3. Reconstruct the repo to align with the upstream projects, including StarlingX and Yocto

Version 3.0.0, 2020-11-23

  1. Third version (Cherry)

  2. Based on version 2.0.0 (Bronze)

  3. Add the AIO (all-in-one) 2 servers mode (High Availability)

Version 2.0.0, 2020-06-14

  1. Second version (Bronze)

  2. Based on Yocto version 2.7

  3. Linux kernel 5.0 with preempt-rt patches

  4. Leverage StarlingX 3.0

  5. Support the AIO (all-in-one) deployment scenario

  6. With Fault Management, Software Management, Configuration Management, Host Management, and Service Management enabled for the IA platform

  7. Support the Kubernetes Cluster for ARM platform (verified by NXP LX2160A)

  8. With the Ansible bootstrap supported for the IA platform

Version 1.0.0, 2019-11-02

  1. Initial Version

  2. Based on Yocto version 2.6 (‘thud’ branch)

  3. Linux kernel 4.18.41 with preempt-rt patches

  4. Add Docker-18.09.0, kubernetes-1.15.2

  5. Add kubernetes plugins:

    • kubernetes-dashboard-1.8.3

    • flannel-0.11.0

    • multus-cni-3.3

    • node-feature-discovery-0.4.0

    • cpu-manager-for-kubernetes-1.3.1

INF Installation Guide

Overview

O-RAN INF is a downstream project of StarlingX and uses the same installation and deployment methods.

Please see the details of all the supported Deployment Configurations.

Notes: For the Yocto-based image, only “All-in-one Simplex” and “All-in-one Duplex (up to 50 worker nodes)” are supported.

Preface

Before starting the installation and deployment of O-RAN INF, you need to download the released ISO image or build it from source as described in the developer guide.

The INF project supports multiple OSes, and the latest released images for each base OS can be downloaded at:

Hardware Requirements

Installation

Platform Installation

INF uses the same installation and deployment methods as StarlingX; please refer to the StarlingX Installation guide for the detailed installation steps.

Applications Installation

Here are example application installations:

INF Developer Guide

1. About the INF project

This project is a reference implementation of O-Cloud infrastructure based on StarlingX, and it supports multiple OSes.

  • Currently the following OSes are supported:

    • Debian 11 (bullseye)

    • CentOS 7

    • Yocto 2.7 (warrior)

Notes:
  • Debian-based is the recommended platform.

  • The intended audience of this guide is developers who want to develop/integrate apps on the INF platform. If you just want to install and deploy the INF platform, you can skip this guide and read the INF Installation Guide.

1.1 About the Debian based implementation

The project provides wrapper scripts to automate all the steps of the StarlingX Debian Build Guide to build the reference platform as an installable ISO image.

1.2 About the CentOS based implementation

The project provides wrapper scripts to automate all the steps of the StarlingX Build Guide to build the reference platform as an installable ISO image.

1.3 About the Yocto based implementation

The project provides wrapper scripts to pull all required Yocto/OE layers to build the reference platform as an installable ISO image.

To contribute to this project, basic knowledge of Yocto/OpenEmbedded is needed; please refer to the following docs to learn how to develop with Yocto/OpenEmbedded:

2. How to build the INF project

2.1 How to build the Debian based image

2.1.1 Prerequisite for Debian build environment

NOTE: The build system for Debian requires a Linux system with Docker and Python 3.x installed. The following steps have been tested on CentOS 7 and Ubuntu 20.04.
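
A quick, hedged probe for those prerequisites (tool names only; `have` is a small illustrative helper, and exact required versions should still be checked against the build guide):

```shell
# Report which required tools are on PATH.
have() { command -v "$1" >/dev/null 2>&1; }

for tool in docker python3; do
  if have "$tool"; then
    echo "$tool: found ($("$tool" --version 2>&1 | head -n 1))"
  else
    echo "$tool: MISSING"
  fi
done
```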

2.1.2 Use wrapper script build_inf_debian.sh to build the Debian based image
# Get the wrapper script to build the debian image
$ wget -O build_inf_debian.sh 'https://gerrit.o-ran-sc.org/r/gitweb?p=pti/rtp.git;a=blob_plain;f=scripts/build_inf_debian/build_inf_debian.sh;hb=HEAD'

$ chmod +x build_inf_debian.sh
$ WORKSPACE=/path/to/workspace
$ ./build_inf_debian.sh -w ${WORKSPACE}

If all goes well, you will get the ISO image at: ${WORKSPACE}/prj_output/inf-image-debian-all-x86-64.iso
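
As a small optional follow-up, you can checksum the resulting image before distributing it. The path mirrors the output location above; `iso_path` is just an illustrative helper, not part of the build scripts:

```shell
# Compute the expected ISO location and checksum it if the build produced it.
iso_path() { echo "$1/prj_output/inf-image-debian-all-x86-64.iso"; }

ISO="$(iso_path "${WORKSPACE:-/path/to/workspace}")"
if [ -f "$ISO" ]; then
  sha256sum "$ISO"   # record the checksum before distributing the image
else
  echo "ISO not found at $ISO (expected only after a successful build)"
fi
```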

2.2 How to build the CentOS based image

NOTE: Building is only supported on CentOS 7, which will be EOL on 30 Jun 2024.

2.2.1 Prerequisite for CentOS build environment

NOTE: This step requires the user to have sudo permission.

# Get the wrapper script for preparing the build environment
$ wget -O build_inf_prepare.sh 'https://gerrit.o-ran-sc.org/r/gitweb?p=pti/rtp.git;a=blob_plain;f=scripts/build_inf_centos/build_inf_prepare_jenkins.sh;hb=HEAD'

$ chmod +x build_inf_prepare.sh
$ WORKSPACE=/path/to/workspace
$ ./build_inf_prepare.sh -w ${WORKSPACE}
2.2.2 Use wrapper script build_inf_centos.sh to build the CentOS based image
# Get the wrapper script to build the centos image
$ wget -O build_inf_centos.sh 'https://gerrit.o-ran-sc.org/r/gitweb?p=pti/rtp.git;a=blob_plain;f=scripts/build_inf_centos/build_inf_centos.sh;hb=HEAD'

$ chmod +x build_inf_centos.sh
$ WORKSPACE=/path/to/workspace
$ ./build_inf_centos.sh -w ${WORKSPACE}

If all goes well, you will get the ISO image at: ${WORKSPACE}/prj_output/inf-image-centos-all-x86-64.iso

2.3 How to build the Yocto based image

2.3.1 Prerequisite for Yocto build environment

The recommended and tested hosts are Ubuntu 16.04/18.04 and CentOS 7.

  • To install the required packages for Ubuntu 16.04/18.04:

$ sudo apt-get install gawk wget git-core diffstat unzip texinfo gcc-multilib \
  build-essential chrpath socat cpio python python3 python3-pip python3-pexpect \
  xz-utils debianutils iputils-ping make xsltproc docbook-utils fop dblatex xmlto \
  python-git
  • To install the required packages for CentOS 7:

$ sudo yum install -y epel-release
$ sudo yum makecache
$ sudo yum install gawk make wget tar bzip2 gzip python unzip perl patch \
  diffutils diffstat git cpp gcc gcc-c++ glibc-devel texinfo chrpath socat \
  perl-Data-Dumper perl-Text-ParseWords perl-Thread-Queue perl-Digest-SHA \
  python34-pip xz which SDL-devel xterm
2.3.2 Use wrapper script build_inf_yocto.sh to build the Yocto based image
# Get the wrapper script with either curl or wget
$ curl -o build_inf_yocto.sh 'https://gerrit.o-ran-sc.org/r/gitweb?p=pti/rtp.git;a=blob_plain;f=scripts/build_inf_yocto/build_inf_yocto.sh;hb=HEAD'
$ wget -O build_inf_yocto.sh 'https://gerrit.o-ran-sc.org/r/gitweb?p=pti/rtp.git;a=blob_plain;f=scripts/build_inf_yocto/build_inf_yocto.sh;hb=HEAD'

$ chmod +x build_inf_yocto.sh
$ WORKSPACE=/path/to/workspace
$ ./build_inf_yocto.sh -w ${WORKSPACE}

If all goes well, you will get the ISO image at: ${WORKSPACE}/prj_output/inf-image-yocto-aio-x86-64.iso