Installation Guides

This document describes how to install the RIC components deployed by the scripts and Helm charts under the ric-plt/ric-dep repository, including the dependencies and required system resources.

Version history

Date          Ver.    Author          Comment
2020-02-29    0.1.0   Abdulwahid W

Overview

This section covers the installation of the Near Realtime RAN Intelligent Controller (RIC) Platform only.

Prerequisites

The steps below assume a clean installation of Ubuntu 20.04 (no Kubernetes, no Docker, no Helm).
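A quick sanity check of the VM (a minimal sketch using standard shell tooling, not part of the official scripts) can confirm the release and that none of these tools are already present:

# Optional sanity check: confirm the Ubuntu release and that no Kubernetes,
# Docker or Helm binaries are already installed (the loop prints only tools it finds)
lsb_release -d
for tool in kubeadm kubectl docker helm; do
    command -v "$tool" && echo "$tool is already installed"
done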

Installing Near Realtime RIC in RIC Cluster

After the Kubernetes cluster is installed, the next step is to install the (Near Realtime) RIC Platform.

Getting and Preparing Deployment Scripts

Clone the ric-plt/ric-dep git repository, which contains the deployment scripts and support files, onto the target VM.

% git clone "https://gerrit.o-ran-sc.org/r/ric-plt/ric-dep"

Deploying the Infrastructure and Platform Groups

Use the scripts below to install Kubernetes, Kubernetes CNI, Helm and Docker on a fresh Ubuntu 20.04 installation. Note that since May 2022 nothing from the it/dep repository is needed anymore.

# install kubernetes, kubernetes-CNI, helm and docker
cd ric-dep/bin
./install_k8s_and_helm.sh

# install chartmuseum into helm and add ric-common templates
./install_common_templates_to_helm.sh
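Once the scripts finish, a quick check with standard kubectl and helm commands (not part of the install scripts) confirms that the cluster and Helm are usable:

# verify that the node is Ready, system pods are running, and helm responds
kubectl get nodes
kubectl get pods --all-namespaces
helm version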

Once Helm is running, the Near Realtime RIC platform is ready to be deployed; however, first update the deployment recipe as described in the next section.

Modify the deployment recipe

Edit the recipe file ./RECIPE_EXAMPLE/example_recipe_latest_stable.yaml (a softlink that points to the recipe of the latest release). example_recipe_latest_unstable.yaml points to the latest example file that is under current development. The extsvcplt section below carries the external RIC and AUX IP addresses (ricip and auxip); fill these in for your environment:

extsvcplt:
  ricip: ""
  auxip: ""
• Deployment scripts support both helm v2 and v3. The deployment script determines the helm version installed in the cluster during deployment.

• To specify which version of the RIC platform components will be deployed, update the RIC platform component container tags in their corresponding section (see the example excerpt after the credential block below).

• You can specify which docker registry will be used for each component. If the docker registry requires login credentials, you can add them in the following section. Please note that the installation suite already includes credentials for the O-RAN Linux Foundation docker registries; please do not create duplicate entries.

docker-credential:
  enabled: true
  credential:
    SOME_KEY_NAME:
      registry: ""
      credential:
        user: ""
        password: ""
        email: ""

For more advanced recipe configuration options, please refer to the recipe configuration guideline.

Installing the RIC

After updating the recipe you can deploy the RIC with the commands below. In general, use the latest recipe marked stable or a recipe from a specific release.

cd ric-dep/bin
./install -f ../RECIPE_EXAMPLE/PLATFORM/example_recipe_latest_stable.yaml

Checking the Deployment Status

Now check the deployment status after a short wait. Results similar to the output shown below indicate a complete and successful deployment. Check the STATUS column from both kubectl outputs to ensure that all are either “Completed” or “Running”, and that none are “Error” or “ImagePullBackOff”.

# helm list
NAME                  REVISION        UPDATED                         STATUS          CHART                   APP VERSION     NAMESPACE
r3-a1mediator         1               Thu Jan 23 14:29:12 2020        DEPLOYED        a1mediator-3.0.0        1.0             ricplt
r3-appmgr             1               Thu Jan 23 14:28:14 2020        DEPLOYED        appmgr-3.0.0            1.0             ricplt
r3-dbaas1             1               Thu Jan 23 14:28:40 2020        DEPLOYED        dbaas1-3.0.0            1.0             ricplt
r3-e2mgr              1               Thu Jan 23 14:28:52 2020        DEPLOYED        e2mgr-3.0.0             1.0             ricplt
r3-e2term             1               Thu Jan 23 14:29:04 2020        DEPLOYED        e2term-3.0.0            1.0             ricplt
r3-infrastructure     1               Thu Jan 23 14:28:02 2020        DEPLOYED        infrastructure-3.0.0    1.0             ricplt
r3-jaegeradapter      1               Thu Jan 23 14:29:47 2020        DEPLOYED        jaegeradapter-3.0.0     1.0             ricplt
r3-rsm                1               Thu Jan 23 14:29:39 2020        DEPLOYED        rsm-3.0.0               1.0             ricplt
r3-rtmgr              1               Thu Jan 23 14:28:27 2020        DEPLOYED        rtmgr-3.0.0             1.0             ricplt
r3-submgr             1               Thu Jan 23 14:29:23 2020        DEPLOYED        submgr-3.0.0            1.0             ricplt
r3-vespamgr           1               Thu Jan 23 14:29:31 2020        DEPLOYED        vespamgr-3.0.0          1.0             ricplt

# kubectl get pods -n ricplt
NAME                                               READY   STATUS             RESTARTS   AGE
deployment-ricplt-a1mediator-69f6d68fb4-7trcl      1/1     Running            0          159m
deployment-ricplt-appmgr-845d85c989-qxd98          2/2     Running            0          160m
deployment-ricplt-dbaas-7c44fb4697-flplq           1/1     Running            0          159m
deployment-ricplt-e2mgr-569fb7588b-wrxrd           1/1     Running            0          159m
deployment-ricplt-e2term-alpha-db949d978-rnd2r     1/1     Running            0          159m
deployment-ricplt-jaegeradapter-585b4f8d69-tmx7c   1/1     Running            0          158m
deployment-ricplt-rsm-755f7c5c85-j7fgf             1/1     Running            0          158m
deployment-ricplt-rtmgr-c7cdb5b58-2tk4z            1/1     Running            0          160m
deployment-ricplt-submgr-5b4864dcd7-zwknw          1/1     Running            0          159m
deployment-ricplt-vespamgr-864f95c9c9-5wth4        1/1     Running            0          158m
r3-infrastructure-kong-68f5fd46dd-lpwvd            2/2     Running            3          160m

# kubectl get pods -n ricinfra
NAME                                        READY   STATUS      RESTARTS   AGE
deployment-tiller-ricxapp-d4f98ff65-9q6nb   1/1     Running     0          163m
tiller-secret-generator-plpbf               0/1     Completed   0          163m

Checking Container Health

Check the health of the application manager platform component by querying it via the ingress controller using the following command.

% curl -v http://localhost:32080/appmgr/ric/v1/health/ready

The output should look as follows.

*   Trying 10.0.2.100...
* TCP_NODELAY set
* Connected to 10.0.2.100 (10.0.2.100) port 32080 (#0)
> GET /appmgr/ric/v1/health/ready HTTP/1.1
> Host: 10.0.2.100:32080
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: application/json
< Content-Length: 0
< Connection: keep-alive
< Date: Wed, 22 Jan 2020 20:55:39 GMT
< X-Kong-Upstream-Latency: 0
< X-Kong-Proxy-Latency: 2
< Via: kong/1.3.1
<
* Connection #0 to host 10.0.2.100 left intact

Undeploying the Infrastructure and Platform Groups

To undeploy all the containers, perform the following steps in a root shell within the ric-dep repository.

# cd bin
# ./uninstall

Results similar to below indicate a complete and successful cleanup.

# ./undeploy-ric-platform
Undeploying RIC platform components [appmgr rtmgr dbaas1 e2mgr e2term a1mediator submgr vespamgr rsm jaegeradapter infrastructure]
release "r3-appmgr" deleted
release "r3-rtmgr" deleted
release "r3-dbaas1" deleted
release "r3-e2mgr" deleted
release "r3-e2term" deleted
release "r3-a1mediator" deleted
release "r3-submgr" deleted
release "r3-vespamgr" deleted
release "r3-rsm" deleted
release "r3-jaegeradapter" deleted
release "r3-infrastructure" deleted
configmap "ricplt-recipe" deleted
namespace "ricxapp" deleted
namespace "ricinfra" deleted
namespace "ricplt" deleted

Restarting the VM

After a reboot of the VM, and a suitable delay for initialization, all the containers should be running again as shown above.
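To confirm this, re-run the same status commands used above once the node has settled, for example:

# after the reboot, allow a few minutes for initialization, then check the pods again
kubectl get pods -n ricplt
kubectl get pods -n ricinfra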

RIC Applications

xApp Onboarding using the CLI tool dms_cli

The xApp onboarder provides a CLI tool called dms_cli that facilitates the xApp onboarding service for operators. It consumes the xApp descriptor and, optionally, an additional schema file, and produces xApp Helm charts.

Below is the sequence of steps to onboard, install and uninstall an xApp.

Step 1: (OPTIONAL) Install python3 and its dependencies, if not already installed.

Step 2: Prepare the xApp descriptor and an optional schema file. The xApp descriptor is a config file that defines the behavior of the xApp. The optional schema file is a JSON schema file that validates the self-defined parameters.
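For illustration only, a minimal descriptor skeleton might look like the sketch below. The fields shown (xapp_name, version, containers and the nested image block) follow the usual descriptor layout, but the values are placeholders and the authoritative JSON schema ships with appmgr, so validate your file against that.

#Hypothetical minimal xApp descriptor skeleton (placeholder values only;
#validate against the JSON schema shipped with appmgr before onboarding)
cat > config-file.json <<'EOF'
{
  "xapp_name": "example-xapp",
  "version": "1.0.0",
  "containers": [
    {
      "name": "example-xapp",
      "image": {
        "registry": "nexus3.o-ran-sc.org:10002",
        "name": "o-ran-sc/example-xapp",
        "tag": "1.0.0"
      }
    }
  ]
}
EOF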

Step 3: Create a local private Helm repository (chartmuseum). Before any xApp can be deployed, its Helm chart must be loaded into this repository.

#Create a local helm repository with a port other than 8080 on host
docker run --rm -u 0 -it -d -p 8090:8080 -e DEBUG=1 -e STORAGE=local -e STORAGE_LOCAL_ROOTDIR=/charts -v $(pwd)/charts:/charts chartmuseum/chartmuseum:latest
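A quick way to confirm the repository is reachable on the chosen host port (the same chartmuseum API is used in later steps) is:

#Verify the chart repository answers on the host port chosen above
curl -s http://localhost:8090/api/charts | jq .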

Step 4: Set up the environment variable for the CLI connection, using the same port as above.

#Set CHART_REPO_URL env variable
export CHART_REPO_URL=http://0.0.0.0:8090

Step 5: Install dms_cli tool

#Git clone appmgr
git clone "https://gerrit.o-ran-sc.org/r/ric-plt/appmgr"

#Change dir to xapp_onboarder
cd appmgr/xapp_orchestrater/dev/xapp_onboarder

#If pip3 is not installed, install it first
#(Debian/Ubuntu)
apt-get install python3-pip
#(CentOS/RHEL)
yum install python3-pip

#In case the dms_cli binary is already installed, it can be uninstalled using the following command
pip3 uninstall xapp_onboarder

#Install xapp_onboarder using the following command
pip3 install ./
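To confirm the installation (the exact help output depends on the installed version, so treat this as a rough check), verify the binary is on the PATH:

#Confirm dms_cli is installed and on the PATH
which dms_cli
#Most builds also list the available sub-commands via the help flag
dms_cli --help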

Step 6: (OPTIONAL) If the host user is a non-root user, assign the following permissions to the paths below after installing the packages.

#Assign relevant permission for non-root user
sudo chmod 755 /usr/local/bin/dms_cli
sudo chmod -R 755 /usr/local/lib/python3.6

Step 7: Onboard your xApp

# Make sure that you have the xapp descriptor config file and the schema file at your local file system
dms_cli onboard CONFIG_FILE_PATH SCHEMA_FILE_PATH
OR
dms_cli onboard --config_file_path=CONFIG_FILE_PATH --schema_file_path=SCHEMA_FILE_PATH

#Example:
dms_cli onboard /files/config-file.json /files/schema.json
OR
dms_cli onboard --config_file_path=/files/config-file.json --schema_file_path=/files/schema.json

Step 8: (OPTIONAL) List the helm charts in the helm repository.

#List all the helm charts in the helm repository (use the host port chosen in Step 3)
curl -X GET http://localhost:8090/api/charts | jq .

#List details of a specific helm chart in the helm repository
curl -X GET http://localhost:8090/api/charts/<XAPP_CHART_NAME>/<VERSION>

Step 9: (OPTIONAL) Delete a specific chart version from the helm repository.

#Delete a specific chart version from the helm repository
curl -X DELETE http://localhost:8090/api/charts/<XAPP_CHART_NAME>/<VERSION>

Step 10: (OPTIONAL) Download the xApp helm charts.

dms_cli download_helm_chart XAPP_CHART_NAME VERSION --output_path=OUTPUT_PATH
OR
dms_cli download_helm_chart --xapp_chart_name=XAPP_CHART_NAME --version=VERSION --output_path=OUTPUT_PATH

Example:
dms_cli download_helm_chart ueec 1.0.0 --output_path=/files/helm_xapp
OR
dms_cli download_helm_chart --xapp_chart_name=ueec --version=1.0.0 --output_path=/files/helm_xapp

Step 11: Install the xApp.

dms_cli install XAPP_CHART_NAME VERSION NAMESPACE
OR
dms_cli install --xapp_chart_name=XAPP_CHART_NAME --version=VERSION --namespace=NAMESPACE

Example:
dms_cli install ueec 1.0.0 ricxapp
OR
dms_cli install --xapp_chart_name=ueec --version=1.0.0 --namespace=ricxapp

Step 12: (OPTIONAL) Install the xApp using helm charts by providing an override values.yaml.

#Download the default values.yaml
dms_cli download_values_yaml XAPP_CHART_NAME VERSION --output_path=OUTPUT_PATH
OR
dms_cli download_values_yaml --xapp_chart_name=XAPP_CHART_NAME --version=VERSION --output_path=OUTPUT_PATH

Example:
dms_cli download_values_yaml traffic-steering 0.6.0 --output_path=/tmp
OR
dms_cli download_values_yaml --xapp_chart_name=traffic-steering --version=0.6.0 --output_path=/tmp

#Modify values.yaml and provide it as override file
dms_cli install XAPP_CHART_NAME VERSION NAMESPACE OVERRIDEFILE
OR
dms_cli install --xapp_chart_name=XAPP_CHART_NAME --version=VERSION --namespace=NAMESPACE --overridefile=OVERRIDEFILE

Example:
dms_cli install ueec 1.0.0 ricxapp /tmp/values.yaml
OR
dms_cli install --xapp_chart_name=ueec --version=1.0.0 --namespace=ricxapp --overridefile=/tmp/values.yaml

Step 13: (OPTIONAL) Uninstall the xApp.

dms_cli uninstall XAPP_CHART_NAME NAMESPACE
OR
dms_cli uninstall --xapp_chart_name=XAPP_CHART_NAME --namespace=NAMESPACE

Example:
dms_cli uninstall ueec ricxapp
OR
dms_cli uninstall --xapp_chart_name=ueec --namespace=ricxapp

Step 14: (OPTIONAL) Upgrade the xApp to a new version.

dms_cli upgrade XAPP_CHART_NAME OLD_VERSION NEW_VERSION NAMESPACE
OR
dms_cli upgrade --xapp_chart_name=XAPP_CHART_NAME --old_version=OLD_VERSION --new_version=NEW_VERSION --namespace=NAMESPACE

Example:
dms_cli upgrade ueec 1.0.0 2.0.0 ricxapp
OR
dms_cli upgrade --xapp_chart_name=ueec --old_version=1.0.0 --new_version=2.0.0 --namespace=ricxapp

Step 15: (OPTIONAL) Roll back the xApp to an older version.

dms_cli rollback XAPP_CHART_NAME NEW_VERSION OLD_VERSION NAMESPACE
OR
dms_cli rollback --xapp_chart_name=XAPP_CHART_NAME --new_version=NEW_VERSION --old_version=OLD_VERSION --namespace=NAMESPACE

Example:
dms_cli rollback ueec 2.0.0 1.0.0 ricxapp
OR
dms_cli rollback --xapp_chart_name=ueec --new_version=2.0.0 --old_version=1.0.0 --namespace=ricxapp

Step 16: (OPTIONAL) Check the health of the xApp.

dms_cli health_check XAPP_CHART_NAME NAMESPACE
OR
dms_cli health_check --xapp_chart_name=XAPP_CHART_NAME --namespace=NAMESPACE

Example:
dms_cli health_check ueec ricxapp
OR
dms_cli health_check --xapp_chart_name=ueec --namespace=ricxapp

OPTIONALLY use Redis Cluster (instead of Redis standalone)

Important

The redis-cluster is currently NOT part of the RIC platform and is therefore completely optional. This part of the document was created as a delivery item for the Jira ticket https://jira.o-ran-sc.org/browse/RIC-109, which is about assessing the feasibility of a redis-cluster (with data sharding) supporting the desired pod anti-affinity for high availability.

Overview

This section describes the environment and conditions used to test the feasibility of the Redis cluster set-up detailed in the above ticket. Redis Cluster is a distributed implementation of Redis with high performance goals. More details at https://redis.io/topics/cluster-spec

Environment Set-Up

The set-up was tested with a Kubernetes v1.19 cluster with:

  1. Pod topology spread constraints enabled (see the spec excerpt after this list). Reference: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints

  2. CEPH as the cluster storage solution.

  3. Three worker nodes in the Kubernetes cluster.
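For reference, a pod spec excerpt of the kind referred to in item 1 might look like the sketch below (the label and values are placeholders, not taken from the delivered chart):

# Hypothetical pod template excerpt: spread redis-cluster pods evenly across worker nodes
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: redis-cluster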

Execution

Once the environment is set up, a redis-cluster can be set up using the helm chart (also provided with this commit). Once the cluster is running, any master or slave redis instance pod can be deleted; it will automatically be compensated by a new instance (see the example below).
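As an illustration (pod and namespace names are placeholders), deleting one redis pod and watching its replacement being scheduled looks like this:

# delete one redis-cluster pod and watch a replacement come up
kubectl delete pod <redis-cluster-pod-name> -n <namespace>
kubectl get pods -n <namespace> -w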

At this stage the perl utility program (included with the helm chart) can be run. The helm chart installation output lists the required commands to invoke.

This utility program identifies the missing anti-affinity (as per the above ticket) of the redis instances in a redis-cluster. When executed, it instructs redis nodes to switch roles (e.g. master/slave) so that the end state meets the desired anti-affinity.
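One way to inspect the resulting role distribution (a sketch assuming kubectl access and the redis-cli binary inside the pods; pod and namespace names are placeholders) is:

# print cluster node IDs, roles (master/slave) and addresses to verify the end state
kubectl exec -n <namespace> <redis-cluster-pod-name> -- redis-cli cluster nodes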