Installing Near Realtime RIC

The installation of the Near Realtime RAN Intelligent Controller is spread across two separate Kubernetes clusters. The first cluster is used for deploying the Near Realtime RIC itself (platform and applications), and the second is for deploying auxiliary functions. They are referred to as the RIC cluster and the AUX cluster, respectively.

The following diagram depicts the installation architecture.


Within the RIC cluster, Kubernetes resources are deployed in three namespaces by default: ricinfra, ricplt, and ricxapp. Similarly, within the AUX cluster, Kubernetes resources are deployed in two namespaces: ricinfra and ricaux.

Each cluster runs a Kong ingress controller that proxies incoming API calls into the cluster. With Kong, service APIs provided by Kubernetes resources can be accessed at the cluster node IP and port via a URL path. For cross-cluster communication, in addition to Kong, each Kubernetes namespace defines a special Kubernetes service whose endpoint points to the other cluster’s Kong. This way, any pod can access services exposed at the other cluster via the internal hostname and port of this special service. The figure below illustrates how Kong and these special services work together to realize cross-cluster communication.
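As a concrete illustration, such a special service can be sketched as a selector-less Service plus a manually managed Endpoints object whose address is the other cluster's Kong. The service name (aux-entry) and the node IP ( below are hypothetical placeholders; only the Kong nodePort 32080 comes from this guide.

```yaml
# Sketch: a selector-less Service in the RIC cluster whose endpoint is the
# AUX cluster's Kong ingress. The name and IP address are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: aux-entry        # hypothetical name
  namespace: ricplt
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 32080  # Kong's nodePort on the AUX cluster
---
apiVersion: v1
kind: Endpoints
metadata:
  name: aux-entry        # must match the Service name
  namespace: ricplt
subsets:
  - addresses:
      - ip:      # placeholder: a node IP of the AUX cluster
    ports:
      - port: 32080
```

With such a definition, a pod in the ricplt namespace could reach AUX-side APIs at http://aux-entry/<url-path>, and Kong on the AUX side would route the request to the right service by URL path.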



Both the RIC and AUX clusters must satisfy the following prerequisites:

  • Kubernetes v1.16.0 or above

  • Helm v2.12.3 (or v3.5.x) or above

  • Read-write access to directory /mnt

The following two sections show two example methods to create an environment for installing RIC.

VirtualBox VMs as Installation Hosts

The deployment of Near Realtime RIC can be done on a wide range of hosts, including bare metal servers, OpenStack VMs, and VirtualBox VMs. This section provides detailed instructions for setting up Oracle VirtualBox VMs to be used as installation hosts.


The setup requires two VMs connected by a private network. With VirtualBox, this can be done under its “Preferences” menu by setting up a private NAT network:

  1. Pick “Preferences”, then select the “Network” tab;

  2. Click on the “+” icon to create a new NAT network. A new entry will appear in the NAT networks list

  3. Double click on the new network to edit its details; give it a name such as “RICNetwork”

  4. In the dialog, make sure to check the “Enable Network” box, uncheck the “Supports DHCP” box, and make a note of the “Network CIDR”;

  5. Click on the “Port Forwarding” button, then create the following rules in the table (each rule specifies Name, Protocol, Host IP, Host Port, Guest IP, and Guest Port):

    1. “ssh to ric”, TCP,, 22222,, 22;

    2. “ssh to aux”, TCP,, 22223,, 22;

    3. “entry to ric”, TCP,, 22224,, 32080;

    4. “entry to aux”, TCP,, 22225,, 32080.

  6. Click “OK” on each dialog to finish creating the network.

Creating VMs

Create a VirtualBox VM:

  1. Click “New”, then in the pop-up: name it (for example, myric), select “Linux” type, and give it at least 6 GB RAM and 20 GB disk;

  2. Click “Create”. The new VM will appear in the list of VMs;

  3. Highlight the new VM entry, right-click on it, and select “Settings”:

    1. Under the “System” tab, then “Processor” tab, make sure to give the VM at least 2 vCPUs.

    2. Under the “Storage” tab, point the CD to an Ubuntu 18.04 server ISO file;

    3. Under the “Network” tab, then “Adapter 1” tab, make sure to:

      1. Check “Enable Network Adapter”;

      2. Set “Attached to” to “NAT Network”;

      3. Select the Network that was created in the previous section: “RICNetwork”.

Repeat the process and create the second VM named myaux.

Booting VM and OS Installation

Follow the OS installation steps to install the OS onto the VM’s virtual disk. During setup you must configure static IP addresses, as discussed next. Also make sure to install the OpenSSH server.

VM Network Configuration

Depending on the version of the OS, networking may be configured during the OS installation or afterwards. The network interface is configured with a static IP address:

  • IP Address: for myric or for myaux;

  • Subnet, or network mask

  • Default gateway:

  • Name server:; if access to it is blocked, configure a local DNS server instead.
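On Ubuntu 18.04, the static configuration above can be applied with netplan. The sketch below is illustrative only: the interface name and every address are placeholders; substitute the values chosen for your NAT network CIDR.

```yaml
# /etc/netplan/01-ric.yaml (sketch; all addresses below are placeholders)
network:
  version: 2
  ethernets:
    enp0s3:                        # adapter name may differ; check with `ip link`
      dhcp4: no
      addresses: []     # e.g. one address for myric, another for myaux
      gateway4:            # the NAT network's gateway
      nameservers:
        addresses: []       # or a local DNS server if access is blocked
```

After editing the file, the configuration would be activated with `sudo netplan apply`.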

Accessing the VMs

Because of the port forwarding configurations, the VMs are accessible from the VirtualBox host via ssh.

One-Node Kubernetes Cluster

This section describes how to set up a one-node Kubernetes cluster onto a VM installation host.

Script for Setting Up 1-node Kubernetes Cluster

The it/dep repo can be used for generating a simple script that can help set up a one-node Kubernetes cluster for dev and testing purposes. Related files are under the tools/k8s/bin directory. Clone the repository on the target VM:

% git clone


The generation of the script reads in the parameters from the following files:

  • etc/env.rc: Normally no change is needed in this file. If the environment where the Kubernetes cluster runs has special requirements, such as a private Docker registry with self-signed certificates, or hostnames that can only be resolved via private /etc/hosts entries, enter such parameters into this file.

  • etc/infra.rc: This file specifies the Docker host, Kubernetes, and Kubernetes CNI versions. If a version is left empty, the installation will use the default version that the OS package manager would install.

  • etc/openstack.rc: If the Kubernetes cluster is deployed on OpenStack VMs, this file specifies parameters for accessing the APIs of the OpenStack installation. This is not yet supported in the Amber release.
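As an illustration, pinning versions in etc/infra.rc might look like the following. The variable names and values are examples patterned after the repo's conventions, not authoritative; check the file in your clone for the actual names.

```shell
# Example version pins (illustrative). An empty value means
# "use whatever default the OS package manager installs".
INFRA_DOCKER_VERSION="19.03.6"
INFRA_K8S_VERSION="1.16.0"
INFRA_CNI_VERSION="0.7.5"
INFRA_HELM_VERSION="2.12.3"
```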

Generating Set-up Script

After the configurations are updated, the following steps create a script file that can be used for setting up a one-node Kubernetes cluster. This command must be run on a Linux machine with the ‘envsubst’ command installed.

% cd tools/k8s/bin
% ./

A new script file will now appear under the bin directory.
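The generator essentially substitutes the variables from the rc files into a script template. The idea can be sketched in plain shell; the file name and variable here are made up for illustration and are not the repo's actual names.

```shell
# Sketch of the template-expansion idea: a version variable from an rc file
# is expanded into a generated setup script fragment.
K8S_VERSION="1.16.0"                       # would come from etc/infra.rc
cat > k8s-setup-snippet.txt <<EOF
Installing Kubernetes ${K8S_VERSION}
EOF
cat k8s-setup-snippet.txt
```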

Setting up Kubernetes Cluster

The new file is now ready for setting up the Kubernetes cluster.

It can be run from a root shell of an existing Ubuntu 16.04 or 18.04 VM. Running this script will replace any existing installation of Docker host, Kubernetes, and Helm on the VM. The script will reboot the machine upon successful completion. Run the script like this:

% sudo -i
# ./

This script can also be used as the user-data (a.k.a. cloud-init script) supplied to OpenStack when launching a new Ubuntu 16.04 or 18.04 VM.

Upon successful execution of the script and reboot of the machine, querying with the kubectl command from a root shell should display information similar to the below:

# kubectl get pods --all-namespaces
NAMESPACE     NAME                                   READY   STATUS       RESTARTS  AGE
kube-system   coredns-5644d7b6d9-4gjp5               1/1     Running      0         103m
kube-system   coredns-5644d7b6d9-pvsj8               1/1     Running      0         103m
kube-system   etcd-ljitest                           1/1     Running      0         102m
kube-system   kube-apiserver-ljitest                 1/1     Running      0         103m
kube-system   kube-controller-manager-ljitest        1/1     Running      0         102m
kube-system   kube-flannel-ds-amd64-nvjmq            1/1     Running      0         103m
kube-system   kube-proxy-867v5                       1/1     Running      0         103m
kube-system   kube-scheduler-ljitest                 1/1     Running      0         102m
kube-system   tiller-deploy-68bf6dff8f-6pwvc         1/1     Running      0         102m

One-time Setup for InfluxDB

Once the Kubernetes setup is done, a PersistentVolume must be created through a storage class for the InfluxDB database. The following one-time procedure should be performed before deploying InfluxDB in the ricplt namespace.

# Check whether the ricinfra namespace exists:
% kubectl get ns ricinfra

# If the namespace doesn’t exist, then create it using:
% kubectl create ns ricinfra

% helm install stable/nfs-server-provisioner --namespace ricinfra --name nfs-release-1
% kubectl patch storageclass nfs -p '{"metadata": {"annotations":{"":"true"}}}'
% sudo apt install nfs-common
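After the steps above, workloads such as InfluxDB can claim storage from the nfs storage class. A minimal PersistentVolumeClaim sketch follows; the claim name and size are hypothetical, and only the storage class name nfs comes from the commands above.

```yaml
# Sketch: a claim against the nfs storage class created by the
# nfs-server-provisioner chart. Name and size are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: influxdb-data        # hypothetical name
  namespace: ricplt
spec:
  storageClassName: nfs      # provided by nfs-server-provisioner above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi           # example size
```

A PersistentVolume is then provisioned dynamically when the claim is bound, which is what the InfluxDB deployment relies on.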

RIC Platform

After the Kubernetes cluster is installed, the next step is to install the (Near Realtime) RIC Platform.

See instructions in ric-plt/ric-dep:

AUX Functionalities (Optional)

Resource Requirements

To run the RIC-AUX cluster in a dev testing setting, the minimum requirement for resources is a VM with 4 vCPUs, 16G RAM and at least 40G of disk space.

Getting and Preparing Deployment Scripts

Run the following commands in a root shell:

git clone
cd dep
git submodule update --init --recursive --remote

Modify the deployment recipe

Edit the recipe file ./RECIPE_EXAMPLE/AUX/example_recipe.yaml.

  • Specify the IP addresses used by the RIC and AUX cluster ingress controllers (e.g., the main interface IPs) in the following section. If you are only testing the AUX cluster, you can use any private IPs (e.g., and

  ricip: ""
  auxip: ""
  • To specify which version of the RIC platform components will be deployed, update the RIC platform component container tags in their corresponding section.

  • You can specify which Docker registry will be used for each component. If the Docker registry requires login credentials, you can add them in the following section. Note that the installation script already includes credentials for O-RAN Linux Foundation Docker registries; please do not create duplicate entries.

  enabled: true
      registry: ""
        user: ""
        password: ""
        email: ""

For more advanced recipe configuration options, refer to the recipe configuration guideline.

Deploying the Aux Group

After the recipes are edited, the AUX group is ready to be deployed.

cd dep/bin
./deploy-ric-aux ../RECIPE_EXAMPLE/AUX/example_recipe.yaml

Checking the Deployment Status

Now check the deployment status. Results similar to the below indicate a complete and successful deployment:

# helm list
NAME                  REVISION        UPDATED                         STATUS          CHART                   APP VERSION     NAMESPACE
r3-aaf                1               Mon Jan 27 13:24:59 2020        DEPLOYED        aaf-5.0.0                               onap
r3-dashboard          1               Mon Jan 27 13:22:52 2020        DEPLOYED        dashboard-1.2.2         1.0             ricaux
r3-infrastructure     1               Mon Jan 27 13:22:44 2020        DEPLOYED        infrastructure-3.0.0    1.0             ricaux
r3-mc-stack           1               Mon Jan 27 13:23:37 2020        DEPLOYED        mc-stack-0.0.1          1               ricaux
r3-message-router     1               Mon Jan 27 13:23:09 2020        DEPLOYED        message-router-1.1.0                    ricaux
r3-mrsub              1               Mon Jan 27 13:23:24 2020        DEPLOYED        mrsub-0.1.0             1.0             ricaux
r3-portal             1               Mon Jan 27 13:24:12 2020        DEPLOYED        portal-5.0.0                            ricaux
r3-ves                1               Mon Jan 27 13:23:01 2020        DEPLOYED        ves-1.1.1               1.0             ricaux

# kubectl get pods -n ricaux
NAME                                           READY   STATUS     RESTARTS   AGE
deployment-ricaux-dashboard-f78d7b556-m5nbw    1/1     Running    0          6m30s
deployment-ricaux-ves-69db8c797-v9457          1/1     Running    0          6m24s
elasticsearch-master-0                         1/1     Running    0          5m36s
r3-infrastructure-kong-7697bccc78-nsln7        2/2     Running    3          6m40s
r3-mc-stack-kibana-78f648bdc8-nfw48            1/1     Running    0          5m37s
r3-mc-stack-logstash-0                         1/1     Running    0          5m36s
r3-message-router-message-router-0             1/1     Running    3          6m11s
r3-message-router-message-router-kafka-0       1/1     Running    1          6m11s
r3-message-router-message-router-kafka-1       1/1     Running    2          6m11s
r3-message-router-message-router-kafka-2       1/1     Running    1          6m11s
r3-message-router-message-router-zookeeper-0   1/1     Running    0          6m11s
r3-message-router-message-router-zookeeper-1   1/1     Running    0          6m11s
r3-message-router-message-router-zookeeper-2   1/1     Running    0          6m11s
r3-mrsub-5c94f5b8dd-wxcw5                      1/1     Running    0          5m58s
r3-portal-portal-app-8445f7f457-dj4z8          2/2     Running    0          4m53s
r3-portal-portal-cassandra-79cf998f69-xhpqg    1/1     Running    0          4m53s
r3-portal-portal-db-755b7dc667-kjg5p           1/1     Running    0          4m53s
r3-portal-portal-db-config-bfjnc               2/2     Running    0          4m53s
r3-portal-portal-zookeeper-5f8f77cfcc-t6z7w    1/1     Running    0          4m53s

RIC Applications

See instructions in ric-plt/ric-dep: