Welcome to INF O2 documentation
INF O2 Service Overview
This project implements a reference O-RAN O2 IMS and DMS service to expose the INF platform to SMO via the O-RAN O2 interface.
In the G release, the following APIs are supported by the INF O2 service:
INF O2 service Infrastructure Management Service (IMS)
INF O2 service Inventory API
The O2 service discovers the following resources of the INF platform to answer queries from the SMO:
INF platform information
Resource Pool of the INF platform
Resources of the Resource Pool, including pserver, cpu, memory, interface, accelerator
Resource Types associated with Resources
INF platform Subscription and Notification
INF O2 service exposes the Subscription API to enable the SMO to subscribe to notifications of resource changes
INF platform Deployment Management Service profile query API
INF O2 service enables lookup of the INF native Kubernetes API information as part of the inventory
INF O2 service Monitoring API
The O2 service discovers alarms of the INF platform to answer queries from the SMO:
INF alarm event record information
INF alarm Subscription and Notification
INF O2 service exposes the alarm Subscription API to enable the SMO to subscribe to notifications of alarm changes
Developer-Guide
This project implements a reference implementation for O-RAN O2 IMS and DMS to expose the INF platform to SMO with the O2 interface.
To contribute to this project, you should be familiar with the INF platform as well as the O-RAN O2 interface specifications:
1. Prerequisite for building environment
An Ubuntu 18.04 host is sufficient to build the O2 project.
# clone code from gerrit repo
$ git clone "https://gerrit.o-ran-sc.org/r/pti/o2" && (cd "o2" && mkdir -p .git/hooks && curl -Lo `git rev-parse --git-dir`/hooks/commit-msg https://gerrit.o-ran-sc.org/r/tools/hooks/commit-msg; chmod +x `git rev-parse --git-dir`/hooks/commit-msg)
# run unit tests
$ sudo apt-get install tox
$ tox -e flake8
$ tox -e code
2. Local test with docker-compose
To test with docker-compose, the Docker engine must be installed as well.
$ docker-compose build
$ docker-compose up -d
$ docker-compose run --rm --no-deps --entrypoint=pytest api /tests/unit /tests/integration
3. Test with INF platform
To test with the INF platform, install the INF platform first; by default you will be able to use the 'admin' user.
$ source ./admin_openrc.sh
$ export | grep OS_AUTH_URL
$ export | grep OS_USERNAME
$ export | grep OS_PASSWORD
$ docker-compose run --rm --no-deps --entrypoint=pytest api /tests/integration-ocloud --log-level=DEBUG --log-file=/tests/debug.log
4. Tear down docker containers
$ docker-compose down --remove-orphans
Release-notes
This document provides the release notes for version 2.0.2 of the INF O2 service.
Version History
Date       | Ver.  | Author                                       | Comment
-----------|-------|----------------------------------------------|----------
2023-06-15 | 2.0.2 | Jon Zhang, Jackie Huang                      | H Release
2022-12-15 | 2.0.1 | Bin Yang, Jon Zhang, Jackie Huang, David Liu | G Release
2022-06-15 | 1.0.1 | Bin Yang, Jon Zhang                          | F Release
2021-12-15 | 1.0.0 | Bin Yang, Jon Zhang                          | E Release
Version 2.0.2, 2023-06-15
Upgrade Inventory API
Support the capabilities attribute in the DMS query, to support the PlugFest SMO integration
Update the Subscription and Notification part
Add the OAuth2 configuration for O2 service queries to the SMO
Registration and notification to the SMO support OAuth2 verification
Rewrite the subscription filter part
Specification compliance
Compliance to “O-RAN.WG6.O2IMS-INTERFACE-R003-v04.00”
Adding InfrastructureInventoryObject abstract class
Other updates
Bugs fixed
Version 2.0.1, 2022-12-15
Upgrade Inventory API, and add Monitoring API
Support HTTPS/TLS for API endpoint
Support authentication with token of API
Add “api_version” query in base API
Add “O2IMS_InfrastructureMonitoring” API part
Support Attribute-based selectors, and API query filter parameters following the specification
Updating error handling of all the API queries
Update the Subscription and Notification part
Notify the SMO and register the O-Cloud when the application starts with the SMO configuration
Support subscription to inventory change or alarm notifications with the filter parameter
Specification compliance
Compliance to “O-RAN.WG6.O2IMS-INTERFACEv03.0”
Update the modeling, including ResourcePool, ResourceInfo, DeploymentManager, ResourceType, Notification, O-Cloud, AlarmEventRecord, AlarmDictionary, and AlarmDefinition
Adding Accelerators as a resource; adding virtual resource type
Other updates
Add configuration file loading at application start
Fix bugs
Replace POC O2DMS APIs with Kubernetes Native API Profile for Containerized NFs
Version 1.0.1, 2022-06-15
Add Distributed Cloud (DC) support
Enable multiple ResourcePool support in DC mode
Enable multiple DeploymentManager support in DC mode
Add O2 DMS profiles
Support native_k8sapi profile that can get native Kubernetes API information
Support the SOL018 specification, which includes the native Kubernetes API profile and the Helm CLI profile: "sol018" and "sol018_helmcli"
Version 1.0.0, 2021-12-15
Initial version (E release)
Add O2 IMS for INF platform
Enable INF platform registration to SMO
Enable O2 infrastructure inventory service API
Enable O2 Subscription service API
Enable O2 Notification service to notify SMO about the resource changes
Add O2 DMS for INF platform
A PoC which enables lifecycle management of NfDeployment, representing a CNF described by a Helm chart
Add Lifecycle Management API for NfDeploymentDescriptor, which represents a Helm chart for NfDeployment
Add Lifecycle Management API for NfDeployment
Installation Guide
Abstract
This document describes how to install INF O2 service over the O-RAN INF platform.
The audience of this document is assumed to have basic knowledge of the Kubernetes CLI (kubectl) and the Helm CLI.
Preface
In the context of hosting a RAN Application on INF, the O-RAN O2 Application provides and exposes the IMS and DMS service APIs of the O2 interface between the O-Cloud (INF) and the Service Management & Orchestration (SMO), in the O-RAN Architecture.
The O2 interfaces enable the management of the O-Cloud (INF) infrastructure and the deployment life-cycle management of O-RAN cloudified NFs that run on O-Cloud (INF). See O-RAN O2 General Aspects and Principles 2.0, and INF O2 documentation.
The O-RAN O2 application is integrated into INF as a system application. The O-RAN O2 application package is saved in INF during system installation, but it is not applied by default.
System administrators can follow the procedures below to install and uninstall the O-RAN O2 application.
INF O2 Service Install
1. Prerequisites
Configure the internal Ceph storage for the O2 application persistent storage, see INF Storage Configuration and Management: Configure the Internal Ceph Storage Backend.
Enable PVC support in the oran-o2 namespace, see INF Storage Configuration and Management: Enable ReadWriteOnce PVC Support in Additional Namespaces.
2. Procedure
You can install O-RAN O2 application on INF from the command line.
Locate the O2 application tarball in /usr/local/share/applications/helm. For example:
/usr/local/share/applications/helm/oran-o2-<version>.tgz
Download admin_openrc.sh from the INF admin dashboard: click the "Download OpenStack RC File" / "OpenStack RC File" button.
Copy the file to the controller host.
Source the platform environment.
$ source ./admin_openrc.sh
~(keystone_admin)]$
Upload the application.
~(keystone_admin)]$ system application-upload /usr/local/share/applications/helm/oran-o2-<version>.tgz
Prepare the override yaml file.
Create a service account for the SMO application.
Create a ServiceAccount which can be used to provide SMO application with minimal access permission credentials.
export SMO_SERVICEACCOUNT=smo1

cat <<EOF > smo-serviceaccount.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ${SMO_SERVICEACCOUNT}
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
subjects:
- kind: ServiceAccount
  name: ${SMO_SERVICEACCOUNT}
  namespace: default
EOF

kubectl apply -f smo-serviceaccount.yaml
Create a secret for service account and obtain an access token.
Create a secret with the type service-account-token and pass the ServiceAccount in the annotation section as shown below:
export SMO_SECRET=smo1-secret

cat <<EOF > smo-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ${SMO_SECRET}
  annotations:
    kubernetes.io/service-account.name: ${SMO_SERVICEACCOUNT}
type: kubernetes.io/service-account-token
EOF

kubectl apply -f smo-secret.yaml

export SMO_TOKEN_DATA=$(kubectl get secrets $SMO_SECRET -o jsonpath='{.data.token}' | base64 -d -w 0)
Create certificates for the O2 service.
Obtain an intermediate or Root CA-signed certificate and key from a trusted intermediate or Root Certificate Authority (CA). Refer to the documentation for the external Root CA that you are using on how to create a public certificate and private key pairs signed by an intermediate or Root CA for HTTPS.
For lab purposes, see INF Security: Create Certificates Locally using openssl to create an Intermediate or test Root CA certificate and key, and use it to locally sign test certificates.
The resulting files, from either an external CA or locally generated for the lab with openssl, should be:
Local CA certificate - my-root-ca-cert.pem
Server certificate - my-server-cert.pem
Server key - my-server-key.pem
Note: If using a server certificate signed by a local CA (i.e. the lab scenario above), this local CA certificate (e.g. my-root-ca-cert.pem) must be shared with the SMO application for O2 server certificate verification.
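For a quick lab setup, the three files above can be generated locally with openssl. The sketch below is illustrative only: the subject CNs and the 365-day validity are placeholder assumptions, not values from the INF Security guide.

```shell
# Lab-only sketch: create a local test Root CA, then sign a server certificate with it.
# File names match the guide; subject CNs and lifetimes are illustrative assumptions.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout my-root-ca-key.pem -out my-root-ca-cert.pem \
  -subj "/CN=Lab Test Root CA"
openssl req -newkey rsa:2048 -nodes \
  -keyout my-server-key.pem -out my-server.csr \
  -subj "/CN=o2service.lab.example"
openssl x509 -req -in my-server.csr -days 365 \
  -CA my-root-ca-cert.pem -CAkey my-root-ca-key.pem -CAcreateserial \
  -out my-server-cert.pem
```

You can verify the chain with `openssl verify -CAfile my-root-ca-cert.pem my-server-cert.pem` before using the files in the override yaml.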
Prepare the O2 service application configuration file.
As per the Cloudification and Orchestration use case defined in O-RAN Working Group 6, the following information should be generated by SMO:
O-Cloud Global ID - OCLOUD_GLOBAL_ID
SMO Register URL - SMO_REGISTER_URL
See O-RAN Cloudification and Orchestration Use Cases and Requirements for O-RAN Virtualized RAN.
API_HOST_EXTERNAL_FLOATING=$(echo ${OS_AUTH_URL} | awk -F / '{print $3}' | cut -d: -f1)

cat <<EOF > app.conf
[DEFAULT]
ocloud_global_id = ${OCLOUD_GLOBAL_ID}
smo_register_url = ${SMO_REGISTER_URL}
smo_token_data = ${SMO_TOKEN_DATA}

[OCLOUD]
OS_AUTH_URL = ${OS_AUTH_URL}
OS_USERNAME = ${OS_USERNAME}
OS_PASSWORD = ${OS_PASSWORD}
API_HOST_EXTERNAL_FLOATING = ${API_HOST_EXTERNAL_FLOATING}

[API]

[WATCHER]

[PUBSUB]
EOF
Retrieve the CA certificate from your SMO vendor.
If the SMO application provides service via HTTPS, and the server certificate is self-signed, the CA certificate should be retrieved from the SMO.
This procedure assumes that the name of the certificate is smo-ca.pem.
Populate the override yaml file.
Refer to the previous step for the required override values.
APPLICATION_CONFIG=$(base64 app.conf -w 0)
SERVER_CERT=$(base64 my-server-cert.pem -w 0)
SERVER_KEY=$(base64 my-server-key.pem -w 0)
SMO_CA_CERT=$(base64 smo-ca.pem -w 0)

cat <<EOF > o2service-override.yaml
applicationconfig: ${APPLICATION_CONFIG}
servercrt: ${SERVER_CERT}
serverkey: ${SERVER_KEY}
smocacrt: ${SMO_CA_CERT}
EOF
To deploy other versions of an image as a quick solution, to get early access to features (e.g. oranscinf/pti-o2imsdms:2.0.0), or to authenticate images hosted by a private registry, follow the steps below:
Create a docker-registry secret in the oran-o2 namespace.

export O2SERVICE_IMAGE_REG=<docker-server-endpoint>
kubectl create secret docker-registry private-registry-key \
  --docker-server=${O2SERVICE_IMAGE_REG} --docker-username=${USERNAME} \
  --docker-password=${PASSWORD} -n oran-o2
Refer to the imagePullSecrets in the override file.

cat <<EOF > o2service-override.yaml
imagePullSecrets:
  - private-registry-key
o2ims:
  serviceaccountname: admin-oran-o2
  images:
    tags:
      o2service: ${O2SERVICE_IMAGE_REG}/docker.io/oranscinf/pti-o2imsdms:2.0.0
      postgres: ${O2SERVICE_IMAGE_REG}/docker.io/library/postgres:9.6
      redis: ${O2SERVICE_IMAGE_REG}/docker.io/library/redis:alpine
    pullPolicy: IfNotPresent
  logginglevel: "DEBUG"
applicationconfig: ${APPLICATION_CONFIG}
servercrt: ${SERVER_CERT}
serverkey: ${SERVER_KEY}
smocacrt: ${SMO_CA_CERT}
EOF
Update the overrides for the oran-o2 application.
~(keystone_admin)]$ system helm-override-update oran-o2 oran-o2 oran-o2 --values o2service-override.yaml

# Check the overrides
~(keystone_admin)]$ system helm-override-show oran-o2 oran-o2 oran-o2
Run the system application-apply command to apply the updates.
~(keystone_admin)]$ system application-apply oran-o2
Monitor the status using the command below.
~(keystone_admin)]$ watch -n 5 system application-list
OR
~(keystone_admin)]$ watch kubectl get all -n oran-o2
3. Results
The O2 services are now running in the oran-o2 namespace.
4. Postrequisites
You will need to integrate INF with an SMO application that performs management of O-Cloud infrastructure and the deployment life cycle management of O-RAN cloudified NFs. See the following API reference for details:
INF O2 Service Uninstall
1. Procedure
You can uninstall the O-RAN O2 application on INF from the command line.
Uninstall the application.
Remove O2 application related resources.
~(keystone_admin)]$ system application-remove oran-o2
Delete the application.
Remove the uninstalled O2 application's definition, including the manifest, Helm charts, and Helm chart overrides, from the system.
~(keystone_admin)]$ system application-delete oran-o2
2. Results
You have uninstalled the O2 application from the system.
INF O2 Service User Guide
This guide introduces the process that makes the INF O2 interface work with the SMO.
Assume you have an O2 service in an INF platform environment, and that you have a token for the O2 service.
export OAM_IP=<INF_OAM_IP>
export SMO_TOKEN_DATA=<TOKEN of O2 Service>
Discover INF platform inventory
INF platform auto-discovery
After you install the INF O2 service, it automatically discovers the INF platform through the parameters that you provide in "o2service-override.yaml".
The command below gets the INF platform information as the O-Cloud:
curl -k -X 'GET' \
  "https://${OAM_IP}:30205/o2ims-infrastructureInventory/v1/" \
  -H 'accept: application/json' \
  -H "Authorization: Bearer ${SMO_TOKEN_DATA}"
Resource pool
If the INF platform is a standalone environment, it has one resource pool. If the INF platform is a distributed cloud environment, the central cloud is one resource pool, and each subcloud is another resource pool. All the resources that belong to a cloud are organized into its resource pool.
Get the resource pool information through this interface
curl -k -X 'GET' \
  "https://${OAM_IP}:30205/o2ims-infrastructureInventory/v1/resourcePools" \
  -H 'accept: application/json' \
  -H "Authorization: Bearer ${SMO_TOKEN_DATA}"

# export the first resource pool id
export resourcePoolId=`curl -k -X 'GET' "https://${OAM_IP}:30205/o2ims-infrastructureInventory/v1/resourcePools" -H 'accept: application/json' -H "Authorization: Bearer $SMO_TOKEN_DATA" 2>/dev/null | jq .[0].resourcePoolId | xargs echo`

echo ${resourcePoolId} # check the exported resource pool id
Resource type
A resource type defines what kind of resource a specified resource is, such as a physical machine, memory, or CPU.
Show all resource types:
curl -k -X 'GET' \
  "https://${OAM_IP}:30205/o2ims-infrastructureInventory/v1/resourceTypes" \
  -H 'accept: application/json' \
  -H "Authorization: Bearer ${SMO_TOKEN_DATA}"
Resource
Get the list of all resources; the value of resourcePoolId comes from the result of the resource pool interface:
curl -k -X 'GET' \
  "https://${OAM_IP}:30205/o2ims-infrastructureInventory/v1/resourcePools/${resourcePoolId}/resources" \
  -H 'accept: application/json' \
  -H "Authorization: Bearer ${SMO_TOKEN_DATA}"
To get the details of one resource, export the ID of the specific resource you want to check:
# export the first resource id in the resource pool
export resourceId=`curl -k -X 'GET' "https://${OAM_IP}:30205/o2ims-infrastructureInventory/v1/resourcePools/${resourcePoolId}/resources" -H 'accept: application/json' -H "Authorization: Bearer ${SMO_TOKEN_DATA}" 2>/dev/null | jq .[0].resourceId | xargs echo`

echo ${resourceId} # check the exported resource id

# Get the detail of one specific resource
curl -k -X 'GET' \
  "https://${OAM_IP}:30205/o2ims-infrastructureInventory/v1/resourcePools/${resourcePoolId}/resources/${resourceId}" \
  -H 'accept: application/json' \
  -H "Authorization: Bearer ${SMO_TOKEN_DATA}"
Deployment manager services endpoint
You can use the API below to check the Deployment Manager Services (DMS) related to this IMS:
curl -k -X 'GET' \
  "https://${OAM_IP}:30205/o2ims-infrastructureInventory/v1/deploymentManagers" \
  -H 'accept: application/json' \
  -H "Authorization: Bearer ${SMO_TOKEN_DATA}"
Provisioning INF platform with SMO endpoint configuration
Assume you have an SMO, and that you prepared the INF platform configuration with the SMO endpoint address before installing the O2 service. With this provisioning, the INF O2 service sends a request to the SMO during installation, which lets the SMO know that the O2 service is working.
After you install the INF O2 service, it automatically registers with the SMO through the parameters that you provide in "o2app.conf".
export OCLOUD_GLOBAL_ID=<Ocloud global UUID defined by SMO>
export SMO_REGISTER_URL=<SMO Register URL for O2 service>

cat <<EOF > o2app.conf
[DEFAULT]
ocloud_global_id = ${OCLOUD_GLOBAL_ID}
smo_register_url = ${SMO_REGISTER_URL}
...
Subscribe to the INF platform resource change notification
Assume you have an SMO, and the SMO has an API that can receive callback requests.
Create a subscription to the INF O2 IMS
export SMO_SUBSCRIBE_CALLBACK=<The Callback URL for SMO Subscribe resource>
export SMO_CONSUMER_SUBSCRIPTION_ID=<The Subscription ID of the SMO Consumer>

curl -k -X 'POST' \
  "https://${OAM_IP}:30205/o2ims-infrastructureInventory/v1/subscriptions" \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer ${SMO_TOKEN_DATA}" \
  -d '{
    "callback": "'${SMO_SUBSCRIBE_CALLBACK}'",
    "consumerSubscriptionId": "'${SMO_CONSUMER_SUBSCRIPTION_ID}'",
    "filter": ""
  }'
Handle resource change notification
When the SMO callback API is notified that a resource of the INF platform has changed, it can use the URL in the notification to get the latest resource information and update its database.
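As an illustration of this flow, the sketch below parses a hypothetical notification body (the payload and its field values are made up for this example; objectRef is assumed to carry the inventory API path of the changed object) and shows how the reference would be used to re-query the inventory:

```shell
# Hypothetical notification body as received on the SMO callback endpoint.
NOTIFICATION='{"consumerSubscriptionId":"sub-01","objectRef":"/o2ims-infrastructureInventory/v1/resourcePools/pool-1/resources/res-1"}'

# Extract the reference of the changed object with jq
OBJECT_REF=$(echo "${NOTIFICATION}" | jq -r '.objectRef')
echo "${OBJECT_REF}"

# Then fetch the latest state of that object to refresh the SMO database
# (requires OAM_IP and SMO_TOKEN_DATA from the earlier steps):
# curl -k "https://${OAM_IP}:30205${OBJECT_REF}" \
#   -H 'accept: application/json' -H "Authorization: Bearer ${SMO_TOKEN_DATA}"
```

The same pattern applies to any subscription callback: extract the object reference, then query the inventory API for the current state rather than trusting cached data.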
Subscribe to the INF platform alarm change notification
Assume you have an SMO, and the SMO has an API that can receive callback requests.
Create an alarm subscription to the INF O2 IMS
export SMO_SUBSCRIBE_CALLBACK=<The Callback URL for SMO Subscribe alarm>
export SMO_CONSUMER_SUBSCRIPTION_ID=<The Subscription ID of the SMO Consumer>

curl -k -X 'POST' \
  "https://${OAM_IP}:30205/o2ims-infrastructureMonitoring/v1/alarmSubscriptions" \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer ${SMO_TOKEN_DATA}" \
  -d '{
    "callback": "'${SMO_SUBSCRIBE_CALLBACK}'",
    "consumerSubscriptionId": "'${SMO_CONSUMER_SUBSCRIPTION_ID}'",
    "filter": ""
  }'
Handle alarm change notification
When the SMO callback API receives an alarm from the INF platform, it can use the URL in the notification to get the latest alarm event record for more details.
Use Kubernetes Control Client through O2 DMS profile
Assume you have the kubectl command tool on your local Linux environment.
Also install the 'jq' command for your Linux bash terminal. If you are using Ubuntu, you can use the commands below to install it.
# install the 'jq' command
sudo apt-get install -y jq

# install 'kubectl' command
sudo apt-get install -y apt-transport-https
echo "deb http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial main" | \
  sudo tee -a /etc/apt/sources.list.d/kubernetes.list
gpg --keyserver keyserver.ubuntu.com --recv-keys 836F4BEB
gpg --export --armor 836F4BEB | sudo apt-key add -
sudo apt-get update
sudo apt-get install -y kubectl
We need to get the Kubernetes profile to set up the kubectl command tool.
Get the DMS ID from the INF O2 service, and set it in the bash environment.
# Get all DMS IDs, and print them
dmsIDs=$(curl -k -s -X 'GET' \
  "https://${OAM_IP}:30205/o2ims-infrastructureInventory/v1/deploymentManagers" \
  -H 'accept: application/json' -H "Authorization: Bearer ${SMO_TOKEN_DATA}" \
  | jq --raw-output '.[]["deploymentManagerId"]')
for i in $dmsIDs; do echo ${i}; done

# Choose one DMS and set it in the bash environment; here the first one is used
export dmsID=$(curl -k -s -X 'GET' \
  "https://${OAM_IP}:30205/o2ims-infrastructureInventory/v1/deploymentManagers" \
  -H 'accept: application/json' -H "Authorization: Bearer ${SMO_TOKEN_DATA}" \
  | jq --raw-output '.[0]["deploymentManagerId"]')

echo ${dmsID} # check the exported DMS Id
The kubectl profile needs the cluster name; here it is assumed to be "o2dmsk8s1". It also needs the server endpoint address, username, and credentials; for an environment with Certificate Authority validation, the CA data must also be set up.
CLUSTER_NAME="o2dmsk8s1" # set the cluster name

K8S_SERVER=$(curl -k -s -X 'GET' \
  "https://${OAM_IP}:30205/o2ims-infrastructureInventory/v1/deploymentManagers/${dmsID}?profile=native_k8sapi" \
  -H 'accept: application/json' -H "Authorization: Bearer ${SMO_TOKEN_DATA}" \
  | jq --raw-output '.["extensions"]["profileData"]["cluster_api_endpoint"]')
K8S_CA_DATA=$(curl -k -s -X 'GET' \
  "https://${OAM_IP}:30205/o2ims-infrastructureInventory/v1/deploymentManagers/${dmsID}?profile=native_k8sapi" \
  -H 'accept: application/json' -H "Authorization: Bearer ${SMO_TOKEN_DATA}" \
  | jq --raw-output '.["extensions"]["profileData"]["cluster_ca_cert"]')
K8S_USER_NAME=$(curl -k -s -X 'GET' \
  "https://${OAM_IP}:30205/o2ims-infrastructureInventory/v1/deploymentManagers/${dmsID}?profile=native_k8sapi" \
  -H 'accept: application/json' -H "Authorization: Bearer ${SMO_TOKEN_DATA}" \
  | jq --raw-output '.["extensions"]["profileData"]["admin_user"]')
K8S_USER_CLIENT_CERT_DATA=$(curl -k -s -X 'GET' \
  "https://${OAM_IP}:30205/o2ims-infrastructureInventory/v1/deploymentManagers/${dmsID}?profile=native_k8sapi" \
  -H 'accept: application/json' -H "Authorization: Bearer ${SMO_TOKEN_DATA}" \
  | jq --raw-output '.["extensions"]["profileData"]["admin_client_cert"]')
K8S_USER_CLIENT_KEY_DATA=$(curl -k -s -X 'GET' \
  "https://${OAM_IP}:30205/o2ims-infrastructureInventory/v1/deploymentManagers/${dmsID}?profile=native_k8sapi" \
  -H 'accept: application/json' -H "Authorization: Bearer ${SMO_TOKEN_DATA}" \
  | jq --raw-output '.["extensions"]["profileData"]["admin_client_key"]')

# If you do not want to set up the CA data, you can execute the following command
# instead to skip the certificate check:
# kubectl config set-cluster ${CLUSTER_NAME} --server=${K8S_SERVER} --insecure-skip-tls-verify
kubectl config set-cluster ${CLUSTER_NAME} --server=${K8S_SERVER}
kubectl config set clusters.${CLUSTER_NAME}.certificate-authority-data ${K8S_CA_DATA}

kubectl config set-credentials ${K8S_USER_NAME}
kubectl config set users.${K8S_USER_NAME}.client-certificate-data ${K8S_USER_CLIENT_CERT_DATA}
kubectl config set users.${K8S_USER_NAME}.client-key-data ${K8S_USER_CLIENT_KEY_DATA}

# set the context and use it
kubectl config set-context ${K8S_USER_NAME}@${CLUSTER_NAME} --cluster=${CLUSTER_NAME} --user ${K8S_USER_NAME}
kubectl config use-context ${K8S_USER_NAME}@${CLUSTER_NAME}

kubectl get ns # check that the command works with this context
Now you can use kubectl, which means you have successfully set up the Kubernetes client. However, it uses the default admin user, so it is recommended that you create an account for yourself.
Create a new user and service account for Kubernetes with the "cluster-admin" role, and set this user's token in the bash environment.
USER="admin-user"
NAMESPACE="kube-system"

cat <<EOF > admin-login.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ${USER}
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ${USER}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: ${USER}
  namespace: kube-system
EOF

kubectl apply -f admin-login.yaml
TOKEN_DATA=$(kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep ${USER} | awk '{print $1}') | grep "token:" | awk '{print $2}')
echo $TOKEN_DATA
Replace the original user with the new user in kubectl, and set the default namespace in the context.
NAMESPACE=default
TOKEN_DATA=<TOKEN_DATA from INF>
USER="admin-user"
CLUSTER_NAME="o2dmsk8s1"

kubectl config set-credentials ${USER} --token=$TOKEN_DATA
kubectl config set-context ${USER}@inf-cluster --cluster=${CLUSTER_NAME} --user ${USER} --namespace=${NAMESPACE}
kubectl config use-context ${USER}@inf-cluster
O-RAN O2 API Definition v1
This document defines how an SMO-like application can perform management of O-Cloud infrastructures and deployment lifecycle management of O-RAN cloudified NFs that run on the O-Cloud, via the O-RAN O2 interfaces.
The typical port used for the O-RAN O2 REST API is 30205.
Here we describe the API to access the O2 API.
O2 API v1
The O2 API v1 includes the O2ims_InfrastructureInventory and O2ims_InfrastructureMonitoring interfaces, and the Kubernetes-native-API-based O2dms interface.
See O-RAN O2 API v1 for full details of the API.
The API is also described in Swagger-JSON and YAML:
API name: O-RAN O2 API (Swagger-JSON and YAML)
INF O2 Services API
This page is deprecated; please go to the O-RAN O2 API Definition v1.