- O-RAN O-Cloud Manager
- Operator Deployment
- mTLS pre-requisites
- Registering the O-Cloud Manager with the SMO
- Testing API endpoints on a cluster
- Using the development debug mode to attach the DLV debugger
- Request Examples
This project is an implementation of the O-RAN O2 IMS API on top of OpenShift and ACM.
Note that this project is still experimental and in its early stages, so do not use it for anything close to a production environment.
Note: this README is only for development purposes.
The O-RAN O2 IMS implementation in OpenShift is managed by the IMS operator. It configures the different components defined in the specification: the deployment manager service, the resource server, the alarm server, and the resource and alert subscriptions.
The IMS operator creates an O-Cloud API that can be queried, for instance by an SMO. It also provides a configuration mechanism through a Kubernetes custom resource definition (CRD) that allows the hub cluster administrator to configure the different IMS microservices properly.
The IMS operator is installed on an OpenShift cluster where Red Hat Advanced Cluster Management for Kubernetes (RHACM) is also installed, a.k.a. the hub cluster. Let's install the operator. You can use the latest automatic build from the openshift-kni namespace in quay.io, or build a container image with the latest code yourself.
If you want to build the image yourself and push it to your registry right after:
⚠️ Replace the USERNAME and IMAGE_NAME values with the full name of your container image.
$ export USERNAME=your_user
$ export IMAGE_NAME=quay.io/${USERNAME}/oran-o2ims:latest
$ git clone https://github.com/openshift-kni/oran-o2ims.git
$ cd oran-o2ims
$ make docker-build docker-push CONTAINER_TOOL=podman IMG=${IMAGE_NAME}
..REDACTED..
Update dependencies
hack/update_deps.sh
hack/install_test_deps.sh
Downloading golangci-lint
…
[3/3] STEP 5/5: ENTRYPOINT ["/usr/bin/oran-o2ims"]
[3/3] COMMIT quay.io/${USERNAME}/oran-o2ims:v4.16
--> eaa55268bfff
Successfully tagged quay.io/${USERNAME}/oran-o2ims:latest
eaa55268bfffeb23644c545b3d0a768326821e0afea8b146c51835b3f90a9d0c
Now, let's deploy the operator. If you want to deploy your already-built image, add the IMG=${IMAGE_NAME} argument to the make command:
$ make deploy install
… REDACTED …
Update dependencies
hack/update_deps.sh
hack/install_test_deps.sh
Downloading golangci-lint
… REDACTED …
$PATH/oran-o2ims/bin/kustomize build config/default | $PATH/oran-o2ims/bin/kubectl apply -f -
namespace/oran-o2ims created
serviceaccount/oran-o2ims-controller-manager created
role.rbac.authorization.k8s.io/oran-o2ims-leader-election-role created
clusterrole.rbac.authorization.k8s.io/oran-o2ims-manager-role created
clusterrole.rbac.authorization.k8s.io/oran-o2ims-metrics-reader created
clusterrole.rbac.authorization.k8s.io/oran-o2ims-proxy-role created
rolebinding.rbac.authorization.k8s.io/oran-o2ims-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/oran-o2ims-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/oran-o2ims-proxy-rolebinding created
configmap/oran-o2ims-env-config created
service/oran-o2ims-controller-manager-metrics-service created
deployment.apps/oran-o2ims-controller-manager created
The operator and the components enabled by default are installed in the oran-o2ims namespace, which is created during the install.
$ oc get pods -n oran-o2ims
NAME READY STATUS RESTARTS AGE
alarms-server-5d5cfb75bf-rbp6g 2/2 Running 0 21s
artifacts-server-c48f6bd99-xnk2n 2/2 Running 0 21s
cluster-server-68f8946f74-l82bn 2/2 Running 0 21s
oran-o2ims-controller-manager-555755dbd7-sprs9 2/2 Running 0 26s
postgres-server-674458bfbd-mnzt5 1/1 Running 0 23s
provisioning-server-86bd6bf6f-kl829 2/2 Running 0 20s
resource-server-6dbd5788df-vpq44 2/2 Running 0 22s
Several routes were created in the same namespace too. The HOST column shows the URI where the O2IMS API listens from outside the OpenShift cluster, for instance where the SMO will connect.
$ oc get route -n oran-o2ims
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
oran-o2ims-ingress-8v8lp o2ims.apps.hubcluster2.hub.dev.vz.bos2.lab /o2ims-infrastructureArtifacts artifacts-server api reencrypt/Redirect None
oran-o2ims-ingress-92sf5 o2ims.apps.hubcluster2.hub.dev.vz.bos2.lab /o2ims-infrastructureCluster cluster-server api reencrypt/Redirect None
oran-o2ims-ingress-gfm9r o2ims.apps.hubcluster2.hub.dev.vz.bos2.lab /o2ims-infrastructureProvisioning provisioning-server api reencrypt/Redirect None
oran-o2ims-ingress-n6p9w o2ims.apps.hubcluster2.hub.dev.vz.bos2.lab /o2ims-infrastructureInventory resource-server api reencrypt/Redirect None
oran-o2ims-ingress-n9d7w o2ims.apps.hubcluster2.hub.dev.vz.bos2.lab /o2ims-infrastructureMonitoring alarms-server api reencrypt/Redirect None
By default, the operator creates a default Inventory CR in the oran-o2ims namespace:
$ oc get inventory -n oran-o2ims
NAME AGE
default 4m20s
⚠️ Currently, the following components are enabled by default.
$ oc get inventory -n oran-o2ims -oyaml
apiVersion: v1
items:
- apiVersion: o2ims.oran.openshift.io/v1alpha1
  kind: Inventory
  metadata:
    creationTimestamp: "2025-01-22T16:45:32Z"
    generation: 1
    name: default
    namespace: oran-o2ims
    resourceVersion: "116847464"
    uid: e296aede-6309-478b-be10-6fd8f7904324
  spec:
    alarmServerConfig:
      enabled: true
    artifactsServerConfig:
      enabled: true
    clusterServerConfig:
      enabled: true
    resourceServerConfig:
      enabled: true
To deploy from catalog, first build the operator, bundle, and catalog images, pushing to your repo:
make IMAGE_TAG_BASE=quay.io/${MY_REPO}/oran-o2ims docker-build docker-push bundle-build bundle-push catalog-build catalog-push
You can then use the catalog-deploy target to generate the catalog and subscription resources and deploy the operator:
$ make IMAGE_TAG_BASE=quay.io/${MY_REPO}/oran-o2ims catalog-deploy
hack/generate-catalog-deploy.sh \
--package oran-o2ims \
--namespace oran-o2ims \
--catalog-image quay.io/${MY_REPO}/oran-o2ims-catalog:v4.18.0 \
--channel alpha \
--install-mode AllNamespaces \
| oc create -f -
catalogsource.operators.coreos.com/oran-o2ims created
namespace/oran-o2ims created
operatorgroup.operators.coreos.com/oran-o2ims created
subscription.operators.coreos.com/oran-o2ims created
To undeploy and clean up the installed resources, use the catalog-undeploy target:
$ make IMAGE_TAG_BASE=quay.io/${MY_REPO}/oran-o2ims VERSION=4.18.0 catalog-undeploy
hack/catalog-undeploy.sh --package oran-o2ims --namespace oran-o2ims --crd-search "o2ims.*oran"
subscription.operators.coreos.com "oran-o2ims" deleted
clusterserviceversion.operators.coreos.com "oran-o2ims.v4.18.0" deleted
customresourcedefinition.apiextensions.k8s.io "clustertemplates.o2ims.provisioning.oran.org" deleted
customresourcedefinition.apiextensions.k8s.io "hardwaretemplates.o2ims-hardwaremanagement.oran.openshift.io" deleted
customresourcedefinition.apiextensions.k8s.io "inventories.o2ims.oran.openshift.io" deleted
customresourcedefinition.apiextensions.k8s.io "nodepools.o2ims-hardwaremanagement.oran.openshift.io" deleted
customresourcedefinition.apiextensions.k8s.io "nodes.o2ims-hardwaremanagement.oran.openshift.io" deleted
customresourcedefinition.apiextensions.k8s.io "provisioningrequests.o2ims.provisioning.oran.org" deleted
namespace "oran-o2ims" deleted
clusterrole.rbac.authorization.k8s.io "oran-o2ims-alarms-server" deleted
clusterrole.rbac.authorization.k8s.io "oran-o2ims-alertmanager" deleted
clusterrole.rbac.authorization.k8s.io "oran-o2ims-cluster-server" deleted
clusterrole.rbac.authorization.k8s.io "oran-o2ims-deployment-manager-server" deleted
clusterrole.rbac.authorization.k8s.io "oran-o2ims-kube-rbac-proxy" deleted
clusterrole.rbac.authorization.k8s.io "oran-o2ims-metrics-reader" deleted
clusterrole.rbac.authorization.k8s.io "oran-o2ims-resource-server" deleted
clusterrolebinding.rbac.authorization.k8s.io "oran-o2ims-alarms-server" deleted
clusterrolebinding.rbac.authorization.k8s.io "oran-o2ims-alarms-server-kube-rbac-proxy" deleted
clusterrolebinding.rbac.authorization.k8s.io "oran-o2ims-alertmanager" deleted
clusterrolebinding.rbac.authorization.k8s.io "oran-o2ims-cluster-server" deleted
clusterrolebinding.rbac.authorization.k8s.io "oran-o2ims-cluster-server-kube-rbac-proxy" deleted
clusterrolebinding.rbac.authorization.k8s.io "oran-o2ims-deployment-manager-server" deleted
clusterrolebinding.rbac.authorization.k8s.io "oran-o2ims-deployment-manager-server-kube-rbac-proxy" deleted
clusterrolebinding.rbac.authorization.k8s.io "oran-o2ims-metadata-server-kube-rbac-proxy" deleted
clusterrolebinding.rbac.authorization.k8s.io "oran-o2ims-resource-server" deleted
clusterrolebinding.rbac.authorization.k8s.io "oran-o2ims-resource-server-kube-rbac-proxy" deleted
catalogsource.operators.coreos.com "oran-o2ims" deleted
In a production environment, it is recommended to secure communications between the O-Cloud and the SMO with mTLS in both directions. The O-Cloud Manager uses an ingress controller to terminate incoming TLS sessions. Before configuring the O-Cloud Manager, the ingress controller must be reconfigured to enforce mTLS. If the ingress controller is shared with other applications, this will also impact them by requiring client certificates on all incoming connections. If that is not desirable, consider creating a secondary ingress controller to manage a separate DNS domain and enabling mTLS only on that controller.
To configure the primary controller to enable mTLS, the following attributes must be set in the clientTLS section of the spec. For more information, please refer to the documentation.
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  clientTLS:
    allowedSubjectPatterns:
    - ^/C=CA/ST=Ontario/L=Ottawa/O=Red\ Hat/OU=ORAN/
    clientCA:
      name: ingress-client-ca-certs
    clientCertificatePolicy: Required
  ...
...
- allowedSubjectPatterns is optional but can be set to limit access to specific certificate subjects.
- clientCA is mandatory and must be set to a ConfigMap containing a list of CA certificates used to validate incoming client certificates.
- clientCertificatePolicy must be set to Required to enforce that clients provide a valid certificate.
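As a rough illustration of how allowedSubjectPatterns behaves: the regular expression is matched against the client certificate's distinguished name in OpenSSL's one-line format. The subject string and CN below are invented for the example (the backslash in the YAML above merely escapes the space in "Red Hat"):

```shell
# Hypothetical client certificate subject in OpenSSL one-line format
subject='/C=CA/ST=Ontario/L=Ottawa/O=Red Hat/OU=ORAN/CN=smo-client'
# Same pattern as in the IngressController spec above
pattern='^/C=CA/ST=Ontario/L=Ottawa/O=Red Hat/OU=ORAN/'
if printf '%s' "$subject" | grep -Eq "$pattern"; then
  echo "subject allowed"
else
  echo "subject rejected"
fi
```

A subject issued under a different organizational unit would fail the match and the connection would be rejected at the ingress.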
Once the hub cluster is set up and the O-Cloud Manager is started, the end user must update the Inventory CR to configure the SMO attributes so that the application can register with the SMO. The user must provide the Global O-Cloud ID value, which is supplied by the SMO.
In a production environment, this requires that an OAuth2 authorization server be available and configured with the appropriate client configurations for both the SMO and the O-Cloud Manager. In debug/test environments, OAuth2 can also be used if the appropriate server and configurations exist, but it can also be disabled to simplify the configuration requirements.
- Create a ConfigMap that contains the custom X.509 CA certificate bundle if either the SMO or OAuth2 server TLS certificates are signed by a non-public CA certificate. This step is optional; if not required, the 'caBundle' attribute can be omitted from the Inventory CR.
oc create configmap -n oran-o2ims o2ims-custom-ca-certs --from-file=ca-bundle.pem=/some/path/to/ca-bundle.pem
- Create a Secret that contains the OAuth client-id and client-secret for the O-Cloud Manager. These values should be obtained from the administrator of the OAuth server that set up the client credentials. The client secrets must not be stored locally once the Secret is created. The values used here are for example purposes only; your values may differ for the client-id and will definitely differ for the client-secret.
oc create secret generic -n oran-o2ims oauth-client-secrets --from-literal=client-id=o2ims-client --from-literal=client-secret=SFuwTyqfWK5vSwaCPSLuFzW57HyyQPHg
- Create a Secret that contains a TLS client certificate and key to be used to enable mTLS to the SMO and OAuth2 authorization servers. The Secret is expected to have the 'tls.crt' and 'tls.key' attributes. The 'tls.crt' attribute must contain the full certificate chain, with the device certificate first and the root certificate last. In a production environment, this certificate is expected to be renewed periodically and managed by cert-manager. In a development environment, if mTLS is not required, this step can be skipped and the corresponding attribute omitted from the Inventory CR.
oc create secret tls -n oran-o2ims o2ims-client-tls-certificate --cert /some/path/to/tls.crt --key /some/path/to/tls.key
- Update the Inventory CR to include the SMO and OAuth configuration attributes. These values will vary depending on the domain names used in your environment and on the type of OAuth2 server deployed. Check the configuration documentation for the actual server being used. The following block can be added to the spec section of the Inventory CR:

  smo:
    url: https://smo.example.com
    registrationEndpoint: /mock_smo/v1/ocloud_observer
    oauth:
      url: https://keycloak.example.com/realms/oran
      clientSecretName: oauth-client-secrets
      tokenEndpoint: /protocol/openid-connect/token
      scopes:
      - profile
      - openid
      - smo-audience
      - roles
      usernameClaim: preferred_username
      groupsClaim: roles
    tls:
      clientCertificateName: o2ims-client-tls-certificate
  caBundleName: o2ims-custom-ca-certs
- Once the Inventory CR is updated, the following condition will be updated to reflect the status of the SMO registration. If an error prevented registration from completing, the error will be noted here.

  oc describe inventories.o2ims.oran.openshift.io sample
  ...
  Status:
    Deployment Status:
      Conditions:
        Last Transition Time:  2024-10-04T15:39:46Z
        Message:               Registered with SMO at: https://smo.example.com
        Reason:                SmoRegistrationSuccessful
        Status:                True
        Type:                  SmoRegistrationCompleted
  ...
To ensure interoperability between the SMO, the O-Cloud Manager, and the Authorization Server, these are the requirements regarding the OAuth settings and JWT contents.
- It is expected that the administrator of the Authorization Server has created clients for both the SMO and IMS resource servers. For example purposes, this document assumes these to be "smo-client" and "o2ims-client".
- It is expected that the "smo-client" has been assigned "roles" which map to the Kubernetes RBAC roles defined here, according to the level of access required by the SMO. The "roles" attribute is expected to contain the list of roles assigned to the client, for example one or more of "o2ims-reader", "o2ims-subscriber", "o2ims-maintainer", "o2ims-provisioner", or "o2ims-admin".
- The "o2ims-client" is also expected to be assigned some form of authorization on the SMO. This depends largely on the SMO implementation and is somewhat transparent to the O-Cloud Manager. If specific scopes are required for the "o2ims-client" to access the SMO resource server, then the Inventory CR must be customized by setting the "scopes" attribute.
- It is expected that the JWT token will contain the mandatory attributes defined in RFC 9068 as well as the optional attributes related to "roles".
- The "aud" attribute is expected to contain the list of intended audiences and must, at a minimum, include the "o2ims-client" identifier.
- It is expected that all attributes are top-level attributes (i.e., not nested). For example, "realm_access.roles": ["a", "b", "c"] is valid, but "realm_access": {"roles": ["a", "b", "c"]} is not. In some cases, this may require special configuration steps on the Authorization Server to ensure the proper format in the JWT tokens.
The following is a sample JWT header and payload; the signature segment has been removed.
Header:
{
  "alg": "RS256",
  "typ": "JWT",
  "kid": "VJr29clVBAFFo6rbwn4HvTByCH5KbhioharsRbXx3N8"
}
Payload:
{
  "exp": 1737565732,
  "iat": 1737564832,
  "jti": "cc0f4c88-fffd-48da-91d8-24b80f1f6955",
  "iss": "https://keycloak.example.com/realms/oran",
  "aud": [
    "o2ims-client"
  ],
  "sub": "34c94cc0-720d-4f29-81e9-b9e794b51e9a",
  "typ": "Bearer",
  "azp": "smo-client",
  "acr": "1",
  "scope": "openid roles profile o2ims-audience",
  "clientHost": "192.168.1.2",
  "realm_access.roles": [
    "o2ims-reader",
    "o2ims-subscriber",
    "o2ims-maintainer",
    "o2ims-provisioner"
  ],
  "preferred_username": "service-account-smo-client",
  "clientAddress": "192.168.1.2",
  "client_id": "smo-client"
}
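The top-level-claims requirement can be checked by decoding a token's payload segment locally. A minimal sketch in plain shell follows; the token here is fabricated and unsigned, but with a real token the same decoding applies to its middle dot-separated segment:

```shell
# Build a fabricated, unsigned token (header.payload.) purely for illustration
b64url() { printf '%s' "$1" | base64 | tr -d '=\n' | tr '+/' '-_'; }
payload='{"aud":["o2ims-client"],"realm_access.roles":["o2ims-reader","o2ims-subscriber"]}'
token="$(b64url '{"alg":"none","typ":"JWT"}').$(b64url "$payload")."

# Decode the middle (payload) segment: restore padding and the standard base64 alphabet
seg=$(printf '%s' "$token" | cut -d. -f2)
while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="${seg}="; done
decoded=$(printf '%s' "$seg" | tr '_-' '/+' | base64 -d)
echo "$decoded"
```

If "realm_access.roles" appears as a quoted top-level key in the decoded JSON, the token meets the flattening requirement described above.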
Before accessing any O2IMS API endpoint, an access token must be acquired. The approach used depends on the configuration of the system. The following subsections describe both the OAuth and non-OAuth cases.
In a production environment, the system should be configured with mTLS and OAuth enabled. In this configuration, API requests must include a valid OAuth JWT token acquired from the authorization server configured in the Inventory CR. To manually acquire a token, a command similar to the following can be used. The exact method varies depending on the type of authorization server; this example is for a Keycloak server.
export MY_TOKEN=$(curl -s --cert /path/to/client.crt --key /path/to/client.key --cacert /path/to/ca-bundle.pem \
-XPOST https://keycloak.example.com/realms/oran/protocol/openid-connect/token \
-d grant_type=client_credentials -d client_id=${SMO_CLIENT_ID} \
-d client_secret=${SMO_CLIENT_SECRET} \
-d 'response_type=token id_token' \
-d 'scope=profile o2ims-audience roles'| jq -j .access_token)
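Whether a token in hand has expired can be checked locally from its exp claim. A rough sketch, using a fabricated decoded payload rather than a live token (a real payload would come from base64-decoding the token's middle segment):

```shell
# Fabricated payload for illustration; "exp" is a Unix timestamp
payload='{"exp":1737565732,"iat":1737564832,"aud":["o2ims-client"]}'
exp=$(printf '%s' "$payload" | sed -n 's/.*"exp":\([0-9]*\).*/\1/p')
if [ "$(date +%s)" -lt "$exp" ]; then
  echo "token still valid"
else
  echo "token expired (exp=$exp)"
fi
```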
In a development environment in which OAuth is not being used, the access token must be acquired from a Kubernetes ServiceAccount. This ServiceAccount must be assigned appropriate RBAC permissions to access the O2IMS API endpoints. As a convenience, a pre-canned ServiceAccount and ClusterRoleBinding are defined here. They can be applied as follows.
$ oc apply -f config/testing/client-service-account-rbac.yaml
serviceaccount/test-client created
clusterrole.rbac.authorization.k8s.io/oran-o2ims-test-client-role created
clusterrolebinding.rbac.authorization.k8s.io/oran-o2ims-test-client-binding created
And then the following command can be used to acquire a token.
export MY_TOKEN=$(oc create token -n oran-o2ims test-client --duration=24h)
Note that the --cert and --key options can be omitted if not using mTLS, and the --cacert option can be removed if the ingress certificate is signed by a public certificate or if you are operating in a development environment, in which case it can be replaced with -k.
MY_CLUSTER=your.domain.com
curl --cert /path/to/client.crt --key /path/to/client.key --cacert /path/to/ca-bundle.pem -q \
https://o2ims.apps.${MY_CLUSTER}/o2ims-infrastructureInventory/v1/api_versions \
-H "Authorization: Bearer ${MY_TOKEN}"
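Since every request repeats the same TLS and authorization options, a small wrapper can cut down the noise. This is a sketch only: o2ims_get is a made-up helper name, the certificate paths are placeholders, and API_URI and MY_TOKEN are assumed to be exported as shown above.

```shell
# Hypothetical convenience wrapper around curl for O2IMS GET requests
o2ims_get() {
  curl --silent \
    --cert "${CLIENT_CERT:-/path/to/client.crt}" \
    --key "${CLIENT_KEY:-/path/to/client.key}" \
    --cacert "${CA_BUNDLE:-/path/to/ca-bundle.pem}" \
    -H "Authorization: Bearer ${MY_TOKEN}" \
    "https://${API_URI}$1"
}

# Example usage:
# o2ims_get /o2ims-infrastructureInventory/v1/api_versions | jq
```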
The following instructions provide a mechanism to build an image that is based on a more full-featured distro so that debug tools are available in the image. It also tailors the deployment configuration so that certain features are disabled which would otherwise cause debugging with a debugger to be more difficult (or impossible).
- Build and deploy the debug image:
make IMAGE_TAG_BASE=quay.io/${USER}/oran-o2ims VERSION=latest DEBUG=yes build docker-build docker-push install deploy
- Forward a port to the Pod to be debugged so the debugger can attach to it. This command remains active until it is terminated with ctrl+c, so execute it in a dedicated window (or move it to the background).
oc port-forward -n oran-o2ims pods/oran-o2ims-controller-manager-85b4bbcf58-4fc9s 40000:40000
- Execute a shell into the Pod to be debugged.
oc rsh -n oran-o2ims pods/oran-o2ims-controller-manager-85b4bbcf58-4fc9s
- Attach the DLV debugger to the process. This is usually PID 1, but it may vary based on the deployment. Use the same port number that was specified earlier in the port-forward command.

  dlv attach 1 --continue --accept-multiclient --api-version 2 --headless --listen :40000 --log 1
- Use your IDE's debug capabilities to attach to localhost:40000 to start your debug session. This will vary based on which IDE is being used.
⚠️ Confirm that an authorization token has already been acquired. See the section Testing API endpoints on a cluster.
Notice that API_URI is the route HOST/PORT column of the oran-o2ims operator.
$ oc get routes -n oran-o2ims
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
oran-o2ims-ingress-ghcwc o2ims.apps.hubcluster2.hub.dev.vz.bos2.lab /o2ims-infrastructureInventory resource-server api reencrypt/Redirect None
oran-o2ims-ingress-pz8hc o2ims.apps.hubcluster2.hub.dev.vz.bos2.lab /o2ims-infrastructureMonitoring alarms-server api reencrypt/Redirect None
oran-o2ims-ingress-qrnfq o2ims.apps.hubcluster2.hub.dev.vz.bos2.lab /o2ims-infrastructureProvisioning provisioning-server api reencrypt/Redirect None
oran-o2ims-ingress-t842p o2ims.apps.hubcluster2.hub.dev.vz.bos2.lab /o2ims-infrastructureArtifacts artifacts-server api reencrypt/Redirect None
oran-o2ims-ingress-tbzbl o2ims.apps.hubcluster2.hub.dev.vz.bos2.lab /o2ims-infrastructureCluster cluster-server api reencrypt/Redirect None
Export the o2ims endpoint as the API_URI variable so it can be re-used in the requests.
export API_URI=o2ims.apps.${DOMAIN}
To get the supported API versions:
$ curl --insecure --silent --header "Authorization: Bearer ${MY_TOKEN}" \
"https://${API_URI}/o2ims-infrastructureInventory/v1/api_versions" | jq
To obtain information from the O-Cloud:
$ curl --insecure --silent --header "Authorization: Bearer ${MY_TOKEN}" \
"https://${API_URI}/o2ims-infrastructureInventory/v1"
The deployment manager server (DMS) needs to connect to the Kubernetes API of the RHACM hub to obtain the required information. Here are a couple of queries to the DMS.
To get a list of all the deploymentManagers (clusters) available in our O-Cloud:
$ curl --insecure --silent --header "Authorization: Bearer ${MY_TOKEN}" \
"https://${API_URI}/o2ims-infrastructureInventory/v1/deploymentManagers" | jq
To get a list of only the name of the deploymentManagers available in our O-Cloud:
$ curl --insecure --silent --header "Authorization: Bearer ${MY_TOKEN}" \
"https://${API_URI}/o2ims-infrastructureInventory/v1/deploymentManagers?fields=name" | jq
To get a list of all the deploymentManagers whose name is not local-cluster in our O-Cloud:
$ curl --insecure --silent --header "Authorization: Bearer ${MY_TOKEN}" \
"https://${API_URI}/o2ims-infrastructureInventory/v1/deploymentManagers?filter=(neq,name,local-cluster)" | jq
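The filter expression contains parentheses and commas. curl normally passes these through unmodified, but if your HTTP client or an intermediate proxy requires them percent-encoded, a minimal pure-shell sketch can do the encoding (urlencode is a made-up helper, not part of the project):

```shell
# Percent-encode everything except unreserved URI characters (RFC 3986)
urlencode() {
  s=$1
  out=
  while [ -n "$s" ]; do
    c=${s%"${s#?}"}   # first character of $s
    s=${s#?}
    case $c in
      [A-Za-z0-9._~-]) out="$out$c" ;;
      *) out="$out$(printf '%%%02X' "'$c")" ;;
    esac
  done
  printf '%s\n' "$out"
}

urlencode '(neq,name,local-cluster)'
# → %28neq%2Cname%2Clocal-cluster%29
```

The encoded value can then be placed after `?filter=` in the request URL.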
The resource server exposes endpoints for retrieving resource types, resource pools, and resource objects. The server relies on the Search Query API of the ACM hub. Follow these instructions to enable and configure search API access. The resource server translates those REST requests and sends them to the ACM search server, which implements a GraphQL API.
❗ To obtain the requested information, we need to enable the searchCollector on all the managed clusters; concretely, in the KlusterletAddonConfig CR.
To get a list of available resource types:
$ curl -ks --header "Authorization: Bearer ${MY_TOKEN}" \
"https://${API_URI}/o2ims-infrastructureInventory/v1/resourceTypes" | jq
To get information of a specific resource type:
$ curl -ks --header "Authorization: Bearer ${MY_TOKEN}" \
"https://${API_URI}/o2ims-infrastructureInventory/v1/resourceTypes/${resource_type_name}" | jq
To get a list of available resource pools:
$ curl -ks --header "Authorization: Bearer ${MY_TOKEN}" \
"https://${API_URI}/o2ims-infrastructureInventory/v1/resourcePools" | jq
To get information of a specific resource pool:
$ curl -ks --header "Authorization: Bearer ${MY_TOKEN}" \
"https://${API_URI}/o2ims-infrastructureInventory/v1/resourcePools/{resourcePoolId}" | jq
We can filter down to get all the resources of a specific resourcePool.
$ curl -ks --header "Authorization: Bearer ${MY_TOKEN}" \
"https://${API_URI}/o2ims-infrastructureInventory/v1/resourcePools/{resourcePoolId}/resources" | jq
To get a list of resource subscriptions:
$ curl -ks --header "Authorization: Bearer ${MY_TOKEN}" \
"https://${API_URI}/o2ims-infrastructureInventory/v1/subscriptions" | jq
To get all the information about an existing resource subscription:
$ curl -ks --header "Authorization: Bearer ${MY_TOKEN}" \
"https://${API_URI}/o2ims-infrastructureInventory/v1/subscriptions/<subscription_uuid>" | jq
To add a new resource subscription:
$ curl -ks -X POST \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${MY_TOKEN}" \
-d @infra-sub.json https://${API_URI}/o2ims-infrastructureInventory/v1/subscriptions | jq
Where the content of infra-sub.json is as follows:
{
  "consumerSubscriptionId": "69253c4b-8398-4602-855d-783865f5f25c",
  "filter": "(eq,extensions/country,US);",
  "callback": "https://128.224.115.15:1081/smo/v1/o2ims_inventory_observer"
}
To delete an existing resource subscription:
$ curl -ks -X DELETE \
--header "Authorization: Bearer ${MY_TOKEN}" \
https://${API_URI}/o2ims-infrastructureInventory/v1/subscriptions/<subscription_uuid> | jq