

CNO is a platform that simplifies the adoption and use of Kubernetes in a multi-cloud ecosystem.

In this guide, we'll take you through installing CNO, configuring it, and taking it for a test run. You'll have access to your own CNO Kubernetes management platform and can immediately start getting familiar with its features, functions and terminology.

You can install CNO using either Helm or the CNO Command Line Interface (CNOCTL).

Installing With CNOCTL

If you're a visual learner, check out the video walkthrough for installing CNO in a few minutes 😉 or continue on with the documentation. 🥂


To install CNO with CNOCTL you will need:

  • A Kubernetes cluster running version 1.16 or higher.

Install CNOCTL

To install cnoctl, please go here.

Install CNO with CNOCTL

Once cnoctl is installed, you can begin the installation of CNO.

Run the command:

cnoctl init --type aks --expose loadbalancer --with-dataplane=true 

Supported Flags

The supported flags are listed below, each with a short explanation:

Control Plane Type (Required)

Command Description

This refers to the cloud provider's managed Kubernetes service:

  • aks, referring to Azure Kubernetes Service

  • eks, referring to Amazon Elastic Kubernetes Service

  • gke, referring to Google Kubernetes Engine

  • The default, vanilla, refers to an unmanaged basic Kubernetes setup.

Command Line

  --type: The Kubernetes distribution in which CNO is being installed
    type: vanilla, aks, eks, gke 
    default: "vanilla"

With Metric Server (Optional)

Command Description

This accepts two values, true or false. If your cluster already has a metric server you wish to use with the data plane, set this value to false. If you wish to install CNO with a metric server, set it to true.

Command Line

  --metric-server: install cno with a metric server 
  default: true
Expose (Required)

Command Description

This sets how the CNO service is exposed outside the cluster (for example via a load balancer, node port, ingress, or route).

If you wish to use the nginx-ingress or route expose type, the --domain flag is required.

Command Line

  --expose string: The exposure type used in the cluster: nodeport, nginx-ingress, route, loadbalancer

Dataplane (Required)

Command Description

CNO is automatically installed with a dataplane in the same cluster as the control plane. To install the dataplane in a different cluster instead, set this value to false.

Command Line

  --with-dataplane: install cno with a default dataplane 
  default: true

Domain (Optional)

Command Description

The wildcard domain configured in the cluster. This is required if you wish to use an nginx-ingress or route exposition type.

Command Line

  --domain: The wildcard domain configured in the cluster
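The constraint described above (--domain is mandatory for the nginx-ingress and route expose types, but not for loadbalancer or nodeport) can be sketched as a small shell check. This is hypothetical validation logic written for illustration, not cnoctl's actual source:

```shell
# Hypothetical sketch of the documented rule: nginx-ingress and route
# expose types require --domain; loadbalancer and nodeport do not.
requires_domain() {
  case "$1" in
    nginx-ingress|route) echo "yes" ;;
    *) echo "no" ;;
  esac
}

requires_domain "nginx-ingress"   # -> yes
requires_domain "loadbalancer"    # -> no
```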

NameSpace (Optional)

Command Description

The Kubernetes namespace in which CNO is installed. The default is "cno-system".

Command Line

  --namespace: cno install namespace 
    default: "cno-system"

Version (Optional)

Command Description

This is the CNO version to install.

Command Line

  --version: cno version to install 
  default: "1.0-rc"

One last thing

If you want to override some variables, export them before running the cnoctl init command.

Example of overriding a variable:

export MYSQL_IMAGE=percona:8.0.26-17
cnoctl init --type aks --domain <your domain> --expose nginx-ingress --metric-server=true --with-dataplane=true 

CNOCTL Configuration

After you've successfully installed CNO, you'll need to configure cnoctl to access your CNO UI.

cnoctl config
Supported Flags

server URL: This is your CNO API external IP. To get it, you can use the command:

kubectl get svc cno-api -n cno-system

Your CNO API server URL will be http://<EXTERNAL-IP>
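The step above can be sketched as a short shell snippet. The jsonpath expression in the comment is an assumption to verify against your cluster, and the IP is a documentation placeholder:

```shell
# Sketch: build the cnoctl server URL from the cno-api external IP.
# On a live cluster you could obtain the IP with (jsonpath assumed, verify locally):
#   kubectl get svc cno-api -n cno-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
EXTERNAL_IP="203.0.113.10"        # placeholder documentation IP
SERVER_URL="http://${EXTERNAL_IP}"
echo "$SERVER_URL"                # -> http://203.0.113.10
```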

organization: This should be set to 'default'


You'll periodically be prompted to log in to cnoctl. Use the command:

cnoctl login -u <username> -p <password>

Supported Flags -u: Username -p: Password

Installing With Helm


  • A Kubernetes Cluster. This Kubernetes cluster will host CNO's control plane and is the only cluster not managed within CNO.
  • Supported Kubernetes versions: 1.20, 1.21, 1.22, 1.23 and higher.

  • Helm. This is the Kubernetes package manager, used here to install the CNO chart.

  • A Helm version compatible with your Kubernetes version, preferably Helm 3.0 or higher.
How to check Helm Versioning

You can refer to Helm Documentation. If your Kubernetes Version is between 1.21.x and 1.24.x, your Helm version should be 3.9.x

  • An installed metric server (optional): If you install the CNO control plane along with a data plane, you can ignore this, as a metric server is installed by default. If your cluster already has a metric server you wish to use with the data plane, set the CNOAgent.metricServer value to false.

  • A values.yaml file containing your cluster's configuration. You can download this Sample and simply switch out the values for your own cluster. Or copy-paste the code block into your own created "values.yaml" and swap the values.
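To show how the fragments discussed in the sections below fit together, here is a hypothetical sketch of a values.yaml. The top-level key nesting (expose, superAdmin, kafka) is an assumption; verify every key against the chart's sample values.yaml before using it:

```yaml
# Hypothetical values.yaml sketch assembled from the fragments below.
# Key nesting is an assumption; verify against the chart's sample file.
platform: kubernetes
apiUrl: https://kubernetes-api-server-url   # from `kubectl cluster-info`
expose:
  type: loadbalancer                        # or nodeport, route, nginx-ingress
superAdmin:
  password: admin                           # change after first login
kafka:
  type: ephemeral
CNOAgent:
  defaultClusterType: vanilla               # eks, aks, gke, or vanilla
  metricServer: true                        # false if a metric server already runs
```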

Get Helm Repo

To add the Helm repository, run the commands:

helm repo add cno-repo
helm repo update 

Install CNO with Helm

After successfully adding the Helm repo, you can proceed to install CNO. Go through the configuration below, as it explains each value you'll need to put in your values.yaml file. Use the following command to install CNO and create a namespace.

Replace the "values.yaml" part of the command with the path of your own values.yaml file.

helm install cno cno-repo/cno -f values.yaml --namespace cno-system --create-namespace


The following explains the flags and values you'll need to give for the values.yaml file.

Kubernetes API Server URL

We're going to set the cluster type and its Kubernetes API server URL.

The following are all examples and are meant to be substituted.

platform: kubernetes
apiUrl: https://kubernetes-api-server-url

How to check your Kubernetes API Server URL

If you need help finding your Kubernetes API server URL, just use the command: kubectl cluster-info

The first line tells you your Kubernetes API server URL. You can copy-paste the URL into your values.yaml file.

If the third line shows a metrics-server entry, you already have a metric server running. In that case, set the CNOAgent.metricServer value to false in your values.yaml file, as CNO installs a metric server by default.
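The URL extraction described above can be sketched offline. The LINE value below is a placeholder copy of a typical first line of `kubectl cluster-info`; on a live cluster you would capture it as shown in the comment:

```shell
# Sketch: pull the API server URL out of the first line of `kubectl cluster-info`.
# On a live cluster:  LINE="$(kubectl cluster-info | head -n 1)"
LINE="Kubernetes control plane is running at https://203.0.113.10:6443"
API_URL="${LINE##* }"    # keep everything after the last space
echo "$API_URL"          # -> https://203.0.113.10:6443
```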


You can choose different types of expose for your kubernetes service:

  • loadbalancer
  • nodeport
  • route : This requires a domain, i.e. a URL.
  • nginx-ingress: If you choose this, it'll also require a domain, and you'll need to ensure an nginx ingress controller is installed.
How to install Nginx

We advise you to refer to the Kubernetes Official Documentation

If you chose loadbalancer or nodeport, you won't need any extra details, as your cluster is already equipped to handle them.

Example 1:

type: loadbalancer

Example 2:

type: nginx-ingress
  domain: <your domain>

Setting up your Super Admin role

The username of the default super admin is admin. After the password is set, it can be changed in the settings. You can either set the password directly or pass it via a secret.

  1. Set the password:

      password: admin

  2. Pass the password via a secret:

    kubectl create secret generic cno-super-admin \
    --from-literal=PASSWORD=admin \
    --namespace cno-system

    Then reference the secret in your values.yaml:

        name: cno-super-admin
        key: PASSWORD
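Kubernetes stores secret data base64-encoded, so if you want to confirm what the secret holds, you need a decoding step. The kubectl line in the comment needs a live cluster and its jsonpath is an assumption to verify; the offline line illustrates the same decoding:

```shell
# On a live cluster (jsonpath assumed, verify locally):
#   kubectl get secret cno-super-admin -n cno-system -o jsonpath='{.data.PASSWORD}' | base64 -d
# Offline illustration of the same decoding step ("YWRtaW4=" is base64 for "admin"):
printf '%s' "YWRtaW4=" | base64 -d    # -> admin
```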

Kafka Config

  • Kafka ephemeral

    externalBrokersUrl: <internal access to kafka>
        type: ephemeral
  • Kafka persistent

    externalBrokersUrl: <internal access to kafka>
        type: persistent-claim
        deleteClaim: true
        size: 1Gi


If you don't have a StorageClass, you will have to create two PVCs named data-cno-kafka-cluster-zookeeper-0 and data-cno-kafka-cluster-kafka-0.
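A hypothetical sketch of one of the two required PVCs follows; the access mode and size are assumptions (the 1Gi matches the Kafka size above), so adjust them to your storage setup:

```yaml
# Hypothetical sketch of one of the two required PVCs.
# accessModes and storage size are assumptions; adjust to your environment.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-cno-kafka-cluster-kafka-0
  namespace: cno-system
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
```

The second PVC, data-cno-kafka-cluster-zookeeper-0, would follow the same shape with only the name changed.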

Agent Config

The cluster type can be eks, aks, gke, or vanilla.

Default Cluster Type

If your cluster already has a metric server or the installation failed, you can change the CNOAgent.metricServer value to "false":

defaultClusterType: EKS
  metricServer: true

To make sure everything is running fine, you can run the command:

 kubectl get po -n cno-system

It'll give you a list similar to this. All your pods should show a Running status; it might take a few minutes, so check again later if some of the pods aren't ready yet.

cno-agent-xxx 1/1 Running 0 5m
cno-api-xxx 1/1 Running 0 5m
cno-ui-xxx 1/1 Running 0 5m
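The check above can be scripted: count pods whose STATUS column is not Running, and expect zero. The kubectl line in the comment needs a live cluster; the offline lines run the same filter against the sample listing:

```shell
# Sketch: count pods whose STATUS column is not Running.
# On a live cluster:
#   kubectl get po -n cno-system --no-headers | awk '$3 != "Running" {n++} END {print n+0}'
# Offline illustration against the sample listing above:
SAMPLE='cno-agent-xxx 1/1 Running 0 5m
cno-api-xxx 1/1 Running 0 5m
cno-ui-xxx 1/1 Running 0 5m'
NOT_READY="$(printf '%s\n' "$SAMPLE" | awk '$3 != "Running" {n++} END {print n+0}')"
echo "$NOT_READY"   # -> 0
```

A result of 0 means every pod has reached the Running state.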

Last update: 2022-09-16