I used to deploy Prometheus the traditional way, without the help of an operator. Now it is finally time to try out the Prometheus Operator, as it has become the de facto standard way of deploying Prometheus on Kubernetes.

To get a good understanding I decided to stay away from Helm this time. Deploying the raw YAML files helps you really understand the different parts. Helm is a great tool precisely because it hides the implementation details, but that is a drawback when you want a deeper understanding. However, if you are thinking about deploying prometheus-operator in a production environment for your company, I suggest you look at the community-maintained Helm chart kube-prometheus-stack, which installs a complete Prometheus stack.

As always, you can follow along and check out the source code or clone it here.

Install The Operator

First things first, we need a namespace to deploy our Kubernetes objects into. In this tutorial I've chosen to name it o11y (a common abbreviation of observability), but you're free to use any name you prefer. Let's create the namespace by applying the kustomization in the repository root.

kubectl apply -k .
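
In my case the kustomization in the repository root contains little more than the namespace manifest. Here is a minimal sketch, assuming this file layout; note the label, which becomes important later when we look at the namespace selectors:

# namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: o11y
  labels:
    prometheus.io/scrape: "true"

# kustomization.yaml
resources:
  - namespace.yaml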

Now it is time to deploy the prometheus-operator along with its custom resource definitions (CRDs). The operator watches these custom resources and ensures that your Prometheus setup reflects the specified values. Use the flag --server-side; the CRDs are so large that a client-side apply would exceed the size limit of the kubectl.kubernetes.io/last-applied-configuration annotation.

kubectl apply --server-side -k prometheus-operator
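
If you want to verify that the CRDs were registered, you can list them; all of the operator's resources live in the monitoring.coreos.com API group (the exact set depends on the operator version):

kubectl get crds | grep monitoring.coreos.com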

And finally, let’s deploy Prometheus itself.

kubectl apply -k prometheus
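
The prometheus directory contains, among other things, the Prometheus custom resource itself. Here is a minimal sketch of what it could look like: the name production matches the pod we will see below, while the service account and its RBAC are assumptions about the rest of the setup. We will get to the selector part of the spec in a moment.

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: production  # the operator creates the pod prometheus-production-0
  namespace: o11y
spec:
  replicas: 1
  serviceAccountName: prometheus  # assumed; needs RBAC to discover and scrape targets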

Now, if you list the pods in the o11y namespace, you should see two pods running: one for prometheus-operator and one for Prometheus itself.

➜ kubectl -n o11y get pods
NAME                                   READY   STATUS    RESTARTS   AGE
prometheus-operator-58c87c9567-99brw   1/1     Running   0          4m
prometheus-production-0                2/2     Running   0          1m

There you have it! Prometheus is now deployed in your cluster, all set to scrape your metrics. Before we wrap up, let's dive into an important part of the Prometheus custom resource. In its YAML file, you'll notice a few selectors:

podMonitorSelector:
  matchLabels:
    prometheus.io/scrape: "true"
podMonitorNamespaceSelector:
  matchLabels:
    prometheus.io/scrape: "true"
serviceMonitorSelector:
  matchLabels:
    prometheus.io/scrape: "true"
serviceMonitorNamespaceSelector:
  matchLabels:
    prometheus.io/scrape: "true"

The label key and value here are arbitrary, but they must match the labels on your PodMonitor and ServiceMonitor objects (and, for the namespace selectors, on the namespaces that contain them) for the prometheus-operator to pick them up and apply them to the Prometheus configuration. Check out the labels for the namespace and the PodMonitor below, and you'll see they match. This feature is incredibly useful when you have multiple Prometheus instances in your cluster and want to specify which instance should scrape your metrics. In a multi-tenant setup this becomes crucial, especially when each team has its own Prometheus instance and you want to keep metrics separate.

apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: prometheus
  labels:
    prometheus.io/scrape: "true"
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: prometheus  # assumed label on the Prometheus pods
  podMetricsEndpoints:
    - port: web  # assumed port name; the operator's default web port
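
You can confirm that the namespace carries the matching label as well:

kubectl get namespace o11y --show-labels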

Finally, let’s port-forward to the Prometheus service and explore what we can query.

kubectl -n o11y port-forward svc/prometheus-operated 9090

If you go to http://localhost:9090/targets you should now see a target called podMonitor/o11y/prometheus/0 (1/1 up). We are now successfully monitoring Prometheus itself, and you can keep adding exporters to collect metrics from the applications you care about.
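
Every scraped target also exposes the synthetic up metric, so a quick sanity check in the query view is something like the following, which should return 1 for the Prometheus pod we just discovered:

up{namespace="o11y"}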

That’s it! Thank you for reading!