

Kubernetes Prometheus: Definition, Architecture, Pros & Cons

Published: 02nd Jan, 2024
Read it in: 18 Mins

    The preferred monitoring solution for Kubernetes and Docker is gradually converging on Prometheus. This guide demonstrates how to use Prometheus for Kubernetes monitoring. You will learn how to set up kube-state-metrics, pull and collect metrics, deploy a Prometheus server and metrics exporters, configure alerts with Alertmanager, and create Grafana dashboards. We will go over both manual and automated deployment and installation techniques for Kubernetes Prometheus, including the Prometheus Operator. Docker is something we hear about day in and day out in the modern world: it containerizes applications and solves the classic "it runs on my machine but not in production" problem. To learn more about Docker, check out learning opportunities like Docker and Kubernetes Training.

    What is Prometheus?

    Prometheus is an open-source monitoring tool used to gather and aggregate metrics as time series data. Simply put, every item in a Kubernetes Prometheus store is a metric event accompanied by a timestamp. Prometheus is a community project, first created at SoundCloud, that is now supported by the Cloud Native Computing Foundation. It has quickly gained popularity over the past decade because its combination of powerful querying features and cloud-native architecture makes it one of the best monitoring stacks for contemporary applications.

    Events are recorded in real time by Prometheus. Any event that is pertinent to your program can be included in this list, including memory usage, network activity, and specific inbound requests.

    "Metrics" are the basic unit of data. Each measure has a name that can be used to refer to it as well as a number of labels. The metrics in your database can be filtered using labels, which are arbitrary key-value data pairs.

    I want to highlight a few important points for your review.

    1. Metric Gathering: Kubernetes Prometheus retrieves metrics over HTTP using the pull approach. For use cases where Prometheus cannot scrape the data, it is possible to push metrics to Prometheus using Pushgateway. One such case is gathering custom metrics from short-lived Kubernetes jobs and CronJobs.
    2. Metric Endpoint: The systems you want Prometheus to monitor should expose their metrics on an endpoint called /metrics. Prometheus uses this endpoint to retrieve the metrics on a regular basis.
    3. PromQL: Prometheus ships with PromQL, a query language that can be used to query the metrics in the Prometheus dashboard. The Prometheus UI and Grafana also use PromQL queries to display metrics (see the example query after this list).
    4. Prometheus Exporters: Libraries that translate existing metrics from third-party applications into the Prometheus metrics format. There are many official and community-maintained Prometheus exporters. The Prometheus node exporter is one example: it exposes all Linux system-level metrics in Prometheus format.
    5. TSDB: Prometheus stores all the data efficiently using a TSDB (time-series database). All of the data is initially saved locally; however, Prometheus TSDB can integrate with remote storage to avoid a single point of failure.
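    As a quick illustration of PromQL, the following query (the label value is illustrative) returns the per-pod CPU usage rate over the last five minutes, using the container_cpu_usage_seconds_total metric exposed by cAdvisor:

    sum(rate(container_cpu_usage_seconds_total{namespace="monitoring"}[5m])) by (pod)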

    What is Kubernetes Prometheus?

    The Kubernetes Prometheus object model is extremely adaptable and abstract; it consists of a collection of time series, each of which is identified by a distinct set of label-value pairs and a collection of timestamped values. Among these labels, the job label and the instance label are my favorites, since Prometheus treats them as special and attaches them to every scraped time series.

    By defining these labels, we can arrange Prometheus' targets page to match the topology of our application appropriately and quickly identify any problems. In Prometheus, we decided to model Kubernetes services as Jobs and individual Pods as Instances.

    [Figure: Prometheus server discovering Kubernetes services]

    The Prometheus Kubernetes service discovery module can find a variety of different object types in the Kubernetes model. The 'role' in the Prometheus configuration refers to the type of object. Each object of that type will be identified as a distinct scrape target, and you can configure how to map your application into Prometheus using relabelling rules:

    kubernetes_sd_configs:
      - api_servers:
          - 'https://kubernetes.default.svc.cluster.local'
        in_cluster: true
        role: endpoint
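    Note that the snippet above uses the legacy Prometheus 1.x configuration syntax. On current Prometheus releases running inside the cluster (where the API server address and service-account credentials are picked up automatically), a roughly equivalent sketch looks like this:

    scrape_configs:
      - job_name: 'kubernetes-endpoints'
        kubernetes_sd_configs:
          - role: endpoints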

    Why Use Prometheus for Kubernetes Monitoring?

    1. DevOps culture

    Before the emergence of DevOps, monitoring was largely the domain of dedicated operations teams. Since developers now handle much of their own operations and debugging, they need the freedom to instrument and integrate applications quickly. Monitoring had to become more accessible and democratized, and expand to include additional stack tiers.

    2. Containers and Kubernetes

    Containers and Kubernetes fundamentally change the way we handle operations, and this includes monitoring. There are now a significant number of services, virtual network addresses, and exposed metrics, along with ephemeral software entities that can appear or disappear at any time.

    The following four characteristics helped Prometheus establish itself as the de facto standard for Kubernetes monitoring:

    3. Multi-dimensional data model

    Similar to how Kubernetes labels infrastructure metadata, the model's structure is built on key-value pairs. These labels drive Prometheus' query language and make working with time series data flexible and precise.
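    For instance, a query that filters a metric by its labels (the metric and label names here are illustrative) might return only the 5xx responses of one service:

    rate(http_requests_total{job="payments", status=~"5.."}[5m])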

    4. Accessible protocols and file formats

    Exposing Prometheus metrics is not too difficult. Metrics are published via a standard HTTP transport, are readable by humans, and use a self-explanatory format.

    5. Service discovery

    Because the Prometheus server regularly scrapes its targets, applications and services do not need to worry about pushing data (metrics are pulled, not pushed). Prometheus servers can locate scrape targets automatically in a variety of ways, and some of these mechanisms can be configured to filter and match container metadata, making them ideal for ephemeral Kubernetes workloads.
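    As a sketch of what this metadata matching looks like in practice (assuming pods are annotated with prometheus.io/scrape: 'true', the convention also used later in this article), a scrape job can keep only the annotated pods:

    - job_name: 'kubernetes-pods'
      kubernetes_sd_configs:
        - role: pod
      relabel_configs:
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
          action: keep
          regex: true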

    6. Modular and highly available components

    Metric collecting, alerting, graphical visualization, and other functions are handled by various composable services. All of these services are intended to provide redundancy and sharding.

    Kubernetes Prometheus Monitoring Architecture

    We will go over each of the parts that make up the Kubernetes Prometheus architecture in this section. In Kubernetes, cAdvisor runs as part of the Kubelet binary. In essence, cAdvisor is an open-source agent library for resource usage and performance analysis. It is built for containers and natively supports Docker containers. kube-state-metrics is a simple service that listens to the Kubernetes API and produces metrics describing the state of objects such as deployments, nodes, and pods.
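    To make the distinction concrete, here are a few representative metrics from each source (these are commonly used metric names):

    # From cAdvisor (per-container resource usage):
    container_cpu_usage_seconds_total
    container_memory_working_set_bytes

    # From kube-state-metrics (object state):
    kube_pod_status_phase
    kube_deployment_status_replicas_available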

    The following components make up the entire Kubernetes and Prometheus architecture:

    • Kubernetes API
    • Microservices like Fluentd, Redis, Traefik, etc. 
    • Kubernetes Nodes.
    • Prometheus Servers
    • Push Gateway
    • Alert Manager
    • Prometheus Targets
    • Client Libraries
    • Prometheus Exporters
    • Service Discovery
    • Grafana for metrics visualization and dashboards.
    • Some third-party apps for notification services like Slack, Ansible, or Email

    Prometheus servers are designed to discover as many targets automatically as possible. To do this, Prometheus uses Kubernetes SD for Kubernetes objects, Consul SD for Consul, Azure SD for Azure VMs, GCE SD for GCE instances, EC2 SD for AWS VMs, and File SD for file-based discovery. The Prometheus server is the central processing unit of the system and performs functions similar to the brain: it records the scraped measurements as multidimensional time series.

    Additionally, Prometheus is capable of gathering data on the orchestration and service status of Kubernetes. Orchestration and cluster-level metrics are retrieved through kube-state-metrics, while Kubernetes itself exposes core metrics from components such as the kubelet, etcd, DNS, and the scheduler. Alertmanager handles grouping, inhibition, and routing of alerts, among other things. Alerting rules written in PromQL can be set up in Prometheus to fire alerts, and Grafana provides a dashboard UI to display the scraped metrics. If you want in-depth knowledge of Kubernetes, then I would recommend you check Courses on DevOps.
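    As an illustration, a minimal alerting rule (the names are hypothetical, and the built-in up metric is used) that fires when a scrape target has been unreachable for five minutes looks like this:

    groups:
      - name: example-alerts
        rules:
          - alert: InstanceDown
            expr: up == 0
            for: 5m
            labels:
              severity: critical
            annotations:
              summary: "Instance {{ $labels.instance }} is down"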

    How to Install & Setup Prometheus Monitoring on Kubernetes Cluster

    Installing Prometheus on Kubernetes 

    You can configure Prometheus' monitoring processes by using YAML files to describe permissions, configuration, and services.

    YAML files are used by Prometheus to access resources. Additionally, it aids in information retrieval while scraping components of the Kubernetes cluster. The documentation for Prometheus provides more information about YAML configuration. 

    You can install Prometheus as a container on a Kubernetes cluster after the configuration is finished. These Docker containers can be deployed using a variety of orchestration techniques, including StatefulSets, Kubernetes operators, and Helm charts.
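    For example, if you go the Helm route, the community-maintained kube-prometheus-stack chart bundles Prometheus, Alertmanager, and Grafana; a typical installation (the release and namespace names below are just common defaults) looks like this:

    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
    helm repo update
    helm install prometheus prometheus-community/kube-prometheus-stack --namespace monitoring --create-namespace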

    You can use the following Docker command to launch a Prometheus server inside a container:

    docker run \
    -p 9090:9090 \
    -v /path/to/prometheus.yml:/etc/prometheus/prometheus.yml \
    prom/prometheus 

    Connect to the Kubernetes Cluster

    Make sure you have administrator access to your Kubernetes cluster before connecting. The example below grants the cluster-admin role to your user on a GKE cluster using gcloud.

    ACCOUNT=$(gcloud info --format='value(config.account)')
    kubectl create clusterrolebinding owner-cluster-admin-binding \
      --clusterrole cluster-admin \
      --user $ACCOUNT

    Prometheus Kubernetes Manifest Files

    On GitHub, you can find all the configuration files described in this article. The command listed below can be used to clone the repository.

    git clone https://github.com/techiescamp/kubernetes-prometheus 

    You can either use the configuration files from the GitHub repo or create the files on the fly as described in the steps below.

    Let us begin by setting everything up. 

    Create a Namespace & ClusterRole

    For all of our monitoring components, we will first create a Kubernetes namespace. If a dedicated namespace is not created, all Prometheus Kubernetes deployment objects end up in the default namespace.

    To create a new namespace called monitoring, use the following command. 

    kubectl create namespace monitoring

    To read all the accessible metrics from Nodes, Pods, Deployments, etc., Prometheus uses the Kubernetes APIs. This calls for the creation of an RBAC policy that is bound to the monitoring namespace and has read access to the necessary API groups. 

    Step 1: Copy the following RBAC role into a file called clusterRole.yaml. 

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: prometheus
    rules:
      - apiGroups: [""]
        resources:
          - nodes
          - nodes/proxy
          - services
          - endpoints
          - pods
        verbs: ["get", "list", "watch"]
      - apiGroups:
          - extensions
        resources:
          - ingresses
        verbs: ["get", "list", "watch"]
      - nonResourceURLs: ["/metrics"]
        verbs: ["get"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: prometheus
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: prometheus
    subjects:
      - kind: ServiceAccount
        name: default
        namespace: monitoring

    Step 2: Use the following command to create the role. 

    kubectl create -f clusterRole.yaml

    Create a Config Map to Externalize Prometheus Configurations

    The prometheus.yaml file contains all of Prometheus' configuration, while prometheus.rules contains all of the alert rules for Alertmanager.

    prometheus.yaml: This file contains the main configuration for Prometheus and contains information on data scraping, service discovery, storage locations, data retention settings, etc. 

    prometheus.rules: The Prometheus alerting rules are all contained in this file. 

    You can avoid building the Prometheus image each time a configuration needs to be added or removed by externalizing Prometheus configs to a Kubernetes config map. To apply the updated configuration, you must update the config map and restart the Prometheus pods. 
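    Assuming the config map and deployment names used in this article (prometheus-server-conf and prometheus-deployment in the monitoring namespace), applying an updated configuration looks roughly like this:

    kubectl apply -f config-map.yaml
    kubectl rollout restart deployment prometheus-deployment -n monitoring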

    The prometheus.yaml and prometheus.rules files that include all of the Prometheus scrape configuration and alerting rules are mounted to the Prometheus container in /etc/prometheus. 

    Step 1: Create a file called config-map.yaml and copy the contents of the config-map.yaml file from the repository into it.

    Step 2: Run the following command to create the config map in Kubernetes.

    kubectl create -f config-map.yaml

    The config map is mounted as two files inside the container.

    All of the configurations to dynamically find the running services and pods in the Kubernetes cluster are contained in the prometheus.yaml file. In our Prometheus scrape configuration, we have the following scrape jobs. 

    • kubernetes-apiservers: Obtains all of the metrics from the API servers.
    • kubernetes-nodes: Gathers all of the Kubernetes node metrics.
    • kubernetes-pods: All pod metrics are discovered if the pod metadata is annotated with the prometheus.io/scrape and prometheus.io/port annotations.
    • kubernetes-cadvisor: Collects all cAdvisor metrics.
    • kubernetes-service-endpoints: All service endpoints are scraped if the service metadata is annotated with the prometheus.io/scrape and prometheus.io/port annotations. This can be used for black-box monitoring.

    All of the alert rules for sending alerts to the Alertmanager are contained in prometheus.rules. 
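    For orientation, here is a heavily trimmed sketch of what this config map can look like; the full version is in the repository linked above, and the metadata name matches the prometheus-server-conf config map referenced by the deployment in the next section:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: prometheus-server-conf
      namespace: monitoring
    data:
      prometheus.rules: |-
        groups:
          - name: demo-alert-rules
            rules:
              - alert: HighPodMemory
                expr: sum(container_memory_usage_bytes) > 1e9
                for: 1m
                labels:
                  severity: slack
      prometheus.yml: |-
        global:
          scrape_interval: 30s
        scrape_configs:
          - job_name: 'kubernetes-apiservers'
            kubernetes_sd_configs:
              - role: endpoints
            scheme: https
            tls_config:
              ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
            bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token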

    Create a Prometheus Deployment

    Step 1: Make a file called prometheus-deployment.yaml and copy the information below into it. In this configuration, as described in the previous section, the Prometheus configuration map is mounted as a file inside /etc/prometheus. 

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: prometheus-deployment
      namespace: monitoring
      labels:
        app: prometheus-server
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: prometheus-server
      template:
        metadata:
          labels:
            app: prometheus-server
        spec:
          containers:
            - name: prometheus
              image: prom/prometheus
              args:
                - "--storage.tsdb.retention.time=12h"
                - "--config.file=/etc/prometheus/prometheus.yml"
                - "--storage.tsdb.path=/prometheus/"
              ports:
                - containerPort: 9090
              resources:
                requests:
                  cpu: 500m
                  memory: 500M
                limits:
                  cpu: 1
                  memory: 1Gi
              volumeMounts:
                - name: prometheus-config-volume
                  mountPath: /etc/prometheus/
                - name: prometheus-storage-volume
                  mountPath: /prometheus/
          volumes:
            - name: prometheus-config-volume
              configMap:
                defaultMode: 420
                name: prometheus-server-conf
            - name: prometheus-storage-volume
              emptyDir: {}

    Step 2: Utilizing the aforementioned file, create a deployment on the monitoring namespace.  

    kubectl create -f prometheus-deployment.yaml  

    Step 3: Use the following command to verify the deployment you established. 

    kubectl get deployments --namespace=monitoring 

    Connecting To Prometheus Dashboard

    Method 1  

    You can reach a pod from your local workstation using a chosen port on your localhost by using kubectl port forwarding. This approach is mainly used for debugging. 

    Step 1: Get the name of the Prometheus pod. 

    kubectl get pods --namespace=monitoring 

    The result will appear as shown below. 


    NAME READY STATUS RESTARTS AGE 

    prometheus-monitoring-3331088907-hm5n1 1/1 Running 0 5m 

    Step 2: Use the command below to connect to Prometheus using localhost port 8080 and your pod name. 

    Please substitute your pod name for prometheus-monitoring-3331088907-hm5n1. 

    kubectl port-forward prometheus-monitoring-3331088907-hm5n1 8080:9090 -n monitoring 

    Step 3: At this point, your browser will display the Prometheus home page when you reach http://localhost:8080. 

    Method 2: 

    You must expose the Prometheus dashboard as a Kubernetes service in order to access it through an IP address or DNS name. 

    Step 1: Create a file called prometheus-service.yaml and copy the content below into it. We will expose Prometheus on port 30000 on all Kubernetes node IP addresses.

    apiVersion: v1
    kind: Service
    metadata:
      name: prometheus-service
      namespace: monitoring
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/port: '9090'
    spec:
      selector:
        app: prometheus-server
      type: NodePort
      ports:
        - port: 8080
          targetPort: 9090
          nodePort: 30000

    The annotations in the above service YAML ensure that Prometheus scrapes the service endpoint. The prometheus.io/port annotation should always be the target port specified in the service YAML.

    Step 2: Use the following command to create the service.

    kubectl create -f prometheus-service.yaml --namespace=monitoring

    Step 3: Once it is created, you can use any Kubernetes node's IP address and port 30000 to view the Prometheus dashboard (see below for a quick way to find a node IP). Make sure your firewall rules allow access to port 30000 from your workstation if you are running in the cloud.

    Step 4: If you now navigate to Status -> Targets, you will see every Kubernetes endpoint that has been automatically connected to Prometheus via service discovery.

    Step 5: Go to the homepage, choose the metrics you want from the drop-down menu, and you will get a graph for the time period you specify.
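    Assuming you have kubectl access, one quick way to find a node IP to use is:

    kubectl get nodes -o wide

    Then open http://<node-ip>:30000 in your browser.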

    Best practices of Kubernetes Prometheus

    Here are a few best practices that will enable you to integrate Prometheus into Kubernetes successfully. 

    1. Utilize dashboards and consoles

    Data is vital in general, but not all of it is required in every situation. As you create your consoles and dashboards, bear this in mind. 

    You should aim to display the most pertinent data rather than attempting to display all data within a single operational panel. To achieve this, consider the most likely failure modes and then portray each one in a useful visual presentation.

    2. Restrict Use of Labels

    Labels can assist you in modifying and streamlining the data for your measurements. Resources such as RAM, disk space, bandwidth, and CPU are needed for each label set. Although this data is significant, producing labels on a wide scale requires a lot of resources. 

    By limiting the number of labels per metric to 10 or fewer, you can lower expenditure. Furthermore, you should only use labels for metrics that need them; not all metrics do. If you do need to label a lot of metrics, think about using specialized analysis tools to speed up the procedure.

    3. Use Timestamps Carefully

    Consider using timestamps, rather than the amount of time that has passed since an event happened, for tracking event timing. This can lessen errors and reduce the need for logic updates. 

    4. Using Pushgateway

    Some components cannot be scraped. The Prometheus for Kubernetes Pushgateway, which enables you to push time series data from transitory, service-level batch processes to intermediary jobs that can be scraped, can be used to monitor these components. You can combine this with Prometheus' straightforward text-based exposition format to make this instrumentation simple.

    Pushgateway is excellent for recording the results of batch tasks at the service level. It is not intended for any further use cases. A single Pushgateway will turn into not only a single point of failure but also a potential bottleneck if you try to use it to monitor numerous instances, for example.
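    As a sketch of how a batch job can push its result to a Pushgateway (the Pushgateway address and metric name below are assumptions; adjust them to your environment):

    echo "backup_job_last_success_timestamp_seconds $(date +%s)" \
      | curl --data-binary @- http://pushgateway.monitoring.svc:9091/metrics/job/nightly_backup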

    5. Protect Your Inner Loops

    When adding metrics to code that is performance critical or executed more than 100,000 times per second, limit the actions carried out in the inner loop. Here are a few methods you can use to safeguard inner loops:

    • Reduce the number of metrics your code uses. 
    • Do not call too many metrics in inner loops. 
    • Avoid labelling whenever possible. 
    • Steer clear of metrics that need length or time measurements. 
    • Benchmarks should be used to gauge the effects of changes. 

    Advantages of Kubernetes Prometheus

    Prometheus is a contemporary approach to monitoring applications, and Kubernetes is its main focus because that is the environment in which it performs best. Using Prometheus with your Kubernetes cluster has a number of advantages, including a multidimensional data model and a good fit with DevOps culture. The following is a list of advantages of Kubernetes Prometheus:

    1. Built into Kubernetes

    Kubernetes and Prometheus interact without any issues. Both are projects of the Cloud Native Computing Foundation (CNCF) and coexist harmoniously.

    2. Query language and APIs

    Prometheus offers APIs that make it simple to access monitoring metrics. 

    3. Many exporters and libraries

    There are many libraries and exporters available through Prometheus for gathering application metrics. 

    4. Community-developed Exporters

    All exporters are intended to increase Prometheus' coverage. 

    5. A pull-based model

    A standardized method that makes data collection easier is the pull-based model for time-series data collection. 

    Disadvantages of Kubernetes Prometheus

    1. Pure-telemetry monitoring

    Prometheus offers a simplified and constrained data model for pure telemetry monitoring, but this model does not fully contextualize events. 

    2. Limited granularity

    Prometheus only offers summarized data that it periodically scrapes from exporters.

    3. Ideal mainly for Kubernetes

    Prometheus was not made to monitor legacy infrastructure; it is best for Kubernetes.

    4. Lack of authentication and encryption

    Authentication and encryption are not included in Prometheus' data collection capabilities. Telemetry information is accessible to any user or component with network access.

    Conclusion

    Metrics are always essential to any application. If you want to successfully scale your software, you must check on them frequently. Kubernetes node and cluster monitoring is very simple with Prometheus. It offers various metric types, including counters, gauges, histograms, and summaries.

    Using Prometheus, you can keep an eye on your Kubernetes cluster or services. Many microservices offer metrics that can be exposed directly on a Prometheus endpoint, while a few older microservices require exporters in order to make their metrics available to Prometheus.

    Prometheus' storage layer is local rather than distributed and is not designed for long-term retention of data over months or years. Prometheus excels at alerting and short-term trends but falls short for more in-depth historical data requirements.

    Prometheus has a simple UI for experimenting with PromQL queries, but it is not a dashboarding solution. Instead, it is typically paired with Grafana for dashboarding, which adds some complexity to the setup.

    We hope this article made you familiar with what Prometheus in Kubernetes is and how it can help you. To get certified in Docker and Kubernetes, do check out the Docker and Kubernetes Training mentioned earlier.

    Kubernetes Prometheus FAQs

    1. How to monitor Kubernetes using Prometheus?
    • Connect to the Kubernetes Cluster 
    • Prometheus Kubernetes Manifest Files 
    • Create a Namespace & ClusterRole 
    • Create a Config Map to Externalize Prometheus Configurations 
    • Create a Prometheus Deployment 
    • Connecting To Prometheus Dashboard 
    2. How Does Prometheus Connect to Kubernetes?

    To read all the available metrics from Nodes, Pods, Deployments, etc., Prometheus uses the Kubernetes APIs. This calls for the creation of an RBAC policy that is bound to the monitoring namespace and has read access to the necessary API groups. 

    3. Where does Prometheus store data in Kubernetes?

    The --storage.tsdb.path flag specifies the directory where Prometheus keeps its on-disk time series data. The default path is ./data relative to the working directory, which is fine for quick tests but probably not what you want for actual operations.

    4. How does Prometheus discover targets in Kubernetes?

    Service discovery is a method used by Prometheus to find targets for scraping. Labels, annotations, and a mechanism for tracking status and changes for various elements are all included in Kubernetes clusters. Prometheus needs to use the Kubernetes API to find targets. 

    5. Why is Prometheus good for Kubernetes?

    YAML files are used by Prometheus to access resources. Additionally, it aids in information retrieval when scraping components of the Kubernetes cluster. The documentation for Prometheus contains more information on YAML configuration. You can install Prometheus as a container on a Kubernetes cluster after the configuration is finished.

    6. Can Prometheus monitor multiple Kubernetes clusters?

    Yes, you can monitor multiple Kubernetes clusters and providers using the Prometheus/Grafana combination.

    Profile

    Abhresh Sugandhi

    Author

    Abhresh specializes as a corporate trainer. He has a decade of experience in technical training, blending virtual webinars and instructor-led sessions, and has created courses, tutorials, and articles for organizations. He is also the founder of Nikasio.com, which offers multiple services in technical training, project consulting, content development, and more.
