As a full-stack developer, if you have been spending a lot of time building apps recently, you already understand the whole new set of challenges that come with a microservices architecture. Although there has been a shift from bloated monolithic apps to compact, focused microservices for faster implementation and improved resiliency, developers now have to worry about the challenges of integrating these services in distributed systems, including accountability for service discovery, load balancing, registration, fault tolerance, monitoring, routing, compliance, and security.
Let us understand the challenges faced by developers and operators with a microservices architecture in detail. Consider a simple first-generation service mesh scenario. As shown below, Service (A) communicates with Service (B). Instead of communicating directly, the request gets routed via Nginx. Nginx looks up a route in Consul (a service discovery tool) and automatically retries the connection when HTTP 502s occur.
Figure: 1.0 – 1st Gen Service Mesh
Figure:1.1 – Cascading Failure demonstrated with the increase in the number of services
With the advent of microservices architecture, the number of services keeps growing. Below are the challenges encountered by both developers and operations teams:
Making these growing microservices communicate with each other.
Enabling load balancing across these microservices.
Providing role-based routing for the microservices.
Shifting outgoing traffic across these microservices and testing canary deployments.
Managing the complexity around these growing pieces of microservices.
Implementing fine-grained control of traffic behavior with rich routing rules.
Implementing traffic encryption, service-to-service authentication, and strong identity assertions.
In a nutshell, although you could build service discovery and retry logic into the application or networking middleware, service discovery is tricky to get right.
Enter Istio’s Service Mesh
“Service Mesh” is one of the hottest buzzwords of 2018. As the name suggests, it’s a configurable infrastructure layer for a microservices app. It lays out the network of microservices that make up applications and enables interactions between them. It makes communication between service instances flexible, reliable, and fast. The mesh provides service discovery, load balancing, encryption, authentication and authorization, support for the circuit breaker pattern, and other capabilities.
Istio is a completely open source service mesh that layers transparently onto existing distributed applications. Istio v1.0 was announced last month and is ready for production. It is written entirely in Go and is a full-fledged platform that provides APIs to integrate with any logging, telemetry, or policy system, while adding very little overhead to your system. It is hosted on GitHub. Istio's diverse feature set lets you successfully, and efficiently, run a distributed microservices architecture, and provides a uniform way to secure, connect, and monitor microservices.
Figure-1.2: Istio’s Capability
What benefits does Istio bring?
Istio lets you connect, secure, control, and observe services.
It helps to reduce the complexity of service deployments and eases the strain on your development teams.
It provides developers and DevOps teams with fine-grained visibility and control over traffic without requiring any changes to application code.
It provides CIOs with the tools needed to help enforce security and compliance requirements across the enterprise.
It provides behavioral insights & operational control over the service mesh as a whole.
Istio makes it easy to create a network of deployed services with automatic load balancing for HTTP, gRPC, WebSocket, and TCP traffic.
It provides fine-grained control of traffic behavior with rich routing rules, retries, failovers, and fault injection.
It enables a pluggable policy layer and configuration API supporting access controls, rate limits and quotas.
Istio provides automatic metrics, logs, and traces for all traffic within a cluster, including cluster ingress and egress.
It provides secure service-to-service communication in a cluster with strong identity-based authentication and authorization.
In this blog post, I will showcase how Istio can be set up on the Play with Kubernetes (PWK) platform free of cost. In case you're new to it, Play with Kubernetes, aka PWK, is a labs site provided by Docker. It is a playground that allows users to run K8s clusters in a matter of seconds. It gives you the experience of having a free CentOS Linux virtual machine in the browser. Under the hood, Docker-in-Docker (DinD) is used to give the effect of multiple VMs/PCs.
Click on the Login button to authenticate with Docker Hub or GitHub ID.
Once you start the session, you will have your own lab environment.
Adding First Kubernetes Node
Click on “Add New Instance” on the left to build your first Kubernetes cluster node. The instance is automatically named “node1”. Each instance has Docker Community Edition (CE) and kubeadm pre-installed. This node will be treated as the master node for our cluster.
Bootstrapping the Master Node
You can bootstrap the Kubernetes cluster by initializing the master node (node1) with the script below. Copy the script content into a bootstrap.sh file and make it executable using the “chmod +x bootstrap.sh” command.
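A minimal bootstrap.sh might look like the following. This is a sketch assuming the standard PWK setup, with kubeadm for initialization and Weave Net as the pod network; adjust it to whatever your environment suggests:

```shell
#!/bin/sh
# Initialize the control plane, advertising this node's IP address
kubeadm init --apiserver-advertise-address "$(hostname -i)"

# Make kubectl work for the current user
mkdir -p "$HOME/.kube"
cp /etc/kubernetes/admin.conf "$HOME/.kube/config"

# Install a pod network add-on (Weave Net here)
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
```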
When you execute this script, as part of initialization, kubeadm writes the several configuration files needed, sets up RBAC, and deploys the Kubernetes control plane components (such as kube-apiserver, kube-dns, kube-proxy, and etcd). The control plane components are deployed as Docker containers.
Copy the above kubeadm join token command and save it for the next step. This command will be used to join other nodes to your cluster.
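The join command printed by kubeadm looks roughly like this; the address, token, and hash below are placeholders, so use the exact values from your own output:

```shell
# Run on each worker node, substituting your cluster's values
kubeadm join 192.168.0.8:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```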
Adding Worker Nodes
Click on “Add New Instance” to add a new worker node.
Checking the Cluster Status
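On the master node, you can check the cluster status with kubectl; node names and counts will differ in your environment:

```shell
# List all nodes with their roles, status, and IP addresses
kubectl get nodes -o wide
```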
Verifying the running Pods
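You can confirm that the control plane pods are up with:

```shell
# Show every pod across all namespaces; control plane pods live in kube-system
kubectl get pods --all-namespaces
```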
Installing Istio 1.0.0
Istio is deployed in a separate Kubernetes namespace, istio-system; we will verify this later. For now, copy the content below into a file called install_istio.sh and save it. Make it executable and run it to install Istio and the related tools.
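As a sketch, an install script for Istio 1.0.0 typically downloads the release tarball and applies the bundled demo manifest (the URL below assumes the official 1.0.0 Linux release on GitHub):

```shell
#!/bin/sh
# Download and unpack the Istio 1.0.0 release
curl -L https://github.com/istio/istio/releases/download/1.0.0/istio-1.0.0-linux.tar.gz | tar xz
cd istio-1.0.0

# Put istioctl on the PATH for later use
export PATH="$PWD/bin:$PATH"

# Install Istio with the demo profile (bundles Prometheus, Grafana, Jaeger, etc.)
kubectl apply -f install/kubernetes/istio-demo.yaml
```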
You should be able to see screen flooding with the below output.
As shown above, it enables Prometheus, ServiceGraph, Jaeger, Grafana, and Zipkin by default.
Please note – While executing this script, it might end up with the below error message –
unable to recognize "install/kubernetes/istio-demo.yaml": no matches for admissionregistration.k8s.io/, Kind=MutatingWebhookConfiguration
This error message is expected; it appears when the MutatingWebhookConfiguration resource type has not yet been registered, and re-running the script typically resolves it.
As soon as the command gets executed completely, you should be able to see a long list of ports which gets displayed at the top center of the page.
Verifying the Services
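You can list the services Istio installed in its own namespace with:

```shell
# All Istio components and add-ons run in the istio-system namespace
kubectl get svc -n istio-system
```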
Exposing the Services
To expose the Prometheus, Grafana, and Servicegraph services, you will need to delete the existing services and recreate them with type NodePort instead of ClusterIP, so that each service can be accessed using the port displayed at the top of the instance page (as shown below).
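A sketch of one way to do this, assuming the default deployment names installed by the demo manifest in the istio-system namespace (grafana on port 3000, prometheus on 9090, servicegraph on 8088); the actual NodePort numbers are assigned by Kubernetes:

```shell
# Delete the ClusterIP services installed by the demo manifest
kubectl -n istio-system delete svc grafana prometheus servicegraph

# Re-expose the deployments as NodePort services
kubectl -n istio-system expose deployment grafana      --type=NodePort --port=3000
kubectl -n istio-system expose deployment prometheus   --type=NodePort --port=9090
kubectl -n istio-system expose deployment servicegraph --type=NodePort --port=8088
```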
You should be able to access Grafana page by clicking on “30004” port and Prometheus page by clicking on “30003”.
You can check Prometheus metrics by selecting the necessary option as shown below:
Under Grafana Page, you can add “Data Source” for Prometheus and ensure that the dashboard is up and running:
Congratulations! You have installed Istio on the Kubernetes cluster. The services listed below have been installed on the K8s playground:
Istio Controllers and related RBAC rules
Istio Custom Resource Definitions
Prometheus and Grafana for Monitoring
Jaeger for Distributed Tracing
Istio Sidecar Injector (we'll take a look at it in the next section)
Installing Istioctl
Istioctl is the configuration command-line utility of Istio. It helps to create, list, modify, and delete configuration resources in the Istio system.
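If you installed Istio from the release tarball as above, istioctl ships in its bin/ directory; a quick sanity check might look like this (the directory name assumes the 1.0.0 release):

```shell
# Make the bundled istioctl available on the PATH
export PATH="$PWD/istio-1.0.0/bin:$PATH"

# Confirm the client can talk to the mesh
istioctl version

# List routing resources (empty until we deploy an application)
istioctl get virtualservices
```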
Deploying the Sample BookInfo Application
Now that Istio is installed and verified, you can deploy one of the sample applications provided with the installation: BookInfo. This is a simple mock bookstore application made up of four services that provide a web product page, book details, reviews (with several versions of the review service), and ratings, all managed using Istio.
Deploying BookInfo Services
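A minimal deployment sketch, assuming you are in the istio-1.0.0 release directory and want automatic sidecar injection in the default namespace:

```shell
# Enable automatic Envoy sidecar injection for the default namespace
kubectl label namespace default istio-injection=enabled

# Deploy the four BookInfo services and their deployments
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
```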
Defining the Ingress Gateway:
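The release tarball ships a gateway definition for BookInfo; applying it and checking the result might look like this:

```shell
# Create the Gateway and VirtualService for BookInfo
kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml

# Verify the gateway was created
kubectl get gateway
```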
Verifying BookInfo Application
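You can verify that all BookInfo pods are running and then fetch the product page through the ingress gateway. This sketch derives the gateway URL from the ingress gateway's NodePort, assuming the http2 port name used by the demo manifest:

```shell
# All four services (plus their Envoy sidecars) should show Running
kubectl get pods

# Determine the ingress gateway URL for this node
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway \
  -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export GATEWAY_URL="$(hostname -i):${INGRESS_PORT}"

# A 200 response means the app is reachable through the mesh
curl -o /dev/null -s -w "%{http_code}\n" "http://${GATEWAY_URL}/productpage"
```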
Accessing it via Web URL
You should now be able to access the BookInfo sample application, as shown below:
I hope this Istio-on-Kubernetes tutorial helped you successfully install Istio on Kubernetes. In a future blog post, I will deep dive into Istio's internal architecture, traffic management, policies, and telemetry in detail.
Ajeet Singh Raina
Blog Author
Ajeet Singh Raina is a Docker Captain and a {code} Catalyst by DellEMC. He is currently working as a Technical Lead Engineer in the Enterprise Solutions Group at Dell R&D. He has over 10 years of solid understanding of a diverse range of IT infrastructure, systems management, systems integration, and quality assurance. He is a frequent blogger at www.collabnix.com and has contributed 150+ blogs on new and upcoming Docker releases and features. His personal blog attracts thousands of visitors and page views every month. His areas of interest include Docker in Swarm Mode, IoT, and legacy applications and cloud.