Prepare for your interviews with these top Microservices interview questions if you are keen on becoming a Microservices developer. These interview questions on Microservices, compiled by our experts, will help you ace your Microservices interview and let you work as a Microservices developer. Get prepared to answer questions on the basics of Microservices, automation in a Microservices-based architecture, how Docker works with microservices, and more. Prove yourself as a Microservices expert in your next interview!
Microservices is an architectural style that structures an application as a collection of loosely coupled, independently maintainable, testable and deployable services organized around business capabilities.
If you have a business focus and want to solve a use case or problem efficiently without technology boundaries, want to scale independent services infinitely, and need highly available stateless services that are easy to maintain and manage as well as independently testable, then you should go ahead and implement a microservices architecture.
There are two sides to testing microservices: testing each service in isolation, and testing the interactions between services.
One should have unit and integration tests covering all the functionality of a microservice, as well as component-based tests.
One should have contract tests to assert that the expectations of the client are not broken. End-to-end tests for the microservices, however, should cover only the critical flows, as these can be time-consuming. The contract tests can come from two sides: consumer-driven contract tests and consumer-side contract tests.
You can also leverage Command Query Responsibility Segregation (CQRS) to query multiple databases and get a combined view of persisted data.
In a cloud environment where Docker images are dynamically deployed to any machine or IP + port combination, it becomes difficult for dependent services to locate each other at runtime. Service discovery exists for exactly that purpose.
Service discovery is one of the services running under a microservices architecture; it registers entries for all of the services running under the service mesh. All of the actions are available through a REST API. Whenever a service comes up, it registers itself with the service discovery service, which then maintains a heartbeat to make sure the service is still alive. That serves the purpose of monitoring services as well. Service discovery also helps in distributing requests fairly across the deployed services.
Instead of clients connecting directly to a load balancer, in this architectural pattern the client connects to the service registry and tries to fetch data or service locations from it.
Once it gets all data, it does load balancing on its own and directly reaches out to the services it needs to talk to.
This is beneficial where multiple proxy layers would otherwise cause delays due to multilayer communication.
In server-side discovery, the proxy layer or API Gateway layer connects to the service registry and then makes a call to the appropriate service. Here the client connects only to that proxy layer or API Gateway layer.
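To make the client-side variant above concrete, here is a minimal sketch using Spring Cloud's generic DiscoveryClient; the service name inventory-service and the random instance selection are illustrative choices, not a prescribed implementation:

import java.net.URI;
import java.util.List;
import java.util.Random;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.stereotype.Component;

@Component
public class InventoryLocator {

    @Autowired
    private DiscoveryClient discoveryClient;

    // Client-side discovery: ask the registry for all live instances and
    // pick one ourselves instead of going through a server-side proxy.
    public URI locateInventoryService() {
        List<ServiceInstance> instances = discoveryClient.getInstances("inventory-service");
        ServiceInstance chosen = instances.get(new Random().nextInt(instances.size()));
        return chosen.getUri();
    }
}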
Microservices Architecture is a style of developing a scalable, distributed & highly automated system made up of many small autonomous services. It is not a technology but a trend that evolved out of SOA.
There is no single definition that fully describes the term "microservices". Some of the famous authors have tried to define it in the following way:
Microservices are a continuation of SOA.
SOA started gaining ground around 2006 due to its distributed architecture approach; it emerged to combat the problems of large monolithic applications.
Both of these architectures (SOA and microservices) are distributed and allow high scalability. In both, service components are accessed remotely through a remote access protocol (RMI, REST, SOAP, AMQP, JMS, etc.), and both are modular and loosely coupled by design. Microservices started gaining buzz in the late 2000s after the emergence of lightweight containers, Docker, and orchestration frameworks (Kubernetes, Mesos). Conceptually, however, microservices differ from SOA in a significant manner -
Bounded Context is a central pattern in Domain-Driven Design. In Bounded Context, everything related to the domain is visible within context internally but opaque to other bounded contexts. DDD deals with large models by dividing them into different Bounded Contexts and being explicit about their interrelationships.
Monolithic Conceptual Model Problem
A single conceptual model for the entire organization is very tricky to deal with. The only benefit of such a unified model is that integration is easy across the whole enterprise, but the drawbacks are many, for example:
Embracing microservices architecture brings many benefits compared to using monolith architecture in your application, including:
The decentralized teams working on individual microservices are mostly independent of each other, so changing a service does not require coordination with other teams. This can lead to significantly faster release cycles. It is very hard to achieve the same thing in a realistic monolithic application where a small change may require regression of the entire system.
The microservices style of system architecture emphasizes the culture of freedom, single responsibility, autonomy of teams, faster release iterations and technology diversification.
Unlike in monolithic applications, microservices are not bound to one technology stack (Java, .Net, Go, Erlang, Python, etc.). Each team is free to choose the technology stack best suited to its requirements. For example, we are free to choose Java for one microservice, C++ for another and Go for a third.
The term comes from an abbreviated compound of "development" and "operations". It is a culture that emphasizes effective communication and collaboration between product management, software development, and operations team. DevOps culture, if implemented correctly can lead to shorter development cycles and thus faster time to market.
Polyglot persistence is all about using different databases for different business needs within a single distributed system. We already have different database products in the market each for a specific business need, for example:
Relational databases are used for transactional needs (storing financial data, reporting requirements, etc.)
Document-oriented databases are used for document-oriented needs (e.g. a product catalog). Documents are schema-free, so changes in the schema can be accommodated in the application without much headache.
Key-value stores are used for needs such as user activity tracking and analytics; DynamoDB can store documents as well as key-value pairs.
In-memory distributed databases are used for needs such as user session tracking; they are mostly used as a distributed cache among multiple microservices.
Graph databases are used for needs such as social connections and recommendations.
Benefits of Polyglot Persistence are manifold and can be harvested in both monolithic as well as microservices architecture. Any decent-sized product will have a variety of needs which may not be fulfilled by a single kind of database alone. For example, if there are no transactional needs for a particular microservice, then it's way better to use a key-value pair or document-oriented NoSql rather than using a transactional RDBMS database.
References: https://martinfowler.com/bliki/PolyglotPersistence.html
The Twelve-Factor App is a recent methodology (and/or a manifesto) for writing web applications which run as a service.
One codebase, multiple deploys. This means that we should have only one codebase for the different versions of a microservice. Branches are ok, but different repositories are not.
Explicitly declare and isolate dependencies. The manifesto advises against relying on software or libraries pre-installed on the host machine. Every dependency should be declared in the pom.xml or build.gradle file.
Store config in the environment. Never commit your environment-specific configuration (most importantly: passwords) to the source code repo. Spring Cloud Config provides server-side and client-side support for externalized configuration in a distributed system. Using Spring Cloud Config Server, you have a central place to manage external properties for applications across all environments.
Treat backing services as attached resources. A microservice should treat external services equally, regardless of whether you manage them or some other team does. For example, never hard-code the absolute URL for a dependent service in your application code, even if the dependent microservice is developed by your own team. Instead of hard-coding the URL for another service in your RestTemplate, use Ribbon (with or without Eureka) to resolve the URL:
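A minimal sketch of what that could look like (the logical service name product-service is illustrative):

import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class RestClientConfig {

    // @LoadBalanced lets Ribbon resolve logical service names such as
    // http://product-service/... to real host:port pairs at runtime,
    // so no absolute URL is hard-coded anywhere.
    @Bean
    @LoadBalanced
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}

A caller can then use restTemplate.getForObject("http://product-service/products/1", String.class) without knowing where product-service is actually deployed.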
Strictly separate build and run stages. In other words, you should be able to build or compile the code, then combine that with specific configuration information to create a specific release, then deliberately run that release. It should be impossible to make code changes at runtime, e.g. by changing class files in Tomcat directly. There should always be a unique id for each version of a release, usually a timestamp. Release information should be immutable; any change should lead to a new release.
Execute the app as one or more stateless processes. This means that our microservices should be stateless in nature and should not rely on any state being present in memory or in the filesystem. Indeed, state does not belong in the code. So no sticky sessions, no in-memory cache, no local filesystem storage, etc. A distributed cache like Memcached, Ehcache or Redis should be used instead.
Export services via port binding. This is about having your application as standalone, instead of relying on a running instance of an application server, where you deploy. Spring boot provides a mechanism to create a self-executable uber jar that contains all dependencies and embedded servlet container (jetty or tomcat).
Scale-out via the process model. In the twelve-factor app, processes are a first-class citizen. This does not exclude individual processes from handling their own internal multiplexing, via threads inside the runtime VM or the async/evented model found in tools such as EventMachine, Twisted, or Node.js. But an individual VM can only grow so large (vertical scale), so the application must also be able to span multiple processes running on multiple physical machines. Twelve-factor app processes should never write PID files; rather, they should rely on the operating system's process manager, such as systemd, or on a distributed process manager on a cloud platform.
The twelve-factor app’s processes are disposable, meaning they can be started or stopped at a moment’s notice. This facilitates fast elastic scaling, rapid deployment of code or config changes, and robustness of production deploys. Processes should strive to minimize startup time. Ideally, a process takes a few seconds from the time the launch command is executed until the process is up and ready to receive requests or jobs. Short startup time provides more agility for the release process and scaling up; and it aids robustness because the process manager can more easily move processes to new physical machines when warranted.
Keep development, staging, and production as similar as possible. Your development environment should be almost identical to the production one (for example, to avoid "works on my machine" issues). That doesn't mean your OS has to be the OS running in production, though. Docker can be used to create logical separation for your microservices.
Treat logs as event streams, sending all logs only to stdout. Most Java developers would not agree with this advice, though.
Run admin/management tasks as one-off processes. For example, a database migration should be run using a separate process altogether.
Microservices architecture is meant for developing large distributed systems that scale safely. There are many benefits of microservices architecture over monoliths, for example:
Consider a typical monolith eShop application: it is usually a big war file deployed in a single JVM process (Tomcat/JBoss/WebSphere, etc.). Different components of a monolith communicate with each other using in-process communication like direct method invocation, and one or more databases are shared among the different components of the application.
Microservices should be autonomous and divided based on business capabilities. Each software component should have single well-defined responsibility (a.k.a Single Responsibility Principle) and the principle of Bounded Context (as defined by Domain Driven Design) should be used to create highly cohesive software components.
For example, an e-commerce site can be partitioned into following microservices based on its business capabilities:
Product catalog service: Responsible for product information, searching products, filtering products & product facets.
Inventory service: Responsible for managing the inventory of products (stock/quantity and facets).
Reviews service: Collecting feedback from users about the products.
Order service: Responsible for creating and managing orders.
Payment service: Processing payments, both online and offline (Cash on Delivery).
Shipment service: Managing and tracking shipments against orders.
Marketing service: Marketing products to relevant users.
User service: Managing users and their preferences.
Recommendation service: Suggesting new products based on the user's preferences or past purchases.
Notification service: Email and SMS notifications about orders, payments, and shipments.
The client application (browser, mobile app) will interact with these services via API gateway and render the relevant information to the user.
If you want to halt the service when it is not able to locate the config-server during bootstrap, then you need to configure the following property in microservice’s bootstrap.yml:
spring:
  cloud:
    config:
      fail-fast: true
Using this configuration will make microservice startup fail with an exception when config-server is not reachable during bootstrap.
We can enable a retry mechanism where the microservice will retry 6 times before throwing an exception. We just need to add spring-retry and spring-boot-starter-aop to the classpath to enable this feature.
build.gradle:

dependencies {
    compile('org.springframework.boot:spring-boot-starter-aop')
    compile('org.springframework.retry:spring-retry')
    ...
}
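If needed, the retry behaviour can also be tuned through properties; a minimal sketch (the values shown mirror the defaults and are illustrative):

spring:
  cloud:
    config:
      retry:
        max-attempts: 6
        initial-interval: 1000
        multiplier: 1.1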
A good, albeit non-specific, rule of thumb from Martin Fowler: as small as possible but as big as necessary to represent the domain concept they own.
Size should not be a determining factor in microservices, instead bounded context principle and single responsibility principle should be used to isolate a business capability into a single microservice boundary.
Microservices are usually small but not all small services are microservices. If any service is not following the Bounded Context Principle, Single Responsibility Principle, etc. then it is not a microservice irrespective of its size. So the size is not the only eligibility criteria for a service to become microservice.
In fact, the size of a microservice largely depends on the language you choose (Java, Scala, PHP), as some languages are more verbose than others.
Microservices are often integrated using a simple protocol like REST over HTTP. Other communication protocols can also be used for integration like AMQP, JMS, Kafka, etc.
The communication protocol can be broadly divided into two categories- synchronous communication and asynchronous communication.
RestTemplate, WebClient or FeignClient can be used for synchronous communication between two microservices. Ideally, we should minimize the number of synchronous calls between microservices because networks are brittle and they introduce latency. Ribbon, a client-side load balancer, can be used on top of RestTemplate for better utilization of resources. The Hystrix circuit breaker can be used to handle partial failures gracefully, without a cascading effect on the entire ecosystem. Distributed commits should be avoided at any cost; instead, we should opt for eventual consistency using asynchronous communication.
In this type of communication, the client does not wait for a response; it just sends the message to the message broker. An AMQP broker (like RabbitMQ) or Kafka can be used for asynchronous communication across microservices to achieve eventual consistency.
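As a sketch of this fire-and-forget style, here is a publisher using Spring AMQP; the exchange and routing key names are illustrative:

import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class OrderEventPublisher {

    @Autowired
    private RabbitTemplate rabbitTemplate;

    // The caller returns immediately; the broker takes responsibility for
    // delivering the event to whichever services have subscribed.
    public void publishOrderPlaced(String orderJson) {
        rabbitTemplate.convertAndSend("orders-exchange", "order.placed", orderJson);
    }
}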
In Orchestration, we rely on a central system to control and call other Microservices in a certain fashion to complete a given task. The central system maintains the state of each step and sequence of the overall workflow. In Choreography, each Microservice works like a State Machine and reacts based on the input from other parts. Each service knows how to react to different events from other systems. There is no central command in this case.
Orchestration is a tightly coupled approach and is considered an anti-pattern in a microservices architecture, whereas Choreography's loosely coupled approach should be adopted wherever possible.
Example
Let’s say we want to develop a microservice that sends product recommendation emails in a fictitious e-shop. In order to send recommendations, we need access to the user’s order history, which lies in a different microservice.
In the Orchestration approach, this new recommendation microservice will make synchronous calls to the order service and fetch the relevant data, then calculate the recommendations based on the user’s past purchases. Doing this for a million users will become cumbersome and will tightly couple the two microservices.
In the Choreography approach, we will use event-based asynchronous communication: whenever a user makes a purchase, the order service will publish an event. The recommendation service will listen to this event and start building the user’s recommendations. This is a loosely coupled and highly scalable approach. The event, in this case, does not describe the action to take, only the data.
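On the consuming side, a choreographed recommendation service could look roughly like this (queue name illustrative, again assuming Spring AMQP):

import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.stereotype.Service;

@Service
public class RecommendationListener {

    // Reacts to order events published by the order service; the order
    // service knows nothing about this consumer - there is no central command.
    @RabbitListener(queues = "recommendation-order-events")
    public void onOrderPlaced(String orderJson) {
        // rebuild the user's recommendations from the event data
    }
}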
There is no right answer to this question; there could be a release every ten minutes, every hour or once a week. It all depends on the extent of automation you have at the different levels of the software development lifecycle - build automation, test automation, deployment automation and monitoring - and, of course, on the business requirements: how small the low-risk changes are that you make in a single release.
In an ideal world where boundaries of each microservices are clearly defined (bounded context), and a given service is not affecting other microservices, you can easily achieve multiple deployments a day without major complexity.
Examples of deployment/release frequency
Cloud-Native Application (CNA) is a style of application development that encourages easy adoption of best practices in the areas of continuous delivery and distributed software development. These applications are designed specifically for a cloud computing architecture (AWS, Azure, Cloud Foundry, etc.).
DevOps, continuous delivery, microservices, and containers are the key concepts in developing cloud-native applications.
Spring Boot, Spring Cloud, Docker, Jenkins, Git are a few tools that can help you write Cloud-Native Application without much effort.
It is an architectural approach for developing a distributed system as a collection of small services. Each service is responsible for a specific business capability, runs in its own process and communicates via HTTP REST API or messaging (AMQP).
It is a collaboration between software developers and IT operations with a goal of constantly delivering high-quality software as per customer needs.
It's all about automated delivery of low-risk small changes to production, constantly. This makes it possible to collect feedback faster.
Containers (e.g. Docker) offer logical isolation to each microservice, thereby eliminating the "works on my machine" problem forever. They are much faster and more efficient than virtual machines.
Spring Boot along with Spring Cloud is a very good option to start building microservices using the Java language. There are a lot of modules available in Spring Cloud that provide boilerplate code for different design patterns of microservices, so Spring Cloud can really speed up the development process. Also, Spring Boot provides out-of-the-box support to embed a servlet container (Tomcat/Jetty/Undertow) inside an executable jar (uber jar), so that these jars can be run directly from the command line, eliminating the need to deploy war files into a servlet container.
You can also use a Docker container to ship and deploy the entire executable package onto a cloud environment. Docker can also help eliminate the "works on my machine" problem by providing logical separation for the runtime environment during the development phase. That way you can gain portability across on-premises and cloud environments.
Spring Boot makes it easy to create stand-alone, production-grade Spring-based applications that you can "just run" with an opinionated view of the Spring platform and third-party libraries so you can get started with minimum fuss.
Main features of Spring Boot
You can create a Spring Boot starter project by selecting the required dependencies for your project using the online tool hosted at https://start.spring.io/
The bare minimum dependency for any Spring Boot web application is:
dependencies {
    compile("org.springframework.boot:spring-boot-starter-web:2.0.4.RELEASE")
}

The main Java class for the Spring Boot application will look something like the following:

import org.springframework.boot.*;
import org.springframework.boot.autoconfigure.*;
import org.springframework.stereotype.*;
import org.springframework.web.bind.annotation.*;

@Controller
@EnableAutoConfiguration
public class HelloWorldController {

    @RequestMapping("/")
    @ResponseBody
    String home() {
        return "Hello World!";
    }

    public static void main(String[] args) throws Exception {
        SpringApplication.run(HelloWorldController.class, args);
    }
}
You can directly run this class, without deploying it to a servlet container.
API Gateway is a special class of microservice that meets the needs of a single client application (such as an Android app, web app, Angular app, iPhone app, etc.) and provides it with a single entry point to the backend resources (microservices), applying cross-cutting concerns such as security, monitoring/metrics & resiliency.
A client application can access tens or hundreds of microservices concurrently with each request; the gateway aggregates the responses and transforms them to meet the client application's needs. An API Gateway can use a client-side load balancer library (Ribbon) to distribute load across instances in a round-robin fashion. It can also do protocol translation, i.e. HTTP to AMQP, if necessary, and can handle security for protected resources.
Features of API Gateway
As the name suggests, zero-downtime deployments do not bring outage in a production environment. It is a clever way of deploying your changes to production, where at any given point in time, at least one service will remain available to customers.
One way of achieving this is blue/green deployment. In this approach, two versions of a single microservice are deployed at a time. But only one version is taking real requests. Once the newer version is tested to the required satisfaction level, you can switch from older version to newer version.
You can run a smoke-test suite to verify that the functionality is running correctly in the newly deployed version. Based on the results of smoke-test, newer version can be released to become the live version.
Let's say you have two instances of a service running at the same time, and both are registered in the Eureka registry. Further, both instances are deployed using two distinct hostnames:
/src/main/resources/application.yml

spring.application.name: ticketBooks-service
---
spring.profiles: blue
eureka.instance.hostname: ticketBooks-service-blue.example.com
---
spring.profiles: green
eureka.instance.hostname: ticketBooks-service-green.example.com
Now the client app that needs to make api calls to the ticketBooks-service may look like below:
@RestController
@SpringBootApplication
@EnableDiscoveryClient
public class ClientApp {

    @Bean
    @LoadBalanced
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }

    @RequestMapping("/hit-some-api")
    public Object hitSomeApi() {
        return restTemplate().getForObject("https://ticketBooks-service/some-uri", Object.class);
    }
}
Now, when ticketBooks-service-green.example.com goes down for an upgrade, it gracefully shuts down and deletes its entry from the Eureka registry. But these changes will not be reflected in the ClientApp until it fetches the registry again (which happens every 30 seconds). So for up to 30 seconds, ClientApp's @LoadBalanced RestTemplate may send requests to ticketBooks-service-green.example.com even though it is down.
To fix this, we can use Spring Retry support in Ribbon client-side load balancer. To enable Spring Retry, we need to follow the below steps:
Add spring-retry to build.gradle dependencies
compile("org.springframework.boot:spring-boot-starter-aop") compile("org.springframework.retry:spring-retry")
Now enable spring-retry mechanism in ClientApp using @EnableRetry annotation, as shown below:
@EnableRetry
@RestController
@SpringBootApplication
@EnableDiscoveryClient
public class ClientApp {
    ...
}
Once this is done, Ribbon will automatically configure itself to use retry logic, and any failed request to ticketBooks-service-green.example.com will be retried against the next available instance (in round-robin fashion) by Ribbon. You can customize this behaviour using the below properties:
/src/main/resources/application.yml

ribbon:
  MaxAutoRetries: 5
  MaxAutoRetriesNextServer: 5
  OkToRetryOnAllOperations: true
  OkToRetryOnAllErrors: true
The deployment scenario becomes complex when there are database changes during the upgrade. There can be two different scenarios: 1. database change is backward compatible (e.g. adding a new table column) 2. database change is not compatible with an older version of the application (e.g. renaming an existing table column)
Complexity may be much greater in a realistic production app; such discussions are beyond the scope of this book.
ACID is an acronym for four primary attributes namely atomicity, consistency, isolation, and durability ensured by the database transaction manager.
Atomicity: In a transaction involving two or more entities, either all of the records are committed or none are.
Consistency: A database transaction must change affected data only in allowed ways, following specific rules including constraints, triggers, etc.
Isolation: Any transaction in progress (not yet committed) must remain isolated from any other transaction.
Durability: Committed records are saved by the database such that, even in case of a failure or database restart, the data is available in its correct state.
In a distributed system involving multiple databases, we have two options to achieve ACID compliance:
Two-phase commit (2PC) should ideally be discouraged in a microservices architecture due to its fragile and complex nature. We can achieve some level of ACID compliance in distributed systems through eventual consistency instead, and that is the right approach to take.
The Spring team has integrated a number of battle-tested open-source projects from companies like Pivotal and Netflix into a Spring project known as Spring Cloud. Spring Cloud provides libraries & tools to quickly build some of the common design patterns of a distributed system, including the following:
| Pattern Type | Pattern Name | Spring Cloud Library |
|---|---|---|
| Development Pattern | Distributed/versioned configuration management | Spring Cloud Config Server |
| Development Pattern | Core Microservices Patterns | Spring Boot |
| Development Pattern | Asynchronous/Distributed Messaging | Spring Cloud Stream (AMQP and Kafka) |
| Development Pattern | Inter-Service Communication | RestTemplate and Spring Cloud Feign |
| Routing Pattern | Service Registration & Discovery | Spring Cloud Netflix Eureka & Consul |
| Routing Pattern | Service Routing / API Gateway Pattern | Spring Cloud Netflix Zuul |
| Resiliency Pattern | Client-Side Load Balancing | Spring Cloud Netflix Ribbon |
| Resiliency Pattern | Circuit Breaker & Fallback Pattern | Spring Cloud Netflix Hystrix |
| Resiliency Pattern | Bulkhead Pattern | Spring Cloud / Spring Cloud Netflix Hystrix |
| Logging Patterns | Log Correlation | Spring Cloud Sleuth |
| Logging Patterns | Microservice Tracing | Spring Cloud Sleuth/Zipkin |
| Security Patterns | Authorization and Authentication | Spring Cloud Security OAuth2 |
| Security Patterns | Credentials Management | Spring Cloud Security OAuth2/JWT |
| Security Patterns | Distributed Sessions | Spring Cloud OAuth2 and Redis |
Spring Cloud makes it really easy to develop, deploy and operate JVM applications for the Cloud.
A microservice is a small, independently deployable service that performs a specific business function. Each microservice runs its own process and communicates with other services over a network, typically using lightweight protocols like HTTP.
Microservices break down an application into smaller, independent services, each handling a specific function. In contrast, monolithic architecture involves a single, large application where all components are interconnected and interdependent.
Benefits of microservices include improved scalability, easier maintenance, independent deployment, fault isolation, and the ability to use different technologies for different services.
Microservices communicate through lightweight protocols like HTTP/REST for synchronous communication and message brokers like RabbitMQ or Kafka for asynchronous communication.
REST (Representational State Transfer) is an architectural style that uses standard HTTP methods to enable communication between microservices, allowing them to perform CRUD (Create, Read, Update, Delete) operations on resources.
Microservices are monitored using tools like Prometheus, Grafana, ELK Stack, and Jaeger to track performance metrics, logs, and distributed tracing, ensuring the system's health and identifying issues.
An API Gateway acts as a single entry point for client requests, handling tasks such as routing, composition, protocol translation, and security, simplifying client interactions with microservices.
Microservices offer better scalability, flexibility, and resilience compared to monoliths. They allow independent deployment, easier maintenance, fault isolation, and the ability to use different technologies for different services.
Challenges include managing distributed data, ensuring consistency, handling inter-service communication, maintaining security, and dealing with operational complexity due to a large number of services.
Yes, each microservice can be written in a different programming language, allowing teams to choose the best technology for each service, known as polyglot programming.
When you are implementing a microservices architecture, there are challenges you need to deal with for every single microservice, and the interactions between services can create many more. If you pre-plan to overcome some of them and standardize them across all microservices, then it also becomes easy for developers to maintain the services.
Some of the most challenging things are testing, debugging, security, version management, communication (sync or async) and state maintenance. Some of the cross-cutting concerns that should be standardized are monitoring, logging, performance improvement, deployment and security.
It is a very subjective question, but to the best of my knowledge I can say that it should be based on the following criteria.
In real time, it can happen that a particular service is causing downtime while the other services are functioning as per mandate. Under such conditions, the particular service and its dependent services get affected by the downtime.
In order to solve this issue, there is a concept in the microservices architecture pattern called the circuit breaker. Any service calling a remote service can go through a proxy layer which acts like an electrical circuit breaker. If the remote service is slow or down for 'n' attempts, the proxy layer fails fast and keeps checking the remote service for its availability. The calling services should also handle the errors and provide retry logic. Once the remote service resumes, the services start working again and the circuit closes.
This way, all other functionality keeps working as expected; only the failing service and its dependent services are affected.
This is related to the automation for cross-cutting concerns. We can standardize some of the concerns like monitoring strategy, deployment strategy, review and commit strategy, branching and merging strategy, testing strategy, code structure strategies etc.
For standards, we can follow the twelve-factor application guidelines. If we follow them, we can definitely achieve great productivity from day one. We can also containerize our application to utilize the latest DevOps themes like dockerization, and use Mesos, Marathon or Kubernetes for orchestrating Docker images. Once we have dockerized the source code, we can use a CI/CD pipeline to deploy our newly created codebase. Within that, we can add mechanisms to test the application and make sure we measure the required metrics before deploying the code.
We can use strategies like blue-green deployment or canary deployment to deploy our code so that we know the impact the code might have before it goes live on all of the servers at the same time. We can do A/B testing and make sure that things are not broken when live. In order to reduce the burden on the IT team, we can use AWS or Google Cloud to deploy our solutions and keep them on autoscale to make sure that we have enough resources available to serve the traffic we are receiving.
This is a very interesting question. In a monolith, where an HTTP request waits for a response, the processing happens in memory, which makes sure that the transaction across all modules works at its best and everything is done according to expectation. But this becomes challenging in the case of microservices because all services are running independently, their datastores can be independent, and their REST APIs can be deployed on different endpoints. Each service does its bit without knowing the context of the other microservices.
In this case, we can use the following measures to make sure we are able to trace the errors easily.
It is an important design decision. The communication between services might or might not be necessary. It can happen synchronously or asynchronously. It can happen sequentially or it can happen in parallel. So, once we have decided what should be our communication mechanism, we can decide the technology which suits the best.
Here are some of the examples which you can consider.
There are mainly two ways to achieve authentication in microservices architecture.
All the microservices can use a central session store, and user authentication can be achieved this way. This approach works but has many drawbacks: the centralized session store must be protected, and services must connect to it securely. The application needs to manage the state of the user, so it is called a stateful session.
In this approach, unlike the traditional way, information in the form of a token is held by the clients, and the token is passed along with each request. A server can check the token and verify its validity, such as expiry. Once the token is validated, the identity of the user can be obtained from it. However, signing or encryption is required for security reasons. JWT (JSON Web Token) is the open standard widely used for this, mainly in stateless applications. Alternatively, you can use OAuth-based authentication mechanisms.
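A minimal sketch of stateless token verification, assuming the jjwt library (0.9.x API) and a shared HMAC secret; both choices are illustrative:

import io.jsonwebtoken.Claims;
import io.jsonwebtoken.Jwts;

public class TokenVerifier {

    private static final byte[] SECRET = "change-me".getBytes(); // hypothetical shared secret

    // parseClaimsJws validates the signature and expiry, and throws a
    // JwtException if the token is tampered with or expired.
    public String verifyAndGetUser(String token) {
        Claims claims = Jwts.parser()
                .setSigningKey(SECRET)
                .parseClaimsJws(token)
                .getBody();
        return claims.getSubject(); // the user's identity travels inside the token
    }
}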
Logging is a very important aspect of any application. If we have done proper logging in an application, it becomes easy to support its other aspects as well. For example, in order to debug issues or understand what business logic was executed, it becomes very critical to log important details.
Ideally, you should follow the following practices for logging.
Docker helps in many ways for microservices architecture.
As container based deployment involves a single image per microservice, it is a bad idea to bundle the configuration along with the image.
This approach is not at all scalable because we might have multiple environments and also we might have to take care of geographically distributed deployments where we might have different configurations as well.
Also, when an application and a cron application are part of the same codebase, additional care might be needed in production, as the configuration approach can have repercussions on how the crons are architected.
To solve this, we can put all our configuration in a centralized config service which can be queried by the application for all its configuration at runtime. Spring Cloud Config is one example of a service that provides this facility.
It also helps to secure the information, as the configuration might have passwords or access to reports or database access controls. Only trusted parties should be allowed to access these details for security reasons.
In a production environment, you don’t just deal with the application code/application server. You need to deal with API Gateway, Proxy Servers, SSL terminators, Application Servers, Database Servers, Caching Services, and other dependent services.
In a modern microservice architecture, where each microservice runs in a separate container, deploying and managing these containers is very challenging and can be error-prone.
Container orchestration solves this problem by managing the life cycle of a container and allows us to automate the container deployments.
It also helps in scaling the application: it can easily bring up additional containers whenever there is a high load on the application and, once the load goes down, scale down again by bringing containers down. This is helpful to adjust cost based on requirements.
Also in some cases, it takes care of internal networking between services so that you need not make any extra effort to do so. It also helps us to replicate or deploy the docker images at runtime without worrying about the resources. If you need more resources, you can configure that in orchestration services and it will be available/deployed on production servers within minutes.
An API Gateway is a service which sits in front of the exposed APIs and acts as an entry point for a group of microservices. The gateway can also hold minimal logic for routing calls to microservices and aggregating their responses.
One should avoid sharing a database between microservices; instead, APIs should be exposed to perform changes.
If there is any dependency between microservices, then the service holding the data should publish messages for any change in the data, which other services can consume to update their local state.
If consistency is required then microservices should not maintain local state and instead can pull the data whenever required from the source of truth by making an API call.
In a microservices architecture, it is possible that, due to service boundaries, you often need to update one or more entities when the state of another entity changes. In that case, one publishes a message, and a new event gets created and appended to the already executed events. In case of failure, one can replay all events in the same sequence to arrive at the desired state. You can think of event sourcing as your bank account statement.
You start your account with some initial money. Then all of the credit and debit events happen, and the latest state is generated by replaying all of the events one by one. When events become too many, the application can create a periodic snapshot of events so that there isn't any need to replay all of them again and again.
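To make the bank account analogy concrete, here is a toy sketch of deriving state by replaying events after a snapshot; it is framework-free and all names are illustrative:

import java.util.List;

public class AccountProjection {

    public interface Event { long amount(); }
    public record Credited(long amount) implements Event { }
    public record Debited(long amount) implements Event { }

    // State is never stored directly; it is derived by folding events,
    // starting from the last snapshot to avoid replaying the full history.
    public static long balance(long snapshotBalance, List<Event> eventsSinceSnapshot) {
        long balance = snapshotBalance;
        for (Event e : eventsSinceSnapshot) {
            balance += (e instanceof Credited) ? e.amount() : -e.amount();
        }
        return balance;
    }

    public static void main(String[] args) {
        // snapshot of 0, then credit 100 and debit 30 -> prints 70
        System.out.println(balance(0, List.of(new Credited(100), new Debited(30))));
    }
}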
Servers come and go in a cloud environment, and new instances of the same services can be deployed to cater to an increasing load of requests. So it becomes absolutely essential to have a service registry & discovery mechanism that can be queried to find the address (host, port & protocol) of a given server. We may also need to locate servers for client-side load balancing (Ribbon) and for handling failover gracefully (Hystrix).
Spring Cloud solves this problem by providing a few ready-made solutions for this challenge. There are mainly two options available for the service discovery - Netflix Eureka Server and Consul. Let's discuss both of these briefly:
Netflix Eureka Server
Eureka is a REST (Representational State Transfer) based service that is primarily used in the AWS cloud for locating services for the purpose of load balancing and failover of middle-tier servers. The main features of Netflix Eureka are:
Spring Cloud provides two dependencies - eureka-server and eureka-client. The eureka-server dependency is only required in the eureka server's build.gradle.
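For reference, the server-side declaration would look like the following (a sketch; the artifact follows the standard Spring Cloud Netflix naming):

build.gradle - Eureka Server
compile('org.springframework.cloud:spring-cloud-starter-netflix-eureka-server')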
On the other hand, each microservice needs to include the eureka-client dependency to enable eureka discovery.
build.gradle - Eureka Client (to be included in all microservices)
compile('org.springframework.cloud:spring-cloud-starter-netflix-eureka-client')
Eureka server provides a basic dashboard for monitoring the various instances and their health in the service registry. The UI is written in FreeMarker and provided out of the box without any extra configuration.
The dashboard contains a list of all services that are registered with the Eureka Server, with information such as zone, host, port, and protocol for each.
Consul Server
It is a REST-based tool for dynamic service registry. It can be used for registering a new service, locating a service and health checkup of a service.
You have the option to choose either of the above in your Spring Cloud-based distributed application. In this book, we will focus more on the Netflix Eureka Server option.
If you have 3 different environments (develop/stage/production) in your project setup, then you need to create three different config storage projects. So in total, you will have four projects:
config-server: The config-server itself, which can be deployed in each environment. This is the Java code without the configuration storage.
config-dev: The git storage for your development configuration. Each microservice in the development environment will fetch its config from this storage. This project has no Java code; it is meant to be used with config-server.
config-qa: Same as config-dev, but meant to be used only in the QA environment.
There are two main components in Eureka project: eureka-server and eureka-client.
eureka-server: The central server (one per zone) that acts as a service registry. All microservices register with this eureka server during app bootstrap.
Eureka also comes with a Java-based client component, the eureka-client, which makes interactions with the service much easier. The client also has a built-in load balancer that does basic round-robin load balancing. Each microservice in the distributed ecosystem must include this client to communicate and register with the eureka-server.
There is usually one eureka server cluster per region (US, Asia, Europe, Australia), which knows only about instances in its region. Services register with Eureka and then send heartbeats to renew their leases every 30 seconds. If a service fails to renew its lease a few times, it is taken out of the server registry in about 90 seconds. The registration information and the renewals are replicated to all the eureka nodes in the cluster. Clients from any zone can look up the registry information (which happens every 30 seconds) to locate their services (which could be in any zone) and make remote calls.
Eureka clients are built to handle the failure of one or more Eureka servers. Since Eureka clients have the registry cache information in them, they can operate reasonably well, even when all of the eureka servers go down.
Microservices often need to make remote network calls to other microservices running in different processes. Network calls can fail for many reasons, including -
This can lead to cascading failures in the calling service due to threads being blocked in the hung remote calls. A circuit breaker is a piece of software used to solve this problem. The basic idea is very simple: wrap a potentially failing remote call in a circuit breaker object that monitors for failures/timeouts. Once the failures reach a certain threshold, the circuit breaker trips, and all further calls to the circuit breaker return with an error, without the protected call being made at all. This mechanism can protect the system from the cascading effects of a single component failure and provides the option to gracefully degrade functionality.
A typical use of circuit breaker in microservices architecture looks like the following diagram-
Typical Circuit Breaker Implementation
Here a REST client calls the Recommendation Service, which further communicates with the Books Service using a circuit breaker call wrapper. As soon as the books-service API calls start to fail, the circuit breaker will trip (open) the circuit and will not make any further calls to book-service until the circuit is closed again.
Martin Fowler has beautifully explained this phenomenon in detail on his blog.
Martin Fowler on Circuit Breaker Pattern : https://martinfowler.com/bliki/CircuitBreaker.html
Circuit Breaker wraps the original remote calls inside it and if any of these calls fails, the failure is counted. When the service dependency is healthy and no issues are detected, the circuit breaker is in Closed state. All invocations are passed through to the remote service.
If the failure count exceeds a specified threshold within a specified time period, the circuit trips into the Open State. In the Open State, calls always fail immediately without even invoking the actual remote call. The following factors are considered for tripping the circuit to Open State -
After a predetermined period of time (by default 5 seconds), the circuit transitions into a half-open state. In this state, calls are again attempted to the remote dependency. Thereafter the successful calls transition the circuit breaker back into the closed state, while the failed calls return the circuit breaker into the open state.
Benefits:
Config first bootstrap and discovery first bootstrap are two different approaches for using Spring Cloud Config client in Spring Cloud-powered microservices. Let’s discuss both of them:
Config First Bootstrap
This is the default behavior for any Spring Boot application where the Spring Cloud Config client is on the classpath. When a config client starts up, it binds to the Config Server using the bootstrap configuration property and initializes the Spring Environment with remote property sources.
The only configuration that each microservice (except config-server) needs to provide is the following:
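A minimal sketch of that configuration, assuming an application named product-service and a config-server at localhost:8888 (both illustrative):

/src/main/resources/bootstrap.yml

spring:
  application:
    name: product-service
  cloud:
    config:
      uri: http://localhost:8888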
Discovery First Bootstrap
If you are using Spring Cloud Netflix and Eureka Service Discovery then you can have Config Server register with the Discovery Service and let all clients get access to config server via discovery service.
This is not the default configuration in Spring Cloud applications, so we need to manually enable it using the below property in bootstrap.yml
Listing 17. /src/main/resources/bootstrap.yml

spring:
  cloud:
    config:
      discovery:
        enabled: true

This property should be provided by all microservices so that they can take advantage of the discovery first approach.
The benefit of this approach is that now config-server can change its host/port without other microservices knowing about it since each microservice can get the configuration via eureka service now. The downside of this approach is that an extra network round trip is required to locate the service registration at app startup.
Strangulation is used to slowly decommission an older system and migrate the functionality to a newer version of microservices.
Normally one endpoint is Strangled at a time, slowly replacing all of them with the newer implementation. Zuul Proxy (API Gateway) is a useful tool for this because we can use it to handle all traffic from clients of the old endpoints, but redirect only selected requests to the new ones.
Let’s take an example use-case:
/src/main/resources/application.yml

zuul:
  routes:
    first:
      path: /first/**
      url: http://first.example.com    (1)
    legacy:
      path: /**
      url: http://legacy.example.com   (2)
This configuration is for the API Gateway (zuul reverse proxy): we are strangling the selected endpoints /first/** of the legacy app hosted at http://legacy.example.com, slowly moving them to the newly created microservice with the external URL http://first.example.com (1), while all remaining traffic still goes to the legacy app (2).
Hystrix is Netflix's implementation of the circuit breaker pattern; it also employs the bulkhead design pattern by operating each circuit breaker within its own thread pool. It also collects many useful metrics about the circuit breaker's internal state, including -
All these metrics can be aggregated using another Netflix OSS project called Turbine. Hystrix dashboard can be used to visualize these aggregated metrics, providing excellent visibility into the overall health of the distributed system.
Hystrix can be used to specify the fallback method for execution in case the actual method call fails. This can be useful for graceful degradation of functionality in case of failure in remote invocation.
1) Add the Hystrix library to build.gradle:

dependencies {
    compile('org.springframework.cloud:spring-cloud-starter-hystrix')
}
2) Enable the circuit breaker in the main application:
@EnableCircuitBreaker
@RestController
@SpringBootApplication
public class ReadingApplication {
    ...
}
3) Use the HystrixCommand fallback method execution:
@HystrixCommand(fallbackMethod = "reliable")
public String readingList() {
    URI uri = URI.create("http://localhost:8090/recommended");
    return this.restTemplate.getForObject(uri, String.class);
}

public String reliable() {
    return "Cached recommended response";
}
The Hystrix library makes our distributed system resilient (adaptable & quick to recover) to failures. It provides three main features:
It helps stop cascading failures, provide decent fallbacks and graceful degradation of service functionality to confine failures. It works on the idea of fail-fast and rapid recovery. Two different options namely Thread isolation and Semaphore isolation are available for use to confine failures.
Using real-time metrics, you can remain alert, make decisions, affect changes and see results.
Parallel execution, concurrent aware request caching and finally automated batching through request collapsing improves the concurrency performance of your application.
Let's say we want to handle service-to-service failures gracefully without using the circuit breaker pattern. The naive approach would be to wrap the REST call in a try-catch clause. But a circuit breaker does a lot more than a try-catch can accomplish -
So instead of wrapping service to service calls with try/catch clause, we must use the circuit breaker pattern to make our system resilient to failures.
The bulkhead implementation in Hystrix limits the number of concurrent calls to a component/service. This way, the number of resources (typically threads) that are waiting for a reply from the component/service is limited.
Let's assume we have a fictitious e-commerce web application in which the WebFront communicates with 3 different components using remote network calls (REST over HTTP).
Now let's say that, due to some problem in the Product Review Service, all requests to this service start to hang (or time out), eventually causing all request-handling threads in the WebFront application to hang while waiting for an answer from the Reviews Service. This would make the entire WebFront application non-responsive. The resulting behavior would be the same if the request volume is high and the Reviews Service is taking time to respond to each request.
The Hystrix Solution
Hystrix's implementation of the bulkhead pattern would limit the number of concurrent calls to each component and would have saved the application in this case by gracefully degrading functionality. Assume we have 30 total request-handling threads and there is a limit of 10 concurrent calls to the Reviews Service. Then at most 10 request-handling threads can hang when calling the Reviews Service; the other 20 threads can still handle requests and use the Products and Orders Services. This approach will keep our WebFront responsive even if there is a failure in the Reviews Service.
Martin Fowler introduced the concept of "smart endpoints & dumb pipes" while describing microservices architecture.
To give context, one of the main characteristics of a Unix-based system is building small utilities and connecting them using pipes. For example, a very popular way of finding all Java processes on a Linux system is the shell command pipeline ps -elf | grep java.
Here two commands are separated by a pipe. The pipe's job is to forward the output of the first command as input to the second command, nothing more: it is a dumb pipe which has no business logic except the routing of data from one utility to another.
In his article, Martin Fowler compares the Enterprise Service Bus (ESB) to ZeroMQ/RabbitMQ: an ESB is a pipe with a lot of logic inside it, while ZeroMQ has no logic except the persistence/routing of messages. An ESB is a fat layer that does many things such as security checks, routing, business flows & validations, and data transformations. It is a kind of smart pipe that does a lot of work before passing data to the next endpoint (service). Smart endpoints & dumb pipes advocate the exact opposite idea: the communication channel should be stripped of any business-specific logic and should only distribute messages between components. The components (endpoints/services) should themselves do all the data validation, business processing, security checks, etc. on those incoming messages.
Microservices teams should follow the principles and protocols that the World Wide Web & Unix are built on.
There are different ways to handle the versioning of your REST API to allow older consumers to keep consuming the older endpoints. The ideal practice is that any non-backward-compatible change in a given REST endpoint shall lead to a new versioned endpoint.
Different mechanisms of versioning are:
The most common approach is URL versioning itself. A versioned URL looks like the following:
Versioned URL
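For illustration (hypothetical endpoints), existing consumers keep using v1 while breaking changes land in v2:

https://api.example.com/v1/products
https://api.example.com/v2/products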
As an API developer you must ensure that only backward-compatible changes are accommodated in a single version of URL. Consumer-Driven-Tests can help identify potential issues with API upgrades at an early stage.
Using config-server, it's possible to refresh the configuration on the fly. The configuration changes will only be picked up by beans declared with the @RefreshScope annotation.
The following code illustrates the same. The property message is defined in the config-server and changes to this property can be made at runtime without restarting the microservices.
package hello;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
public class ConfigClientApplication {
    public static void main(String[] args) {
        SpringApplication.run(ConfigClientApplication.class, args);
    }
}

@RefreshScope   // (1)
@RestController
class MessageRestController {

    @Value("${message:Hello World}")
    private String message;

    @RequestMapping("/message")
    String getMessage() {
        return this.message;
    }
}
@RefreshScope makes it possible to dynamically reload the configuration for this bean.
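A refresh is typically triggered by POSTing to the refresh endpoint exposed by Spring Boot Actuator, assuming that endpoint is enabled and exposed (the path is /refresh in Boot 1.x and /actuator/refresh in Boot 2.x):

curl -X POST http://localhost:8080/actuator/refresh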
The @HystrixCommand annotation provides the attribute ignoreExceptions that can be used to provide a list of ignored exceptions.
Code
@Service
public class HystrixService {

    @Autowired
    private LoadBalancerClient loadBalancer;

    @Autowired
    private RestTemplate restTemplate;

    @HystrixCommand(fallbackMethod = "reliable",
            ignoreExceptions = {IllegalStateException.class,
                    MissingServletRequestParameterException.class,
                    TypeMismatchException.class})
    public String readingList() {
        ServiceInstance instance = loadBalancer.choose("product-service");
        URI uri = URI.create("http://product-service/product/recommended");
        return this.restTemplate.getForObject(uri, String.class);
    }

    public String reliable(Throwable e) {
        return "Cloud Native Java (O'Reilly)";
    }
}
In the above example, if the actual method call throws IllegalStateException, MissingServletRequestParameterException or TypeMismatchException, then Hystrix will not trigger the fallback logic (the reliable method); instead, the actual exception will be wrapped inside HystrixBadRequestException and re-thrown to the caller. This is taken care of by the javanica library under the hood.
In a microservices architecture, each microservice shall own its private data, which can only be accessed by the outside world through the owning service. If we start sharing a microservice's private datastore with other services, then we will violate the principle of Bounded Context.
Practically we have three approaches -
Microservices Architecture can become cumbersome & unmanageable if not done properly. There are best practices that help design a resilient & highly scalable system. The most important ones are
Get to know the domain of your business; that's very important. Only then will you be able to define the bounded context and partition your microservices correctly based on business capabilities.
Typically, everything from continuous integration all the way to continuous delivery and deployment should be automated. Otherwise, it will be a big pain to manage a large fleet of microservices.
We never know where a new instance of a particular microservice will be spun up for scaling out or for handling failure, so maintaining state inside a service instance is a very bad idea.
Failures are inevitable in distributed systems, so we must design our system to handle failures gracefully. Failures can be of different types and must be dealt with accordingly, for example -
We should try to make our services backward compatible; explicit versioning must be used to cater to different versions of the REST endpoints.
Asynchronous communication should be preferred over synchronous communication in inter microservice communication. One of the biggest advantages of using asynchronous messaging is that the service does not block while waiting for a response from another service.
Eventual consistency is a consistency model used in distributed computing to achieve high availability that informally guarantees that, if no new updates are made to a given data item, eventually all accesses to that item will return the last updated value.
Since networks are brittle, we should always design our services to accept repeated calls without any side effects. We can add a unique identifier to each request so that the service can ignore duplicate requests sent over the network due to failures and retry logic.
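A minimal sketch of such an idempotent endpoint, assuming the client sends a unique Idempotency-Key header (all names here are hypothetical, and a real implementation would persist the seen keys in a durable store rather than in memory):

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestHeader;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class PaymentController {

    // Keys already processed; kept in memory only for illustration.
    private final Set<String> processedKeys = ConcurrentHashMap.newKeySet();

    @PostMapping("/payments")
    public ResponseEntity<String> pay(@RequestHeader("Idempotency-Key") String key,
                                      @RequestBody String payload) {
        if (!processedKeys.add(key)) {
            // Duplicate delivery (e.g. a network retry): acknowledge without re-processing.
            return ResponseEntity.ok("already processed");
        }
        // ... perform the actual payment processing exactly once ...
        return ResponseEntity.ok("processed");
    }
}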
In monolithic applications, sharing is considered to be a best practice but that's not the case with Microservices. Sharing results in a violation of Bounded Context Principle, so we shall refrain from creating any single unified shared model that works across microservices. For example, if different services need a common Customer model, then we should create one for each microservice with just the required fields for a given bounded context rather than creating a big model class that is shared in all services.
The more dependencies we have between services, the harder it is to isolate the service changes, making it difficult to make a change in a single service without affecting other services. Also, creating a unified model that works in all services brings complexity and ambiguity to the model itself, making it hard for anyone to understand the model.
In a way, we want to violate the DRY principle in microservices architecture when it comes to domain models.
Caching is a performance-improvement technique for getting query results from a service faster. It helps minimize calls to the network, database, etc. We can use caching at multiple levels in microservices architecture -
Swagger is a very good open-source tool for documenting APIs provided by microservices. It provides very easy-to-use interactive documentation.
By using Swagger annotations on REST endpoints, API documentation can be auto-generated and exposed over a web interface. Internal and external teams can use the web interface to see the list of APIs, their inputs & error codes. They can even invoke the endpoints directly from the web interface to get the results.
Swagger UI is a very powerful tool for your microservices consumers to help them understand the set of endpoints provided by a given microservice.
Basic Authentication is natively supported by almost all servers and clients; even Spring Security has very good support for it, and it is configured out of the box. But it is not a good fit for microservices due to many reasons, including -
There are 3 parts in every JWT - the header, the claim (payload) and the signature. These 3 parts are separated by a dot. The entire JWT is encoded in Base64 format.
JWT = {header}.{payload}.{signature}
A typical JWT is shown here for reference. The entire JWT is Base64-encoded to make it compatible with the HTTP protocol; the encoded form is three Base64 strings joined by dots. Decoded, it consists of the following parts:
Header
The header contains the signing algorithm (e.g. HS256) and the token type (e.g. JWT):
{ "alg": "HS256", "typ": "JWT" }
Claim
The claim part has expiry, issuer, user_id, scope, roles, client_id, etc. It is encoded as a JSON object. You can add custom attributes to the claim; this is the information that you want to exchange with the third party:
{ "uid": "2ce35360-ef8e-4f69-a8d7-b5d1aec78759", "user_name": "user@mail.com", "scope": ["read"], "exp": 1520017228, "authorities": ["ROLE_USER","ROLE_ADMIN"], "jti": "5b42ca29-8b61-4a3a-8502-53c21e85a117", "client_id": "acme-app" }
Signature
The signature is typically a one-way hash of (header + payload), calculated using the HMAC SHA256 algorithm. The secret used for signing the claim should be kept private. Public/private key pairs can also be used to sign the claim instead of symmetric cryptography.
HMACSHA256(base64(header) + "." + base64(payload), "secret")
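As an illustration, a signed JWT can be produced with the open-source jjwt library (a sketch only; the subject, claims and secret are placeholders):

import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.SignatureAlgorithm;
import java.util.Date;

public class JwtIssuer {

    public static String issue(String secret) {
        return Jwts.builder()
                .setSubject("user@mail.com")                                     // user identity
                .claim("scope", "read")                                          // custom claim
                .setExpiration(new Date(System.currentTimeMillis() + 3600_000))  // 1 hour expiry
                .signWith(SignatureAlgorithm.HS256, secret)                      // HMAC-SHA256 signature
                .compact();                                                      // header.payload.signature
    }
}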
OAuth2.0 is a delegation protocol where the Client (Mobile App or web app) does not need to know about the credentials of Resource Owner (end-user).
OAuth2 defines four roles - Resource Owner, Client, Resource Server and Authorization Server.
Important tools and libraries for testing Spring-based microservices are -
JUnit - the standard test runner.
JUnit 5 - the next-generation test runner.
Hamcrest - declarative matchers and assertions.
REST Assured - for writing REST-API-driven end-to-end tests.
Mockito - for mocking dependencies.
WireMock - for stubbing third-party services.
Hoverfly - to create API simulations for end-to-end tests.
Spring Test & Spring Boot Test - for writing Spring integration tests; includes MockMvc, TestRestTemplate and WebClient-like features.
JSONassert - an assertion library for JSON.
Pact - the Pact family of frameworks provides support for consumer-driven contract testing.
Selenium - automates browsers; used for end-to-end automated UI testing.
Gradle - helps build, automate and deliver software, faster.
IntelliJ IDEA - IDE for Java development.
The spring-boot-starter-test starter will import two Spring Boot test modules - spring-boot-test & spring-boot-test-autoconfigure - as well as JUnit, AssertJ, Hamcrest, Mockito, JSONassert, Spring Test, Spring Boot Test and a number of other useful libraries.
There are many useful scenarios for leveraging the power of JWT-
Authentication is one of the most common scenarios for using JWT, specifically in microservices architecture (but not limited to it). In microservices, the OAuth2 server generates a JWT at the time of login, and all subsequent requests can include the JWT access token as the means of authentication. JWT can also be used to implement Single Sign-On by sharing it between different applications hosted in different domains.
JWTs can be signed using public/private key pairs, so you can be sure that the senders are who they say they are. Hence a JWT is a good way of sharing information between two parties. An example use case could be -
There is no single definition that fully describes the term "microservices". Some of the famous authors have tried to define it in the following way:
Microservices are a continuation of SOA.
SOA emerged around 2006 to combat the problems of large monolithic applications and started gaining ground due to its distributed architecture approach.
SOA and microservices share one thing in common: both are distributed architectures that allow high scalability. In both, service components are accessed remotely through a remote access protocol (RMI, REST, SOAP, AMQP, JMS, etc.), and both are modular and loosely coupled by design. Microservices started gaining buzz in the late 2000s after the emergence of lightweight containers, Docker, and orchestration frameworks (Kubernetes, Mesos). Microservices differ from SOA in a significant manner conceptually -
Bounded Context is a central pattern in Domain-Driven Design. In Bounded Context, everything related to the domain is visible within context internally but opaque to other bounded contexts. DDD deals with large models by dividing them into different Bounded Contexts and being explicit about their interrelationships.
Monolithic Conceptual Model Problem
A single conceptual model for the entire organization is very tricky to deal with. The only benefit of such a unified model is that integration is easy across the whole enterprise, but the drawbacks are many, for example:
Embracing microservices architecture brings many benefits compared to using monolith architecture in your application, including:
The decentralized teams working on individual microservices are mostly independent of each other, so changing a service does not require coordination with other teams. This can lead to significantly faster release cycles. It is very hard to achieve the same thing in a realistic monolithic application where a small change may require regression of the entire system.
The microservices style of system architecture emphasizes the culture of freedom, single responsibility, autonomy of teams, faster release iterations and technology diversification.
Unlike monolithic applications, microservices are not bound to one technology stack (Java, .Net, Go, Erlang, Python, etc.). Each team is free to choose the technology stack that is best suited to its requirements. For example, we are free to choose Java for one microservice, C++ for another and Go for yet another.
The term comes from an abbreviated compound of "development" and "operations". It is a culture that emphasizes effective communication and collaboration between product management, software development, and operations team. DevOps culture, if implemented correctly can lead to shorter development cycles and thus faster time to market.
Polyglot persistence is all about using different databases for different business needs within a single distributed system. We already have different database products in the market each for a specific business need, for example:
Relational databases are used for transactional needs (storing financial data, reporting requirements, etc.)
Document-oriented databases are used for document-oriented needs (e.g. a product catalog). Documents are schema-free, so changes in the schema can be accommodated in the application without much headache.
Key-value-pair-based databases are used for needs such as user activity tracking and analytics; DynamoDB can store documents as well as key-value pairs.
In-memory distributed databases are used for needs such as user session tracking; they are mostly used as a distributed cache among multiple microservices.
Graph databases are used for social connections, recommendations, etc.
Benefits of Polyglot Persistence are manifold and can be harvested in both monolithic as well as microservices architecture. Any decent-sized product will have a variety of needs which may not be fulfilled by a single kind of database alone. For example, if there are no transactional needs for a particular microservice, then it's way better to use a key-value pair or document-oriented NoSql rather than using a transactional RDBMS database.
References: https://martinfowler.com/bliki/PolyglotPersistence.html
The Twelve-Factor App is a recent methodology (and/or a manifesto) for writing web applications which run as a service.
One codebase, multiple deploys. This means that we should have only one codebase for different versions of a microservice. Branches are ok, but different repositories are not.
Explicitly declare and isolate dependencies. The manifesto advises against relying on software or libraries on the host machine. Every dependency should be put into pom.xml or build.gradle file.
Store config in the environment. Never commit your environment-specific configuration (most importantly: passwords) to the source code repo. Spring Cloud Config provides server and client-side support for externalized configuration in a distributed system. Using Spring Cloud Config Server, you have a central place to manage external properties for applications across all environments.
Treat backing services as attached resources. A microservice should treat external services equally, regardless of whether you manage them or some other team does. For example, never hard-code the absolute URL of a dependent service in your application code, even if the dependent microservice is developed by your own team. Instead of hard-coding the URL of another service in your RestTemplate, use Ribbon (with or without Eureka) to define the URL:
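A sketch of this with a Ribbon-backed RestTemplate, where product-service is a logical service id resolved at runtime (the id is an example, not a prescribed name):

import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class RestClientConfig {

    @Bean
    @LoadBalanced // Ribbon resolves the logical service name to an actual host:port
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}

// Usage elsewhere: the registered service id replaces any hard-coded host
// restTemplate.getForObject("http://product-service/products", String.class);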
Strictly separate build and run stages. In other words, you should be able to build or compile the code, then combine that with specific configuration information to create a specific release, then deliberately run that release. It should be impossible to make code changes at runtime, e.g. by changing class files in Tomcat directly. There should always be a unique ID for each version of a release, mostly a timestamp. Release information should be immutable; any change should lead to a new release.
Execute the app as one or more stateless processes. This means that our microservices should be stateless in nature and should not rely on any state being present in memory or in the filesystem. Indeed, the state does not belong in the code. So no sticky sessions, no in-memory cache, no local filesystem storage, etc. A distributed cache like Memcached, Ehcache or Redis should be used instead.
Export services via port binding. This is about having your application as standalone, instead of relying on a running instance of an application server, where you deploy. Spring boot provides a mechanism to create a self-executable uber jar that contains all dependencies and embedded servlet container (jetty or tomcat).
Scale out via the process model. In the twelve-factor app, processes are a first-class citizen. This does not exclude individual processes from handling their own internal multiplexing, via threads inside the runtime VM, or the async/evented model found in tools such as EventMachine, Twisted, or Node.js. But an individual VM can only grow so large (vertical scale), so the application must also be able to span multiple processes running on multiple physical machines. Twelve-factor app processes should never write PID files; rather, they should rely on the operating system's process manager (such as systemd) or a distributed process manager on a cloud platform.
The twelve-factor app’s processes are disposable, meaning they can be started or stopped at a moment’s notice. This facilitates fast elastic scaling, rapid deployment of code or config changes, and robustness of production deploys. Processes should strive to minimize startup time. Ideally, a process takes a few seconds from the time the launch command is executed until the process is up and ready to receive requests or jobs. Short startup time provides more agility for the release process and scaling up; and it aids robustness because the process manager can more easily move processes to new physical machines when warranted.
Keep development, staging, and production as similar as possible. Your development environment should be almost identical to the production one (for example, to avoid some "works on my machine" issues). That doesn't mean your OS has to be the OS running in production, though. Docker can be used for creating logical separation for your microservices.
Treat logs as event streams, sending all logs only to stdout. Most Java developers would not agree with this advice, though.
Run admin/management tasks as one-off processes. For example, a database migration should be run using a separate process altogether.
Microservices architecture is meant for developing large distributed systems that scale safely. There are many benefits of microservices architecture over monoliths, for example:
To illustrate, a typical monolithic eShop application is usually a big WAR file deployed in a single JVM process (Tomcat/JBoss/WebSphere, etc.). Different components of a monolith communicate with each other using in-process communication like direct method invocation, and one or more databases are shared among the different components of the application.
Microservices should be autonomous and divided based on business capabilities. Each software component should have single well-defined responsibility (a.k.a Single Responsibility Principle) and the principle of Bounded Context (as defined by Domain Driven Design) should be used to create highly cohesive software components.
For example, an e-commerce site can be partitioned into following microservices based on its business capabilities:
Product catalog service - responsible for product information, searching products, filtering products & product facets.
Inventory service - responsible for managing the inventory of products (stock/quantity and facets).
Product review service - collecting feedback from users about the products.
Order service - responsible for creating and managing orders.
Payment service - processing payments, both online and offline (Cash On Delivery).
Shipment service - managing and tracking shipments against orders.
Marketing service - marketing products to relevant users.
User management service - managing users and their preferences.
Recommendation service - suggesting new products based on the user's preferences or past purchases.
Notification service - email and SMS notifications about orders, payments, and shipments.
The client application (browser, mobile app) will interact with these services via API gateway and render the relevant information to the user.
If you want to halt the service when it is not able to locate the config-server during bootstrap, then you need to configure the following property in microservice’s bootstrap.yml:
spring:
  cloud:
    config:
      fail-fast: true
Using this configuration will make microservice startup fail with an exception when config-server is not reachable during bootstrap.
We can enable a retry mechanism where the microservice will retry up to 6 times before throwing an exception. We just need to add spring-retry and spring-boot-starter-aop to the classpath to enable this feature.
build.gradle:

...
dependencies {
    compile('org.springframework.boot:spring-boot-starter-aop')
    compile('org.springframework.retry:spring-retry')
    ...
}
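The retry behaviour itself can be tuned through properties; a sketch with the documented defaults written out explicitly:

spring:
  cloud:
    config:
      fail-fast: true
      retry:
        max-attempts: 6        # total attempts before giving up
        initial-interval: 1000 # first backoff in milliseconds
        multiplier: 1.1        # backoff multiplier
        max-interval: 2000     # upper bound for the backoff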
A good, albeit non-specific, rule of thumb from Martin Fowler: as small as possible but as big as necessary to represent the domain concept they own.
Size should not be a determining factor in microservices; instead, the bounded context principle and the single responsibility principle should be used to isolate a business capability into a single microservice boundary.
Microservices are usually small but not all small services are microservices. If any service is not following the Bounded Context Principle, Single Responsibility Principle, etc. then it is not a microservice irrespective of its size. So the size is not the only eligibility criteria for a service to become microservice.
In fact, the size of a microservice largely depends on the language (Java, Scala, PHP) you choose, as some languages are more verbose than others.
Microservices are often integrated using a simple protocol like REST over HTTP. Other communication protocols can also be used for integration like AMQP, JMS, Kafka, etc.
The communication protocol can be broadly divided into two categories- synchronous communication and asynchronous communication.
RestTemplate, WebClient, or FeignClient can be used for synchronous communication between two microservices. Ideally, we should minimize the number of synchronous calls between microservices because networks are brittle and they introduce latency. Ribbon, a client-side load balancer, can be used on top of RestTemplate for better utilization of resources. A Hystrix circuit breaker can be used to handle partial failures gracefully, without a cascading effect on the entire ecosystem. Distributed commits should be avoided at any cost; instead, we should opt for eventual consistency using asynchronous communication.
In this type of communication, the client does not wait for a response; instead, it just sends the message to the message broker. AMQP (like RabbitMQ) or Kafka can be used for asynchronous communication across microservices to achieve eventual consistency.
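A minimal sketch of the publishing side using Spring AMQP (the exchange and routing key names are hypothetical):

import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.stereotype.Service;

@Service
public class OrderEventPublisher {

    private final RabbitTemplate rabbitTemplate;

    public OrderEventPublisher(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    // Fire-and-forget: the caller does not block waiting for any consumer.
    public void orderPlaced(String orderId) {
        rabbitTemplate.convertAndSend("order-events", "order.placed", orderId);
    }
}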
In Orchestration, we rely on a central system to control and call other Microservices in a certain fashion to complete a given task. The central system maintains the state of each step and sequence of the overall workflow. In Choreography, each Microservice works like a State Machine and reacts based on the input from other parts. Each service knows how to react to different events from other systems. There is no central command in this case.
Orchestration is a tightly coupled approach and is an anti-pattern in a microservices architecture, whereas Choreography's loosely coupled approach should be adopted wherever possible.
Example
Let's say we want to develop a microservice that will send product recommendation emails in a fictitious e-shop. In order to send recommendations, we need access to the user's order history, which lies in a different microservice.
In the Orchestration approach, this new recommendation microservice will make synchronous calls to the order service and fetch the relevant data, then calculate the recommendations based on past purchases. Doing this for a million users will become cumbersome and will tightly couple the two microservices.
In the Choreography approach, we use event-based asynchronous communication: whenever a user makes a purchase, the order service publishes an event. The recommendation service listens to this event and starts building the user's recommendations. This is a loosely coupled and highly scalable approach. The event, in this case, does not describe the action to take, just the data.
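Continuing the sketch above, the recommendation service would simply listen for the event; the queue name is again hypothetical:

import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.stereotype.Service;

@Service
public class RecommendationEventHandler {

    // Invoked whenever the order service publishes an order event; no central coordinator is involved.
    @RabbitListener(queues = "recommendation.order-events")
    public void onOrderPlaced(String orderId) {
        // rebuild recommendations for the user who placed this order ...
    }
}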
There is no right answer to this question; there could be a release every ten minutes, every hour or once a week. It all depends on the extent of automation you have at the different levels of the software development lifecycle - build automation, test automation, deployment automation and monitoring - and, of course, on the business requirements: how small the low-risk changes are that you make in a single release.
In an ideal world where boundaries of each microservices are clearly defined (bounded context), and a given service is not affecting other microservices, you can easily achieve multiple deployments a day without major complexity.
Examples of deployment/release frequency
Cloud-Native Applications (CNA) is a style of application development that encourages easy adoption of best practices in the areas of continuous delivery and distributed software development. These applications are designed specifically for cloud computing architectures (AWS, Azure, Cloud Foundry, etc.).
DevOps, continuous delivery, microservices, and containers are the key concepts in developing cloud-native applications.
Spring Boot, Spring Cloud, Docker, Jenkins, Git are a few tools that can help you write Cloud-Native Application without much effort.
It is an architectural approach for developing a distributed system as a collection of small services. Each service is responsible for a specific business capability, runs in its own process and communicates via HTTP REST API or messaging (AMQP).
It is a collaboration between software developers and IT operations with a goal of constantly delivering high-quality software as per customer needs.
It's all about the automated delivery of low-risk small changes to production, constantly. This makes it possible to collect feedback faster.
Containers (e.g. Docker) offer logical isolation to each microservice, thereby eliminating the "works on my machine" problem forever. They are much faster and more efficient compared to Virtual Machines.
Spring Boot along with Spring Cloud is a very good option to start building microservices using the Java language. There are a lot of modules available in Spring Cloud that provide boilerplate code for different design patterns of microservices, so Spring Cloud can really speed up the development process. Also, Spring Boot provides out-of-the-box support for embedding a servlet container (Tomcat/Jetty/Undertow) inside an executable jar (uber jar), so that these jars can be run directly from the command line, eliminating the need to deploy WAR files into a servlet container.
You can also use Docker container to ship and deploy the entire executable package onto a cloud environment. Docker can also help eliminate "works on my machine" problem by providing logical separation for the runtime environment during the development phase. That way you can gain portability across on-premises and cloud environment.
Spring Boot makes it easy to create stand-alone, production-grade Spring-based applications that you can "just run". It takes an opinionated view of the Spring platform and third-party libraries so you can get started with minimum fuss.
Main features of Spring Boot
You can create a Spring Boot starter project by selecting the required dependencies for your project using the online tool hosted at https://start.spring.io/
Bare minimum dependency for any spring boot application is:
dependencies {
    compile("org.springframework.boot:spring-boot-starter-web:2.0.4.RELEASE")
}

The main Java class for a Spring Boot application will look something like the following:

import org.springframework.boot.*;
import org.springframework.boot.autoconfigure.*;
import org.springframework.stereotype.*;
import org.springframework.web.bind.annotation.*;

@Controller
@EnableAutoConfiguration
public class HelloWorldController {

    @RequestMapping("/")
    @ResponseBody
    String home() {
        return "Hello World!";
    }

    public static void main(String[] args) throws Exception {
        SpringApplication.run(HelloWorldController.class, args);
    }
}
You can directly run this class, without deploying it to a servlet container.
API Gateway is a special class of microservices that meets the needs of a single client application (such as an Android app, web app, AngularJS app, iPhone app, etc.) and provides it with a single entry point to the backend resources (microservices), handling cross-cutting concerns for them such as security, monitoring/metrics & resiliency.
A client application can access tens or hundreds of microservices concurrently with each request; the API Gateway aggregates the responses and transforms them to meet the client application's needs. It can use a client-side load balancer library (Ribbon) to distribute load across instances in a round-robin fashion. It can also do protocol translation, i.e. HTTP to AMQP, if necessary, and it can handle security for protected resources as well.
Features of API Gateway
As the name suggests, zero-downtime deployments do not bring outage in a production environment. It is a clever way of deploying your changes to production, where at any given point in time, at least one service will remain available to customers.
One way of achieving this is blue/green deployment. In this approach, two versions of a single microservice are deployed at a time, but only one version is taking real requests. Once the newer version is tested to the required level of satisfaction, you can switch from the older version to the newer version.
You can run a smoke-test suite to verify that the functionality is running correctly in the newly deployed version. Based on the results of smoke-test, newer version can be released to become the live version.
Let's say you have two instances of a service running at the same time, and both are registered in the Eureka registry. Further, both instances are deployed using two distinct hostnames:
/src/main/resources/application.yml
spring.application.name: ticketBooks-service
---
spring.profiles: blue
eureka.instance.hostname: ticketBooks-service-blue.example.com
---
spring.profiles: green
eureka.instance.hostname: ticketBooks-service-green.example.com
Now the client app that needs to make API calls to ticketBooks-service may look like the one below:
@RestController
@SpringBootApplication
@EnableDiscoveryClient
public class ClientApp {

    @Bean
    @LoadBalanced
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }

    @RequestMapping("/hit-some-api")
    public Object hitSomeApi() {
        return restTemplate().getForObject("https://ticketBooks-service/some-uri", Object.class);
    }
}
Now, when ticketBooks-service-green.example.com goes down for an upgrade, it gracefully shuts down and deletes its entry from the Eureka registry. But these changes will not be reflected in ClientApp until it fetches the registry again (which happens every 30 seconds). So for up to 30 seconds, ClientApp's @LoadBalanced RestTemplate may send requests to ticketBooks-service-green.example.com even though it's down.
To fix this, we can use Spring Retry support in Ribbon client-side load balancer. To enable Spring Retry, we need to follow the below steps:
Add spring-retry to build.gradle dependencies
compile("org.springframework.boot:spring-boot-starter-aop")
compile("org.springframework.retry:spring-retry")
Now enable spring-retry mechanism in ClientApp using @EnableRetry annotation, as shown below:
@EnableRetry
@RestController
@SpringBootApplication
@EnableDiscoveryClient
public class ClientApp { ... }
Once this is done, Ribbon will automatically configure itself to use retry logic, and any failed request to ticketBooks-service-green.example.com will be retried against the next available instance (in round-robin fashion) by Ribbon. You can customize this behaviour using the below properties:
/src/main/resources/application.yml
ribbon:
  MaxAutoRetries: 5
  MaxAutoRetriesNextServer: 5
  OkToRetryOnAllOperations: true
  OkToRetryOnAllErrors: true
The deployment scenario becomes complex when there are database changes during the upgrade. There can be two different scenarios:
1. The database change is backward compatible (e.g. adding a new table column).
2. The database change is not compatible with the older version of the application (e.g. renaming an existing table column).
The complexity may be much higher in a realistic production app; such discussions are beyond the scope of this book.
ACID is an acronym for four primary attributes namely atomicity, consistency, isolation, and durability ensured by the database transaction manager.
Atomicity - In a transaction involving two or more entities, either all of the records are committed or none are.
Consistency - A database transaction must change affected data only in allowed ways, following specific rules including constraints, triggers, etc.
Isolation - Any transaction in progress (not yet committed) must remain isolated from any other transaction.
Durability - Committed records are saved by the database such that, even in case of a failure or database restart, the data is available in its correct state.
In a distributed system involving multiple databases, we have two options to achieve ACID compliance - two-phase commit (2PC) and eventual consistency:
Two-phase commit should ideally be discouraged in microservices architecture due to its fragile and complex nature. We can achieve some level of ACID compliance in distributed systems through eventual consistency, and that is the right approach to take.
The Spring team has integrated a number of battle-tested open-source projects from companies like Pivotal and Netflix into a Spring project known as Spring Cloud. Spring Cloud provides libraries & tools to quickly build some of the common design patterns of a distributed system, including the following:
Pattern Type | Pattern Name | Spring Cloud Library
Development Pattern | Distributed/versioned configuration management | Spring Cloud Config Server
Development Pattern | Core Microservices Patterns | Spring Boot
Development Pattern | Asynchronous/Distributed Messaging | Spring Cloud Stream (AMQP and Kafka)
Development Pattern | Inter-Service Communication | RestTemplate and Spring Cloud Feign
Routing Pattern | Service Registration & Discovery | Spring Cloud Netflix Eureka & Consul
Routing Pattern | Service Routing / API Gateway Pattern | Spring Cloud Netflix Zuul
Resiliency Pattern | Client-side load balancing | Spring Cloud Netflix Ribbon
Resiliency Pattern | Circuit Breaker & Fallback Pattern | Spring Cloud Netflix Hystrix
Resiliency Pattern | Bulkhead pattern | Spring Cloud / Spring Cloud Netflix Hystrix
Logging Patterns | Log Correlation | Spring Cloud Sleuth
Logging Patterns | Microservice Tracing | Spring Cloud Sleuth/Zipkin
Security Patterns | Authorization and Authentication | Spring Cloud Security OAuth2
Security Patterns | Credentials Management | Spring Cloud Security OAuth2 / JWT
Security Patterns | Distributed Sessions | Spring Cloud OAuth2 and Redis
Spring Cloud makes it really easy to develop, deploy and operate JVM applications for the Cloud.
A microservice is a small, independently deployable service that performs a specific business function. Each microservice runs its own process and communicates with other services over a network, typically using lightweight protocols like HTTP.
Microservices break down an application into smaller, independent services, each handling a specific function. In contrast, monolithic architecture involves a single, large application where all components are interconnected and interdependent.
Benefits of microservices include improved scalability, easier maintenance, independent deployment, fault isolation, and the ability to use different technologies for different services.
Microservices communicate through lightweight protocols like HTTP/REST for synchronous communication and message brokers like RabbitMQ or Kafka for asynchronous communication.
REST (Representational State Transfer) is an architectural style that uses standard HTTP methods to enable communication between microservices, allowing them to perform CRUD (Create, Read, Update, Delete) operations on resources.
Microservices are monitored using tools like Prometheus, Grafana, ELK Stack, and Jaeger to track performance metrics, logs, and distributed tracing, ensuring the system's health and identifying issues.
An API Gateway acts as a single entry point for client requests, handling tasks such as routing, composition, protocol translation, and security, simplifying client interactions with microservices.
Microservices offer better scalability, flexibility, and resilience compared to monoliths. They allow independent deployment, easier maintenance, fault isolation, and the ability to use different technologies for different services.
Challenges include managing distributed data, ensuring consistency, handling inter-service communication, maintaining security, and dealing with operational complexity due to a large number of services.
Yes, each microservice can be written in a different programming language, allowing teams to choose the best technology for each service, known as polyglot programming.
When you implement microservices architecture, there are some challenges that you need to deal with for every single microservice. Moreover, when you think about their interaction with each other, it can create a lot of challenges. If you pre-plan to overcome some of them and standardize them across all microservices, it also becomes easy for developers to maintain services.
Some of the most challenging things are testing, debugging, security, version management, communication (sync or async), and state maintenance. Some of the cross-cutting concerns which should be standardized are monitoring, logging, performance improvement, deployment, and security.
It is a very subjective question, but to the best of my knowledge it should be based on the following criteria.
In real life, it happens that a particular service causes downtime while the other services keep functioning as per mandate. Under such conditions, the particular service and its dependent services are affected by the downtime.
In order to solve this issue, there is a pattern in the microservices architecture called the circuit breaker. Any service calling a remote service can call a proxy layer which acts as an electric circuit breaker. If the remote service is slow or down for 'n' attempts, the proxy layer should fail fast and keep checking the remote service for its availability again. The calling services should also handle the errors and provide retry logic. Once the remote service resumes, the services start working again and the circuit closes.
This way, all other functionalities work as expected; only the failing service and its dependent services are affected.
This is related to the automation for cross-cutting concerns. We can standardize some of the concerns like monitoring strategy, deployment strategy, review and commit strategy, branching and merging strategy, testing strategy, code structure strategies etc.
For standards, we can follow the 12-factor application guidelines. If we follow them, we can definitely achieve great productivity from day one. We can also containerize our application to utilize the latest DevOps themes like dockerization, and use Mesos, Marathon or Kubernetes for orchestrating Docker images. Once we have dockerized the source code, we can use a CI/CD pipeline to deploy our newly created codebase. Within that, we can add mechanisms to test the application and make sure we measure the required metrics in order to deploy the code.
We can use strategies like blue-green deployment or canary deployment to deploy our code, so that we know the impact the code might have before it goes live on all of the servers at the same time. We can do A/B testing and make sure that things are not broken when live. In order to reduce the burden on the IT team, we can use AWS or Google Cloud to deploy our solutions and keep them on autoscale to make sure that we have enough resources available to serve the traffic we are receiving.
This is a very interesting question. In a monolith, where an HTTP request waits for a response, the processing happens in memory, which ensures that the transaction across all modules works at its best and that everything is done according to expectation. But it becomes challenging in the case of microservices because all services are running independently, their datastores can be independent, and their REST APIs can be deployed on different endpoints. Each service does its bit of processing without knowing the context of the other microservices.
In this case, we can use the following measures to make sure we are able to trace the errors easily.
It is an important design decision. The communication between services might or might not be necessary. It can happen synchronously or asynchronously. It can happen sequentially or it can happen in parallel. So, once we have decided what should be our communication mechanism, we can decide the technology which suits the best.
Here are some of the examples which you can consider.
There are mainly two ways to achieve authentication in microservices architecture.
All the microservices can use a central session store, and user authentication can be achieved this way. This approach works but has many drawbacks. The centralized session store should be protected, and services should connect to it securely. The application needs to manage the state of the user, so this is called a stateful session.
In this approach, unlike the traditional way, the information in the form of a token is held by the client, and the token is passed along with each request. The server can check the token and verify its validity (expiry and so on). Once the token is validated, the identity of the user can be obtained from it. However, signing (and, where needed, encryption) is required for security reasons. JWT (JSON Web Token) is the new open standard for this and is widely used, mainly in stateless applications. Alternatively, OAuth-based authentication mechanisms can be used as well.
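A sketch of server-side token verification with the jjwt library (the secret is a placeholder and must match the one used for signing):

import io.jsonwebtoken.Claims;
import io.jsonwebtoken.Jwts;

public class JwtVerifier {

    public static Claims verify(String token, String secret) {
        // Throws a JwtException on a bad signature or an expired token;
        // otherwise returns the decoded claims (user id, roles, expiry, ...).
        return Jwts.parser()
                .setSigningKey(secret)
                .parseClaimsJws(token)
                .getBody();
    }
}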
Logging is a very important aspect of any application. If we have done proper logging in an application, it becomes easy to support its other aspects as well. For example, in order to debug issues or to understand what business logic was executed, it becomes very critical to log important details.
Ideally, you should follow the following practices for logging.
Docker helps in many ways for microservices architecture.
As container-based deployment involves a single image per microservice, it is a bad idea to bundle the configuration along with the image.
This approach does not scale at all, because we might have multiple environments and might also have to take care of geographically distributed deployments, where the configurations differ as well.
Also, when an application and a cron application are part of the same codebase, additional care might be needed in production, as changes can have repercussions on how the crons are architected.
To solve this, we can put all our configuration in a centralized config service which can be queried by the application for all its configuration at runtime. Spring Cloud Config is one example of a service that provides this facility.
It also helps to secure the information, as the configuration might contain passwords, report access, or database access controls. Only trusted parties should be allowed to access these details for security reasons.
In a production environment, you don’t just deal with the application code/application server. You need to deal with API Gateway, Proxy Servers, SSL terminators, Application Servers, Database Servers, Caching Services, and other dependent services.
In a modern microservices architecture, where each microservice runs in a separate container, deploying and managing these containers is very challenging and might be error-prone.
Container orchestration solves this problem by managing the life cycle of a container and allows us to automate the container deployments.
It also helps in scaling the application: it can easily bring up a few more containers whenever there is high load on the application, and scale down by bringing containers down once the load goes down. This makes it possible to adjust cost based on requirements.
Also in some cases, it takes care of internal networking between services so that you need not make any extra effort to do so. It also helps us to replicate or deploy the docker images at runtime without worrying about the resources. If you need more resources, you can configure that in orchestration services and it will be available/deployed on production servers within minutes.
An API Gateway is a service which sits in front of the exposed APIs and acts as an entry point for a group of microservices. The gateway can also hold minimal logic for routing calls to the microservices and aggregating their responses.
One should avoid sharing databases between microservices; instead, APIs should be exposed to perform changes.
If there is any dependency between microservices, then the service holding the data should publish messages for any change in the data, which other services can consume to update their local state.
If consistency is required then microservices should not maintain local state and instead can pull the data whenever required from the source of truth by making an API call.
In the microservices architecture, it is possible that, due to service boundaries, a lot of times you need to update one or more entities on the state change of another entity. In that case, one publishes a message, and a new event gets created and appended to the already executed events. In case of failure, one can replay all events in the same sequence to arrive at the desired state. You can think of event sourcing as your bank account statement.
You start your account with initial money. Then all of the credit and debit events happen, and the latest state is generated by applying all of the events one by one. In cases where there are too many events, the application can create a periodic snapshot of events so that there isn't any need to replay all of them again and again.
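A framework-free sketch of that replay idea (all types and names are illustrative only):

import java.util.List;

public class Account {

    // An event records what happened; positive = credit, negative = debit.
    public static class MoneyMoved {
        final long amount;
        public MoneyMoved(long amount) { this.amount = amount; }
    }

    private long balance;

    private void apply(MoneyMoved event) {
        balance += event.amount;
    }

    // Current state is derived purely by replaying past events in order.
    public static Account replay(List<MoneyMoved> events) {
        Account account = new Account();
        for (MoneyMoved event : events) {
            account.apply(event);
        }
        return account;
    }

    public long getBalance() {
        return balance;
    }
}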
Servers come and go in a cloud environment, and new instances of the same services can be deployed to cater to the increasing load of requests. So it becomes absolutely essential to have a service registry & discovery mechanism that can be queried to find the address (host, port & protocol) of a given server. We may also need to locate servers for client-side load balancing (Ribbon) and for handling failover gracefully (Hystrix).
Spring Cloud solves this problem by providing a few ready-made solutions for this challenge. There are mainly two options available for the service discovery - Netflix Eureka Server and Consul. Let's discuss both of these briefly:
Netflix Eureka Server
Eureka is a REST (Representational State Transfer) based service that is primarily used in the AWS cloud for locating services for the purpose of load balancing and failover of middle-tier servers. The main features of Netflix Eureka are:
Spring Cloud provides two dependencies - eureka-server and eureka-client. Eureka server dependency is only required in eureka server’s build.gradle
On the other hand, each microservice needs to include the eureka-client dependency to enable eureka discovery.
build.gradle - Eureka Client (to be included in all microservices)

compile('org.springframework.cloud:spring-cloud-starter-netflix-eureka-client')
Eureka server provides a basic dashboard for monitoring the various instances and their health in the service registry. The UI is written in FreeMarker and provided out of the box without any extra configuration.
It contains a list of all services that are registered with Eureka Server. Each server has information like zone, host, port, and protocol.
Consul Server
It is a REST-based tool for dynamic service registry. It can be used for registering a new service, locating a service and health checkup of a service.
You have the option to choose any one of the above in your spring cloud-based distributed application. In this book, we will focus more on the Netflix Eureka Server option.
If you have 3 different environments (develop/stage/production) in your project setup, then you need to create three different config storage projects. So in total, you will have four projects:
It is the config-server that can be deployed in each environment. It is the Java Code without configuration storage.
It is the git storage for your development configuration. All configuration related to each microservice in the development environment is fetched from this storage. This project has no Java code; it is meant to be used with config-server.
Same as config-dev, but it is meant to be used only in the QA environment.
There are two main components in Eureka project: eureka-server and eureka-client.
The central server (one per zone) that acts as a service registry. All microservices register with this eureka server during app bootstrap.
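Standing up the server itself takes a single annotation with Spring Cloud, assuming spring-cloud-starter-netflix-eureka-server is on the classpath:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@EnableEurekaServer // turns this Spring Boot app into a Eureka service registry
@SpringBootApplication
public class EurekaServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(EurekaServerApplication.class, args);
    }
}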
Eureka also comes with a Java-based client component, the eureka-client, which makes interactions with the service much easier. The client also has a built-in load balancer that does basic round-robin load balancing. Each microservice in the distributed ecosystem must include this client to communicate and register with eureka-server.
There is usually one eureka server cluster per region (US, Asia, Europe, Australia) which knows only about instances in its region. Services register with Eureka and then send heartbeats to renew their leases every 30 seconds. If a service cannot renew its lease a few times in a row, it is taken out of the server registry in about 90 seconds. The registration information and the renewals are replicated to all the eureka nodes in the cluster. Clients from any zone can look up the registry information (this happens every 30 seconds) to locate their services (which could be in any zone) and make remote calls.
Eureka clients are built to handle the failure of one or more Eureka servers. Since Eureka clients have the registry cache information in them, they can operate reasonably well, even when all of the eureka servers go down.
Microservices often need to make remote network calls to another microservices running in a different process. Network calls can fail due to many reasons, including-
This can lead to cascading failures in the calling service due to threads being blocked in the hung remote calls. A circuit breaker is a piece of software that is used to solve this problem. The basic idea is very simple - wrap a potentially failing remote call in a circuit breaker object that will monitor for failures/timeouts. Once the failures reach a certain threshold, the circuit breaker trips, and all further calls to the circuit breaker return with an error, without the protected call being made at all. This mechanism can protect the cascading effects of a single component failure in the system and provide the option to gracefully downgrade the functionality.
A typical use of circuit breaker in microservices architecture looks like the following diagram-
Typical Circuit Breaker Implementation
Here a REST client calls the Recommendation Service, which further communicates with Books Service using a circuit breaker call wrapper. As soon as the books-service API calls start to fail, the circuit breaker will trip (open) the circuit and will not make any further calls to books-service until the circuit is closed again.
Martin Fowler has beautifully explained this phenomenon in detail on his blog.
Martin Fowler on Circuit Breaker Pattern : https://martinfowler.com/bliki/CircuitBreaker.html
Circuit Breaker wraps the original remote calls inside it and if any of these calls fails, the failure is counted. When the service dependency is healthy and no issues are detected, the circuit breaker is in Closed state. All invocations are passed through to the remote service.
If the failure count exceeds a specified threshold within a specified time period, the circuit trips into the Open State. In the Open State, calls always fail immediately without even invoking the actual remote call. The following factors are considered for tripping the circuit to Open State -
After a predetermined period of time (by default 5 seconds), the circuit transitions into a half-open state. In this state, calls are again attempted to the remote dependency. Thereafter the successful calls transition the circuit breaker back into the closed state, while the failed calls return the circuit breaker into the open state.
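These thresholds are configurable per command. A sketch using javanica's command properties, reusing the readingList example shown later in this document, with the Hystrix defaults written out explicitly:

@HystrixCommand(fallbackMethod = "reliable", commandProperties = {
        // minimum number of requests in the rolling window before the circuit can trip
        @HystrixProperty(name = "circuitBreaker.requestVolumeThreshold", value = "20"),
        // error percentage at or above which the circuit opens
        @HystrixProperty(name = "circuitBreaker.errorThresholdPercentage", value = "50"),
        // how long the circuit stays open before a half-open trial call is allowed
        @HystrixProperty(name = "circuitBreaker.sleepWindowInMilliseconds", value = "5000")
})
public String readingList() {
    return restTemplate.getForObject("http://localhost:8090/recommended", String.class);
}

public String reliable() {
    return "Cached recommended response";
}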
Benefits:-
Config first bootstrap and discovery first bootstrap are two different approaches for using Spring Cloud Config client in Spring Cloud-powered microservices. Let’s discuss both of them:
Config First Bootstrap
This is the default behavior for any spring boot application where Spring Cloud Config client is on the classpath. When a config client starts up it binds to the Config Server using the bootstrap configuration property and initializes Spring Environment with remote property sources.
The only configuration that each microservice (except config-server) needs to provide is the following:
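That configuration lives in bootstrap.yml; a sketch where the application name and the config-server address are examples only:

/src/main/resources/bootstrap.yml
spring:
  application:
    name: product-service
  cloud:
    config:
      uri: http://localhost:8888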
Discovery First Bootstrap
If you are using Spring Cloud Netflix and Eureka Service Discovery then you can have Config Server register with the Discovery Service and let all clients get access to config server via discovery service.
This is not the default configuration in Spring Cloud applications, so we need to manually enable it using the below property in bootstrap.yml
/src/main/resources/bootstrap.yml
spring:
  cloud:
    config:
      discovery:
        enabled: true

This property should be provided by all microservices so that they can take advantage of the discovery first approach.
The benefit of this approach is that the config-server can now change its host/port without the other microservices knowing about it, since each microservice can get the configuration via the Eureka service. The downside is that an extra network round trip is required to locate the service registration at app startup.
Strangulation is used to slowly decommission an older system and migrate its functionality to newer microservices.
Normally one endpoint is strangled at a time, slowly replacing all of them with the newer implementation. Zuul Proxy (API Gateway) is a useful tool for this because we can use it to handle all traffic from clients of the old endpoints, but redirect only selected requests to the new ones.
Let’s take an example use-case:
/src/main/resources/application.yml

zuul:
  routes:
    first:
      path: /first/**
      url: http://first.example.com    # 1
    legacy:
      path: /**
      url: http://legacy.example.com   # 2
This configuration is for the API Gateway (Zuul reverse proxy): we are slowly strangling the selected endpoints under /first/** (1) out of the legacy app hosted at http://legacy.example.com (2), redirecting them to the newly created microservice with the external URL http://first.example.com.
Hystrix is Netflix's implementation of the circuit breaker pattern, and it also employs the bulkhead design pattern by operating each circuit breaker within its own thread pool. It also collects many useful metrics about the circuit breaker’s internal state, including traffic volume, request rates, error percentages, and latency percentiles.
All these metrics can be aggregated using another Netflix OSS project called Turbine. Hystrix dashboard can be used to visualize these aggregated metrics, providing excellent visibility into the overall health of the distributed system.
Hystrix can be used to specify the fallback method for execution in case the actual method call fails. This can be useful for graceful degradation of functionality in case of failure in remote invocation.
Add the Hystrix library to build.gradle:

dependencies {
    compile('org.springframework.cloud:spring-cloud-starter-hystrix')
}
1) Enable Circuit Breaker in main application
@EnableCircuitBreaker
@RestController
@SpringBootApplication
public class ReadingApplication {
  ...
}
2) Using HystrixCommand fallback method execution
@HystrixCommand(fallbackMethod = "reliable")
public String readingList() {
    URI uri = URI.create("http://localhost:8090/recommended");
    return this.restTemplate.getForObject(uri, String.class);
}

public String reliable() {
    return "Cached recommended response";
}
The Hystrix library makes our distributed system resilient (adaptable and quick to recover) to failures. It provides three main features:
It helps stop cascading failures, provides decent fallbacks, and enables graceful degradation of service functionality to confine failures. It works on the ideas of fail-fast and rapid recovery. Two different options, thread isolation and semaphore isolation, are available to confine failures.
Using real-time metrics, you can remain alert, make decisions, affect changes and see results.
Parallel execution, concurrency-aware request caching, and automated batching through request collapsing improve the concurrency performance of your application.
More information on the Netflix Hystrix library: https://github.com/Netflix/Hystrix
Let's say we want to handle service-to-service failures gracefully without using the circuit breaker pattern. The naive approach would be to wrap the REST call in a try-catch clause. But a circuit breaker does a lot more than a try-catch can accomplish: it fails fast instead of letting threads pile up waiting for timeouts, records metrics about the remote calls, and automatically detects when the remote service has recovered.
So instead of wrapping service to service calls with try/catch clause, we must use the circuit breaker pattern to make our system resilient to failures.
The bulkhead implementation in Hystrix limits the number of concurrent calls to a component/service. This way, the number of resources (typically threads) that are waiting for a reply from the component/service is limited.
Let's assume we have a fictitious web e-commerce application as shown in the figure below. The WebFront communicates with 3 different components using remote network calls (REST over HTTP).
Now let's say that due to some problem in the Product Review Service, all requests to this service start to hang (or timeout), eventually causing all request-handling threads in the WebFront application to hang while waiting for an answer from the Reviews Service. This would make the entire WebFront application non-responsive. The resulting behavior of the WebFront application would be the same if the request volume were high and the Reviews Service were taking time to respond to each request.
The Hystrix Solution
Hystrix’s implementation of the bulkhead pattern would limit the number of concurrent calls to each component and would have saved the application in this case by gracefully degrading the functionality. Assume we have 30 request-handling threads in total and there is a limit of 10 concurrent calls to the Reviews Service. Then at most 10 request-handling threads can hang when calling the Reviews Service; the other 20 threads can still handle requests and use the Products and Orders Services. This approach will keep our WebFront responsive even if there is a failure in the Reviews Service.
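A sketch of how such a limit might look with Hystrix's javanica annotations; the pool name, sizes, and service URL below are illustrative assumptions, not values from the example above.

import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import com.netflix.hystrix.contrib.javanica.annotation.HystrixProperty;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Service
public class ReviewsClient {

    @Autowired
    private RestTemplate restTemplate;

    @HystrixCommand(
            fallbackMethod = "reviewsFallback",
            threadPoolKey = "reviewsPool",      // dedicated bulkhead for the Reviews Service
            threadPoolProperties = {
                    @HystrixProperty(name = "coreSize", value = "10"),   // at most 10 concurrent calls
                    @HystrixProperty(name = "maxQueueSize", value = "5")
            })
    public String getReviews(String productId) {
        return restTemplate.getForObject("http://reviews-service/reviews/" + productId, String.class);
    }

    public String reviewsFallback(String productId) {
        return "[]";    // degrade gracefully with an empty review list
    }
}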
Martin Fowler introduced the concept of "smart endpoints & dumb pipes" while describing microservices architecture.
To give context, one of the main characteristics of a Unix-based system is to build small utilities and connect them using pipes. For example, a very popular way of finding all Java processes on a Linux system is the command pipeline ps -elf | grep java.
Here the two commands are separated by a pipe; the pipe’s job is to forward the output of the first command as input to the second command, nothing more. It is a dumb pipe, with no business logic except routing data from one utility to another.
In his article, Martin Fowler compares the Enterprise Service Bus (ESB) to ZeroMQ/RabbitMQ: an ESB is a pipe with a lot of logic inside it, while ZeroMQ has no logic except the persistence/routing of messages. The ESB is a fat layer that does many things, such as security checks, routing, business flows & validations, and data transformations; it is a kind of smart pipe that does a lot of work before passing data to the next endpoint (service). Smart endpoints & dumb pipes advocates exactly the opposite idea: the communication channel should be stripped of any business-specific logic and should only distribute messages between components. The components (endpoints/services) should do all the data validation, business processing, and security checks on those incoming messages.
Microservices teams should follow the principles and protocols that the World Wide Web and Unix are built on.
There are different ways to handle the versioning of your REST API so that older consumers can still consume the older endpoints. The ideal practice is that any non-backward-compatible change to a given REST endpoint should lead to a new versioned endpoint.
Different mechanisms of versioning are URL versioning, versioning via a custom request header, and versioning via the Accept header (media type negotiation).
The most common approach is URL versioning itself. A versioned URL looks like the following (an illustrative example): https://api.example.com/v1/products
As an API developer you must ensure that only backward-compatible changes are accommodated in a single version of URL. Consumer-Driven-Tests can help identify potential issues with API upgrades at an early stage.
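As an illustration, the same resource can be exposed under two URL versions in a Spring MVC controller; the paths and payloads here are hypothetical.

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ProductController {

    // Original contract stays available for older consumers
    @GetMapping("/api/v1/products/{id}")
    public String getProductV1(@PathVariable String id) {
        return "{\"id\":\"" + id + "\",\"name\":\"Book\"}";
    }

    // A non-backward-compatible change (renamed fields) goes into a new version
    @GetMapping("/api/v2/products/{id}")
    public String getProductV2(@PathVariable String id) {
        return "{\"productId\":\"" + id + "\",\"title\":\"Book\"}";
    }
}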
Using the config-server, it's possible to refresh the configuration on the fly. The configuration changes will only be picked up by beans declared with the @RefreshScope annotation.
The following code illustrates the same. The property message is defined in the config-server and changes to this property can be made at runtime without restarting the microservices.
package hello;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
public class ConfigClientApplication {
    public static void main(String[] args) {
        SpringApplication.run(ConfigClientApplication.class, args);
    }
}

@RefreshScope // 1
@RestController
class MessageRestController {

    @Value("${message:Hello World}")
    private String message;

    @RequestMapping("/message")
    String getMessage() {
        return this.message;
    }
}
1. @RefreshScope makes it possible to dynamically reload the configuration for this bean.
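After changing the property in the config repository, the refresh is triggered by an empty POST to the refresh actuator endpoint of each client; note that on Spring Boot 1.x this is /refresh, while on Boot 2.x it is /actuator/refresh and the endpoint must be exposed explicitly. For example:

curl -X POST http://localhost:8080/actuator/refresh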
The @HystrixCommand annotation provides the attribute ignoreExceptions, which can be used to provide a list of exceptions that should be ignored.
Code

@Service
public class HystrixService {

    @Autowired
    private LoadBalancerClient loadBalancer;

    @Autowired
    private RestTemplate restTemplate;

    @HystrixCommand(fallbackMethod = "reliable",
            ignoreExceptions = {IllegalStateException.class,
                    MissingServletRequestParameterException.class,
                    TypeMismatchException.class})
    public String readingList() {
        ServiceInstance instance = loadBalancer.choose("product-service");
        URI uri = URI.create("http://product-service/product/recommended");
        return this.restTemplate.getForObject(uri, String.class);
    }

    public String reliable(Throwable e) {
        return "Cloud Native Java (O'Reilly)";
    }
}
In the above example, if the actual method call throws IllegalStateException, MissingServletRequestParameterException, or TypeMismatchException, then Hystrix will not trigger the fallback logic (the reliable method); instead, the actual exception will be wrapped inside a HystrixBadRequestException and re-thrown to the caller. This is taken care of by the javanica library under the hood.
In a microservices architecture, each microservice shall own its private data, which can only be accessed by the outside world through the owning service. If we start sharing a microservice’s private datastore with other services, then we will violate the principle of Bounded Context.
Practically we have three approaches -
Microservices Architecture can become cumbersome & unmanageable if not done properly. There are best practices that help design a resilient & highly scalable system. The most important ones are
Get to know the domain of your business, that's very very important. Only then you will be able to define the bounded context and partition your microservice correctly based on business capabilities.
Typically, everything from continuous integration all the way to continuous delivery and deployment should be automated; otherwise, managing a large fleet of microservices becomes a big pain.
We never know where a new instance of a particular microservice will be spun up for scaling out or for handling failure, so maintaining state inside a service instance is a very bad idea.
Failures are inevitable in distributed systems, so we must design our system to handle failures gracefully. Failures are of different types and must be dealt with accordingly; for example, a transient network failure can be handled by retrying the call, while a failing downstream service calls for a circuit breaker and a sensible fallback.
We should try to make our services backward compatible; explicit versioning must be used to cater to different versions of the REST endpoints.
Asynchronous communication should be preferred over synchronous communication in inter microservice communication. One of the biggest advantages of using asynchronous messaging is that the service does not block while waiting for a response from another service.
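A minimal sketch of such asynchronous, fire-and-forget communication using Spring AMQP (RabbitMQ); the exchange and routing-key names are hypothetical.

import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.stereotype.Service;

@Service
public class OrderEventsPublisher {

    private final RabbitTemplate rabbitTemplate;

    public OrderEventsPublisher(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    public void publishOrderPlaced(String orderId) {
        // The caller returns immediately; the consumer processes the event on its own time
        rabbitTemplate.convertAndSend("orders.exchange", "order.placed", orderId);
    }
}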
Eventual consistency is a consistency model used in distributed computing to achieve high availability that informally guarantees that, if no new updates are made to a given data item, eventually all accesses to that item will return the last updated value.
Since networks are brittle, we should always design our services to accept repeated calls without any side effects. We can add a unique identifier to each request so that the service can ignore duplicate requests sent over the network due to network failures/retry logic.
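A minimal sketch of this idea: the service remembers which request IDs it has already processed, so a retried request produces no second side effect. A real implementation would keep this state in a shared store such as Redis or a database table; all names here are hypothetical.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class IdempotentPaymentHandler {

    // In production this map would live in a shared, durable store
    private final Map<String, String> processed = new ConcurrentHashMap<>();

    public String handle(String requestId, String payload) {
        // process() runs only the first time this requestId is seen;
        // a retry with the same id just gets the cached result back
        return processed.computeIfAbsent(requestId, id -> process(payload));
    }

    private String process(String payload) {
        return "receipt-for:" + payload;   // the actual side effect happens exactly once
    }
}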
In monolithic applications, sharing is considered to be a best practice but that's not the case with Microservices. Sharing results in a violation of Bounded Context Principle, so we shall refrain from creating any single unified shared model that works across microservices. For example, if different services need a common Customer model, then we should create one for each microservice with just the required fields for a given bounded context rather than creating a big model class that is shared in all services.
The more dependencies we have between services, the harder it is to isolate the service changes, making it difficult to make a change in a single service without affecting other services. Also, creating a unified model that works in all services brings complexity and ambiguity to the model itself, making it hard for anyone to understand the model.
In a way, we want to violate the DRY principle in microservices architecture when it comes to domain models.
Caching is a performance-improvement technique for getting query results from a service. It helps minimize calls to the network, database, etc. We can use caching at multiple levels in a microservices architecture: at the client (HTTP caching headers), at the API Gateway for frequently requested resources, and inside each service (an in-memory or distributed cache in front of the database). A service-level sketch follows below.
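For the service-level cache, Spring's cache abstraction gives a compact example; the cache name and lookup are illustrative, and @EnableCaching plus a cache provider must be configured for this to work.

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class ProductCatalogService {

    // Results are cached by product id; repeated calls skip the expensive lookup
    @Cacheable("products")
    public String getProduct(String id) {
        return expensiveLookup(id);
    }

    private String expensiveLookup(String id) {
        // stands in for a database query or a remote call
        return "product-" + id;
    }
}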
Swagger is a very good open-source tool for documenting APIs provided by microservices. It provides easy-to-use interactive documentation.
By using Swagger annotations on REST endpoints, API documentation can be auto-generated and exposed over a web interface. Internal and external teams can use the web interface to see the list of APIs, their inputs, and error codes. They can even invoke the endpoints directly from the web interface to get the results.
Swagger UI is a very powerful tool for your microservices consumers to help them understand the set of endpoints provided by a given microservice.
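As a sketch of how such documentation can be enabled with the Springfox library (Swagger 2); the base package below is a hypothetical example.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import springfox.documentation.builders.PathSelectors;
import springfox.documentation.builders.RequestHandlerSelectors;
import springfox.documentation.spi.DocumentationType;
import springfox.documentation.spring.web.plugins.Docket;
import springfox.documentation.swagger2.annotations.EnableSwagger2;

@Configuration
@EnableSwagger2
public class SwaggerConfig {

    @Bean
    public Docket api() {
        // Scans the given base package and documents every endpoint it finds
        return new Docket(DocumentationType.SWAGGER_2)
                .select()
                .apis(RequestHandlerSelectors.basePackage("com.example.product"))
                .paths(PathSelectors.any())
                .build();
    }
}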
Basic Authentication is natively supported by almost all servers and clients; even Spring Security has very good support for it, and it is configured out of the box. But it is not a good fit for microservices for many reasons, including that credentials are sent over the wire on every request, there is no built-in expiry mechanism, and it offers no way to carry fine-grained authorization information such as roles and scopes.
There are three parts in every JWT: the header, the claims, and the signature. These three parts are separated by dots. The entire JWT is encoded in Base64 format.
JWT = {header}.{payload}.{signature}
A typical JWT is shown here for reference. The entire JWT is encoded in Base64 format to make it compatible with the HTTP protocol; an encoded JWT looks like three Base64 strings separated by dots, e.g. xxxxx.yyyyy.zzzzz.
(Figures: Encoded JSON Web Token / Decoded JSON Web Token)
Header
The header contains the algorithm information, e.g. HS256, and the type, e.g. JWT.
{ "alg": "HS256", "typ": "JWT" }
Claim
The claim part carries the expiry, issuer, user_id, scope, roles, client_id, etc. It is encoded as a JSON object, and you can add custom attributes to the claim. This is the information that you want to exchange with the third party.
{ "uid": "2ce35360-ef8e-4f69-a8d7-b5d1aec78759", "user_name": "user@mail.com", "scope": ["read"], "exp": 1520017228, "authorities": ["ROLE_USER","ROLE_ADMIN"], "jti": "5b42ca29-8b61-4a3a-8502-53c21e85a117", "client_id": "acme-app" }
Signature
The signature is typically a one-way hash of (header + payload), calculated using the HMAC SHA256 algorithm. The secret used for signing the claim should be kept private. Public/private key cryptography can also be used to sign the claim instead of symmetric cryptography.
HMACSHA256(base64(header) + "." + base64(payload), "secret")
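A short sketch of issuing and verifying such a token with the jjwt library (0.9.x API); the secret and claim values are placeholders.

import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.SignatureAlgorithm;
import java.util.Date;

public class JwtExample {

    private static final String SECRET = "change-me";   // keep the real secret private

    public static String issueToken(String userId) {
        return Jwts.builder()
                .setSubject(userId)
                .setIssuedAt(new Date())
                .setExpiration(new Date(System.currentTimeMillis() + 15 * 60 * 1000))
                .signWith(SignatureAlgorithm.HS256, SECRET)   // HMAC-SHA256 signature
                .compact();
    }

    public static String verifyAndGetSubject(String jwt) {
        // Throws an exception if the signature is invalid or the token has expired
        return Jwts.parser()
                .setSigningKey(SECRET)
                .parseClaimsJws(jwt)
                .getBody()
                .getSubject();
    }
}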
OAuth2.0 is a delegation protocol where the Client (Mobile App or web app) does not need to know about the credentials of Resource Owner (end-user).
OAuth2 defines four roles: Resource Owner (the end user), Client (the application requesting access), Resource Server (the API hosting the protected resources), and Authorization Server (the server issuing access tokens).
Important tools and libraries for testing Spring-based microservices are -
JUnit 4 - the standard test runner
JUnit 5 - the next-generation test runner
Hamcrest - declarative matchers and assertions
REST Assured - for writing REST-API-driven end-to-end tests
Mockito - for mocking dependencies
WireMock - for stubbing third-party services
Hoverfly - to create API simulations for end-to-end tests
Spring Test & Spring Boot Test - for writing Spring integration tests; includes MockMvc, TestRestTemplate, and WebTestClient-like features
JSONassert - an assertion library for JSON
Pact - the Pact family of frameworks provides support for consumer-driven contract testing
Selenium - automates browsers; used for end-to-end automated UI testing
Gradle - helps build, automate, and deliver software, faster
IntelliJ IDEA - IDE for Java development
The spring-boot-starter-test starter will import two Spring Boot test modules, spring-boot-test and spring-boot-test-autoconfigure, as well as JUnit, AssertJ, Hamcrest, Mockito, JSONassert, Spring Test, and a number of other useful libraries.
There are many useful scenarios for leveraging the power of JWT-
Authentication is one of the most common scenarios for using JWT, specifically in microservices architecture (but not limited to it). In microservices, the OAuth2 server generates a JWT at login time, and all subsequent requests can include the JWT access token as the means of authentication. Single Sign-On can also be implemented by sharing the JWT between different applications hosted in different domains.
Since a JWT can be signed using public/private key pairs, you can be sure that the senders are who they say they are. Hence JWT is a good way of sharing information between two parties; for example, a service can issue a signed token whose contents a downstream service can trust and verify without an extra network call.
Microservices, also called Microservice Architecture, is an architectural style that structures an application as a collection of small autonomous services modeled around a business domain. According to a survey conducted by Nginx in the year 2019, 36% of large organizations are currently using microservices, while 50% of medium-sized companies and 44% of companies overall are using microservices in development or production. So, this is a good time to get into companies that use microservices. The increasing popularity of microservices is creating many job opportunities for developers skilled in microservices technology. You can get a job in top companies like Comcast Cable, Uber, Netflix, Amazon, eBay, and PayPal.
According to Neuvoo, the average Java Microservices Developer salary in the USA is $120,900 per year or $62 per hour. Entry-level positions start at $74,531 per year while most experienced workers make up to $160,875 per year.
These Microservices interview questions are specially designed after detailed research to help you in your interview. These Microservices interview questions and answers for experienced professionals and freshers alike will help you excel in the Microservices job interview and give you an edge over your competitors. Therefore, to succeed in the interview, go through these questions and practice them as much as possible.
The Microservices interview questions and answers given here cover almost all the basic and advanced level questions. Every candidate faces jitters when it comes to facing an interview. If you are planning to build a career as a Microservices programmer and are facing trouble cracking the Microservices interview, then practice these interview questions on Microservices.
If you want to make your career in Microservices, then you need not worry, as this set of Microservices interview questions designed by experts will guide you through the Microservices interviews. Stay in tune with the following interview questions and prepare beforehand so that you become familiar with the questions you may come across while searching for your dream job. We hope these Microservices interview questions will help you freshen up your Microservices knowledge and land your dream career as a Microservices pro.
All the best!