gRPC, one of the most popular RPC frameworks for inter-process communication between microservices, supports both unary and streaming RPC. Unlike unary RPC, in a gRPC server-streaming RPC the client sends a single request and, in return, the server sends back a stream of messages. In this article, we will see how to implement server-streaming RPC and how to handle errors in a streaming response.
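A server-streaming RPC is declared in the Protocol Buffers service definition by marking the response type with the `stream` keyword. A minimal sketch, assuming a hypothetical quote service (the service and message names are illustrative, not from the article):

```protobuf
syntax = "proto3";

// Hypothetical service: the client sends one request,
// and the server streams back many responses.
service StockQuoteService {
  rpc GetQuotes (QuoteRequest) returns (stream QuoteResponse);
}

message QuoteRequest {
  string symbol = 1;
}

message QuoteResponse {
  string symbol = 1;
  double price = 2;
}
```

On the server side, the generated stub exposes a response observer on which the implementation calls `onNext` for each message, and finally `onCompleted` or `onError` to terminate the stream.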
gRPC, a remote procedure call (RPC) framework, is used for communication between microservices. gRPC supports both unary RPC and streaming RPC. In a unary RPC, a client sends a single request and receives a single response. Additionally, an RPC in gRPC can be synchronous or asynchronous. In a synchronous RPC, the client call blocks until the server responds; as the name suggests, in an asynchronous RPC the server's response is delivered asynchronously.
A microservices-based software system requires applications to talk to each other using an inter-process communication mechanism. gRPC is a modern inter-process communication framework that is scalable and more efficient than traditional RESTful services.
Kubernetes, an open-source container orchestration platform, allows us to deploy containerized microservice applications on public, private, or hybrid cloud infrastructure.
A microservice is built around a business capability, and domain-driven design (DDD) provides a framework for modeling microservices around business capabilities. Event Storming, in turn, is a lightweight, workshop-style technique for applying DDD. This article explains a recipe for building a microservice application using Spring Boot, DDD, Event Storming, and API-first design.
gRPC runs on HTTP/2. An HTTP/2 TCP connection is long-lived, and a single connection can multiplex many requests, so connection-level load balancing is not very useful for gRPC. Kubernetes' default load balancing, however, operates at the connection level. This article shows how to implement gRPC load balancing using a Kubernetes headless service.
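A Kubernetes Service becomes headless by setting `clusterIP: None`; DNS then resolves the service name to the individual pod IPs instead of a single virtual IP, which lets a gRPC client discover all backends and balance calls across them. A minimal sketch, where the service name, selector, and port are illustrative assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: grpc-server        # hypothetical service name
spec:
  clusterIP: None          # headless: DNS returns pod IPs, not a cluster virtual IP
  selector:
    app: grpc-server       # must match the labels on the gRPC server pods
  ports:
    - port: 50051          # commonly used gRPC port
      targetPort: 50051
```

With this in place, a gRPC client configured with a client-side load-balancing policy (for example, round-robin) can spread RPCs across all resolved pod addresses rather than pinning every request to one long-lived connection.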