For the past few months we’ve been working on improving gRPC-Go performance. This includes improving network utilization, optimizing CPU usage, and reducing memory allocations. Most of our recent effort has been focused on revamping gRPC-Go flow control. After several optimizations and new features, we’ve been able to improve performance significantly, especially on high-latency networks. We expect users working with high-latency networks and large messages to see an order-of-magnitude performance gain. Benchmark results appear at the end.
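Flow control behavior can also be tuned from application code. The sketch below shows one way to raise the initial stream and connection window sizes in gRPC-Go using the `WithInitialWindowSize` and `WithInitialConnWindowSize` dial options; the 1 MB values and the backend address are illustrative assumptions, not recommendations from this post.

```go
package main

import (
	"log"

	"google.golang.org/grpc"
)

func main() {
	// Larger initial windows let more data stay in flight on high-latency
	// links. The 1 MB values here are example settings only.
	conn, err := grpc.Dial(
		"example.com:50051", // hypothetical backend address
		grpc.WithInsecure(),
		grpc.WithInitialWindowSize(1<<20),     // per-stream flow-control window
		grpc.WithInitialConnWindowSize(1<<20), // per-connection flow-control window
	)
	if err != nil {
		log.Fatalf("dial failed: %v", err)
	}
	defer conn.Close()
}
```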
This post summarizes the work we have done so far (in chronological order) to improve performance and lays out our near-future plans.
The recent release of Flatbuffers version 1.7 introduced truly zero-copy support for gRPC out of the box.
Flatbuffers is a serialization library that allows you to access serialized data without first unpacking it or allocating any additional data structures. It was originally designed for games and other resource-constrained applications, but is now finding more general use, both by teams within Google and in other companies such as Netflix and Facebook.
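As a rough sketch of how the integration is wired up in Go, the snippet below registers the `FlatbuffersCodec` from the flatbuffers Go package with a gRPC server, so payloads move as FlatBuffers byte buffers rather than protobuf messages. The service registration is only hinted at in a comment, since it depends on code generated from your own schema.

```go
package main

import (
	"log"
	"net"

	flatbuffers "github.com/google/flatbuffers/go"
	"google.golang.org/grpc"
)

func main() {
	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatalf("listen failed: %v", err)
	}

	// Register the FlatBuffers codec so request and response payloads are
	// handed to handlers as FlatBuffers-encoded buffers.
	srv := grpc.NewServer(grpc.CustomCodec(flatbuffers.FlatbuffersCodec{}))

	// A service implementation generated from your .fbs schema would be
	// registered here, e.g. example.RegisterGreeterServer(srv, &server{})
	// (hypothetical names).

	if err := srv.Serve(lis); err != nil {
		log.Fatalf("serve failed: %v", err)
	}
}
```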
Next Community Meeting: Thursday, August 31, 2017 11am Pacific Time (US and Canada)
This post describes various load balancing scenarios seen when deploying gRPC. If you use gRPC with multiple backends, this document is for you.
A large scale gRPC deployment typically has a number of identical back-end instances, and a number of clients. Each server has a certain capacity. Load balancing is used for distributing the load from clients optimally across available servers.
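The post covers several approaches, including proxy-based balancing; as one concrete illustration, the sketch below shows client-side round-robin balancing in gRPC-Go, resolving backends through DNS and spreading RPCs across them via the `round_robin` policy. The target name is hypothetical, and the service-config string is just one way to select the policy.

```go
package main

import (
	"log"

	"google.golang.org/grpc"
)

func main() {
	// Resolve the service through DNS so the client learns every backend
	// address, then distribute RPCs across them with round_robin.
	conn, err := grpc.Dial(
		"dns:///my-service.example.com:50051", // hypothetical target
		grpc.WithInsecure(),
		grpc.WithDefaultServiceConfig(`{"loadBalancingPolicy":"round_robin"}`),
	)
	if err != nil {
		log.Fatalf("dial failed: %v", err)
	}
	defer conn.Close()
}
```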
Our guest post today comes from Brian Hardock, a software engineer from Deis working on the Helm project.
Helm is the package manager for Kubernetes. Helm provides its users with a customizable mechanism for managing distributed applications and controlling their deployment.
I have the good fortune to serve as a core contributor in the phenomenal open-source Kubernetes Helm community. My first day working with the Helm team was spent prototyping the architecture for the next generation of Helm. By the end of that day, we had produced the preliminary RPC protocol data model used to enable communication between Helm and its in-cluster server component, Tiller.