WePay Engineering: How our microservices use service mesh to communicate
Keeping WePay’s infrastructure running smoothly is a huge challenge, especially under the kinds of volume spikes that big e-commerce periods or crowdfunding campaigns can generate. WePay has adopted a modern service mesh architecture to handle these shifts smoothly. As we roll out new microservices and as our business grows, the volume of communication between our microservices has grown, and we have been looking for ways to keep latency down while maintaining fast, secure communication.
The latest WePay Engineering blog post by Mohsen Rezaei and Dinesh Subramani, “Migrating APIs from REST to gRPC at WePay,” addresses this issue. It’s part of a series of posts on how we build and optimize our service mesh architecture. The series also covers how WePay manages its service mesh with Google Kubernetes Engine (GKE), the containerization patterns we’ve been experimenting with and using in GKE, and how we keep a service mesh monitored and available. If you are interested in these techniques, or in how a modern engineering organization puts together a high-performance system, please take a look. And if you are interested in joining our engineering team, we’re hiring.