Scaling Google Cloud Memorystore for high performance

Memorystore for Redis 
Cloud Memorystore for Redis enables GCP users to quickly deploy a managed Redis instance within a GCP project. A single-node Memorystore instance can support a keyspace as large as 300 GB and a maximum network throughput of 16 Gbps. With Standard Tier, you get a highly available Redis instance with built-in health checks and fast automatic failover.

Today, we’ll show you how to deploy multiple Standard Tier Cloud Memorystore instances that can be used together to scale beyond the limits of a single instance as your application’s demands grow. Each Memorystore instance will be deployed as a standalone instance that is unaware of the other instances within its shared host project. In this example, you’ll deploy three Standard Tier instances that will be treated as a single unified backend.

By using Standard Tier instances instead of self-managed Redis instances on GCE, you get the following benefits:

  1. Highly available backends: Standard Tier provides high availability without requiring any additional work from you. Enabling high availability on self-managed Redis instances on GCE adds complexity and introduces additional failure points.

  2. Integrated monitoring: Memorystore is integrated with Cloud Monitoring, so you can easily monitor the individual shards there rather than having to deploy and manage monitoring agents on self-managed instances.

Memtier Benchmark 
Memtier Benchmark is a widely used command-line utility for load generation and benchmarking of key-value databases. You will deploy and use this utility to demonstrate how easily this architecture scales to high query volumes. Similar benchmarking tools, or your own Redis client application, could be used instead of Memtier Benchmark.
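
As a rough sketch of what an invocation looks like, the command below runs Memtier Benchmark from its public container image against a Redis-speaking endpoint. The target 127.0.0.1:6379 assumes the Envoy proxy you’ll set up later is listening locally; the flag values are illustrative and should be adapted to your environment:

$ docker run --rm --network host redislabs/memtier_benchmark --server=127.0.0.1 --port=6379 --protocol=redis --threads=4 --clients=50 --ratio=1:10 --data-size=100 --key-pattern=R:R --test-time=60

Here --ratio=1:10 issues one SET for every ten GETs, and --key-pattern=R:R selects keys at random for both operations.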

Envoy
Envoy is an open-source network proxy designed for service-oriented architectures. Envoy supports many different filters, which allow it to handle network traffic from many different software applications and protocols. For this use case, you will deploy Envoy with the Redis filter configured. Rather than connecting directly to the Memorystore instances, the Redis clients will connect to the Envoy proxy. By configuring Envoy appropriately, you can take a collection of independent Memorystore instances and define them as a single cluster whose inbound traffic is load balanced among the individual instances. By leveraging Envoy, you decrease the likelihood of needing a significant application rewrite to use more than one Memorystore instance for higher scale. To ensure compatibility with your application, you’ll want to review the list of Redis commands that Envoy currently supports.
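
For example, once the proxy is running, a Redis client talks to it exactly as it would to a single Redis server; the localhost address and port below assume the proxy listener configuration sketched later in this walkthrough:

$ redis-cli -h 127.0.0.1 -p 6379 SET user:1001 "alice"
$ redis-cli -h 127.0.0.1 -p 6379 GET user:1001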

Let’s get started. 

Prerequisites

To follow along with this walkthrough, you’ll need a GCP project with permissions to do the following: 

  • Deploy Cloud Memorystore for Redis instances (permissions)

  • Deploy GCE instances with SSH access (permissions)

  • Cloud Monitoring viewer access (permissions)

  • Access to Cloud Shell or another gcloud-authenticated environment

Deploying the Memorystore Backend 

You’ll start by deploying the backend cache that will serve all of your application traffic. Since you’re looking to scale beyond the limits of a single node, you’ll deploy a series of Standard Tier instances. From an authenticated Cloud Shell environment, this can be done as follows:

$ for i in {1..3}; do gcloud redis instances create memorystore${i} --size=1 --region=us-central1 --tier=STANDARD --async; done

If you do not already have the Memorystore for Redis API enabled in your project, the command will ask you to enable the API before proceeding. While your Memorystore instances deploy, which typically takes a few minutes, you can move on to the next steps.
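
If you’d like to confirm that all three instances are up before generating load, you can list them and check their state; each instance is ready once its state reports READY:

$ gcloud redis instances list --region=us-central1 --format="table(name, state, host)"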

Creating a Client and Proxy VM 

Next, you need a VM where you can deploy a Redis client and the Envoy proxy. You’ll create a single GCE instance and deploy these two applications on it as containers. This type of deployment is referred to as a “sidecar” architecture, which is a common Envoy deployment model. Deploying in this fashion adds almost no network latency, as there is no additional physical network hop. While you are deploying a single vertically scaled client instance here, in practice you’ll likely deploy many clients and proxies, so the steps outlined in the following sections could be used to create a reusable instance template (sketched below) or repurposed for GKE.
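
For instance, a reusable instance template mirroring the VM created below might look like the following; the template name is illustrative:

$ gcloud compute instance-templates create envoy-memtier-template --machine-type=e2-highcpu-32 --image-family=cos-stable --image-project=cos-cloud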

You can start by creating the base VM: 

$ gcloud compute instances create envoy-memtier-client --zone=us-central1-a --machine-type=e2-highcpu-32 --image-family cos-stable --image-project cos-cloud 

We’ve opted for a Container-Optimized OS image because you’ll be deploying Envoy and Memtier Benchmark as containers on this instance.

Configure and Deploy the Envoy Proxy 

Before deploying the proxy, you need to gather the necessary information to properly configure the Memorystore endpoints. To do this, you need the host IP addresses for the Memorystore instances you have already created. You can gather these programmatically: 

$ for i in {1..3}; do gcloud redis instances describe memorystore${i} --region us-central1 --format=json | jq -r ".host"; done

Copy these IP addresses somewhere easily accessible as you’ll use them shortly in your Envoy configuration. 
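
As a preview of where those addresses go, here is a minimal, illustrative sketch of the envoy.yaml you’ll build; it is a sketch under assumptions, not the definitive configuration. The proxy listens on 127.0.0.1:6379 and uses Maglev consistent hashing to spread keys across the instances; the 10.0.0.x endpoint addresses are placeholders for the three host IPs you just gathered:

static_resources:
  listeners:
  - name: redis_listener
    address:
      socket_address:
        address: 127.0.0.1
        port_value: 6379
    filter_chains:
    - filters:
      - name: envoy.filters.network.redis_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.redis_proxy.v3.RedisProxy
          stat_prefix: memorystore
          settings:
            op_timeout: 5s
          prefix_routes:
            catch_all_route:
              cluster: memorystore_cluster
  clusters:
  - name: memorystore_cluster
    type: STATIC
    lb_policy: MAGLEV   # consistent hashing keeps each key pinned to one instance
    load_assignment:
      cluster_name: memorystore_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: 10.0.0.2   # placeholder: replace with the memorystore1 host IP
                port_value: 6379
        - endpoint:
            address:
              socket_address:
                address: 10.0.0.3   # placeholder: replace with the memorystore2 host IP
                port_value: 6379
        - endpoint:
            address:
              socket_address:
                address: 10.0.0.4   # placeholder: replace with the memorystore3 host IP
                port_value: 6379

The catch_all_route sends every command to memorystore_cluster, and a consistent-hashing load balancing policy (RING_HASH or MAGLEV) ensures that a given key always maps to the same instance.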

Next, you’ll need to connect to your newly created VM instance so that you can deploy the Envoy proxy. You can do this easily via SSH in the Google Cloud Console or with the gcloud CLI; see the Compute Engine documentation on connecting to instances for more details.
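
For example, from Cloud Shell:

$ gcloud compute ssh envoy-memtier-client --zone=us-central1-a

Once connected, and assuming you’ve saved the configuration sketched above to ~/envoy.yaml on the VM, one way to launch the proxy as a container is shown below; the image tag is an assumption, so substitute a current stable Envoy release:

$ docker run -d --name envoy --network host -v ~/envoy.yaml:/etc/envoy/envoy.yaml envoyproxy/envoy:v1.27-latest

The official envoyproxy/envoy image reads its configuration from /etc/envoy/envoy.yaml by default, and --network host lets clients on the VM reach the proxy’s local listener directly.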
