4.13. Auto Scaling a Presto Cluster#

AWS Auto Scaling offers automatic control over the size of your Presto cluster (CloudFormation stack).

Manage Auto Scaling Groups#

When you create a cluster, an Auto Scaling Group (ASG) is automatically created for all the workers. To view and manage this ASG, log into your AWS account and open the AWS ASG page. There, you see a list of ASGs for the workers of all clusters you have running, and you can control how AWS Auto Scaling manages your cluster.
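If you prefer to inspect these groups programmatically instead of through the console, the boto3 Auto Scaling client exposes the same information. A minimal sketch (no Starburst-specific names assumed):

```python
# Minimal sketch: list all Auto Scaling Groups and their sizing with boto3.
import boto3

asg_client = boto3.client("autoscaling")

# Paginate in case the account has many groups.
paginator = asg_client.get_paginator("describe_auto_scaling_groups")
for page in paginator.paginate():
    for group in page["AutoScalingGroups"]:
        print(
            group["AutoScalingGroupName"],
            "min:", group["MinSize"],
            "max:", group["MaxSize"],
            "desired:", group["DesiredCapacity"],
            "instances:", len(group["Instances"]),
        )
```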

Auto Scaling Models#

There are three types of auto scaling models you can employ to manage your cluster:

  • Static/Manual

  • Static/Scheduled

  • Dynamic

Static/Manual Auto Scaling

The static or manual auto scaling model is managed from the “Details” tab, and it is configured by default. In this tab, there are three main properties: “Desired Capacity”, “Min” and “Max”. Click the “Edit” button to change these values; when you click “Save”, the Auto Scaling mechanism starts to satisfy the new requirements, either spinning up new workers or shutting down existing ones.

In the CloudFormation template, by default, all three properties are set to the same value. As a result, the number of workers remains constant. When a worker is terminated (or becomes unavailable for any reason), Auto Scaling starts a new one to satisfy the requirements.
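The same change can be scripted. A minimal sketch using boto3; the group name is a hypothetical placeholder for the workers’ ASG created by your stack:

```python
# Minimal sketch: set the "Min", "Max" and "Desired Capacity" properties.
import boto3

asg_client = boto3.client("autoscaling")

asg_client.update_auto_scaling_group(
    AutoScalingGroupName="presto-workers-asg",  # hypothetical name
    MinSize=4,
    MaxSize=4,
    DesiredCapacity=4,  # keeping all three equal keeps the worker count constant
)
```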

Static/Scheduled Auto Scaling

The static or scheduled auto scaling model is controlled from the “Scheduled Actions” tab. There, you can create scheduled actions that change the size of the cluster based on the time of day. For example, you can keep a small number of nodes during the night and boost it during busier parts of the day.

The configuration of this model is a simple list of actions, each scheduled to execute and change the static values of the “Min”, “Max” and “Desired Capacity” properties to other static values of your choosing. Each action executes on its configured schedule, either once or repeatedly (using a cron expression). Continuing the previous example, you can configure a nightly cooldown: one action lowers the values in the evening, and another brings them back up every morning.
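As an illustration of that nightly cooldown, the sketch below creates the two scheduled actions with boto3. The group name, action names, sizes, and times are hypothetical examples; the Recurrence fields use cron syntax and are interpreted in UTC:

```python
# Minimal sketch: shrink the cluster every evening and grow it back every morning.
import boto3

asg_client = boto3.client("autoscaling")
GROUP = "presto-workers-asg"  # hypothetical name

# Evening: shrink to 2 workers at 20:00 UTC.
asg_client.put_scheduled_update_group_action(
    AutoScalingGroupName=GROUP,
    ScheduledActionName="nightly-cooldown",   # hypothetical name
    Recurrence="0 20 * * *",                  # cron syntax, UTC
    MinSize=2, MaxSize=2, DesiredCapacity=2,
)

# Morning: grow back to 10 workers at 06:00 UTC.
asg_client.put_scheduled_update_group_action(
    AutoScalingGroupName=GROUP,
    ScheduledActionName="morning-rampup",     # hypothetical name
    Recurrence="0 6 * * *",
    MinSize=10, MaxSize=10, DesiredCapacity=10,
)
```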

Dynamic Auto Scaling

Dynamic auto scaling uses policies, which you define in the “Scaling Policies” tab. Of the three types of policies, “scaling policy with steps” and “target tracking scaling policy” (the default) are the most useful. The third type is a special case of the “with steps” policy that contains a single step. You can change the policy type by clicking a link at the bottom of the “Scaling Policies” tab.

  • Dynamic Target Tracking:

    With the dynamic target tracking policy you: (1) choose a relevant metric (e.g., average CPU utilization) and set its target value; and (2) specify the warm-up time to wait before reassessing the metric, so that new nodes have time to start up and begin contributing to the metric value. Additionally, you can disable scale-in so that the mechanism only increases the worker count and never shrinks the cluster.

  • Dynamic “With Steps”:

    The dynamic “with steps” policy is more complex, as it consists of an alarm and a number of adjustments. To define an alarm, you choose a metric and define its breach criteria (e.g., average CPU utilization over a chosen period of time higher than 70%). Optionally, the alarm can also send an event to an SNS topic for other systems to observe. Once the alarm is breached, a set of adjustments to the number of nodes is executed. Those adjustments can be either absolute (setting the number of nodes to a specific value) or incremental. The increments can be a value (e.g., add 2 nodes, or remove 1 node) or a percentage of the current number of workers (e.g., add 10%, or reduce by 20%). Both policy types are sketched after this list.
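Both policy types can also be created with boto3. A minimal sketch; the group name, policy names, thresholds and warm-up time are hypothetical examples, not recommendations:

```python
# Minimal sketch: a target tracking policy and a "with steps" policy for the workers' ASG.
import boto3

asg_client = boto3.client("autoscaling")
cw_client = boto3.client("cloudwatch")
GROUP = "presto-workers-asg"  # hypothetical name

# (1) Target tracking: hold average CPU near 60%, with a start-up buffer.
asg_client.put_scaling_policy(
    AutoScalingGroupName=GROUP,
    PolicyName="cpu-target-tracking",  # hypothetical name
    PolicyType="TargetTrackingScaling",
    EstimatedInstanceWarmup=300,  # seconds before new nodes count toward the metric
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 60.0,
        "DisableScaleIn": False,  # set True to only grow, never shrink
    },
)

# (2) "With steps": add 10% more workers when the alarm below breaches.
policy = asg_client.put_scaling_policy(
    AutoScalingGroupName=GROUP,
    PolicyName="cpu-step-scale-out",  # hypothetical name
    PolicyType="StepScaling",
    AdjustmentType="PercentChangeInCapacity",
    StepAdjustments=[{"MetricIntervalLowerBound": 0.0, "ScalingAdjustment": 10}],
)

# The alarm: average CPU above 70% for two 5-minute periods triggers the policy.
cw_client.put_metric_alarm(
    AlarmName="presto-workers-high-cpu",  # hypothetical name
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": GROUP}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```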

Auto Scaling Activity#

All events in the Auto Scaling mechanism can be observed in the “Activity History” tab, which is very useful for debugging. The instances currently in the ASG are listed in the “Instances” tab, where you can see which instances are being started up or decommissioned.
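The same activity history is also available programmatically, which can help when debugging from scripts. A minimal sketch with a hypothetical group name:

```python
# Minimal sketch: print recent scaling activity for an ASG.
import boto3

asg_client = boto3.client("autoscaling")

resp = asg_client.describe_scaling_activities(
    AutoScalingGroupName="presto-workers-asg",  # hypothetical name
    MaxRecords=20,
)
for activity in resp["Activities"]:
    print(activity["StartTime"], activity["StatusCode"], activity["Description"])
```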

Manual#

Auto Scaling can also be used for clusters built manually from the Starburst AMIs, without the CloudFormation stack. The workers need to be manually put into a single Auto Scaling Group and configured as described above. Graceful scaledown of workers, as described in the Graceful Scaledown of Workers section, does not work for manually set up Auto Scaling Groups.
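One possible way to put manually launched workers into a single ASG is sketched below with boto3; all names and instance IDs are hypothetical placeholders:

```python
# Minimal sketch: create an ASG from an existing worker, then attach the workers.
import boto3

asg_client = boto3.client("autoscaling")

# Derive a launch configuration from one existing worker instance;
# starting with MinSize=0 so the group does not launch new instances yet.
asg_client.create_auto_scaling_group(
    AutoScalingGroupName="presto-workers-manual",  # hypothetical name
    InstanceId="i-0123456789abcdef0",              # hypothetical existing worker
    MinSize=0,
    MaxSize=10,
)

# Attach every worker that should be part of the group; attaching
# raises the group's desired capacity accordingly.
asg_client.attach_instances(
    AutoScalingGroupName="presto-workers-manual",
    InstanceIds=["i-0123456789abcdef0", "i-0123456789abcdef1"],  # hypothetical IDs
)
```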

Graceful Scaledown of Workers#

When a CloudFormation stack is created using the CloudFormation template, all the workers are automatically organized within an AWS Auto Scaling Group.

When AWS Auto Scaling resizes the cluster, it starts decommissioning workers. The CloudFormation stack has features to make sure this process does not disrupt the usage of the cluster; most importantly, no queries fail because of it.

Without this feature, if a worker is forcefully shut down, all currently running queries fail and need to be restarted.

How It Works#

With graceful scaledown, when the Auto Scaling Group or stack is modified to shrink the cluster (the number of workers is lowered, or the Auto Scaling Group is configured to do so automatically), AWS Auto Scaling notifies the workers it intends to shut down and lets them prepare for this.

The worker enters a special state in which it (1) stops serving new requests, (2) continues processing the query tasks currently scheduled on it, and (3) shuts down after finishing that work. Then, after a 2-minute quiet period, the worker process automatically exits and notifies the Auto Scaling mechanism to proceed with the termination of its EC2 node.

The maximum time a worker can postpone AWS Auto Scaling’s termination of its node is 48 hours; this is an AWS limitation.
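This handshake is built on an AWS Auto Scaling lifecycle hook (see the next section for the resources involved). A minimal sketch of what such a scaledown handler conceptually does, assuming a hypothetical SQS queue URL and the standard lifecycle-hook message format:

```python
# Minimal sketch: react to a lifecycle-hook termination notice delivered via SQS.
import json
import boto3

sqs = boto3.client("sqs")
asg_client = boto3.client("autoscaling")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/scaledown-queue"  # hypothetical

resp = sqs.receive_message(QueueUrl=QUEUE_URL, WaitTimeSeconds=20, MaxNumberOfMessages=1)
for msg in resp.get("Messages", []):
    body = json.loads(msg["Body"])
    if body.get("LifecycleTransition") == "autoscaling:EC2_INSTANCE_TERMINATING":
        # While tasks are still draining, keep the hook alive with heartbeats;
        # AWS caps the total postponement at 48 hours.
        asg_client.record_lifecycle_action_heartbeat(
            LifecycleHookName=body["LifecycleHookName"],
            AutoScalingGroupName=body["AutoScalingGroupName"],
            InstanceId=body["EC2InstanceId"],
        )
        # ... wait until the worker has finished its scheduled tasks ...
        asg_client.complete_lifecycle_action(
            LifecycleHookName=body["LifecycleHookName"],
            AutoScalingGroupName=body["AutoScalingGroupName"],
            InstanceId=body["EC2InstanceId"],
            LifecycleActionResult="CONTINUE",  # let AWS terminate the EC2 node
        )
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```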

AWS Elements on the Stack#

The CloudFormation Template creates a number of resources on the stack:

  • an AutoScaling Hook

  • an SQS Queue that this hook writes to

  • an IAM Role and an InstanceProfile wrapper to allow AutoScaling to write to SQS

  • an IAM Role to allow the workers to talk to the SQS, AutoScaling and EC2 services. The role is fine grained to allow only the necessary actions; it is discussed in the Presto Node Role Permissions section below.

All the resources created on the stack are explicit, and you can find them and view their settings and permissions. All resources are removed once the stack is deleted.
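For example, you can verify the hook from a script; a minimal sketch with a hypothetical group name:

```python
# Minimal sketch: inspect the lifecycle hook created by the template.
import boto3

asg_client = boto3.client("autoscaling")

resp = asg_client.describe_lifecycle_hooks(
    AutoScalingGroupName="presto-workers-asg",  # hypothetical name
)
for hook in resp["LifecycleHooks"]:
    print(
        hook["LifecycleHookName"],
        hook["LifecycleTransition"],
        hook.get("NotificationTargetARN"),
    )
```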

Presto Node Role Permissions#

The Presto node role is created automatically by the CloudFormation template on the stack (and deleted when the stack is deleted).

When using SEP via our CloudFormation template, by default, you do not need to provide anything. The template creates all necessary resources automatically.

If you need to provide your own IAM Instance Profile for the Presto instances (the IamInstanceProfile field in the stack creation form), consult the IAM Role Permissions for Presto Cluster Nodes section. The same applies when launching the AMI manually: make sure you choose an IAM Role that satisfies the requirements.

Graceful Scaledown Limitations#

Presto instances created manually from the AWS Marketplace AMIs and manually set up in an Auto Scaling Group do not benefit from this mechanism, at least not without additional manual setup. They operate without graceful scaledown, so when Auto Scaling kicks in, all currently running queries may fail. In that case, a warning is recorded in the graceful scaledown handler log at boot time saying that it is not running; this is intended behavior.