5.13. Auto Scaling a Running Presto Cluster

AWS Auto Scaling offers automatic control over the size of your Presto cluster (CloudFormation stack).

Manage Auto Scaling Groups

When you create a Presto cluster, an Auto Scaling Group (ASG) is automatically created for all the Presto worker nodes. To view and manage this ASG, log in to your AWS account and open the Auto Scaling Groups page of the AWS console. There you will see the ASGs for the Presto workers of every Presto cluster you have running, and you can control how AWS Auto Scaling manages each cluster.
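
If you prefer to script this inspection, the same information is available through the AWS SDK. A minimal sketch using Python and boto3, assuming your AWS credentials are already configured:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Page through all Auto Scaling Groups in the account and print
    # the sizing properties that Auto Scaling manages for each one.
    paginator = autoscaling.get_paginator("describe_auto_scaling_groups")
    for page in paginator.paginate():
        for group in page["AutoScalingGroups"]:
            print(group["AutoScalingGroupName"],
                  group["MinSize"], group["MaxSize"], group["DesiredCapacity"])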

Auto Scaling Models

There are three auto scaling models you can employ to manage your Presto cluster:

  • Static / Manual
  • Static / Scheduled
  • Dynamic

Static/Manual Auto Scaling

The static or manual auto scaling model is managed from the “Details” tab and is the model configured by default. This tab has three main properties: “Desired Capacity”, “Min” and “Max”. Click the “Edit” button to change these values; when you hit “Save”, the Auto Scaling mechanism starts working to satisfy your requirements, either spinning up new Presto worker nodes or shutting down existing ones.
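
The same change can be made programmatically. A minimal boto3 (Python) sketch follows; the ASG name is a hypothetical placeholder for the worker ASG your stack created:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Equivalent of editing "Min", "Max" and "Desired Capacity" in the
    # "Details" tab; Auto Scaling then adds or removes workers to match.
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName="presto-workers-asg",  # hypothetical name
        MinSize=4,
        MaxSize=10,
        DesiredCapacity=6,
    )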

In the Starburst CloudFormation template, all three properties default to the same value, keeping the number of running worker nodes equal to the count you chose when you spun up the Presto cluster from the template. This means that when a node is terminated (or becomes unavailable for any reason), Auto Scaling starts a new one to satisfy the requirements.

Static/Scheduled Auto Scaling

The static or scheduled auto scaling model is controlled from the “Scheduled Actions” tab. There, you can create scheduled actions that change the size of the Presto cluster based on the time of day. For example, you could keep a small number of nodes during the night and increase it during the parts of the day that see peak demand.

The configuration of this model is a simple list of actions that are scheduled to execute and change the static values of the “Min”, “Max” and “Desired Capacity” properties to other static values of your choosing. Each action executes on its configured schedule, either once or on a recurring basis (cron). Continuing the previous example, a nightly cooldown would need two actions: one to lower the values in the evening and another every morning to bring them back up.
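
Continuing that example, the nightly cooldown can be scripted as two scheduled actions. A hedged sketch with a hypothetical ASG name and arbitrary sizes; the recurrence fields use cron syntax (UTC):

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Evening action: shrink the cluster for the night.
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="presto-workers-asg",   # hypothetical name
        ScheduledActionName="presto-nightly-cooldown",
        Recurrence="0 20 * * *",                     # every day at 20:00 UTC
        MinSize=2, MaxSize=2, DesiredCapacity=2,
    )

    # Morning action: bring the cluster back up for peak hours.
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="presto-workers-asg",
        ScheduledActionName="presto-morning-rampup",
        Recurrence="0 6 * * *",                      # every day at 06:00 UTC
        MinSize=10, MaxSize=10, DesiredCapacity=10,
    )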

Dynamic Auto Scaling

Dynamic auto scaling uses policies that you define in the “Scaling Policies” tab. Of the three policy types, “scaling policy with steps” and “target tracking scaling policy” (the default) are the most useful; the third is a special case of the “with steps” policy that contains a single step. You can change the policy type by clicking a link at the bottom of the “Scaling Policies” tab.

  • Dynamic Target Tracking:
With the dynamic target tracking policy you (1) choose a relevant metric (e.g., average CPU utilization) and set its target value; and (2) specify the warm-up time to wait before reassessing the metric, so that new nodes have time to start up and begin contributing to it. Additionally, you can disable scale-in so that the mechanism can only increase the Starburst Presto worker count, never shrink the cluster. (A scripted sketch of such a policy follows this list.)
  • Dynamic “With Steps”:
The dynamic “with steps” policy is more complex, as it consists of an alarm and a number of adjustments. To define an alarm, you choose a metric and define its breach criteria (e.g., average CPU utilization over a chosen period higher than 70%). Additionally, the alarm can optionally send an event to an SNS topic for other systems to observe. Once the alarm is breached, a set of adjustments to the number of nodes is executed. Each adjustment is either absolute (setting the number of nodes to a specific value) or an increment. An increment, in turn, can be a fixed value (e.g., add 2 nodes, or remove 1 node) or a percentage of the current number of nodes (e.g., add 10%, or reduce by 20%). (The second sketch after this list pairs such a policy with its alarm.)

Auto Scaling Activity

All events in the Auto Scaling mechanism can be observed in the “Activity History” tab, which is very useful for debugging. The instances currently part of the ASG are listed in the “Instances” tab, where you can see which instances are being started up or decommissioned.
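
The same activity history is available over the API, which can be convenient when debugging from a script. A minimal boto3 sketch, with a hypothetical ASG name:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Print the most recent scaling events for the worker ASG.
    activities = autoscaling.describe_scaling_activities(
        AutoScalingGroupName="presto-workers-asg",  # hypothetical name
        MaxRecords=20,
    )
    for activity in activities["Activities"]:
        print(activity["StartTime"], activity["StatusCode"], activity["Description"])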

Manual

Auto Scaling can also be used with Starburst Presto clusters built manually from the Starburst AMIs rather than from the CloudFormation stack. The workers need to be placed into a single Auto Scaling Group manually and configured as described above. Graceful scaledown of Presto worker nodes, as described in Graceful Scaledown of Presto Workers below, will not work for manually set up Auto Scaling Groups.
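
For reference, manually launched workers can be attached to an existing ASG with a single API call. A sketch with hypothetical instance IDs and group name:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Attach manually launched Presto worker instances to an existing
    # Auto Scaling Group; the group's desired capacity grows to include them.
    autoscaling.attach_instances(
        InstanceIds=["i-0123456789abcdef0", "i-0fedcba9876543210"],  # hypothetical
        AutoScalingGroupName="presto-workers-asg",                   # hypothetical
    )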

Graceful Scaledown of Presto Workers

When a CloudFormation stack is created using the Starburst CloudFormation template, all the Presto workers are automatically organized into an AWS Auto Scaling Group, as described in the sections above.

When AWS Auto Scaling shrinks the cluster, it starts decommissioning Presto workers. The Starburst Presto CloudFormation stack has features to make sure this process does not disrupt use of the cluster; most importantly, no queries fail because of it.

Without this feature, if a node is forcefully shut down, all queries currently running on it will fail and will need to be restarted.

How it works

With graceful scaledown, when the Auto Scaling Group or stack is modified to shrink the cluster (the number of workers is lowered, or the Auto Scaling Group is configured to do so automatically), AWS Auto Scaling notifies the Presto worker nodes it intends to shut down and lets them prepare for it.

The Presto worker enters a special state in which it (1) stops serving new requests, (2) continues processing the query tasks already scheduled on it, and (3) shuts down upon finishing that work. After a 2-minute quiet period, the Presto worker process exits automatically and notifies the Auto Scaling mechanism to proceed with terminating its EC2 node.
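
To make the handshake concrete, here is a conceptual sketch of what such a node-side handler does with the Auto Scaling lifecycle API. This is not Starburst's actual implementation; the names, the idle check, and the polling interval are all assumptions:

    import time

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Hypothetical names; the real hook and ASG are created by the stack.
    ASG_NAME = "presto-workers-asg"
    HOOK_NAME = "presto-worker-terminating-hook"

    def drain_and_release(instance_id, worker_is_idle):
        """Drain loop run when a TERMINATING lifecycle notification for
        this node arrives (e.g., via the stack's SQS queue)."""
        while not worker_is_idle():
            # Keep the lifecycle hook alive while scheduled tasks finish.
            # AWS caps the total postponement at 48 hours.
            autoscaling.record_lifecycle_action_heartbeat(
                LifecycleHookName=HOOK_NAME,
                AutoScalingGroupName=ASG_NAME,
                InstanceId=instance_id,
            )
            time.sleep(60)
        time.sleep(120)  # the 2-minute quiet period before the process exits
        # Tell Auto Scaling it may now terminate this EC2 node.
        autoscaling.complete_lifecycle_action(
            LifecycleHookName=HOOK_NAME,
            AutoScalingGroupName=ASG_NAME,
            InstanceId=instance_id,
            LifecycleActionResult="CONTINUE",
        )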

The maximum time a Presto worker can postpone the AWS Auto Scaling termination of its node is 48 hours; this is an AWS limitation.

AWS elements on the Stack

The Starburst Presto CloudFormation template creates a number of resources on the stack:

  • an Auto Scaling hook
  • an SQS queue that this hook writes to
  • an IAM Role and an InstanceProfile wrapper to allow Auto Scaling to write to SQS
  • an IAM Role to allow the Presto worker nodes to talk to the SQS, Auto Scaling, and EC2 services. The role is fine-grained, allowing only the necessary actions; it is discussed in a section below.
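
For reference, the hook and its SQS wiring boil down to a single API call like the one below. All names and ARNs are hypothetical stand-ins for the resources the template creates for you:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # A termination lifecycle hook that posts a notification to SQS
    # before Auto Scaling shuts a worker node down.
    autoscaling.put_lifecycle_hook(
        AutoScalingGroupName="presto-workers-asg",
        LifecycleHookName="presto-worker-terminating-hook",
        LifecycleTransition="autoscaling:EC2_INSTANCE_TERMINATING",
        NotificationTargetARN="arn:aws:sqs:us-east-1:123456789012:presto-scaledown-queue",
        RoleARN="arn:aws:iam::123456789012:role/presto-asg-notification-role",
        HeartbeatTimeout=3600,     # seconds before a heartbeat is required
        DefaultResult="CONTINUE",  # terminate anyway if the hook times out
    )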

All the resources created on the stack are explicit, and you can find them and inspect their settings and permissions. All of them are deleted when the stack is deleted.

Starburst Presto Worker Role Permissions

The Presto worker role is created on the stack automatically by the Starburst CloudFormation template (and deleted when the stack is deleted).

When using Starburst Presto via our CloudFormation template, you do not need to provide anything by default; the template creates all the necessary resources automatically.

If, however, you need to provide your own IAM Instance Profile for the Presto instances (the IamInstanceProfile field in the stack creation form), please consult the AWS Security Prerequisites section. The same applies when launching the AMI manually: make sure you choose an IAM Role that satisfies the requirements.

Graceful Scaledown limitations

Presto instances created manually from the AWS Marketplace AMIs and manually placed into an Auto Scaling Group will not benefit from this mechanism, at least not without additional manual setup. They operate without graceful scaledown, so when Auto Scaling kicks in, queries that are currently running may fail. In that case, at boot time a warning is recorded in the graceful scaledown handler log saying that it is not running; this is intended behaviour.