Congratulations! Your cluster is up and running. Now it is time to test it by connecting clients such as the CLI or a BI application using the JDBC driver.
This usage gives you insight into the correct configuration, data sources, and sizing of the cluster for the desired workload and performance. That in turn leads to further work such as updating the configuration, scaling your cluster, or upgrading to a newer release.
This document covers these and other aspects associated with running your SEP cluster, or even multiple clusters.
Once you have successfully installed SEP, you can confirm that the coordinator pod is running with tools such as kubectl or Octant.
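For example, you can list the pods with kubectl. The namespace and pod names depend on your installation; the default namespace is assumed here:

```shell
# List the pods and check that the coordinator pod reports STATUS Running.
# Add --namespace <your-namespace> if you installed into a different namespace.
kubectl get pods
```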
You can also check the log in the server pod. A successful start of the coordinator, as well as of each worker, is indicated by a log entry similar to the following snippet:
INFO main io.prestosql.server.Server ======== SERVER STARTED ========
A default deployment exposes the UI via HTTP and does not require a password. Production deployments should use HTTPS and have authentication configured.
The main page of the Web UI displays the number of connected workers. Confirm that the number is identical to the number specified in the worker section. The default number of workers is two.
With the knowledge that you can access the Web UI, and therefore the coordinator, and that some workers are running, you are ready to connect client tools for running queries.
Use the protocol, FQDN, and port with the --server option when starting the CLI to access Presto:
presto --server https://presto.example.com:9999
The same URL can be used to configure JDBC or ODBC driver connections, as well as connections from any other client.
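As a sketch, a JDBC connection string for the same endpoint typically follows this pattern; the exact URL prefix and driver class depend on the driver version you use:

```
jdbc:presto://presto.example.com:9999
```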
Initial installation should use the simplest possible configuration to allow your cluster to start and run, with some of the following characteristics:
only SEP deployment
limited or no catalogs
minimal worker count without scaling
This allows you to update the configuration incrementally:
update the values YAML file by adding or updating configuration
run a helm upgrade just like for installation
verify that the desired changes are applied successfully
repeat the process with the next desired change
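The upgrade step above can be sketched as follows, assuming a release named sep, a chart reference starburst/sep, and a values file values.yaml (all placeholder names):

```shell
# Apply the updated values file to the existing release.
helm upgrade sep starburst/sep --values values.yaml

# Watch the rollout; updated pods are replaced while unaffected pods keep running.
kubectl get pods --watch
```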
Typically Helm applies the relevant changes in a granular fashion. For example, if you change the worker configuration, no coordinator changes are performed and the coordinator continues to run while workers are updated.
If you need to ensure that pods are recreated completely, you can use the
--recreate-pods option for your
helm command. This essentially restarts
all workers as well as the coordinator and therefore restarts the cluster. This
includes a downtime for users.
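For example, with the same placeholder release and chart names as in the installation, the full restart looks like the following:

```shell
# Recreate all pods, including the coordinator.
# The cluster is unavailable to users while pods restart.
helm upgrade sep starburst/sep --values values.yaml --recreate-pods
```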
Alternatively, you can enable autoscaling in your worker configuration and update the deployment. Sufficient resources must be available in the cluster.
Alternatively, you can increase the memory resources allocated to the workers and update your deployment.
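As an illustration, the relevant sections of the values YAML file might look like the following sketch; the exact key names and defaults depend on your chart version, so verify them against the chart documentation before use:

```yaml
worker:
  # Assumed autoscaling keys; adjust to the schema of your chart version.
  autoscaling:
    enabled: true
    minReplicas: 2
    maxReplicas: 10
  # Assumed resource keys for increasing worker memory.
  resources:
    memory: 16Gi
```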