Starburst Hive connector#

The Starburst Hive connector is an extended version of the Hive Connector with identical configuration and usage.

Note

The additional features of the connector require a valid Starburst Enterprise Presto license, unless otherwise noted.

The following improvements are included:

HDFS permissions#

Before running any CREATE TABLE or CREATE TABLE ... AS statements for Hive tables in Presto, you need to check that the operating system user running the Presto server has access to the Hive warehouse directory on HDFS.

The Hive warehouse directory is specified by the configuration variable hive.metastore.warehouse.dir in hive-site.xml; the default value is /user/hive/warehouse. If the operating system user running the Presto server does not have access to this directory, either add the following to jvm.config on all of the nodes: -DHADOOP_USER_NAME=USER, where USER is an operating system user with proper permissions for the Hive warehouse directory, or start the Presto server as such a user. The hive user generally works as USER, since Hive is often started as the hive user. If you run into HDFS permission problems on CREATE TABLE ... AS, remove /tmp/presto-* on HDFS, fix the user as described above, then restart all of the Presto servers.
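
For example, assuming the hive operating system user has the required permissions on the warehouse directory, the addition to jvm.config on each node would be:

-DHADOOP_USER_NAME=hive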

Apache Sentry-based authorization#

The connector supports authorization with Apache Sentry, with the details documented in Apache Sentry-based authorization.

Storage Caching#

The connector supports the default storage caching. In addition, if HDFS Kerberos authentication is enabled in your catalog properties file with the following setting, caching takes the relevant permissions into account and operates accordingly:

hive.hdfs.authentication.type=KERBEROS

Additional configuration for Kerberos is required.

If HDFS Kerberos authentication is enabled, you can also enable user impersonation using:

hive.hdfs.impersonation.enabled=true

The service user assigned to Presto needs to be able to access data files in the underlying storage. Access permissions are checked against the impersonated user, but with caching in place, some read operations happen in the context of the system user.
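
As a sketch, a catalog properties file combining storage caching with HDFS Kerberos authentication and impersonation could look like the following. The metastore URI, principal, keytab, and cache location are placeholders, and the hive.cache.* property names follow the default storage caching documentation referenced above:

connector.name=hive-hadoop2
hive.metastore.uri=thrift://example.net:9083
hive.hdfs.authentication.type=KERBEROS
hive.hdfs.impersonation.enabled=true
hive.hdfs.presto.principal=presto@EXAMPLE.COM
hive.hdfs.presto.keytab=/etc/presto/hdfs.keytab
hive.cache.enabled=true
hive.cache.location=/opt/hive-cache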

Transactional and ORC ACID tables#

When connecting to Hive metastore version 3, the Hive connector supports reading from the following types of transactional tables:

  • insert-only and ACID,

  • partitioned and not partitioned,

  • bucketed and not bucketed.
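
No special syntax is required to read these tables. As an illustrative sketch, assuming a transactional table named web.orders_acid exists in a catalog named hive (both names are placeholders), a regular query works; writing to the table is not supported, as noted in Limitations:

SELECT count(*) FROM hive.web.orders_acid;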

Materialized views#

The Hive connector supports reading from Hive materialized views. In Presto, these views are presented as regular, read-only tables.
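
For example, assuming a Hive materialized view analytics.daily_summary_mv in a catalog named hive (placeholder names), it can be queried like any ordinary table; because it is exposed as a read-only table, INSERT and DELETE statements against it are rejected:

SELECT * FROM hive.analytics.daily_summary_mv LIMIT 10;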

Amazon Glue support#

Statistics collection is supported for Hive Metastore and Amazon Glue.

Configuring and using Presto with AWS Glue is described in the AWS Glue support documentation section.
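
As a minimal sketch, pointing a catalog at Glue instead of a Hive Metastore uses the hive.metastore and hive.metastore.glue.region properties described in that section; the region value below is a placeholder:

connector.name=hive-hadoop2
hive.metastore=glue
hive.metastore.glue.region=us-east-1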

HDFS erasure coding#

The Hive connector supports Hadoop 3’s HDFS erasure coding.

Cloudera compatibility matrix#

The Starburst Hive connector can query the Cloudera Data Platform (CDP), formerly the Cloudera Distributed Hadoop (CDH) platform. Support and compatibility vary based on the version you use; see the following table for details:

Cloudera Distributed Hadoop and Starburst Enterprise Presto compatibility matrix#

| CDP      | 338-e       | 332-e       | 323-e       | 312-e       |
| -------- | ----------- | ----------- | ----------- | ----------- |
| CDP 7    | Yes         | Best effort | Best effort | Best effort |
| CDP 6.3  | Best effort | Best effort | Best effort | Best effort |
| CDP 6.2  | Best effort | Best effort | Best effort | Best effort |
| CDP 6.1  | Best effort | Best effort | Best effort | Best effort |
| CDP 6.0  | Best effort | Best effort | Best effort | Best effort |
| CDP 5.16 | Yes         | Yes         | Yes         | Yes         |
| CDP 5.15 | Yes         | Yes         | Yes         | Yes         |
| CDP 5.14 | Yes         | Yes         | Yes         | Yes         |
| CDP 5.13 | Best effort | Best effort | Best effort | Best effort |
| CDP 5.12 | No          | No          | Best effort | Best effort |
| CDP 5.11 | No          | No          | No          | No          |

Limitations#

The following limitations apply in addition to the limitations of the Hive Connector.

  • Writing to and creation of transactional tables is not supported.

  • Reading ORC ACID tables created with Hive Streaming ingest is not supported.

  • For security reasons, the sys system catalog is not accessible in Presto.

  • Hive’s timestamp with local zone data type is not supported in Presto. It is possible to read from a table that has a column of this type, but the column itself is not accessible. Writing to such a table is not supported.

  • Presto does not correctly read timestamp values from Parquet, RCFile with binary SerDe, and Avro file formats created by Hive 3.1 or later, due to Hive issues HIVE-21002 and HIVE-22167. When reading from these file formats, Presto returns results that differ from Hive.