Ranger configuration#

The starburst-ranger Helm chart configures the Ranger server in the cluster with the values.yaml file detailed in the following sections.

A minimal values file adds the registry credentials and overrides any defaults to suitable values.

Ranger needs to be enabled in the SEP configuration and deployed with Helm after SEP.
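For example, a minimal values file might look like the following sketch; the credentials and the service password are placeholders for your own values:

registryCredentials:
  enabled: true
  registry: "harbor.starburstdata.net"
  username: "<registry username>"
  password: "<registry password>"

admin:
  passwords:
    # placeholder - override only the defaults you need to change
    service: "<service password>"

Helm merges this file over the chart defaults, so only overridden values need to appear.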

Docker registry access#

The registryCredentials section works the same as the Docker image and registry section for the SEP Helm chart.

registryCredentials:
  enabled: false
  registry:
  username:
  password:

Ranger server#

The admin section configures the Ranger server, including the Ranger admin UI:

admin:
  image:
    repository: "harbor.starburstdata.net/starburstdata/starburst-ranger-admin"
    tag: "2.0.33"
    pullPolicy: "IfNotPresent"
  port: 6080
  resources:
    requests:
      memory: "1Gi"
      cpu: 2
    limits:
      memory: "1Gi"
      cpu: 2
  # serviceUser is used by Presto to access Ranger
  serviceUser: "presto_service"
  passwords:
    admin: "RangerPassword1"
    tagsync: "TagSyncPassword1"
    usersync: "UserSyncPassword1"
    keyadmin: "KeyAdminPassword1"
    service: "PrestoServicePassword1"
  # optional truststore containing CA certificates to use instead of the default one
  truststore:
    # existing secret containing truststore.jks key
    secret:
    # password to truststore
    password:
  env:
    # Additional env variables to pass to Ranger Admin.
    # To pass a Ranger install property, use a variable named RANGER__<property_name>,
    # for example RANGER__authentication_method.
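For example, the following sketch sets the authentication_method install property mentioned in the comment above; the value is only an illustration:

admin:
  env:
    # maps to the authentication_method Ranger install property
    RANGER__authentication_method: "UNIX"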

LDAP user synchronization server#

usersync:
  enabled: true
  image:
    repository: "harbor.starburstdata.net/starburstdata/ranger-usersync"
    tag: "2.0.33"
    pullPolicy: "IfNotPresent"
  name: "ranger-usersync"
  resources:
    requests:
      memory: "1Gi"
      cpu: 1
    limits:
      memory: "1Gi"
      cpu: 1
  tls:
    # set true to enable tls
    enabled: false
    # optional truststore containing the CA certificate for the LDAP server
    truststore:
      # existing secret containing truststore.jks key
      secret:
      # password to truststore
      password:
  # env is a map of ranger config variables
  env:
    # Use RANGER__<property_name> variables to set Ranger install properties.
    RANGER__SYNC_LDAP_URL: "ldap://ranger-ldap:389"
    RANGER__SYNC_LDAP_BIND_DN: "cn=admin,dc=ldap,dc=example,dc=org"
    RANGER__SYNC_LDAP_BIND_PASSWORD: "cieX7moong3u"
    RANGER__SYNC_LDAP_SEARCH_BASE: "dc=ldap,dc=example,dc=org"
    RANGER__SYNC_LDAP_USER_SEARCH_BASE: "ou=users,dc=ldap,dc=example,dc=org"
    RANGER__SYNC_LDAP_USER_OBJECT_CLASS: "person"
    RANGER__SYNC_GROUP_SEARCH_ENABLED: "true"
    RANGER__SYNC_GROUP_USER_MAP_SYNC_ENABLED: "true"
    RANGER__SYNC_GROUP_SEARCH_BASE: "ou=groups,dc=ldap,dc=example,dc=org"
    RANGER__SYNC_GROUP_OBJECT_CLASS: "groupOfNames"
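For example, to synchronize users from an LDAP server over TLS, you can enable tls and reference an existing Kubernetes secret with a truststore.jks key; the secret name, password, and URL in this sketch are placeholders:

usersync:
  tls:
    enabled: true
    truststore:
      # existing secret holding the CA certificate for the LDAP server
      secret: "ldap-ca-truststore"
      password: "<truststore password>"
  env:
    # placeholder URL - use the secured LDAP port
    RANGER__SYNC_LDAP_URL: "ldaps://ranger-ldap:636"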

Backing database server#

Internal database:

database:
  type: "internal"
  internal:
    image:
      repository: "library/postgres"
      tag: "10.6"
      pullPolicy: "IfNotPresent"
    volume:
      # use one of:
      # - existingVolumeClaim to specify existing PVC
      # - persistentVolumeClaim to specify spec for new PVC
      # - other volume type inline configuration, e.g. emptyDir
      # Examples:
      # existingVolumeClaim: "my_claim"
      # emptyDir: {}
      persistentVolumeClaim:
        storageClassName:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: "2Gi"
    resources:
      requests:
        memory: "1Gi"
        cpu: 2
      limits:
        memory: "1Gi"
        cpu: 2
    port: 5432
    databaseName: "ranger"
    databaseUser: "ranger"
    databasePassword: "RangerPass123"
    databaseRootUser: "rangeradmin"
    databaseRootPassword: "RangerAdminPass123"
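For example, to reuse an existing persistent volume claim instead of having the chart create one, replace the persistentVolumeClaim section as in this sketch; the claim name is a placeholder:

database:
  type: "internal"
  internal:
    volume:
      # PVC that already exists in the deployment namespace
      existingVolumeClaim: "ranger-db-claim"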

External PostgreSQL database:

database:
  # type is internal | external
  type: "internal"

  external:
    port:
    host:
    databaseName:
    databaseUser:
    databasePassword:
    databaseRootUser:
    databaseRootPassword:
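For example, an external PostgreSQL server can be configured as in the following sketch; all values are placeholders for your own database:

database:
  type: "external"
  external:
    port: 5432
    host: "postgres.example.com"
    databaseName: "ranger"
    databaseUser: "ranger"
    databasePassword: "<password>"
    databaseRootUser: "postgres"
    databaseRootPassword: "<root password>"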

Exposing the cluster to an outside network#

The expose section for Ranger works identically to the SEP server expose section. It exposes the Ranger user interface outside the cluster for configuring and managing policies.

Differences are isolated to the configured default values. The default type is clusterIp:

expose:
  type: "clusterIp"
  clusterIp:
    name: "ranger"
    ports:
      http:
        port: 6080

The following example shows the default values with an activated nodePort type:

expose:
  type: "nodePort"
  nodePort:
    name: "ranger"
    ports:
      http:
        port: 6080
        nodePort: 30680

The following example shows the default values with an activated loadBalancer type:

expose:
  type: "loadBalancer"
  loadBalancer:
    name: "ranger"
    IP: ""
    ports:
      http:
        port: 6080
    annotations: {}
    sourceRanges: []

The following example shows the default values with an activated ingress type:

expose:
  type: "ingress"
  ingress:
    tls:
      enabled: true
      secretName:
    host:
    path: "/"
    annotations: {}
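For example, to serve the Ranger UI on a specific host name over TLS, you can reference an existing TLS secret; the host and secret name in this sketch are placeholders:

expose:
  type: "ingress"
  ingress:
    tls:
      enabled: true
      # existing secret containing the TLS certificate and key
      secretName: "ranger-tls"
    host: "ranger.example.com"
    path: "/"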

Datasources#

# datasources - list of Presto datasources to configure as Ranger
# services. It is mounted as the file /config/datasources.yaml inside
# the container and processed by the init script.

datasources:
  - name: "fake-presto-1"
    host: "presto.fake-presto-1-namespace"
    port: 8080
    username: "presto_service1"
    password: "Password123"
  - name: "fake-presto-2"
    host: "presto.fake-presto-2-namespace"
    port: 8080
    username: "presto_service2"
    password: "Password123"

Server startup configuration#

# initFile - optional startup script, called with the container name
# as a parameter - either ranger-admin or ranger-usersync.
# Use "files/initFile.sh" to enable the Presto integration using the datasources section.

initFile:

# List of extra arguments to be passed to initFile

extraArguments:
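For example, to enable the Presto integration driven by the datasources section, point initFile at the bundled script named in the comment above; the extra argument is a hypothetical placeholder:

initFile: "files/initFile.sh"

extraArguments:
  # hypothetical placeholder argument
  - "<extra argument>"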

Extra secrets#

# The secret below is mounted in /extra-secret/ within the containers

extraSecret:
  # Replace this with the name of a secret in the namespace you are deploying to
  name:
  # Optionally, 'file' may be provided; it is deployed as a secret with the given 'name' in that namespace.
  file:
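For example, to mount an existing secret from the deployment namespace, set only name; to have the chart create the secret from a local file, set file as well. Both values in this sketch are placeholders:

extraSecret:
  name: "ranger-extra"
  file: "files/ranger-extra.properties"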

Node assignment#

You can add configuration to determine the nodes the pods are scheduled on:

nodeSelector: {}
tolerations: []
affinity: {}
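For example, the following sketch schedules the pods on nodes carrying a hypothetical role=ranger label and tolerates a matching taint:

nodeSelector:
  role: "ranger"
tolerations:
  - key: "role"
    operator: "Equal"
    value: "ranger"
    effect: "NoSchedule"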