This page documents all of the configuration properties for each Druid service type. A recommended way of organizing Druid configuration files can be seen in the conf directory in the Druid package root:
$ ls -R conf
druid

conf/druid:
_common       broker        coordinator   historical    middleManager overlord

conf/druid/_common:
common.runtime.properties log4j2.xml

conf/druid/broker:
jvm.config         runtime.properties

conf/druid/coordinator:
jvm.config         runtime.properties

conf/druid/historical:
jvm.config         runtime.properties

conf/druid/middleManager:
jvm.config         runtime.properties

conf/druid/overlord:
jvm.config         runtime.properties
Each directory has a runtime.properties file containing configuration properties for the Druid service that corresponds to the directory, such as historical. The jvm.config files contain JVM flags such as heap sizing properties for each service. Common properties shared by all services are placed in _common/common.runtime.properties.

Configuration Interpolation

Configuration values can be interpolated from System Properties, Environment Variables, or local files:
druid.metadata.storage.type=${env:METADATA_STORAGE_TYPE}
druid.processing.tmpDir=${sys:java.io.tmpdir}
druid.segmentCache.locations=${file:UTF-8:/config/segment-cache-def.json}
Interpolation is also recursive:
druid.segmentCache.locations=${file:UTF-8:${env:SEGMENT_DEF_LOCATION}}
If the referenced variable or file is not set, Druid throws an exception at startup. To avoid this, provide a default value:
druid.metadata.storage.type=${env:METADATA_STORAGE_TYPE:-mysql}
druid.processing.tmpDir=${sys:java.io.tmpdir:-/tmp}
To prevent interpolation, escape the expression with an extra $; the property is then set to the literal value ${value}:
config.name=$${value}

Common Configurations

The properties under this section are common configurations that should be shared across all Druid services in a cluster.

JVM Configuration Best Practices

There are four JVM parameters that we set on all of our services:
  • -Duser.timezone=UTC: Sets the default timezone of the JVM to UTC. We always set this and do not test with other default timezones.
  • -Dfile.encoding=UTF-8: We test assuming UTF-8. Local encodings might work but may result in unexpected bugs.
  • -Djava.io.tmpdir=<a path>: Various parts of Druid use temporary files. These files can become quite large. Ensure this points to a location with ample space.
    • The temp directory should not be volatile tmpfs
    • Should have good read and write speed
    • Avoid NFS mounts
    • The org.apache.druid.java.util.metrics.SysMonitor requires execute privileges on files in java.io.tmpdir
  • -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager: Allows log4j2 to handle logs for non-log4j2 components like Jetty.
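Put together, a jvm.config file (one flag per line) applying all four parameters might look like the following sketch. The heap sizes and temp directory are illustrative; tune them per service:

```properties
-server
-Xms8g
-Xmx8g
-Duser.timezone=UTC
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=/mnt/druid-tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
```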

Extensions

Many of Druid’s external dependencies can be plugged in as modules. Extensions can be provided using the following configs:
| Property | Description | Default |
|---|---|---|
| druid.extensions.directory | The root extension directory where extension-related files are placed. | extensions (relative to Druid’s working directory) |
| druid.extensions.hadoopDependenciesDir | The root Hadoop dependencies directory. | hadoop-dependencies (relative to Druid’s working directory) |
| druid.extensions.loadList | A JSON array of extensions to load. If not specified, the value is null and Druid loads all extensions. If set to an empty list [], no extensions are loaded. | null |
| druid.extensions.searchCurrentClassloader | Whether Druid will search the main classloader for extensions. | true |
| druid.extensions.useExtensionClassloaderFirst | Whether Druid extensions should prefer loading classes from their own jars. | false |
| druid.extensions.hadoopContainerDruidClasspath | Explicitly set the user classpath for Hadoop jobs. | null |
| druid.extensions.addExtensionsToHadoopContainer | Add extensions from loadList to the Hadoop container classpath. | false |
For more details about available extensions, see the Extensions page.
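For example, to load only a specific set of extensions rather than everything in the extensions directory, set loadList to the directory names of the extensions you have installed (the names below are illustrative):

```properties
druid.extensions.loadList=["druid-hdfs-storage", "mysql-metadata-storage"]
```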

Modules

| Property | Description | Default |
|---|---|---|
| druid.modules.excludeList | JSON array of canonical class names of modules which shouldn’t be loaded. | [] |

ZooKeeper

We recommend setting the base ZK path and the ZK service host:
| Property | Description | Default |
|---|---|---|
| druid.zk.paths.base | Base ZooKeeper path. | /druid |
| druid.zk.service.host | The ZooKeeper hosts to connect to. Required. | none |
| druid.zk.service.user | Username to authenticate with ZooKeeper. | none |
| druid.zk.service.pwd | Password Provider or string password for ZooKeeper. | none |
| druid.zk.service.authScheme | Authentication scheme. | digest |
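A minimal ZooKeeper configuration in _common/common.runtime.properties might look like the following; the hostnames are placeholders for your own ZooKeeper ensemble:

```properties
druid.zk.service.host=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
druid.zk.paths.base=/druid
```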

ZooKeeper Behavior

| Property | Description | Default |
|---|---|---|
| druid.zk.service.sessionTimeoutMs | ZooKeeper session timeout in milliseconds. | 30000 |
| druid.zk.service.connectionTimeoutMs | ZooKeeper connection timeout in milliseconds. | 15000 |
| druid.zk.service.compress | Whether created Znodes should be compressed. | true |
| druid.zk.service.acl | Enable ACL security for ZooKeeper. | false |

TLS

General Configuration

| Property | Description | Default |
|---|---|---|
| druid.enablePlaintextPort | Enable/disable the HTTP connector. | true |
| druid.enableTlsPort | Enable/disable the HTTPS connector. | false |

Jetty Server TLS Configuration

Required properties:
| Property | Description | Required |
|---|---|---|
| druid.server.https.keyStorePath | File path or URL of the TLS/SSL KeyStore. | yes |
| druid.server.https.keyStoreType | Type of the KeyStore. | yes |
| druid.server.https.certAlias | Alias of the TLS/SSL certificate. | yes |
| druid.server.https.keyStorePassword | Password Provider or String password for the KeyStore. | yes |
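A sketch of a Jetty TLS configuration, assuming a JKS keystore at a hypothetical path (the path, alias, and password are placeholders; druid.enableTlsPort must also be set to enable the HTTPS connector):

```properties
druid.enableTlsPort=true
druid.server.https.keyStorePath=/opt/druid/ssl/druid.jks
druid.server.https.keyStoreType=jks
druid.server.https.certAlias=druid
druid.server.https.keyStorePassword=changeit
```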

Authentication and Authorization

| Property | Type | Description | Default |
|---|---|---|---|
| druid.auth.authenticatorChain | JSON List | List of Authenticator type names | ["allowAll"] |
| druid.escalator.type | String | Type of Escalator for internal communications | noop |
| druid.auth.authorizers | JSON List | List of Authorizer type names | ["allowAll"] |
| druid.auth.unsecuredPaths | List | Paths where security checks are not performed | [] |
| druid.auth.allowUnauthenticatedHttpOptions | Boolean | Skip authentication for HTTP OPTIONS requests | false |
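For example, to exempt a load balancer health-check endpoint from security checks (the path below is illustrative; list whichever paths your deployment needs to leave unsecured):

```properties
druid.auth.unsecuredPaths=["/status/health"]
```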

Metadata Storage

These properties specify the JDBC connection for the metadata storage:
| Property | Description | Default |
|---|---|---|
| druid.metadata.storage.type | Type of metadata storage: mysql, postgresql, or derby | derby |
| druid.metadata.storage.connector.connectURI | JDBC URI for the database | none |
| druid.metadata.storage.connector.user | Username to connect with | none |
| druid.metadata.storage.connector.password | Password Provider or String password | none |
| druid.metadata.storage.connector.createTables | Create tables if they don’t exist | true |
| druid.metadata.storage.tables.base | Base name for tables | druid |
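For example, a MySQL-backed metadata store might be configured as follows. The host, database name, and credentials are placeholders, and MySQL support additionally requires the corresponding metadata-storage extension to be loaded:

```properties
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://db.example.com:3306/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=changeme
```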

Deep Storage

These configurations control how to push and pull segments from deep storage:
| Property | Description | Default |
|---|---|---|
| druid.storage.type | Type of deep storage: local, noop, s3, hdfs, cassandra | local |

Local Deep Storage

| Property | Description | Default |
|---|---|---|
| druid.storage.storageDirectory | Directory on disk to use as deep storage | /tmp/druid/localStorage |

S3 Deep Storage

Note: Requires the druid-s3-extensions extension.
| Property | Description | Default |
|---|---|---|
| druid.storage.bucket | S3 bucket name | none |
| druid.storage.baseKey | S3 object key prefix | none |
| druid.storage.disableAcl | Disable ACL (if false, grants full control to the bucket owner) | false |
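A sketch of an S3 deep-storage configuration; the bucket name and key prefix are placeholders:

```properties
druid.storage.type=s3
druid.storage.bucket=my-druid-bucket
druid.storage.baseKey=druid/segments
```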

HDFS Deep Storage

Note: Requires the druid-hdfs-storage extension.
| Property | Description | Default |
|---|---|---|
| druid.storage.storageDirectory | HDFS directory for deep storage | none |
| druid.storage.compressionFormat | Compression format: zip or lz4 | zip |
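A sketch of an HDFS deep-storage configuration; the namenode address and path are placeholders:

```properties
druid.storage.type=hdfs
druid.storage.storageDirectory=hdfs://namenode.example.com:9000/druid/segments
```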

Startup Logging

| Property | Description | Default |
|---|---|---|
| druid.startup.logging.logProperties | Log all properties on startup | false |
| druid.startup.logging.maskProperties | Masks sensitive properties containing these words | ["password"] |
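For example, to log all properties at startup while masking any property whose name contains "secret" in addition to the default "password":

```properties
druid.startup.logging.logProperties=true
druid.startup.logging.maskProperties=["password", "secret"]
```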

Request Logging

| Property | Description | Default |
|---|---|---|
| druid.request.logging.type | How to log queries: noop, file, emitter, slf4j, filtered, composing, switching | noop |

File Request Logging

| Property | Description | Default |
|---|---|---|
| druid.request.logging.dir | Directory to store request logs | none |
| druid.request.logging.filePattern | Joda datetime format for each file | "yyyy-MM-dd'.log'" |
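A sketch of daily-rotated file request logging; the log directory is a placeholder:

```properties
druid.request.logging.type=file
druid.request.logging.dir=/var/log/druid/requests
druid.request.logging.filePattern=yyyy-MM-dd'.log'
```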

JavaScript

| Property | Description | Default |
|---|---|---|
| druid.javascript.enabled | Enable JavaScript functionality | false |
JavaScript-based functionality is disabled by default. See the JavaScript programming guide for more information.

Service-Specific Configuration

For detailed configuration options specific to each Druid service:

  • Extensions: learn about core and community extensions
  • Logging: configure logging for Druid services
