Recommended Configuration File Organization
A recommended way of organizing Druid configuration files can be seen in the `conf` directory in the Druid package root:
- The `runtime.properties` file contains configuration properties for the specific Druid service corresponding to the directory, such as `historical`.
- The `jvm.config` files contain JVM flags, such as heap sizing properties, for each service.
- Common properties shared by all services are placed in `_common/common.runtime.properties`.
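As a sketch, a cluster following this convention might lay out its configuration like the tree below (the `historical` and `broker` directories are illustrative; your cluster will have one directory per service it runs):

```text
conf/druid/
├── _common/
│   └── common.runtime.properties   # properties shared by all services
├── historical/
│   ├── jvm.config                  # JVM flags for the Historical service
│   └── runtime.properties          # Historical-specific properties
└── broker/
    ├── jvm.config
    └── runtime.properties
```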
Configuration Interpolation
Configuration values can be interpolated from system properties, environment variables, or local files.
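For example, the three lookup sources might be used like this (the property names and file path are illustrative, and the `${sys:…}`, `${env:…}`, and `${file:…}` prefixes shown follow the common string-substitution convention; verify the exact syntax against your Druid version):

```properties
# Read the metadata store password from an environment variable
druid.metadata.storage.connector.password=${env:METADATA_STORAGE_PASSWORD}
# Reuse a JVM system property
druid.processing.tmpDir=${sys:java.io.tmpdir}
# Inline the contents of a local file
druid.segmentCache.locations=${file:UTF-8:/etc/druid/segment-cache.json}
```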
Common Configurations
The properties under this section are common configurations that should be shared across all Druid services in a cluster.

JVM Configuration Best Practices
There are four JVM parameters that we set on all of our services:
- `-Duser.timezone=UTC`: Sets the default timezone of the JVM to UTC. We always set this and do not test with other default timezones.
- `-Dfile.encoding=UTF-8`: We test assuming UTF-8. Local encodings might work but may result in unexpected bugs.
- `-Djava.io.tmpdir=<a path>`: Various parts of Druid use temporary files, and these files can become quite large. Ensure this points to a location with ample space.
  - The temp directory should not be volatile tmpfs.
  - It should have good read and write speed.
  - Avoid NFS mounts.
  - The org.apache.druid.java.util.metrics.SysMonitor requires execute privileges on files in java.io.tmpdir.
- `-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager`: Allows log4j2 to handle logs for non-log4j2 components, such as Jetty.
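Putting these four flags together, a `jvm.config` for a single service might look like the sketch below (the heap and direct-memory sizes are placeholders to tune for your hardware, not recommendations):

```
-server
-Xms8g
-Xmx8g
-XX:MaxDirectMemorySize=10g
-Duser.timezone=UTC
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=var/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
```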
Extensions
Many of Druid’s external dependencies can be plugged in as modules. Extensions can be provided using the following configs:

| Property | Description | Default |
|---|---|---|
druid.extensions.directory | The root extension directory where user can put extensions related files. | extensions (relative to Druid’s working directory) |
druid.extensions.hadoopDependenciesDir | The root Hadoop dependencies directory. | hadoop-dependencies (relative to Druid’s working directory) |
druid.extensions.loadList | A JSON array of extensions to load. If not specified, value is null and Druid loads all extensions. If empty list [], no extensions are loaded. | null |
druid.extensions.searchCurrentClassloader | Whether Druid will search the main classloader for extensions. | true |
druid.extensions.useExtensionClassloaderFirst | Whether Druid extensions should prefer loading classes from their own jars. | false |
druid.extensions.hadoopContainerDruidClasspath | Explicitly set user classpath for Hadoop jobs. | null |
druid.extensions.addExtensionsToHadoopContainer | Add extensions from loadList to Hadoop container classpath. | false |
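In practice, most clusters set an explicit loadList rather than loading every extension. A minimal sketch (the extension names are examples; use the ones your cluster actually needs):

```properties
# Load only the extensions this cluster uses
druid.extensions.loadList=["druid-hdfs-storage", "druid-kafka-indexing-service", "mysql-metadata-storage"]
druid.extensions.directory=extensions
```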
For more details about available extensions, see the Extensions page.
Modules
| Property | Description | Default |
|---|---|---|
druid.modules.excludeList | JSON array of canonical class names of modules which shouldn’t be loaded. | [] |
ZooKeeper
We recommend setting the base ZooKeeper path and the ZooKeeper service host:

| Property | Description | Default |
|---|---|---|
druid.zk.paths.base | Base ZooKeeper path. | /druid |
druid.zk.service.host | The ZooKeeper hosts to connect to. REQUIRED | none |
druid.zk.service.user | Username to authenticate with ZooKeeper. | none |
druid.zk.service.pwd | Password Provider or string password for ZooKeeper. | none |
druid.zk.service.authScheme | Authentication scheme. | digest |
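A minimal ZooKeeper configuration covering the two recommended properties might look like this (hostnames are placeholders):

```properties
druid.zk.service.host=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
druid.zk.paths.base=/druid
```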
ZooKeeper Behavior
| Property | Description | Default |
|---|---|---|
druid.zk.service.sessionTimeoutMs | ZooKeeper session timeout in milliseconds. | 30000 |
druid.zk.service.connectionTimeoutMs | ZooKeeper connection timeout in milliseconds. | 15000 |
druid.zk.service.compress | Whether created Znodes should be compressed. | true |
druid.zk.service.acl | Enable ACL security for ZooKeeper. | false |
TLS
General Configuration
| Property | Description | Default |
|---|---|---|
druid.enablePlaintextPort | Enable/Disable HTTP connector. | true |
druid.enableTlsPort | Enable/Disable HTTPS connector. | false |
Jetty Server TLS Configuration
Required properties:

| Property | Description | Required |
|---|---|---|
druid.server.https.keyStorePath | File path or URL of the TLS/SSL KeyStore. | yes |
druid.server.https.keyStoreType | Type of the KeyStore. | yes |
druid.server.https.certAlias | Alias of TLS/SSL certificate. | yes |
druid.server.https.keyStorePassword | Password Provider or String password for KeyStore. | yes |
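As a sketch, a TLS-only setup combining the connector toggles with the required KeyStore properties might look like the following (paths, alias, and the environment-variable password provider are assumptions to adapt to your deployment):

```properties
druid.enablePlaintextPort=false
druid.enableTlsPort=true
druid.server.https.keyStorePath=/opt/druid/conf/tls/server.jks
druid.server.https.keyStoreType=jks
druid.server.https.certAlias=druid
druid.server.https.keyStorePassword={"type": "environment", "variable": "KEYSTORE_PASSWORD"}
```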
Authentication and Authorization
| Property | Type | Description | Default |
|---|---|---|---|
druid.auth.authenticatorChain | JSON List | List of Authenticator type names | ["allowAll"] |
druid.escalator.type | String | Type of Escalator for internal communications | noop |
druid.auth.authorizers | JSON List | List of Authorizer type names | ["allowAll"] |
druid.auth.unsecuredPaths | List | Paths where security checks are not performed | [] |
druid.auth.allowUnauthenticatedHttpOptions | Boolean | Skip authentication for HTTP OPTIONS requests | false |
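For illustration, a cluster replacing the default allow-all behavior with basic authentication might combine these properties as below. This sketch assumes the druid-basic-security extension is loaded; the authenticator/authorizer name "basic" and the escalator credentials are examples:

```properties
druid.auth.authenticatorChain=["basic"]
druid.auth.authenticator.basic.type=basic
druid.escalator.type=basic
druid.escalator.internalClientUsername=druid_system
druid.escalator.internalClientPassword={"type": "environment", "variable": "ESCALATOR_PASSWORD"}
druid.auth.authorizers=["basic"]
druid.auth.authorizer.basic.type=basic
```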
Metadata Storage
These properties specify the JDBC connection for the metadata storage:

| Property | Description | Default |
|---|---|---|
druid.metadata.storage.type | Type of metadata storage: mysql, postgresql, or derby | derby |
druid.metadata.storage.connector.connectURI | JDBC URI for the database | none |
druid.metadata.storage.connector.user | Username to connect with | none |
druid.metadata.storage.connector.password | Password Provider or String password | none |
druid.metadata.storage.connector.createTables | Create tables if they don’t exist | true |
druid.metadata.storage.tables.base | Base name for tables | druid |
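For example, pointing the metadata store at MySQL might look like this (the host, database name, and credentials are placeholders, and the mysql-metadata-storage extension must be loaded):

```properties
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://db.example.com:3306/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password={"type": "environment", "variable": "METADATA_STORAGE_PASSWORD"}
```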
Deep Storage
These configurations control how to push and pull segments from deep storage:

| Property | Description | Default |
|---|---|---|
druid.storage.type | Type of deep storage: local, noop, s3, hdfs, cassandra | local |
Local Deep Storage
| Property | Description | Default |
|---|---|---|
druid.storage.storageDirectory | Directory on disk to use as deep storage | /tmp/druid/localStorage |
S3 Deep Storage
Note: Requires the `druid-s3-extensions` extension.
| Property | Description | Default |
|---|---|---|
druid.storage.bucket | S3 bucket name | none |
druid.storage.baseKey | S3 object key prefix | none |
druid.storage.disableAcl | Disable ACL (if false, grants full control to bucket owner) | false |
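A minimal S3 deep storage configuration might look like this (bucket name and prefix are placeholders):

```properties
druid.storage.type=s3
druid.storage.bucket=my-druid-bucket
druid.storage.baseKey=druid/segments
```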
HDFS Deep Storage
Note: Requires the `druid-hdfs-storage` extension.
| Property | Description | Default |
|---|---|---|
druid.storage.storageDirectory | HDFS directory for deep storage | none |
druid.storage.compressionFormat | Compression format: zip or lz4 | zip |
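A corresponding HDFS sketch (the NameNode address and path are placeholders):

```properties
druid.storage.type=hdfs
druid.storage.storageDirectory=hdfs://namenode.example.com:8020/druid/segments
```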
Startup Logging
| Property | Description | Default |
|---|---|---|
druid.startup.logging.logProperties | Log all properties on startup | false |
druid.startup.logging.maskProperties | Masks sensitive properties containing these words | ["password"] |
Request Logging
| Property | Description | Default |
|---|---|---|
druid.request.logging.type | How to log queries: noop, file, emitter, slf4j, filtered, composing, switching | noop |
File Request Logging
| Property | Description | Default |
|---|---|---|
druid.request.logging.dir | Directory to store request logs | none |
druid.request.logging.filePattern | Joda datetime format for each file | "yyyy-MM-dd'.log'" |
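Putting the request-logging properties together, file-based request logging might be configured like this (the log directory is a placeholder):

```properties
druid.request.logging.type=file
druid.request.logging.dir=/var/log/druid/requests
druid.request.logging.filePattern=yyyy-MM-dd'.log'
```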
JavaScript
| Property | Description | Default |
|---|---|---|
druid.javascript.enabled | Enable JavaScript functionality | false |
Service-Specific Configuration
For detailed configuration options specific to each Druid service:
- Coordinator Configuration
- Overlord Configuration
- Broker Configuration
- Historical Configuration
- MiddleManager Configuration
- Indexer Configuration
Related Pages
Extensions
Learn about core and community extensions
Logging
Configure logging for Druid services