The Druid web console provides a comprehensive UI for managing your cluster, loading data, executing queries, and monitoring system health.
The web console is hosted by the Router service and accessed at http://<ROUTER_IP>:<ROUTER_PORT>.

Prerequisites

The web console requires the following configurations (enabled by default):
1. Router Management Proxy: enable the Router's management proxy.
2. Druid SQL: enable Druid SQL for Broker processes.

```properties
# Typically already enabled in default configurations
druid.router.managementProxy.enabled=true
druid.sql.enable=true
```
Security Note: Without proper authentication and authorization configured, any user accessing the web console has the same privileges as the OS user running Druid. Always configure security for production environments.

Accessing the Web Console

Navigate to the Router’s address in your web browser:
http://<ROUTER_IP>:8888
If you’ve enabled authentication, you’ll be prompted to log in with your credentials.

Home View

The Home view provides a high-level overview of your cluster with clickable cards for quick navigation.

  • Status: Druid version and loaded extensions
  • Datasources: all datasources and their sizes
  • Segments: segment distribution and status
  • Supervisors: streaming ingestion supervisors
  • Tasks: ingestion and processing tasks
  • Services: cluster nodes and their status
  • Lookups: query-time lookup tables
Web console home view

Query View

The enhanced Query view supports multi-stage query execution with SQL-based ingestion.

Query Tabs

  • Open multiple query tabs simultaneously
  • Each tab maintains its own query context
  • Click + to open a new tab
  • Right-click tab names to rename, duplicate, or close

Query Execution

1. Write Your Query

```sql
SELECT
  __time,
  COUNT(*) AS events
FROM wikipedia
WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' HOUR
GROUP BY 1
ORDER BY 1 DESC
```

2. Select Engine

Choose the query engine from the dropdown:
  • Auto (recommended): automatically selects the appropriate engine
  • SQL Native: traditional Druid SQL
  • SQL MSQ Task: multi-stage query engine for ingestion and complex queries

3. Run or Preview

  • Run: execute the full query
  • Preview: for INSERT/REPLACE queries, runs the query without inserting and with a LIMIT applied
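Preview is most useful before a SQL-based ingestion. As a hedged illustration (the target datasource `wikipedia_rollup` and the `channel` column are made-up examples, not from this guide), a REPLACE query that Preview would dry-run looks like:

```sql
-- Hypothetical SQL-based ingestion; Preview runs the SELECT with a LIMIT
-- instead of writing segments.
REPLACE INTO wikipedia_rollup
OVERWRITE ALL
SELECT
  TIME_FLOOR(__time, 'PT1H') AS __time,
  channel,
  COUNT(*) AS events
FROM wikipedia
GROUP BY 1, 2
PARTITIONED BY DAY
```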

Live Query Reports

When using the MSQ engine, the console displays real-time query progress:
  • Overall Progress: Main progress bar showing total completion
  • Current Stage: Progress of the currently executing stage
  • Stage Details: Expand each stage to see:
    • Worker statistics
    • Partition-level metrics
    • Rows processed and output
    • Memory usage
Multi-stage query view

Additional Query Tools

Access advanced features from the More (…) menu:

  • Explain SQL Query: view the logical plan with EXPLAIN PLAN FOR
  • Query History: access previously executed queries
  • Convert Ingestion Spec: convert native batch specs to SQL
  • Attach from Task ID: open an existing query task in a new tab
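The Explain tool corresponds to prefixing a query with EXPLAIN PLAN FOR, which you can also type directly in a query tab (the `channel` column here is an illustrative example):

```sql
EXPLAIN PLAN FOR
SELECT channel, COUNT(*) AS events
FROM wikipedia
GROUP BY 1
```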

Data Loader

The data loader provides a wizard-based approach to creating ingestion specs.

Supported Data Sources

  • Local files: Upload files directly
  • HTTP: Ingest from HTTP(S) URLs
  • S3: Amazon S3 buckets
  • Google Cloud Storage: GCS buckets
  • Azure: Azure Blob Storage
Data loader source selection

Data Loader Steps

1. Connect: select a data source and provide connection details
2. Parse: configure the data format and parser settings
3. Parse Time: define the primary timestamp column
4. Transform: apply column transformations
5. Filter: set up row filtering rules
6. Configure Schema: define dimensions and metrics
7. Partition: configure segment partitioning
8. Tune: adjust performance parameters
9. Publish: review and submit the ingestion spec
The wizard generates incremental previews of your data at each step, helping you validate configuration before submission.
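The wizard's output is a JSON ingestion spec. As a rough sketch only (the input URL, datasource name, and column names below are illustrative examples, not values from this guide), a minimal parallel batch spec assembled in Python might look like:

```python
import json

# Illustrative sketch of the JSON spec the data loader wizard assembles.
# The URL, datasource, and columns are made-up examples.
spec = {
    "type": "index_parallel",
    "spec": {
        "ioConfig": {
            "type": "index_parallel",
            "inputSource": {
                "type": "http",
                "uris": ["https://example.com/wikipedia.json.gz"],
            },
            "inputFormat": {"type": "json"},
        },
        "dataSchema": {
            "dataSource": "wikipedia",
            "timestampSpec": {"column": "timestamp", "format": "iso"},
            "dimensionsSpec": {"dimensions": ["channel", "page", "user"]},
            "granularitySpec": {
                "segmentGranularity": "day",
                "queryGranularity": "none",
            },
        },
        "tuningConfig": {"type": "index_parallel"},
    },
}

print(json.dumps(spec, indent=2))
```

Each wizard step edits one part of this structure (Connect fills ioConfig, Parse Time fills timestampSpec, Configure Schema fills dimensionsSpec, and so on).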
Data loader configuration

Datasources View

Manage all datasources in your cluster from a centralized view.

Features

Datasource List

View all datasources with:
  • Total size
  • Segment count
  • Availability percentage
  • Row count estimates

Segment Timeline

Toggle Show segment timeline to visualize:
  • Time-based segment distribution
  • Segment gaps
  • Replication status

Retention Rules

Configure rules to control:
  • Data retention periods
  • Segment replication
  • Historical tier assignments

Compaction

Enable and configure:
  • Automatic compaction
  • Compaction schedules
  • Segment size optimization
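Retention rules are an ordered JSON chain; the Coordinator applies the first rule that matches a segment. A minimal sketch (the period, replica count, and tier name are illustrative examples you would set in the retention dialog):

```python
import json

# Example rule chain: keep the last 30 days on the default tier with
# 2 replicas, drop everything older. Values are illustrative.
rules = [
    {
        "type": "loadByPeriod",
        "period": "P30D",
        "tieredReplicants": {"_default_tier": 2},
    },
    {"type": "dropForever"},
]

print(json.dumps(rules, indent=2))
```

Order matters: putting dropForever first would drop everything, since no later rule is ever consulted.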
Datasources view

Actions

Right-click any datasource to:
  • Query: Open in Query view
  • Edit retention rules: Modify load/drop rules
  • Configure compaction: Set up automatic compaction
  • Drop datasource: Permanently delete
  • Mark as unused: Soft delete (can be restored)
Dropping a datasource permanently deletes all segments and metadata. This action cannot be undone.

Segments View

View detailed information about all segments in the cluster.
Segments view

Segment Information

  • Datasource: Parent datasource
  • Start/End: Time range covered
  • Version: Segment version timestamp
  • Partition: Partition number
  • Size: Segment size in bytes
  • Num Rows: Row count
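The same columns are exposed through the sys.segments system table, so you can inspect segments with SQL from the Query view (the datasource filter is an illustrative example):

```sql
SELECT "datasource", "start", "end", "version",
       partition_num, "size", num_rows
FROM sys.segments
WHERE "datasource" = 'wikipedia'
ORDER BY "start" DESC
LIMIT 10
```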

Supervisors View

Monitor and manage streaming ingestion supervisors.
Supervisors view

Supervisor Management

1. View Status: monitor supervisor health and task distribution
2. Submit Supervisor: click Submit JSON supervisor to create a new supervisor
3. Control Supervisors: right-click a supervisor to:
   • Suspend: pause ingestion temporarily
   • Resume: restart a suspended supervisor
   • Reset: clear stored offsets and restart
   • Terminate: stop and remove the supervisor
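Submit JSON supervisor expects a supervisor spec. As a hedged sketch of its overall shape (the topic, broker address, datasource, and columns are made-up examples), a minimal Kafka supervisor might look like:

```python
import json

# Illustrative Kafka supervisor spec; topic, broker address, and columns
# are made-up examples, not values from this guide.
supervisor = {
    "type": "kafka",
    "spec": {
        "ioConfig": {
            "type": "kafka",
            "topic": "events",
            "consumerProperties": {"bootstrap.servers": "kafka:9092"},
        },
        "dataSchema": {
            "dataSource": "events",
            "timestampSpec": {"column": "ts", "format": "iso"},
            "dimensionsSpec": {"dimensions": ["user", "action"]},
            "granularitySpec": {"segmentGranularity": "hour"},
        },
        "tuningConfig": {"type": "kafka"},
    },
}

print(json.dumps(supervisor, indent=2))
```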

Detailed Reports

Click the magnifying glass icon to view:
  • Task allocation
  • Lag metrics (Kafka/Kinesis)
  • Partition assignments
  • Recent errors
Supervisor status details

Tasks View

Track all ingestion and processing tasks.
Tasks view

Task Organization

Group tasks by:
  • Type: Index, compact, kill, etc.
  • Datasource: Group by target datasource
  • Status: Running, pending, success, failed
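These groupings mirror columns in the sys.tasks system table, which you can query directly, for example:

```sql
SELECT task_id, "type", "datasource", status, created_time
FROM sys.tasks
WHERE status = 'RUNNING'
ORDER BY created_time DESC
```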

Task Actions

Submit Task

Click Submit JSON task to manually submit task specs

Task Details

Click the magnifying glass to view:
  • Task payload
  • Status details
  • Logs
  • Created/updated timestamps
Task status details

Services View

Monitor the health and status of all cluster nodes.
Services view

Node Information

Group services by role:
  • Coordinators
  • Overlords
  • Brokers
  • Historicals
  • MiddleManagers/Indexers
  • Routers

Health Monitoring

Healthy Service Indicators:
  • Status shows as green/active
  • Current size matches max size (Historicals)
  • No error messages in details
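You can check the same signals with SQL against the sys.servers system table; for Historicals, curr_size approaching max_size indicates a fully loaded node:

```sql
SELECT server, server_type, tier, curr_size, max_size
FROM sys.servers
WHERE server_type = 'historical'
```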

Lookups View

Manage query-time lookup tables for dimension value enrichment.
Lookups view

Creating Lookups

1. Open Lookups: access from the Home view or the top navigation menu
2. Create Lookup: click Add lookup and configure:
   • Name
   • Version
   • Type (map, JDBC, etc.)
   • Lookup data
3. Apply: lookups are loaded into Broker and Historical memory
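A simple map lookup is defined as a versioned JSON object. As a sketch (the lookup contents and version string are illustrative examples):

```python
import json

# Sketch of a simple map-type lookup definition; the version string and
# country codes are illustrative examples.
country_names = {
    "version": "v1",
    "lookupExtractorFactory": {
        "type": "map",
        "map": {"US": "United States", "DE": "Germany", "JP": "Japan"},
    },
}

print(json.dumps(country_names, indent=2))
```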

Using Lookups in Queries

```sql
SELECT
  LOOKUP(country_code, 'country_names') AS country,
  COUNT(*) AS events
FROM events
GROUP BY 1
```

Tips and Tricks

Most views are powered by Druid SQL. Click View SQL query for table to see and customize the underlying query.
In the Query view:
  • Ctrl/Cmd + Enter: Run query
  • Ctrl/Cmd + K: Open command palette
  • Ctrl/Cmd + /: Toggle comments
Query results can be exported as:
  • CSV
  • TSV
  • JSON
Toggle dark mode from the user menu in the top-right corner.
