The web console is hosted by the Router service and accessed at
http://<ROUTER_IP>:<ROUTER_PORT>.

Prerequisites
The web console requires the following configurations (enabled by default):

Router Management Proxy
Enable the Router’s management proxy.
Druid SQL
Enable Druid SQL for Broker processes.
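Both prerequisites are controlled by runtime properties. A sketch of the relevant entries, assuming the standard property names (these are the defaults, so normally no change is needed):

```properties
# Router runtime.properties: enable the management proxy
druid.router.managementProxy.enabled=true

# Broker runtime.properties: enable Druid SQL
druid.sql.enable=true
```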
Accessing the Web Console
Navigate to the Router’s address in your web browser (for example, http://localhost:8888 with the default Router port).

Home View
The Home view provides a high-level overview of your cluster with clickable cards for quick navigation:
- Status: Druid version and loaded extensions
- Datasources: All datasources and their sizes
- Segments: Segment distribution and status
- Supervisors: Streaming ingestion supervisors
- Tasks: Ingestion and processing tasks
- Services: Cluster nodes and their status
- Lookups: Query-time lookup tables

Query View
The enhanced Query view supports multi-stage query execution with SQL-based ingestion.

Key Features
- Multi-Tab Interface
- Schema Browser
- External Data

Multi-Tab Interface
- Open multiple query tabs simultaneously
- Each tab maintains its own query context
- Click + to open a new tab
- Right-click tab names to rename, duplicate, or close
Query Execution
Select Engine
Choose the query engine from the dropdown:
- Auto (recommended): Automatically selects the appropriate engine for the query
- SQL Native: Traditional Druid SQL
- SQL MSQ Task: Multi-stage query engine for ingestion and complex queries
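To illustrate the kind of statement the MSQ task engine runs, here is a sketch of a SQL-based ingestion query; the datasource name, input URI, and column list are hypothetical:

```sql
-- Ingest an external JSON file into a hypothetical "wikipedia" datasource
REPLACE INTO "wikipedia" OVERWRITE ALL
SELECT
  TIME_PARSE("timestamp") AS "__time",
  "page",
  "user"
FROM TABLE(
  EXTERN(
    '{"type": "http", "uris": ["https://example.com/wikipedia.json.gz"]}',
    '{"type": "json"}',
    '[{"name": "timestamp", "type": "string"}, {"name": "page", "type": "string"}, {"name": "user", "type": "string"}]'
  )
)
PARTITIONED BY DAY
```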
Live Query Reports
When using the MSQ engine, the console displays real-time query progress.

Progress Indicators
- Overall Progress: Main progress bar showing total completion
- Current Stage: Progress of the currently executing stage
- Stage Details: Expand each stage to see:
  - Worker statistics
  - Partition-level metrics
  - Rows processed and output
  - Memory usage

Additional Query Tools
Access advanced features from the More (…) menu:

Explain SQL Query
View the logical plan with EXPLAIN PLAN FOR
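For example, prefixing a statement with EXPLAIN PLAN FOR returns the plan instead of results (the datasource name here is hypothetical):

```sql
-- Show the logical plan without executing the query
EXPLAIN PLAN FOR
SELECT "page", COUNT(*) AS "edits"
FROM "wikipedia"
GROUP BY "page"
```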
Query History
Access previously executed queries
Convert Ingestion Spec
Convert native batch specs to SQL
Attach from Task ID
Open existing query task in new tab
Data Loader
The data loader provides a wizard-based approach to creating ingestion specs.

Supported Data Sources
Batch
- Local files: Upload files directly
- HTTP: Ingest from HTTP(S) URLs
- S3: Amazon S3 buckets
- Google Cloud Storage: GCS buckets
- Azure: Azure Blob Storage

Streaming
- Apache Kafka
- Amazon Kinesis
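The wizard ultimately produces a native ingestion spec. A trimmed sketch of what it might generate for an HTTP batch source (the URI is hypothetical, and tuning and schema sections are omitted):

```json
{
  "type": "index_parallel",
  "spec": {
    "ioConfig": {
      "type": "index_parallel",
      "inputSource": {
        "type": "http",
        "uris": ["https://example.com/data.json.gz"]
      },
      "inputFormat": { "type": "json" }
    }
  }
}
```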

Data Loader Steps

Datasources View
Manage all datasources in your cluster from a centralized view.Features
Datasource List
View all datasources with:
- Total size
- Segment count
- Availability percentage
- Row count estimates
Segment Timeline
Toggle Show segment timeline to visualize:
- Time-based segment distribution
- Segment gaps
- Replication status
Retention Rules
Configure rules to control:
- Data retention periods
- Segment replication
- Historical tier assignments
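Rules are evaluated top to bottom, and the first matching rule wins. A sketch of a rule chain that keeps one month of data on two replicas and drops everything older (the tier name shown is Druid's default):

```json
[
  {
    "type": "loadByPeriod",
    "period": "P1M",
    "tieredReplicants": { "_default_tier": 2 }
  },
  { "type": "dropForever" }
]
```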
Compaction
Enable and configure:
- Automatic compaction
- Compaction schedules
- Segment size optimization
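A sketch of an auto-compaction configuration, assuming the Coordinator's compaction config fields (the datasource name is hypothetical):

```json
{
  "dataSource": "wikipedia",
  "skipOffsetFromLatest": "P1D",
  "tuningConfig": {
    "partitionsSpec": {
      "type": "dynamic",
      "maxRowsPerSegment": 5000000
    }
  }
}
```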

Actions
Right-click any datasource to:
- Query: Open in Query view
- Edit retention rules: Modify load/drop rules
- Configure compaction: Set up automatic compaction
- Drop datasource: Permanently delete
- Mark as unused: Soft delete (can be restored)
Segments View
View detailed information about all segments in the cluster.
Segment Information
- Columns
- Filtering
- Details

Columns
- Datasource: Parent datasource
- Start/End: Time range covered
- Version: Segment version timestamp
- Partition: Partition number
- Size: Segment size in bytes
- Num Rows: Row count
Supervisors View
Monitor and manage streaming ingestion supervisors.
Supervisor Management
Detailed Reports
Click the magnifying glass icon to view:
- Task allocation
- Lag metrics (Kafka/Kinesis)
- Partition assignments
- Recent errors

Tasks View
Track all ingestion and processing tasks.
Task Organization
Group tasks by:
- Type: Index, compact, kill, etc.
- Datasource: Group by target datasource
- Status: Running, pending, success, failed
Task Actions
Submit Task
Click ⋮ → Submit JSON task to manually submit task specs
Task Details
Click the magnifying glass to view:
- Task payload
- Status details
- Logs
- Created/updated timestamps

Services View
Monitor the health and status of all cluster nodes.
Node Information
- By Type
- By Tier
- Metrics

By Type
Group services by role:
- Coordinators
- Overlords
- Brokers
- Historicals
- MiddleManagers/Indexers
- Routers
Health Monitoring
Healthy Service Indicators:
- Status shows as green/active
- Current size matches max size (Historicals)
- No error messages in details
Lookups View
Manage query-time lookup tables for dimension value enrichment.
Creating Lookups
Using Lookups in Queries
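In Druid SQL, a lookup can be applied with the LOOKUP function. A sketch, where the lookup name, datasource, and column are hypothetical:

```sql
-- Replace country codes with names via a lookup named 'country_names'
SELECT
  LOOKUP("country_code", 'country_names') AS "country",
  COUNT(*) AS "events"
FROM "events"
GROUP BY 1
```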
Tips and Tricks
View Underlying SQL
Most views are powered by Druid SQL. Click ⋮ → View SQL query for table to see and customize the underlying query.
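These views typically read from the sys metadata tables, which you can also query directly. For example, using columns from the sys.segments schema:

```sql
-- Segment count and total size per datasource
SELECT
  "datasource",
  COUNT(*) AS "num_segments",
  SUM("size") AS "total_size"
FROM sys.segments
GROUP BY 1
ORDER BY "total_size" DESC
```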
Keyboard Shortcuts
In the Query view:
- Ctrl/Cmd + Enter: Run query
- Ctrl/Cmd + K: Open command palette
- Ctrl/Cmd + /: Toggle comments
Export Results
Query results can be exported as:
- CSV
- TSV
- JSON
Dark Mode
Toggle dark mode from the user menu in the top-right corner.