Prerequisites
Before you follow the steps in this tutorial, download Druid as described in the quickstart using the automatic single-machine configuration and have it running on your local machine. You don’t need to have loaded any data.
Download and start Kafka
Apache Kafka is a high-throughput message bus that works well with Druid. For this tutorial, use Kafka 2.7.0.
Clean up existing Kafka data
If you’re already running Kafka on the machine you’re using for this tutorial, delete or rename the kafka-logs directory in /tmp.
Start Kafka broker
Druid and Kafka both rely on Apache ZooKeeper to coordinate and manage services. Because Druid is already running, Kafka attaches to the Druid ZooKeeper instance when it starts up.
In a production environment where you’re running Druid and Kafka on different machines, start the Kafka ZooKeeper before you start the Kafka broker.
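A sketch of the cleanup and broker-start steps above, assuming Kafka 2.7.0 was extracted into a kafka_2.13-2.7.0 directory (the path is an assumption):

```shell
# Remove leftover Kafka data from any previous run (safe if the directory is absent).
rm -rf /tmp/kafka-logs

# Start the broker with its default config; it attaches to the ZooKeeper
# instance the Druid quickstart already runs. The kafka_2.13-2.7.0 path is an
# assumption about where you extracted Kafka 2.7.0.
./kafka_2.13-2.7.0/bin/kafka-server-start.sh \
  ./kafka_2.13-2.7.0/config/server.properties \
  || echo "broker start failed: check the Kafka path and that ZooKeeper is running"
```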
Load data into Kafka
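A sketch of this section's steps follows. The tutorial's actual sample-data download is not reproduced here; instead, two illustrative kttm-style events (field names taken from the schema discussed later in this tutorial; values invented purely for illustration) stand in for it, and the Kafka path is an assumption:

```shell
# Write two illustrative events to a local file; these are placeholders,
# not the tutorial's real sample data.
cat > sample-events.json <<'EOF'
{"timestamp":"2019-08-25T00:00:00Z","session":"S1","event":{"type":"PercentClear"},"agent":{"type":"Browser"},"geo_ip":{"country":"US"}}
{"timestamp":"2019-08-25T00:00:01Z","session":"S2","event":{"type":"LayerClear"},"agent":{"type":"Browser"},"geo_ip":{"country":"NZ"}}
EOF

# Send each line as one Kafka message to the kttm topic (requires the broker
# from the previous section; the kafka_2.13-2.7.0 path is an assumption).
./kafka_2.13-2.7.0/bin/kafka-console-producer.sh \
  --bootstrap-server localhost:9092 --topic kttm < sample-events.json \
  || echo "produce failed: is the Kafka broker running?"
```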
In this section, you download sample data to the tutorial’s directory and send the data to your Kafka topic.
Load data into Druid
Now that you have data in your Kafka topic, you can use Druid’s Kafka indexing service to ingest the data into Druid. To do this, you can use the Druid console data loader or you can submit a supervisor spec. Follow the steps below to try each method.
Load data with the console data loader
The Druid console data loader presents you with several screens to configure each section of the supervisor spec, then creates an ingestion task to ingest the Kafka data.
Connect to Kafka
Navigate to localhost:8888 and click Load data > Streaming.
Click Apache Kafka and then Connect data. Enter localhost:9092 as the bootstrap server and kttm as the topic, then click Apply and make sure you see data similar to the following:
Click Next: Parse data.
Parse the data
The data loader automatically tries to determine the correct parser for the data. For the sample data, it selects input format json. You can play around with the different options to get a preview of how Druid parses your data.
With the json input format selected, click Next: Parse time. You may need to click Apply first.
Configure timestamp

Druid’s architecture requires that you specify a primary timestamp column. Druid stores the timestamp in the __time column in your Druid datasource. In a production environment, if you don’t have a timestamp in your data, you can select Parse timestamp from: None to use a placeholder value. For this tutorial, the data loader selects the timestamp column in the raw data as the primary time column.
Click Next: … three times to go past the Transform and Filter steps to Configure schema. You don’t need to enter anything in these two steps because applying transforms and filters is out of scope for this tutorial.
Configure schema
In the Configure schema step, you can select data types for the columns and configure dimensions and metrics to ingest into Druid. The console does most of this for you. Notice that the dimensions event, agent, and geo_ip are of the type json.
Click Next: Partition to configure how Druid partitions the data into segments.
Configure partitioning
Select day as the Segment granularity. Since this is a small dataset, you don’t need to make any further adjustments.
Click Next: Tune to fine-tune how Druid ingests data.
Review and submit
The console presents the spec you’ve constructed. You can click the buttons above the spec to make changes in previous steps and see how the changes update the spec. You can also edit the spec directly and see it reflected in the previous steps.
Click Submit to create an ingestion task.
Monitor ingestion
Druid displays the task view with the focus on the newly created supervisor. The task view auto-refreshes, so wait until the supervisor launches a task. The status changes from Pending to Running as Druid starts to ingest data.
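Outside the console, you can also poll the supervisor over HTTP. A sketch, assuming the supervisor ID matches the datasource name kttm-kafka and that the router on localhost:8888 proxies the supervisor API (both assumptions):

```shell
# Fetch the supervisor's status; prints JSON describing its state when Druid
# is running, or the fallback message otherwise.
curl -s http://localhost:8888/druid/indexer/v1/supervisor/kttm-kafka/status \
  || echo "status check failed: is Druid running on localhost:8888?"
```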
Navigate to the Datasources view from the header. When the kttm-kafka datasource appears here, you can query it. See Query your data for details.
Submit a supervisor spec
As an alternative to using the data loader, you can submit a supervisor spec to Druid. You can do this in the console or using the Druid API.
Use the console
To submit a supervisor spec using the Druid console:
Open submission dialog
Click Ingestion in the console, then click the ellipsis icon next to the refresh button and select Submit JSON supervisor.
Paste spec and submit
Paste this spec into the JSON window and click Submit. This starts the supervisor; the supervisor spawns tasks that start listening for incoming data.
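The spec itself does not appear in this text. As a rough sketch only, a Kafka supervisor spec matching the settings chosen earlier (topic kttm, day segment granularity, json-typed dimensions) might look like the following; every field value here is an assumption, not the tutorial's actual spec:

```json
{
  "type": "kafka",
  "spec": {
    "ioConfig": {
      "type": "kafka",
      "consumerProperties": { "bootstrap.servers": "localhost:9092" },
      "topic": "kttm",
      "inputFormat": { "type": "json" },
      "useEarliestOffset": true
    },
    "tuningConfig": { "type": "kafka" },
    "dataSchema": {
      "dataSource": "kttm-kafka",
      "timestampSpec": { "column": "timestamp", "format": "iso" },
      "dimensionsSpec": {
        "dimensions": [
          { "type": "json", "name": "event" },
          { "type": "json", "name": "agent" },
          { "type": "json", "name": "geo_ip" }
        ]
      },
      "granularitySpec": { "type": "uniform", "segmentGranularity": "day", "queryGranularity": "none" }
    }
  }
}
```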
Use the API
You can also use the Druid API to submit a supervisor spec.
Submit the spec
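A sketch of such an API call with curl, assuming the spec is saved as kttm-kafka-supervisor.json in the current directory and that the router on localhost:8888 proxies the supervisor endpoint (both assumptions):

```shell
# POST the spec to the supervisor endpoint; the file check makes the sketch
# fail gracefully when the spec has not been saved yet.
if [ -f kttm-kafka-supervisor.json ]; then
  curl -X POST -H 'Content-Type: application/json' \
    --data @kttm-kafka-supervisor.json \
    http://localhost:8888/druid/indexer/v1/supervisor
else
  echo "kttm-kafka-supervisor.json not found; save the supervisor spec there first"
fi
```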
Submit the spec in the kttm-kafka-supervisor.json file to the supervisor API endpoint. After Druid successfully creates the supervisor, you get a response containing the supervisor ID: {"id":"kttm-kafka-supervisor-api"}.
Query your data
After data is sent to the Kafka stream, Druid ingests it and it is immediately available for querying. Click Query in the Druid console to run SQL queries against the datasource. Since this tutorial ingests a small dataset, you can run the query SELECT * FROM "kttm-kafka" to return all of the data in the dataset you created.
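The same query can also be issued over HTTP through Druid's SQL API; a sketch, assuming the router on localhost:8888 serves the endpoint (the added LIMIT is not part of the tutorial's query and only keeps output small):

```shell
# Run the tutorial query through the SQL endpoint; prints JSON rows when Druid
# is running with the datasource loaded, or the fallback message otherwise.
curl -s -X POST http://localhost:8888/druid/v2/sql \
  -H 'Content-Type: application/json' \
  -d '{"query": "SELECT * FROM \"kttm-kafka\" LIMIT 10"}' \
  || echo "query failed: is Druid running on localhost:8888?"
```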
Check out the Querying data tutorial to run some example queries on the newly loaded data.
Further reading
For more information, see the following topics:
- Apache Kafka ingestion for information on loading data from Kafka streams and maintaining Kafka supervisors for Druid.

