             node1                node2                node3
Role         Server               Producer             Consumer
Mem          2048 MB              2048 MB              2048 MB
OS           Ubuntu 14.04 64-bit  Ubuntu 14.04 64-bit  Ubuntu 14.04 64-bit
Public IP    -                    -
Internal IP

Apache Kafka: kafka_2.11-[version]



  • node1: Kafka cluster
  • node2: Producer
  • node3: Consumer



Download kafka_2.11-[version].tgz and extract it on each node.

scp kafka_2.11-[version].tgz [user]@[node]:[path]
tar xvzf kafka_2.11-[version].tgz
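
If you are copying the archive to every node from one machine, a simple loop saves some typing. This is only a sketch: it assumes node1, node2 and node3 resolve to your machines (otherwise substitute their IP addresses), and [user], [path] and [version] are your own values.

# Copy the Kafka archive to each node and extract it there
for node in node1 node2 node3; do
  scp kafka_2.11-[version].tgz [user]@$node:[path]
  ssh [user]@$node "cd [path] && tar xvzf kafka_2.11-[version].tgz"
done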

Start Kafka Server


If you don’t have ZooKeeper running in your cluster, don’t forget to launch ZooKeeper before starting the Kafka server.

# In the root directory of Kafka
./bin/zookeeper-server-start.sh config/zookeeper.properties
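
Before starting Kafka, you can confirm that ZooKeeper is up by checking its default port (2181):

# ZooKeeper listens on 2181 by default
netstat -npl | grep 2181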

Launch the Kafka server on node1

./bin/kafka-server-start.sh ./config/server.properties
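
Because the producer (node2) and consumer (node3) connect to this broker over the network, config/server.properties must advertise an address they can reach. A minimal sketch, assuming an older Kafka release that still uses host.name / advertised.host.name (newer releases use listeners / advertised.listeners instead), with [node1-ip] as a placeholder for node1’s address:

# config/server.properties (relevant lines only)
broker.id=0
port=9092
# The address node2 and node3 will use to reach this broker
advertised.host.name=[node1-ip]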

You can check that the Kafka server is listening on its default port (9092)

netstat -npl | grep 9092
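
The console producer below relies on the topic being created automatically (auto.create.topics.enable defaults to true). If you prefer to create it explicitly, you can do so on node1 first; a sketch, using the same haohan topic that the producer and consumer use later:

# Create the topic with a single partition and no replication
bin/kafka-topics.sh --create --zookeeper [node1-ip]:2181 --replication-factor 1 --partitions 1 --topic haohan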

Setup Producer

I developed a simple log-generator web service with Node.js; you can clone the repo for a quick demo.

git clone https://github.com/HaohanStacks/haohan_data_producer.git generator && cd generator
npm i && npm start

For more details about this repo, please check the documentation here: https://github.com/HaohanStacks/haohan_data_producer.

On node2, logs will be generated in ./generator/logs/access.log. Now we need to send this log content to the Kafka server.

tail -n 0 -f ../generator/logs/access.log | bin/kafka-console-producer.sh --broker-list [node1-ip]:9092 --topic haohan

Pay attention to the paths of the log file and the Kafka binaries. [node1-ip] is node1’s IP address, and haohan is the topic name for this producer. Now logs are being pushed to the Kafka server.

Setup Consumer

On node3, go to the root directory of Kafka

bin/kafka-console-consumer.sh --zookeeper [node1-ip]:2181 --topic haohan --from-beginning

2181 is the default ZooKeeper port. Pay attention to the topic name; it must be the same as the one you used when setting up the producer on node2.
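
Note that the --zookeeper flag belongs to the old console consumer; if your Kafka release ships the new consumer, the equivalent command connects to the broker directly:

# Newer Kafka releases use --bootstrap-server instead of --zookeeper
bin/kafka-console-consumer.sh --bootstrap-server [node1-ip]:9092 --topic haohan --from-beginning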


Open another terminal and send an HTTP request to your web service (the web service is launched by Node.js and listens on port 3000 by default).
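
For example, assuming the generator responds on its root path:

# Hit the web service on node2 so it appends a new line to access.log
curl http://[node2-ip]:3000/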


And voilà, on node3 you will see the received log text. :D