Contents

一、ZooKeeper Setup: 1. Download; 2. Extract; 3. Create directories; 4. Adjust the configuration; 5. Configure myid; 6. Open firewall ports; 7. Start and verify ZooKeeper

二、Kafka Cluster Setup: 2.1 Download the software; 2.2 Extract; 2.3 Configure; 2.4 Firewall; 2.5 Start Kafka

三、Testing and Verification: 3.1 Create a topic; 3.2 Produce messages; 3.3 Consume messages

Preparation:

Upload the software to servers 192.168.105.125, 192.168.105.129, and 192.168.105.130:

kafka_2.12-2.2.0.tgz and zookeeper-3.4.8.tar.gz

一、ZooKeeper Setup

1. Download

Run the following on each of nodes 125, 129, and 130:

mkdir /app

cd /app

wget https://archive.apache.org/dist/zookeeper/zookeeper-3.4.8/zookeeper-3.4.8.tar.gz
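Optionally, verify the tarball before extracting. A minimal sketch; compare the printed hash manually against the checksum published alongside the release on archive.apache.org:

# print the SHA-1 of the downloaded archive for manual comparison
sha1sum zookeeper-3.4.8.tar.gz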

2. Extract

tar -zxvf zookeeper-3.4.8.tar.gz

3. Create directories

cd /app/zookeeper-3.4.8

mkdir data logs

4. Adjust the configuration

cp conf/zoo_sample.cfg conf/zoo.cfg

vim conf/zoo.cfg

First change: set dataDir

dataDir=/app/zookeeper-3.4.8/data

Second change: add the cluster member list

server.1=192.168.105.125:2288:3388

server.2=192.168.105.129:2288:3388

server.3=192.168.105.130:2288:3388
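With both changes applied, zoo.cfg should look roughly like the sketch below (the remaining lines keep the defaults from zoo_sample.cfg; 2288 is the quorum port and 3388 the leader-election port):

tickTime=2000

initLimit=10

syncLimit=5

clientPort=2181

dataDir=/app/zookeeper-3.4.8/data

# optional: put transaction logs in the logs directory created in step 3
# dataLogDir=/app/zookeeper-3.4.8/logs

server.1=192.168.105.125:2288:3388

server.2=192.168.105.129:2288:3388

server.3=192.168.105.130:2288:3388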

5. Configure myid

Node 125

vim /app/zookeeper-3.4.8/data/myid

Content: 1

Node 129

vim /app/zookeeper-3.4.8/data/myid

Content: 2

Node 130

vim /app/zookeeper-3.4.8/data/myid

Content: 3
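Equivalently, the myid files can be written non-interactively; a minimal sketch, one line per node:

# run on 125
echo 1 > /app/zookeeper-3.4.8/data/myid

# run on 129
echo 2 > /app/zookeeper-3.4.8/data/myid

# run on 130
echo 3 > /app/zookeeper-3.4.8/data/myid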

6. Open firewall ports

Run on each of nodes 125, 129, and 130:

firewall-cmd --zone=public --add-port=2181/tcp --permanent

firewall-cmd --zone=public --add-port=2288/tcp --permanent

firewall-cmd --zone=public --add-port=3388/tcp --permanent

firewall-cmd --reload

firewall-cmd --list-ports

7. Start and verify ZooKeeper

/app/zookeeper-3.4.8/bin/zkServer.sh start /app/zookeeper-3.4.8/conf/zoo.cfg

/app/zookeeper-3.4.8/bin/zkServer.sh status /app/zookeeper-3.4.8/conf/zoo.cfg
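With all three nodes started, zkServer.sh status should report Mode: leader on exactly one node and Mode: follower on the other two. A quick remote check, assuming nc is installed (the stat four-letter command is available by default in 3.4.8):

echo stat | nc 192.168.105.125 2181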

二、Kafka Cluster Setup

Run the following on each of nodes 125, 129, and 130.

2.1. Download the software

cd /app

wget https://archive.apache.org/dist/kafka/2.2.0/kafka_2.12-2.2.0.tgz

2.2. Extract

tar -zxvf kafka_2.12-2.2.0.tgz

2.3. Configure

cd /app/kafka_2.12-2.2.0/config

cp server.properties server.properties.bak

Node 125

vim server.properties

Edit as follows:

broker.id=1

listeners=PLAINTEXT://192.168.105.125:9092

advertised.listeners=PLAINTEXT://192.168.105.125:9092

port=9092

host.name=192.168.105.125

num.network.threads=3

num.io.threads=8

socket.send.buffer.bytes=102400

socket.receive.buffer.bytes=102400

socket.request.max.bytes=104857600

log.dirs=/app/kafka/data

num.partitions=1

num.recovery.threads.per.data.dir=1

offsets.topic.replication.factor=3

transaction.state.log.replication.factor=1

transaction.state.log.min.isr=1

log.retention.hours=24

log.retention.bytes=1073741824

log.segment.bytes=1073741824

log.retention.check.interval.ms=300000

zookeeper.connect=192.168.105.125:2181,192.168.105.129:2181,192.168.105.130:2181

zookeeper.connection.timeout.ms=6000

auto.create.topics.enable = false

delete.topic.enable=true

message.max.bytes=52428880

log.cleanup.policy=delete

log.segment.delete.delay.ms=0

group.initial.rebalance.delay.ms=0

Node 129

vim server.properties

Edit as follows:

broker.id=2

listeners=PLAINTEXT://192.168.105.129:9092

advertised.listeners=PLAINTEXT://192.168.105.129:9092

port=9092

host.name=192.168.105.129

num.network.threads=3

num.io.threads=8

socket.send.buffer.bytes=102400

socket.receive.buffer.bytes=102400

socket.request.max.bytes=104857600

log.dirs=/app/kafka/data

num.partitions=1

num.recovery.threads.per.data.dir=1

offsets.topic.replication.factor=3

transaction.state.log.replication.factor=1

transaction.state.log.min.isr=1

log.retention.hours=24

log.retention.bytes=1073741824

log.segment.bytes=1073741824

log.retention.check.interval.ms=300000

zookeeper.connect=192.168.105.125:2181,192.168.105.129:2181,192.168.105.130:2181

zookeeper.connection.timeout.ms=6000

auto.create.topics.enable = false

delete.topic.enable=true

message.max.bytes=52428880

log.cleanup.policy=delete

log.segment.delete.delay.ms=0

group.initial.rebalance.delay.ms=0

Node 130

vim server.properties

Edit as follows:

broker.id=3

listeners=PLAINTEXT://192.168.105.130:9092

advertised.listeners=PLAINTEXT://192.168.105.130:9092

port=9092

host.name=192.168.105.130

num.network.threads=3

num.io.threads=8

socket.send.buffer.bytes=102400

socket.receive.buffer.bytes=102400

socket.request.max.bytes=104857600

log.dirs=/app/kafka/data

num.partitions=1

num.recovery.threads.per.data.dir=1

offsets.topic.replication.factor=3

transaction.state.log.replication.factor=1

transaction.state.log.min.isr=1

log.retention.hours=24

log.retention.bytes=1073741824

log.segment.bytes=1073741824

log.retention.check.interval.ms=300000

zookeeper.connect=192.168.105.125:2181,192.168.105.129:2181,192.168.105.130:2181

zookeeper.connection.timeout.ms=6000

auto.create.topics.enable = false

delete.topic.enable=true

message.max.bytes=52428880

log.cleanup.policy=delete

log.segment.delete.delay.ms=0

group.initial.rebalance.delay.ms=0
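The three server.properties files differ only in broker.id, listeners, advertised.listeners, and host.name; everything else is identical. The broker will normally create the log.dirs directory itself, but creating it up front on every node avoids permission surprises; a minimal sketch:

mkdir -p /app/kafka/data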


2.4. Firewall

Run on each of nodes 125, 129, and 130:


firewall-cmd --zone=public --add-port=9092/tcp --permanent

firewall-cmd --reload

firewall-cmd --list-ports

2.5. Start Kafka

Start Kafka (first run, in the foreground, so any configuration errors are visible):

cd /app/kafka_2.12-2.2.0

./bin/kafka-server-start.sh config/server.properties

Start Kafka (subsequent runs, in the background as a daemon):

cd /app/kafka_2.12-2.2.0

./bin/kafka-server-start.sh -daemon config/server.properties

(Kafka startup output on nodes 125, 129, and 130.)
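To confirm that all three brokers registered with ZooKeeper, list the /brokers/ids znode; a minimal check using the ZooKeeper CLI from section 一 (the expected output is [1, 2, 3]):

/app/zookeeper-3.4.8/bin/zkCli.sh -server 192.168.105.125:2181 ls /brokers/ids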

三、Testing and Verification

3.1. Create a topic

Create a topic with the Kafka CLI (run from /app/kafka_2.12-2.2.0):

bin/kafka-topics.sh --create --zookeeper 192.168.105.125:2181,192.168.105.129:2181,192.168.105.130:2181 --replication-factor 1 --partitions 1 --topic pis-business
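To confirm the topic exists and see which broker owns its partition, describe it; a minimal check:

bin/kafka-topics.sh --describe --zookeeper 192.168.105.125:2181 --topic pis-business

With --replication-factor 1 the test topic lives on a single broker; to exercise replication across the cluster you could recreate it with --replication-factor 3.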

3.2. Produce messages

bin/kafka-console-producer.sh --broker-list 192.168.105.125:9092 --topic pis-business

3.3. Consume messages

bin/kafka-console-consumer.sh --bootstrap-server 192.168.105.125:9092 --from-beginning --topic pis-business
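Type a few lines in the producer terminal; they should appear in the consumer terminal. Note that because auto.create.topics.enable=false, the topic must have been created in step 3.1 first. The console consumer joins an auto-generated consumer group, which you can list; a minimal sketch:

bin/kafka-consumer-groups.sh --bootstrap-server 192.168.105.125:9092 --list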
