Installing Redis 7.2 on CentOS

1. Download Redis

Redis download page: https://redis.io/download/

wget https://github.com/redis/redis/archive/7.2.4.tar.gz -O redis-7.2.4.tar.gz

2. Extract and compile

# 1. Enter the install directory

cd /export/servers/

tar -xzvf redis-7.2.4.tar.gz

# 2. Install build dependencies

yum install gcc make openssl-devel

# 3. Build Redis's bundled local dependencies

cd redis-7.2.4/deps

make hiredis jemalloc linenoise lua

# 4. Run make

cd ../

make && make install

3. Edit the configuration (redis.conf in the source directory)

# Run as a daemon in the background

daemonize yes

# Require a password for access (replace xxx)

requirepass xxx

# Working directory for dumps/backups

dir /export/backup/redis

4. Adjust the memory overcommit limit

vim /etc/sysctl.conf

# Allow the kernel to overcommit memory (recommended for Redis background saves)

vm.overcommit_memory=1

Apply immediately without a reboot: sysctl -w vm.overcommit_memory=1

5. Start automatically at boot
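The source leaves this section blank. Following the same systemd pattern used for ZooKeeper and Kafka later in this document, a minimal unit sketch might look like the following; the paths are assumptions (make install places redis-server in /usr/local/bin, and the config path and password placeholder should be adjusted to your setup):

```ini
[Unit]
Description=Redis
After=network.target

[Service]
Type=forking
# daemonize yes in redis.conf makes the server fork, matching Type=forking
ExecStart=/usr/local/bin/redis-server /export/servers/redis-7.2.4/redis.conf
ExecStop=/usr/local/bin/redis-cli -a xxx shutdown
Restart=always

[Install]
WantedBy=multi-user.target
```

Save it as /etc/systemd/system/redis.service, then run systemctl daemon-reload and systemctl enable redis.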

Installing MinIO on CentOS

1. Install via the package manager

Official site: min.io

# 1. Download the RPM

wget https://dl.min.io/server/minio/release/linux-amd64/archive/minio-20240226093348.0.0-1.x86_64.rpm -O minio.rpm

# 2. Install

sudo dnf install minio.rpm

2. Create the data directory

mkdir -p /export/data/minio

# Create the group

groupadd -r minio-user

# Create the user (system account, no home directory)

useradd -M -r -g minio-user minio-user

# Grant ownership of the data path

chown minio-user:minio-user /export/data/minio

# Start MinIO in the foreground with the default credentials

minio server /export/data/minio --address 0.0.0.0:9000 --console-address 0.0.0.0:9001

3. Run MinIO as a systemd service

Reference: "Create the systemd Service File" in the MinIO docs

3.1 Create the environment file

# Create and edit the service environment file

vim /etc/default/minio

# Grant ownership of the environment file

chown minio-user:minio-user /etc/default/minio

Environment file contents:

# MINIO_ROOT_USER and MINIO_ROOT_PASSWORD sets the root account for the MinIO server.

# This user has unrestricted permissions to perform S3 and administrative API operations on any resource in the deployment.

# Omit to use the default values 'minioadmin:minioadmin'.

# MinIO recommends setting non-default values as a best practice, regardless of environment.

MINIO_ROOT_USER=myminioadmin

MINIO_ROOT_PASSWORD=minio-secret-key-change-me

# MINIO_VOLUMES sets the storage volumes or paths to use for the MinIO server.

# The specified path uses MinIO expansion notation to denote a sequential series of drives between 1 and 4, inclusive.

# All drives or paths included in the expanded drive list must exist *and* be empty or freshly formatted for MinIO to start successfully.

MINIO_VOLUMES="/export/data/minio"

# MINIO_OPTS sets any additional commandline options to pass to the MinIO server.

# For example, `--console-address :9001` sets the MinIO Console listen port

MINIO_OPTS="--address :9000 --console-address :9001"

# MINIO_SERVER_URL sets the hostname of the local machine for use with the MinIO Server.

# MinIO assumes your network control plane can correctly resolve this hostname to the local machine.

# Uncomment the following line and replace the value with the correct hostname for the local machine.

#MINIO_SERVER_URL="http://minio.example.net"
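As a sketch of the expansion notation mentioned in the comments above, a hypothetical four-drive host could set the following (each expanded path must exist, be empty, and be owned by minio-user):

```shell
# {1...4} is MinIO's own expansion syntax (three dots); MinIO expands it, not the shell
MINIO_VOLUMES="/mnt/drive{1...4}/minio"
```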

3.2 Create the systemd service

vim /usr/lib/systemd/system/minio.service

Enter the following configuration:

[Unit]

Description=MinIO

Documentation=https://min.io/docs/minio/linux/index.html

Wants=network-online.target

After=network-online.target

AssertFileIsExecutable=/usr/local/bin/minio

[Service]

WorkingDirectory=/usr/local

User=minio-user

Group=minio-user

ProtectProc=invisible

EnvironmentFile=/etc/default/minio

ExecStartPre=/bin/bash -c "if [ -z \"${MINIO_VOLUMES}\" ]; then echo \"Variable MINIO_VOLUMES not set in /etc/default/minio\"; exit 1; fi"

ExecStart=/usr/local/bin/minio server $MINIO_OPTS $MINIO_VOLUMES

# MinIO RELEASE.2023-05-04T21-44-30Z adds support for Type=notify (https://www.freedesktop.org/software/systemd/man/systemd.service.html#Type=)

# This may improve systemctl setups where other services use `After=minio.server`

# Uncomment the line to enable the functionality

# Type=notify

# Let systemd restart this service always

Restart=always

# Specifies the maximum file descriptor number that can be opened by this process

LimitNOFILE=65536

# Specifies the maximum number of threads this process can create

TasksMax=infinity

# Disable timeout logic and wait until process is stopped

TimeoutStopSec=infinity

SendSIGKILL=no

[Install]

WantedBy=multi-user.target

# Built for ${project.name}-${project.version} (${project.name})

3.3 Start the service

# Reload systemd unit files

systemctl daemon-reload

# Start the service

sudo systemctl start minio.service

# Check service status

systemctl status minio.service

# Enable at boot

systemctl enable minio.service

4. Proxy MinIO on a subdomain through Nginx

Reference: https://min.io/docs/minio/linux/integrations/setup-nginx-proxy-with-minio.html
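A minimal sketch of such a proxy, adapted from the pattern in the MinIO docs and assuming the hypothetical domain minio.example.net with the S3 API on port 9000:

```nginx
server {
    listen 80;
    server_name minio.example.net;

    # Allow arbitrarily large object uploads
    client_max_body_size 0;

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        # MinIO S3 API; the console on 9001 would need its own server block
        proxy_pass http://127.0.0.1:9000;
    }
}
```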

Installing Kafka on CentOS

1. Install ZooKeeper

1.1 Download from: https://downloads.apache.org/zookeeper/

wget https://downloads.apache.org/zookeeper/zookeeper-3.8.4/apache-zookeeper-3.8.4-bin.tar.gz

1.2 Extract and install

tar -xzvf apache-zookeeper-3.8.4-bin.tar.gz

mv apache-zookeeper-3.8.4-bin zookeeper

1.3 Add to environment variables

vim /etc/profile

# Add the following lines

export ZOOKEEPER_HOME=/export/servers/zookeeper

export PATH=$PATH:$ZOOKEEPER_HOME/bin

# Apply the changes

source /etc/profile

1.4 Edit the ZooKeeper configuration

cp /export/servers/zookeeper/conf/zoo_sample.cfg /export/servers/zookeeper/conf/zoo.cfg

vim /export/servers/zookeeper/conf/zoo.cfg

Change the following:

# Data directory

dataDir=/export/data/zookeeper

# the basic time unit in milliseconds used by ZooKeeper. It is used to do heartbeats and the minimum session timeout will be twice the tickTime.

tickTime=2000

# the port to listen for client connections

clientPort=2181

# Append at the end (only needed for multi-node clusters)

# server.1=node2:2888:3888

# server.2=node3:2888:3888

# server.3=node4:2888:3888

1.5 Create the node ID

mkdir -p /export/data/zookeeper

echo "1" > /export/data/zookeeper/myid
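In a multi-node ensemble, each host must write a different ID to myid, matching its server.N entry in zoo.cfg. A sketch of the mapping, using the hypothetical hosts node2-node4 from the commented example above:

```shell
# Print the myid value each host should write (server.1=node2, server.2=node3, server.3=node4)
id=1
for host in node2 node3 node4; do
  echo "$host: echo $id > /export/data/zookeeper/myid"
  id=$((id + 1))
done
```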

1.6 Start ZooKeeper

zkServer.sh start

# Use jps to verify startup; a QuorumPeerMain process should be listed

1.7 Enable at boot

cat > /etc/systemd/system/zookeeper.service << EOF

[Unit]

Description=zookeeper

After=syslog.target network.target

[Service]

Type=forking

# ZooKeeper log directory; can also be set in zkServer.sh

Environment=ZOO_LOG_DIR=/export/Logs/zookeeper

# JDK path; can also be set in zkServer.sh

Environment=JAVA_HOME=/export/servers/jdk1.8.0_401

ExecStart=/export/servers/zookeeper/bin/zkServer.sh start

ExecStop=/export/servers/zookeeper/bin/zkServer.sh stop

Restart=always

User=root

Group=root

[Install]

WantedBy=multi-user.target

EOF

Reload systemd unit files:

systemctl daemon-reload

Enable at boot:

systemctl enable zookeeper

Check ZooKeeper status:

systemctl status zookeeper

1.8 ZooKeeper GUI inspector

https://issues.apache.org/jira/secure/attachment/12436620/ZooInspector.zip

1.9 Configure ZooKeeper SASL authentication

1. Create the authentication file

vim /export/servers/zookeeper/conf/zk_server_jaas.conf

Server {

org.apache.kafka.common.security.plain.PlainLoginModule required

username="user" password="user-password"

user_kafka="kafka-password";

};

Client {

org.apache.kafka.common.security.plain.PlainLoginModule required

username="kafka" password="kafka-password";

};

Here both the Server and Client sections use Kafka's authentication module, so kafka-clients-x.x.x.jar must be copied into ZooKeeper's lib directory (see step 4 below).

2. Create the java.env file

vim /export/servers/zookeeper/conf/java.env

CLIENT_JVMFLAGS="${CLIENT_JVMFLAGS} -Djava.security.auth.login.config=/export/servers/zookeeper/conf/zk_server_jaas.conf"

SERVER_JVMFLAGS="-Djava.security.auth.login.config=/export/servers/zookeeper/conf/zk_server_jaas.conf"

ZooKeeper reads this file at startup.

3. Edit the configuration file (append to zoo.cfg)

authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider

requireClientAuthScheme=sasl

zookeeper.sasl.client=true

allowSaslFailedClients=false

sessionRequireClientSASLAuth=true

After restarting the service, connections must authenticate via SASL.

4. Copy in the Kafka client jar

cp /export/servers/kafka/libs/kafka-clients-3.7.0.jar /export/servers/zookeeper/lib

5. Restart ZooKeeper

systemctl restart zookeeper

2. Install Kafka

2.1 Download from: https://downloads.apache.org/kafka/

wget https://downloads.apache.org/kafka/3.7.0/kafka_2.13-3.7.0.tgz

2.2 Install

tar -xzvf kafka_2.13-3.7.0.tgz

mv kafka_2.13-3.7.0 kafka

2.3 Add to environment variables

vim /etc/profile

JAVA_HOME=/export/servers/jdk1.8.0_401

ZOOKEEPER_HOME=/export/servers/zookeeper

KAFKA_HOME=/export/servers/kafka

PATH=$PATH:$ZOOKEEPER_HOME/bin:$JAVA_HOME/bin:$KAFKA_HOME/bin

export JAVA_HOME ZOOKEEPER_HOME KAFKA_HOME PATH

source /etc/profile

2.4 Edit the configuration file

cp /export/servers/kafka/config/server.properties /export/servers/kafka/config/server.properties.backup

cd /export/servers/kafka/config

vim server.properties

Modify the following:

# Globally unique broker ID; must differ on every node

broker.id=0

# Allow topics to be deleted

delete.topic.enable=true

# Directory where Kafka stores its data (log segments), not application logs

log.dirs=/export/Logs/kafka/logs

# ZooKeeper cluster connection string

zookeeper.connect=hadoop102:2181,hadoop103:2181,hadoop104:2181

# ------------- Defaults; usually no changes needed ------------------

# Number of threads for handling network requests

num.network.threads=3

# Number of threads for disk I/O

num.io.threads=8

# Send buffer size of the socket

socket.send.buffer.bytes=102400

# Receive buffer size of the socket

socket.receive.buffer.bytes=102400

# Maximum size of a socket request

socket.request.max.bytes=104857600

# Default number of partitions per topic on this broker

num.partitions=1

# Number of threads per data directory for log recovery and cleanup

num.recovery.threads.per.data.dir=1

# Maximum time to retain segment files before deletion (hours)

log.retention.hours=168
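Remember that broker.id must be unique on every node. One hypothetical convention is to derive it from the host's numeric suffix:

```shell
# Map hadoop102/103/104 to broker.id 0/1/2 by subtracting the smallest suffix
for h in hadoop102 hadoop103 hadoop104; do
  id=$(( $(echo "$h" | grep -o '[0-9]*$') - 102 ))
  echo "$h -> broker.id=$id"
done
```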

2.5 Start the cluster

cd /export/servers/kafka

# Start

bin/kafka-server-start.sh -daemon config/server.properties

# Stop

bin/kafka-server-stop.sh stop

Cluster start script (excerpt) to start all brokers:

for i in hadoop102 hadoop103 hadoop104
do
  echo "========== $i =========="
  ssh $i '/export/servers/kafka/bin/kafka-server-start.sh -daemon /export/servers/kafka/config/server.properties'
done

2.6 Enable Kafka at boot

cat > /etc/systemd/system/kafka.service << EOF

[Unit]

Description=kafka

After=syslog.target network.target zookeeper.service

[Service]

Type=simple

# JDK path; can also be set in kafka-server-start.sh

Environment=JAVA_HOME=/export/servers/jdk1.8.0_401

ExecStart=/export/servers/kafka/bin/kafka-server-start.sh /export/servers/kafka/config/server.properties

ExecStop=/export/servers/kafka/bin/kafka-server-stop.sh stop

Restart=always

User=root

Group=root

[Install]

WantedBy=multi-user.target

EOF

Reload systemd unit files:

systemctl daemon-reload

# Enable at boot

systemctl enable kafka

# Start the service

systemctl start kafka

# Stop the service

systemctl stop kafka

# Check status

systemctl status kafka

3. Enable SASL authentication

3.1 Add the authentication file

Create a kafka_server_jaas.conf file in the config directory with the following contents:

KafkaServer {

org.apache.kafka.common.security.plain.PlainLoginModule required

username="admin" password="admin-pwd"

user_admin="admin-pwd"

user_producer="producer-pwd"

user_consumer="consumer-pwd";

};

Client {

org.apache.kafka.common.security.plain.PlainLoginModule required

username="kafka"

password="kafka-password";

};

Here, username and password in the KafkaServer section are the credentials brokers use to communicate with each other. user_producer and user_consumer define credentials for producer and consumer clients; you can add more users and passwords as needed.

The entry user_admin="admin-pwd" is essential and must match username and password exactly; otherwise you will see errors such as:

[2024-03-06 10:42:59,070] INFO [Controller id=0, targetBrokerId=0] Node 0 disconnected. (org.apache.kafka.clients.NetworkClient)

[2024-03-06 10:42:59,070] ERROR [Controller id=0, targetBrokerId=0] Connection to node 0 (iZwz94rqv754l5q4mca9nbZ/127.0.0.1:9092) failed authentication due to: Authentication failed: Invalid username or password (org.apache.kafka.clients.NetworkClient)
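Because that mismatch is easy to miss, a quick shell sanity check can catch it before a restart. This is an illustrative sketch: the inline heredoc stands in for your real config/kafka_server_jaas.conf.

```shell
# Extract the broker's own password and the user_admin password, then compare them
jaas=$(mktemp)
cat > "$jaas" << 'EOF'
KafkaServer {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin" password="admin-pwd"
  user_admin="admin-pwd";
};
EOF
broker_pwd=$(grep -o 'password="[^"]*"' "$jaas" | head -1 | cut -d'"' -f2)
admin_pwd=$(grep -o 'user_admin="[^"]*"' "$jaas" | cut -d'"' -f2)
if [ "$broker_pwd" = "$admin_pwd" ]; then echo "JAAS OK"; else echo "MISMATCH: $broker_pwd vs $admin_pwd"; fi
rm -f "$jaas"
```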

3.2 Configure the Kafka server properties

Edit Kafka's server.properties and add or modify the following to enable SASL (Simple Authentication and Security Layer) and configure the listeners:

listeners=SASL_PLAINTEXT://host.name:port

security.inter.broker.protocol=SASL_PLAINTEXT

sasl.mechanism.inter.broker.protocol=PLAIN

sasl.enabled.mechanisms=PLAIN
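As a concrete sketch of these lines for a single broker (the host name is a placeholder to replace; advertised.listeners is an addition here, the address clients will actually connect to):

```properties
listeners=SASL_PLAINTEXT://0.0.0.0:9092
advertised.listeners=SASL_PLAINTEXT://your.host.name:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
```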

3.3 Modify the start script

Edit Kafka's start script (usually kafka-server-start.sh), find the export KAFKA_HEAP_OPTS line, and add a JVM argument after it pointing to your JAAS configuration file:

export KAFKA_HEAP_OPTS="-Djava.security.auth.login.config=/export/servers/kafka/config/kafka_server_jaas.conf $KAFKA_HEAP_OPTS"

Make sure the path points to your actual JAAS configuration file.

3.4 Configure the clients

In producer.properties or consumer.properties:

security.protocol=SASL_PLAINTEXT

sasl.mechanism=PLAIN

sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="producer" \
  password="producer-pwd";

3.5 Restart the Kafka service

systemctl restart kafka
