This article documents SuperMap's Spark-based distributed spatial analysis. Before getting to the analysis features, we first set up the distributed Spark environment. Hadoop is a prerequisite for the Spark installation, so Hadoop is installed first, then Spark. (Continuously updated...)

SuperMap: Hadoop 3.3 + Spark 3.3

1. Hadoop
   1.1. Pre-installation Environment Preparation
   1.2. Download and Install
   1.3. Cluster Deployment
      1.3.1 Network Preparation
      1.3.2 Install the JDK
      1.3.3 Install the ZooKeeper Cluster
      1.3.4 Install the Hadoop Cluster
2. Scala
   2.1 Download and Install
3. Spark
   3.1. Single-Node Installation
   3.2. Install the Spark Cluster
4. SuperMap iObjects for Spark
5. SuperMap iServer GPA (Geoprocessing Automation)

1. Hadoop

1.1. Pre-installation Environment Preparation

Java must be installed before installing Hadoop. The Java versions supported by Hadoop are:

- Apache Hadoop 3.3 and later supports Java 8 and Java 11 (runtime only). Compile Hadoop with Java 8; compiling with Java 11 is not supported (HADOOP-16795, Java 11 compile support, is still open).
- Apache Hadoop 3.0.x through 3.2.x supports Java 8 only.
- Apache Hadoop 2.7.x through 2.10.x supports both Java 7 and Java 8.

Java 8 is used as the example here; downloading the JDK package is not described in detail.

# Extract

[root@VM-4-16-centos java]# tar -zxvf jdk-8u351-linux-x64.tar.gz

# Configure the Java environment variables

[root@VM-4-16-centos jdk1.8.0_351]# vim /etc/profile

[root@VM-4-16-centos jdk1.8.0_351]# more /etc/profile | grep export

export PATH USER LOGNAME MAIL HOSTNAME HISTSIZE HISTCONTROL

# Add JAVA_HOME

export JAVA_HOME=/data/java/jdk1.8.0_351

export PATH=$PATH:$JAVA_HOME/bin
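# Reload the profile so the current shell picks up the new variables before verifying

[root@VM-4-16-centos jdk1.8.0_351]# source /etc/profile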

# Verify the installation

[root@VM-4-16-centos jdk1.8.0_351]# java -version

java version "1.8.0_351"

Java(TM) SE Runtime Environment (build 1.8.0_351-b10)

Java HotSpot(TM) 64-Bit Server VM (build 25.351-b10, mixed mode)

1.2. Download and Install

Note: confirm the Spark version you plan to install before downloading, and choose a Hadoop version compatible with it.

Downloads from the official Apache site are rate-limited; a domestic mirror is recommended, e.g. the Tsinghua University mirror: https://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/stable/

# Download

[root@VM-4-16-centos hadoop]# wget https://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/stable/hadoop-3.3.4.tar.gz --no-check-certificate

# Extract

[root@VM-4-16-centos hadoop]# tar -zxvf hadoop-3.3.4.tar.gz

# Configure the Hadoop environment variables

[root@VM-4-16-centos bin]# vim /etc/profile

[root@VM-4-16-centos bin]# more /etc/profile | grep HADOOP

export HADOOP_HOME=/data/hadoop/hadoop-3.3.4

# export HADOOP_INSTALL=$HADOOP_HOME

# export HADOOP_MAPRED_HOME=$HADOOP_HOME

# export HADOOP_HDFS_HOME=$HADOOP_HOME

# export HADOOP_COMMON_HOME=$HADOOP_HOME

# export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop

export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin

# Reload the shell environment so the changes take effect

[root@VM-4-16-centos bin]# source /etc/profile

# Verify the Hadoop installation

[root@VM-4-16-centos bin]# hadoop version

Hadoop 3.3.4

Source code repository https://github.com/apache/hadoop.git -r a585a73c3e02ac62350c136643a5e7f6095a3dbb

Compiled by stevel on 2022-07-29T12:32Z

Compiled with protoc 3.7.1

From source with checksum fb9dd8918a7b8a5b430d61af858f6ec

This command was run using /data/hadoop/hadoop-3.3.4/share/hadoop/common/hadoop-common-3.3.4.jar

1.3. Cluster Deployment

An odd number of servers is recommended for the cluster. To prevent split-brain, ZooKeeper uses the Quorum (majority) scheme by default: a Leader can only be elected when more than half of the nodes vote for it. With 3 servers, the quorum is 2, so the cluster tolerates 1 failed node and can still elect a Leader. With 4 servers, the quorum is (4/2) + 1 = 3, so the fault tolerance is still only 1 node. For clusters with many servers, the Weight (weighted voting) scheme can be used instead.
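In general, quorum = ⌊N/2⌋ + 1 and tolerated failures = N − quorum, which is why an even node count adds a machine without adding fault tolerance:

N = 3: quorum = 2, tolerates 1 failure
N = 4: quorum = 3, tolerates 1 failure
N = 5: quorum = 3, tolerates 2 failures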

1.3.1 Network Preparation

Before deploying the Hadoop cluster, plan the network: each server in the cluster needs hostname mapping, passwordless SSH login, a disabled firewall, and time synchronization, as sketched below. For details on the network configuration, see: https://blog.csdn.net/qq_41134073/article/details/128094438
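A minimal sketch of these four steps on a CentOS node, using the hostnames from the zoo.cfg later in this article (master, sleve1, sleve2); the IP addresses and NTP server here are placeholders:

# /etc/hosts on every node (example IPs)
10.0.4.16 master
10.0.4.17 sleve1
10.0.4.18 sleve2

# Passwordless SSH: generate a key once, then copy it to each peer
ssh-keygen -t rsa
ssh-copy-id root@sleve1
ssh-copy-id root@sleve2

# Disable the firewall
systemctl stop firewalld && systemctl disable firewalld

# Time synchronization (example NTP server)
yum install -y ntpdate && ntpdate ntp.aliyun.com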

1.3.2 Install the JDK

For JDK installation, follow section 1.1 above; install it on all three servers.

1.3.3 Install the ZooKeeper Cluster

ZooKeeper is a distributed, open-source coordination service for distributed applications and a key component of Hadoop and HBase, where it acts as a bridge and a monitoring component. It provides consistency services for distributed applications, including configuration maintenance, naming, distributed synchronization, and group services.

In this cluster, ZooKeeper handles active/standby failover for the NameNode and ResourceManager, and holds configuration and state information for distributed components. It also provides publish/subscribe: znodes created on the ZooKeeper ensemble record location and state changes in the cluster, and ZooKeeper's watcher mechanism notifies clients when a watched znode's content changes, so a client can then fetch the updated content from HDFS and display it. ZooKeeper also monitors the NameNode's status.

ZooKeeper's basic operating flow:

1. Elect a Leader
2. Synchronize data

Many algorithms exist for Leader election, but the criteria to satisfy are the same: the Leader must hold the highest transaction ID (similar to root authority), and a majority of the machines in the cluster must respond to and accept the elected Leader.

Download

Compatibility is not a concern when installing ZooKeeper; just download the latest stable release from the official site: https://zookeeper.apache.org/releases.html

[root@VM-4-16-centos zookeeper]# wget https://dlcdn.apache.org/zookeeper/zookeeper-3.7.1/apache-zookeeper-3.7.1-bin.tar.gz --no-check-certificate

Install

# Extract

[root@VM-4-16-centos zookeeper]# tar -zxvf apache-zookeeper-3.7.1-bin.tar.gz

[root@VM-4-16-centos zookeeper]# mv apache-zookeeper-3.7.1-bin zookeeper

# Configure environment variables

[root@VM-4-16-centos zookeeper]# vim /etc/profile

[root@VM-4-16-centos zookeeper]# more /etc/profile | grep ZOOKEEPER

export ZOOKEEPER_HOME=/data/zookeeper/zookeeper

export PATH=$PATH:$ZOOKEEPER_HOME/bin

# Edit the configuration file

[root@VM-4-16-centos zookeeper]# cd /data/zookeeper/zookeeper/conf/

[root@VM-4-16-centos conf]# ls

configuration.xsl log4j.properties zoo_sample.cfg

[root@VM-4-16-centos conf]# mv zoo_sample.cfg zoo.cfg

[root@VM-4-16-centos conf]# vim zoo.cfg

[root@VM-4-16-centos conf]# more zoo.cfg | grep zookeeper

dataDir=/data/zookeeper/zookeeper/data

dataLogDir=/data/zookeeper/zookeeper/log

[root@VM-4-16-centos zookeeper]# mkdir -p log data

[root@VM-4-16-centos zookeeper]# ls

bin conf data docs lib LICENSE.txt log NOTICE.txt README.md README_packaging.md

# Edit zoo.cfg and add the server entries. The 1 in server.1 is the server ID; it can be any valid number identifying which server this is, and it must also be written to the myid file under dataDir. In 2777:3888, the first port is for cluster communication and the second for Leader election.

[root@VM-4-16-centos zookeeper]# vim conf/zoo.cfg

[root@VM-4-16-centos zookeeper]# more conf/zoo.cfg

# The number of milliseconds of each tick

tickTime=2000

# The number of ticks that the initial

# synchronization phase can take

initLimit=10

# The number of ticks that can pass between

# sending a request and getting an acknowledgement

syncLimit=5

# the directory where the snapshot is stored.

# do not use /tmp for storage, /tmp here is just

# example sakes.

dataDir=/data/zookeeper/zookeeper/data

dataLogDir=/data/zookeeper/zookeeper/log

# the port at which the clients will connect

clientPort=2181

# the maximum number of client connections.

# increase this if you need to handle more clients

#maxClientCnxns=60

#

server.1=sleve1:2777:3888

server.2=sleve2:2777:3888

server.3=master:2777:3888

# Be sure to read the maintenance section of the

# administrator guide before turning on autopurge.

#

# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance

#

# The number of snapshots to retain in dataDir

#autopurge.snapRetainCount=3

# Purge task interval in hours

# Set to "0" to disable auto purge feature

#autopurge.purgeInterval=1

## Metrics Providers

#

# https://prometheus.io Metrics Exporter

#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider

#metricsProvider.httpPort=7000

#metricsProvider.exportJvmInfo=true

# Write this node's server ID into the myid file under dataDir. The ZooKeeper cluster identifies nodes by their myid file, communicates over the configured communication and election ports, and elects a Leader.

[root@VM-4-16-centos zookeeper]# echo "1" > /data/zookeeper/zookeeper/data/myid

[root@VM-4-16-centos zookeeper]# more data/myid

1
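The other two nodes need the same installation and configuration, each with the ID matching its server.N entry in zoo.cfg; a sketch, assuming identical paths on all nodes (the node above is server.1=sleve1, per the myid just written):

# Copy the configured ZooKeeper directory to the peers
[root@VM-4-16-centos zookeeper]# scp -r /data/zookeeper/zookeeper root@sleve2:/data/zookeeper/
[root@VM-4-16-centos zookeeper]# scp -r /data/zookeeper/zookeeper root@master:/data/zookeeper/

# On each peer, overwrite myid with its own ID (server.2=sleve2, server.3=master)
[root@sleve2 ~]# echo "2" > /data/zookeeper/zookeeper/data/myid
[root@master ~]# echo "3" > /data/zookeeper/zookeeper/data/myid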

Start the Cluster

[root@VM-4-16-centos ~]# cd /data/zookeeper/zookeeper/bin/

# Start ZooKeeper (on every node)

[root@VM-4-16-centos bin]# ./zkServer.sh start

# Check ZooKeeper status

[root@VM-4-16-centos bin]# zkServer.sh status
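With all three nodes started, zkServer.sh status should report Mode: leader on one node and Mode: follower on the others. As a quick sanity check (a minimal round-trip, with a hypothetical znode name), the bundled CLI can create and read a znode:

[root@VM-4-16-centos bin]# ./zkCli.sh -server 127.0.0.1:2181
[zk: 127.0.0.1:2181(CONNECTED) 0] create /smoke "ok"
Created /smoke
[zk: 127.0.0.1:2181(CONNECTED) 1] get /smoke
ok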

1.3.4 Install the Hadoop Cluster

2. Scala

Spark is developed in Scala. Note that the Spark package used in section 3.1 below is built for Scala 2.13 and ships with its own Scala runtime; the standalone Scala 2.11.8 installed here is just an example for local development.

2.1 Download and Install

# Download the Scala package:

[root@VM-4-16-centos scala]# wget https://downloads.lightbend.com/scala/2.11.8/scala-2.11.8.tgz

# Extract

[root@VM-4-16-centos scala]# tar -zxvf scala-2.11.8.tgz

# Configure environment variables

[root@VM-4-16-centos scala-2.11.8]# vim /etc/profile

[root@VM-4-16-centos scala-2.11.8]# more /etc/profile | grep SCALA

export SCALA_HOME=/data/scala/scala-2.11.8

export PATH=$PATH:$SCALA_HOME/bin

# Reload the system environment variables

[root@VM-4-16-centos scala-2.11.8]# source /etc/profile

# Run scala to verify the configuration

[root@VM-4-16-centos scala-2.11.8]# scala

Welcome to Scala 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_351).

Type in expressions for evaluation. Or try :help.

scala>
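Evaluating a quick expression in the REPL confirms the install:

scala> 1 + 1
res0: Int = 2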

3. Spark

3.1. Single-Node Installation

Download URL: https://dlcdn.apache.org/spark/spark-3.3.1/spark-3.3.1-bin-hadoop3-scala2.13.tgz

# Download

[root@VM-4-16-centos hadoop]# wget https://dlcdn.apache.org/spark/spark-3.3.1/spark-3.3.1-bin-hadoop3-scala2.13.tgz

# Extract

[root@VM-4-16-centos hadoop]# tar -zxvf spark-3.3.1-bin-hadoop3-scala2.13.tgz

# Rename the extracted directory

[root@VM-4-16-centos hadoop]# mv spark-3.3.1-bin-hadoop3-scala2.13 spark

# Configure environment variables

[root@VM-4-16-centos bin]# vim /etc/profile

[root@VM-4-16-centos bin]# more /etc/profile | grep SPARK

# Add the Spark environment variables

export SPARK_HOME=/data/spark/spark

export PATH=$PATH:$SPARK_HOME/bin

# Reload the system environment variables

[root@VM-4-16-centos bin]# source /etc/profile

[root@VM-4-16-centos spark]# spark-shell

23/01/13 14:47:14 WARN Utils: Your hostname, VM-4-16-centos resolves to a loopback address: 127.0.0.1; using 10.0.4.16 instead (on interface eth0)

23/01/13 14:47:14 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address

Setting default log level to "WARN".

To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).

Welcome to

____ __

/ __/__ ___ _____/ /__

_\ \/ _ \/ _ `/ __/ '_/

/___/ .__/\_,_/_/ /_/\_\ version 3.3.1

/_/

Using Scala version 2.13.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_351)

Type in expressions to have them evaluated.

Type :help for more information.

23/01/13 14:47:22 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Spark context Web UI available at http://10.0.4.16:4040

Spark context available as 'sc' (master = local[*], app id = local-1673592444274).

Spark session available as 'spark'.

scala>
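As a quick smoke test (a minimal example; any small job will do), run a computation on the Spark context:

scala> sc.parallelize(1 to 100).reduce(_ + _)
res0: Int = 5050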

Verify by opening port 4040 in a browser; the Spark context Web UI from the startup log above (http://10.0.4.16:4040) should load.

3.2. Install the Spark Cluster

4. SuperMap iObjects for Spark

SuperMap iObjects for Spark is SuperMap's spatial big-data GIS component package. Built on top of Spark's big-data foundation, it deeply integrates GIS technology and capabilities with big-data technology, serving as a bridge between big data and GIS industry applications.

5. SuperMap iServer GPA (Geoprocessing Automation)

SuperMap iServer has a built-in Processing Modeler for visually building geoprocessing automation models. With the modeler, you can quickly construct big-data spatial data processing models. The Processing Modeler page is reached through iServer's geoprocessing automation service.

The Processing Modeler page looks like this (screenshot omitted).
