Environment preparation before using iServer built-in Spark
After copying the SuperMap iServer package to each cluster member machine, you can configure the environment before use. If you are using the iServer package for Windows (only 64-bit operating systems are supported), you only need to open the required ports.
The recommended hardware configuration for the iServer distributed analysis service includes:
For the distributed cluster to work properly, both Windows and Linux systems need to modify their firewall configuration and open the following ports:
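On Ubuntu, for example, a port can be opened with ufw when run as root; this is only a sketch, and [port] is a placeholder for whichever ports your deployment requires:
ufw allow [port]/tcp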
If you are using the Linux package, in addition to opening the required ports, you also need to modify the Linux host names. The following takes three Ubuntu virtual machines as an example, with these IP addresses:
Master: 192.168.177.136
Worker1: 192.168.177.135
Worker2: 192.168.177.137
The following operations should be performed as root. You can switch from a general user to the root user with the following command:
sudo -i
Enter the current user's password when prompted.
Open the hostname configuration file:
vi /etc/hostname
On the Master node, change the host name to sparkmaster. On the Worker nodes, change the host names to sparkworker1 and sparkworker2 respectively.
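On Ubuntu versions that use systemd, you can also set the host name without editing the file; this is an optional alternative, shown here for the Master node:
hostnamectl set-hostname sparkmaster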
After saving and exiting the editor, view the IP address with the following command:
ifconfig
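If ifconfig is not available (it belongs to the net-tools package, which may not be installed by default on newer Ubuntu releases), you can view the IP address with the ip command instead:
ip addr show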
Set the mapping between the host names and IP addresses. Open the /etc/hosts file:
vi /etc/hosts
Add the following configuration:
192.168.177.136 sparkmaster
192.168.177.135 sparkworker1
192.168.177.137 sparkworker2
The /etc/hosts file should be identical on all three nodes. Save and exit the editor, then restart the virtual machines.
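If you prefer not to edit the file by hand, the same entries can be appended on each node from the command line; this sketch assumes the entries are not already present in /etc/hosts:
cat >> /etc/hosts <<'EOF'
192.168.177.136 sparkmaster
192.168.177.135 sparkworker1
192.168.177.137 sparkworker2
EOF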
You can check whether the host name has been modified successfully with the following command:
hostname
Check whether the mapping between the host names and IP addresses is correct:
ping [hostname]
For example:
ping sparkmaster
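To verify all three nodes in one step, you can also run a simple loop like the following on each machine; this is just a convenience, and pinging each host name individually works equally well:
for host in sparkmaster sparkworker1 sparkworker2; do ping -c 1 "$host"; done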
If you deployed iServer as a WAR package, please confirm that the environment configuration of the WAR package is correct. After the WAR package is deployed successfully, you also need to configure the environment for the distributed analysis services by performing the following operations on each platform:
1. Unzip the support compressed package; it contains the hadoop and spark folders.
2. Create a new folder named support in the directory one level above the iserver folder, and put the hadoop and spark folders in it (see the example below).
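The following is only an illustrative sketch of this step, assuming iServer is installed under /opt/SuperMap and that the unzipped hadoop and spark folders are in the current directory; adjust the paths to your actual installation:
mkdir -p /opt/SuperMap/support
mv hadoop spark /opt/SuperMap/support/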
1. Add HADOOP_HOME to the environment variables. It can be set to the iServer built-in hadoop directory. Also add %HADOOP_HOME%\bin to the system PATH. (There is no need to set the hadoop environment variables on Linux.)
2. Add SPARK_HOME to the environment variables. It can be set to the iServer built-in spark directory. Also add %SPARK_HOME%\bin to the system PATH.
3. Add SPARK_SCALA_VERSION to the environment variables. The value is 2.11.8.
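On Linux, for example, these variables can be made persistent by appending export statements to /etc/profile or ~/.bashrc; the spark path below is only illustrative and should point to the spark folder inside your support directory (HADOOP_HOME is not needed on Linux, as noted above):
export SPARK_HOME=/opt/SuperMap/support/spark
export PATH=$SPARK_HOME/bin:$PATH
export SPARK_SCALA_VERSION=2.11.8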