In addition to running on the Mesos or YARN cluster managers, Spark also provides a simple standalone deploy mode. You can launch a standalone cluster either manually, by starting a master and workers by hand, or by using our provided launch scripts. It is also possible to run these daemons on a single machine for testing.
Installing Spark Standalone to a Cluster
To install Spark Standalone mode, you simply place a compiled version of Spark on each node of the cluster. You can obtain pre-built versions of Spark with each release or build it yourself.
You can start a standalone master server by executing:
./sbin/start-master.sh
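If you need to bind the master to a particular host or port, the script forwards extra arguments to the master process. A minimal sketch, assuming the usual --host, --port, and --webui-port options and placeholder values:

# Sketch: bind the master to an explicit host and ports (placeholder values).
./sbin/start-master.sh --host 192.168.1.10 --port 7077 --webui-port 8080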
Once started, the master will print out a spark://HOST:PORT URL for itself, which you can use to connect workers to it, or pass as the "master" argument to SparkContext. You can also find this URL on the master's web UI, which is http://localhost:8080 by default.
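For example, you can point an interactive shell at the cluster by passing the master URL on the command line; the host name below is a placeholder for your own master's address:

./bin/spark-shell --master spark://my-master-host:7077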
Similarly, you can start one or more workers and connect them to the master via:
./sbin/start-slave.sh <master-spark-URL>
Once you have started a worker, look at the master's web UI (http://localhost:8080 by default). You should see the new node listed there, along with its number of CPUs and memory (minus one gigabyte left for the OS).
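As a concrete sketch, assuming the master printed spark://my-master-host:7077 (a placeholder host name), a worker is attached with:

./sbin/start-slave.sh spark://my-master-host:7077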
To launch a Spark standalone cluster with the launch scripts, you should create a file called conf/slaves in your Spark directory, which must contain the hostnames of all the machines where you intend to start Spark workers, one per line. If conf/slaves does not exist, the launch scripts default to a single machine (localhost), which is useful for testing. Note that the master machine accesses each of the worker machines via ssh. By default, ssh is run in parallel and requires password-less (using a private key) access to be set up. If you do not have a password-less setup, you can set the environment variable SPARK_SSH_FOREGROUND and serially provide a password for each worker.
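As a sketch, a conf/slaves file for a three-worker cluster simply lists one hostname per line; the names below are placeholders for your own machines:

worker1.example.com
worker2.example.com
worker3.example.com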
Once you’ve set up this file, you can launch or stop your cluster with the following shell scripts, based on Hadoop’s deploy scripts, and available in SPARK_HOME/sbin:
sbin/start-master.sh - Starts a master instance on the machine the script is executed on.
sbin/start-slaves.sh - Starts a slave instance on each machine specified in the conf/slaves file.
sbin/start-slave.sh - Starts a slave instance on the machine the script is executed on.
sbin/start-all.sh - Starts both a master and a number of slaves as described above.
sbin/stop-master.sh - Stops the master that was started via the sbin/start-master.sh script.
sbin/stop-slaves.sh - Stops all slave instances on the machines specified in the conf/slaves file.
sbin/stop-all.sh - Stops both the master and the slaves as described above.
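For instance, a typical bring-up and tear-down of the whole cluster, run on the machine where you want the master to live and assuming conf/slaves is populated as above, looks like:

# Starts the master here plus one worker per conf/slaves entry.
./sbin/start-all.sh
# ... run your applications ...
# Stops every daemon started above.
./sbin/stop-all.sh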
Standalone mode is now ready to use.