HPC: Are launched nodes clones of the interface node? - io

When a SLURM batch job that requests several nodes is submitted, are the launched nodes clones of the interface computer? Do they copy everything from the interface computer? Here, the interface computer is the one I ssh into, store files on and submit jobs from.
In particular, in an I/O context, suppose that I read data from text files named data_N.dat, where N is the process rank. In other words, each process (node) reads its own unique file. When I submit a SLURM job, do all these files get copied to the launched nodes? Or do the nodes read the input files line by line from the interface computer?

Each node acts as if you had just connected to it via ssh from the interface.
If there is no shared filesystem (shared filesystems are very common on HPC clusters, but they are not something SLURM itself provides) and no other shared folders, you'll need to copy the files yourself via ssh or a shared folder.
You can simply start an interactive job in SLURM and check whether you can access your files, as in the sketch below.
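A minimal sketch of both steps, assuming a hypothetical login-node hostname and placeholder paths:

# start an interactive job on a compute node and see whether the data files are visible
srun --nodes=1 --pty bash
ls /path/to/data_*.dat        # succeeds only if this path lives on a shared filesystem

# without a shared filesystem you have to stage the files yourself, e.g. from inside
# the batch script, copying them from the interface (login) node via scp
scp login-node:/path/to/data_*.dat /tmp/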

Related

How to load some files into Spark nodes without duplication?

I have some text files on the master server that need to be processed by a Spark cluster for statistics purposes.
For example, I have 1.txt, 2.txt and 3.txt on the master server in a directory such as /data/. I want to use a Spark cluster to process all of them in one pass. If I use sc.textFile("/data/*.txt") to load all the files, the other nodes in the cluster cannot find them on their local file systems. However, if I use sc.addFile and SparkFiles.get to fetch them on each node, the 3 text files are downloaded to every node and each of them is processed multiple times.
How can I solve this without HDFS? Thanks.
According to the official documentation, you simply have to copy all the files to all the nodes.
http://spark.apache.org/docs/1.2.1/programming-guide.html#external-datasets
If using a path on the local filesystem, the file must also be accessible at the same path on worker nodes. Either copy the file to all workers or use a network-mounted shared file system.
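In practice this just means making /data/*.txt exist at the same path on every worker before calling sc.textFile. A minimal sketch, assuming hypothetical worker hostnames:

# copy the input directory to the same path on each worker (hostnames are placeholders)
for host in worker1 worker2 worker3; do
    rsync -a /data/ $host:/data/
done

After that, sc.textFile("/data/*.txt") loads each file exactly once, and each partition is read by whichever worker the task is scheduled on.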

Output file is getting generated on slave machine in apache spark

I am facing an issue while running a Spark Java program that reads a file, does some manipulation and then generates an output file at a given path.
Everything works fine when the master and slaves are on the same machine, i.e. in standalone-cluster mode.
But the problem started when I deployed the same program on a multi-machine, multi-node cluster setup, where the master is running at x.x.x.102 and the slave is running at x.x.x.104.
The master and the slave have shared their SSH keys and are reachable from each other.
Initially the slave was not able to read the input file; I learned that I need to call sc.addFile() before sc.textFile(), and that solved the issue. But now I see that the output is being generated on the slave machine, in a _temporary folder under the output path, i.e. /tmp/emi/_temporary/0/task-xxxx/part-00000.
In local cluster mode it works fine and generates the output file in /tmp/emi/part-00000.
I came to know that I need to use SparkFiles.get(), but I am not able to understand how and where to use this method.
So far I am using
DataFrame dataObj = ...
dataObj.javaRDD().coalesce(1).saveAsTextFile("file:/tmp/emi");
Can anyone please let me know how to call SparkFiles.get()?
In short, how can I tell the slave to create the output file on the machine where the driver is running?
Please help.
Thanks a lot in advance.
There is nothing unexpected here. Each worker writes its own part of the data separately. Using the file scheme only means that the data is written to a file in the file system that is local from the worker's perspective.
Regarding SparkFiles, it is not applicable in this particular case. SparkFiles can be used to distribute common files to the worker machines, not to deal with the results.
If for some reason you want to perform the writes on the machine used to run the driver code, you'll have to fetch the data to the driver machine first (either with collect, which requires enough memory to fit all the data, or with toLocalIterator, which collects one partition at a time and requires multiple jobs) and use standard tools to write the results to the local file system. In general, though, writing to the driver is not good practice and most of the time it is simply useless.
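A minimal Java sketch of the toLocalIterator approach, reusing dataObj from the question (the output path and the comma separator are just assumptions):

import java.io.PrintWriter;
import java.util.Iterator;
import org.apache.spark.sql.Row;

// pull the rows to the driver one partition at a time and write them with plain Java I/O
Iterator<Row> rows = dataObj.javaRDD().toLocalIterator();
try (PrintWriter out = new PrintWriter("/tmp/emi/part-00000")) {
    while (rows.hasNext()) {
        out.println(rows.next().mkString(","));   // assumes comma-separated text output
    }
}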

csv data processing in spark standalone mode

I have two nodes; let's call them A (192.168.2.100) and B (192.168.2.200).
A is for the master and a worker.
On node A:
./bin/spark-class org.apache.spark.deploy.master.Master
./bin/spark-class org.apache.spark.deploy.worker.Worker spark://192.168.2.100:7077
B is for a worker:
./bin/spark-class org.apache.spark.deploy.worker.Worker spark://192.168.2.100:7077
My app needs to load a csv file to process.
On node A:
./spark-submit --class "myApp" --master spark://192.168.2.100:7077 /spark/app.jar
But it fails with an error saying that the csv file is needed on B.
Is there any way to share this file with node B?
Do I really need YARN or Mesos to do this?
All the data files you want to process should be accessible from all of your workers (and make sure your driver can be reached from your workers).
So you need to put your data files in a place from which the workers can read them; in most situations, we put the data files into HDFS.
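For example, assuming HDFS is running with its namenode on node A (the port and paths are placeholders that depend on your fs.defaultFS setting):

# upload the csv into HDFS once, from node A
hdfs dfs -mkdir -p /data
hdfs dfs -put /spark/data.csv /data/
# then load it in the app from HDFS instead of the local filesystem, e.g.
# sc.textFile("hdfs://192.168.2.100:9000/data/data.csv")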
As stated before, that file has to be available on every node. So you either keep multiple copies, one per node, or you use an external Hadoop-compatible data source (HDFS, Cassandra, Amazon S3). There is another, easier solution: you can use NFS and mount a remote drive/partition/location on every node. This way you don't need multiple copies and you don't have to learn about an external storage system. You can even use sshfs if you want a secure mount point over ssh.
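A minimal sketch of the sshfs variant, assuming node A keeps the csv in a hypothetical /spark/data directory that node B mounts at the same path:

# on node B: mount node A's data directory at the same path over ssh
mkdir -p /spark/data
sshfs user@192.168.2.100:/spark/data /spark/data
# now both workers resolve the same file:/spark/data/... paths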

Using Spark Shell (CLI) in standalone mode on distributed files

I am using Spark 1.3.1 in standalone mode (no YARN/HDFS involved, only Spark) on a cluster with 3 machines. I have a dedicated node for the master (no workers running on it) and 2 separate worker nodes.
The cluster starts healthy, and I am just trying to test my installation by running some simple examples via spark-shell (the CLI, which I started on the master machine): I simply put a file on the local filesystem on the master node (the workers do NOT have a copy of this file) and run:
$SPARKHOME/bin/spark-shell
...
scala> val f = sc.textFile("file:///PATH/TO/LOCAL/FILE/ON/MASTER/FS/file.txt")
scala> f.count()
and it returns the count correctly.
My questions are:
1) This contradicts what the Spark documentation (on using external datasets) says:
"If using a path on the local filesystem, the file must also be accessible at the same path on worker nodes. Either copy the file to all workers or use a network-mounted shared file system."
I am not using NFS and I did not copy the file to the workers, so how does it work? (Is it because spark-shell does not really launch jobs on the cluster and does the computation locally? That would be odd, as I do NOT have a worker running on the node I started the shell on.)
2) If I want to run SQL scripts (in standalone mode) against some large data files (which do not fit on one machine) through Spark's thrift server (the way beeline or hiveserver2 is used in Hive), do I need to put the files on NFS so each worker can see the whole file, or is it possible to split the files into chunks, put each smaller chunk (which would fit on a single machine) on a different worker, and then use multiple paths (comma separated) to pass them all to the submitted queries?
The problem is that you are running spark-shell locally. The default for spark-shell is --master local[*], which runs your code locally on as many cores as you have. If you want to run against your workers, you need to pass the --master parameter specifying the master's entry point, as in the example below. If you want to see the possible options you can use with spark-shell, just type spark-shell --help.
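For example (the master hostname and port are placeholders for your own standalone master URL):

$SPARKHOME/bin/spark-shell --master spark://MASTER-HOST:7077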
As to whether you need to put the file on each server, the short answer is yes. Something like HDFS will split it up across the nodes and the manager will handle the fetching as appropriate. I am not as familiar with NFS and whether it has this capability, though.

how to distribute batch mode jobs to individual boards running linux mounted over NFS

A few boards run a Linux kernel with a common root file system mounted over NFS.
A file in the NFS-mounted file system holds the descriptions of the jobs to be run in batch mode.
Each line in the file holds one job.
How can the jobs be distributed evenly among the boards?
Any sample code or references would be useful.
You might want to install a resource manager/job scheduler such as Slurm or Condor. Such a tool allows you to submit jobs, described by a shell script, to several computers and handles load balancing, input/output management, etc.
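With Slurm, for instance, a job array can map each line of the job file onto one task; a minimal sketch, assuming a hypothetical jobs.txt with one command per line (adjust the array range to the number of lines):

#!/bin/bash
#SBATCH --array=1-100      # one array task per line of jobs.txt
#SBATCH --ntasks=1

# pick the line of jobs.txt that corresponds to this array task and run it
CMD=$(sed -n "${SLURM_ARRAY_TASK_ID}p" jobs.txt)
eval "$CMD"

Submit the script with sbatch and Slurm will spread the array tasks across the boards it manages.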
