Specifying Parallel Environment on Google Compute Engine using Elasticluster - multithreading

I recently created a Grid Engine cluster on Compute Engine using Elasticluster (http://googlegenomics.readthedocs.org/en/latest/use_cases/setup_gridengine_cluster_on_compute_engine/index.html).
I was wondering what the appropriate command is to run shared-memory multithreaded batch jobs on a cluster of Compute Engine virtual machines running Grid Engine.
In other words, what is the name (i.e. pe_name) of the Grid Engine parallel environment?
Let's say I want to run a job requesting 4 CPUs on 1 node; what would be the right qsub command?
So far I have tried the following commands:
qsub -cwd -l h_vmem=800G -pe smp 6 run.sh
Unable to run job: job rejected: the requested parallel environment "smp" does not exist.
qsub -cwd -l h_vmem=800G -pe omp 6 run.sh
Unable to run job: job rejected: the requested parallel environment "omp" does not exist.
Thank you for your help!

I don't believe that Elasticluster's Ansible playbook includes a parallel environment. You can see the main configuration run on the master here:
https://github.com/gc3-uzh-ch/elasticluster/blob/master/elasticluster/providers/ansible-playbooks/roles/gridengine/tasks/master.yml
I believe you can simply connect to the master and issue the "add parallel environment" command:
$ qconf -ap smp
and write a configuration file like:
pe_name smp
slots 9999
user_lists NONE
xuser_lists NONE
start_proc_args /bin/true
stop_proc_args /bin/true
allocation_rule $fill_up
control_slaves FALSE
job_is_first_task FALSE
urgency_slots min
accounting_summary FALSE
and then modify the queue configuration for all.q:
$ qconf -mq all.q
...
pe_list make smp
...
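Once the parallel environment is attached to all.q, a quick sanity check and resubmission would look like this on the master (reusing the values from the original question):
$ qconf -spl                                 # list defined parallel environments; "smp" should now appear
$ qconf -sp smp                              # show the PE definition to double-check slots and allocation_rule
$ qsub -cwd -l h_vmem=800G -pe smp 6 run.sh  # resubmit the job against the new PE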
I would also suggest filing an issue with Elasticluster here:
https://github.com/gc3-uzh-ch/elasticluster/issues
I would expect that someone has already done this in a fork of Elasticluster and may be able to contribute a pull request back to the main repository.
Hope that helps.
-Matt

Related

Databricks init scripts not working sometimes

OK, this is very strange. I have some init scripts that I would like to run when a cluster starts.
The cluster has the init script, which is in a file (in DBFS), basically this:
dbfs:/databricks/init-scripts/custom-cert.sh
Now, when I create the init script like this, it works (no SSL errors for my endpoints). Also, the event logs for the cluster show the duration as 1 second for the init script:
dbutils.fs.put("/databricks/init-scripts/custom-cert.sh", """#!/bin/bash
cp /dbfs/orgcertificates/orgcerts.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
echo "export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt" >> /databricks/spark/conf/spark-env.sh
""")
However, if I just put the init script in a bash script and upload it to DBFS through a pipeline, the init script does not do anything. It executes, as per the event log, but the execution duration is 0 sec.
I have the sh script in a file named
custom-cert.sh
with the same contents as above, i.e.
#!/bin/bash
cp /dbfs/orgcertificates/orgcerts.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
echo "export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt"
but when I check /usr/local/share/ca-certificates/, it does not contain orgcerts.crt (the file copied from /dbfs/orgcertificates/), even though the cluster init script has run.
Also, I have compared the contents of the init script in both cases and, at least to the naked eye, I can't figure out any difference, i.e.
%sh
cat /dbfs/databricks/init-scripts/custom-cert.sh
shows the same contents in both scenarios. What is the problem in the second case?
EDIT: I read a bit more about init scripts and found that the logs of init scripts are written here:
%sh
ls /databricks/init_scripts/
Looking at the err file in that location, it seems there is an error
sudo: update-ca-certificates
: command not found
Why is it that update-ca-certificates is found in the first case but not when I put the same script in an sh file and upload it to DBFS (instead of executing dbutils.fs.put within a notebook)?
EDIT 2: In response to the first answer: after running the command
dbutils.fs.put("/databricks/init-scripts/custom-cert.sh", """#!/bin/bash
cp /dbfs/orgcertificates/orgcerts.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
echo "export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt" >> /databricks/spark/conf/spark-env.sh
""")
the output is the file custom-cert.sh; I then restart the cluster with the init script location set to dbfs:/databricks/init-scripts/custom-cert.sh, and it works. So the init script is reading essentially the same content (the generated sh file). Why can't it read it if I do not use dbutils.fs.put but instead put the contents in a bash file and upload it during the CI/CD process?
As we are aware, an init script is a shell script that runs during startup of each cluster node, before the Apache Spark driver or worker JVM starts.
Case 2: when you run a bash command using the %sh magic command, you are executing it only on the local driver node, so the worker nodes are not able to access the result.
Case 1: by using dbutils.fs.put (the %fs/DBFS route) you copy the file to the DBFS root, so the worker nodes, along with the driver node, can access the path.
Ref: https://docs.databricks.com/data/databricks-file-system.html#summary-table-and-diagram
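As a rough illustration of the distinction this answer draws (the paths below are made up for the example): a file written by a %sh cell lands only on the driver's local disk, while a file written under the /dbfs mount (or via dbutils.fs.put) is stored in DBFS and is visible from every node.
%sh
# Hypothetical example: this file exists only on the driver's local filesystem.
echo "driver only" > /tmp/driver-local.txt
# Writing under the /dbfs FUSE mount stores the file in DBFS instead,
# so other nodes (and other clusters) can read it back at the same path.
echo "cluster wide" > /dbfs/tmp/dbfs-backed.txt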
It seems that the observations I made in the comments section of my question are the way to go.
I now create the init script using a databricks job that I run during the CI/CD pipeline from Azure DevOps.
The notebook has the commands
dbutils.fs.rm("/databricks/init-scripts/custom-cert.sh")
dbutils.fs.put("/databricks/init-scripts/custom-cert.sh", """#!/bin/bash
cp /dbfs/internal-certificates/certs.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
echo "export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt" >> /databricks/spark/conf/spark-env.sh
""")
I then create a Databricks job (pointing to this notebook); the cluster is a job cluster, which is just temporary. Of course, in my case, even this job creation is automated using a PowerShell script.
I then call this Databricks job in the release pipeline, again using a PowerShell script.
This creates the file
/databricks/init-scripts/custom-cert.sh
I then use this file in any other cluster that accesses my org's endpoints (without certificate errors).
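For reference, a cluster then picks the script up through its init_scripts setting; a minimal sketch of that fragment of the cluster spec, assuming the standard Clusters API field names (the rest of the spec is omitted):
"init_scripts": [
  { "dbfs": { "destination": "dbfs:/databricks/init-scripts/custom-cert.sh" } }
]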
I still do not know (or understand) why the same script file can't simply be part of a repo and uploaded during the release process (instead of having this Databricks job call a notebook). I would love to know the reason. The other answer on this question does not hold true, as you can see: the init script is created by a job cluster and then accessed from another cluster as its init script.
It simply boils down to how the init script gets created.
But it gets my job done; I'm sharing it in case it helps someone get their job done too.
I have raised a support case, though, to understand the reason.

Packer failed when executed on Gitlab-runner

I have a Packer file to deploy CentOS 7 using the vsphere-iso builder. It works fine when executed directly on a Linux server, but when I try to run the same Packer file using a gitlab-runner it fails because it does not wait until the OS is installed. It fails after waiting for 1 minute, but if I run the packer command with -on-error=run-cleanup-provisioner the OS install finishes successfully, so clearly the issue is that Packer is just not waiting.
2021/07/20 12:02:40 packer.io plugin: [INFO] Waiting for IP, up to total timeout: 30m0s, settle timeout: 5m0s
==> vsphere-iso.autogenerated_1: Waiting for IP...
==> vsphere-iso.autogenerated_1: Clear boot order...
==> vsphere-iso.autogenerated_1: Power off VM...
==> vsphere-iso.autogenerated_1: Destroying VM...
2021/07/20 12:03:12 [INFO] (telemetry) ending
==> Wait completed after 1 minute 2 seconds
2021/07/20 12:03:12 machine readable: error-count []string{"1"}
==> Some builds didn't complete successfully and had errors:
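For reference, the manual run that succeeds is roughly the following (the template file name here is a placeholder):
$ packer build -on-error=run-cleanup-provisioner centos7.pkr.hcl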
My boot command is the following as I do not use DHCP.
boot_command = ["<up><tab> text inst.ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/vmware-ks.cfg ip=10.118.12.117::10.118.12.1:255.255.255.0:{{ .Name }}.localhost:ens192:none<enter><wait>"]
I have tested using options like ssh_host, ip_wait_address, ip_settle_timeout, ssh_wait_timeout, pause_before_connecting but nothing seems to work.
As I said, the same Packer pkr.hcl file works fine if I run it manually on a regular Linux machine, but not on my gitlab-runner, which is installed directly on my GitLab server (yes, I know this is not best practice, but I only use the runner for this task).
Packer versions 1.7.2 and 1.7.3 tested, gitlab-runner 14.0.0 and 14.0.1 tested.
I managed to make it work by changing the last <wait> in my boot command to <wait5m>. This gives the OS enough time to be installed and the VM to reboot.
New boot command:
boot_command = ["<up><tab> text inst.ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/vmware-ks.cfg ip=10.118.12.117::10.118.12.1:255.255.255.0:{{ .Name }}.localhost:ens192:none<enter><wait5m>"]
All the other wait options from packer are no longer needed with this boot command.
Doing some tests, I also managed to make it work by creating a firewall drop rule for the VM just after the kickstart file was loaded and removing the rule once the OS was installed. Definitely, Packer is just ignoring all of its native wait mechanisms when running on the gitlab-runner.
EDIT: After having the same issue with my Windows templates, I tested using a different gitlab-runner installed on a different server, instead of the one on the GitLab server itself, and it worked perfectly with my initial configuration for both Windows and CentOS.

Proper way for keep process running on a container

I'm not sure whether this could be considered a duplicate, since it's a problem for a specific case.
Currently, I have created a docker-outside-of-docker image for handling my Jenkins agent that performs auto restarts without using supervisor as a solution (due to its lack of Python 3.7 support). Since I'm using openjdk:slim as the base image and I don't want to install any additional dependencies, I opted to compensate for the lack of tools like lsof and ps for checking whether the process is running by writing the started process's PID to a file, which is then used to check whether the process exists via the path /proc/<pid>/status. This currently works, and it is the main reason for creating this solution for handling the auto start of the agents.
But my question is: is this the best or most appropriate approach?
Please find the implementation below:
#!/bin/bash
set -e

agent_runner() {
    while :
    do
        # If there is no PID file yet, or the recorded PID is no longer alive,
        # download the agent jar again and start a new agent process.
        if [ ! -f "/proc/$(cat /tmp/agent.pid 2>/dev/null)/status" ]
        then
            curl "$JNLP_AGENT_DOWNLOAD_URL" -o agent.jar
            java \
                -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.HEARTBEAT_CHECK_INTERVAL=300 \
                -Dhttps.protocols=TLSv1.2 \
                -jar agent.jar \
                -jnlpUrl "$JNLP_AGENT_URL" \
                -secret "$JENKINS_SECRET" \
                -workDir "$JENKINS_WORKDIR" &
            echo $! > /tmp/agent.pid
        fi
        sleep 10
    done
}

while :
do
    # Reachability check against the Jenkins master; $TARGET is expected to be
    # host/port so this expands to bash's /dev/tcp/<host>/<port> pseudo-device.
    if (: < "/dev/tcp/$TARGET") 2>/dev/null
    then
        echo "Starting Agent"
        agent_runner
    else
        echo "Jenkins master is offline, waiting...."
    fi
    sleep 10
done
Link for the repository: https://github.com/thcp/jenkins-agent-dod
If the main process in the container dies, you should let the container die with it.
Docker and the various layers above it have functionality to restart whole containers: there is a docker run --restart option for the basic Docker CLI, an equivalent Docker Compose option, and restarting dying containers after some backoff is the default behavior for Kubernetes pods.
So, if you just let the container die on its own, you get out-of-the-box support from the container engine to restart it, without adding any special support to your image; just set the CMD to the thing you actually need the container to do and go. This approach also has the benefit that if you detect your environment has become unstable (“I depend on a database and it’s unreachable”), the process can choose to abort itself and be restarted later, when hopefully the environment has improved.
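A minimal sketch of that approach, assuming a hypothetical image name and placeholder environment values: make the java agent invocation the image's CMD and let the engine restart the container whenever it exits.
$ docker run -d --restart unless-stopped \
    -e JNLP_AGENT_URL=<url> \
    -e JENKINS_SECRET=<secret> \
    -e JENKINS_WORKDIR=/home/jenkins \
    <your-agent-image>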

How do I run Linux tasks without Docker (on the underlying system)?

The task image_resource property is marked as optional in the documentation, but GNU/Linux tasks fail without it.
Also, the docs for the type property of image_resource say:
Required. The type of the resource. Usually docker-image
But I couldn't find any information about other supported types.
How can I run tasks on the underlying system without any container technology, like in my Windows and macOS workers?
In Concourse, you really are not supposed to do anything outside of Docker. That is one of the main features. Concourse runs in Docker containers and starts new containers for each build. If you want to run one or more Linux commands in sh or bash in the container, you can try something like the following for your task config:
- task: linux
  config:
    platform: linux
    image_resource:
      type: docker-image
      source: {repository: ubuntu, tag: '18.04'}
    run:
      dir: /<path-to-dir>
      path: sh
      user: root
      args:
        - -exc
        - |
          echo "Running in Linux!"
          ls
          scp <you#your-host-machine:file> .
          telnet <your-host-machine>
          <whatever>
          ...
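If you want to iterate on such a task outside a pipeline, you can run it directly with the fly CLI; a sketch, assuming the config above is saved as task.yml and your target is named target:
$ fly -t target execute -c task.yml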

Spark is not started automatically on the AWS cluster - how to launch it?

A Spark cluster has been launched using the ec2/spark-ec2 script from within the branch-1.4 codebase.
I can log in to it, and it reports 1 master, 2 slaves:
11:35:10/sparkup2 $ec2/spark-ec2 -i ~/.ssh/hwspark14.pem login hwspark14
Searching for existing cluster hwspark14 in region us-east-1...
Found 1 master, 2 slaves.
Logging into master ec2-54-83-81-165.compute-1.amazonaws.com...
Warning: Permanently added 'ec2-54-83-81-165.compute-1.amazonaws.com,54.83.81.165' (RSA) to the list of known hosts.
Last login: Tue Jun 23 20:44:05 2015 from c-73-222-32-165.hsd1.ca.comcast.net
__| __|_ )
_| ( / Amazon Linux AMI
___|\___|___|
https://aws.amazon.com/amazon-linux-ami/2013.03-release-notes/
Amazon Linux version 2015.03 is available.
But... where are they? The only Java processes running are:
Hadoop: NameNode and SecondaryNode
Tachyon: Master and Worker
It is a surprise to me that the Spark master and workers are not started. When looking for the scripts to start them manually, it is not at all obvious where they are located.
Hints on
why spark did not start automatically
and
where the launch scripts live
would be appreciated. In the meantime I will do an exhaustive
find / -name start-all.sh
And... survey says:
[root@ip-10-151-25-94 etc]$ find / -name start-all.sh
/root/persistent-hdfs/bin/start-all.sh
/root/ephemeral-hdfs/bin/start-all.sh
Which means to me that Spark was not even installed?
Update: I wonder, is this a bug in 1.4.0? I ran the same set of commands in 1.3.1 and the Spark cluster came up.
There was a bug in the Spark 1.4.0 provisioning script, which spark-ec2 clones from the GitHub repository (https://github.com/mesos/spark-ec2/), with similar symptoms: Apache Spark didn't start. The reason was that the provisioning script failed to download the Spark archive.
Check whether Spark was downloaded and uncompressed on the master host:
ls -altr /root/spark
There should be several directories there. From your description, it looks like the /root/spark/sbin/start-all.sh script is missing.
Also check the contents of the log file; it should have information about the uncompressing step:
cat /tmp/spark-ec2_spark.log
Another thing to try is to run spark-ec2 with a different provisioning-script branch by adding --spark-ec2-git-branch branch-1.4 to the spark-ec2 command-line arguments.
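A sketch of what that relaunch could look like (the identity file and cluster name reuse the values from the question; the key-pair name is assumed to match the .pem file, and any other options you normally pass stay the same):
$ ec2/spark-ec2 -k hwspark14 -i ~/.ssh/hwspark14.pem --spark-ec2-git-branch branch-1.4 launch hwspark14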
Also, when you run spark-ec2, save all the output and check whether there is anything suspicious:
spark-ec2 <...args...> 2>&1 | tee start.log
