Unable to use preloaded uWSGI cheaper algorithm - python-3.x

I'm unable to use uWSGI's busyness cheaper algorithm, although it appears to be preloaded in my installation. Does it still merit an explicit install?
If so, where can I download the standalone plugin package from?
Any help is appreciated, thank you.
uWSGI Information
uwsgi --version
2.0.19.1
uwsgi --cheaper-algos-list
*** uWSGI loaded cheaper algorithms ***
busyness
spare
backlog
manual
--- end of cheaper algorithms list ---
uWSGI Configuration File
[uwsgi]
module = myapp:app
socket = /path/to/myapp.sock
stats = /path/to/mystats.sock
chmod-socket = 766
socket-timeout = 60 ; Set internal sockets timeout
logto = /path/to/logs/%n.log
log-maxsize = 5000000 ; Max size before rotating file
disable-logging = true ; Disable built-in logging
log-4xx = true ; But log 4xx
log-5xx = true ; And 5xx
strict = true ; Enable strict mode (placeholder cannot be used)
master = true ; Enable master process
enable-threads = true ; Enable threads
vacuum = true ; Delete sockets during shutdown
single-interpreter = true ; Do not use multiple interpreters (single web app per uWSGI process)
die-on-term = true ; Shutdown when receiving SIGTERM (default is respawn)
need-app = true ; Exit if no app can be loaded
harakiri = 300 ; Forcefully kill hung workers after desired time in seconds
max-requests = 1000 ; Restart workers after this many requests
max-worker-lifetime = 3600 ; Restart workers after this many seconds
reload-on-rss = 1024 ; Restart workers after this much resident memory (this is per worker)
worker-reload-mercy = 60 ; How long to wait for workers to reload before forcefully killing them
cheaper-algo = busyness ; Specify the cheaper algorithm here
processes = 16 ; Maximum number of workers allowed
threads = 4 ; Number of threads per worker allowed
thunder-lock = true ; Specify thunderlock activation
cheaper = 8 ; Number of workers to keep idle
cheaper-initial = 8 ; Workers created at startup
cheaper-step = 4 ; Number of workers to spawn at once
cheaper-overload = 30 ; Check the busyness of the workers at this interval (in seconds)
uWSGI Log
*** Operational MODE: preforking+threaded ***
WSGI app 0 (mountpoint='') ready in 0 seconds on interpreter 0x15e8f10 pid: 14479 (default app)
spawned uWSGI master process (pid: 14479)
spawned uWSGI worker 1 (pid: 14511, cores: 4)
spawned uWSGI worker 2 (pid: 14512, cores: 4)
spawned uWSGI worker 3 (pid: 14516, cores: 4)
spawned uWSGI worker 4 (pid: 14520, cores: 4)
spawned uWSGI worker 5 (pid: 14524, cores: 4)
spawned uWSGI worker 6 (pid: 14528, cores: 4)
spawned uWSGI worker 7 (pid: 14529, cores: 4)
spawned uWSGI worker 8 (pid: 14533, cores: 4)
THIS LINE --> unable to find requested cheaper algorithm, falling back to spare <-- THIS LINE
OS Information
Red Hat Enterprise Linux Server release 7.7 (Maipo)
Other Details
uWSGI was installed using pip

The inline comments were messing things up for me - the format below, with no comments on the option lines, fixed the issue
########################################################
# #
# Cheaper Algo and Worker Count #
# #
########################################################
cheaper-algo = busyness
processes = 16
threads = 4
thunder-lock = true
cheaper = 8
cheaper-initial = 8
cheaper-step = 4
cheaper-overload = 30
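For what it's worth, one way to confirm that the busyness algorithm is actually driving the worker count is to read the stats socket configured above. This is not part of the original answer, just a minimal sketch: the socket path is the same placeholder as the stats option, and the exact JSON keys may vary between uWSGI versions.
import json
import socket

STATS_SOCKET = "/path/to/mystats.sock"  # same placeholder path as the `stats =` option above

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect(STATS_SOCKET)
    chunks = []
    while True:
        data = s.recv(4096)
        if not data:
            break
        chunks.append(data)

stats = json.loads(b"".join(chunks))
# Each entry in "workers" reports a status such as "idle", "busy" or "cheap";
# with the busyness algorithm active, workers above the cheaper floor should
# show up as "cheap" while load is low.
for worker in stats.get("workers", []):
    print(worker.get("id"), worker.get("status"))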

Related

Problems configuring LivyOperator in Airflow

For LivyOperator we set the following parameters:
polling_interval=60
retries_num_timeout=100
We set it up according to this documentation: https://airflow.apache.org/docs/apache-airflow-providers-apache-livy/stable/_api/airflow/providers/apache/livy/operators/livy/index.html
But with this configuration, after 100 * 60 seconds = 6000 seconds = 1 hour 40 minutes, the Livy session is interrupted, the operator fails, and the load is aborted. Is there any way to resolve this inconsistency on the Airflow/Livy side?
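As a rough illustration of the arithmetic above (using the parameter names from this question, which may not match the provider's exact API), the effective deadline is the polling interval multiplied by the number of polls, so the product has to exceed the longest expected batch duration:
# Hypothetical numbers taken from the question; adjust to the longest expected batch.
polling_interval = 60        # seconds between Livy status polls
retries_num_timeout = 100    # number of polls before the operator gives up

max_wait_s = polling_interval * retries_num_timeout
print(max_wait_s, "s =", max_wait_s / 60, "min")   # 6000 s = 100 min

# A batch that can run up to 6 hours would need at least
# 6 * 3600 / polling_interval = 360 polls before the deadline is hit.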

snakemake allocates memory twice

I am noticing that all my rules request memory twice, once at a lower maximum than what I requested (mem_mb) and then at what I actually requested (mem_gb). If I run the rules as localrules, they do run faster. How can I make sure the default settings do not interfere?
resources: mem_mb=100, disk_mb=8620, tmpdir=/tmp/pop071.54835, partition=h24, qos=normal, mem_gb=100, time=120:00:00
The rules are as follows:
rule bwa_mem2_mem:
    input:
        R1 = "data/results/qc/{species}.{population}.{individual}_1.fq.gz",
        R2 = "data/results/qc/{species}.{population}.{individual}_2.fq.gz",
        R1_unp = "data/results/qc/{species}.{population}.{individual}_1_unp.fq.gz",
        R2_unp = "data/results/qc/{species}.{population}.{individual}_2_unp.fq.gz",
        idx = "data/results/genome/genome",
        ref = "data/results/genome/genome.fa"
    output:
        bam = "data/results/mapped_reads/{species}.{population}.{individual}.bam",
    log:
        bwa = "logs/bwa_mem2/{species}.{population}.{individual}.log",
        sam = "logs/samtools_view/{species}.{population}.{individual}.log",
    benchmark:
        "benchmark/bwa_mem2_mem/{species}.{population}.{individual}.tsv",
    resources:
        time = parameters["bwa_mem2"]["time"],
        mem_gb = parameters["bwa_mem2"]["mem_gb"],
    params:
        extra = parameters["bwa_mem2"]["extra"],
        tag = compose_rg_tag,
    threads:
        parameters["bwa_mem2"]["threads"],
    shell:
        "bwa-mem2 mem -t {threads} -R '{params.tag}' {params.extra} {input.idx} {input.R1} {input.R2} | "
        "samtools sort -l 9 -o {output.bam} --reference {input.ref} --output-fmt CRAM -# {threads} /dev/stdin 2> {log.sam}"
and the config is:
cluster:
  mkdir -p logs/{rule} && # change the log file to logs/slurm/{rule}
  sbatch
    --partition={resources.partition}
    --time={resources.time}
    --qos={resources.qos}
    --cpus-per-task={threads}
    --mem={resources.mem_gb}
    --job-name=smk-{rule}-{wildcards}
    --output=logs/{rule}/{rule}-{wildcards}-%j.out
    --parsable # Required to pass job IDs to scancel
default-resources:
  - partition=h24
  - qos=normal
  - mem_gb=100
  - time="04:00:00"
restart-times: 3
max-jobs-per-second: 10
max-status-checks-per-second: 1
local-cores: 1
latency-wait: 60
jobs: 100
keep-going: True
rerun-incomplete: True
printshellcmds: True
scheduler: greedy
use-conda: True # Required to run with local conda environment
cluster-status: status-sacct.sh # Required to monitor the status of the submitted jobs
cluster-cancel: scancel # Required to cancel the jobs with Ctrl + C
cluster-cancel-nargs: 50
Cheers,
Angel
Right now there are two separate memory resource requirements:
mem_mb
mem_gb
From the perspective of snakemake these are different resources, so both will be passed to the cluster. A quick fix is to use the same units everywhere, e.g. if the job really requires only 100 MB, then the default resource should be changed to:
default-resources:
  - partition=h24
  - qos=normal
  - mem_mb=100
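Equivalently, the rule itself could request mem_mb instead of mem_gb so that only one memory resource ever reaches sbatch. A rough sketch (assuming parameters["bwa_mem2"]["mem_gb"] holds a value in GB):
rule bwa_mem2_mem:
    # ... input, output, log, benchmark, params, threads, shell as in the original rule ...
    resources:
        time = parameters["bwa_mem2"]["time"],
        # convert the configured GB value to MB so only mem_mb is requested
        mem_mb = parameters["bwa_mem2"]["mem_gb"] * 1024,
The cluster command's --mem={resources.mem_gb} would then need to become --mem={resources.mem_mb} (sbatch treats a unitless --mem value as megabytes).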

Ceph-rgw service stops automatically after installation

In my local cluster (4 Raspberry Pis) I am trying to configure an RGW gateway. Unfortunately, the service disappears automatically after 2 minutes.
[ceph_deploy.rgw][INFO ] The Ceph Object Gateway (RGW) is now running on host OSD1 and default port 7480
cephuser@admin:~/mycluster $ ceph -s
cluster:
id: 745d44c2-86dd-4b2f-9c9c-ab50160ea353
health: HEALTH_WARN
too few PGs per OSD (24 < min 30)
services:
mon: 1 daemons, quorum admin
mgr: admin(active)
osd: 4 osds: 4 up, 4 in
rgw: 1 daemon active
data:
pools: 4 pools, 32 pgs
objects: 80 objects, 1.09KiB
usage: 4.01GiB used, 93.6GiB / 97.6GiB avail
pgs: 32 active+clean
io:
client: 5.83KiB/s rd, 0B/s wr, 7op/s rd, 1op/s wr
After a minute, the service (rgw: 1 daemon active) is no longer visible:
cephuser@admin:~/mycluster $ ceph -s
cluster:
id: 745d44c2-86dd-4b2f-9c9c-ab50160ea353
health: HEALTH_WARN
too few PGs per OSD (24 < min 30)
services:
mon: 1 daemons, quorum admin
mgr: admin(active)
osd: 4 osds: 4 up, 4 in
data:
pools: 4 pools, 32 pgs
objects: 80 objects, 1.09KiB
usage: 4.01GiB used, 93.6GiB / 97.6GiB avail
pgs: 32 active+clean
Many thanks for the help
Solution:
On the gateway node, open the Ceph configuration file in the /etc/ceph/ directory.
Find an RGW client section similar to the example:
[client.rgw.gateway-node1]
host = gateway-node1
keyring = /var/lib/ceph/radosgw/ceph-rgw.gateway-node1/keyring
log file = /var/log/ceph/ceph-rgw-gateway-node1.log
rgw frontends = civetweb port=192.168.178.50:8080 num_threads=100
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html-single/object_gateway_guide_for_red_hat_enterprise_linux/index

mpi4py irecv causes segmentation fault

I'm running the following code, which sends an array from rank 0 to rank 1, using the command mpirun -n 2 python -u test_irecv.py > output 2>&1.
from mpi4py import MPI
import numpy as np
comm = MPI.COMM_WORLD
asyncr = 1
size_arr = 10000
if comm.Get_rank()==0:
    arrs = np.zeros(size_arr)
    if asyncr: comm.isend(arrs, dest=1).wait()
    else: comm.send(arrs, dest=1)
else:
    if asyncr: arrv = comm.irecv(source=0).wait()
    else: arrv = comm.recv(source=0)
print('Done!', comm.Get_rank())
Running in synchronous mode with asyncr = 0 gives the expected output
Done! 0
Done! 1
However, running in asynchronous mode with asyncr = 1 gives the errors below.
I need to know why it runs okay in synchronous mode but not in asynchronous mode.
Output with asyncr = 1:
Done! 0
[nia1477:420871:0:420871] Caught signal 11 (Segmentation fault: address not mapped to object at address 0x138)
==== backtrace ====
0 0x0000000000010e90 __funlockfile() ???:0
1 0x00000000000643d1 ompi_errhandler_request_invoke() ???:0
2 0x000000000008a8b5 __pyx_f_6mpi4py_3MPI_PyMPI_wait() /tmp/eb-A2FAdY/pip-req-build-dvnprmat/src/mpi4py.MPI.c:49819
3 0x000000000008a8b5 __pyx_f_6mpi4py_3MPI_PyMPI_wait() /tmp/eb-A2FAdY/pip-req-build-dvnprmat/src/mpi4py.MPI.c:49819
4 0x000000000008a8b5 __pyx_pf_6mpi4py_3MPI_7Request_34wait() /tmp/eb-A2FAdY/pip-req-build-dvnprmat/src/mpi4py.MPI.c:83838
5 0x000000000008a8b5 __pyx_pw_6mpi4py_3MPI_7Request_35wait() /tmp/eb-A2FAdY/pip-req-build-dvnprmat/src/mpi4py.MPI.c:83813
6 0x00000000000966a3 _PyMethodDef_RawFastCallKeywords() /dev/shm/mboisson/avx2/Python/3.7.0/dummy-dummy/Python-3.7.0/Objects/call.c:690
7 0x000000000009eeb9 _PyMethodDescr_FastCallKeywords() /dev/shm/mboisson/avx2/Python/3.7.0/dummy-dummy/Python-3.7.0/Objects/descrobject.c:288
8 0x000000000006e611 call_function() /dev/shm/mboisson/avx2/Python/3.7.0/dummy-dummy/Python-3.7.0/Python/ceval.c:4563
9 0x000000000006e611 _PyEval_EvalFrameDefault() /dev/shm/mboisson/avx2/Python/3.7.0/dummy-dummy/Python-3.7.0/Python/ceval.c:3103
10 0x0000000000177644 _PyEval_EvalCodeWithName() /dev/shm/mboisson/avx2/Python/3.7.0/dummy-dummy/Python-3.7.0/Python/ceval.c:3923
11 0x000000000017774e PyEval_EvalCodeEx() /dev/shm/mboisson/avx2/Python/3.7.0/dummy-dummy/Python-3.7.0/Python/ceval.c:3952
12 0x000000000017777b PyEval_EvalCode() /dev/shm/mboisson/avx2/Python/3.7.0/dummy-dummy/Python-3.7.0/Python/ceval.c:524
13 0x00000000001aab72 run_mod() /dev/shm/mboisson/avx2/Python/3.7.0/dummy-dummy/Python-3.7.0/Python/pythonrun.c:1035
14 0x00000000001aab72 PyRun_FileExFlags() /dev/shm/mboisson/avx2/Python/3.7.0/dummy-dummy/Python-3.7.0/Python/pythonrun.c:988
15 0x00000000001aace6 PyRun_SimpleFileExFlags() /dev/shm/mboisson/avx2/Python/3.7.0/dummy-dummy/Python-3.7.0/Python/pythonrun.c:430
16 0x00000000001cad47 pymain_run_file() /dev/shm/mboisson/avx2/Python/3.7.0/dummy-dummy/Python-3.7.0/Modules/main.c:425
17 0x00000000001cad47 pymain_run_filename() /dev/shm/mboisson/avx2/Python/3.7.0/dummy-dummy/Python-3.7.0/Modules/main.c:1520
18 0x00000000001cad47 pymain_run_python() /dev/shm/mboisson/avx2/Python/3.7.0/dummy-dummy/Python-3.7.0/Modules/main.c:2520
19 0x00000000001cad47 pymain_main() /dev/shm/mboisson/avx2/Python/3.7.0/dummy-dummy/Python-3.7.0/Modules/main.c:2662
20 0x00000000001cb1ca _Py_UnixMain() /dev/shm/mboisson/avx2/Python/3.7.0/dummy-dummy/Python-3.7.0/Modules/main.c:2697
21 0x00000000000202e0 __libc_start_main() ???:0
22 0x00000000004006ba _start() /tmp/nix-build-glibc-2.24.drv-0/glibc-2.24/csu/../sysdeps/x86_64/start.S:120
===================
-------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
-------------------------------------------------------
--------------------------------------------------------------------------
mpirun noticed that process rank 1 with PID 420871 on node nia1477 exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------
The versions are as follows:
Python: 3.7.0
mpi4py: 3.0.0
mpiexec --version gives mpiexec (OpenRTE) 3.1.2
mpicc -v gives icc version 18.0.3 (gcc version 7.3.0 compatibility)
Running with asyncr = 1 in another system with MPICH gave the following output.
Done! 0
Traceback (most recent call last):
File "test_irecv.py", line 14, in <module>
if asyncr: arrv = comm.irecv(source=0).wait()
File "mpi4py/MPI/Request.pyx", line 235, in mpi4py.MPI.Request.wait
File "mpi4py/MPI/msgpickle.pxi", line 411, in mpi4py.MPI.PyMPI_wait
mpi4py.MPI.Exception: MPI_ERR_TRUNCATE: message truncated
-------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code.. Per user-direction, the job has been aborted.
-------------------------------------------------------
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[23830,1],1]
Exit code: 1
--------------------------------------------------------------------------
[master:01977] 1 more process has sent help message help-mpi-btl-base.txt / btl:no-nics
[master:01977] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
Apparently this is a known problem in mpi4py as described in https://bitbucket.org/mpi4py/mpi4py/issues/65/mpi_err_truncate-message-truncated-when. Lisandro Dalcin says
The implementation of irecv() for large messages requires users to pass a buffer-like object large enough to receive the pickled stream. This is not documented (as most of mpi4py), and even non-obvious and unpythonic...
The fix is to pass a large enough pre-allocated bytearray to irecv. A working example is as follows.
from mpi4py import MPI
import numpy as np
comm = MPI.COMM_WORLD
size_arr = 10000
if comm.Get_rank()==0:
    arrs = np.zeros(size_arr)
    comm.isend(arrs, dest=1).wait()
else:
    # pre-allocate a receive buffer large enough for the pickled message
    arrv = comm.irecv(bytearray(1<<20), source=0).wait()
print('Done!', comm.Get_rank())
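As a side note (not part of the linked answer, just a sketch), the same transfer can be done with mpi4py's uppercase, buffer-based API, which avoids pickling altogether; the trade-off is that the receiver must know the array's size and dtype in advance:
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
size_arr = 10000  # both ranks must agree on size and dtype

if comm.Get_rank() == 0:
    arrs = np.zeros(size_arr)
    # uppercase Isend/Irecv operate on raw buffers, so no pickle stream is involved
    comm.Isend([arrs, MPI.DOUBLE], dest=1).Wait()
else:
    arrv = np.empty(size_arr, dtype=np.float64)
    comm.Irecv([arrv, MPI.DOUBLE], source=0).Wait()
print('Done!', comm.Get_rank())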

Torque cannot communicate with host

I have been attempting to set up the Torque scheduler for a small cluster. I followed the steps to set up the scheduler from http://docs.adaptivecomputing.com/torque/archive/3-0-2/1.2configuring_torque_on_server.php
However, when I attempt
qterm -t quick
I get the following error
$ sudo qterm -t quick
Unable to communicate with Terra(192.168.1.25)
Cannot connect to specified server host 'Terra'.
qterm: could not connect to server '' (111) Connection refused
but the server starts just fine. However, when I attempt to run a command that uses multiple nodes, such as
qsub -l nodes=2:ppn=4 /home/user/scripts/someScript
it prints out something like
7.Terra
where Terra is the name of the head node, which is also a compute node in the cluster. That isn't the problem, though; the problem is that the job does not run, nor does it produce any output anywhere :/
The torque server log: https://ptpb.pw/EaKo
The terra node log: https://ptpb.pw/9w5M
and the Marte log: https://ptpb.pw/o4PT
I can get it to run with a PBS script, but only with one node.
#!/bin/bash
#PBS -l pmem=1gb,nodes=1:ppn=4
#PBS -m abe
cd Documents/
wc -l largeTest.csv
Here is the output of qstat after submitting a job:
Job ID                    Name             User            Time Use S Queue
------------------------- ---------------- --------------- -------- - -----
16.Terra                  testPerformance  justin          0        R batch
The output of pbsnodes -a:
Terra
     state = free
     power_state = Running
     np = 4
     properties = Tower
     ntype = cluster
     status = opsys=linux,uname=Linux Terra 4.17.14-arch1-1-ARCH #1 SMP PREEMPT Thu Aug 9 11:56:50 UTC 2018 x86_64,sessions=11525 22029,nsessions=2,nusers=1,idletime=57964,totmem=8111556kb,availmem=7539284kb,physmem=8111556kb,ncpus=4,loadave=0.00,gres=,netload=30570521372,state=free,varattr= ,cpuclock=Fixed,macaddr=e0:3f:49:44:72:20,version=6.1.1.1,rectime=1534937388,jobs=
     mom_service_port = 15002
     mom_manager_port = 15003
     gpus = 1

Marte
     state = free
     power_state = Running
     np = 4
     properties = NFSServer
     ntype = cluster
     status = opsys=linux,uname=Linux Marte 4.18.1-arch1-1-ARCH #1 SMP PREEMPT Wed Aug 15 21:11:55 UTC 2018 x86_64,sessions=366 556 563,nsessions=3,nusers=2,idletime=58140,totmem=7043404kb,availmem=6703808kb,physmem=7043404kb,ncpus=4,loadave=0.02,gres=,netload=36500663511,state=free,varattr= ,cpuclock=Fixed,macaddr=c8:5b:76:4a:65:91,version=6.1.1.1,rectime=1534937359,jobs=
     mom_service_port = 15002
     mom_manager_port = 15003
and the contents of /var/spool/torque/server_priv/nodes:
Terra np=4 gpus=1 Tower
Marte np=4 NFSServer
Edit: Here are the most recent logs as well
Mom Log for Node: https://ptpb.pw/DhKi
Mom Log for head node: https://ptpb.pw/MTlD
and the server log: https://ptpb.pw/HPkE
