Parallel programming in Fortran: problems with a hello program on Cygwin

I have problems with this basic program:
!Fortran example
program hello
    implicit none
    include 'mpif.h' ! If I try 'use mpi', the compiler can't find the module.
    integer :: rank, size, ierror, tag, status(MPI_STATUS_SIZE)

    call MPI_INIT(ierror)
    call MPI_COMM_SIZE(MPI_COMM_WORLD, size, ierror)
    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierror)
    print *, 'node', rank, ': Hello world'
    call MPI_FINALIZE(ierror)
end program hello
It compiles, but when I run it I get these errors:
$ mpirun -np 1 hello
[DESKTOP-SLDH1SC:00140] PMIX ERROR: INIT in file /pub/devel/openmpi/v4.0/openmpi-4.0.5-1.x86_64/src/openmpi-4.0.5/opal/mca/pmix/pmix3x/pmix/src/mca/gds/ds21/gds_ds21_lock_pthread.c at line 188
[DESKTOP-SLDH1SC:00140] PMIX ERROR: SUCCESS in file /pub/devel/openmpi/v4.0/openmpi-4.0.5-1.x86_64/src/openmpi-4.0.5/opal/mca/pmix/pmix3x/pmix/src/mca/common/dstore/dstore_base.c at line 2450
[DESKTOP-SLDH1SC:00140] PMIX ERROR: INIT in file /pub/devel/openmpi/v4.0/openmpi-4.0.5-1.x86_64/src/openmpi-4.0.5/opal/mca/pmix/pmix3x/pmix/src/mca/gds/ds12/gds_ds12_lock_pthread.c at line 138
[DESKTOP-SLDH1SC:00140] PMIX ERROR: SUCCESS in file /pub/devel/openmpi/v4.0/openmpi-4.0.5-1.x86_64/src/openmpi-4.0.5/opal/mca/pmix/pmix3x/pmix/src/mca/common/dstore/dstore_base.c at line 2450
[DESKTOP-SLDH1SC:00141] PMIX ERROR: ERROR in file /pub/devel/openmpi/v4.0/openmpi-4.0.5-1.x86_64/src/openmpi-4.0.5/opal/mca/pmix/pmix3x/pmix/src/mca/gds/ds12/gds_ds12_lock_pthread.c at line 168
node 0 : Hello world
[DESKTOP-SLDH1SC:00140] [[39760,0],0] unable to open debugger attach fifo
What's wrong?
Finally, what's the difference between mpi and omp_lib?

Related

installation errors for Perl Spreadsheet::Write

I'm trying to install Spreadsheet::Write using Strawberry Perl but getting an error. Please help if you can.
perl -v reports
This is perl 5, version 32, subversion 1 (v5.32.1) built for MSWin32-x64-multi-thread
Spreadsheet::Write is failing the tests with the error message "Spreadsheet data does not match reference for 'csv'":
Running make test for AMALTSEV/Spreadsheet-Write-1.03.tar.gz
"C:\Strawberry\perl\bin\perl.exe" "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, 'blib\lib', 'blib\arch')" t/*.t
t/all_tests.t .. ====== Expected:
Column1,Column#2,"Column 3","Column 4"
1,"Cell #2/1",C.3/1,"C.4/1/Γÿ║"
2,"Cell #2/2",C.3/2,"C.4/2/Γÿ║"
======
# t\tlib/Common.pm:79 - WriteCSVTest(test_text_format)
# Boolean assertion failed
# Spreadsheet data does not match reference for 'csv'
# at t\tlib/Common.pm line 79, <DATA> line 1.
# Common::spreadsheet_test(WriteCSVTest=HASH(0x2686580), "csv", GLOB(0x2692390)) called at t\tlib/WriteCSVTest.pm line 15
# WriteCSVTest::test_text_format(WriteCSVTest=HASH(0x2686580)) called at C:/Strawberry/perl/site/lib/Test/Unit/Lite.pm line 602
# eval {...} called at C:/Strawberry/perl/site/lib/Test/Unit/Lite.pm line 601
# Test::Unit::TestSuite::run(Test::Unit::TestSuite=HASH(0x2685e30), Test::Unit::Result=HASH(0x2685e60), Test::Unit::HarnessUnit=HASH(0x78a960)) called at C:/Strawberry/perl/site/lib/Test/Unit/Lite.pm line 750
# Test::Unit::TestRunner::start(Test::Unit::HarnessUnit=HASH(0x78a960), "Test::Unit::Lite::AllTests") called at t/all_tests.t line 23
t/all_tests.t .. Failed 1/3 subtests

Ansible with Mitogen: crash with two or more machines

I'm trying to use the Ansible strategy mitogen_linear. As soon as I run a playbook against more than one machine, it crashes as follows.
I'm running this on Debian 11 under WSL 2 with kernel version 5.10.102.1.
ERROR! [mux 12903] 17:51:03.129739 E mitogen: <Stream ssh.proxyserver02 #5730> crashed
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/mitogen/core.py", line 3488, in _call
    func(self)
  File "/usr/lib/python3/dist-packages/mitogen/core.py", line 1726, in on_transmit
    self.protocol.on_transmit(broker)
  File "/usr/lib/python3/dist-packages/mitogen/core.py", line 2174, in on_transmit
    self._writer.on_transmit(broker)
  File "/usr/lib/python3/dist-packages/mitogen/core.py", line 1914, in on_transmit
    written = self._protocol.stream.transmit_side.write(buf)
  File "/usr/lib/python3/dist-packages/mitogen/core.py", line 2040, in write
    written, disconnected = io_op(os.write, self.fd, s)
  File "/usr/lib/python3/dist-packages/mitogen/core.py", line 553, in io_op
    return func(*args), None
BlockingIOError: [Errno 11] Resource temporarily unavailable
fatal: [proxyserver02]: UNREACHABLE! => changed=false
msg: Channel was disconnected while connection attempt was in progress; this may be caused by an abnormal Ansible exit, or due to an unreliable target.
unreachable: true

Mimicking bash wc functionalities using python

I have written a very simple Python program, called wc.py, which mimics the behaviour of bash's wc by counting the number of words, lines and bytes in a file. My program is as follows:
import sys

path = sys.argv[1]
file = open(path)

w = 0  # words
l = 0  # lines
b = 0  # bytes

for currentLine in file:
    wordsInLine = currentLine.strip().split(' ')
    wordsInLine = [word for word in wordsInLine if word != '']
    w += len(wordsInLine)
    b += len(currentLine.encode('utf-8'))
    l += 1

# output
print(str(l) + ' ' + str(w) + ' ' + str(b))
To execute my program, run the following command:
python3 wc.py [a file to read the data from]
As a result it shows
[The number of lines in the file] [The number of words in the file] [The number of bytes in the file] [the file directory path]
The files I used to test my code are as follows:
file.txt which contains the following data:
1
2
3
4
Executing "wc file.txt" returns
4 4 8
Executing "python3 wc.py file.txt" returns 4 4 8
Download "Annual enterprise survey: 2020 financial year (provisional) – CSV" from CSV file download
Executing "wc [fileName].csv" returns
37081 500273 5881081
Executing "python3 wc.py [fileName].csv" returns
37081 500273 5844000
and a [something].pdf file
Executing "wc [something].pdf" works.
Executing "python3 code.py" throws the following errors:
Traceback (most recent call last):
  File "code.py", line 10, in <module>
    for currentLine in file:
  File "/usr/lib/python3.8/codecs.py", line 322, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xbf in position 10: invalid start byte
As you can see, the output of python3 code.py [something].pdf and python3 code.py [something].csv is not the same as what wc returns. Could you help me find the reason for this erroneous behaviour in my code?
Regarding the CSV file, if you look at the difference between your result and that of wc:
5881081 - 5844000 = 37081, which is exactly the number of lines.
That is, every line in the original file has one additional character: the carriage return \r, which gets lost in Python because you iterate over lines in text mode, where universal-newline handling translates \r\n into \n. If you want a byte-correct result, you first have to identify the type of line endings used in the file (and watch out for inconsistencies throughout the document).
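A minimal sketch of that idea: opening the file in binary mode means Python neither translates line endings nor tries to decode UTF-8, which also avoids the UnicodeDecodeError on the PDF (the variable names here are my own, and the word count only approximates wc's rules):
import sys

path = sys.argv[1]
lines = words = byte_count = 0

with open(path, 'rb') as f:
    for raw in f:                  # iterate on b'\n'; b'\r\n' stays intact
        lines += 1                 # note: wc counts newlines, so a file with
                                   # no trailing newline can still differ by one
        byte_count += len(raw)     # raw bytes, line ending included
        words += len(raw.split())  # split on ASCII whitespace, close to wc -w

print(lines, words, byte_count, path)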

os.read on inotify file descriptor: reading 32 bytes works but 31 raises an exception

I'm writing a program that should respond to file changes using inotify. The below skeleton program works as I expect...
# test.py
import asyncio
import ctypes
import os

IN_CLOSE_WRITE = 0x00000008

async def main(loop):
    libc = ctypes.cdll.LoadLibrary('libc.so.6')
    fd = libc.inotify_init()
    os.mkdir('directory-to-watch')
    wd = libc.inotify_add_watch(fd, 'directory-to-watch'.encode('utf-8'), IN_CLOSE_WRITE)
    loop.add_reader(fd, handle, fd)
    with open('directory-to-watch/file', 'wb') as file:
        pass

def handle(fd):
    event_bytes = os.read(fd, 32)
    print(event_bytes)

loop = asyncio.get_event_loop()
loop.run_until_complete(main(loop))
... in that it outputs...
b'\x01\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00file\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
However, if I change it to attempt to read 31 bytes...
event_bytes = os.read(fd, 31)
... then it raises an exception...
Traceback (most recent call last):
  File "/usr/lib/python3.7/asyncio/events.py", line 88, in _run
    self._context.run(self._callback, *self._args)
  File "/t.py", line 19, in handle
    event_bytes = os.read(fd, 31)
OSError: [Errno 22] Invalid argument
... and similarly for all numbers smaller than 31 that I have tried, including 1 byte.
Why is this? I would have thought it should be able to attempt to read any number of bytes, and just return whatever is in the buffer, up to the length given by the second argument of os.read.
I'm running this on Alpine Linux 3.10 in a Docker container on macOS, with a very basic Dockerfile:
FROM alpine:3.10
RUN apk add --no-cache python3
COPY test.py /
and running it with
docker build . -t test && docker run -it --rm test python3 /test.py
It's because the inotify file descriptor only allows reads whose buffer is large enough to hold information about the next event. From http://man7.org/linux/man-pages/man7/inotify.7.html:
The behavior when the buffer given to read(2) is too small to return
information about the next event depends on the kernel version: in
kernels before 2.6.21, read(2) returns 0; since kernel 2.6.21,
read(2) fails with the error EINVAL.
and from https://github.com/torvalds/linux/blob/f1a3b43cc1f50c6ee5ba582f2025db3dea891208/include/uapi/asm-generic/errno-base.h#L26
#define EINVAL 22 /* Invalid argument */
which presumably maps to the Python OSError with Errno 22.
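In practice, the fix is to pass a buffer of at least sizeof(struct inotify_event) + NAME_MAX + 1 bytes, as the man page recommends. Here is a minimal sketch (the helper name read_events and the 4096-byte slack are my own choices) that reads a comfortably large buffer and unpacks the fixed 16-byte event header described in inotify(7):
import os
import struct

_EVENT_FMT = 'iIII'                        # wd, mask, cookie, len
_EVENT_SIZE = struct.calcsize(_EVENT_FMT)  # 16 bytes
_BUF_LEN = _EVENT_SIZE + 4096              # room for the header plus a name

def read_events(fd):
    # A buffer this size always fits at least one event, so the read
    # cannot fail with EINVAL on kernels since 2.6.21.
    buf = os.read(fd, _BUF_LEN)
    offset = 0
    while offset < len(buf):
        wd, mask, cookie, name_len = struct.unpack_from(_EVENT_FMT, buf, offset)
        name_start = offset + _EVENT_SIZE
        name = buf[name_start:name_start + name_len].rstrip(b'\0')
        yield wd, mask, cookie, name
        offset = name_start + name_len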

logging multithreading deadlock in python

I run 10 processes with 10 threads each, and they constantly write to 10 log files (one per process) using logging.info() and logging.debug() for about 30 seconds.
At some point, usually after about 10 seconds, a deadlock occurs: all the processes stop processing.
gdb python [pid] with py-bt and info threads shows that it is stuck here:
Id Target Id Frame
* 1 Thread 0x7ff50f020740 (LWP 1622) "python" 0x00007ff50e8276d6 in futex_abstimed_wait_cancelable (private=0, abstime=0x0, expected=0, futex_word=0x564f17c8aa80)
at ../sysdeps/unix/sysv/linux/futex-internal.h:205
2 Thread 0x7ff509636700 (LWP 1624) "python" 0x00007ff50eb57bb7 in epoll_wait (epfd=8, events=0x7ff5096351d0, maxevents=256, timeout=-1) at ../sysdeps/unix/sysv/linux/epoll_wait.c:30
3 Thread 0x7ff508e35700 (LWP 1625) "python" 0x00007ff50eb57bb7 in epoll_wait (epfd=12, events=0x7ff508e341d0, maxevents=256, timeout=-1) at ../sysdeps/unix/sysv/linux/epoll_wait.c:30
4 Thread 0x7ff503fff700 (LWP 1667) "python" 0x00007ff50e8276d6 in futex_abstimed_wait_cancelable (private=0, abstime=0x0, expected=0, futex_word=0x564f17c8aa80)
at ../sysdeps/unix/sysv/linux/futex-internal.h:205
...[threads 5-6 like 4]...
7 Thread 0x7ff5027fc700 (LWP 1690) "python" 0x00007ff50eb46187 in __GI___libc_write (fd=2, buf=0x7ff50967bc24, nbytes=85) at ../sysdeps/unix/sysv/linux/write.c:27
...[threads 8-13 like 4]...
Stack of thread 7:
Traceback (most recent call first):
File "/usr/lib/python2.7/logging/__init__.py", line 889, in emit
stream.write(fs % msg)
...[skipped useless lines]...
And this code (I guess this is from logging/__init__.py):
884 #the codecs module, but fail when writing to a
885 #terminal even when the codepage is set to cp1251.
886 #An extra encoding step seems to be needed.
887 stream.write((ufs % msg).encode(stream.encoding))
888 else:
>889 stream.write(fs % msg)
890 except UnicodeError:
891 stream.write(fs % msg.encode("UTF-8"))
892 self.flush()
893 except (KeyboardInterrupt, SystemExit):
894 raise
The stacks of the remaining threads are similar, all waiting for the GIL:
Traceback (most recent call first):
Waiting for the GIL
File "/usr/lib/python2.7/threading.py", line 174, in acquire
rc = self.__block.acquire(blocking)
File "/usr/lib/python2.7/logging/__init__.py", line 715, in acquire
self.lock.acquire()
...[skipped useless lines]...
It's documented that the logging package is thread-safe without additional locks. So why might logging deadlock? Does it open too many file descriptors or hit some other limit?
This is how I initialize it (in case it matters):
def get_logger(log_level, file_name='', log_name=''):
    if len(log_name) != 0:
        logger = logging.getLogger(log_name)
    else:
        logger = logging.getLogger()
    logger.setLevel(logger_state[log_level])
    formatter = logging.Formatter('%(asctime)s [%(levelname)s][%(name)s:%(funcName)s():%(lineno)s] - %(message)s')
    # file handler
    if len(file_name) != 0:
        fh = logging.FileHandler(file_name)
        fh.setLevel(logging.DEBUG)
        fh.setFormatter(formatter)
        logger.addHandler(fh)
    # console handler
    console_out = logging.StreamHandler()
    console_out.setLevel(logging.DEBUG)
    console_out.setFormatter(formatter)
    logger.addHandler(console_out)
    return logger
The problem was that I was writing output both to the console and to a file, but all those processes had been started with stdout redirected to a pipe that was never read:
p = Popen(proc_params,
          stdout=PIPE,
          stderr=STDOUT,
          close_fds=ON_POSIX,
          bufsize=1)
So pipes have a limited buffer size, and once that buffer fills up, writes to the pipe block and everything deadlocks. Here is the explanation: https://docs.python.org/2/library/subprocess.html
Note
Do not use stdout=PIPE or stderr=PIPE with this function as that can deadlock based on the child process output volume. Use Popen with the communicate() method when you need pipes.
The note is written for the convenience functions, which I don't use, but it applies equally to a plain Popen if you never read from the pipes.
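A minimal sketch of two safe alternatives, assuming proc_params is the same command list as above (subprocess.DEVNULL exists only on Python 3; on Python 2.7 you would pass open(os.devnull, 'wb') instead):
from subprocess import Popen, PIPE, STDOUT, DEVNULL

# Option 1: nobody reads the output, so don't create a pipe at all;
# the child writes to /dev/null and can never block on a full buffer.
p = Popen(proc_params, stdout=DEVNULL, stderr=STDOUT)
p.wait()

# Option 2: the output is needed, so keep the pipe but drain it;
# communicate() reads until EOF, so the pipe buffer cannot fill up.
p = Popen(proc_params, stdout=PIPE, stderr=STDOUT)
out, _ = p.communicate()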
