Error running MongoDB on a Linux server

I am trying to run the command:
mongod --storageEngine wiredTiger --dbpath data --logpath logs/mongo.log
on a Linux server, but it gives me the error:
cannot execute binary file: Exec format error
When I try to inspect the dependencies of the mongod binary with readelf -d, it reports many errors:
readelf: Warning: The e_shentsize field in the ELF header is larger than the size of an ELF section header
readelf: Error: Reading 0x9c00000 bytes extends past end of file for section headers
I couldn't find any information about this error.

The problem was an architecture mismatch: I was running a 32-bit MongoDB binary on my 64-bit Linux server.
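A quick way to confirm a mismatch like this (run from the directory containing the binary) is to compare the binary's architecture with the machine's:
file mongod    # reports "ELF 32-bit ..." or "ELF 64-bit ..." for the binary
uname -m       # reports the machine architecture, e.g. x86_64 for a 64-bit kernel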

Related

"NotImplementedError" in theHarvester

When running the command theHarvester -d microsoft.com -l 200 -b baidu, I do not get a successful result; instead I get the error shown in the screenshot.
[screenshot of the NotImplementedError]
Note: I have updated my system to the latest version and have run dist-upgrade as well.
Operating System Details:
PRETTY_NAME="Parrot OS 5.1 (Electro Ara)"
NAME="Parrot OS"
VERSION_ID="5.1"
VERSION="5.1 (Electro Ara)"
VERSION_CODENAME=ara
ID=parrot
ID_LIKE=debian
HOME_URL="https://www.parrotsec.org/"
SUPPORT_URL="https://community.parrotsec.org/"
BUG_REPORT_URL="https://community.parrotsec.org/"
This is the whole command with the error:
[screenshot of the full page]

nbd-client failed to setup device

I've created a simple nbd-server instance that shares a single 1 GB file, which I created with:
dd if=/dev/zero of=nbd.file bs=1048576 count=1024
The nbd.conf file looks like this:
[generic]
[export1]
exportname = /Users/michael/Downloads/nbd-3.21/nbd.file
I start the server on my Mac as follows:
nbd-server -d -C /Users/michael/Downloads/nbd-3.21/nbd.conf
But when I try to connect from the Linux client, I get an error:
$ nbd-client -p -b 4096 nbd-server.local -N export1 /dev/nbd0
Negotiation: ..size = 1024MB
Error: Failed to setup device, check dmesg
Exiting.
There is nothing in dmesg and I can't find any documentation on exactly what went wrong. The server output looks like this, showing no obvious errors:
** Message: 20:05:55.820: virtstyle ipliteral
** Message: 20:05:55.820: connect from 192.168.1.105, assigned file is /Users/michael/Downloads/nbd-3.21/nbd.file
** Message: 20:05:55.820: No authorization file, granting access.
** Message: 20:05:55.820: Size of exported file/device is 1073741824
** Message: 20:05:55.821: Starting to serve
Error: Connection dropped: Connection reset by peer
Exiting.
All of these error messages lead me to believe the issue is on the client: it doesn't like something, so it terminates the connection. If I daemonize the server it happily lets the client try to reconnect.
I thought perhaps I should have more lines in my config file, but I don't see any obvious optional config items that would help. I thought perhaps there was some minimum file size, so I bumped it up from 16MB to 1GB.
What does the error "Failed to setup device" mean? How can I troubleshoot what is going wrong or fix it?
Try running the client as root: sudo nbd-client ...
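A minimal sequence to try, reusing the options from the question and assuming the nbd kernel module may not be loaded yet:
sudo modprobe nbd                                            # load the nbd kernel module if it isn't already
sudo nbd-client -p -b 4096 nbd-server.local -N export1 /dev/nbd0
dmesg | tail                                                 # if it still fails, any kernel-side complaint should show up here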

Problems reading slurm configuration file with Singularity

I'm trying to run an application in Singularity across nodes (864 MPI tasks) on an HPC system, namely the S4 machine at the University of Wisconsin's Space Science and Engineering Center (SSEC).
I'm using what Singularity describes as the hybrid model, meaning that I'm using the native (system) MPI but I also have MPI installed in the container. The MPI versions are compatible: I'm using Intel MPI 17.0.6 outside the container and Intel MPI 17.0.1 inside the container. The code in the container is compiled with Intel 17.0.1 compilers (C++, C, and Fortran).
So here's the problem. When I first ran the code, it complained about not finding the slurm configuration file:
fv3jedi_var.x: error: s_p_parse_file: unable to status file /etc/slurm-llnl/slurm.conf: No such file or directory, retrying in 1sec up to 60sec
So I found the system slurm.conf file in /etc/slurm and mounted this directory in the container as /etc/slurm-llnl. It now finds the configuration file but it does not understand the site-specific configuration:
fv3jedi_var.x: error: "ALL" is not a valid option for "EnforcePartLimits"
fv3jedi_var.x: error: Parsing error at unrecognized key: Features
fv3jedi_var.x: error: Parse error in file /etc/slurm-llnl/slurm.conf line 225: " Features=ivy"
fv3jedi_var.x: error: Parsing error at unrecognized key: Features
fv3jedi_var.x: error: Parse error in file /etc/slurm-llnl/slurm.conf line 226: " Features=ivy"
fv3jedi_var.x: error: Parsing error at unrecognized key: Features
[...]
So, I'm stuck. I'm guessing that this might be a PMI issue? I currently have Slurm's libpmi.so installed in the container, and that's what I'm specifying with the I_MPI_PMI_LIBRARY variable. But I wonder whether the native (system) PMI (I know it is PMI, as opposed to PMI2 or PMIx) is somehow configured to properly process the system slurm.conf file. I have tried to use the native PMI library by mounting (binding) the appropriate directory in the container and changing my I_MPI_PMI_LIBRARY variable. But the native PMI library is in the same directory as the glibc library, and when I mount that there is a conflict between the glibc libraries inside and outside the container:
/bin/sh: relocation error: /usr/lib64/libc.so.6: symbol _dl_starting_up, version GLIBC_PRIVATE not defined in file ld-linux-x86-64.so.2 with link time reference
Any ideas on how to proceed? My slurm batch script is below. Thanks!
#!/usr/bin/bash
# --mem-per-cpu=8192M
#SBATCH --job-name=bm_con14
#SBATCH --partition=ivy
#SBATCH --ntasks=864
#SBATCH --cpus-per-task=1
#SBATCH --time=2:00:00
#SBATCH --mail-user=miesch#ucar.edu
source /etc/bashrc
module purge
module load license_intel
module load intel/17.0.6
ulimit -s unlimited
cd /data/users/mmiesch/runs/con-benchmark/con
JEDICON=/data/users/mmiesch
JEDIBUILD=/data/users/mmiesch/jedi/fv3-bundle/build-con
JEDIBIN=/data/users/mmiesch/jedi/fv3-bundle/build-con/bin
export SINGULARITY_BINDPATH="$JEDIBUILD,/etc/slurm:/etc/slurm-llnl"
srun --ntasks=864 --cpu_bind=cores --distribution=block:block --verbose singularity exec --home=$PWD $JEDICON/jedi-intel17-impi-hpc-dev.sif ${JEDIBIN}/fv3jedi_var.x Config/3dvar_bump.yaml
exit 0
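One thing that might be worth trying (a sketch only; the host path to libpmi.so and the in-container target path are assumptions that need adjusting for the actual system) is to bind just the native libpmi.so file to a neutral location inside the container, instead of binding the whole host library directory, so the host glibc never shadows the container's:
export SINGULARITY_BINDPATH="$JEDIBUILD,/etc/slurm:/etc/slurm-llnl,/usr/lib64/libpmi.so:/opt/hostpmi/libpmi.so"
export I_MPI_PMI_LIBRARY=/opt/hostpmi/libpmi.so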

PHP exec(myexe) fails in PHP App, but not CLI. Fails Running Under User "apache"

I have a custom program (e.g. myexe) that is executed by a web app using PHP's exec() function. It does not fail when run via the PHP CLI, nor does it fail when I run it from the command line as my own user. I have built myexe so that there are no memory issues when profiled with valgrind. myexe is about 26MB in size.
To simplify the situation, I have run myexe on the command line under the user 'apache' and reproduced the failure.
su -s /bin/sh apache -c "/usr/local/bin/myexe parm1 parm2..."
==> Segmentation fault (core dumped)
BUT when I change the user to myself and run the same command above, it works.
su -s /bin/sh mike -c "/usr/local/bin/myexe parm1 parm2..."
==> WORKS
Here's the error from the system log file:
Jul 9 18:26:15 DEVSTN-1 kernel: myexe[27352]: segfault at 7fffa2bf9ff8 ip 0000000000410324 sp 00007fffa2bfa000 error 6 in myexe[400000+5ae000]
Jul 9 18:26:16 DEVSTN-1 abrt[27353]: Saved core dump of pid 27352 (/usr/local/bin/myexe) to /var/spool/abrt/ccpp-2015-07-09-18:26:15-27352 (13631488 bytes)
Jul 9 18:26:16 DEVSTN-1 abrtd: Directory 'ccpp-2015-07-09-18:26:15-27352' creation detected
Jul 9 18:26:17 DEVSTN-1 abrtd: Executable '/usr/local/bin/myexe' doesn't belong to any package and ProcessUnpackaged is set to 'no'
Jul 9 18:26:17 DEVSTN-1 abrtd: 'post-create' on '/var/spool/abrt/ccpp-2015-07-09-18:26:15-27352' exited with 1
Jul 9 18:26:17 DEVSTN-1 abrtd: Deleting problem directory '/var/spool/abrt/ccpp-2015-07-09-18:26:15-27352'
My configuration:
CentOS6 2.6.32-504.23.4.el6.x86_64
Apache/2.2.15 (CentOS)
PHP Version 5.3.3
Am I correct with assuming that PHP has nothing to do with the error?
What should I do next?
Correct; PHP has nothing to do with the error. This is a segmentation fault caused by invalid memory access (either overflowing a buffer, or accessing already-freed memory) in myexe. It seems to have saved a core dump to /var/spool/abrt/ccpp-2015-07-09-18:26:15-27352, so, try debugging with GDB:
gdb /usr/local/bin/myexe -c /var/spool/abrt/ccpp-2015-07-09-18:26:15-27352
(gdb) bt
And try to see where the executable is failing. To get useful output, it will need to be compiled with debugging symbols. If it doesn't fail running as root or a different user, or running in an interactive terminal, I'd look for bugs that could be triggered by being unable to open a file, unable to read an expected environment variable, etc. to help isolate your problem.
Running the executable under strace might help figure out what's going on as well.
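For example, reusing the su invocation from the question (parm1 and parm2 stand in for the real arguments):
su -s /bin/sh apache -c "strace -f -o /tmp/myexe.trace /usr/local/bin/myexe parm1 parm2"
tail -n 50 /tmp/myexe.trace    # the last system calls before the crash often point at a missing file or path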
Found the problem by entering a bash shell as user apache and running the program under gdb.
Turns out myexe was trying to create a directory under the user's home dir (/home/apache) which doesn't exist.
What helped me was knowing how to start a shell under a different user and using gdb.
Here's the command to start a shell under another user (apache):
su -s /bin/bash apache
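Putting the two together, roughly (parm1 and parm2 again stand in for the real arguments):
su -s /bin/bash apache
gdb --args /usr/local/bin/myexe parm1 parm2
(gdb) run
(gdb) bt    # backtrace at the point of the segmentation fault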

RabbitMQ not running on Linux: cannot find the file xmerl.app

I use openSUSE 12.3.
I installed erlang and erlang-otp (R14B04).
When I start RabbitMQ with ./rabbitmq-server I get this error:
$:/opt/rabbitmq/rabbitmq_server-3.1.3/sbin # ./rabbitmq-server
BOOT FAILED
===========
Error description:
{error,{"no such file or directory","xmerl.app"}}
Log files (may contain more information):
./../var/log/rabbitmq/rabbit#testTFOMS.log
./../var/log/rabbitmq/rabbit#testTFOMS-sasl.log
Stack trace:
[{app_utils,load_applications,2},
{app_utils,load_applications,1},
{rabbit,'-boot/0-fun-1-',0},
{rabbit,start_it,1},
{init,start_it,1},
{init,start_em,1}]
{"init terminating in do_boot",{rabbit,failure_during_boot,{error,{"no such file or directory","xmerl.app"}}}}
Crash dump was written to: erl_crash.dump
init terminating in do_boot ()
I can find the file:
$:find / -name 'xmerl.app'
/usr/lib/erlang/lib/xmerl-1.2.10/ebin/xmerl.app
Where do I need to specify it so that the program will start?
Can you start xmerl without rabbitmq? Just:
application:start(xmerl).
Try:
code:add_path("/usr/lib/erlang/lib/xmerl-1.2.10/ebin/").
And then:
./rabbitmq-server
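Alternatively (a sketch of another approach), point the Erlang runtime that rabbitmq-server starts at the library directory via the ERL_LIBS environment variable:
export ERL_LIBS=/usr/lib/erlang/lib    # the parent directory containing xmerl-1.2.10/ebin/xmerl.app
./rabbitmq-server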
I downloaded the RPM package for CentOS and installed it with zypper, then started RabbitMQ.
