I have a Rails 6 application with Webpacker on a virtual host using Plesk. The Node.js packages were installed successfully with yarn.
When I open the website, Phusion Passenger fails with the error report below.
The stdout/stderr output of the failing subprocess just prints the first 65412 characters of my public/packs/js/application-ad2c73bce874600d5502.js file, without any further error details. What does that mean, and how can I get the app running?
Passenger Core:
PID
27769
Backtrace
in 'bool Passenger::SpawningKit::HandshakePerform::checkCurrentState()' (Perform.h:238)
in 'void Passenger::SpawningKit::HandshakePerform::waitUntilSpawningFinished(boost::unique_lock<boost::mutex>&)' (Perform.h:213)
in 'Passenger::SpawningKit::Result Passenger::SpawningKit::HandshakePerform::execute()' (Perform.h:1752)
in 'Passenger::SpawningKit::Result Passenger::SpawningKit::DirectSpawner::internalSpawn(const AppPoolOptions&, Passenger::SpawningKit::Config&, Passenger::SpawningKit::HandshakeSession&, const Passenger::Json::Value&, Passenger::SpawningKit::JourneyStep&)' (DirectSpawner.h:211)
in 'virtual Passenger::SpawningKit::Result Passenger::SpawningKit::DirectSpawner::spawn(const AppPoolOptions&)' (DirectSpawner.h:261)
in 'void Passenger::ApplicationPool2::Group::spawnThreadRealMain(const SpawnerPtr&, const Passenger::ApplicationPool2::Options&, unsigned int)' (SpawningAndRestarting.cpp:95)
User and group
uid=0(root) gid=0(root) groups=0(root)
Ulimits
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 39266
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 39266
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Environment variables
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
NOTIFY_SOCKET=/run/systemd/notify
LANG=C
PASSENGER_USE_FEEDBACK_FD=true
SERVER_SOFTWARE=Apache/2.4.6 (CentOS) OpenSSL/1.0.2k-fips Apache mod_fcgid/2.3.9 Phusion_Passenger/6.0.8
Subprocess:
PID
3850
Stdout and stderr output
/var/www/vhosts/mydomain.com/httpdocs/myapp/public/packs/js/application-ad2c73bce874600d5502.js:2
[The first 65412 characters of the file content]
User and group
uid=10000(mthcgidu) gid=1003(psacln) groups=1003(psacln)
Ulimits
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 39266
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 39266
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Environment variables
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
NOTIFY_SOCKET=/run/systemd/notify
LANG=C
PASSENGER_USE_FEEDBACK_FD=true
SERVER_SOFTWARE=Apache/2.4.6 (CentOS) OpenSSL/1.0.2k-fips Apache mod_fcgid/2.3.9 Phusion_Passenger/6.0.8
IN_PASSENGER=1
PASSENGER_SPAWN_WORK_DIR=/tmp/passenger.spawn.XXXXoUtv1L
PYTHONUNBUFFERED=1
NODE_PATH=/usr/share/passenger/node
RAILS_ENV=development
RACK_ENV=development
WSGI_ENV=development
NODE_ENV=development
PASSENGER_APP_ENV=development
USER=mthcgidu
LOGNAME=mthcgidu
SHELL=/usr/local/psa/bin/chrootsh
HOME=/var/www/vhosts/mydomain.com
PWD=/var/www/vhosts/mydomain.com/httpdocs/myapp
GEOIP_ADDR=[...]
HTTPS=on
PASSENGER_COMPILE_NATIVE_SUPPORT_BINARY=0
PASSENGER_DOWNLOAD_NATIVE_SUPPORT_BINARY=0
PERL5LIB=/usr/share/awstats/lib:/usr/share/awstats/plugins
UNIQUE_ID=YVCbtpjnrt9WLCv4IWd-gAAAAMM
WEBPACKER_NODE_MODULES_BIN_PATH=/httpdocs/myapp/node_modules/.bin
The JS file was minified and therefore consisted of a single line. I downloaded the file, ran a code formatter over it, and uploaded the resulting content of ~25000 lines in place of the original minified content. Then I could see the line responsible for the error, along with the error message and backtrace.
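That formatting step can also be scripted. A minimal Python sketch, assuming the jsbeautifier package (pip install jsbeautifier) and a locally downloaded copy of the pack file; the output filename is made up for illustration:

import jsbeautifier

# Downloaded copy of the minified pack file.
src = "application-ad2c73bce874600d5502.js"

with open(src) as f:
    minified = f.read()

# Expand the single minified line into readable, line-numbered JS so
# the position reported in the error output becomes meaningful.
pretty = jsbeautifier.beautify(minified)

with open("application-formatted.js", "w") as f:
    f.write(pretty)

Any standalone beautifier works just as well; the point is only to turn one very long line into many short ones.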
Related
The error occurs only when printing reports that have more than 150 pages.
wkhtmltopdf version : 0.12.5 (with patched qt)
OS : Ubuntu 20.04.3 LTS
CPU(s) : 48
Memory (RAM): 94 GB
ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 377646
max locked memory (kbytes, -l) 65536
max memory size (kbytes, -m) unlimited
open files (-n) 200000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 200000
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Odoo configuration file
limit_memory_hard = 78114717696
limit_memory_soft = 65095598080
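For context, Odoo enforces limit_memory_hard on each worker through Python's resource module, and an external tool like wkhtmltopdf inherits those limits when a worker spawns it. A minimal sketch of that mechanism, using the value above (the RLIMIT_AS mapping reflects Odoo's documented behaviour, not something stated in this question):

import resource

# Hard memory limit from the configuration above, in bytes.
limit_memory_hard = 78114717696

# Cap the process's virtual address space; allocations beyond this
# fail with MemoryError (or ENOMEM in child processes) instead of
# exhausting the host's 94 GB of RAM.
resource.setrlimit(resource.RLIMIT_AS, (limit_memory_hard, limit_memory_hard))

So a very large report can hit these limits even though the machine itself has plenty of memory.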
Is it possible to expand the locked memory limit on Google Colab notebooks? It runs on an Ubuntu 18.04 VM.
I'm running
ulimit -l unlimited
but I receive this in response:
ulimit: max locked memory: cannot modify limit: Operation not permitted
This is what ulimit -a returns:
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 51915
max locked memory (kbytes, -l) 16384
max memory size (kbytes, -m) unlimited
open files (-n) 1048576
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) unlimited
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
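The same restriction is reproducible from Python: a process may raise its soft limit up to the hard limit without privileges, but raising the hard limit itself requires root (CAP_SYS_RESOURCE), which is exactly what "Operation not permitted" means here. A minimal sketch:

import resource

# Current soft/hard limits for locked memory, in bytes.
soft, hard = resource.getrlimit(resource.RLIMIT_MEMLOCK)
print(soft, hard)  # the 16384 KB above shows up here as 16777216 bytes

# Allowed without privileges: raise the soft limit up to the hard limit.
resource.setrlimit(resource.RLIMIT_MEMLOCK, (hard, hard))

# Not allowed without CAP_SYS_RESOURCE: raising the hard limit itself.
try:
    resource.setrlimit(resource.RLIMIT_MEMLOCK, (hard, resource.RLIM_INFINITY))
except (ValueError, PermissionError) as exc:
    print("cannot raise hard limit:", exc)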
Yes, by running this:
i = 0
while True:
    i += 1
After 20 seconds, Google will make your GPU virtual memory bigger.
I have a multi-threaded Java application that spawns as many threads as there are reports to generate at a given moment. At the end of the process, I generate an Excel file with Apache POI (3.15) via WorkbookFactory.create(file), where file is an empty template I use to create a brand-new Excel file.
With a particularly intensive report (it takes hours to generate), when the code reaches this point, it throws this exception:
Caused by: java.nio.channels.ClosedByInterruptException
at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:163)
at org.apache.poi.util.IOUtils.readFully(IOUtils.java:164)
at org.apache.poi.poifs.filesystem.NPOIFSFileSystem.<init>(NPOIFSFileSystem.java:229)
at org.apache.poi.poifs.filesystem.NPOIFSFileSystem.<init>(NPOIFSFileSystem.java:168)
at org.apache.poi.ss.usermodel.WorkbookFactory.create(WorkbookFactory.java:250)
at org.apache.poi.ss.usermodel.WorkbookFactory.create(WorkbookFactory.java:222)
at org.apache.poi.ss.usermodel.WorkbookFactory.create(WorkbookFactory.java:201)
at it.habble.report.designers.InvoiceCheckDesigner.<init>(InvoiceCheckDesigner.java:87)
I've read somewhere that it could be related to the limits.conf file. Do you have any advice on how to investigate this? These are the current values:
[user#localhost ~]$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 191942
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 8192
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 2048
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
I am running a Spark application and I always get an out-of-memory exception:
Exception in thread "main" java.lang.OutOfMemoryError: unable to create new native thread
I run my program with master local[5] on a Linux cluster node, but it still gives me this error. Can someone point out how to rectify this in my Spark application?
This looks like a problem with the ulimits configured on your machine. Run the ulimit -a command and you will see a result like the one below:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 63604
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 10240
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 63604
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Check the configured values for open files (-n) and max user processes (-u); they should be high.
You can configure them using the commands below:
ulimit -n 10240
ulimit -u 63604
Once you are done configuring the ulimits, you can start your application to see the effect.
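One caveat worth adding: ulimit only affects the current shell and the processes started from it. To make the values permanent for the account that runs Spark, the usual place is /etc/security/limits.conf; the entries below assume a hypothetical user named spark:

spark soft nofile 10240
spark hard nofile 10240
spark soft nproc 63604
spark hard nproc 63604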
I am trying to get guitarix and jack running on the Raspberry Pi 2 (+ Cirrus audio card) with Raspbian.
When starting jack via qjackctl, I get the errors
Cannot lock down 82278944 byte memory area (Cannot allocate memory)
Cannot use real-time scheduling (RR/10)(1: Operation not permitted)
It seems that changes to /etc/security/limits.conf do not apply, but changes to /etc/security/limits.d/audio.conf do.
I tried setting the memory lock size for the user and group:
#audio - rtprio 90 # maximum realtime priority
#audio - memlock unlimited # maximum locked-in-memory address space (KB)
#audio - nice -10
pi - rtprio 90
pi - memlock unlimited
pi - nice -10
From an SSH session I get a satisfactory result:
pi#raspberrypi ~ $ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 30
file size (blocks, -f) unlimited
pending signals (-i) 7349
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 90
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 7349
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
But from the desktop terminal that I access via VNC, I get:
pi#raspberrypi ~ $ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 7349
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 7349
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
How can the same user have different settings, and how do I get real-time scheduling and memory locking working on the desktop?
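A quick way to compare the two environments, independent of the shell built-in, is to read the kernel's view of the limits from Python (a sketch; the constants are Linux-specific):

import resource

# Run this once from the SSH session and once from the VNC desktop
# to see which limits differ between the two login paths.
for name in ("RLIMIT_MEMLOCK", "RLIMIT_RTPRIO", "RLIMIT_NICE"):
    limit = getattr(resource, name)
    print(name, resource.getrlimit(limit))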
It seems to have been a problem with the PAM configuration, as stated here. Uncommenting the following line in /etc/pam.d/su did the trick:
# session required pam_limits.so
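After removing the leading #, the line reads:
session required pam_limits.so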