Error with threads MongoDB - multithreading

I have this error:
2018-02-08T15:05:25.267-0600 I - [thread2] pthread_create failed: Resource temporarily unavailable
2018-02-08T15:05:25.267-0600 I - [thread2] failed to create service entry worker thread for 10.20.9.217:18465
2018-02-08T15:05:25.272-0600 F - [conn27232] std::exception::what(): Resource temporarily unavailable
Actual exception type: std::system_error
and this is my mongod.service:
Group=mongodb
ExecStart=/usr/bin/mongod --config /etc/mongod.conf
# file size
LimitFSIZE=infinity
# cpu time
LimitCPU=infinity
# virtual memory size
LimitAS=infinity
# open files
LimitNOFILE=64000
# processes/threads
LimitNPROC=64000
# total threads (user+kernel)
TasksMax=infinity
TasksAccounting=false
# Recommended limits for mongod as specified in
# http://docs.mongodb.org/manual/reference/ulimit/#recommended-settings
[Install]
WantedBy=multi-user.target
Can the "TasksMax" configuration be responsible?
Did any of you have this error before?
I have a router (mongos) and a primary (mongod) on the same server.
Thanks!
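pthread_create returning "Resource temporarily unavailable" usually means a thread or process limit was hit. A minimal diagnostic sketch, assuming the unit is named mongod and a systemd version that enforces TasksMax (each thread counts as a task):
# effective limits of the running mongod process
grep -i -e processes -e 'open files' /proc/$(pidof mongod)/limits
# what systemd is actually enforcing for the unit
systemctl show mongod --property=TasksMax,TasksCurrent,LimitNPROC,LimitNOFILE
If TasksCurrent sits at TasksMax when the error fires, TasksMax is the culprit; also remember that edits to the unit file only take effect after systemctl daemon-reload and a service restart.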

Related

Elasticsearch showing received plaintext http traffic on an https channel

I am trying to learn about services in Linux, and I installed Elasticsearch, but it does not seem to work after running "sudo service elasticsearch restart". Browsing to "http://localhost:9200/" just shows "the connection was reset", so it seems the connection is blocked. I wonder whether the problem is with SSL. When I try HTTPS instead, it gives:
{"error":{"root_cause":[{"type":"security_exception","reason":"unable to authenticate user [] for REST request [/]","header":{"WWW-Authenticate":["Basic realm=\"security\" charset=\"UTF-8\"","Bearer realm=\"security\"","ApiKey"]}}],"type":"security_exception","reason":"unable to authenticate user [] for REST request [/]","header":{"WWW-Authenticate":["Basic realm=\"security\" charset=\"UTF-8\"","Bearer realm=\"security\"","ApiKey"]}},"status":401}
In addition, when I run curl "http://google.com:443", it shows:
curl: (52) Empty reply from server
elastic.log:
[2022-12-01T02:51:22,944][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [alex-VirtualBox] received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=/127.0.0.1:9200, remoteAddress=/127.0.0.1:54614}
elasticsearch.yml:
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: localhost
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
When typing systemctl status elasticsearch.service, it seems to work fine:
systemctl status elasticsearch.service
● elasticsearch.service - Elasticsearch
Loaded: loaded (/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabl>
Active: active (running) since Thu 2022-12-01 02:49:56 HKT; 10s ago
Docs: https://www.elastic.co
Main PID: 11238 (java)
Tasks: 79 (limit: 1783)
Memory: 780.9M
CPU: 20.908s
CGroup: /system.slice/elasticsearch.service
├─11238 /usr/share/elasticsearch/jdk/bin/java -Xms4m -Xmx64m -XX:+UseSerialGC -D>
├─11297 /usr/share/elasticsearch/jdk/bin/java -Des.networkaddress.cache.ttl=60 ->
└─11318 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/con>
System log (seems not very useful):
Dec 1 02:54:54 alex-VirtualBox kernel: [12786.591861] [UFW BLOCK] IN=enp0s3 OUT= MAC=08:00:27:55:db:e2:52:54:00:12:35:02:08:00 SRC=10.0.2.2 DST=10.0.2.15 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=5926 PROTO=TCP SPT=443 DPT=40826 WINDOW=65535 RES=0x00 ACK RST URGP=0
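The elastic.log line is the telling one: the HTTP layer has TLS enabled (the default since Elasticsearch 8), so plain http:// requests are dropped and the browser sees a reset connection. A minimal sketch of a correct request, assuming the default package-install certificate path and the built-in elastic user:
# query over HTTPS, trusting the CA certificate the installer generated
curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic https://localhost:9200
If that prints the cluster info after prompting for the elastic password, the service itself is healthy and only the scheme and credentials were missing. The 401 you got over HTTPS points the same way: the request reached Elasticsearch but wasn't authenticated.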

Sidekiq starting successfully, but systemd restarts every ~1 minute anyway

Rails: 6.0.3
Sidekiq: 6.1.2
Ruby 2.7.2
Running on AWS Amazon Linux 2
I'm running a fairly simple Sidekiq configuration in production, using the boilerplate systemd/sidekiq.service file from the examples directory in the Sidekiq repo.
I noticed that my workers cannot run long jobs because they are killed every minute or so. I was able to track down what's happening: systemd is restarting Sidekiq even though it started successfully. It appears systemd never receives the message that the service started, so it kills the process.
Here are the logs:
sidekiq: 2021-06-01T23:30:56.510Z pid=24939 tid=gir INFO: Shutting down
sidekiq: 2021-06-01T23:30:56.511Z pid=24939 tid=4jxb INFO: Scheduler exiting...
systemd: Failed to start sidekiq.
systemd: Unit sidekiq.service entered failed state.
systemd: sidekiq.service failed.
sidekiq: 2021-06-01T23:30:56.513Z pid=24939 tid=gir INFO: Terminating quiet workers
sidekiq: 2021-06-01T23:30:56.513Z pid=24939 tid=4jvn INFO: Scheduler exiting...
sidekiq: 2021-06-01T23:30:57.015Z pid=24939 tid=gir INFO: Pausing to allow workers to finish...
sidekiq: 2021-06-01T23:30:57.516Z pid=24939 tid=gir INFO: Bye!
systemd: sidekiq.service holdoff time over, scheduling restart.
systemd: Starting sidekiq...
sidekiq: 2021-06-01T23:30:58.991Z pid=32046 tid=fs6 INFO: Enabling systemd notification integration
sidekiq: 2021-06-01T23:31:04.475Z pid=32046 tid=fs6 INFO: Booting Sidekiq 6.1.2 with redis options {:url=>"redis://******"}
sidekiq: 2021-06-01T23:31:08.869Z pid=32046 tid=fs6 INFO: Running in ruby 2.7.2p137 (2020-10-01 revision 5445e04352) [x86_64-linux]
sidekiq: 2021-06-01T23:31:08.870Z pid=32046 tid=fs6 INFO: See LICENSE and the LGPL-3.0 for licensing details.
systemd: sidekiq.service: Got notification message from PID 32046, but reception only permitted for main PID 31981
Following these messages, the sidekiq worker will successfully perform the jobs from the queue for about 1 minute before it's restarted again. This cycle continues forever.
I've tried modifying the sidekiq.service file a number of different ways, but nothing seems to do the trick. In particular, this line from the logs seems to indicate that the "started successfully" notification is being sent from a process ID systemd doesn't expect: systemd: sidekiq.service: Got notification message from PID 32046, but reception only permitted for main PID 31981
Any ideas on how I can ensure that systemd accurately knows when a job succeeds/fails to start?
Here is my current systemd/sidekiq.service file:
#
# This file tells systemd how to run Sidekiq as a 24/7 long-running daemon.
#
# Customize this file based on your bundler location, app directory, etc.
# Customize and copy this into /usr/lib/systemd/system (CentOS) or /lib/systemd/system (Ubuntu).
# Then run:
# - systemctl enable sidekiq
# - systemctl {start,stop,restart} sidekiq
#
# This file corresponds to a single Sidekiq process. Add multiple copies
# to run multiple processes (sidekiq-1, sidekiq-2, etc).
#
# Use `journalctl -u sidekiq -rn 100` to view the last 100 lines of log output.
#
[Unit]
Description=sidekiq
# start us only once the network and logging subsystems are available,
# consider adding redis-server.service if Redis is local and systemd-managed.
After=syslog.target network.target
# See these pages for lots of options:
#
# https://www.freedesktop.org/software/systemd/man/systemd.service.html
# https://www.freedesktop.org/software/systemd/man/systemd.exec.html
#
# THOSE PAGES ARE CRITICAL FOR ANY LINUX DEVOPS WORK; read them multiple
# times! systemd is a critical tool for all developers to know and understand.
#
[Service]
#
# !!!! !!!! !!!!
#
# As of v6.0.6, Sidekiq automatically supports systemd's `Type=notify` and watchdog service
# monitoring. If you are using an earlier version of Sidekiq, change this to `Type=simple`
# and remove the `WatchdogSec` line.
#
# !!!! !!!! !!!!
#
Type=simple
# If your Sidekiq process locks up, systemd's watchdog will restart it within seconds.
#WatchdogSec=10
EnvironmentFile=/opt/elasticbeanstalk/deployment/custom_env_var
WorkingDirectory=/var/app/current
# If you use rbenv:
# ExecStart=/bin/bash -lc 'exec /home/deploy/.rbenv/shims/bundle exec sidekiq -e production'
# If you use the system's ruby:
# ExecStart=/usr/local/bin/bundle exec sidekiq -e production
# If you use rvm in production without gemset and your ruby version is 2.6.5
# ExecStart=/home/deploy/.rvm/gems/ruby-2.6.5/wrappers/bundle exec sidekiq -e production
# If you use rvm in production with gemset and your ruby version is 2.6.5
ExecStart=/bin/bash -lc 'cd /var/app/current; bundle exec sidekiq -e production -r /var/app/current -C /var/app/current/config/sidekiq.yml'
# Use `systemctl kill -s TSTP sidekiq` to quiet the Sidekiq process
# !!! Change this to your deploy user account !!!
User=root
Group=root
UMask=0002
# Greatly reduce Ruby memory fragmentation and heap usage
# https://www.mikeperham.com/2018/04/25/taming-rails-memory-bloat/
Environment=MALLOC_ARENA_MAX=2
# if we crash, restart
RestartSec=1
Restart=on-failure
# output goes to /var/log/syslog (Ubuntu) or /var/log/messages (CentOS)
StandardOutput=syslog
StandardError=syslog
# This will default to "bundler" if we don't specify it
SyslogIdentifier=sidekiq
[Install]
WantedBy=multi-user.target
Change ExecStart to:
ExecStart=/direct/path/to/bundle exec sidekiq -e production
Everything else in that line appears superfluous.
Maybe this works in your case:
Type=notify
NotifyAccess=all # or "exec"
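Putting the two suggestions together: the "reception only permitted for main PID" message appears because ExecStart wraps Sidekiq in /bin/bash -lc, so the shell is the main PID and the sd_notify readiness message arrives from a child process. A sketch of the relevant [Service] lines, keeping the placeholder bundle path from the answer above:
[Service]
# Sidekiq 6.0.6+ sends systemd readiness notifications, so wait for them
Type=notify
# accept the notification even if it arrives from a non-main PID
NotifyAccess=all
WorkingDirectory=/var/app/current
# exec Sidekiq directly (no bash -lc wrapper) so it becomes the main PID
ExecStart=/direct/path/to/bundle exec sidekiq -e production -C config/sidekiq.yml
With a direct ExecStart the notifying PID matches the main PID, systemd marks the unit as started, and the one-minute restart loop stops.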

ORA-01034: ORACLE not available ORA-27101: shared memory realm does not exist Linux-x86_64 Error: 2: No such file or directory

I am running Oracle 11g on a Linux server, and one of the database issues below occurs suddenly (every 2 or 3 weeks):
Sometimes:
ORA-01034: ORACLE not available ORA-27102: out of memory Linux-x86_64 Error: 12: Cannot allocate memory Additional information: 1 Additional information: 163844 Additional information: 8
And last time:
ORA-01034: ORACLE not available ORA-27101: shared memory realm does not exist Linux-x86_64 Error: 2: No such file or directory
When I tried to start up the database after setting the SID, I got the error below:
SQL> startup
ORA-00845: MEMORY_TARGET not supported on this system
I rebooted the server, and then everything was OK.
My page size: 4096
kernel.shmall = 4294967296
How can I prevent these issues from happening again? Should I update anything in the Oracle memory settings?
Make sure your /dev/shm allocation is greater than what you have set for MEMORY_MAX_TARGET.
Example fix for a memory allocation of 4 GB:
mount -o remount,size=4096m /dev/shm
Entry for /etc/fstab file to make the change permanent
tmpfs /dev/shm tmpfs size=4096m 0 0
Also see Oracle support: Doc ID 1399209.1 - ORA-00845 - Which value for /dev/shm is needed to startup database without ORA-00845
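To confirm the mismatch before remounting, compare the two sizes (a quick check; the second command assumes SYSDBA access in SQL*Plus):
# current size of the tmpfs backing Oracle's shared memory
df -h /dev/shm
# Oracle's configured target, from SQL*Plus:
# SQL> show parameter memory_max_target
If MEMORY_MAX_TARGET exceeds the /dev/shm size, ORA-00845 on startup is expected.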
See, this is what worked for me. My ORACLE_SID, ORACLE_HOME etc., were just fine.
Restart the listener - lsnrctl start
sqlplus /nolog
connect /as sysdba
startup

Unable to load app 0 (mountpoint='') (callable not found or import error)

I have the below error with nginx uWSGI Flask application on CentOS 7 Linux:
unable to load app 0 (mountpoint='') (callable not found or import error)
I followed a DigitalOcean tutorial, as this is my first time with Linux, after the original Udemy tutorial (which used an earlier CentOS version) didn't seem to work. Following that tutorial, I was able to get a basic "Hello world" Python file to run on uWSGI and nginx. I see several people have had this error, but with seemingly different solutions. I changed permissions recursively on a parent folder above the app, as I thought permissions might be the source of the error:
sudo chmod -R 771 www
and also gave nginx access to my user group, as the DigitalOcean tutorial advised:
sudo usermod -a -G will nginx
chmod 710 /home/will
The uwsgi.ini file is below, where /var/www/html/CON29Application1/Source/app.py is the app I want to run; I'm not sure whether the module line has the correct syntax:
[uwsgi]
module = Source.app
master = true
socket = /var/www/html/CON29Application1/socket.sock
chmod-socket = 777
vacuum = true
processes = 8
threads = 8
harakiri = 15
logto = /var/www/html/CON29Application1/log/%n.log
die-on-term = true
The systemd file is this, at /etc/systemd/system/CON29Application1.service:
[Unit]
Description=uWSGI instance to serve CON29Application1
After=network.target
[Service]
User=will
Group=nginx
WorkingDirectory=/var/www/html/CON29Application1
Environment="PATH=/var/www/html/CON29Application1/venv/bin"
ExecStart=/var/www/html/CON29Application1/venv/bin/uwsgi --ini uwsgi.ini
[Install]
WantedBy=multi-user.target
This is the default.conf file at /etc/nginx/conf.d/default.conf (I blanked the server IP in this post for security reasons):
server {
listen 80;
server_name 188.xxx.xxx.xxx;
location / {
include uwsgi_params;
uwsgi_pass unix:/var/www/html/CON29Application1/socket.sock;
uwsgi_modifier1 30;
}
error_page 404 /404.html;
location = /404.html {
root /usr/share/nginx/html;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
and this is the app.py file:
from flask import Flask, render_template
...
app = Flask(__name__)
app.config.from_object('Source.config')
app.secret_key = "---------------------------------"
...
Sorry, I'm not sure which files are relevant to include, so here is also the full uwsgi.log file. Thanks for any ideas:
*** Starting uWSGI 2.0.15 (64bit) on [Wed Oct 4 13:26:19 2017] ***
compiled with version: 4.8.5 20150623 (Red Hat 4.8.5-16) on 02 October 2017 21:47:33
os: Linux-3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017
nodename: CON29Application1
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 1
current working directory: /var/www/html/CON29Application1
detected binary path: /var/www/html/CON29Application1/venv/bin/uwsgi
your processes number limit is 1792
your memory page size is 4096 bytes
*** WARNING: you have enabled harakiri without post buffering. Slow upload could be rejected on post-unbuffered webservers ***
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to UNIX address /var/www/html/CON29Application1/socket.sock fd 3
Python version: 3.6.2 (default, Sep 27 2017, 16:30:17) [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)]
Python main interpreter initialized at 0x9acbd0
python threads support enabled
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 1304064 bytes (1273 KB) for 64 cores
*** Operational MODE: preforking+threaded ***
unable to load app 0 (mountpoint='') (callable not found or import error)
*** no app loaded. going in full dynamic mode ***
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 24254)
spawned uWSGI worker 1 (pid: 24259, cores: 8)
spawned uWSGI worker 2 (pid: 24260, cores: 8)
spawned uWSGI worker 3 (pid: 24261, cores: 8)
spawned uWSGI worker 4 (pid: 24262, cores: 8)
spawned uWSGI worker 5 (pid: 24263, cores: 8)
spawned uWSGI worker 6 (pid: 24264, cores: 8)
spawned uWSGI worker 7 (pid: 24265, cores: 8)
spawned uWSGI worker 8 (pid: 24266, cores: 8) 
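For what it's worth: with module = Source.app, uWSGI imports the module and then looks for a callable named application by default, while the quoted app.py names its Flask object app; that mismatch produces exactly "callable not found". A minimal sketch of the ini lines that address it, assuming the project root as the working directory (as the log shows):
[uwsgi]
chdir = /var/www/html/CON29Application1
module = Source.app
# Flask's WSGI object is "app", not uWSGI's default "application"
callable = app
Equivalently, module = Source.app:app names the callable inline.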

dockerd: Error running deviceCreate (CreatePool) dm_task_run failed

I'm building some CentOS VMs with VMware, with no internet access, so I've downloaded and made local repositories, including this one.
Then I installed docker-engine.x86_64, and when starting the Docker daemon, I get the following errors:
[root]# dockerd
DEBU[0000] docker group found. gid: 993
...
...
DEBU[0001] Error retrieving the next available loopback: open /dev/loop-control: no such device
ERRO[0001] **There are no more loopback devices available.**
ERRO[0001] [graphdriver] prior storage driver "devicemapper" failed: loopback attach failed
DEBU[0001] Cleaning up old mountid : start.
FATA[0001] Error starting daemon: error initializing graphdriver: loopback attach failed
After manually adding the loop module, which controls loop devices, with this command:
insmod /lib/modules/3.10.0-327.36.2.el7.x86_64/kernel/drivers/block/loop.ko
The error changes to:
[graphdriver] prior storage driver "devicemapper" failed: devicemapper: Error running deviceCreate (CreatePool) dm_task_run failed
I've read that it could be because I don't have enough disk space, but I don't think that's it. Any ideas?
[root]# df -k .
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/centos-root 51887356 2436256 49451100 5% /
I got the "There are no more loopback devices available" error, which stopped dockerd from running.
I fixed it by ensuring the storage driver was 'overlay':
# /usr/bin/dockerd -D --storage-driver=overlay
This was on Debian Jessie and docker running as a systemd service/unit.
To make it permanent, I created a systemd drop-in:
$ cat /etc/systemd/system/docker.service.d/docker.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --storage-driver=overlay
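After creating the drop-in (the empty ExecStart= line is required to clear the packaged command before replacing it), reload systemd and restart the service so it takes effect:
systemctl daemon-reload
systemctl restart docker
docker info should then report Storage Driver: overlay.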
