My server is not listening for file changes - linux

I use WebStorm for working with React. At some point the IDE just stopped watching for file changes, and now I have to reload my server to see the changes. I have no idea what I did.
I found this (https://blog.jetbrains.com/idea/2010/04/native-file-system-watcher-for-linux/) page, but it wasn't helpful for me. My /etc/sysctl.conf now looks like this:
# Uncomment the next line to enable TCP/IP SYN cookies
# See http://lwn.net/Articles/277146/
# Note: This may impact IPv6 TCP sessions too
#net.ipv4.tcp_syncookies=1
# Uncomment the next line to enable packet forwarding for IPv4
#net.ipv4.ip_forward=1
# Uncomment the next line to enable packet forwarding for IPv6
# Enabling this option disables Stateless Address Autoconfiguration
# based on Router Advertisements for this host
#net.ipv6.conf.all.forwarding=1
###################################################################
# Additional settings - these settings can improve the network
# security of the host and prevent against some network attacks
# including spoofing attacks and man in the middle attacks through
# redirection. Some network environments, however, require that these
# settings are disabled so review and enable them as needed.
#
# Do not accept ICMP redirects (prevent MITM attacks)
#net.ipv4.conf.all.accept_redirects = 0
#net.ipv6.conf.all.accept_redirects = 0
# _or_
# Accept ICMP redirects only for gateways listed in our default
# gateway list (enabled by default)
# net.ipv4.conf.all.secure_redirects = 1
#
# Do not send ICMP redirects (we are not a router)
#net.ipv4.conf.all.send_redirects = 0
#
# Do not accept IP source route packets (we are not a router)
#net.ipv4.conf.all.accept_source_route = 0
#net.ipv6.conf.all.accept_source_route = 0
#
# Log Martian Packets
#net.ipv4.conf.all.log_martians = 1
#
###################################################################
# Magic system request Key
# 0=disable, 1=enable all
# Debian kernels have this set to 0 (disable the key)
# See https://www.kernel.org/doc/Documentation/sysrq.txt
# for what other values do
#kernel.sysrq=1
###################################################################
# Protected links
#
# Protects against creating or following links under certain conditions
# Debian kernels have both set to 1 (restricted)
# See https://www.kernel.org/doc/Documentation/sysctl/fs.txt
#fs.protected_hardlinks=0
#fs.protected_symlinks=0
#fs.inotify.max_user_watches=524288

This usually happens when the project is large and contains many files.
I faced a similar issue and solved it by raising the inotify watch limit.
Just uncomment the line fs.inotify.max_user_watches=524288 in /etc/sysctl.conf and save the file. To load the new setting, run sudo sysctl -p in a terminal.
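To see what you currently have and to apply the change, the following commands are a minimal sketch (the 524288 value is the one suggested above; the default limit on many distributions is much lower, often 8192):

```shell
# Show the current inotify watch limit
cat /proc/sys/fs/inotify/max_user_watches

# Raise it immediately, without editing any file (lost on reboot)
sudo sysctl fs.inotify.max_user_watches=524288

# After uncommenting the line in /etc/sysctl.conf, make it permanent:
sudo sysctl -p
```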

Related

Error 503, HAProxy issue translating services across additional proxies, Docker, and LXD

I believe the problem is most likely in my HAProxy config file, but I am unsure. I have previously used this same config to reach other services in containers, services behind other load balancers, and Apache systems, but now I am unable to do so.
I do not believe the other services are to blame, as they are native snap installs.
The HAProxy status URI shows the backends as L7STS/502, and attempting to load the pages on the port returns a 503.
Before this, a page was loading, but it was Nextcloud, so I went into the GitLab config.rb file, changed the default port for Nginx from 80 to 8800, ran the gitlab-ctl reconfigure command to rebuild GitLab onto the other port, and made the corresponding change inside HAProxy as well.
Other services that are not behind a proxy of any kind load just fine, while Docker container services do not load either, showing the same 503 error, which leads me further to believe it is my HAProxy config file.
Here is my HAProxy config file:
global
log 127.0.0.1 syslog
maxconn 1000
user haproxy
group haproxy
daemon
defaults
log global
mode http
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
option contstats
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout check 10s
errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
errorfile 503 /etc/haproxy/errors/503.http
errorfile 504 /etc/haproxy/errors/504.http
###########################################
#
# HAProxy Stats page
#
###########################################
listen stats
bind *:9090
mode http
maxconn 10
stats enable
stats hide-version
stats realm Haproxy\ Statistics
stats uri /
stats auth -----:-----
###########################################
#
# Front end for all
#
###########################################
frontend ALL
bind *:80
mode http
# Define path for lets encrypt
acl is_letsencrypt path_beg -i /.well-known/acme-challenge/
use_backend letsencrypt if is_letsencrypt
# Define hosts
acl host_horizon hdr(host) -i horizon.eduarmor.com
acl host_eduarmor hdr(host) -i www.eduarmor.com
acl host_nextcloud hdr(host) -i nextcloud.eduarmor.com
acl host_git hdr(host) -i git.eduarmor.com
acl host_minecraft hdr(host) -i mine.eduarmor.com
acl host_sugar hdr(host) -i sugar.eduarmor.com
acl host_maas hdr(host) -i maas.eduarmor.com
acl host_rocketchat hdr(host) -i rocketchat.eduarmor.com
acl host_hive hdr(host) -i hive.eduarmor.com
# Direct hosts to backend
use_backend horizon if host_horizon
use_backend eduarmor if host_eduarmor
use_backend nextcloud if host_nextcloud
use_backend git if host_git
use_backend minecraft if host_minecraft
use_backend sugar if host_sugar
use_backend maas if host_maas
use_backend rocketchat if host_rocketchat
use_backend hive if host_hive
###########################################
#
# Back end letsencrypt
#
###########################################
backend letsencrypt
server letsencrypt 127.0.0.1:8888
###########################################
#
# Back end for Horizon
#
###########################################
backend horizon
balance roundrobin
# option httpchk GET /check
option httpchk GET /
# http-check expect rstring ^UP$
default-server inter 3s fall 3 rise 2
server server1 10.0.0.30:80 check
# server server2 0.0.0.0:80 check
###########################################
#
# Back end for EduArmor
#
###########################################
backend eduarmor
balance roundrobin
# option httpchk GET /check
option httpchk GET /
# http-check expect rstring ^UP$
default-server inter 3s fall 3 rise 2
server server1 10.0.0.59:80 check
# server server2 0.0.0.0:80 check
##########################################
#
# Back end for Nextcloud
#
##########################################
backend nextcloud
balance roundrobin
# option httpchk GET /check
option httpchk GET /
# http-check expect rstring ^UP$
default-server inter 3s fall 3 rise 2
server server1 10.0.0.101:80 check
##########################################
#
# Back end, Gitlab
#
##########################################
backend git
balance roundrobin
# option httpchk GET /check
option httpchk GET /
# http-check expect rstring ^UP$
default-server inter 3s fall 3 rise 2
server server1 10.0.0.101:8800 check
##########################################
#
# Back end, Minecraft
#
##########################################
backend minecraft
balance roundrobin
# option httpchk GET /check
option httpchk GET /
# http-check expect rstring ^UP$
default-server inter 3s fall 3 rise 2
server server1 10.0.0.101:25565 check
##########################################
#
# Back end, PHPSugar
#
##########################################
backend sugar
balance roundrobin
# option httpchk GET /check
option httpchk GET /
# http-check expect rstring ^UP$
default-server inter 3s fall 3 rise 2
server server1 10.0.0.101:80 check
##########################################
#
# Back End, MAAS
#
##########################################
backend maas
balance roundrobin
# option httpchk GET /check
option httpchk GET /
# http-check expect rstring ^UP$
default-server inter 3s fall 3 rise 2
server server1 10.0.0.100:5240 check
##########################################
#
# Back end for Rocketchat
#
##########################################
backend rocketchat
balance roundrobin
# option httpchk GET /check
option httpchk GET /
# http-check expect rstring ^UP$
default-server inter 3s fall 3 rise 2
server server1 10.0.0.101:3000 check
server server2 10.0.0.102:3000 check
##########################################
#
# Back end for The Hive
#
##########################################
backend hive
balance roundrobin
# option httpchk GET /check
option httpchk GET /
# http-check expect rstring ^UP$
default-server inter 3s fall 3 rise 2
server server1 10.0.0.101:9000 check
server server2 10.0.0.102:9000 check
I would greatly appreciate any advice or insight into solving this problem, as well as any additional resources you may have on best practices, especially for configuring SSL/TLS.
The solution was to comment out the option httpchk GET / line, specifically for the hive backend, and to move from docker-compose to Docker Swarm, which also substantially increased my overall understanding of how Docker works. The combination of issues from docker-compose and the GET / health check was causing HAProxy to mark the services as down and return a 503 error, which meant it would never serve them.
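If the / health check itself is the problem (for instance, the application answers / with a redirect or a 404), an alternative to disabling the check is telling HAProxy which responses count as healthy. A sketch for the hive backend from the config above; http-check expect is standard HAProxy syntax, but the accepted status range here is an assumption about what the application actually returns:

```
backend hive
balance roundrobin
option httpchk GET /
# accept any 2xx/3xx answer instead of requiring a 200
http-check expect rstatus (2|3)[0-9][0-9]
default-server inter 3s fall 3 rise 2
server server1 10.0.0.101:9000 check
server server2 10.0.0.102:9000 check
```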
I would like to thank the anonymous person who volunteered their time to teach me Docker Swarm and CI/CD processes tonight. I am much better for it than I would have been by being spoon-fed the answer, and I thank you so much for it; so do a lot of homeless veterans.

How to use rsyslog to ship non-syslog files to a remote server?

I've been following this rsyslog/logstash article to try to ship my applications' log files to a remote server via rsyslog. From that page, here are the steps I've taken. Note that the firewall and SELinux are off on both the client (VM sending logs) and the server (VM receiving logs). I have proven via the netcat utility that I can send packets between client and server.
On my client side, I've configured my /etc/rsyslog.conf file like so:
# Load the imfile module
module(load="imfile" PollingInterval="10")
# Include all config files in /etc/rsyslog.d/
$IncludeConfig /etc/rsyslog.d/*.conf
# Debugging
$DebugFile /var/log/rsyslog-debug.log
$DebugLevel 2
# General configuration
$RepeatedMsgReduction off
$WorkDirectory /var/spool/rsyslog
$ActionQueueFileName mainqueue
$ActionQueueMaxDiskSpace 500M
$ActionQueueSaveOnShutdown on
$ActionQueueType LinkedList
$ActionResumeRetryCount -1
# Template for non-JSON logs; just sends the message wholesale with extra
# furniture.
template(name="textLogTemplate"
type="list") {
constant(value="{ ")
constant(value="\"type\":\"")
property(name="programname")
constant(value="\", ")
constant(value="\"host\":\"")
property(name="hostname")
constant(value="\", ")
constant(value="\"timestamp\":\"")
property(name="timestamp" dateFormat="rfc3339")
constant(value="\", ")
constant(value="\"#version\":\"1\", ")
constant(value="\"role\":\"app-server\", ")
constant(value="\"sourcefile\":\"")
property(name="$!metadata!filename")
constant(value="\", ")
constant(value="\"message\":\"")
property(name="rawmsg" format="json")
constant(value="\"}\n")
}
On client side, I have /etc/rsyslog.d/01-trm-error-logs.conf
input(type="imfile"
File="/usr/share/tomcat/dist/logs/trm-error.log"
Tag="trm-error-logs:"
readMode="2"
escapeLF="on"
)
if $programname == 'trm-error-logs:' then {
action(
type="omfwd"
Target="my.remoteserver.com"
Port="514"
Protocol="tcp"
template="textLogTemplate"
)
stop
}
On server side, I have in my /etc/rsyslog.conf
# Provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514
I've restarted the rsyslog service on both sides.
However, I don't see logs being shipped out. I do see the contents of /usr/share/tomcat/dist/logs/trm-error.log in /var/log/messages, though I do NOT want them to appear there. I can also see the contents of /usr/share/tomcat/dist/logs/trm-error.log being read, per the /var/log/rsyslog-debug.log file I generate.
I run the following on the client machine and see nothing:
tcpdump -i eth0 -n host my.remoteserver.com -P out -vvv
This turned out to be a firewall issue on the server. I had stopped the firewall, but did NOT disable it, so when I restarted the server it was back on.
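On a RHEL-family box (implied by the SELinux mention), the fix is to make the firewall change persistent rather than just stopping the service. A sketch, assuming firewalld under systemd; adjust if you manage plain iptables instead:

```shell
# See whether the firewall is running now and whether it starts at boot
systemctl is-active firewalld
systemctl is-enabled firewalld

# Either disable it entirely (stop now AND on future boots) ...
sudo systemctl stop firewalld
sudo systemctl disable firewalld

# ... or, better, permanently open only the rsyslog port
sudo firewall-cmd --permanent --add-port=514/tcp
sudo firewall-cmd --reload
```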

Run Node.js & Meteor behind SOCKS proxy

I am connecting to the internet in a country where many sites are blocked. My method of connection is:
ssh -D 3030 root@46.101.111.333
Then I configured it in the Network Preferences.
This way I am able to connect anywhere using my browser. No problem. But when I want to install npm modules or Meteor.js packages from the Terminal, I get an error.
in NPM:
errno: 'ECONNREFUSED' If you are behind a proxy, please make sure that the 'proxy' config is set properly. See: 'npm help config'
in METEOR:
Unable to update package catalog (are you offline?)
If you are using Meteor behind a proxy, set HTTP_PROXY and HTTPS_PROXY
environment variables or see this page for more details:
https://github.com/meteor/meteor/wiki/Using-Meteor-behind-a-proxy
I followed both the Meteor and npm documentation.
Meteor
export HTTP_PROXY=http://root:password@46.101.111.333:3030
export HTTPS_PROXY=http://root:password@46.101.111.333:3030
meteor update
NPM
npm config set proxy http://root:password@46.101.111.333:3030
npm config set https-proxy http://root:password@46.101.111.333:3030
and some others...
Please help: what else do I need to do? Is it an ssh-specific or proxy-specific issue? Are my settings correct?
Suppose your SOCKS5 proxy is 127.0.0.1:3030...
Install proxychains-ng via Homebrew.
Create a ~/.proxychains/proxychains.conf.
For example, you may need to add one line:
socks5 127.0.0.1 3030
following [ProxyList]:
# proxychains.conf VER 4
#
# HTTP, SOCKS4, SOCKS5 tunneling proxifier with DNS.
#
# The option below identifies how the ProxyList is treated.
# only one option should be uncommented at time,
# otherwise the last appearing option will be accepted
#
#dynamic_chain
#
# Dynamic - Each connection will be done via chained proxies
# all proxies chained in the order as they appear in the list
# at least one proxy must be online to play in chain
# (dead proxies are skipped)
# otherwise EINTR is returned to the app
#
strict_chain
#
# Strict - Each connection will be done via chained proxies
# all proxies chained in the order as they appear in the list
# all proxies must be online to play in chain
# otherwise EINTR is returned to the app
#
#random_chain
#
# Random - Each connection will be done via random proxy
# (or proxy chain, see chain_len) from the list.
# this option is good to test your IDS :)
# Make sense only if random_chain
#chain_len = 2
# Quiet mode (no output from library)
#quiet_mode
# Proxy DNS requests - no leak for DNS data
proxy_dns
# set the class A subnet number to use for the internal remote DNS mapping
# we use the reserved 224.x.x.x range by default,
# if the proxified app does a DNS request, we will return an IP from that range.
# on further accesses to this ip we will send the saved DNS name to the proxy.
# in case some control-freak app checks the returned ip, and denies to
# connect, you can use another subnet, e.g. 10.x.x.x or 127.x.x.x.
# of course you should make sure that the proxified app does not need
# *real* access to this subnet.
# i.e. dont use the same subnet then in the localnet section
#remote_dns_subnet 127
#remote_dns_subnet 10
remote_dns_subnet 224
# Some timeouts in milliseconds
tcp_read_time_out 15000
tcp_connect_time_out 8000
# By default enable localnet for loopback address ranges
# RFC5735 Loopback address range
localnet 127.0.0.0/255.0.0.0
# RFC1918 Private Address Ranges
# localnet 10.0.0.0/255.0.0.0
# localnet 172.16.0.0/255.240.0.0
# localnet 192.168.0.0/255.255.0.0
# Example for localnet exclusion
## Exclude connections to 192.168.1.0/24 with port 80
# localnet 192.168.1.0:80/255.255.255.0
## Exclude connections to 192.168.100.0/24
# localnet 192.168.100.0/255.255.255.0
## Exclude connections to ANYwhere with port 80
# localnet 0.0.0.0:80/0.0.0.0
# ProxyList format
# type host port [user pass]
# (values separated by 'tab' or 'blank')
#
#
# Examples:
#
# socks5 192.168.67.78 1080 lamer secret
# http 192.168.89.3 8080 justu hidden
# socks4 192.168.1.49 1080
# http 192.168.39.93 8080
#
#
# proxy types: http, socks4, socks5
# ( auth types supported: "basic"-http "user/pass"-socks )
#
[ProxyList]
# add proxy here ...
# meanwile
# defaults set to "tor"
socks5 127.0.0.1 3030
Then run meteor with proxychains4 prepended, e.g.:
proxychains4 meteor add angularui:angular-ui-router

haproxy bind command to include cipher in haproxy.cfg file

I am configuring the haproxy.cfg file for HAProxy. I need to add a cipher suite to this file, and for that I am using the bind directive. My bind line is as below:
bind 0.0.0.0:443 ssl crt /etc/ssl/certs/private1.pem nosslv3
prefer-server-ciphers ciphers
TLSv1+HIGH:!SSLv2:!aNULL:!eNULL:!3DES:@STRENGTH
With the plain bind *:443 it works fine; once I add the other arguments, it throws an error.
After including this line in the haproxy.cfg file and restarting the haproxy service, I get the error:
[ALERT] 164/074924 (31084) : parsing [/etc/haproxy/haproxy.cfg:80] : 'bind' only supports the 'transparent', 'defer-accept', 'name', 'id', 'mss' and 'interface' options.
[ALERT] 164/074924 (31084) : Error(s) found in configuration file : /etc/haproxy/haproxy.cfg
[ALERT] 164/074924 (31084) : Fatal errors found in configuration.
Errors in configuration file, check with haproxy check.
To resolve this issue I tried to install the "libssl-dev" package, but I am not able to install that package either.
Please guide me through this. Also, I need to know: is it necessary to give the pem file entry in bind, or can I include the cipher directly, like this?
bind *:8443 ciphers TLSv1+HIGH:!SSLv2:!aNULL:!eNULL:!3DES:@STRENGTH
Appending my haproxy.cfg file below.
#---------------------------------------------------------------------
# Example configuration for a possible web application. See the
# full configuration options online.
#
# http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
# to have these messages end up in /var/log/haproxy.log you will
# need to:
#
# 1) configure syslog to accept network log events. This is done
# by adding the '-r' option to the SYSLOGD_OPTIONS in
# /etc/sysconfig/syslog
#
# 2) configure local2 events to go to the /var/log/haproxy.log
# file. A line like the following can be added to
# /etc/sysconfig/syslog
#
# local2.* /var/log/haproxy.log
#
log 127.0.0.1 local2
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
# turn on stats unix socket
stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
#---------------------------------------------------------------------
# main frontend which proxys to the backends
frontend inbound
mode http
bind 0.0.0.0:443 ssl crt /etc/ssl/certs/private1.pem nosslv3 prefer-server-ciphers ciphers TLSv1+HIGH:!SSLv2:!aNULL:!eNULL:!3DES:@STRENGTH
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
backend postgresqlcluster1
mode http
balance roundrobin
server postgres1 192.44.9.101:8080 check
You need to be using 1.5-dev19+ (current is 1.5-dev26) to utilize any of the SSL functionality; based on the error and the config excerpt, it looks like you are running 1.4.
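Two quick checks before editing further, assuming the haproxy binary is on your PATH: confirm the installed build actually has SSL support, and validate the config without restarting the service:

```shell
# Show version and build options; a 1.5+ build lists its OpenSSL support here
haproxy -vv

# Parse-check the config file; exits non-zero and prints errors if any
haproxy -c -f /etc/haproxy/haproxy.cfg
```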

Openswan tunnel not working after network restart

I observed some strange behaviour while trying to create an IPsec connection.
I configured IPsec between a Cisco ASA and my Linux box, and it works as expected. But when I restart the network service on my Linux box, or bounce the port on the Cisco side, the tunnel stops working even though the tunnel status is up:
/etc/init.d/ipsec status
/usr/libexec/ipsec/addconn Non-fips mode set in /proc/sys/crypto/fips_enabled
IPsec running - pluto pid: 2684
pluto pid 2684
1 tunnels up
some eroutes exist
When I try to connect to the other side (telnet, ping, ssh), the connection doesn't work.
My /etc/ipsec.conf looks like this:
# /etc/ipsec.conf - Openswan IPsec configuration file
#
# Manual: ipsec.conf.5
#
# Please place your own config files in /etc/ipsec.d/ ending in .conf
version 2.0 # conforms to second version of ipsec.conf specification
# basic configuration
config setup
# Debug-logging controls: "none" for (almost) none, "all" for lots.
# klipsdebug=none
# plutodebug="control parsing"
# For Red Hat Enterprise Linux and Fedora, leave protostack=netkey
protostack=netkey
nat_traversal=yes
virtual_private=
oe=off
# Enable this if you see "failed to find any available worker"
nhelpers=0
#You may put your configuration (.conf) file in the "/etc/ipsec.d/" and uncomment this.
include /etc/ipsec.d/*.conf
And my /etc/ipsec.d/myvpn.conf looks like this:
conn myvpn
authby=secret # Key exchange method
left=server-ip # Public Internet IP address of the
# LEFT VPN device
leftsubnet=server-ip/32 # Subnet protected by the LEFT VPN device
leftnexthop=%defaultroute # correct in many situations
right=asa-ip # Public Internet IP address of
# the RIGHT VPN device
rightsubnet=network/16 # Subnet protected by the RIGHT VPN device
rightnexthop=asa-ip # correct in many situations
auto=start # authorizes and starts this connection
# on booting
auth=esp
esp=aes-sha1
compress=no
When I restart the openswan service, everything starts working again, but I think there should be some logic that does this automatically. Does anyone have an idea what I am missing?
You probably want to enable dead peer detection if available on both sides. Dead peer detection notices when the tunnel isn't actually working anymore and disconnects or resets it.
If it is not available, you can also try turning your session renegotiation time down very low; the tunnel will then create new keys frequently and set up new tunnels to replace the old ones on a regular basis, effectively recreating the tunnel shortly after the session has gone down.
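In Openswan, dead peer detection is configured per connection. A sketch of what this could look like in the conn section of /etc/ipsec.d/myvpn.conf; the option names are standard Openswan settings, while the values are example choices to tune for your link:

```
conn myvpn
# ...existing settings...
dpddelay=30        # seconds between R_U_THERE keepalive probes
dpdtimeout=120     # consider the peer dead after this long with no reply
dpdaction=restart  # tear the tunnel down and renegotiate it
```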
For my own PPP sessions on Linux, I simply have a "service ipsec restart" in /etc/ppp/ip-up.local to restart all tunnels whenever the PPP device comes back online.
YMMV.
I tried DPD, but it did not work.
So, as I just learned from mikebabcock, I added the following line to my /etc/ppp/ip-down:
service ipsec restart
With this workaround, L2TP/IPsec now works like a charm.
I don't like the idea of restarting ipsec every time you lose the connection. Actually, /usr/libexec/ipsec/_updown is run on various IPsec events, and the same script can be run via leftupdown/rightupdown. The problem is that it doesn't perform any actual command when the remote client connects back to your host. To fix this, add doroute replace after up-client) in /usr/libexec/ipsec/_updown.netkey (if you use NETKEY, of course):
# ...skipped...
#
up-client)
# connection to my client subnet coming up
# If you are doing a custom version, firewall commands go here.
doroute replace
#
# ...skipped...
But be aware that this file will be overwritten if you update your packages, so put your copy somewhere else and then add the following to your connection config:
rightupdown="/usr/local/libexec/ipsec/_updown"
leftupdown="/usr/local/libexec/ipsec/_updown"
Now the routes will be restored as soon as the remote connects back to your server.
Also, for me, for some strange reason DPD does not work properly in every situation.
I use this script to check the status every minute. The script runs on the peer (e.g. the firewall):
#!/bin/sh
# Count established IPsec SAs; restart if none are up
C=$(ipsec auto --status | grep -c "established")
if [ "$C" -eq 0 ]
then
echo "Tunnel is down... Restarting"
ipsec restart
else
echo "Tunnel is up... Bye!"
fi
This could also happen because of iptables rules.
Make sure you have allowed UDP port 500 and the ESP protocol towards the remote public IP address.
Example:
iptables -A OUTPUT -p udp -d 1.2.3.4 --dport 500 -j ACCEPT
iptables -A OUTPUT -p esp -d 1.2.3.4 -j ACCEPT
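If either endpoint sits behind NAT, IKE switches to NAT traversal and ESP is encapsulated in UDP port 4500, so that port must pass as well. A sketch following the same pattern as the rules above (1.2.3.4 again stands for the remote public IP):

```shell
# Allow outbound NAT-T traffic to the peer
iptables -A OUTPUT -p udp -d 1.2.3.4 --dport 4500 -j ACCEPT
# Allow the peer's NAT-T traffic back in
iptables -A INPUT -p udp -s 1.2.3.4 --dport 4500 -j ACCEPT
```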
Bye
