Start Varnish with a config file - varnish

Ubuntu 18.04.4 LTS
varnishd -V
varnishd (varnish-6.4.0 revision 13f137934ec1cf14af66baf7896311115ee35598)
Copyright (c) 2006 Verdens Gang AS
Copyright (c) 2006-2020 Varnish Software AS
These are my very first steps studying Varnish, and I've already bitten the dust.
I've prepared a configuration file at /etc/systemd/system/varnish.service
I'm trying to start varnishd with this config:
[Unit]
Description=Varnish HTTP accelerator
Documentation=https://www.varnish-cache.org/docs/4.1/ man:varnishd
[Service]
Type=simple
LimitNOFILE=131072
LimitMEMLOCK=82000
ExecStart=/usr/sbin/varnishd -j unix,user=vcache -F -a :6081 -b :8000 -T localh$
ExecReload=/usr/share/varnish/varnishreload
ProtectSystem=full
ProtectHome=true
PrivateTmp=true
PrivateDevices=true
[Install]
WantedBy=multi-user.target
The content of the file is taken from the book "Getting started with Varnish Cache", but it is for version 4.1.
Documentation: https://varnish-cache.org/docs/6.4/users-guide/run_security.html#cli-interface-authentication
Well, I've prepared the file. I enter the command:
varnishd -S /etc/systemd/system/varnish.service
Error: Neither -b nor -f given. (use -f '' to override)
(-? gives usage)
But this command works fine:
sudo varnishd -a localhost:6081 -b localhost:8000
Could you help me understand:
What the simplest config file should look like.
Where it should be placed.
How to start Varnish with this config.

I'm the author of Getting started with Varnish Cache. Thanks for buying my book.
The varnish.service file is a systemd file. It has nothing to do with Varnish itself, but it's what Ubuntu uses to manage the Varnish service.
The ExecStart command
Here's how I would set the ExecStart command in varnish.service:
/usr/sbin/varnishd -f /etc/varnish/default.vcl -a http=:80,HTTP -a proxy=:8443,PROXY -s malloc,1G -S /etc/varnish/secret -T localhost:6082
What you're not seeing in this command:
-F: the -F flag only makes sense if you're running the varnishd process in the foreground, for example in Docker. Since you're using systemd to run Varnish, you can remove that parameter.
-b: the -b option is used to define your backend location. If you use -b, you can't use -f, and you'll need -f for your VCL configuration.
What the options mean
-f: the location of the VCL file. Refers to /etc/varnish/default.vcl in this case
-a: the listening address of Varnish. In this case it's port 80 for regular HTTP and port 8443 for connections using the PROXY protocol
-s: the size of the cache, in this case 1GB
-S: the location of the secret key file. In this case this is /etc/varnish/secret
-T: the listening address of the CLI. In this case this is localhost on port 6082
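Putting this together, here's a sketch of how the whole varnish.service could look (based on the asker's unit, with the ExecStart above; note Type=forking, since varnishd daemonizes once -F is removed, is an assumption on my part matching the stock packaged unit):
[Unit]
Description=Varnish HTTP accelerator
Documentation=https://www.varnish-cache.org/docs/6.4/ man:varnishd
[Service]
# varnishd forks into the background when -F is not given
Type=forking
LimitNOFILE=131072
LimitMEMLOCK=82000
ExecStart=/usr/sbin/varnishd -f /etc/varnish/default.vcl -a http=:80,HTTP -a proxy=:8443,PROXY -s malloc,1G -S /etc/varnish/secret -T localhost:6082
ExecReload=/usr/share/varnish/varnishreload
ProtectSystem=full
ProtectHome=true
PrivateTmp=true
PrivateDevices=true
[Install]
WantedBy=multi-user.target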
The VCL file
The VCL file contains the location of the backend and defines the caching rules. This file is located at /etc/varnish/default.vcl.
This is the minimum amount of VCL code to get going:
vcl 4.0;

backend default {
    .host = "localhost";
    .port = "8080";
}
This config assumes your webserver is running on the same machine, on port 8080.
You can extend the configuration of Varnish by hooking into the different states of the Varnish finite state machine, as sketched below.
See https://varnish-cache.org/docs/6.0/reference/vcl.html#varnish-configuration-language to learn more about VCL.
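For example, here is a minimal sketch of hooking into the vcl_recv state to bypass the cache for an admin area (the /admin path is hypothetical):
vcl 4.0;

backend default {
    .host = "localhost";
    .port = "8080";
}

sub vcl_recv {
    # Send admin requests straight to the backend, uncached
    if (req.url ~ "^/admin") {
        return (pass);
    }
}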
Activating the changes
Whenever you update varnish.service, you need to reload systemd. This is the command you need:
sudo systemctl daemon-reload
To activate changes in your VCL file, you need to run the following command:
sudo systemctl reload varnish.service
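Under the hood, the varnishreload script talks to the CLI listening on the -T address; you can do the same by hand with varnishadm (the label "newcfg" is an arbitrary name I picked):
sudo varnishadm vcl.load newcfg /etc/varnish/default.vcl
sudo varnishadm vcl.use newcfg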
Good luck!

Take a look here: https://varnish-cache.org/docs/6.4/users-guide/command-line.html
More detailed here: https://varnish-cache.org/docs/6.4/reference/varnishd.html
A Varnish config could for example be placed here: /etc/varnish/default.vcl
The simplest VCL (note the mandatory vcl 4.0; version marker, and that the port goes in .port, not in .host):
vcl 4.0;
backend default {
    .host = "localhost";
    .port = "81";
}
How to write VCL: https://varnish-cache.org/docs/6.4/users-guide/vcl.html
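Before starting the daemon you can check that your VCL compiles; the -C flag makes varnishd compile the VCL to C and exit:
varnishd -C -f /etc/varnish/default.vcl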

Related

Gitlab + Cockpit: Prometheus 100% CPU usage

I'm using Cockpit to monitor a server with Gitlab.
Since I installed Cockpit, Gitlab is using 100% of my CPU.
When I check with htop, I see this is a Gitlab component, prometheus.
Solution:
While writing this question, I found the solution.
Prometheus and Cockpit use the same port by default (9090).
I just had to change the Cockpit port to another one and everything went back to normal :)
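To confirm a conflict like this, you can check what is listening on the port before and after the change, for example:
sudo ss -tlnp | grep 9090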
On Ubuntu Server 18.04, edit /etc/systemd/system/sockets.target.wants/cockpit.socket like this:
[Unit]
Description=Cockpit Web Service Socket
Documentation=man:cockpit-ws(8)
[Socket]
ListenStream=XXXX <-- Change port here.
ExecStartPost=-/bin/ln -sf /usr/share/cockpit/issue/active.issue /run/cockpit/issue
ExecStopPost=-/bin/ln -sf /usr/share/cockpit/issue/inactive.issue /run/cockpit/issue
[Install]
WantedBy=sockets.target
Then reload systemd config and restart Cockpit:
sudo systemctl daemon-reload
sudo systemctl restart cockpit.socket
That's all!

10K concurrent connections with nginx

I'm building an app on my own and I'd like it to handle 10K concurrent connections (tested via a local machine and a locust script).
It is hosted on two Ubuntu 14.04 servers with an nginx reverse proxy and a nodeJS app server.
Currently I get to around 3.3K concurrent users before suffering a spike of 500 connection-dropped errors.
I achieved load balancing across port connections by running the app on two separate ports and using an upstream directive to spread requests over ports.
However, this has not shown any demonstrated improvement in my numbers.
Question:
I know that there is a lot of information missing here (e.g. how much bandwidth each user requires). How do I go about gathering the right information to decide this?
What other options can I consider/learn/implement to generate the biggest gain in possible concurrent users?
Thanks a lot!
If you see a lot of "Too many open files" errors in your nginx log, it is because you've reached the limit of concurrently open sockets currently set on the server.
You might have to increase the open-file limit (ulimit) for the nginx user, and probably for the node app user too. This is the first issue I hit each time I load test an nginx + node.js stack. (https://www.cyberciti.biz/faq/linux-unix-nginx-too-many-open-files/)
ulimit
The ulimit command shows the limits of the current user, so you have to run it as the nginx user:
# su in the user nginx
su - nginx
ulimit -Hn
ulimit -Sn
# or
su nginx --shell /bin/bash --command "ulimit -Hn"
su nginx --shell /bin/bash --command "ulimit -Sn"
We usually change it in /etc/security/limits.conf
sudo vi /etc/security/limits.conf
# Add the following lines
nginx soft nofile 30000
nginx hard nofile 30000
# Save
# Note: limits.conf is applied by PAM at login, so log out/in or reboot;
# sysctl -p only reloads /etc/sysctl.conf and will not apply these limits.
nginx.conf
After that, you have to set the limit at the nginx software level in the nginx config file
sudo vi /etc/nginx/nginx.conf
# Add this line at the root of the config
worker_rlimit_nofile 30000;
# Reload (This will fail if you have SELinux enabled)
sudo nginx -s reload
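Note that worker_rlimit_nofile only raises the ceiling; the number of connections each worker will actually accept is set by worker_connections in the events block. A sketch (the values are illustrative; keep worker_connections below the rlimit):
worker_processes auto;
worker_rlimit_nofile 30000;

events {
    # each worker may handle up to this many simultaneous connections
    worker_connections 10000;
}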
SELinux
To allow nginx to set its own limit, you can set an SELinux boolean (add -P to make it persistent across reboots):
sudo setsebool httpd_setrlimit on
sudo nginx -s reload

Setting DNS for Docker daemon on OS with systemd

The default DNS for Docker (e.g. 8.8.8.8) is blocked where I work, so I want to change the default. I've been able to do this using
$ docker daemon --dns <mydnsaddress>
but I want to do this using a systemd drop-in instead, since the official Docker docs recommend this way. I've made a /etc/systemd/system/docker.service.d/dns.conf file, and used things like this:
[Service]
DNS=<mydnsaddress>
But I just have no idea what the variable name is supposed to be. How do I set this? More importantly, is there a page that documents all config variables that can be used in systemd drop-ins for Docker?
(btw, this is Docker 1.9 on Ubuntu 15.10, although I don't suspect any bugs)
All .conf files in /etc/systemd/system/docker.service.d overrule the settings from the /usr/lib/systemd/system/docker.service file, which is almost what you tried.
Instead of putting a DNS=.. line in, you need to copy the ExecStart= part from the /usr/lib/systemd/system/docker.service file to dns.conf (or mydocker.conf). Add --dns $ip after the daemon part of the ExecStart. E.g.:
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon --dns 192.168.1.1 -H fd://
where 192.168.1.1 is the IP of the DNS server.
Now restart Docker via systemctl, and it should come up using your own DNS (you can check via systemctl status docker.service | grep dns).
Note that the empty ExecStart= is required, as systemctl only will overrule the ExecStart if it is cleared first.
Also note that a systemctl daemon-reload is needed after editing files in /etc/systemd/system/.
Last remark is that on some systems docker.service is not located in /usr/lib/systemd/system/, but in /lib/systemd/system/.
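Putting those remarks together, the full sequence looks like this (a sketch; adjust the IP and the unit path to your setup):
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo vi /etc/systemd/system/docker.service.d/dns.conf    # add the [Service] block above
sudo systemctl daemon-reload
sudo systemctl restart docker
ps -ef | grep dockerd    # the --dns flag should show up here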
I agree with the previous answer by @steviethecat, but those changes were overwritten with the defaults when Docker restarted, so I followed the steps below. Using Docker version 18.09.2,
I followed the link https://success.docker.com/article/using-systemd-to-control-the-docker-daemon
sudo systemctl edit docker    (this opens a new, empty override file)
Add the lines below. Make sure you have the empty ExecStart= line before setting the new value; the link above has the details.
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --dns 192.168.1.1 -H fd://
Once the lines above are added to the file, run:
sudo systemctl daemon-reload
systemctl restart docker
systemctl status docker

How can I automatically start a node.js application in Amazon Linux AMI on aws?

Is there a brief guide that explains how to start an application when the instance starts up? If it were one of the services installed through yum, then I guess I could use /sbin/chkconfig to add it to the services. (Just to make sure, is that correct?)
However, I just want to run a program which was not installed through yum. To run the node.js program, I would have to run the script sudo node app.js in the home directory whenever the system boots up.
I am not used to the Amazon Linux AMI, so I am having a little trouble finding the 'right' way to run a script automatically on every boot.
Is there an elegant way to do this?
One way is to create an upstart job. That way your app will start once Linux loads, will restart automatically if it crashes, and you can start / stop / restart it by sudo start yourapp / sudo stop yourapp / sudo restart yourapp.
Here are beginning steps:
1) Install upstart utility (may be pre-installed if you use a standard Amazon Linux AMI):
sudo yum install upstart
For Ubuntu:
sudo apt-get install upstart
2) Create upstart script for your node app:
In /etc/init, add a file yourappname.conf with the following lines of code:
#!upstart
description "your app name"
start on started mountall
stop on shutdown
# Automatically Respawn:
respawn
respawn limit 99 5
env NODE_ENV=development
# Warning: this runs node as root user, which is a security risk
# in many scenarios, but upstart-ing a process as a non-root user
# is outside the scope of this question
exec node /path_to_your_app/app.js >> /var/log/yourappname.log 2>&1
3) start your app by sudo start yourappname
You can use forever-service to provision a node script as a service that starts automatically at boot. The following commands will do the job:
npm install -g forever-service
forever-service install test
This will provision app.js in the current directory as a service via forever. The service will automatically restart every time the system is restarted. Also, when stopped, it will attempt a graceful stop. This script provisions the logrotate script as well.
Github url: https://github.com/zapty/forever-service
As of now, forever-service supports Amazon Linux, CentOS, and Red Hat; support for other Linux distros, Mac, and Windows is in the works.
NOTE: I am the author of forever-service.
A quick solution would be to start your app from /etc/rc.local; just add your command there.
But if you want to go the elegant way, you'll have to package your application in an rpm file,
have a startup script that goes in /etc/rc.d so that you can use chkconfig on your app, and then install the rpm on your instance.
(Search for "creating rpm packages" to find guides.)
My EC2 instance runs Ubuntu (not Amazon Linux), and I used systemd to set it up.
First you need to create a <servicename>.service file. (in my case cloudyleela.service)
sudo nano /lib/systemd/system/cloudyleela.service
Type the following in this file:
[Unit]
Description=cloudy leela
Documentation=http://documentation.domain.com
After=network.target
[Service]
Type=simple
TimeoutSec=0
User=ubuntu
ExecStart=/usr/bin/node /home/ubuntu/server.js
Restart=on-failure
[Install]
WantedBy=multi-user.target
Here the node application is started; you will need a full path. I configured the application to simply restart if something goes wrong. Note that the instances Amazon provides have no passwords for their users by default.
Reload the file from disk, and then you can start your service. You need to enable it to make it active as a service, which automatically launches at startup.
ubuntu@ip-172-31-21-195:~$ sudo systemctl daemon-reload
ubuntu@ip-172-31-21-195:~$ sudo systemctl start cloudyleela
ubuntu@ip-172-31-21-195:~$ sudo systemctl enable cloudyleela
Created symlink /etc/systemd/system/multi-user.target.wants/cloudyleela.service → /lib/systemd/system/cloudyleela.service.
ubuntu@ip-172-31-21-195:~$
A great systemd for node.js tutorial is available here.
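Since systemd captures the app's stdout/stderr, you can follow its logs with journalctl (using the unit name from above):
sudo journalctl -u cloudyleela -f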
If you run a webserver:
You probably will have some issues running your webserver on port 80. The easiest solution is actually to run your webserver on a different port (e.g. 4200) and then redirect that port to port 80. You can accomplish this with the following command:
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 4200
Unfortunately, this is not persistent, so you have to repeat it whenever your server restarts. A better approach is to also include this command in your service script:
ExecStartPre to add the port forwarding
ExecStopPost to remove the port forwarding
PermissionsStartOnly to run those two with root privileges
So, something like this:
[Service]
...
PermissionsStartOnly=true
ExecStartPre=/sbin/iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 4200
ExecStopPost=/sbin/iptables -t nat -D PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 4200
Don't forget to reload and restart your service:
[ec2-user@ip-172-31-39-212 system]$ sudo systemctl daemon-reload
[ec2-user@ip-172-31-39-212 system]$ sudo systemctl stop cloudyleela
[ec2-user@ip-172-31-39-212 system]$ sudo systemctl start cloudyleela
[ec2-user@ip-172-31-39-212 system]$
For microservices (update on Dec 2020)
The previously mentioned solution gives a lot of flexibility, but it does take some time to set up. And for each additional application, you need to go through the entire process again. By the time you're installing your 5th node application, you'll certainly start wondering: "there has to be a shortcut".
The advantage of PM2 is that it's just one service to install. It's then PM2 that manages the actual applications.
Even the initial setup of PM2 is easy, because it installs the boot service for you (see the pm2 startup note below).
npm install pm2 -g
And adding new services is even easier:
pm2 start index.js --name "foo"
When everything's up and running, you can save your setup, to have it automatically start on reboot.
pm2 save
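Note that pm2 save only survives reboots once the boot hook is installed; if it wasn't set up automatically, pm2 startup prints the exact command (to run with sudo) that installs it:
pm2 startup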
If you want an overview of all your running node applications,
you can run pm2 list
And PM2 also offers an online (web-based) dashboard to monitor your applications remotely. You may need a license to access some of the dashboard functionality, though (which is a bit overpriced imho).
You can create a script that can start and stop your app and place it in /etc/init.d; make the script adhere to chkconfig's conventions (below), and then use chkconfig to set it to start when other services are started.
You can pick an existing script from /etc/init.d to use as an example; this article describes the requirements, which are basically:
An executable script that identifies the shell needed (i.e., #!/bin/bash)
A comment of the form # chkconfig: <levels> <startprio> <stopprio>, where <levels> is often 345, <startprio> indicates where in the order of services to start, and <stopprio> where in the order to stop. I generally pick a similar service that already exists and use it as a guide for these values (i.e., if you have a web-related service, start at the same levels as httpd, with similar start and stop priorities).
Once your script is set up, you can use
chkconfig --add yourscript
chkconfig yourscript on
and you should be good to go. (Some distros may require you to manually symlink the script into /etc/init.d/rc.d, but I believe your AWS distro will do that for you when you enable the script.)
Use Elastic Beanstalk :) Provides support for auto-scaling, SSL termination, blue/green deployments, etc
If you want the salty sysadmin way for a RedHat-based Linux distro (Amazon Linux is a flavor of RedHat), learn systemd, as mentioned by @bvdb in the answer above:
https://en.wikipedia.org/wiki/Systemd
Set everything up as described on an EC2 instance, snapshot a custom AMI, and use this custom AMI as your base for EC2 instances hosting your apps. This way you don't have to go through all that setup multiple times. You'll probably want to get acquainted with load balancers too, if you are running in a production environment with uptime requirements.
Or, yes, as mentioned by @bvdb, you could also use pm2 to interface with systemd. Though I don't think pm2 helps with running your app across multiple EC2 instances, which is definitely recommended for production environments with uptime requirements.
All of which is a very steep learning curve. Since the OP seemed to be new to all this, Elastic Beanstalk, Google App Engine, and others are a great way to get code running in the cloud without all that.
These days I dev in TypeScript, deploying to serverless function execution in the cloud for most things, and don't have to think about package installs or app startup at all.
You can use screen. Run crontab -e and add this line:
@reboot screen -d -m bash -c "cd /home/user/yourapp/; node app"
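After a reboot you can list the detached screen sessions and reattach to see the app's output:
screen -ls    # list sessions
screen -r     # reattach (add the session name if there is more than one)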
I have been using forever on AWS and it does a good job. Install it using
[sudo] npm install forever -g
To add an application use
forever start path_to_application
and to stop the application use
forever stop path_to_application

Amazon EC2 - Apache server restart issue

When I run this command
sudo /etc/init.d/httpd restart
it gives below error
Stopping httpd: [FAILED]
Starting httpd: (98)Address already in use: make_sock: could not bind to address [::]:80
(98)Address already in use: make_sock: could not bind to address 0.0.0.0:80
no listening sockets available, shutting down
Unable to open logs [FAILED]
I checked which program is running on port 80 using
netstat -lnp | grep :80 (it gives the output below)
tcp 0 0 :::80 :::* LISTEN 21739/httpd
Why am I not able to restart Apache using sudo /etc/init.d/httpd restart?
The commands below work without issue:
sudo apachectl stop
sudo apachectl start
I am using a Linux micro instance on Amazon EC2.
I ran into this problem when I installed apache from source, but then tried to run
$ sudo /etc/init.d/httpd restart
which was using a pre-installed version of apache. The stop directive in /etc/init.d/httpd was not removing the httpd.pid file that was created when starting the source-installed version of apache.
To determine if this is also the reason for your problem, find where the httpd.pid file is getting set when you run
$ sudo apachectl start
If you installed from source and apache2 is living in /usr/local/apache2, then the httpd.pid file should get created in /usr/local/apache2/logs. When you stop apache by running
$ sudo apachectl stop
this file should get removed. So to test if the httpd.pid file is causing your problem, start apache by calling
$ sudo apachectl start
and locate the httpd.pid file. Then try stopping apache by using
$ sudo /etc/init.d/httpd stop
If the original httpd.pid file is still present, then that is why apache is unable to start when you use
$ sudo /etc/init.d/httpd start
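A quick way to check for a stale PID file (the path assumes a source install under /usr/local/apache2):
cat /usr/local/apache2/logs/httpd.pid              # the recorded PID
ps -p "$(cat /usr/local/apache2/logs/httpd.pid)"   # is that process still alive?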
To get my /etc/init.d/httpd file to work correctly, I explicitly put the call to apachectl in the start and stop methods:
#!/bin/bash
# /etc/init.d/httpd
#
# Path to the apachectl script, server binary, and short-form for messages.
apachectl=/usr/local/apache2/bin/apachectl
httpd=/usr/local/apache2/bin/httpd
pid=/usr/local/apache2/logs/httpd.pid
prog=httpd
RETVAL=0

start() {
    echo -n $"Starting $prog: "
    $apachectl -k start
    RETVAL=$?
    echo
    return $RETVAL
}

stop() {
    echo -n $"Stopping $prog: "
    $apachectl -k stop
    RETVAL=$?
    echo
}

# Dispatch on the requested action (start|stop|restart)
case "$1" in
    start) start ;;
    stop) stop ;;
    restart) stop; start ;;
    *) echo $"Usage: $prog {start|stop|restart}"; RETVAL=1 ;;
esac
exit $RETVAL
I tried this and it works:
sudo fuser -k -n tcp 80
sudo service httpd start
Hope this will help you!
Cheers
I feel it's better to kill the process itself: find the process ID, kill it, and then do a fresh start; it should work fine.
I have had this issue very rarely over the last couple of years with a server I've been managing. Unfortunately, if you are getting FAILED after trying to restart, the process that's holding the connection on port 80 won't release its hold on that port.
I would try a full sudo /etc/init.d/httpd stop and wait for that to finish or fail.
If that doesn't fix it you'll have to restart the server completely. Hopefully, it's configured to start everything up automatically on restart, but that isn't guaranteed.
"apachectl" is also great tool for Apache, but it may not be on this server, it depends on the install and linux distro used.
If after rebooting the server, apache still fails to start, something bad has happened. I'd consider pulling all the website and conf files for creating a new server at that point, but the apache start, and then failed message output should give you some idea of where to look in the Logs about why it cannot start.
