Troubleshoot varnish and dual server configuration - varnish

I have Varnish set up and working with 2 server instances.
I've made changes to the default.vcl file and updated my changes as follows on both my servers:
NOW=fdfdf
sudo varnishadm -T xx.xx.xx.xx: -S /etc/varnish/secret vcl.load reload$NOW /etc/varnish/default.vcl && sudo varnishadm -T xx.xx.xx.xx: -S /etc/varnish/secret vcl.use reload$NOW
One server reflects the new change and works fine, but the other still seems to be using the old configuration.
Does anyone have an idea why this might be, or how to troubleshoot it?
Thanks,

If you execute each command manually, does it also work? My best guesses now are:
you're prompted for your sudo password on one system but not on the other (and thus it hangs);
you're pointing to a secret file with incorrect permissions;
somehow your first varnishadm didn't return success, and so the vcl.use in the second part never triggered.
So, best thing to do: execute each command manually and check the responses you get.
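For example, a minimal sketch of running each step by hand (the management address and the reload label are placeholders, just like in the question):
sudo varnishadm -T xx.xx.xx.xx: -S /etc/varnish/secret vcl.load reload_test /etc/varnish/default.vcl
sudo varnishadm -T xx.xx.xx.xx: -S /etc/varnish/secret vcl.list
sudo varnishadm -T xx.xx.xx.xx: -S /etc/varnish/secret vcl.use reload_test
vcl.load should answer with something like "VCL compiled", vcl.list should show the new configuration, and vcl.use should switch to it; whichever step complains (or hangs on a sudo prompt) is the one to dig into.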

Related

Varnish is running but throws ‘Could not get hold of varnishd’ when I run varnishlog

Possible Solution
Possibly the files are created with different permissions.
Steps to Reproduce (for bugs)
Pull the image for K8s from the Varnish Docker registry:
FROM varnish:6.0
Start it with docker-compose.
Your Environment:
Version used: Varnish 6.0
Operating System and version: Ubuntu or macOS
Source of binary packages used (if any)
Built from varnish image 6.0
I faced an issue when I use varnishlog.
varnishhist & varnishlog don't work
Here is the source of the Dockerfile.
I fixed the root privilege issue. Now I need to fix the varnishlog issue.
You can use any default.vcl for the demo: https://github.com/varnishcache/varnish-cache/blob/master/bin/varnishd/builtin.vcl
#Varnish stage
FROM varnish:6.0
RUN apt-get update && apt-get install -y libpcap-dev libcap2-bin
COPY docker/varnish/conf/default.vcl /etc/varnish/default.vcl
RUN setcap 'cap_net_bind_service=+ep' /usr/sbin/varnishd
RUN usermod -a -G varnish varnishlog
RUN chown -R varnish:varnish /var/lib/varnish
USER varnish
CMD ["bash", "-c", "varnishd -F -f /etc/varnish/default.vcl -n /tmp/varnish -p http_req_hdr_len=65536 -p http_req_size=98304 -p workspace_backend=256k -p workspace_client=256k -p shm_reclen=1024 -p max_retries=1 & varnishncsa -n /tmp/varnish -b -c -t off"]
I'm still stuck with my issue: VSM: Could not get hold of varnishd, is it running?
It looks like varnishlog is not pointing to the correct directory, or does not have access to it.
Any help would be appreciated.
My Varnish works and is responsive, but I cannot run varnishlog; the same goes for varnishncsa.
Did I do something wrong in my config?
Because you gave your Varnish instance a name via the -n parameter, you now need to use that same name when calling varnishlog, varnishncsa, varnishtop or varnishstat.
In this case this would be the varnishlog command:
varnishlog -n /tmp/varnish
The -n parameter is only useful when you run multiple Varnish instances on a single machine. I would advise you to drop -n as a varnishd runtime parameter.
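For example, the CMD from the Dockerfile above could be reduced to something like this sketch (untested, keeping only a couple of the tuning parameters and dropping -n so the default instance name is used):
CMD ["bash", "-c", "varnishd -F -f /etc/varnish/default.vcl -p http_req_hdr_len=65536 -p http_req_size=98304 & varnishncsa -b -c -t off"]
With that, a plain varnishlog inside the container should attach to the running varnishd without any extra flags.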

How do I start a tmux service in the user's directory on user login using systemd

I'm trying to set up my system so that when a user logs in, a tmux session is automatically created for them, restarted if it ever exits, and started in the user's home directory. I would like this to work for any user, or any new user added to the system, without a static unit file for each user. I'm having trouble making this work in a generic way, because I need to specify User and WorkingDirectory in the unit file for the tmux session to be created for the correct user in the correct directory.
So far my unit file looks like the following:
/etc/systemd/system/tmux-session-service.service...
---------------------------------------------------
[Unit]
Description=Tmux Session Service
[Service]
Type=forking
User=my-user
WorkingDirectory=/home/my-user
ExecStart=/usr/bin/tmux new-session -s tmux-session-service -d
ExecStop=/usr/bin/tmux kill-session -t tmux-session-service
Restart=on-failure
[Install]
WantedBy=multi-user.target
When I install and enable this, everything works as I expect as long as I am logged in as my-user. However, if I log in as another user, the tmux session isn't created with the right permissions or working directory for the new user.
I looked into template files, but I can't quite get things to work. I tried setting the target to default.target, and using the %u template directive, but that seems to just refer to the user running the service manager, which is root.
One option would be to run systemctl start tmux-session-service@new-user.service when new-user logs in. Then I could use %i in the User and WorkingDirectory directives in the unit file. But then I need some process that has systemctl permissions to kick that off on user login, and I can't think of a way to do that.
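For reference, here is a minimal sketch of what such a template unit could look like; the file name /etc/systemd/system/tmux-session-service@.service and the instance name are hypothetical:
[Unit]
Description=Tmux Session Service for %i
[Service]
Type=forking
User=%i
WorkingDirectory=/home/%i
ExecStart=/usr/bin/tmux new-session -s tmux-session-service -d
ExecStop=/usr/bin/tmux kill-session -t tmux-session-service
Restart=on-failure
With that, systemctl start tmux-session-service@new-user.service would run tmux as new-user in /home/new-user, which still leaves the question of what triggers that command at login.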
I'm running:
Arch Linux
tmux v3.1c
systemd v247
Under Ubuntu, I install the ....service file under:
/usr/lib/systemd/user/...
With the Debian packager, that is not automatic if you installed using the .service in the debian installation folder. Instead, you have to do it manually so it goes in the correct folder. So say you have a project defined like so:
tmux-session-service/debian/tmux-session-service.docs
tmux-session-service/debian/tmux-session-service.install
tmux-session-service/debian/tmux-session-service.service <-- wrong!
Then for each user I would enable the service like so:
# Make sure the target folder exists
mkdir -p /home/${USER}/.config/systemd/user/default.target.wants
# If you're root when doing that, you want to fix the ownership
# (for the group, you may need a different variable)
chown -R ${USER}:${USER} /home/${USER}/.config/systemd
# If already installed, remove the link before re-creating it
rm -f /home/${USER}/.config/systemd/user/default.target.wants/${SERVICE}.service
# Again, I do this as root, so I need to use special care to run the
# following command as $USER instead
sudo -H -u ${USER} sh -c "ln -s /usr/lib/systemd/user/${SERVICE}.service /home/${USER}/.config/systemd/user/default.target.wants/${SERVICE}.service"
If you want (can) do it manually, then install the file under /usr/lib/systemd/user/... as mentioned above, and then use the enable option:
systemctl --user enable ${SERVICE}.service
The problem with this technique is that you need to log in as each user in order to enable your service.
I think there is a way to have a service auto-start for all users, but I haven't found out how to make it work yet...
Could it be as simple as putting something in new users' .bashrc files (or in /etc/profile for a system-wide effect) that attaches to a tmux session called 'main' (creating it if it doesn't already exist)?
That's what I have in mine, and it's as if tmux is just a built in feature of my terminal:
# Launch tmux
if command -v tmux >/dev/null; then
  [[ ! $TERM =~ screen ]] && [ -z "$TMUX" ] && tmux new-session -A -s main
fi

User-data scripts is not running on my custom AMI, but working in standard Amazon linux

I have searched a lot of topics about "user-data script is not working" over the past few days, but so far I have no idea what is going on in my case. Please help me figure out what happened, thanks a lot!
According to AWS User-data explanation:
When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts.
So I tried to pass my own user-data when instance launch, this is my user-data:
#!/bin/bash
echo 'test' > /home/ec2-user/user-script-output.txt
But there is no file in this path: /home/ec2-user/user-script-output.txt
I checked /var/lib/cloud/instance/user-data.txt; the file exists and is the same as my user-data script.
I also checked the log in /var/log/cloud-init.log; there is no error message.
But the user-data script does work if I launch a new instance with Amazon Linux (2014.09.01). I'm not sure what the difference is between my AMI (based on Amazon Linux) and stock Amazon Linux.
The only difference I saw is when I run this command:
sudo yum list installed | grep cloud-init
My AMI:
cloud-init.noarch 0.7.2-8.33.amzn1 @amzn-main
Amazon linux:
cloud-init.noarch 0.7.2-8.33.amzn1 installed
I'm not sure if this is the reason.
If you need more information, I'm glad to provide it. Please let me know what might be happening in my own AMI and how to fix it.
Many thanks.
Update
Just found an answer from this post,
If I add #cloud-boothook in the top of user-data file, it works!
#cloud-boothook
#!/bin/bash
echo 'test' > /home/ec2-user/user-script-output.txt
But I'm still not sure why.
User_data is run only at the first startup. As your image is a custom one, I suppose it has already been started once, so user_data is deactivated.
For Windows, it can be re-enabled by checking a box in EC2 Service Properties. I'm currently looking at how to do that in an automated way at the end of the custom image creation.
For Linux, I suppose the mechanism is the same, and user_data needs to be re-activated on your custom image.
The #cloud-boothook makes it work because it changes the script from the user_data mechanism to a cloud-boothook one that runs on each start.
EDIT :
Here is the code to reactivate user_data on Windows using PowerShell:
$configFile = "C:\\Program Files\\Amazon\\Ec2ConfigService\\Settings\\Config.xml"
[xml] $xdoc = get-content $configFile
$xdoc.SelectNodes("//Plugin") |?{ $_.Name -eq "Ec2HandleUserData"} |%{ $_.State = "Enabled" }
$xdoc.SelectNodes("//Plugin") |?{ $_.Name -eq "Ec2SetComputerName"} |%{ $_.State = "Enabled" }
$xdoc.OuterXml | Out-File -Encoding UTF8 $configFile
$configFile = "C:\\Program Files\\Amazon\\Ec2ConfigService\\Settings\\BundleConfig.xml"
[xml] $xdoc = get-content $configFile
$xdoc.SelectNodes("//Property") |?{ $_.Name -eq "AutoSysprep"} |%{ $_.Value = "Yes" }
$xdoc.OuterXml | Out-File -Encoding UTF8 $configFile
(I know the question focuses on Linux, but it could help others...)
When I tested, there was some bootstrap data in the /var/lib/cloud directory.
After I cleared that directory, the user-data script worked normally.
rm -rf /var/lib/cloud/*
I have also faced the same issue on an Ubuntu 16.04 HVM AMI. I raised the issue with AWS support, but I still couldn't find the exact reason/bug that causes it.
But I still have something that might help you.
Before taking the AMI, remove the /var/lib/cloud directory (each time). Then, while creating the image, set it to no-reboot (for example, as sketched below).
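A minimal sketch of that last step with the AWS CLI (the instance ID and image name are placeholders):
# clear cloud-init state, then image the instance without rebooting it
sudo rm -rf /var/lib/cloud/
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "my-custom-ami" --no-reboot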
If these things still aren't working, you can test further by forcing user-data to run manually. Also tail -f /var/log/cloud-init-output.log to watch the cloud-init status. It should end with something like modules:final for your user-data to run. It should not get stuck on modules:config.
sudo rm -rf /var/lib/cloud/*
sudo cloud-init init
sudo cloud-init modules -m final
I don't have much idea whether the above commands will work on CentOS or not. I have tested them on Ubuntu.
In my case, I also tried removing the /var/lib/cloud directory, but it still failed to execute the user-data in our scenario. But I came up with a different solution: we created a script containing the above commands and made that script run while the system boots.
I added the line below to /etc/rc.local to make it happen.
sudo bash /home/ubuntu/force-user-data.sh || exit 1
But here is the catch: it will execute the script on each boot, which will make your user-data run on every single boot, just like #cloud-boothook. No worries, you can tweak it by removing force-user-data.sh itself at the end. So your force-user-data.sh will look something like:
#!/bin/bash
sudo rm -rf /var/lib/cloud/*
sudo cloud-init init
sudo cloud-init modules -m final
sudo rm -f /home/ubuntu/force-user-data.sh
exit 0
I would appreciate it if someone could shed some light on why it is unable to execute the user-data.
This is the answer, as an example: ensure that the first line contains only #!/bin/bash.
#!/bin/bash
yum update -y
yum install -y httpd mod_ssl
service httpd start
chkconfig httpd on
I was having a lot of trouble with this. I'll provide detailed walk-though.
My added wrinkle is that I'm using terraform to instantiate the hosts via a launch configuration and autoscaling group.
I could NOT get it to work by adding the script inline in lc.tf
user_data = <<DATA
"
#cloud-boothook
#!/bin/bash
echo 'some crap'\'
"
DATA
I could fetch it from user data,
wget http://169.254.169.254/latest/user-data
but noticed I was getting it with the quotes still in it.
This is how I got it to work: I moved to pulling it from a template instead since what you see is what you get.
user_data = "${data.template_file.bootscript.rendered}"
This means I also need to declare my template file like so:
data "template_file" "bootscript" {
template = "${file("bootscript.tpl")}"
}
But I was still getting an error in the cloud init logs
/var/log/cloud-init.log
[WARNING]: Unhandled non-multipart (text/x-not-multipart) userdata: 'Content-Type: text/cloud...'
Then I found this article about user-data formatting. That makes sense: if user-data can come in multiple parts, maybe cloud-init needs the cloud-init directives in one place and the script in the other.
So my bootscript.tpl looks like this:
Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0
--//
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config.txt"
#cloud-config
cloud_final_modules:
- [scripts-user, always]
--//
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="userdata.txt"
#!/bin/bash
echo "some crap"
--//
#!/bin/bash
Do not leave any whitespace at the start of the line. Use the exact command. Otherwise it may run in an Amazon Linux AMI, but it will not run in RHEL.
On Ubuntu 16, removing /var/lib/cloud/* did not work. I removed only the instance entries from the folder /var/lib/cloud/, and then it ran fine for me.
I ran:
sudo rm /var/lib/cloud/instance
sudo rm -rf /var/lib/cloud/instances
Then I retried my user data script and it worked fine
I'm using CentOS and the logic for userdata there is simple:
In the file /etc/rc.local there is a call to an initial.sh script, but it looks for a flag first:
if [ -f /var/tmp/initial ]; then
/var/tmp/initial.sh &
fi
initial.sh is the file that does the execution of the user-data, but at the end it deletes the flag. So, if you want your new AMI to execute user-data again, just create the flag again before creating the image:
touch /var/tmp/initial
The only way I got it to work was to add #cloud-boothook before the #!/bin/bash line.
This is a typical user data script that installs Apache web server on a newly created instance
#cloud-boothook
#!/bin/bash
yum update -y
yum install -y httpd.x86_64
systemctl start httpd.service
systemctl enable httpd.service
Without #cloud-boothook it does not work, but with it, it works. It seems that different users have different experiences; some are able to get it to work without it, and I'm not sure why.
Just add --// at the end of your user-data script, for example:
#!/bin/bash
#Upgrade ec2 instance
sudo yum update -y
#Start docker service
sudo service docker start
--//
User Data should execute fine without using #cloud-boothook (which is used to activate the User Data at the earliest possible time during the boot process).
I started a new Amazon Linux AMI and used your User Data, plus a bit extra:
#!/bin/bash
echo 'bar' > /tmp/bar
echo 'test' > /home/ec2-user/user-script-output.txt
echo 'foo' > /tmp/foo
This successfully created three files.
User Data scripts are executed as root, so they should have permission to create files in any location.
I notice that in your supplied code, one example refers to /home/ec2-user/user-script/output.txt (with a subdirectory) and one example refers to /home/ec2-user/user-script-output.txt (no subdirectory). The command would understandably fail if you attempt to create a file in a non-existent directory, but your "Update" example seems to show that it did actually work.
Also,
If we are using interactive commands like
sudo yum install java-1.8.0-devel
Then we need to use flags like -y:
sudo yum install java-1.8.0-devel -y
You can find this in the EC2 documentation under Run commands at launch.
Ref: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html#user-data-shell-scripts

htop with web interface

Is there any simple and lightweight monitoring tool like the well-known htop, but with a web interface? For Debian / Raspberry Pi. All the solutions I've seen were complicated and resource-intensive.
I've found an interesting solution to run htop (and any other interactive console application) in the browser: shellinabox.
Install shellinabox
[sudo] apt-get install shellinabox
Stop shellinabox daemon
[sudo] service shellinaboxd stop
Disable shellinaboxd autostart (in the default configuration, shellinaboxd serves an HTTP SSH session on port 4200)
[sudo] update-rc.d -f shellinaboxd remove
Now start shellinaboxd with your own parameters:
[sudo] shellinaboxd -t -b -p 8888 --no-beep \
-s '/htop_app/:nobody:nogroup:/:htop -d 10'
Options:
-t — disable ssl (if necessary, not recommended for public servers)
-b — run in background
-p — web server port number
--no-beep — disable annoying beeps
-s '…commands…' — session configuration, where
/htop_app/ — URL
nobody:nogroup — user and group for the session (nobody:nogroup chosen for security reasons)
htop -d 10 — command (actually session shell): run htop with -d 10 argument (means update every second)
Now go to browser and navigate to
http://your_server_address:8888/htop_app/
Should look something like this (screenshot)
glances is great! Use that!
https://nicolargo.github.io/glances/
https://iotrant.com/2019/09/03/keep-tabs-on-your-raspberry-pi-with-glances/
Very light dependencies -- basically just Python, psutil, and bottle if you want to use it as a web service...
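A minimal sketch of getting that running as a web service (assuming a Debian-style system with pip available; as far as I know the web UI listens on port 61208 by default):
sudo apt-get install python3-pip
sudo pip3 install glances bottle
glances -w
# then browse to http://your_server_address:61208/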
Thanks, everything works well!
In Debian wheezy:
[sudo] service shellinaboxd stop
Becomes (without the letter 'd')
[sudo] service shellinabox stop
The same applies to the update-rc.d line:
[sudo] update-rc.d -f shellinabox remove

How do I clone an OpenLDAP database

I know this is more like a serverfault question than a stackoverflow question, but since serverfault isn't up yet, here I go:
I'm supposed to move an application from one Red Hat server to another, and without very good knowledge of the internal workings of the application, how would I move the OpenLDAP database from the one machine to the other, with schemas and all?
What files would I need to copy over? I believe the setup is pretty standard.
The problem with SourceRebels' answer is that slapcat(8) does not guarantee that the data is ordered for ldapadd(1)/ldapmodify(1).
From man slapcat (from OpenLDAP 2.3) :
The LDIF generated by this tool is suitable for use with slapadd(8).
As the entries are in database order, not superior first order, they
cannot be loaded with ldapadd(1) without first being reordered.
(FYI: In OpenLDAP 2.4 that section was rephrased and expanded.)
Plus, using a tool that reads the backend files to dump the database and then a tool that loads the LDIF through the LDAP protocol is not very consistent.
I'd suggest using a combination of slapcat(8)/slapadd(8) OR ldapsearch(1)/ldapmodify(1). My preference would go to the latter, as it does not need shell access to the LDAP server or moving files around.
For example, to dump the database from a master server under dc=master,dc=com and load it on a backup server:
$ ldapsearch -Wx -D "cn=admin_master,dc=master,dc=com" -b "dc=master,dc=com" -H ldap://my.master.host -LLL > ldap_dump-20100525-1.ldif
$ ldapadd -Wx -D "cn=admin_backup,dc=backup,dc=com" -H ldap://my.backup.host -f ldap_dump-20100525-1.ldif
The -W flag above prompts for the LDAP admin_master password, but since we are redirecting output to a file you won't see the prompt - just an empty line. Go ahead and type your LDAP admin_master password, press Enter, and it will work. The first line of your output file will need to be removed (Enter LDAP Password:) before running ldapadd.
Last hint, ldapadd(1) is a hard link to ldapmodify(1) with the -a (add) flag turned on.
ldapsearch and ldapadd are not necessarily the best tools to clone your LDAP DB. slapcat and slapadd are much better options.
Export your DB with slapcat:
slapcat > ldif
Import the DB with slapadd (make sure the LDAP server is stopped):
slapadd -l ldif
Some notes:
Save your custom schema and objectClass definitions for your new server. You can look at the include lines in slapd.conf to find them, for example (this is part of my slapd.conf):
include /etc/ldap/schema/core.schema
Include your custom schemas and objectClasses in your new OpenLDAP installation.
Use the slapcat command to export your full LDAP tree to one or more LDIF files.
Use ldapadd to import the LDIF files onto your new LDAP installation (a combined sketch of these steps follows).
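A rough sketch of those steps, assuming a Red Hat-style layout under /etc/openldap, and using slapadd rather than ldapadd for the import (given the ordering caveat quoted from man slapcat above); the schema file name and host are only examples:
# on the old server: see which schema files slapd.conf includes, and export the tree
grep '^include' /etc/openldap/slapd.conf
slapcat > /tmp/full-tree.ldif
# copy any custom schema files plus the dump to the new server
scp /etc/openldap/schema/myapp.schema newserver:/etc/openldap/schema/
scp /tmp/full-tree.ldif newserver:/tmp/
# on the new server: add the matching include lines to slapd.conf,
# then load the dump with slapd stopped (the init script may be called ldap or slapd)
service ldap stop
slapadd -l /tmp/full-tree.ldif
service ldap start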
I prefer copying the database through the protocol:
First of all, be sure you have the same schemas on both servers.
Dump the database with ldapsearch:
ldapsearch -LLL -Wx -D "cn=admin,dc=domain" -b "dc=domain" > domain.ldif
and import it into the new server:
ldapmodify -Wx -D "cn=admin,dc=domain" -a -f domain.ldif
in one line:
ldapsearch -LLL -Wx -D "cn=admin,dc=domain" -b "dc=domain" | ldapmodify -w pass -x -D "cn=admin,dc=domain" -a
By using the bin/ldap* commands you are talking directly to the server, while with the bin/slap* commands you are dealing with the backend files.
(Not enough reputation to write a comment...)
Ldapsearch opens a connection to the LDAP server.
Slapcat instead accesses the database directly, and this means that ACLs, time and size limits, and other byproducts of the LDAP connection are not evaluated, and hence will not alter the data. (Matt Butcher, "Mastering OpenLDAP")
Thanks, Vish. Worked like a charm! I edited the command:
ldapsearch -z max -LLL -Wx -D "cn=Manager,dc=domain,dc=fr" -b "dc=domain,dc=fr" >/tmp/save.ldif
ldapmodify -c -Wx -D "cn=Manager,dc=domain,dc=fr" -a -f /tmp/save.ldif
I just added -z max to avoid the size limitation and -c to continue even if the target domain already exists (my case).
