Varnishadm configs - varnish

How can I keep configurations for varnishadm? I ask because after a restart Varnish loses the configs I had declared.
varnish> vcl.load my_config /etc/varnish/my_config.vcl
200
VCL compiled.
varnish> vcl.label my_config_test my_config
200
varnish> vcl.list
200
active auto/warm 0 boot
available auto/warm 0 my_config (1 label)
available label/warm 0 my_config_test -> my_config
root#dev:/var/lib/varnish/wujek# /etc/init.d/varnish restart
[ ok ] Restarting varnish (via systemctl): varnish.service.
varnish> vcl.list
200
active auto/warm 0 boot
varnish>
I am wondering how varnishadm keeps the configuration for boot.
Thanks.

I solved this problem. I run the server using the command:
varnishd -I start.cli -F -a :80 -f ''
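For reference, the start.cli file passed with -I is just a list of management (varnishadm) commands that varnishd executes at startup, so the VCLs and labels get re-created on every start. A minimal sketch, reusing the names and path from the commands above:
vcl.load my_config /etc/varnish/my_config.vcl
vcl.label my_config_test my_config
vcl.use my_config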
https://info.varnish-software.com/blog/one-vcl-per-domain

Related

siege unable to load test with 1000 users

I have been trying to load test my application hosted on an IIS server with siege, using the following command:
siege -b -c1000 -t1M 'http://xxxx/home/load POST -H "delay: 50" -H "count: 600" -H "Content-Type: application/json"'
I get the error
[error] Address resolution failed at sock.c:158 with the following error:: No such file or directory
[error] Name or service not known: No such file or directory
Fixes I tried:
Increased the port limit from 1024 to 65535
Set the limit in the .siegerc file to 1500
Increased ulimit to 100000
I also tried:
siege -c200 -r300 'http://xxxx/home/load POST -H "delay: 50" -H "count: 600" -H "Content-Type: application/json"'
but then the iteration runs for only 3 minutes instead of the 5 minutes that my actual requirement calls for.
My doubt is whether this is due to siege running with more than its default of 255 users, or whether my IIS server is unable to handle the load.
If this is siege related, how can I resolve it? My siege is installed on a Linux AMI EC2 instance.
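For reference, the limit changes listed above usually boil down to something like the following sketch (values are the ones from the question; the siege config file may be ~/.siegerc or /etc/siegerc depending on how siege was installed):
ulimit -n 100000              # raise the open-file/socket limit for the shell that runs siege
# in the siege config file, raise the concurrent-user cap:
# limit = 1500
siege -b -c1000 -t5M 'http://xxxx/home/load POST -H "delay: 50" -H "count: 600" -H "Content-Type: application/json"'   # re-run for the full 5 minutes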

What starts this docker process on my laptop?

Every time I boot up my Lubuntu 16.04 laptop I can see I have a running docker container:
$ ps -ef | grep docker
root 1724 1 3 21:17 ? 00:01:30 /usr/bin/dockerd -H fd://
root 1774 1724 0 21:17 ? 00:00:04 docker-containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=0 --start-timeout 2m --state-dir /var/run/docker/libcontainerd/containerd --shim docker-containerd-shim --runtime docker-runc
root 4750 1774 0 21:17 ? 00:00:00 docker-containerd-shim 72541a4648b890132985daf2357d1130b8b5208cf12ede607b93ab2987629719 /var/run/docker/libcontainerd/72541a4648b890132985daf2357d1130b8b5208cf12ede607b93ab2987629719 docker-runc
stephane 10755 1793 0 22:07 pts/0 00:00:00 grep docker
It serves a Jenkins application on port 80; requesting localhost/ in the browser redirects to http://localhost/login?from=%2F and shows a Jenkins warning page:
Unlock Jenkins
To ensure Jenkins is securely set up by the administrator, a password has been written to the log (not sure where to find it?) and this file on the server:
A wget request shows:
$ wget localhost/
--2017-05-23 22:09:55-- http://localhost/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:80... connected.
HTTP request sent, awaiting response... 403 Forbidden
2017-05-23 22:09:55 ERROR 403: Forbidden.
How can I know which service is firing up this docker process?
I looked in the /etc/init.d/ directory:
$ l /etc/init.d/
alsa-utils* checkroot-bootclean.sh* halt* mattermostd* nginxd* rc* single* uuidd*
anacron* checkroot.sh* hostname.sh* mountall-bootclean.sh* ntp* rc.local* skeleton whoopsie*
apachedsd* console-setup* httpd* mountall.sh* ondemand* rcS* ssh* x11-common*
apparmor* cron* hwclock.sh* mountdevsubfs.sh* openvpn* README tomcatd*
apport* cups* irqbalance* mountkernfs.sh* php-fpm* reboot* udev*
avahi-daemon* cups-browsed* keyboardd* mountnfs-bootclean.sh* plymouth* redis* ufw*
bluetooth* dbus* killprocs* mountnfs.sh* plymouth-log* resolvconf* umountfs*
bootmisc.sh* docker* kmod* mysqld* postfix* rsync* umountnfs.sh*
cgroupfs-mount* dropboxd* lightdm* networking* pppd-dns* rsyslog* umountroot*
checkfs.sh* grub-common* mariadbd* network-manager* procps* sendsigs* urandom*
The /etc/init.d/docker script is mine, and removing it from the directory does not help: I removed the /etc/init.d/docker file, rebooted, and there is still a docker process:
$ ps -ef | grep docker
root 1560 1 5 22:15 ? 00:00:06 /usr/bin/dockerd -H fd://
root 1645 1560 0 22:15 ? 00:00:00 docker-containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=0 --start-timeout 2m --state-dir /var/run/docker/libcontainerd/containerd --shim docker-containerd-shim --runtime docker-runc
root 4644 1645 0 22:15 ? 00:00:00 docker-containerd-shim 069db46cca05d43c35f05ff50aaa836507cbf69e4e3d9443b6b859d0edb5b076 /var/run/docker/libcontainerd/069db46cca05d43c35f05ff50aaa836507cbf69e4e3d9443b6b859d0edb5b076 docker-runc
stephane 5520 1741 0 22:17 pts/0 00:00:00 grep docker
So I looked for anything docker-related in all these files, but found nothing named docker:
$ cd /etc/init.d/
[stephane#stephane-ThinkPad-X301 init.d]
$ grep.sh docker
[stephane#stephane-ThinkPad-X301 init.d]
This docker process is there every time I start my laptop, even when offline.
What starts this docker process?
Lubuntu 16.04 comes with systemd by default. At some point you must have started up a Jenkins instance in Docker - it's hard to tell exactly what started the process initially. However, systemd is what is currently causing it to start. In order to stop it from running, run the following commands:
systemctl status docker <- find out if systemd thinks docker is running.
It'll likely show something like this:
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2017-05-21 22:59:46 EDT; 1 day 17h ago
Docs: http://docs.docker.com
Main PID: 1314 (dockerd-current)
Tasks: 14 (limit: 8192)
CGroup: /system.slice/docker.service
└─1314 /usr/bin/dockerd-current --add-runtime oci=/usr/libexec/docker/docker-runc-current --default-runtime=oci --containerd /run/containerd.sock --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --selinux-enabled --log-driver=journald
To stop it, run systemctl stop docker and then systemctl disable docker. As a last resort if this doesn't work, you can run systemctl mask docker.
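If you want to double-check which unit owns the running process before stopping it, a quick sketch (1724 is the dockerd PID from the ps output in the question; substitute your own):
systemctl status 1724                        # systemd resolves a PID to the unit that started it
systemctl list-unit-files | grep -i docker   # shows whether docker.service is enabled at boot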
Docker is being started by systemd in your environment. You can disable the entire engine by running:
sudo systemctl disable docker
sudo systemctl stop docker
You can also stop only the container that is running (the shim and Jenkins application):
sudo docker ps # lists the running containers along with their container id
sudo docker update --restart=no $container_id
sudo docker stop $container_id
If you know that you do not need this container and want to permanently delete it, you can run this instead of the last two commands above:
sudo docker rm -f $container_id
The -f switch also stops the container if it's currently running.
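Before changing anything, it can also help to confirm what restart policy the container currently has; a small sketch using docker inspect's template syntax:
sudo docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' $container_id   # e.g. always, unless-stopped or no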
Edit: from your comment, your container is running under swarm mode, which is redeploying it. To stop that, first find the stack or service that is running it.
sudo docker stack ls
sudo docker service ls
If you see a stack listed, you can remove that with:
sudo docker stack rm $stack_name
If there are no stacks listed, or they don't apply to this container, you can delete the service with:
sudo docker service rm $service_name

What is etcd looking for in 127.0.0.1:4001?

I'm trying to set up a test cluster using etcd 2.3.7 installed from the CentOS RPM on CentOS 7.1. On Loader 1 I executed:
etcdctl member add loader2 http://10.11.51.231:2380
And received a response confirming that the operation completed successfully.
Similarly:
etcdctl member add loader3 http://10.11.51.231:2380
with all default settings, and here's what I see:
Loader 1 10.11.51.166
systemctl status etcd -ln1
etcd.service - Etcd Server
Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled)
Active: active (running) since Sun 2017-02-19 14:33:18 IST; 28min ago
Main PID: 19009 (etcd)
CGroup: /system.slice/etcd.service
└─19009 /usr/bin/etcd --name=default --data-dir=/var/lib/etcd/default.etcd --listen-client-urls=http://localhost:2379
Feb 19 15:02:03 loader3 etcd[19009]: cannot get the version of member a4803061db803edc (Get http://10.11.51.166:2380/version: dial tcp 10.11.51.166:2380: getsockopt: connection refused)
Tried to see cluster health:
etcdctl --debug cluster-health
Cluster-Endpoints: http://127.0.0.1:4001, http://127.0.0.1:2379
cURL Command: curl -X GET http://127.0.0.1:4001/v2/members
cURL Command: curl -X GET http://127.0.0.1:2379/v2/members
member ce2a822cea30bfca is unhealthy: got unhealthy result from http://localhost:2379
member da05b63349d818dc is unreachable: no available published client urls
cluster is unhealthy
Note how this ignores the two nodes added previously, but sends requests to a random port on localhost...
Loader 2 10.11.51.174
At first this machine started OK, but after I saw there was something wrong with Loader 1, I tried adding Loader 1 as a member from this machine, and now I see the same picture on this machine too, i.e. it tries to query this 4001 port, where nobody responds. On all machines:
netstat -tupln | grep etcd
tcp 0 0 127.0.0.1:7001 0.0.0.0:* LISTEN 4507/etcd
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 4507/etcd
tcp 0 0 127.0.0.1:2380 0.0.0.0:* LISTEN 4507/etcd
Nobody listens on 4001...
Loader 3 10.11.51.231
On this loader I didn't try to add new members. So it looks like this:
etcdctl --debug cluster-health
Cluster-Endpoints: http://127.0.0.1:4001, http://127.0.0.1:2379
cURL Command: curl -X GET http://127.0.0.1:4001/v2/members
cURL Command: curl -X GET http://127.0.0.1:2379/v2/members
member ce2a822cea30bfca is healthy: got healthy result from http://localhost:2379
cluster is healthy
In other words, it still sends requests to a random port, but this time it isn't bothered by the fact that nobody replied...
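As an aside (a sketch, not something from the thread): etcdctl can be pointed at an explicit endpoint so it stops probing 4001; the flag name differs slightly between etcdctl releases, so verify against etcdctl --help:
etcdctl --endpoint http://127.0.0.1:2379 cluster-health
# older builds use -C / --peers instead of --endpoint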
Below is the contents of the configuration files:
cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\""
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
And:
cat /etc/etcd/etcd.conf
# [member]
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_LISTEN_PEER_URLS="http://localhost:2380"
ETCD_LISTEN_CLIENT_URLS="http://localhost:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_STRICT_RECONFIG_CHECK="false"
#
#[proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#
#[logging]
#ETCD_DEBUG="false"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
#ETCD_LOG_PACKAGE_LEVELS=""
#
#[profiling]
#ETCD_ENABLE_PPROF="false"
So... what is going on? The error messages given by etcd are the typical mindless nonsense produced by the Go built-ins. The HTTP server that etcd uses is, again, the Go built-in junk that produces non-standard and absolutely worthless replies. So I cannot understand what (if anything) was misconfigured or missing.

AWS - EC2 - MongoDB replica set time sync issue - NTP - replication lag

We are encountering clock drift issues with our MongoDB replica set running on AWS. This only seemed to start happening recently, after we added additional data to the set; before then we did not really notice the issue unless the system was under heavy load. The following error is logged in the mongod.log file sporadically, even when the system is not under load.
To test this we have isolated a set of machines with the same dataset that are not in use by our web application, though the error is still occurring:
2014-12-12T13:33:51.333+0000 [rsBackgroundSync] changing sync target
because current sync target's most recent OpTime is Dec 12 13:32:42:c
which is more than 30 seconds behind member mongo1:27017 whose most
recent OpTime is 1418391230
From the above, the timestamp shows that one of the MongoDB replica set members is over a minute behind. The worst we have seen is 12 minutes out of sync.
This error in turn causes replication lag, and we receive a notification about it from the MongoDB Monitoring Service, although it does correct itself.
The setup is 3 x r3.xlarge AWS Linux instances, 1 in each availability zone of the eu-west-1 region. The machines have been set up using the MongoDB recommended settings with a RAID array and the CloudFormation scripts provided by MongoDB. The data is around 4GB in size.
We think the issue is related to NTP sync; by default on the AWS Linux Amazon Machine Image the ntpd service is configured to use a pool of AWS NTP servers hosted on www.pool.ntp.org.
To try and rule this out we set up our own NTP server on AWS that the MongoDB servers could sync to. The issue still occurred, so we changed the maxpoll and minpoll time for the ntpd service on the mongo machines to sync the time every 16 seconds from the NTP server, but the error is still occurring.
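For reference, that kind of poll-interval change is normally expressed directly on the server line in /etc/ntp.conf; a sketch using the placeholder hostname from the config below (minpoll/maxpoll are powers of two, so 4 means 2^4 = 16 seconds):
server time-server.domain.com iburst minpoll 4 maxpoll 4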
We also increased the MongoDB oplog size to see if that would make any difference, but it didn't.
Does anyone else encounter this type of issue? Is there something we are missing?
Cheers,
Colin.
ps -ef |grep ntp;
mongodb1
ntp 5163 1 0 Dec11 ? 00:00:00 ntpd -u ntp:ntp -p /var/run/ntpd.pid -g
ec2-user 15865 15839 0 09:31 pts/2 00:00:00 grep ntp
mongodb2
ntp 4834 1 0 Dec11 ? 00:00:00 ntpd -u ntp:ntp -p /var/run/ntpd.pid -g
ec2-user 19056 19029 0 09:31 pts/0 00:00:00 grep ntp
mongodb3
ntp 5795 1 0 Dec11 ? 00:00:00 ntpd -u ntp:ntp -p /var/run/ntpd.pid -g
ec2-user 26199 26173 0 09:31 pts/0 00:00:00 grep ntp
cat /etc/ntp.conf;
# For more information about this file, see the man pages
# ntp.conf(5), ntp_acc(5), ntp_auth(5), ntp_clock(5), ntp_misc(5), ntp_mon(5).
driftfile /var/lib/ntp/drift
# Permit time synchronization with our time source, but do not
# permit the source to query or modify the service on this system.
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
# Permit all access over the loopback interface. This could
# be tightened as well, but to do so would effect some of
# the administrative functions.
restrict 127.0.0.1
restrict -6 ::1
# Hosts on local network are less restricted.
#restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.amazon.pool.ntp.org iburst dynamic
#server 1.amazon.pool.ntp.org iburst dynamic
#server 2.amazon.pool.ntp.org iburst dynamic
#server 3.amazon.pool.ntp.org iburst dynamic
server time-server.domain.com iburst
#broadcast 192.168.1.255 autokey # broadcast server
#broadcastclient # broadcast client
#broadcast 224.0.1.1 autokey # multicast server
#multicastclient 224.0.1.1 # multicast client
#manycastserver 239.255.254.254 # manycast server
#manycastclient 239.255.254.254 autokey # manycast client
# Enable public key cryptography.
#crypto
includefile /etc/ntp/crypto/pw
# Key file containing the keys and key identifiers used when operating
# with symmetric key cryptography.
keys /etc/ntp/keys
# Specify the key identifiers which are trusted.
#trustedkey 4 8 42
# Specify the key identifier to use with the ntpdc utility.
#requestkey 8
# Specify the key identifier to use with the ntpq utility.
#controlkey 8
# Enable writing of statistics records.
#statistics clockstats cryptostats loopstats peerstats
# Enable additional logging.
logconfig =clockall =peerall =sysall =syncall
# Listen only on the primary network interface.
interface listen eth0
interface ignore ipv6
ntpq -npcrv;
remote refid st t when poll reach delay offset jitter
==============================================================================
*172.31.14.137 91.*.*.* 3 u 557 1024 377 1.121 -0.264 0.161
associd=0 status=0615 leap_none, sync_ntp, 1 event, clock_sync,
version="ntpd 4.2.6p5#1.2349-o Sat Mar 23 00:37:31 UTC 2013 (1)",
processor="x86_64", system="Linux/3.14.23-22.44.amzn1.x86_64", leap=00,
stratum=4, precision=-23, rootdelay=23.597, rootdisp=109.962,
refid=172.31.14.137,
reftime=d83a757a.175b5fa1 Tue, Dec 16 2014 9:10:18.091,
clock=d83a77a7.82431efa Tue, Dec 16 2014 9:19:35.508, peer=27361,
tc=10, mintc=3, offset=-0.264, frequency=-13.994, sys_jitter=0.000,
clk_jitter=0.358, clk_wander=0.053
After upgrading to MongoDB 3 using the WiredTiger storage engine, we no longer see this issue.

how to run a daemon process in docker container with API/v1.5?

I'm trying to run a daemon process in a Docker container with API/1.5. Here is my POST request; the container is created successfully but the command seems to have failed. What's the problem here? Please give me some advice, thanks.
{
"Hostname":"",
"User":"",
"Memory":10000000,
"MemorySwap":0,
"AttachStdin":true,
"AttachStdout":true,
"AttachStderr":true,
"PortSpecs": ["8080:8080"],
"Privileged": true,
"Tty":true,
"OpenStdin":true,
"StdinOnce":false,
"Env":null,
"Cmd":[
"nc", "-l", "8080"
],
"Dns":null,
"Image":"base",
"Volumes":{},
"VolumesFrom":"",
"WorkingDir":"~"
}
And here is the response:
HTTP/1.1 201 Created
Content-Type: application/json
Content-Length: 113
Date: Sun, 29 Sep 2013 13:27:52 GMT
{"Id":"9a880dcbbbda","Warnings":["Your kernel does not support memory swap capabilities. Limitation discarded."]}
And I tested whether the container is running with sudo docker ps -l, which showed:
ID IMAGE COMMAND CREATED STATUS PORTS
9a880dcbbbda base:latest nc -l 8080 33 seconds ago Exit 0
The result includes a warning, but it is just a warning, not an error.
It means that your system cannot limit the memory or swap allocated to the container, and therefore, the container will run without the memory or swap limitation. But other than that, it should be running fine.
Is there anything indicating that the container is not running correctly?
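For context, with the old remote API a container is created and then started in a separate call. A rough curl sketch of that flow, assuming the daemon is listening on TCP (e.g. started with -H tcp://127.0.0.1:4243; adjust the address to your setup) and that container.json holds the JSON body from the question:
curl -X POST -H "Content-Type: application/json" -d @container.json http://127.0.0.1:4243/v1.5/containers/create
curl -X POST -H "Content-Type: application/json" -d '{}' http://127.0.0.1:4243/v1.5/containers/9a880dcbbbda/start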
