How to wait until services are ready - linux

I have been setting up a Jenkins pipeline using Docker images. Now I need to run various services like MySQL, Redis, Memcached, Beanstalkd and Elasticsearch. To make the job wait until MySQL is ready, I am using the following commands:
sh "while ! mysqladmin ping -u root -h mysqlhost ; do sleep 1; done"
sh 'echo MySQL server is up and running'
Where mysqlhost is the hostname I have provided for the container. Similarly, I need to check and wait for Redis, Memcached, Beanstalkd and Elasticsearch. But pinging these services does not work the way it does for MySQL. How can I implement this?

The Docker docs mention this script to manage container readiness checks: https://github.com/vishnubob/wait-for-it
I also use this one which is compatible with Alpine:
https://github.com/eficode/wait-for
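For example, a sketch of how wait-for-it.sh might be invoked from pipeline steps (the hostnames and the timeout here are placeholders, not from the original post):
sh './wait-for-it.sh mysqlhost:3306 --timeout=60 -- echo "MySQL is up"'
sh './wait-for-it.sh redishost:6379 --timeout=60 -- echo "Redis is up"'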

You can curl these services to check whether they are alive or not.
For Redis you can also use the PING command: https://redis.io/commands/ping
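If it helps, here is a sketch of per-service checks along those lines (hostnames such as redishost and eshost are placeholders; the ports are the services' defaults):
# Redis: PING returns PONG once the server is ready
sh 'while ! redis-cli -h redishost ping | grep -q PONG; do sleep 1; done'
# Memcached and Beanstalkd are plain TCP services; wait for the port to open
sh 'while ! nc -z memcachedhost 11211; do sleep 1; done'
sh 'while ! nc -z beanstalkdhost 11300; do sleep 1; done'
# Elasticsearch: wait until the HTTP endpoint answers
sh 'while ! curl -sf http://eshost:9200/_cluster/health; do sleep 1; done'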

Related

Kubernetes Unable to connect to the server: dial tcp x.x.x.x:6443: i/o timeout

I am using a test Kubernetes cluster (kubeadm, 1 master and 2 nodes setup). My public IP changes from time to time, and when it changes I am unable to connect to the cluster and get the error below:
Kubernetes Unable to connect to the server: dial tcp x.x.x.x:6443: i/o timeout
I also have a private IP, 10.10.10.10, which stays consistent all the time.
I created the Kubernetes cluster using the command below:
kubeadm init --control-plane-endpoint 10.10.10.10
But it still failed because the certificates are signed for the public IP, and below is the error:
The connection to the server x.x.x.x:6443 was refused - did you specify the right host or port?
Can someone help me set up kubeadm so that it accepts all IPs, something like 0.0.0.0? I am fine with that from a security standpoint since it is a test setup. Or is there any permanent fix?
Since @Vidya has already solved this issue by using a static IP address, I decided to provide a Community Wiki answer just for better visibility to other community members.
First of all, it is not recommended to have a frequently changing master/server IP address.
As we can find in the discussion on GitHub kubernetes/88648 - kubeadm does not provide an easy way to deal with this.
However, there are a few workarounds that can help us, when the IP address on the Kubernetes master node changes.
Based on the discussion Changing master IP address, I prepared a script that regenerates the certificates and re-initializes the master node.
This script might be helpful, but I recommend running one command at a time (it will be safer).
In addition, you may need to customize some steps to your needs:
NOTE: In the example below, I'm using Docker as the container runtime.
root@kmaster:~# cat reinit_master.sh
#!/bin/bash
set -e
echo "Stopping kubelet and docker"
systemctl stop kubelet docker
echo "Making backup kubernetes data"
mv /etc/kubernetes /etc/kubernetes-backup
mv /var/lib/kubelet /var/lib/kubelet-backup
echo "Restoring certificates"
mkdir /etc/kubernetes
cp -r /etc/kubernetes-backup/pki /etc/kubernetes/
rm /etc/kubernetes/pki/{apiserver.*,etcd/peer.*}
echo "Starting docker"
systemctl start docker
echo "Reinitializing master node"
kubeadm init --ignore-preflight-errors=DirAvailable--var-lib-etcd
echo "Updating kubeconfig file"
cp /etc/kubernetes/admin.conf ~/.kube/config
Then you need to rejoin the worker nodes to the cluster.
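A sketch of that rejoin step (the token and CA hash are placeholders; kubeadm prints the exact join command for you):
# On the master: print a fresh join command
kubeadm token create --print-join-command
# On each worker: reset and rejoin using the printed command
kubeadm reset
kubeadm join 10.10.10.10:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>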

Redis Startup issues on Debian Stretch (9)

Actually I'm on my way to switching to Debian 9 for the company's new production servers and want to provision them with Ansible. So far, everything works fine, but now I'm stuck with redis-server.
By default, Debian 9 comes with Redis version 3.2. I'm installing the package via apt-get install redis-server. After that, Redis starts up as a daemon in the background. Now I want to apply some custom configuration, like binding to 2 different IPs (127.0.0.1 and the server IP).
After changing this as well as the daemonize option (to yes), Redis is no longer willing to start in the background. Whenever I do either service redis-server start or /etc/init.d/redis-server start, the command just hangs.
journalctl -xe tells me that the PID file is not readable (redis-server.service: PID file /var/run/redis/redis-server.pid not readable (yet?) after start-post: No such file or directory), even though it should be created according to the init.d script:
start)
echo -n "Starting $DESC: "
mkdir -p $RUNDIR
touch $PIDFILE
chown redis:redis $RUNDIR $PIDFILE
chmod 755 $RUNDIR
After all, I can see that both service redis-server start and /etc/init.d/redis-server start do start the server, and I'm also able to connect to it via redis-cli. But the damn process still hangs.
Can anyone help? If you need further information, just let me know. I'll provide whatever is possible if it solves the problem!
best
Chris
I had a similar situation on a CentOS 7 server.
The resolution was to change supervised from no to auto:
# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
daemonize yes
# If you run Redis from upstart or systemd, Redis can interact with your
# supervision tree. Options:
# supervised no - no supervision interaction
# supervised upstart - signal upstart by putting Redis into SIGSTOP mode
# supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
# supervised auto - detect upstart or systemd method based on
# UPSTART_JOB or NOTIFY_SOCKET environment variables
# Note: these supervision methods only signal "process is ready."
# They do not enable continuous liveness pings back to your supervisor.
supervised auto
When you run the process as a daemon, it needs to interact with systemd for process management (if I read the documentation correctly).
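For reference, a minimal sketch of applying those two settings, assuming the stock Debian config at /etc/redis/redis.conf:
sudo sed -i 's/^daemonize no/daemonize yes/' /etc/redis/redis.conf
sudo sed -i 's/^supervised no/supervised auto/' /etc/redis/redis.conf
sudo systemctl restart redis-server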
Thanks

Update my node.js code in multiple instances

I have an Elastic Load Balancer in AWS. I have my Node.js code deployed on 3 instances and I'm using pm2 to update my code, but I need to do it manually this way:
Connect via SSH to each machine
Git pull on each machine
pm2 reload all on each machine
How can I update all the code on ALL machines when I push a new commit to master or another branch (like a production branch)?
Thanks.
You can just write a script, for example in bash, to solve this:
# This will run your local script update.sh on the remote
ssh serverIp1 "bash -s" < ./update.sh
Then in your local update.sh you can add code to git pull and reload:
# This code will run on the remote
git pull
pm2 reload all
# Other commands to run on the remote host
You can also have a script that does all of this for all your machines:
ssh serverIp1 "bash -s" < ./update.sh
ssh serverIp2 "bash -s" < ./update.sh
ssh serverIp3 "bash -s" < ./update.sh
or even better:
for ip in serverIp1 serverIp2 serverIp3; do
(ssh $ip "bash -s" < ./update.sh)
done
An alternative is Elastic Beanstalk, especially if you are using a "pure" Node solution (not a lot of extra services on the instances). With Beanstalk, you supply a git ref or ZIP file of your project, and it handles the deployment (starting up new instances, health checks, getting them on the load balancer, removing old instances, etc.). In some ways it is an automated-deployment version of what you have now, because you will still have EC2 instances, a load balancer, etc.
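If you went that route, a deployment can be reduced to a couple of EB CLI commands (a sketch; the application name and region are placeholders, and it assumes the EB CLI is installed):
eb init my-node-app --platform node.js --region us-east-1
eb deploy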
Use a tool like Jenkins (self-hosted) or Travis CI to run your builds and deployments. Many alternatives are available; Jenkins and Travis are just two of the most popular.
OK, thanks for your answers, but I think the best option for me is AWS CodeDeploy.
I don't know why I did not find this before asking the question...

Couchdb cartridge not responding in docker image

I successfully deployed a CouchDB cartridge to WSO2 Stratos and the member got activated successfully. For the implementation of the Dockerfile I used this git code, which includes the lines below, and I have no idea why they are there! Can someone explain the code below?
RUN printf "[httpd]\nport = 8101\nbind_address = 0.0.0.0" > /usr/local/etc/couchdb/local.d/docker.ini
EXPOSE 8101
CMD ["/usr/local/bin/couchdb"]
I tried pointing to the http://127.0.0.1:5984/_utils/spec/run.html URL and it's working perfectly.
I just SSH into the Docker container and start CouchDB:
root@instance-00000001:/usr/local/etc/couchdb/local.d# couchdb
Apache CouchDB 1.6.1 (LogLevel=info) is starting.
Apache CouchDB has started. Time to relax.
[info] [<0.32.0>] Apache CouchDB has started on http://0.0.0.0:8101/
Then I tried pointing the browser to http://0.0.0.0:8101/ and http://127.0.0.1:5984/_utils/index.html, but neither of them works.
Can someone tell me why I can't view my databases and the create-database window?
For your first question about what those lines do:
# Set port and address for couchdb to bind too.
# Remember these are addresses inside the container
# and not necessarily publicly available.
# See http://docs.couchdb.org/en/latest/config/http.html
RUN printf "[httpd]\nport = 8101\nbind_address = 0.0.0.0" > /usr/local/etc/couchdb/local.d/docker.ini
# Tell docker that this port needs to be exposed.
# You still need to run -P when running container
EXPOSE 8101
# This is the command which is run automatically when container is run
CMD ["/usr/local/bin/couchdb"]
As for why you cannot access it: what does your docker run command look like? Did you expose the port? i.e.
docker run -p 8101:8101 ....
Are you by any chance testing on OS X? If so, try http://192.168.59.103:8101/. On OS X, Docker runs inside a VirtualBox VM because Docker cannot run natively on OS X. The IP of the virtual machine can be looked up using boot2docker ip and is normally 192.168.59.103.
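A quick way to verify the mapping (a sketch using the boot2docker-era tooling mentioned above):
boot2docker ip    # prints the VM IP, normally 192.168.59.103
docker ps         # the PORTS column should show 0.0.0.0:8101->8101/tcp
curl http://192.168.59.103:8101/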

How to simultaneously deploy Node.js web app on multiple servers with Jenkins?

I'm going to deploy a Node.js mobile web application on two remote servers (Linux OS).
I'm using an SVN server to manage my project's source code.
To manage the app simply and clearly, I decided to use Jenkins.
I'm new to Jenkins, so installing and configuring it was quite a difficult task.
But I couldn't find out how to set up Jenkins to build for remote servers simultaneously.
Could you help me?
You should look into supervisor. It's language- and application-type agnostic; it just takes care of (re)starting applications.
So in your Jenkins build:
You update your code from SVN
You run your unit tests (definitely a good idea)
You either launch an svn update on each host or copy the current content over (I'd recommend the latter, because SVN can fail in many ways, and this also lets you include SVN_REVISION in some .js file, for instance)
You execute on each host: fuser -k -n tcp $DAEMON_PORT; this kills the currently running application using port $DAEMON_PORT (the one your node.js app listens on)
And best of all, supervisor will automatically start your node.js app at system startup (provided supervisor is correctly installed; apt-get install supervisor on Debian) and restart it in case of failure.
A supervisord sub-config for a node.js app looks like this:
# /etc/supervisor/conf.d/my-node-app.conf
[program:my-node-app]
user = running-user
environment = NODE_ENV=production
directory = /usr/local/share/dir_app
command = node app.js
stderr_logfile = /var/log/supervisor/my-node-app-stderr.log
stdout_logfile = /var/log/supervisor/my-node-app-stdout.log
There are many configuration parameters.
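Once that file is in place, the app can be loaded and managed with standard supervisorctl commands (the program name matches the config above):
supervisorctl reread
supervisorctl update
supervisorctl status my-node-app
supervisorctl restart my-node-app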
Note: there is also a Node.js package called supervisor; it's not the one I'm talking about, and I haven't tested it.
On Linux, you need to SSH into your hosts to run commands to get the application updated:
Work out the application-update workflow in a shell script. In particular, you need to daemonize your node app so that a finished Jenkins job does not kill your app when it exits. Here's a nice article on how to do this: Running node.js Apps With Upstart; you can also use a pure Node.js tool like forever. Assume you worked out a script under /etc/init.d/myNodeApp
SSH into your Linux hosts from Jenkins. You need to make sure the SSH private key file has been copied to /var/lib/jenkins/.ssh/id_rsa and is owned by the jenkins user.
Here's an example shell step in the Jenkins job configuration:
ssh <your application ip> "service myNodeApp stop; cd /ur/app/dir; svn update; service myNodeApp restart"
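To hit both servers from one Jenkins shell step, you could wrap that in a loop (a sketch; app-server-1 and app-server-2 are placeholder hostnames):
for host in app-server-1 app-server-2; do
  ssh "$host" "service myNodeApp stop; cd /ur/app/dir; svn update; service myNodeApp restart"
done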
