I'm trying to run the following from PowerShell, but it is not skipping these databases:
replicate-couchdb-cluster -d -v -s http://couch-instance1:5984 -t http://couch-instance2:5984 -i _users,_replicator,_global_changes
Any reason why this is happening? I'm able to skip the dbs when running from Command Prompt. With the -i option I was expecting the databases to be skipped when executing from PowerShell.
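A minimal sketch of a possible fix, assuming the problem is that PowerShell parses the unquoted comma-separated list as an array and hands it to the executable as separate arguments; quoting the list keeps it as a single token:
replicate-couchdb-cluster -d -v -s http://couch-instance1:5984 -t http://couch-instance2:5984 -i '_users,_replicator,_global_changes'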
I'm trying to back up a single table and restore it to another clean server. I do it like this:
innobackupex --tables='db.table1' --compress --stream=xbstream ./ | ssh user@ip "xbstream -x -C /var/lib/mysql/partial-backup/"
(I have tried both --tables and --include for the filter.)
In the output I see:
Skipping ./db/table1.ibd.
No errors occur. What could be the reason for this?
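One thing that may be worth checking, purely as an assumption on my part: --tables and --include take a regular expression matched against the fully qualified database.table name, so a pattern the shell or the regex engine mangles can silently exclude the table. An anchored, escaped version of the same command:
innobackupex --include='^db[.]table1$' --compress --stream=xbstream ./ | ssh user@ip "xbstream -x -C /var/lib/mysql/partial-backup/"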
First: I have searched the forum and also went through the documentation, but still cannot get it right.
So, I have a docker command I want to run on a remote server, from my bash script. I want to pass an environment variable that is set on the local machine running the script to the remote server. Furthermore, I need a response from the remote command.
Here is what I actually am trying to do and what I need: the script is a tiny wrapper around our Traefik/Docker/Elixir/Phoenix app setup to be able to connect easily to the running Elixir application, inside the Erlang observer. With the script, the steps would be:
ssh into the remote machine
docker ps to see all running containers, since in our blue/green deploy the active one changes name
docker exec into the correct container
execute a command inside the docker container to connect to the running Elixir application
The command I am using now is:
CONTAINER=$(ssh -q $USER@$IP 'sudo docker ps --format "{{.Names}}" | grep ""$APP_NAME"" | head -n 1')
The main problem is the part with the grep and the env var... It is empty and does not get replaced. That makes sense, since the variable does not exist on the remote machine, only on my local machine. I tried single quotes, $(), ... Either it just does not work, or the solutions I find online execute the command but give me no way of getting the container name, which I need for the subsequent command:
ssh -o 'RequestTTY force' $USER@$IP "sudo docker exec -i -t $CONTAINER /bin/bash -c './bin/app remote'"
Thanks for your input!
First, are you sure you need to call sudo docker stop? Stopping the containers did not seem to be part of the workflow you mentioned. [edit: not applicable anymore]
Basically, you use a double double-quote, grep ""$APP_NAME"", but this variable is not substituted (the whole command 'sudo docker ps …' is single-quoted); according to your question, the variable is available locally but not on the remote machine, so you may try writing:
CONTAINER=$(ssh -q $USER@$IP 'f() { sudo docker ps --format "{{.Names}}" | grep "$1" | head -n 1; }; f "'"$APP_NAME"'"')
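To see why this works, note how the quotes splice: the single-quoted remote command is closed just before the argument, $APP_NAME is expanded locally inside double quotes, and the single quotes are reopened. With APP_NAME=myapp (a placeholder value), the remote shell receives:
f() { sudo docker ps --format "{{.Names}}" | grep "$1" | head -n 1; }; f "myapp"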
You can try this single command:
ssh -t $USER@$IP "docker exec -it \$(docker ps -a -q --filter name=/$APP_NAME) bash -c './bin/app remote'"
You will need to redirect the command, with the local environment variable (APP_NAME), into the ssh command using <<<, like so:
CONTAINER=$(ssh -q $USER@$IP <<< "sudo docker ps --format '{{.Names}}' | grep \"$APP_NAME\" | head -n 1 | xargs -I{} sudo docker stop {}")
I am trying to link a window from another session by specifying the target session using a format variable. That way I hope to get it always linked next to the currently active window.
The hard coded version of the working command:
:link-window -a -s 1:remote -t 0:2
in which case I specify the target literally. When I try any of:
:link-window -a -s 1:remote -F -t "#{session_name}":"#{window_index}"
:link-window -a -s 1:remote -F "#{session_name}":"#{window_index}"
:link-window -a -s 1:remote -t "#{session_name}":"#{window_index}"
I get an error. The notable part here is that when I use the -F flag, the usage for the link-window command is displayed, and when I omit it and use only -t, the error is can't find window #{session_name}
Does it mean that link-window command simply doesn't support format variables?
-t does not support format variables, and link-window does not support -F. run-shell will expand formats, though, so you can do it that way; for example:
run "tmux linkw -t '#{session_name}'"
I'm trying to do two commands in docker exec. Concretely, I have to run a command inside a specific directory.
I tried this, but it didn't work:
docker exec [id] -c 'cd /var/www/project && composer install'
The -c parameter is not recognized.
I also tried this:
docker exec [id] cd /var/www/project && composer install
But the command composer install is executed after the docker exec command.
How can I do it?
In your first example, you are giving the -c flag to docker exec. That's an easy answer: docker exec does not have a -c flag.
In your second example, your shell is parsing this into two commands before Docker even sees it. It is equivalent to this:
if docker exec [id] cd /var/www/project
then
composer install
fi
First, the docker exec is run, and if it exits 0 (success), composer install will try to run locally, outside of Docker.
What you need to do is pass both commands in as a single argument to docker exec using a string. Then they will not be interpreted by a shell until already inside the container.
docker exec [id] "cd /var/www/project && composer install"
However, as you noted in the comments, this also does not work. That's because cd is a shell builtin, and doesn't exist on its own. Trying to execute it as the initial command will fail. So the next step is to hand this off to a shell to execute.
docker exec [id] "bash -c 'cd /var/www/project && composer install'"
However, at this point the whole thing is still a single argument, so Docker looks for a binary literally named bash -c '…' rather than running bash. Drop the quotes around the bash command so that bash, -c, and the command string are passed as separate arguments:
docker exec [id] bash -c 'cd /var/www/project && composer install'
Everything after the container ID is the command to run, so in the first example -c isn't an option to exec; it's a command docker tries to run, which fails since no such command exists.
Most likely you found this syntax from a docker run command where the entrypoint was set to /bin/sh. However, exec bypasses the entrypoint, so you need to include the full command to run. As others have pointed out, that command includes a shell such as bash or, in the example below, sh:
docker exec [id] /bin/sh -c 'cd /var/www/project && composer install'
The other answers are fine if you want to run 2 arbitrary commands. But if the first command is simply cd, then you should use the -w option to set the working directory instead.
docker exec -w {dir} {container} {commands}
So in your example:
docker exec -w /var/www/project {container} composer install
As Nehal J Wani said in his comment, the correct syntax is the following:
docker exec [id] /bin/bash -c 'cd /var/www/project && composer install'
many thanks!
I would like to add my example here because it is a bit more complex than the ones shown above. This example also illustrates how to find the container ID that should be used in the docker exec command.
I needed to execute a composite docker exec command against a docker container over ssh.
I managed to achieve this in 2 steps:
- definition of a variable that contains the command, exported as an environment variable
- an ssh command that runs it
Environment variable definition:
export COMMAND="bash -c 'php bin/console --version && composer --version'"
The ssh command that runs it on the remote system:
ssh -t -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i keyfile.pem ec2-user@111.222.111.222 'docker exec `docker ps|grep php|grep api|grep -v cron|awk '"'"'{print $1}'"'"'` '$COMMAND
As you can see, I left $COMMAND outside the single quotes so that its actual value is passed to the ssh process.
The output of the command execution is:
Warning: Permanently added '111.222.111.222' (ECDSA) to the list of known hosts.
Cannot load Xdebug - it was already loaded
Symfony 4.3.5 (env: dev, debug: true)
Cannot load Xdebug - it was already loaded
Composer version 1.9.1 2019-11-01 17:20:17
Connection to 111.222.111.222 closed.
If you wish to execute this as a single line, you can use a slightly modified version of my first example:
COMMAND="bash -c 'php bin/console --version && composer --version'" ssh -t -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i keyfile.pem ec2-user#111.222.111.222 'docker exec `docker ps|grep php|grep api|grep -v cron|awk '"'"'{print $1}'"'"'` '$COMMAND
I am trying to build an HA two-node cluster with Pacemaker and Corosync for postgresql-9.3. I am using the link below as my guide.
http://kb.techtaco.org/#!linux/postgresql/building_a_highly_available_multi-node_cluster_with_pacemaker_&_corosync.md
However, I cannot get past the part where I need to do pg_basebackup, as shown below.
[root@TKS-PSQL01 ~]# runuser -l postgres -c 'pg_basebackup -D /var/lib/pgsql/9.3/data -l `date +"%m-%d-%y"`_initial_cloning -P -h TKS-PSQL02 -p 5432 -U replicator -W -X stream'
pg_basebackup: directory "/var/lib/pgsql/9.3/data" exists but is not empty
/var/lib/pgsql/9.3/data on TKS-PSQL02 is confirmed empty:
[root@TKS-PSQL02 9.3]# ls -l /var/lib/pgsql/9.3/data/
total 0
Any idea why I am getting this error? And is there a better way to do PostgreSQL HA? Note: I am not using shared storage for the database, so I could not proceed with Red Hat clustering.
Appreciate all the answers in advance.
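One detail that may matter here, offered as a hint rather than a confirmed diagnosis: pg_basebackup creates and fills the -D directory on the machine where it runs, here TKS-PSQL01, while -h TKS-PSQL02 only names the server to copy from. So the "exists but is not empty" check applies to TKS-PSQL01, e.g.:
runuser -l postgres -c 'ls -lA /var/lib/pgsql/9.3/data/'   # run on TKS-PSQL01; this is the directory that must be empty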