I have a Linux system running several Docker containers. One of them is a Mosquitto container, which runs from the mosquitto 1.6.7 Docker image.
I do not have control over how the Mosquitto container is created, as it is supplied as-is by a supplier/client.
I need to make changes to the mosquitto/config/mosquitto.conf file. This is the output when I run ls -l:
/mosquitto/config # ls -l
total 4
-rwxrwxr-x 1 nobody nobody 210 May 24 05:35 mosquitto.conf
I tried the commands below to add a comment to mosquitto.conf, but I was not successful.
/mosquitto/config # echo '#test' | su nobody -c 'tee -a mosquitto.conf'
nologin: this account is not available
/mosquitto/config # echo '#test' | su nobody -s sh -c 'tee -a mosquitto.conf'
su: can't execute 'sh': No such file or directory
/mosquitto/config # echo '#test' | su nobody -s bin/sh -c 'tee -a mosquitto.conf'
su: can't execute 'bin/sh': No such file or directory
/mosquitto/config # echo '#test' | su nobody -s /bin/sh -c 'tee -a mosquitto.conf'
tee: mosquitto.conf: Permission denied
#test
Is it possible to change the mosquitto.conf?
If yes, how? Thanks.
You don't.
You make a copy of it on the host machine, edit it there, and then mount the edited copy into the container when you start it.
e.g.
docker run -d -v /path/to/local/mosquitto.conf:/mosquitto/config/mosquitto.conf mosquitto
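For example, you could first copy the current config out of the running container, edit it on the host, and recreate the container with the edited copy bind-mounted. A minimal sketch, assuming the container is named mosquitto and the image is eclipse-mosquitto:1.6.7 (both names are assumptions, adjust to whatever your supplier actually uses):

# copy the current config out of the running container
docker cp mosquitto:/mosquitto/config/mosquitto.conf ./mosquitto.conf
# edit ./mosquitto.conf on the host, then recreate the container with it mounted
docker rm -f mosquitto
docker run -d --name mosquitto \
    -v "$PWD/mosquitto.conf:/mosquitto/config/mosquitto.conf" \
    eclipse-mosquitto:1.6.7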
I seem to have a weird issue:
I want to restart a reverse SSH tunnel on boot. I've tried it with an init script (which works fine when executed as a user) and with an added line in /etc/rc.d, but none of it works. What I get after boot is:
$ ps ax | grep autossh
397 pts/10 S+ 0:00 grep --color=auto autossh
1351 ? Ss 0:00 /usr/lib/autossh/autossh -M 22221 -N -o PubkeyAuthentication=yes -o PasswordAuthentication=no -i ~/.ssh/etherwan.key -R 19999:localhost:22 ubuntu@server
but I'm unable to log in from the server. So I did the following after boot:
$ sudo killall -KILL autossh
[sudo] password for ron:
$ /usr/bin/autossh -M 22221 -f -N -o "PubkeyAuthentication=yes" -o "PasswordAuthentication=no" -i ~/.ssh/etherwan.key -R 19999:localhost:22 ubuntu@server
upon which I can log in using port 19999 just fine!
The key's permissions look like this (but root should not need to care, would it?):
$ ls -l ~/.ssh/etherwan.key
-r-------- 1 ron ron 1675 Nov 6 04:15 /home/ron/.ssh/etherwan.key
Replace ~/.ssh/etherwan.key in your rc.d script with /home/ron/.ssh/etherwan.key.
The '~' character is expanded to the current user's home directory by the shell, but rc.d scripts run as root, so ~ resolves to /root, where your key does not exist.
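A sketch of the corrected line for the rc.d script (the command and paths are taken from the question, only the key path changes):

/usr/bin/autossh -M 22221 -f -N \
    -o PubkeyAuthentication=yes -o PasswordAuthentication=no \
    -i /home/ron/.ssh/etherwan.key \
    -R 19999:localhost:22 ubuntu@server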
I do not have passwordless SSH enabled between my two servers, a and b, so I am using sshpass to connect to server b from a.
I have a requirement to add host entries to /etc/hosts on server b from server a. The user I log into server b as is a non-root user, but it has sudo privileges to edit files owned by root.
How do I add host entries to /etc/hosts on server b from server a through a shell script, while using sshpass?
Here is the script that was tried:
#!/bin/bash
export SSHPASS="password"
SSHUSER=ciuser
WPC_IP=10.8.150.28
sshpass -e ssh -o UserKnownHostsFile=/dev/null -o 'StrictHostKeyChecking no' $SSHUSER@$WPC_IP "echo test >> /etc/hosts"
Output:
bash test.sh
Warning: Permanently added '10.8.150.28' (RSA) to the list of known hosts.
bash: /etc/hosts: Permission denied
Thank you.
sudo doesn't apply to redirections: the >> is performed by the shell before sudo runs, so the append happens without elevated privileges. Use sudo tee -a to append to the file instead:
echo '1.2.3.4 test' | sudo tee -a /etc/hosts
In your command, this would be:
sshpass -e ssh -o UserKnownHostsFile=/dev/null -o 'StrictHostKeyChecking no' "$SSHUSER@$WPC_IP" "echo test | sudo tee -a /etc/hosts"
Note that this requires passwordless sudo access without a tty, which is not necessarily the same as your sudo privileges.
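Putting it together, a sketch of the whole script (the host entry is a placeholder, and this assumes ciuser has NOPASSWD sudo on server b):

#!/bin/bash
export SSHPASS="password"
SSHUSER=ciuser
WPC_IP=10.8.150.28
# placeholder entry: replace with the real address and hostname
ENTRY='10.8.150.99 somehost.example'
sshpass -e ssh -o UserKnownHostsFile=/dev/null -o 'StrictHostKeyChecking no' \
    "$SSHUSER@$WPC_IP" "echo '$ENTRY' | sudo tee -a /etc/hosts"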
I have a script with 2 SSH commands. The script uses SSH to log into a remote server and delete Docker images.
ssh person@someserver.com 'set -x &&
echo "Stop docker images" ;
sudo docker stop $(sudo docker ps -a -q) ;
sudo docker rmi -f $(sudo docker images -q) ;
sudo docker rm -f $(sudo docker ps -a -q)'
Note the use of ; to separate the commands (we don't care if one or more of them fail).
The 2nd SSH command logs into the same server, grabs a docker-compose file and runs docker-compose.
ssh person@someserver.com 'set -x &&
export AWS_CONFIG_FILE=/somelocation/myaws.conf &&
aws s3 cp s3://com.somebucket.somewhere/docker-compose/docker-compose.yml . --region us-east-1 &&
echo "Get ECR login credentials and do a docker compose up" &&
sudo $(aws ecr get-login --region us-east-1) &&
sudo /usr/local/bin/docker-compose up -d'
Note the use of && to separate the commands (this time we do care if one or more of them fail, as we grab the exit code, i.e. exitCode=$?).
I don't like the fact that I have to split this into 2 calls, so my question is: can these 2 sections of bash commands be combined into a single SSH call (with both ; and && combinations)?
Although it is possible to pass a set of commands as a simple single-quoted string, I wouldn't recommend that, because:
internal quotation marks would have to be escaped
code that looks like one big string is difficult to read (and maintain!) in a text editor
I find it better to keep the scripts in separate files, then pass them to ssh as standard input:
cat script.sh | ssh -T user@host -- bash -s -
Execution of several scripts is done in the same way. Just concatenate more scripts:
cat a.sh b.sh | ssh -T user@host -- bash -s -
If you still want to use a string, use a here document instead:
ssh -T user@host -- <<'END_OF_COMMANDS'
# put your script here
END_OF_COMMANDS
Note the -T option. You don't need pseudo-terminal allocation for non-interactive scripts.
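Applied to the two snippets from the question, everything can also go into one quoted string, mixing ; and && as needed: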
ssh person@someserver.com 'set -x;
echo "Stop docker images" ;
sudo docker stop $(sudo docker ps -a -q) ;
sudo docker rmi -f $(sudo docker images -q) ;
sudo docker rm -f $(sudo docker ps -a -q) ;
export AWS_CONFIG_FILE=/somelocation/myaws.conf &&
aws s3 cp s3://com.somebucket.somewhere/docker-compose/docker-compose.yml . --region us-east-1 &&
echo "Get ECR login credentials and do a docker compose up" &&
sudo $(aws ecr get-login --region us-east-1) &&
sudo /usr/local/bin/docker-compose up -d'
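Since ssh exits with the exit status of the remote command, you can still capture the code of the && chain on the calling side, e.g.:

# run the combined command shown above, then grab its status
ssh person@someserver.com '...'
exitCode=$?
echo "remote commands exited with $exitCode"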
I have a CentOS 7 base AMI and have successfully changed the user name using EC2 launch user data, modified from an Amazon Linux script:
#!/bin/bash
groupadd ec2-user
usermod -d /home/ec2-user -m -g ec2-user -l ec2-user centos
echo "" | sudo tee -a /etc/sudoers
echo "Defaults:root !requiretty" | sudo tee -a /etc/sudoers
echo "ec2-user ALL=(ALL) NOPASSWD: ALL" | sudo tee -a /etc/sudoers
echo "Defaults:ec2-user !requiretty" | sudo tee -a /etc/sudoers
Login works as expected and the home directory has been changed; however, when I use sudo it still asks for a password. As I cannot get into the file to check its format, I wonder if I am using the correct syntax.
How do I change the user and remove the sudo password requirement in a single script?
I believe your cloud-init user data script is failing because it attempts to use sudo without a tty (and the !requiretty entries haven't been added yet). Since that script runs as root anyway, this should work:
#!/bin/bash
groupadd ec2-user
usermod -d /home/ec2-user -m -g ec2-user -l ec2-user centos
echo "" | tee -a /etc/sudoers
echo "Defaults:root !requiretty" | tee -a /etc/sudoers
echo "ec2-user ALL=(ALL) NOPASSWD: ALL" | tee -a /etc/sudoers
echo "Defaults:ec2-user !requiretty" | tee -a /etc/sudoers
I am working with Chef on EC2 instances and created a user data script, passed in through the knife ec2 command, which creates a new user, copies the public key file from the default ec2-user, and sets the correct ownership and permissions.
#!/bin/bash
CHEFUSER="$(date +%s | sha256sum | base64 | head -c 32)"
useradd $CHEFUSER
echo $CHEFUSER 'ALL=(ALL) NOPASSWD:ALL' | tee -a /etc/sudoers
cp -f /home/ec2-user/.ssh/authorized_keys /tmp/
chown $CHEFUSER /tmp/authorized_keys
runuser -l $CHEFUSER -c 'mkdir ~/.ssh/'
runuser -l $CHEFUSER -c 'mkdir ~/.aws/'
runuser -l $CHEFUSER -c 'chmod 700 ~/.ssh/'
runuser -l $CHEFUSER -c 'mv -f /tmp/authorized_keys ~/.ssh/'
runuser -l $CHEFUSER -c 'chmod 600 ~/.ssh/authorized_keys'
Checking the ownership and permissions after running the script, everything seems to be as expected:
# ls -l .ssh/authorized_keys
-rw-------. 1 NWYzMThiMDBmNzljOTgxZmU1NDE1ZmE0 root 396 May 29 11:28 .ssh/authorized_keys
# stat -c '%a %n' .ssh/
700 .ssh/
# stat -c '%a %n' .ssh/authorized_keys
600 .ssh/authorized_keys
If I SSH with the new user, the key is rejected. On a new instance, if I copy and paste the same commands as root in a terminal (which is how the script runs, according to Amazon), everything works fine and I can then SSH in with the new user.
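One way to narrow this down (a debugging sketch, not a confirmed fix; the key path and user@host below are placeholders) is to compare the client's verbose output with sshd's log on the instance while the login fails:

# client side: shows which key is offered and how the server responds
ssh -vvv -i /path/to/private.key newuser@instance
# server side (CentOS): sshd logs the reason a public key is rejected
sudo tail -f /var/log/secure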