Run a command on a remote server - linux

What would be the best way to run commands on remote servers? I am thinking of using SSH, but is there a better way?
I use Red Hat Linux, and I want to run a command on one of the servers, specify which other servers the command should also run on, and have it do exactly the same thing on the servers specified. Puppet alone couldn't do this, but I might be able to combine some other tool with Puppet to do the job for me.

It seems you are able to log on to the other servers without entering a password. I assume this is based on SSH keys, as described here: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/s2-ssh-configuration-keypairs.html
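If that is not set up yet, a minimal sketch using the standard OpenSSH tools (the username and hostname are placeholders):
ssh-keygen -t rsa                # generate a key pair, accept the defaults
ssh-copy-id username@server1     # repeat for every server you need to reach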
You say another script is producing a list of servers. You can now use the following simple script to loop over the list:
for server in `./server-list-script`; do
    echo "$server:"
    ssh username@$server mkdir /etc/dir/test123
done >logfile 2>&1
The file "logfile" will collect the output. I'm pretty sure Puppet is able to do this as well.

Your solution will almost definitely end up involving ssh in some capacity.
You may want something to help manage the execution of commands on multiple servers; ansible is a great choice for something like this.
For example, if I want to install libvirt on a bunch of servers and make sure libvirtd is running, I might pass a configuration like this to ansible-playbook:
- hosts: all
  tasks:
    - yum:
        name: libvirt
        state: installed
    - service:
        name: libvirtd
        state: started
        enabled: true
This would ssh to all of the servers in my "inventory" (a file -- or command -- that provides ansible with a list of servers), install the libvirt package, start libvirtd, and then arrange for the service to start automatically at boot.
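For reference, a minimal sketch of such an inventory file (the group name and hostnames are made up); you would pass it with ansible-playbook -i inventory playbook.yml:
[webservers]
server1.example.com
server2.example.com
server3.example.com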
Alternatively, if I want to run puppet apply on a bunch of servers, I could just use the ansible command to run an ad-hoc command without requiring a configuration file:
ansible all -m command -a 'puppet apply'
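The same ad-hoc form also accepts a host pattern or group from the inventory instead of all, for example (the group name here is hypothetical):
ansible webservers -i inventory -m command -a 'puppet apply'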

Related

Nodejs/Gcloud/kubectl any command we run from WSL2 is deadly slow

I have referred to many solutions, yet no luck. I have a Linux automation script that runs a few gcloud commands with some conditions. I made this script with Node.js, but it is so incredibly slow that I can finish the task manually before the script completes its run.
The same happens with gcloud commands when I connect to a cluster, and with kubectl commands when I query something.
Please help!
It could be a DNS config error on the WSL side. I had the same issue today; here's how I fixed it!
1. Checking the (deadly slow) response time
[tbg@~] time kubectl get deployments
No resources found in default namespace.
real 0m10.530s
user 0m0.087s
sys 0m0.043s
2. Checking the WSL/DNS configuration
[tbg@~] cat /etc/wsl.conf
[network]
generateResolvConf=false
[tbg@~] cat /etc/resolv.conf
nameserver XX.XXX.XXX.X
nameserver YYY.YY.YY.YY
nameserver 1.1.1.1
If you see that, remove these lines to get back to automatic resolv.conf generation, then restart WSL (wsl --shutdown).
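A sketch of that fix, assuming you want to drop the manual override entirely (run the first two commands inside WSL, the last one from a Windows prompt):
sudo sed -i '/generateResolvConf/d' /etc/wsl.conf   # or delete the [network] section by hand
sudo rm /etc/resolv.conf                            # the file will be regenerated automatically
wsl --shutdown                                      # from Windows, then reopen the terminal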
3. Checking the (fixed!) response time
[tbg@~] time kubectl get deployments
No resources found in default namespace.
real 0m1.212s
user 0m0.151s
sys 0m0.050s
I found out that my resolv.conf configuration was causing the latency when I tried to reinstall kubectl with apt and noticed that apt was really slow too.
Right now, access to the /mnt folders in WSL2 is very slow, and by default the entire Windows PATH is added to the Linux $PATH at launch, so any Linux binary that scans $PATH becomes unbearably slow.
To disable this feature, edit the /etc/wsl.conf to add the following section:
[interop]
appendWindowsPath = false
This avoids adding the Windows PATH to the Linux $PATH; for now, it's best to add the folders you need to $PATH manually (see the sketch below).
Terminate the WSL distro (wsl.exe --terminate <distro_name>), or run wsl.exe --shutdown, and start the terminal again for the change to take effect immediately.
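If you still need a few Windows tools afterwards, a hedged sketch of adding just those directories back in ~/.bashrc (the path shown is only an example, adjust it to what you actually use):
# add only the Windows directories you really need, instead of the whole Windows PATH
export PATH="$PATH:/mnt/c/Windows/System32"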
Refer to the stack link for more information.

Sandboxing to allow multiple processes to open the same port

Background
I have a command-line application that I use to connect to a remote device on port 1234. I cannot change the port number, and I do not have access to the source to rebuild this tool. I'm currently working in a lab where all ports except SSH are blocked. To get around this, I create a tunnel, i.e.:
ssh -L 1234:remotehost:1234 sshuser@remotehost
Now I can just point my CLI tool at localhost:1234 to connect to the desired host.
Problem
This CLI tool needs to run for about an hour straight, and I have about 200 remote hosts to test with it. I would like to parallelize this task. Unfortunately, I can only create a single tunnel on my local machine using port 1234.
Question
Is there a (trivial/simple/automated) way to jail/sandbox my CLI tool so that I can launch 100 instances in parallel (i.e. via a shell script) so that each instance "thinks" it's talking to port 1234? For example, does Docker or KVM provide some sort of anonymous/on-demand compute node feature that I could set up rapidly? I'd rather not have to resort to manually deploying and managing a slew of VirtualBox hosts via vagrant.
The simple answer is that you can use multiple IP addresses locally. Each local IP address on the client will allow you to create another tunnel. Currently, you are using localhost. But your client also has an IP address. You can prove my point by trying this syntax:
ssh -f -N -L 127.0.0.1:1234:remotehost1:1234 sshuser@remotehost1 # this is the default
ssh -f -N -L <local-IP1>:1234:remotehost2:1234 sshuser@remotehost2 # specifying a non-default bind address <local-IP1>
Now, you just need to figure out how to give your client multiple IP addresses (secondary addresses). Then you can expand this beyond 2 parallel sessions.
I've also added -f and -N to your ssh syntax to put ssh into the background (-f) and to not issue any commands.
Using -R tunnels in the past, I've found that I need to enable GatewayPorts on the server (/etc/ssh/sshd_config). In the case of -L, I don't see the need; however, the ssh man page explicitly mentions GatewayPorts in connection with -L as well, so you may need to play around a bit. I just tried this out on my Mac and was able to get it going without any GatewayPorts considerations.
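Building on that, a hedged sketch of scripting the tunnels on a Linux client, where each remote host gets its own loopback address (on Linux, all of 127.0.0.0/8 already points at loopback; on macOS you would first need something like sudo ifconfig lo0 alias 127.0.0.2). The hosts.txt file name is hypothetical:
#!/bin/bash
# hosts.txt contains one remote host per line
i=2
while read -r host; do
    ssh -f -N -L "127.0.0.$i:1234:$host:1234" "sshuser@$host"
    echo "$host is now reachable at 127.0.0.$i:1234"
    i=$((i + 1))
done < hosts.txt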

How to run multiple scripts in remote machine

I have to remotely connect to a gateway (running on Linux), inside which I have a couple of executable files (signingModule.sh and taxModule.sh).
Now I want to write a script on my desktop which will connect to that gateway and run signingModule.sh and taxModule.sh in two different terminals.
I have written below code:
ssh root@10.138.77.150    # to connect to the gateway
sleep 5
cd /opt/swfiscal/signingModule    # path of both modules
./signingModule    # executable
With this code I am able to connect to my gateway, but after connecting nothing else happens.
2nd code:
source configPath    # file where I have given the path of both modules
cd $FCM_SCRIPTS      # variable in which I have stored the path of the modules
ssh root@10.138.77.150 'sh -' < startSigningModule    # to connect and run one module
As output of this I am getting:
-source: configPath: file not found
Please help me work this out. Thanks in advance.
Notes:
I can copy-paste my files onto that gateway if required.
Gnome-Terminal and other alternatives to it do not work on my gateway.
ssh root@10.138.77.150 "cd /opt/swfiscal/signingModule && ./signingModule"
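That one-liner starts a single module; if you need both modules to keep running after you disconnect (the question mentions two terminals), a hedged sketch using nohup in a single ssh call (the path and module names come from the question, the log file names are made up):
ssh root@10.138.77.150 'cd /opt/swfiscal/signingModule; nohup ./signingModule > signing.log 2>&1 & nohup ./taxModule > tax.log 2>&1 &'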
The line source configPath doesn't work because you need to specify the full path to the file.
You can pass several commands to ssh to run them in sequence, but I prefer a different solution: I keep whole scripts locally, and running them remotely means:
Using scp to copy my script to the remote system
Using ssh to then run the script on the remote system
The big advantage here: there is always a potential for getting things wrong (for example: quoting) when directly giving commands to ssh. But when you put everything into a script, you have exact/full control over what is going to happen. You can put things like "set -e" into your script to improve error handling ...
(and of course, you can also automate the two steps listed above!)
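A hedged sketch of those two steps (the script name is made up; the address is the one from the question):
scp ./start-modules.sh root@10.138.77.150:/tmp/start-modules.sh   # 1. copy the script over
ssh root@10.138.77.150 'bash /tmp/start-modules.sh'               # 2. run it remotely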

docker: SSH access directly into container

Up to now we use several Linux users:
system_foo@server
system_bar@server
...
We want to put the system users into docker containers.
linux user system_foo --> container system_foo
The changes inside the servers are not a problem, but remote systems use these users to send us data.
We need to keep ssh system_foo@server working. The remote systems can't be changed.
It would be very easy if there were just one system user per Linux operating system (just pass port 22 through to the container), but there are several.
How can we change from the old scheme to docker containers and keep the service ssh system_foo@server available without changes at the remote site?
Please leave a comment if you don't understand the question. Thank you.
Let's remember, however, that having ssh support in a container is typically an anti-pattern (unless it's your container's only 'concern', but then what would be the point of being able to ssh in?). Refer to http://techblog.constantcontact.com/devops/a-tale-of-three-docker-anti-patterns/ for more information about that anti-pattern.
nsenter could work for you. First ssh to the host and then nsenter to the container.
PID=$(docker inspect --format '{{.State.Pid}}' <container_name_or_ID>)
nsenter --target $PID --mount --uts --ipc --net --pid
source http://jpetazzo.github.io/2014/06/23/docker-ssh-considered-evil/
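Put together, a hedged sketch of reaching a shell inside a container named system_foo in one step (the container name comes from the question; the rest is standard docker and nsenter usage):
ssh -t root@server "PID=\$(docker inspect --format '{{.State.Pid}}' system_foo); nsenter --target \$PID --mount --uts --ipc --net --pid /bin/bash"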
Judging by the comments, you might be looking for a solution like dockersh. dockersh is used as a login shell, and lets you place every user that logs in to your instance into an isolated container.
This probably won't let you use sftp though.
Note that dockersh includes security warnings in their README, which you'll certainly want to review:
WARNING: Whilst this project tries to make users inside containers
have lowered privileges and drops capabilities to limit users ability
to escalate their privilege level, it is not certain to be completely
secure. Notably when Docker adds user namespace support, this can be
used to further lock down privileges.
Some months ago, I set something like this up. It's not nice, but it works; note that pub-key auth needs to be used.
Script which gets called via the command= option in .ssh/authorized_keys:
#!/usr/bin/python
import os
import sys
import subprocess

# Forward the connection to the SSH daemon listening on port 2222 (inside the container).
cmd = ['ssh', '-p', '2222', 'user@localhost']
if 'SSH_ORIGINAL_COMMAND' not in os.environ:
    cmd.extend(sys.argv[1:])
else:
    cmd.append(os.environ['SSH_ORIGINAL_COMMAND'])
sys.exit(subprocess.call(cmd))
File on system_foo@server: .ssh/authorized_keys
command="/home/modwork/bin/ssh-wrapper.py" ssh-rsa AAAAB3NzaC1yc2EAAAAB...
If the remote system does ssh system_foo@server, the SSH daemon on server executes the command given in .ssh/authorized_keys. This command does an ssh to a different SSH daemon.
In the docker container, an SSH daemon needs to be running that listens on port 2222.
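For completeness, a hedged sketch of publishing that container's sshd on port 2222 of the host (the image name is hypothetical; it just has to run an SSH daemon on port 22 inside the container):
docker run -d --name system_foo -p 127.0.0.1:2222:22 my-sshd-image
ssh -p 2222 user@localhost      # what the wrapper script effectively does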

What user will Ansible run my commands as?

Background
My question seems simple, but it gets more complex really fast.
Basically, I got really tired of maintaining my servers manually (screams in background) and I decided it was time to find a way to make being a server admin much more liveable. That's when I found Ansible. Great huh? Sure beats making bash scripts (louder scream) for everything I wanted to automate.
What's the problem?
I'm having a lot of trouble figuring out what user my Ansible playbook will run certain things as. I also need the ability to specify what user certain tasks will run as. Here are some specific use cases:
Cloning a repo as another user:
My purpose here is to run my node.js webapp as another user, who we'll call bill (who can only use sudo to run a script I made that starts the node server, as opposed to root or my user, which can use sudo for all commands). To do this, I need Ansible's git module to clone my git repo as bill. How would I do that?
Knowing how Ansible will gain root:
As far as I understand, you can set which user Ansible will connect to the server as by defining 'user' at the beginning of the playbook file. Here's what I don't understand: if I tell it to connect via my username, joe, and ask it to update a package via the apt module, how will it gain root? sudo usually prompts me for my password, and I'd prefer to keep it that way (for security).
Final request
I've scoured the Ansible docs, done some (what I thought was thorough) Googling, and generally just tried to figure it out on my own, but this information continues to elude me.
I am very new to Ansible, and while it's mostly straightforward, I would benefit greatly if I could understand exactly how Ansible runs, which users it runs as, and how/where I can specify which user to use at different times.
Thank you tons in advance
You may find it useful to read the Hosts and Users section on Ansible's documentation site:
http://docs.ansible.com/playbooks_intro.html#hosts-and-users
In summary, ansible will run all commands in a playbook as the user specified in the remote_user variable (assuming you're using ansible >= 1.4, user before that). You can specify this variable on a per-task basis as well, in case a task needs to run as a certain user.
Use sudo: true in any playbook/task to use sudo to run it. Use the sudo_user variable to specify a user to sudo to if you don't want to use root.
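A hedged sketch of both ideas in the older sudo/sudo_user syntax this answer uses (the host group, repo URL, paths, and package name are made up; joe and bill come from the question):
- hosts: webservers
  remote_user: joe              # the user ansible connects as
  tasks:
    - name: clone the webapp as bill
      git: repo=https://example.com/webapp.git dest=/home/bill/webapp
      sudo: true
      sudo_user: bill           # run this task via sudo as bill
    - name: update a package as root
      apt: name=nginx state=latest update_cache=yes
      sudo: true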
In practice, I've found it easiest to run my playbook as a deploy user that has sudo privileges. I set up my SSH keys so I can SSH into any host as deploy without using a password. This means that I can run my playbook without using a password and even use sudo if I need to.
I use this same user to do things like cloning git repos and starting/stopping services. If a service needs to run as a lower-privileged user, I let the init script take care of that. A quick Google search for a node.js init.d script revealed this one for CentOS:
https://gist.github.com/nariyu/1211413
Doing things this way helps to keep it simple, which I like.
Hope that helps.
My 2 cents:
1. Ansible uses your local user (e.g. Mike) to ssh to the remote machine. (That requires Mike to be able to ssh to the machine.)
2. From there it can change to a remote user if needed.
3. It can also sudo if needed and if Mike is allowed. If no user is specified, then root will be selected via your ~/.ansible.cfg on your local machine.
4. If you supply a remote_user along with the sudo param, then as in no. 3 it will not use root but that user.
5. You can specify different situations and different users or sudo via the playbooks.
Playbooks define which roles will be run on each machine in the selected inventory.
I suggest you read Ansible best practices for some explanation of how to set up your infrastructure.
Oh, and by the way, since you are not referring to a specific Ansible module and your question is not related to Python, I don't see much point in your question having the python tag.
Just a note that Ansible >= 1.9 uses privilege escalation commands, so you can execute tasks and create resources as a secondary user if need be:
- name: Install software
  shell: "curl -s get.dangerous_software.install | sudo bash"
  become_user: root
https://ansible-docs.readthedocs.io/zh/stable-2.0/rst/become.html
I notice current answers are a bit old and suffering from link rot.
Ansible will SSH as your current user, by default:
https://docs.ansible.com/ansible/latest/user_guide/intro_getting_started.html#connecting-to-remote-nodes
Ansible communicates with remote machines over the SSH protocol. By default, Ansible uses native OpenSSH and connects to remote machines using your current user name, just as SSH does.
This can be overridden using:
passing the -u parameter at the command line
setting user information in your inventory file
setting user information in your configuration file
setting environment variables
But then you must ensure a route exists to SSH as that user. An approach to maintaining user-level ownership that I see more often is to become (root) and then chown -R jdoe:jdoe /the/file/path, as sketched below.
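A hedged sketch of that pattern (the path and the jdoe user come from the chown example above; the repo URL is made up):
- hosts: all
  become: yes                    # escalate to root for the whole play
  tasks:
    - name: clone as root
      git:
        repo: https://example.com/app.git
        dest: /the/file/path
    - name: hand ownership back to jdoe
      file:
        path: /the/file/path
        owner: jdoe
        group: jdoe
        recurse: yes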
In my 2.12 release of Ansible, I found the only way I could change the user was by specifying become: yes at the play level. That way I am SSHing as the unprivileged default user. This user must have passwordless sudo enabled on the remote, which is about the safest I could make my VPS. From there I could then switch to another user, with become_user, in an arbitrary command task.
Like this:
- name: Getting Started
  gather_facts: false
  hosts: all
  become: yes   # All tasks that follow will become root.
  tasks:
    - name: get the username running the deploy
      command: echo $USER
      become_user: trubuntu   # From root we can switch to trubuntu.
If the user permitted SSH access to your remote is, say, victor, and not your current user, then remote_user: victor has a place at the play level, adjacent to become: yes.
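That combination would look something like this (victor and the play skeleton follow the example above; whoami just shows which user the task ends up running as):
- name: Getting Started
  gather_facts: false
  hosts: all
  remote_user: victor   # ssh to the remote as victor instead of the current user
  become: yes           # then escalate with passwordless sudo
  tasks:
    - name: show which user runs the task
      command: whoami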
