How do I run a shell script on multiple GCE machines concurrently? - linux

I have multiple Linux GCE machines and there's a bash script I want to run on some of them from time to time. Right now I'm connecting manually to each machine separately in order to run the script.
How do I run the script on several machines concurrently without connecting to each machine individually each time? How do I configure on which machines it should run?

You can use csshX to SSH into multiple servers at once. Once logged in to the servers, you can execute the script as per your need. You can follow this link to install it on macOS.
Another alternative is to schedule a cron job on each server so the script runs at a specific time. You can edit these cron jobs across all servers using csshX.
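For example, a minimal crontab entry (the script path /home/user/myscript.sh is a placeholder, not from the question) that runs the script every day at 02:00 would look like this:
# m h dom mon dow command
0 2 * * * /bin/bash /home/user/myscript.sh >> /home/user/myscript.log 2>&1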

Try Fabric. You can write simple scripts in Python that you can run on multiple hosts (to which you will need SSH access).
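As a rough sketch of how it is invoked (the fabfile.py and the task name run_script are hypothetical, not from the answer):
pip install fabric
# -H takes a comma-separated list of hosts; run_script is a task defined in fabfile.py
fab -H 10.128.0.2,10.128.0.3 run_script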

Related

Issue in executing a sequence of Linux commands using JMeter installed on a Linux VM

Hope you are doing great.
I am reaching out to the community as I am currently stuck on a problem of executing a sequence of commands from a Linux machine using JMeter.
A bit of background:
I have an external VM which is used to mimic the transfer of files to various inbound channels.
This VM basically acts as a third party which hosts files that are then transferred to different locations by following a sequence of commands.
The sequence of commands that I am trying to execute to mimic the third party is as below:
ls (to list the files in the home dir)
mv test123.txt test456.txt (this renames the file in the home dir from test123.txt to test456.txt)
Then we connect to the file exchange server using the command below:
sftp -P 24033 testuser@test-perf.XYZ.com
password is test#123456
Once connected, we execute the below sequence:
ls (this will list the folders Inbound and Route)
cd Route (to change dir to Route)
ls (to list the account IDs)
put test456.txt 12345 (12345 is the account ID)
After the last command executes, the file is transferred to an internal folder based on the account ID.
I did some searching on Stack Overflow and found a couple of links, but was not able to use them successfully to simulate the above sequence of commands.
The closest one I could find is below:
How to execute Linux command or shell script from APACHE JMETER
But this does not talk about executing from a Linux machine itself.
Any help on how to approach this would be appreciated. Thanks in advance.
PS: I am using JMeter because I have to keep this sequence executing continuously until I have transferred the expected number of files within a peak-hour duration, and these files are of different sizes, ranging from a few MB to a couple of GB.
New Edit
I used the JSR223 PreProcessor, where I have my sequence of commands, and then I call that command in the OS Process Sampler, and created a script as below.
The script executes on the Linux box without any error but the file is not transferred to the destination. Am I missing something?
In my research I found the lftp command, but I am not sure how to use it in my case, or whether it will work or not.
Any suggestions?
To execute commands on the local Linux machine you can use the OS Process Sampler.
To execute commands on a remote Linux machine you can use the SSH Command Sampler.
See the How to Run External Commands and Programs Locally and Remotely from JMeter article for more information if needed.
To transfer the file from the local machine to the remote one you can use the SSH SFTP Sampler.
In order to get the SSH Command and SSH SFTP Samplers, install the SSH Protocol Support plugin using the JMeter Plugins Manager.
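For the upload step itself, one option (standard OpenSSH tooling, not specific to JMeter) is to drive sftp in batch mode. A minimal sketch using the file names from the question, assuming key-based authentication since batch mode cannot prompt for a password:
# contents of a batch file, e.g. upload.batch (hypothetical name)
cd Route
put test456.txt 12345
# run it non-interactively, e.g. from the OS Process Sampler
sftp -b upload.batch -P 24033 testuser@test-perf.XYZ.com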

How can I send a command to X number of EC2 instances via SSH

I have a lot of AWS EC2 instances and I need to execute a Python script on them at the same time.
From my PC, I have been trying to execute the script by sending the required commands via SSH. For this, I have created another Python script that opens a cmd terminal and then executes some commands (the ones I need to execute the Python script on each instance). Since I need all these cmd terminals to be opened at the same time, I have used ThreadPoolExecutor, which (with my PC's characteristics) grants me 60 runs in parallel. This is the code:
import os
from concurrent.futures import ThreadPoolExecutor

# one IP address per line
ipAddressesList = list(open("hosts.txt").read().splitlines())

def functionMain(threadID):
    # open a cmd window and run the script on the remote instance over SSH
    os.system(r'start cmd ssh -o StrictHostKeyChecking=no -i mysshkey.pem ec2-user@'
              + ipAddressesList[threadID] + ' "cd scripts && python3.7 script.py"')

functionMainList = list(range(0, len(ipAddressesList)))
with ThreadPoolExecutor() as executor:
    results = executor.map(functionMain, functionMainList)
The problem with this is that the command that executes script.py blocks the terminal until the end of the process, hence functionMain stays waiting for the result. I would like to find a way so that after sending the command python3.7 script.py the function ends but the script keeps executing on the instance, so the pool executor can continue with the other threads.
The AWS Systems Manager Run Command can be used to run scripts on multiple Amazon EC2 instances (and even on-premises computers if they have the Systems Manager agent installed).
Run Command can also return the results of the commands run on each instance.
This is definitely preferable to connecting to the instances via SSH to run commands.
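As a rough sketch of what that looks like from the AWS CLI (the tag filter and the script path are assumptions, not taken from the question):
aws ssm send-command \
  --document-name "AWS-RunShellScript" \
  --targets "Key=tag:Role,Values=script-runner" \
  --parameters 'commands=["cd /home/ec2-user/scripts && python3.7 script.py"]'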
Forgive me for not providing a "code" answer, but I believe there are existing tools that already solve this problem. This sounds like an ideal use of ClusterShell:
ClusterShell provides a light and unified command execution Python framework to help administer GNU/Linux or BSD clusters. Some of the most important benefits of using ClusterShell are to:
provide an efficient, parallel and highly scalable command execution engine in Python,
Using clush you can execute commands in parallel across many nodes. It has options for grouping the output by hostname as well.
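For example, a minimal clush invocation (the node set and script path are placeholders) might look like this:
# -w selects the nodes, -b groups identical output across nodes
clush -b -w node[01-15] 'bash /path/to/your_script.sh'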
Another option would be to use Ansible, but you'll need to create a playbook in that case whereas with ClusterShell you are running a command the same way you would with SSH. With Ansible, you will create a target group for a playbook and it will connect up to each instance and tell it to run the playbook. To make it disconnect while the command is still running, look into asynchronous actions:
By default Ansible runs tasks synchronously, holding the connection to the remote node open until the action is completed. This means within a playbook, each task blocks the next task by default, meaning subsequent tasks will not run until the current task completes. This behavior can create challenges. For example, a task may take longer to complete than the SSH session allows for, causing a timeout. Or you may want a long-running process to execute in the background while you perform other tasks concurrently. Asynchronous mode lets you control how long-running tasks execute.
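As a quick illustration with an ad-hoc command rather than a full playbook (the inventory file and script path are placeholders): -B sets the async timeout in seconds and -P 0 means fire and forget.
ansible all -i hosts.ini -m shell -a 'bash /path/to/your_script.sh' -B 3600 -P 0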
I've used both of these in HPC environments with more than 5,000 machines and they both will work well for your purpose.

Is it possible to write a shell script that takes input then will ssh into a server and run scripts on my local machine?

I have several scripts on my local machine. These scripts run install and configuration commands to set up my Elasticsearch nodes. I have 15 nodes coming and we definitely do not want to do that by hand.
For now, let's call them Script_A, Script_B, Script_C and Script_D.
Script_A will be the one to initiate the process; it currently contains:
#!/bin/bash
read -p "Enter the hostname of the remote machine: " hostname
echo "Now connecting to $hostname!"
ssh root@$hostname
This works fine obviously and I can get into any server I need to. My confusion is about running the other scripts remotely. I have read a few other articles/SO questions, but I'm just not understanding the methodology.
I will have a directory on my machine as follows:
Elasticsearch_Installation
|
|=> Scripts
       |
       |=> Script_A, Script_B, etc.
Can I run Script_A, which remotes into the server, then come back to my local machine and run Script_B and so on within the remote server, without moving the files over?
Please let me know if any of this needs to be clarified, I'm fairly new to the Linux environment in general.. much less running remote installs from scripts over the network.
Yes, you can. Use ssh in non-interactive mode; it will be like launching a command in your local environment.
ssh root@$hostname /remote/path/to/script
Nothing will be changed in your local system; you will be at the same point where you launched the ssh command.
NB: this command will ask you for a password. If you want a truly non-interactive flow, set up passwordless login to the host, as explained here:
How to ssh to localhost without password?
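A minimal way to set that up (assuming OpenSSH on both ends) is:
# generate a key pair if you don't already have one, then install the public key on the remote host
ssh-keygen -t ed25519
ssh-copy-id root@$hostname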
You have a larger problem than just setting up many nodes: you have to be concerned with ongoing maintenance and administration of all those nodes, too. This is the space in which configuration management systems such as Puppet, Ansible, and others operate. But these have a learning curve to overcome, and they require some infrastructure of their own. You would probably benefit from one of them in the medium-to-long term, but if your new nodes are coming next week(ish) then you probably want a solution that you can use immediately to get up and going.
Certainly you can ssh into the server to run commands there, including non-interactively.
My confusion is running the other scripts remotely.
Of course, if you want to run your own scripts on the remote machine then they have to be present there, first. But this is not a major problem, for if you have ssh then you also have scp, the secure copy program. It can copy files to the remote machine, something like this:
#!/bin/bash
read -p "Enter the hostname of the remote machine: " hostname
echo "Now connecting to $hostname!"
scp Script_[ABCD] root@${hostname}:./
ssh root@${hostname} ./Script_A
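Since you have 15 nodes coming, a small extension of the same idea (assuming a hypothetical hosts.txt with one hostname per line) is to loop instead of prompting:
#!/bin/bash
# copy the scripts to every host listed in hosts.txt, then run Script_A on each
while read -r hostname; do
    scp Script_[ABCD] "root@${hostname}:./"
    ssh "root@${hostname}" ./Script_A
done < hosts.txt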
I also manage Elasticsearch clusters with multiple nodes. A hack that works for me is using the Terminator terminal emulator and splitting it into multiple windows/panes, one for each ES node. Then you can broadcast the commands you type in one window into all the windows.
This way, you run commands and view their results almost interactively across all nodes in parallel. You could also save this layout of windows in Terminator, and then you can get this view back quickly using a shortcut.
PS: this approach will only work if you have a small number of nodes, and even then only for small tasks. The only thing that will scale with the number of nodes and with the number and variety of tasks you need to perform will probably be a config management solution like Puppet or Salt.
Fabric is another interesting project that may be relevant to your use case.

How to connect and run commands on Ubuntu and Windows machines?

I want to run a few scripts (JMeter scripts) on a few AWS Ubuntu machines and Windows 7 machines. To do the work manually, I usually use WinSCP (to transfer files) and PuTTY (to run commands) for Linux, and Remote Desktop Connection for Windows machines.
Now I want to automate these processes and would like to know how to achieve that. My intention is to write code to:
connect to these machines,
copy the scripts,
run the scripts,
fetch the log files and
close the machine
I also want to schedule them. What is the best way of doing this? I would also prefer that the code I write can be hosted somewhere (so that a REST API can be exposed) or called as a direct library (API) from my Java server.
I know that Chef scripts can be written, but I want to know about other alternatives. https://www.chef.io/chef/
Thanks a ton in advance.

How to write a shell script to run scripts on several remote machines without ssh?

Can anyone please tell me how I can write a bash shell script that executes another script on several remote machines without ssh?
The scenario is that I have a couple of scripts that I need to run on 100 Amazon EC2 cloud instances. The naive approach is to write a script that scps both source scripts to all the instances and then runs them by doing an ssh on each instance. Is there a better way of doing this?
Thanks in advance.
If you just want to do stuff in parallel, you can use Parallel SSH or Cluster SSH. If you really don't want to use SSH, you can install a task queue system like Celery. You could even go old school and just have a cron job that periodically checks a location in S3 and, if the key exists, downloads the file and runs it, though you have to be careful to only run it once. You can also use tools like Puppet and Chef if you're generally trying to manage a bunch of machines.
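For example, with Parallel SSH (binary names vary by distribution; on Debian/Ubuntu they are parallel-scp and parallel-ssh, elsewhere often pscp and pssh), assuming a hosts.txt with one host per line and a script named setup.sh (both placeholders):
# copy the script to every host, then run it, printing each host's output inline (-i)
parallel-scp -h hosts.txt setup.sh /tmp/setup.sh
parallel-ssh -h hosts.txt -i 'bash /tmp/setup.sh'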
Another option is rsh, but be careful, it has security implications. See rsh vs. ssh.
