Answer a terminal prompt remotely / pipe input into a different shell session - Linux

I have a scheduled job that runs on an EC2 instance. Due to a recent third-party library change, the job now requires user interaction and can't be run automatically anymore.
The job executes a script that prompts for user input. The input is not a simple y/n and can change, so we can't just do echo y | myscript.sh.
Compromise - I'd still like to keep running this job automatically on schedule, but have it ping the on-call engineer to answer the prompt. Is there a way to answer the prompt remotely? If that's not possible, the engineer can SSH into the EC2 instance, but they still need a way to pipe input into the shell that's awaiting the answer.
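One way to do this (a sketch, assuming tmux is installed on the instance; the session name and script path are placeholders) is to have cron start the script inside a detached tmux session, so the prompt stays alive in a terminal that any engineer can later attach to:

```shell
# Started by cron: run the interactive script inside a detached tmux
# session so the prompt survives after cron's own environment goes away.
tmux new-session -d -s nightly-job '/path/to/myscript.sh'

# The paged on-call engineer SSHes into the instance and attaches to
# the session to see the prompt and type the answer interactively:
tmux attach -t nightly-job

# Or answer without attaching at all, by injecting keystrokes:
tmux send-keys -t nightly-job 'the-answer' Enter
```

GNU screen offers the same pattern with screen -dmS, screen -r, and screen -X stuff, if tmux isn't available.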

Related

Run Python script in Task Scheduler as normal user but with admin privileges

I have an odd set of constraints and I'm not sure if what I want to do is possible. I'm writing a Python script that can restart programs/services for me via an Uvicorn/FastAPI server. I need the following:
For the script to always be running and to restart if it stops
To be constantly logged on as the standard (non-admin) user
To stop/start a Windows service that requires admin privileges
To start a program as the current (non-admin) user that displays a GUI
I've set up Task Scheduler to run this script as admin, whether logged in or not. This was the only way I found to be able to stop/start Windows services. With this, I'm able to do everything I need except for running a program as the current user. If I set the task to run as the current user, I can do everything except the services.
Within Python, I've tried running the program with os.startfile(), subprocess.Popen(), and subprocess.run(), but it always runs with no GUI, and seemingly as the admin since I can't kill the process without running Task Manager as admin. I'm aware of the 'user' flag in subprocess, but as I'm on Windows 8, the latest Python version I can use is 3.8.10, and 'user' wasn't introduced until Python 3.9.
I've tried the 'runas' cmd command (run through os.system() as well as a separate batch script), but this doesn't work as it prompts for the user's password. I've tried the /savecred flag and I've run the script manually both as a user and as admin just fine, but if I run this through Task Scheduler, either nothing happens, or there is a perpetual 'RunAs' process that halts my script.
I've tried PsExec, but again that doesn't work in Task Scheduler. If I run even a basic one-line batch file with PsExec as a task, I get error 0xC0000142, which from what I can tell is some DLL error: NT_STATUS_DLL_INIT_FAILED.
The only solution I can think of is running two different Python scripts in Task Scheduler (one as admin, one as non-admin), but this is not ideal as I want only one Uvicorn/FastAPI server running with one single port.
EDIT -
I figured out a way to grant service perms to the user account with ServiceSecurityEditor, but I'm still open to any suggestions that may be better. I want the setup process for a new machine to be as simple as possible.

How can I send a command to X number of EC2 instances via SSH

I have a lot of AWS EC2 instances and I need to execute a Python script on all of them at the same time.
I've been trying, from my PC, to execute the script by sending the required commands via SSH. For this, I've created another Python script that opens a cmd terminal and then executes some commands (the ones needed to run the Python script on each instance). Since I need all these cmd terminals to be opened at the same time, I've used ThreadPoolExecutor, which (with my PC's characteristics) grants me 60 runs in parallel. This is the code:
import os
from concurrent.futures import ThreadPoolExecutor

ipAddressesList = list(open("hosts.txt").read().splitlines())

def functionMain(threadID):
    # open a new cmd window and run the remote script over ssh
    os.system(r'start cmd /c ssh -o StrictHostKeyChecking=no -i mysshkey.pem ec2-user@'
              + ipAddressesList[threadID] + ' "cd scripts && python3.7 script.py"')

with ThreadPoolExecutor() as executor:
    results = executor.map(functionMain, range(len(ipAddressesList)))
The problem is that the command executing script.py blocks the terminal until the process ends, so functionMain stays waiting for the result. I would like the function to return right after sending the python3.7 script.py command, while the script keeps executing on the instance, so the pool executor can continue with the remaining threads.
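One way to make each SSH call return immediately (a sketch reusing the key, user, and remote paths from the question; the instance IP is a placeholder) is to background the remote process with nohup and redirect all three standard streams, so sshd can close the connection while script.py keeps running:

```shell
# Without the redirections, ssh keeps the connection open waiting for the
# remote stdout/stderr to close; with them, ssh returns as soon as the
# command is launched and script.py keeps running on the instance.
ssh -o StrictHostKeyChecking=no -i mysshkey.pem ec2-user@<instance-ip> \
    'cd scripts && nohup python3.7 script.py > script.log 2>&1 < /dev/null &'
```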
The AWS Systems Manager Run Command can be used to run scripts on multiple Amazon EC2 instances (and even on-premises computers if they have the Systems Manager agent installed).
The Run Command can also provide back results of the commands run on each instance.
This is definitely preferable to connecting to the instances via SSH to run commands.
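For example, with the AWS CLI (the instance IDs here are placeholders, and the command string is taken from the question):

```shell
# Run the script on several instances at once via Systems Manager,
# with no SSH connections at all:
aws ssm send-command \
    --document-name "AWS-RunShellScript" \
    --instance-ids "i-11111111111111111" "i-22222222222222222" \
    --parameters 'commands=["cd scripts && python3.7 script.py"]'

# Later, fetch the result of one invocation by its CommandId:
aws ssm get-command-invocation \
    --command-id <command-id> --instance-id i-11111111111111111
```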
Forgive me for not providing a "code" answer, but I believe there are existing tools that already solve this problem. This sounds like an ideal use of ClusterShell:
ClusterShell provides a light and unified command execution Python framework to help administer GNU/Linux or BSD clusters. Some of the most important benefits of using ClusterShell are to:
provide an efficient, parallel and highly scalable command execution engine in Python,
Using clush you can execute commands in parallel across many nodes. It has options for grouping the output by hostname as well.
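A sketch, reusing the hosts file and key from the question (clush reads plain host lists with --hostfile, and -o passes extra options through to the underlying ssh):

```shell
# Execute the script on every host in hosts.txt in parallel;
# -b gathers output and groups identical results by host set.
clush --hostfile hosts.txt -l ec2-user \
      -o '-i mysshkey.pem -o StrictHostKeyChecking=no' \
      -b 'cd scripts && python3.7 script.py'
```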
Another option would be to use Ansible, but you'll need to create a playbook in that case whereas with ClusterShell you are running a command the same way you would with SSH. With Ansible, you will create a target group for a playbook and it will connect up to each instance and tell it to run the playbook. To make it disconnect while the command is still running, look into asynchronous actions:
By default Ansible runs tasks synchronously, holding the connection to the remote node open until the action is completed. This means within a playbook, each task blocks the next task by default, meaning subsequent tasks will not run until the current task completes. This behavior can create challenges. For example, a task may take longer to complete than the SSH session allows for, causing a timeout. Or you may want a long-running process to execute in the background while you perform other tasks concurrently. Asynchronous mode lets you control how long-running tasks execute.
I've used both of these in HPC environments with more than 5,000 machines and they both will work well for your purpose.
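As a sketch, the same fire-and-forget behaviour is also available from an Ansible ad-hoc command, without writing a playbook (the inventory file, user, and remote path are assumptions carried over from the question):

```shell
# -B 3600: allow the task to run for up to an hour in the background;
# -P 0: don't poll for completion, i.e. disconnect right after starting it.
ansible all -i hosts.txt -u ec2-user --private-key mysshkey.pem \
    -B 3600 -P 0 -a "python3.7 /home/ec2-user/scripts/script.py"
```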

How to use ssh to execute actions that are keyboard interactive by supplying static text

I'm writing a shell script to connect to a Linux-based remote machine using SSH.
After successfully logging in to the remote machine I'm able to execute Linux commands. The real problem I'm facing is when running the same script against another remote machine, which asks for a local authentication (username and password) after the SSH login before I can proceed.
Can someone guide me on how to fill the first prompt with my username, immediately send a carriage return, and then do the same for the password?
I've tried the below code connect.sh
sshpass -p <remote-passwd> ssh root@<remote-ip> 'bash -s' < test.sh
test.sh contains
ls
pwd
If I run the connect.sh script it executes perfectly without asking for the remote machine's password. It also executes ls and pwd.
Can I actually replace ls and pwd with my username and password to achieve what I'm trying to do?
Also, am I attempting something that's simply not possible? (I have seen similar code in VB.NET which solves my purpose, but it is not robust and I really don't have any experience with VB scripts.)
Update: I'm able to log in to the remote machine non-interactively, but the remote environment immediately asks for a local authentication, which again requires keyboard interaction. I'm looking to complete this authentication non-interactively too.
If at all possible, you should configure a public key on the server so you don't have to supply a password. This is more secure and will solve your problem more directly.
You may also want to look into orchestration frameworks, rather than implementing this all yourself. If you're doing small things, Fabric is a good option. If this looks like it'll become something much larger, you should look into something like Ansible, which can also additionally handle system configuration and a million other things, but requires very little setup to get started with.

Shell script to get current load status of servers and refresh in every 2 minutes

I have to create a shell script that fetches the latest Linux server load status from different clusters into my shell, refreshing every 2 minutes.
What parameters do I have to take care of while creating this?
a.) server name
b.) server password
c.) watch command, i.e. watch -n 2 w
I need a two-column view: server name and the server load against it.
I will use SSH to connect to the servers. I would also appreciate it if someone could suggest a better way to achieve this.
Thanks in advance
Instead of reinventing the wheel, why not use an existing tool?
There are many tools that already do exactly the task you need.
The tools below will give you system stats at a specified interval and also store the data for later use:
Ganglia,
Munin,
Graphite
Writing a shell script for such a task has several disadvantages:
shell scripts are difficult to modify and maintain
credentials need to be provided in the script (a security risk)
most importantly: the results/stats are difficult to interpret on screen
the data is not available for offline analysis
I hope you understand the point I am trying to make here.
while : ; do
    ssh host1 uptime
    ssh host2 uptime
    sleep 120
    tput clear
done

the interaction between a user and crontab

How can this be achieved?
I need to run a crontab task. At a given time, a window should pop up to remind me to do something.
If I input yes in that window, the task is performed; if no, nothing happens.
Crontab just does things in the background. How can I interact with it?
I could use, say,
echo 'good' >/dev/pts/1
However, that pts device does not necessarily exist.
Use zenity and make sure to set the DISPLAY environment variable to :0.0 when executing zenity from within the crontab as this is necessary to start GUI apps on the X server.
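A hypothetical crontab entry along those lines (the schedule and task path are placeholders):

```shell
# zenity --question shows a yes/no dialog on the local X display and
# exits 0 only when "Yes" is clicked, so && gates the task on the answer.
0 9 * * * DISPLAY=:0.0 zenity --question --text="Run the task now?" && /path/to/task.sh
```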
The usual division of labor for a problem like this is to split the code into a server component and a client component. The server runs in the background, detached from your interactive session, and does any actual work whilst listening for client connections. You run the client from your GUI, either interactively or as part of your GUI session, and it performs any user interactions and communicates your inputs to the server.
