How to run multiple SSH sessions from a remote server - Azure

I have deployed a topology of VMs in a VNET on Azure. There is one jumpbox which has access to all these machines and is part of VNET. There are some 25 machines which this jumpbox provides access to.
I want to be able to simultaneously run commands and scripts on all the VMs through this jumpbox.
I installed cssh and it shows the following error:
Can't find DISPLAY -- guessing `unix:0' at /usr/share/perl5/App/ClusterSSH.pm line 1981.
Can't connect to display `unix:0': No such file or directory at /usr/share/perl5/X11/Protocol.pm line 2264.

See this answer here: https://unix.stackexchange.com/a/76777
In essence:
Set up public key authentication between the jumpbox and your servers, then loop over the hosts:
for host in $(cat hosts.txt); do ssh "$host" "$command" > "output.$host" ; done
pssh is probably the better tool for this job:
https://www.linux.com/news/parallel-ssh-execution-and-single-shell-control-them-all
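For instance, a minimal pssh run over the same hosts file might look like this (a sketch; the binary is named parallel-ssh on Debian/Ubuntu and pssh elsewhere, and hosts.txt is assumed to list one user@host per line):
parallel-ssh -h hosts.txt -i 'uname -a'
Here -h names the hosts file and -i prints each host's output inline as it finishes, which is handy when running the same command across all 25 machines at once.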
cssh should also work; just don't do X11 stuff with it, or make sure you have X11 forwarding enabled. Actually, I'm lying, I have no idea if it works without xterm.

Related

Transferring files from my local Windows PC to my Linux VM

So I am new to tech, and as previous posts suggest, I am working with OCI. Currently I run a Linux 8 VM on OCI. My goal is to run Terraform scripts on the VM and have the resources created in OCI.
Current problem:
The .tf files will be written on my local Windows 10 machine and saved in a local directory. I need a way of transferring these local files to a directory on my Linux machine in order to execute them!
Is anybody good with OCI? Is there capability for an SFTP transfer using WinSCP? I'm just not sure where to start. Anybody with good advice, please aid me!
It depends on your OCI network configuration.
If your OCI compute VM is in a public subnet and you have an internet gateway, then you can use SSH to connect to it (using PuTTY, for instance). That means you can also use scp, which lets you copy files over SSH. As you mentioned, WinSCP lets you connect to your OCI compute VM using SSH and scp or SFTP. After installing it, you can create a new connection using the public IP of your OCI compute VM and the private key.
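For example, with an OpenSSH client on the Windows side, a plain scp copy might look like this (a minimal sketch; the key path, the opc user, the file name main.tf, and the target directory are assumptions mirroring the rsync example below):
scp -i "D:/my_folder/oci_api_key.pem" D:/my_folder/main.tf opc@<oci_vm_ip>:/home/opc/my_folder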
My personal preference is to use MobaXterm to connect over SSH to my OCI compute VMs. Once connected to a remote host using SSH, the left pane directly displays a file browser for the remote host. Drag-and-dropping a file there initiates an SFTP transfer automatically.
Please also note that the scp protocol has been considered outdated since 2019; SFTP or rsync can be used instead. Using MobaXterm, this can be done by opening a new terminal tab (which is local to your Windows machine) and typing the rsync command you want, for instance:
rsync -v -P -e 'ssh -i "D:/my_folder/oci_api_key.pem"' /cygdrive/d/my_folder/*.tf opc@<oci_vm_ip>:/home/opc/my_folder
-v increases verbosity, to display more information. -P shows progress for each file transferred and keeps partially transferred files. -e lets you specify the remote shell used to connect; in this case I use ssh and pass the private key. More options are available, and you can check them by typing man rsync.
If your OCI compute VM is in a private subnet, you would need to set up a bastion VM in a public subnet, then connect first to the bastion and from there to the VM. Here is a blog post about how to achieve that using PuTTY and WinSCP: https://www.ateam-oracle.com/ssh-tunnel-to-a-private-vm-using-a-bastion-host-in-oci
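In short, this amounts to a local port forward through the bastion plus an SFTP session against the forwarded port; a rough sketch (the key file names, the opc user, and port 2222 are assumptions):
ssh -i bastion_key.pem -N -L 2222:<private_vm_ip>:22 opc@<bastion_public_ip>
sftp -i vm_key.pem -P 2222 opc@localhost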

WSL2: use "localhost" to access Windows service

I'm using WSL2 on Windows 10.
My dev stack is using a local webserver (localwp or wamp) on the host OS.
I use WSL2 as the main terminal (SSH, Git, SASS, automation tools, ...).
What I need is a way to connect to my host services (MySql) from the WSL2 system using a server name instead of a random IP address.
It is already possible for the Windows host to connect to WSL2 services with "localhost". Is there a solution to do it the other way?
You should use hostname.local to access Windows from WSL2 because that will use the correct IP. Note that hostname should be replaced with the result of the hostname command run in WSL2.
You can check the IP by running ping $(hostname).local from WSL2.
You also need to add a firewall rule to allow traffic from WSL2 to Windows. In an elevated PowerShell prompt run this:
New-NetFirewallRule -DisplayName "WSL" -Direction Inbound -InterfaceAlias "vEthernet (WSL)" -Action Allow
The command above should allow you to access anything exposed by Windows from WSL, no matter the port. However, bear in mind that any app you launch gets an automatic rule created for it the first time you launch it, blocking access from public networks (this is when Windows Firewall prompts you, asking whether the app should be allowed to accept connections from public networks).
If you don't explicitly allow it, it will be blocked by default, which also blocks connections from WSL. So you might need to find that inbound rule and change it from block to allow (or just delete it).
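For example, in an elevated PowerShell prompt, something along these lines should locate and flip such a rule (the display name filter is an assumption; substitute your app's name, here mysqld for the MySQL server):
Get-NetFirewallRule -Action Block | Where-Object DisplayName -like '*mysqld*'
Set-NetFirewallRule -DisplayName 'mysqld' -Action Allow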
See info here:
https://github.com/microsoft/WSL/issues/4585#issuecomment-610061194
Well, your title and your question body don't seem quite aligned.
The question title says "use localhost", but then in the body you say "using a server name."
Accessing the Windows 10 service via the name "localhost" from WSL2? Let's just go with "no". I can think of a possibility of how to make it work, but it would be complicated.
But I think the second is really what you are looking for, so a couple of options that I can think of for accessing the Windows host services by hostname in WSL2:
First, and hopefully the easiest: WSL2 supports mDNS (WSL1 did not), so you should be able to access the Windows host as {hostname}.local, where {hostname} is the name of the Windows host (literally, in bash, ping $(hostname).local, since the assigned WSL2 hostname is that of the host Windows 10 computer). That works for me. While I don't recall having to do anything special to enable this, this Super User answer seems to indicate that you may have to turn it on manually.
The second option would be to add your Windows host IP to /etc/hosts. If your Windows IP is static, then you could just add it manually to /etc/hosts and be done. If it's dynamic, then you might want to script it. You can retrieve it from inside WSL2 via:
powershell.exe "(Test-Connection -ComputerName (hostname) -Count 1).IPV4Address.IPAddressToString" (and other methods) and then use something like sed to change /etc/hosts.
Add the following code to ~/.bashrc or ~/.zshrc, and then use winhost to access the host IP.
# Remove any previous winhost entry, then re-add the current Windows host IP
# (taken from the WSL2 resolver, which points at the Windows host).
# Note: writing to /etc/hosts requires root, so these lines need sudo
# (or equivalent write permission) to take effect.
sed -i -e '/winhost/d' /etc/hosts
win_ip=$(grep nameserver /etc/resolv.conf | awk '{ print $2 }')
win_host="$win_ip winhost"
echo "$win_host" >> /etc/hosts
The last time I was facing this issue, I downgraded to WSL1, and all the connections started working perfectly.
You can use:
wsl --set-version Ubuntu 1
This is the easiest approach to fixing all connection-related issues in WSL2.

Sandboxing to allow multiple processes to open the same port

Background
I have a command-line application that I use to connect to a remote device on port 1234. I cannot change the port number, and I do not have access to the source to rebuild this tool. I'm currently working in a lab where all ports except SSH are blocked. To get around this, I create a tunnel, i.e.:
ssh -L 1234:remotehost:1234 sshuser@remotehost
Now, I can just point my CLI program at localhost:1234 to connect with my CLI tool to the desired host.
Problem
This CLI tool needs to run for about an hour straight, and I have about 200 remote hosts to test with it. I would like to parallelize this task. Unfortunately, I can only create a single tunnel on my local machine using port 1234.
Question
Is there a (trivial/simple/automated) way to jail/sandbox my CLI tool so that I can launch 100 instances in parallel (i.e. via a shell script) so that each instance "thinks" it's talking to port 1234? For example, does Docker or KVM provide some sort of anonymous/on-demand compute node feature that I could set up rapidly? I'd rather not have to resort to manually deploying and managing a slew of VirtualBox hosts via Vagrant.
The simple answer is that you can use multiple IP addresses locally. Each local IP address on the client will allow you to create another tunnel. Currently, you are using localhost. But your client also has an IP address. You can prove my point by trying this syntax:
ssh -f -N -L 127.0.0.1:1234:remotehost1:1234 sshuser@remotehost1 # this is the default
ssh -f -N -L <local-IP1>:1234:remotehost2:1234 sshuser@remotehost2 # specifying non-default value <local-IP1>
Now, you just need to figure out how to give your client multiple IP addresses (secondary addresses). Then you can expand this beyond 2 parallel sessions.
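One convenient source of extra addresses is the loopback range itself. On Linux, the whole 127.0.0.0/8 block already routes to loopback, so additional tunnels can bind 127.0.0.2, 127.0.0.3, and so on directly; on macOS each extra loopback address first needs an alias. A rough sketch (host names are placeholders):
sudo ifconfig lo0 alias 127.0.0.2 up   # macOS only; not needed on Linux
ssh -f -N -L 127.0.0.2:1234:remotehost2:1234 sshuser@remotehost2
Each CLI instance then targets its own address, e.g. 127.0.0.2:1234, while still "thinking" it talks to port 1234.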
I've also added -f and -N to your ssh syntax to put ssh into the background (-f) and to not issue any commands.
Using -R tunnels in the past, I've found that I need to enable GatewayPorts on the server (/etc/ssh/sshd_config). In the case of -L, I don't see the need. However, the ssh man page explicitly mentions GatewayPorts in connection with -L, so you may need to play around a bit. I just tried this out on my Mac and was able to get it going without any GatewayPorts considerations.
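For reference, if it does turn out to be needed, enabling it on the server side is a one-line change in /etc/ssh/sshd_config (followed by restarting sshd):
GatewayPorts yes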

Sublime Text SFTP tunnel (wbond)

To work remotely I need to SSH into the main server and then again into the departmental server.
I would like to set up a tunnel using the Sublime Text 3 wbond SFTP package to view and edit files remotely, but I can't seem to find any information about setting up a tunnel. Is this even possible?
The reason I'm interested in this particular package is that I am unable to install any packages locally on the server, hence using something like rsub is not possible.
Any other suggestions besides sublime sftp are welcome.
I'm not sure the SFTP plugin would allow you to do this directly.
What I would suggest is for you to use ssh -L to create a tunnel.
ssh -L localhost:random_unused_port:target_server:22 username_for_middle_server@middle_server -nNT
Use the password/identity_file for the middle server
The -nNT is to avoid opening an interactive shell on the middle server.
IMPORTANT: You need to keep the ssh -L command running so keep that shell open.
In this way you can connect to the target_server as such:
ssh username_for_target_server@localhost -p random_port_you_allocated
Similarly, you can set up the SFTP plugin config file like this:
{
...
"host":"localhost",
"user":"username_for_target_server",
"ssh_key_file": "path_to_target_server_key",
"port":"random_port_you_allocated",
....
}
As a side note, always use the same port to tunnel to the same server; otherwise, with the default ssh configuration, you will be warned of a "man in the middle attack" because the signature saved in .ssh/known_hosts will not match the previous one. This can be avoided by disabling that check, but I wouldn't recommend it.
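One way to keep the port stable is a Host alias in ~/.ssh/config; a minimal sketch (the alias name dept-tunnel and port 2222 are just examples, reusing the placeholders above):
Host dept-tunnel
    HostName localhost
    Port 2222
    User username_for_target_server
    IdentityFile path_to_target_server_key
Then ssh dept-tunnel (and the SFTP plugin, pointed at the same values) always reuses the same port and host key entry.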

ssh on edge-node for azure HDInsight

I tried deploying an HDInsight cluster with an edge node.
I used https://github.com/Azure/azure-quickstart-templates/blob/master/101-hdinsight-linux-with-edge-node/azuredeploy.json for deployment.
After deployment was complete, I tried ssh using the following command:
ssh sshuser@new-edgenode.myclustertest-ssh.azurehdinsight.net:22
[myclustertest is the name of the cluster].
It gives following error:
ssh: Could not resolve hostname new-edgenode.myclustertest-ssh.azurehdinsight.net:22: Name or service not known
Do I need to add something to the azuredeploy.json to enable ssh access?
Looking at https://learn.microsoft.com/en-us/azure/hdinsight/hdinsight-hadoop-linux-use-ssh-unix I thought that
<edgenodename>.<clustername>-ssh.azurehdinsight.net
is enabled by default for external access.
The problem was in the ssh command.
I used the ssh command supplied by the Azure portal, hoping that it would work seamlessly. I had to remove :22 from the command to make it work.
Modified command looks like this:
ssh sshuser@new-edgenode.myclustertest-ssh.azurehdinsight.net
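This is because ssh does not accept host:port syntax; if you do need to specify a non-default port explicitly, pass it with the -p flag instead:
ssh -p 22 sshuser@new-edgenode.myclustertest-ssh.azurehdinsight.net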
