Transferring files from my local Windows PC to my Linux VM - linux

So I am new to tech, and as my previous posts suggest, I am working with OCI. Currently I run a Linux 8 VM on OCI. My goal is to run Terraform scripts on the VM and have the resources created in OCI.
Current problem:
The .tf files will be written on my local Windows 10 machine and saved in a local directory. I need a way of transferring these local files to a directory on my Linux machine in order to execute them!
Is anybody good with OCI? Is there capability for an SFTP transfer using WinSCP? I'm just not sure where to start. Anybody with good advice, please help me out!

It depends on your OCI network configuration.
If your OCI compute VM is in a public subnet and you have an internet gateway, then you can use SSH to connect to it (using PuTTY, for instance). That means you can also use scp, which lets you copy files over SSH. As you mentioned, WinSCP lets you connect to your OCI compute VM over SSH using SCP or SFTP. After installing it, you can create a new connection using the public IP of your OCI compute VM and the private key.
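As a quick sketch (the key path, file name, and target directory below are placeholders, and opc is the default user on Oracle Linux images), copying a single file from a Windows command prompt with the built-in OpenSSH client could look like this:
scp -i C:\Users\me\.ssh\oci_key main.tf opc@<oci_vm_public_ip>:/home/opc/terraform/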
My personal preference is to use MobaXterm to connect via SSH to my OCI compute VMs. Once connected to a remote host over SSH, the left pane directly displays a file browser for the remote host. Drag-and-dropping a file there initiates an SFTP transfer automatically.
Please also note that the scp protocol has been considered deprecated since 2019; SFTP or rsync can be used instead. Using MobaXterm, it can be done by opening a new terminal tab (which is local to your Windows machine) and typing the rsync command you want, for instance:
rsync -v -P -e 'ssh -i "D:/my_folder/oci_api_key.pem"' /cygdrive/d/my_folder/*.tf opc@<oci_vm_ip>:/home/opc/my_folder
-v increases verbosity, to display more information. -P shows progress for each file and keeps partially transferred files. -e lets you specify the remote shell command rsync should use; in this case I use ssh and pass the private key. More options are available and you can check them by typing man rsync.
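If you prefer SFTP over rsync, a quick interactive transfer from the same MobaXterm tab could look something like this (same placeholder key path and directories as above):
sftp -i "D:/my_folder/oci_api_key.pem" opc@<oci_vm_ip>
sftp> put /cygdrive/d/my_folder/main.tf /home/opc/my_folder/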
If your OCI compute VM is in a private subnet, you would need to set up a bastion VM in a public subnet, connect to the bastion first, and reach the VM from there. Here is a blog post about how to achieve that using PuTTY and WinSCP: https://www.ateam-oracle.com/ssh-tunnel-to-a-private-vm-using-a-bastion-host-in-oci
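If you would rather stay on the command line, recent OpenSSH clients can also tunnel through the bastion in one step with the -J (ProxyJump) option; the user names and key path here are only placeholders:
ssh -i "D:/my_folder/oci_api_key.pem" -J opc@<bastion_public_ip> opc@<private_vm_ip>
scp -i "D:/my_folder/oci_api_key.pem" -J opc@<bastion_public_ip> main.tf opc@<private_vm_ip>:/home/opc/my_folder/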

Related

Permission denied lost connection

I have created a simple VM in Azure on which I will host a very simple server written in C.
To send the folder on my computer containing the server to the virtual machine, I use this command from PowerShell:
scp -r <path_to_key.pem> <path_to_folder_on_my_pc> <azureuser@ip:/home/azureuser/>
The result of this command is
azureuser@ip: Permission denied (publickey).
lost connection
Would anyone who has had this problem have a solution ?
You need to copy your private key to the ~/.ssh/ directory on the host from which you want to transfer the file. Once you have done that, you can use the following command:
scp -i ~/.ssh/<name of your key>.pem <path of file to transfer> user@azureip:<target directory>
So, for example, if you want to transfer file.txt to your Azure VM (IP 10.10.10.10) with the private key named key.pem:
scp -i ~/.ssh/key.pem file.txt user@10.10.10.10:/home/user/
To pull a file from your Azure VM to your local host, you reverse the order: user@azureip and the remote file come first, followed by the local destination.
scp -i ~/.ssh/key.pem user@10.10.10.10:/home/user/file.txt /home/user/
This problem may be caused by your public key. Please ensure that the public key is also present in your home directory on the VM when you create the Azure virtual machine with a public key, meaning the public key is kept on both your local computer and the virtual machine (in ~/.ssh/authorized_keys). Then, with the matching private key on your local workstation, you can SSH into your Azure virtual machine.
Reference: linux - Can't scp to Azure's VM - by ale93p
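If the key never made it onto the VM, a rough way to fix it (assuming you can still reach the VM some other way, e.g. via the Azure serial console or password authentication) is to append your public key to authorized_keys; the key name below is a placeholder:
# on your local machine, print the public key
cat ~/.ssh/key.pub
# on the VM, paste that line into authorized_keys and fix permissions
echo "ssh-rsa AAAA... azureuser" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys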
If you want to use the private key with scp, then you will have to use the command below to copy files from the local system to the Azure VM:
sudo scp -i ~/.ssh/id_rsa /path/cert.pem azureuser@ip.xxx.xxx.xxx:/home/file/user/local
Make sure that the Azure VM's inbound NSG rule has port 22 open; by default, the VM's page is reachable on ports 80/443 over the public IP address.
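As a quick sketch, opening port 22 on the NSG can be done from the Azure CLI roughly like this (the resource group and VM names are placeholders):
az vm open-port --resource-group myResourceGroup --name myVM --port 22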
For more information, please refer to these links:
Use SSH keys to connect to Linux VMs - Azure Virtual Machines | Microsoft Docs
Use SCP to move files to and from a VM - Azure Virtual Machines | Microsoft Docs

Scp connection timed out ubuntu VM

So I'm trying to copy a file from my directory to an Azure Ubuntu VM. SSH works just fine, but the scp command takes a long time and then I get this message:
connect to host 10.x.x.x port 22: Connection timed out lost connection
This is the command I used:
scp -vvv -i .ssh/id_rsa BaltimoreCyberTrustRoot.crt.pem azureuser@10.x.x.x:/var/www/html
• AFAIK, the scp command you are using to connect to your Azure Ubuntu VM might not be correct. The correct command to copy files from your local machine to your Ubuntu Linux VM is as follows:
scp -r ./tmp/ azureuser@10.xxx.xxx.xxx:/home/file/user/local
In the above command, once the SCP connection is established (after the key or passphrase is accepted), the files in the local './tmp' directory are recursively copied to the '/home/file/user/local' directory on the Azure Ubuntu VM. Thus, the whole directory tree as specified is copied from the local system to the VM.
• Also, if you want to pass the private key to the scp command explicitly, then you will have to use the command below to copy files from the local system to the Azure Ubuntu VM:
sudo scp -i ~/.ssh/id_rsa /path/cert.pem azureuser@10.xxx.xxx.xxx:/home/file/user/local
When you use 'sudo' to access a root-owned file, scp will look for the identity file 'id_rsa' in '/root/.ssh/' instead of '/home/user/.ssh/'. That's why you have to specify the identity file (private key) explicitly in the scp command to connect to the Azure Ubuntu VM and transfer files from the local system to the VM.
Other than this, kindly ensure that port 22 is open in the inbound NSG rule on the Azure Ubuntu VM and that the VM's default page is accessible on ports 80/443 over the public IP address and the assigned Azure FQDN.
For more information, kindly refer to the links below:
Can't scp to Azure's VM
https://learn.microsoft.com/en-us/azure/virtual-machines/linux/copy-files-to-linux-vm-using-scp#scp-a-directory-from-a-linux-vm

How to configure users/keys to allow Ansible to run against multiple hosts?

I'm currently using a sandbox environment to help gain an understanding of Linux and Ansible.
I have a RHEL 7.6 VM where Ansible is installed and run from, which I connect to via MobaXterm. I then have 2 test VMs that I'd like to run Ansible against.
I cannot SSH from the Ansible VM to either of the test VMs (Permission denied (publickey)), but I can connect directly to the test VMs.
How do I set up the keys/hosts? Does the private key need to be uploaded to the Ansible VM?
Try to deploy the ~/.ssh/id_rsa.pub key from the Ansible control machine to each of your VMs, into the file ~/.ssh/authorized_keys. That is, copy the contents of ~/.ssh/id_rsa.pub from the Ansible control machine into ~/.ssh/authorized_keys on the target host. You can use the ssh-copy-id command to do this for you, as long as you have access to the target host via some other method.
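For example, assuming password authentication is still enabled for your user on the test VMs (the user name and host are placeholders):
ssh-copy-id -i ~/.ssh/id_rsa.pub youruser@<test_vm_ip>
Once the key is in place, you can confirm Ansible can reach the hosts with something like ansible all -m ping -i your_inventory.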
Another method, different from the best-practice id_rsa.pub deployment, is configuring inventory vars for your hosts/groups by setting ansible_user, ansible_ssh_pass (vaulted), ansible_become_user, and ansible_become_pass (vaulted), as sketched below.
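A minimal sketch of such an inventory in INI format (host names, IPs, and user are assumptions; vault_ssh_pass would be defined in an ansible-vault encrypted vars file):
[testvms]
testvm1 ansible_host=10.0.0.11
testvm2 ansible_host=10.0.0.12

[testvms:vars]
ansible_user=youruser
ansible_ssh_pass="{{ vault_ssh_pass }}"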

How to copy files from Amazon EFS to my local machine with shell script?

I have a question regarding file transfer from Amazon EFS to my local machine with a simple shell script. The manual procedure I follow is:
Copy the file from EFS to my Amazon EC2 instance using sudo cp
Copy from EC2 to my local machine using scp or FileZilla (drag and drop)
Is there a way it can be done running a shell script in which I give two inputs: source file address and save destination directory?
Can the two steps be reduced to one, i.e. directly copying from EFS to the local machine?
You should be able to mount the file system on your local machine and access the remote files locally.
http://docs.aws.amazon.com/efs/latest/ug/mounting-fs.html
With mounting, you can access and edit the remote files locally using your machine's resources.
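For reference, on an EC2 instance inside the VPC the NFS mount typically looks something like this (the file system ID, region, and mount point are placeholders); mounting the same way on your local machine would additionally require network connectivity into the VPC, e.g. a VPN or Direct Connect:
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs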
While scp can work, you would need to keep your local and remote copies in sync all the time.
Hope it helps.

How to run multiple ssh sessions from a remote server

I have deployed a topology of VMs in a VNET on Azure. There is one jumpbox which is part of the VNET and has access to all these machines. There are some 25 machines which this jumpbox provides access to.
I want to be able to simultaneously run commands and scripts on all the VMs through this jumpbox.
I installed cssh and it shows the following error:
Can't find DISPLAY -- guessing `unix:0' at /usr/share/perl5/App/ClusterSSH.pm line 1981.
Can't connect to display `unix:0': No such file or directory at /usr/share/perl5/X11/Protocol.pm line 2264.
See this answer here: https://unix.stackexchange.com/a/76777
In essence:
Set up public key authentication between the jumpbox and your servers.
for host in $(cat hosts.txt); do ssh "$host" "$command" > "output.$host" ; done
pssh is probably the better tool for this job:
https://www.linux.com/news/parallel-ssh-execution-and-single-shell-control-them-all
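As a rough example, running one command on every host listed in hosts.txt with pssh, showing each host's output inline (the user name is a placeholder):
pssh -h hosts.txt -l azureuser -i "uptime"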
cssh should also work; just don't do X11 stuff with it, or make sure you have X11 forwarding enabled. Actually, I'm lying, I have no idea if it works without xterm.
