PostgreSQL SSH Tunnel to Amazon EC2? - security

I've created an Amazon EC2 AMI running CentOS Linux 5.5 and PostgreSQL 8.4. I'd like to be able to create an SSH tunnel from my machine to the instance so I can connect to the PostgreSQL database from my development machine (JDBC, Tomcat, etc.). I haven't modified the PostgreSQL configuration at all yet. I can successfully SSH into the instance from a command line, and have run the following command to try to create my tunnel:
ssh -N -L2345:<My instance DNS>:5432 -i <keypair> root@<My instance DNS>
I don't receive any errors when initially running this command. However, when I try to use psql to open a connection on localhost:2345, the connection fails.
Any thoughts as to why this is happening?

The first <My instance DNS> should be localhost. And you probably don't want or need to SSH as root.
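A minimal sketch of the corrected tunnel plus a test connection, with the key pair, user, and instance DNS left as placeholders (the default postgres database user is assumed):
ssh -N -L 2345:localhost:5432 -i <keypair> <user>@<My instance DNS>
psql -h localhost -p 2345 -U postgres
Here the localhost in the -L option is resolved on the EC2 instance, so the tunnel ends at the PostgreSQL server listening on the instance itself; pg_hba.conf may still need to allow TCP connections from 127.0.0.1, depending on its default authentication settings.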

Related

How to execute Linux commands on a remote Ubuntu server using Symfony 4?

I want to execute some Linux commands, such as docker run nginx, on a remote Ubuntu server (host A) from a client interface developed in Symfony 4 on another host (host B). After executing the command, the server (host A) should send some info back to the client interface on host B so it can be displayed there.
How to achieve this?
First, you need to log in:
ssh username@ip_address_of_server_with_symfony4
Then
cd /path/to/symfony4
Then
docker exec symfony_container_php php bin/console command:name --arguments
If you need to run a single command from your local computer, you can also use ssh -t:
ssh -t user@ip "your full command; separated; with; semicolons"
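For the concrete case in the question, a rough sketch (hostA.example.com and the ubuntu user are placeholders for host A's address and SSH login):
ssh -t ubuntu@hostA.example.com "docker run -d nginx"
The output of the remote command (here the container ID) comes back over the SSH session, so the Symfony application on host B can capture it and display it in the client interface.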

Connecting to Azure Container Services from Windows 8.1 Docker

I've been following this tutorial to set up an Azure container service. I can successfully connect to the master load balancer via putty. However, I'm having trouble connecting to the Azure container via docker.
~ docker -H 192.168.33.400:2375 ps -a
error during connect: Get https://192.168.33.400:2375/v1.30/containers/json?all=1: dial tcp 192.168.33.400:2375: connectex: No connection could be made because the target machine actively refused it.
I've also tried
~ docker -H 127.0.0.1:2375 ps -a
This causes the docker terminal to hang forever.
192.168.33.400 is my docker machine ip.
My guess is I haven't set up the tunneling correctly and this has something to do with how Docker runs on Windows 8.1 (via a VM).
I've created an environment variable called DOCKER_HOST with a value of 2375. I've also tried changing the value to 192.168.33.400:2375.
I've tried the following tunnels in PuTTY:
1. L2375 192.168.33.400:2375
2. L2375 127.0.0.1:2375
3. L22375 192.168.33.400:2375
4. L22375 127.0.0.1:2375 (as shown in the video)
Does anyone have any ideas/suggestions?
Here are some screenshots of the commands I ran:
We can follow these steps to set up the tunnel:
1. Add the Azure Container Service FQDN to PuTTY:
2. Add the private key (PPK) to PuTTY:
3. Add the tunnel information to PuTTY:
Then we can use cmd to test it:
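As a sketch of that test from a Windows command prompt, assuming the PuTTY tunnel forwards local port 2375 to the Docker endpoint on the master (the port numbers follow the question and may differ in your setup):
set DOCKER_HOST=tcp://127.0.0.1:2375
docker ps -a
docker info
If the tunnel is up, docker ps -a should list the cluster's containers instead of refusing the connection.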

AWS EC2 becomes "Other Linux" from Ubuntu when creating AMI

I did the following steps:
1) Use an existing AMI with Ubuntu 16.0.
2) Create Instance from this AMI and Launch it.
3) Do nothing, just stop the instance.
4) Create AMI from this stopped instance.
The new AMI's platform becomes "Other Linux" and I cannot log in over SSH anymore:
When using PuTTY, I can see 'instance XXX connected' in the terminal output,
but I do not get access to the Bash terminal
(just a read-only screen).
It happens any time I use an AMI other than the Amazon AMI (Ubuntu).
Is it related to a Free Tier account?
How can I get support on this?
Thanks very much.
Similar issues:
https://forums.aws.amazon.com/message.jspa?messageID=758595#758595
Solved (using AWS Support...):
The problem was due to the AWS MindTerm Java SSH browser, which cannot connect to this instance.
But using PuTTY works well; "Other Linux" has no influence.
1.) "Other Linux" means it is no longer vanilla Ubuntu (the default Ubuntu was changed).
2.) You can connect to the EC2 VM using PuTTY:
putty -ssh loginName@IPaddress
Don't forget to convert the PEM file to PPK using PuTTYgen first.
The login depends on the original AMI instance;
here ubuntu/root can be used since this is the Amazon Ubuntu AMI.
Compared to Google Cloud, AWS does not have a robust web-browser SSH, so a third-party SSH client is needed.
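For reference, the equivalent connection from an OpenSSH client is a one-liner (a sketch; the key file and public DNS are placeholders, and the login name follows point 2 above):
ssh -i my-key.pem ubuntu@<public-dns-of-instance>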

How to give GUI to remote server and access it from my local machine in ubuntu 14.04 lts

I have an Ubuntu server located in another city (remote). I want to give that server a dummy display/GUI and access it from my local Ubuntu machine. How can I do this? Please suggest if there is any way to create a dummy display on that server from my local machine so I can access it like my own local machine.
Connect via SSH to your Linux server and do the following on the CLI:
sudo passwd root (set a password for the account you want to connect with)
sudo apt-get install ubuntu-desktop (to install your GUI)
sudo apt-get install xrdp (to install the middleware to connect through)
Open the port for remote connections (3389) on your cloud/host portal.
Then remote desktop to your VM's DNS name using that port, and use the username/password you created to log in, as shown in the sketch below.
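From the local Ubuntu machine, one way to make that remote desktop connection is the rdesktop client (a sketch; the server address and username are placeholders, and rdesktop may need to be installed first with sudo apt-get install rdesktop):
rdesktop -u <username> <server-dns-or-ip>:3389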
You can also use the SSH service. Open a terminal and write:
ssh root@ip_address_of_server
So if the IP address of the server is 192.168.1.22,
the command line will be:
ssh root@192.168.1.22
Don't forget to install the SSH service (openssh-server) on the server.
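If a full desktop is not required, SSH X11 forwarding is a lighter alternative for running individual GUI programs (a sketch; it assumes X11Forwarding is enabled in the server's sshd_config and an X server is running locally):
ssh -X root@192.168.1.22
gedit &
The gedit window then appears on the local machine while the program runs on the server.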

cygwin/sshd and Virtualbox

I'm using Vagrant/VirtualBox on my Windows (8.1) laptop to start up a linux-test-vm from a Cygwin terminal... vagrant up, vagrant ssh, everything is working fine.
Now I want to work on that environment remotely from my main Linux workstation, so I've set up sshd in Cygwin and I can successfully ssh into my Windows box (same user as logged in locally in Windows).
But when I cd (via my remote ssh connection to the Windows laptop) into my working directory and run vagrant ssh, it tells me:
VM must be created before running this command. Run 'vagrant up' first
But I see the VM is running in VirtualBox GUI on Windows.
From this point on, even locally on the Windows machine, I can no longer interact with the running Vagrant VM, and the .vagrant (sub)directory has no files inside.
Same happens vice versa:
I stopped/deleted the VM in VirtualBox GUI
ran vagrant up via my ssh connection ... worked
ran vagrant ssh via my ssh connection ... works
but I do not see the VM in VirtualBox GUI on Windows
trying vagrant ssh locally on Windows ... same error again and .vagrant directory gets cleared
So I assume the Cygwin/sshd connection creates some sort of different session that does not share the same "instance" of VirtualBox.
Is there any way to share the VirtualBox/Vagrant environment between the local Windows session and the remote SSH session?
WORKAROUND (summarized in the sketch below):
Export the ssh config on the Windows host: vagrant ssh-config > ssh_config
From the Cygwin/ssh session, jump into the VM: ssh -F ssh_config default
Never run any vagrant command from the Cygwin/ssh connection.
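Put together, the workaround looks like this (a sketch; "default" is the standard Vagrant machine name and may differ in a multi-machine setup):
# on the Windows host, next to the Vagrantfile:
vagrant ssh-config > ssh_config
# from the Cygwin/ssh session, in the same directory:
ssh -F ssh_config default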
Vagrant has had a built-in solution since the 1.7.x versions called vagrant share, which also allows you to remote into a box directly (bypassing the Windows host abstraction). It is generally used for the HTTP feature (e.g. to show clients or others on a project the current state of work), but there is the ability to connect to any service running on any port. From the docs:
Just call vagrant share. This will automatically share as many ports
as possible for remote connections. If the Vagrant environment has a
static IP or DNS address, then every port will be available.
Otherwise, Vagrant will only expose forwarded ports on the machine.
Note the share name at the end of calling vagrant share, and give this
to the person who wants to connect to your machine. They simply have
to call vagrant connect NAME. This will give them a static IP they can
use to access your Vagrant environment.
Note: to use vagrant share you need a (free) account with HashiCorp.
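A rough sketch of that flow (the share name funny-wombat-1234 is a placeholder printed by the share command, and the --ssh flag assumes the vagrant-share SSH feature is available):
# on the Windows host where the box is running:
vagrant login
vagrant share --ssh
# on the remote Linux workstation:
vagrant connect --ssh funny-wombat-1234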
