AWS EC2 AMI platform becomes "Other Linux" instead of Ubuntu when creating an AMI

I did the following steps:
1) Start from an existing AMI with Ubuntu 16.04.
2) Create an instance from this AMI and launch it.
3) Do nothing, just stop the instance.
4) Create an AMI from this stopped instance.
The new AMI's platform becomes "Other Linux" and I can no longer log in via SSH:
when using PuTTY, I can see 'instance XXX connected' in the terminal output,
but I do not get access to a Bash shell
(just a read-only screen).
This happens any time I use an AMI other than the stock Amazon Ubuntu AMI.
Is this related to the Free Tier account?
How can I get support for this?
Thanks very much.
Similar issues:
https://forums.aws.amazon.com/message.jspa?messageID=758595#758595

Solved (with help from AWS Support):
The problem was due to the AWS MindTerm Java SSH browser client, which cannot connect to this instance.
Using PuTTY works fine; the "Other Linux" platform label has no influence.
1.) "Other Linux" means the image is no longer vanilla Ubuntu (the default Ubuntu has been changed).
2.) You can connect to the EC2 VM using PuTTY (see the sketch below):
putty -ssh loginName@ipAddress
Don't forget to convert the PEM file to PPK using PuTTYgen first.
The login name depends on the original AMI;
here ubuntu (or root) can be used since this is an Amazon Ubuntu image.
Compared to Google Cloud, AWS does not have a robust web-browser SSH client, so a third-party SSH client is needed.
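A minimal sketch of the full PuTTY workflow, assuming a hypothetical key file my-key.pem and instance address (the puttygen command-line tool ships with PuTTY on Linux; on Windows, use the PuTTYgen GUI instead):
# Convert the AWS .pem key to PuTTY's .ppk format
puttygen my-key.pem -O private -o my-key.ppk
# Connect with PuTTY; the login name depends on the source AMI (ubuntu for Ubuntu images)
putty -ssh -i my-key.ppk ubuntu@ec2-203-0-113-1.compute-1.amazonaws.com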

Related

startx and VNC are not working on AWS Lightsail CentOS 7

I installed TigerVNC on AWS Lightsail CentOS 7. When I try to connect to the server with a VNC client, it shows a black screen. When I then try to run startx, it gives an error: "xauth: file /root/.serverauth.6301 does not exist".
Is it possible that a GUI simply does not work on AWS Lightsail?
I was using an AWS Lightsail server with a low-spec plan. The problem was gone after I moved to another AWS Lightsail server with a higher-spec plan. I think the problem was low memory.
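If low memory is the suspected cause, a quick way to confirm it on the instance is shown below, together with a stopgap swap file (a sketch; the 1 GiB size is an arbitrary assumption):
free -m                          # check total, used, and available RAM
sudo fallocate -l 1G /swapfile   # create a 1 GiB swap file as a stopgap
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile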

VirtualBox - Can't access VM when running in headless mode

I installed an Ubuntu server with an OpenSSH server on VirtualBox and it works fine. When I start it from the GUI I can access it via SSH and PuTTY with no problem. When I start it in headless mode from the VirtualBox GUI there is no problem either.
The problem is, when I run it using VBoxManage startvm "Ubuntu" --type headless, it returns a message saying that Ubuntu is running in headless mode, but when I then try to connect to it via SSH, it is not accessible. My host OS is Windows 10, the VM's name is "Ubuntu", the OS version is ubuntu-16.04.2-server-x64, and I installed openssh-server and dkms as described here: https://www.htpcbeginner.com/install-virtualbox-guest-additions-on-ubuntu-debian/
When I work in the GUI everything is fine, but I want to run the VM from the Windows command line to save some time.
It looks like the command-line VM "Ubuntu" is different from the GUI VM "Ubuntu". But I have only one VM in VirtualBox: in the GUI there is one, and in cmd, vboxmanage list vms returns one VM. So what is the problem?
I also installed the VirtualBox Guest Additions from the Devices menu in the VirtualBox GUI.
Edit:
I tried another command, VBoxHeadless --startvm "Ubuntu". It's not working either, but unlike the previous command it does not print a message that Ubuntu is running; it actually hangs during execution and the cursor turns into a blinking dash forever, so I have to close the cmd window to get rid of it.
I checked something: if I use NAT on the network adapter with port forwarding, it works even from cmd, but when I use a bridged adapter to avoid port forwarding, it does not. In bridged mode there is connectivity and ping works, but I can't SSH to the Ubuntu VM. A sketch of the NAT setup that did work is below.
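For reference, a sketch of the NAT port-forwarding setup that did work (the rule name guestssh and host port 2222 are arbitrary choices, and the VM must be powered off when the rule is added):
# Forward host port 2222 to guest port 22 on the VM's first (NAT) adapter
VBoxManage modifyvm "Ubuntu" --natpf1 "guestssh,tcp,,2222,,22"
VBoxManage startvm "Ubuntu" --type headless
# SSH to the guest through the forwarded port
ssh -p 2222 user@127.0.0.1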
I found the solution: keep the NAT interface as the primary one (as it is by default) and add a secondary interface in the VirtualBox GUI settings. The secondary interface should be Host-only. Then, following this question and its answer, I gave the guest Ubuntu an interface with a static address. Now I can SSH to that static IP address even when I run the VM from the command line, and there is no need for port forwarding.
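A sketch of the guest side of that setup, assuming the host-only adapter appears as enp0s8 inside the guest and uses VirtualBox's default host-only network 192.168.56.0/24 (on Ubuntu 16.04 this goes in /etc/network/interfaces):
# /etc/network/interfaces: static address on the hypothetical host-only adapter enp0s8
auto enp0s8
iface enp0s8 inet static
    address 192.168.56.10
    netmask 255.255.255.0
After sudo ifup enp0s8 (or a reboot), ssh user@192.168.56.10 works no matter how the VM was started, while the primary NAT adapter still provides outbound internet access.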

Cygwin/sshd and VirtualBox

I'm using Vagrant/VirtualBox on my Windows (8.1) laptop to start a Linux test VM from a Cygwin terminal... vagrant up, vagrant ssh, everything is working fine.
Now I want to work on that environment remotely from my main Linux workstation, so I've set up sshd in Cygwin, and I can successfully SSH into my Windows box (as the same user that is logged in locally on Windows).
But when I cd (via my remote SSH connection to the Windows laptop) into my working directory and run vagrant ssh, it tells me:
VM must be created before running this command. Run 'vagrant up' first
But I can see the VM running in the VirtualBox GUI on Windows.
From this point on, even locally on the Windows machine, I can no longer interact with the running Vagrant VM, and the .vagrant (sub)directory has no files inside.
The same happens vice versa:
I stopped/deleted the VM in the VirtualBox GUI
ran vagrant up via my SSH connection ... worked
ran vagrant ssh via my SSH connection ... works
but I do not see the VM in the VirtualBox GUI on Windows
trying vagrant ssh locally on Windows ... same error again, and the .vagrant directory gets cleared
So I assume the Cygwin/sshd connection creates some sort of separate session that does not share the same "instance" of VirtualBox.
Is there any way to share the VirtualBox/Vagrant environment between the local Windows session and the remote SSH session?
WORKAROUND:
Export the SSH config on the Windows host: vagrant ssh-config > ssh_config
From the Cygwin SSH session, jump straight into the VM: ssh -F ssh_config default (see the sample ssh_config below)
Never run any vagrant command from the Cygwin SSH connection.
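For reference, the exported ssh_config typically looks roughly like the following (an illustrative sample, not literal output; the port, paths, and options vary per machine). It shows why plain ssh can reach the VM without touching Vagrant's state at all:
Host default
  HostName 127.0.0.1
  User vagrant
  Port 2222
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  IdentityFile C:/path/to/project/.vagrant/machines/default/virtualbox/private_key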
Vagrant has had a built-in solution since the 1.7.x versions called vagrant share, which also allows you to remote into a box directly (bypassing the Windows host abstraction). It is generally used for the HTTP feature (e.g. to show clients or others on a project the current state of work), but it can also connect to any service running on any port. From the docs:
Just call vagrant share. This will automatically share as many ports
as possible for remote connections. If the Vagrant environment has a
static IP or DNS address, then every port will be available.
Otherwise, Vagrant will only expose forwarded ports on the machine.
Note the share name at the end of calling vagrant share, and give this
to the person who wants to connect to your machine. They simply have
to call vagrant connect NAME. This will give them a static IP they can
use to access your Vagrant environment.
Note that to use vagrant share you need a (free) account with HashiCorp.
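A sketch of that flow (the share name is an invented example; Vagrant generates one for you when you run the command):
# On the Windows host, share SSH access to the running box:
vagrant share --ssh
# Vagrant prints a share name at the end, e.g. "hilarious-panda-1234"
# On the remote Linux workstation:
vagrant connect --ssh hilarious-panda-1234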

Could not connect to VM created with Azure command line tools

I am trying to use the Azure Command Line Tools (http://www.windowsazure.com/en-us/manage/linux/how-to-guides/command-line-tools/) to create an Ubuntu 12.04 VM.
I am issuing the following commands:
azure vm create xxxxxxxxxx.cloudapp.net b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-12_04_1-LTS-amd64-server-20121218-en-us-30GB azureuser mypassword --location "West Europe"
azure vm endpoint create xxxxxxxxxx 22 22
azure vm start xxxxxxxxxx
This seems to create and start the VM successfully.
I try to connect to the VM via SSH using the following command (on Mac OS X):
ssh azureuser@xxxxxxxxxx.cloudapp.net
However, when I try to SSH into the VM, it seems that password authentication is disabled on the VM, as I am getting the following error:
Permission denied (publickey).
I would like to add that connecting via SSH to an Ubuntu VM created through the Azure Management Portal works absolutely fine. This issue only appears when the VM was created through the Azure command-line tools.
Has anybody encountered a similar issue and knows how to solve it?
You need to use the --ssh switch on your azure vm create command to enable SSH; adding the endpoint afterwards has no effect.
According to the Windows Azure command-line tool for Mac and Linux documentation, you can only add SSH connectivity via the azure CLI at the time the virtual machine is created.
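A sketch of the corrected create command, reusing the values from the question (with the legacy azure CLI, --ssh with no argument enables SSH on the default port 22 at creation time):
azure vm create xxxxxxxxxx.cloudapp.net b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-12_04_1-LTS-amd64-server-20121218-en-us-30GB azureuser mypassword --location "West Europe" --ssh
azure vm start xxxxxxxxxx
ssh azureuser@xxxxxxxxxx.cloudapp.net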

PostgreSQL SSH Tunnel to Amazon EC2?

I've created an Amazon EC2 AMI running CentOS Linux 5.5 and PostgreSQL 8.4. I'd like to be able to create an SSH tunnel from my machine to the instance so that I can connect to the PostgreSQL database from my development machine (JDBC, Tomcat, etc.). I haven't modified the PostgreSQL configuration at all yet. I can successfully SSH into the instance from a command line, and have run the following command to try to create my tunnel:
ssh -N -L2345:<My instance DNS>:5432 -i <keypair> root@<My instance DNS>
I don't receive any errors when initially running this command. However, when I try to use psql to open a connection on localhost:2345, the connection fails.
Any thoughts as to why this is happening?
The first <My instance DNS> should be localhost: the forwarding destination is resolved on the remote side, and an unmodified PostgreSQL install only listens on the loopback interface, so tunneling to the public DNS name bypasses the only address PostgreSQL is bound to. And you probably don't want or need to SSH as root.
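A sketch of the corrected tunnel, keeping the question's placeholders (ec2-user is a guess at the AMI's default login; use whatever user your AMI provides):
# Bind local port 2345 to port 5432 on the instance's own loopback interface
ssh -N -L 2345:localhost:5432 -i <keypair> ec2-user@<My instance DNS>
# Then, from the development machine:
psql -h localhost -p 2345 -U postgres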
