Connect to remote server without SSH Key using VSCode and AWS SSM - linux

I'm looking for a way to connect to a server with VSCode without an SSH key. One way is to enable password authentication which I'd rather avoid. The servers are on company LAN but I still don't want to go that route.
We've looked at solutions such as Okta ASA for keyless SSH (client/agent model), and also AWS SSM with IAM roles and instance profiles. Both work great from a terminal, but I'm not sure how that connection could be passed through an IDE the way a key- or password-based connection can.
Any thoughts or directions would be helpful. Thanks!
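For context, the closest pattern I've found is routing SSH through SSM with a ProxyCommand in ~/.ssh/config, which the Remote-SSH extension then picks up like any other host. A sketch (the host alias and instance ID are placeholders; it assumes the AWS CLI and the Session Manager plugin are installed locally and the instance has the SSM agent plus a suitable instance profile):

```
# ~/.ssh/config -- sketch only; host alias and instance ID are placeholders
Host my-ssm-box
    HostName i-0123456789abcdef0
    User ec2-user
    # Tunnel SSH over an SSM session instead of exposing port 22
    ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
```

VSCode can then target `my-ssm-box`, but the SSH layer on top of the tunnel still wants some credential, which is the part I'm unsure about.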

Related

Public connection to AWS Linux server

I'm quite new to setting up the config for servers.
I want to have a user connect to my Linux server; I'm using AWS to host the virtual machine. I cannot find a way to do this without using a public key. I want the user to just have to enter a username and password.
Any help would be appreciated!
Why will the user connect to your server? Is it for database operations or something? If it is for database operations, you can use an API for that; a direct user connection to the server is not a good idea. If you want to connect to your server for configuration, you can use SSH.
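If password login is still what's wanted despite the advice above, the switch lives in sshd_config; a minimal sketch (the restart command and service name vary by distro, and the security trade-off should be weighed first):

```
# /etc/ssh/sshd_config -- allow username/password logins (generally discouraged on internet-facing hosts)
PasswordAuthentication yes

# Then set a password for the user and reload sshd, e.g.:
#   sudo passwd someuser
#   sudo systemctl restart sshd    # the service may be named "ssh" on Debian/Ubuntu
```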

google cloud disabled publickey?

I changed sshd_config to disable public key authentication, and now I cannot connect to the server.
I have a user, but that user has no privileges.
It is a Google Cloud Debian 9 server with a Bitnami WordPress deployment.
Please check the documentation that describes how to connect to the instance using SSH.
You may also check the troubleshooting SSH documentation.
You may also try to use gcloud compute ssh, where you can even connect as a service account.
If the above does not work for you, please update your question with more details, such as the precise error message that you get and the user.
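As a concrete example of the gcloud compute ssh suggestion (instance name, zone, and service account are placeholders); note that this still relies on key-based auth under the hood, so it won't help if PubkeyAuthentication itself was turned off in sshd_config:

```sh
# Connect with your own account; gcloud generates and propagates an SSH key for you
gcloud compute ssh my-instance --zone us-central1-a

# Or run the command while impersonating a service account that has access
gcloud compute ssh my-instance --zone us-central1-a \
    --impersonate-service-account deploy@my-project.iam.gserviceaccount.com
```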

Is there a way to use IAM to manage developer access to an EC2 instance? (ssh not ec2 api)

Is there a way to use IAM to manage developer access to an EC2 instance? (ssh not ec2 API).
Not the EC2 REST API or the online console, but managing individual SSH or FTP access to a server?
What you are looking for is a Linux Pluggable Authentication Module (PAM) that talks to the AWS IAM service.
This is not available out of the box in an image, but have a look here:
https://github.com/denismo/aws-iam-ldap-bridge
This project allows you to sync an LDAP server with IAM, and then you can configure your sshd to use the LDAP server.
That might work for you.
You can use either of these two projects: https://github.com/widdix/aws-ec2-ssh or https://github.com/kislyuk/keymaker
They amount to synchronising the IAM accounts to the user accounts and can pull the SSH keys in. They both rely on a cron job to keep them up to date.
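The core of what those projects do can be sketched as a cron-driven shell script along these lines (the group name and paths are illustrative, and the real projects handle many more edge cases such as username sanitisation):

```sh
#!/bin/bash
# Sketch: sync members of an IAM group to local users and install their IAM-stored SSH keys.
# Assumes the instance role may call iam:GetGroup, iam:ListSSHPublicKeys, iam:GetSSHPublicKey.
GROUP="developers"   # illustrative IAM group name

for user in $(aws iam get-group --group-name "$GROUP" \
                --query 'Users[].UserName' --output text); do
    id "$user" &>/dev/null || useradd -m "$user"
    mkdir -p "/home/$user/.ssh"

    # Concatenate all active SSH public keys the user has uploaded to IAM
    : > "/home/$user/.ssh/authorized_keys"
    for key_id in $(aws iam list-ssh-public-keys --user-name "$user" \
                      --query 'SSHPublicKeys[?Status==`Active`].SSHPublicKeyId' --output text); do
        aws iam get-ssh-public-key --user-name "$user" \
            --ssh-public-key-id "$key_id" --encoding SSH \
            --query 'SSHPublicKey.SSHPublicKeyBody' --output text \
            >> "/home/$user/.ssh/authorized_keys"
    done

    chown -R "$user:$user" "/home/$user/.ssh"
    chmod 700 "/home/$user/.ssh"
    chmod 600 "/home/$user/.ssh/authorized_keys"
done
```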
No, IAM is meant to control access to EC2 resources; logging into an instance via SSH does not qualify as the same thing. Anyone who has the .pem key can log into the instance (presuming SSH access is allowed in the security groups).

How to connect Mongodb Admin GUI to Cloud Foundry?

I am looking for a way to browse my Cloud Foundry MongoDB services. It looks like there are two options:
Tunneling to a Cloud Foundry service with Caldecott http://docs.cloudfoundry.com/tools/vmc/caldecott.html. I have never tried this, but I guess it may work.
My question is this: is it possible to connect directly to Cloud Foundry from a MongoDB admin GUI such as mViewer or MongoVUE? And if so, how do I know the username/password in process.env.VCAP_SERVICES['mongodb-1.8'][0]['credentials']?
https://github.com/Imaginea/mViewer
http://www.mongovue.com/2011/08/04/mongovue-connection-to-remote-server-over-ssh/
To use a GUI client you have to open a tunnel to the service. Once you open it in a CLI console, the connection info is generated and displayed, including the host address (usually 127.0.0.1), port number, username, and password. You cannot connect using the values from VCAP_SERVICES from an outside environment, because these are local values behind the CF router.
You need to create a tunnel using Caldecott.
See http://docs.cloudfoundry.com/tools/vmc/caldecott.html.
When you open the tunnel, it should provide you with either a command line client, or the credentials to use.
In case it does not, create a small piece of code that returns a dump of process.env.VCAP_SERVICES when you visit a certain URL on your server.
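For the tunnel step itself, the Caldecott flow from the old vmc CLI looked roughly like this (the service name is a placeholder, shown by `vmc services`; exact prompts and output vary by version):

```sh
# The Caldecott gem does the actual tunneling for the old vmc CLI
gem install caldecott

# Open a tunnel to the provisioned service instance
vmc tunnel my-mongodb

# vmc prints the local connection details (usually 127.0.0.1, a local port,
# plus the generated username/password) to paste into mViewer or MongoVUE.
```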

Managing inter instance access on EC2

We are in the process of setting up our IT infrastructure on Amazon EC2.
Assume a setup along the lines of:
X production servers
Y staging servers
Log collation and Monitoring Server
Build Server
Obviously we need various servers to talk to each other. A new build needs to be scp'd over to a staging server. The log collator needs to pull logs from production servers. We are quickly realizing we are running into trouble managing access keys. Each server has its own key pair and possibly its own security group. We end up copying *.pem files from server to server, which makes something of a mockery of security. The build server has the access keys of the staging servers in order to connect via ssh and push a new build. The staging servers similarly have the access keys of the production instances (gulp!)
I did some extensive searching on the net but couldn't really find anyone talking about a sensible way to manage this issue. How are people with a setup similar to ours handling it? We know our current way of working is wrong. The question is: what is the right way?
Appreciate your help!
Thanks
[Update]
Our situation is complicated by the fact that at least the build server needs to be accessible from an external service (specifically, GitHub). We are using Jenkins, and the post-commit hook needs a publicly accessible URL. The bastion approach suggested by #rook fails in this situation.
A very good method of handling access to a collection of EC2 instances is using a Bastion Host.
All machines you use on EC2 should disallow SSH access from the open internet, except for the bastion host. Create a new security group called "Bastion Host", and only allow port 22 incoming from the bastion to all other EC2 instances. All keys used by your EC2 fleet are housed on the bastion host. Each user has their own account on the bastion host, and should authenticate to it using a password-protected key file. Once they log in, they have access to whatever keys they need to do their job. When someone is fired, you remove their user account from the bastion. If a user copies keys off the bastion it won't matter, because they can't log in to the instances unless they first log in to the bastion.
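On the client side, the bastion hop can be made transparent in ~/.ssh/config; a sketch with made-up hostnames and users (ProxyJump needs OpenSSH 7.3+, older clients can use ProxyCommand with -W instead):

```
# ~/.ssh/config -- hostnames, user, and key path are placeholders
Host bastion
    HostName bastion.example.com
    User alice
    IdentityFile ~/.ssh/bastion_key      # password-protected key, as described above

# Internal instances only accept port 22 from the bastion's security group
Host 10.0.*.*
    User alice
    ProxyJump bastion
```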
Create two sets of keypairs, one for your staging servers and one for your production servers. You can give your developers the staging keys and keep the production keys private.
I would put new builds onto S3 and have a Perl script running on the boxes that pulls the latest code from your S3 buckets and installs it on the respective servers. That way you don't have to manually scp every build over. You can also automate the process with some sort of continuous-build tool that builds the artifact and uploads it to your S3 buckets. Hope this helps.
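A minimal sketch of the pull side (the original suggestion was a Perl script; plain shell with the AWS CLI works the same way, and the bucket name, paths, and artifact format here are made up):

```sh
#!/bin/bash
# Sketch: cron job on a staging/production box that pulls the newest build
# from S3 and installs it, so nothing has to be scp'd between servers.
BUCKET="s3://my-build-artifacts/staging"   # illustrative bucket/prefix
DEST="/opt/myapp"

# aws s3 ls prints "date time size key"; sorting by date puts the newest last
latest=$(aws s3 ls "$BUCKET/" | sort | tail -n 1 | awk '{print $4}')

if [ -n "$latest" ]; then
    aws s3 cp "$BUCKET/$latest" "/tmp/$latest"
    tar -xzf "/tmp/$latest" -C "$DEST"     # assumes .tar.gz artifacts
    # restart step is deployment-specific, e.g.:
    # systemctl restart myapp
fi
```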
