How can I connect to AWS machines with ease from my laptop? - knife

I need to connect to AWS servers but have a hard time figuring out the IP or the SSH key. Is there an easy way to connect?
I would like a tool that will help me connect to the machines.

Our DevOps team has developed a tool to use with knife.
https://github.com/bluevine-dev/system/tree/master/misc/scripts/knife-con
A knife plugin that provides a flexible, easy-to-use interface for managing SSH connections to AWS EC2 instances and Chef nodes from within the same scope.
Some features of knife-con:
Dynamic manipulation of Chef environments and AWS accounts with a single, easy-to-understand config
Supports in-depth search of Chef node attributes, based on your needs
Dynamic search use cases: you can choose whether to use the combined interface (Chef and AWS), only AWS, or only Chef
Parallel SSH support on search results - you can pass a single shell command to all of the search results
Fast and easy search-result selection with a clean UI
Automatic VPN switching - multi-platform support (OpenVPN, Tunnelblick)
How does the plugin work?
The plugin accepts multiple options, such as regular expressions or Chef role patterns. During the search phase it prefers to take its results from a node search on the Chef server, attempting to locate nodes that have checked in during a preset healthy period (24 hours by default; this can be overridden). If no results are returned from the server, the plugin falls back to locating AWS instances that match the search pattern. It then offers an easy-to-understand interface for managing the resulting nodes and instances, letting you choose the relevant node or instance and establish an SSH session from your local machine.
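For context, the plugin builds on the node search and parallel-SSH capabilities that stock knife already exposes. A minimal sketch using standard knife commands, where the role name and SSH user are assumptions:

# Find matching Chef nodes and show an attribute (the role pattern is an example)
knife search node 'role:webserver' -a ipaddress

# Run a single shell command in parallel across all matching nodes
knife ssh 'role:webserver' 'uptime' -x ubuntu

knife-con wraps this kind of workflow and adds the AWS fallback, result picker, and VPN handling described above.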

Related

Host multiple services that need same ports open on gitlab ci

This is an issue I've been postponing for a while, but I need to get it fixed at some point.
Basically, I have two services which I have containerized and registered in my GitLab registry. The two services represent different versions of the same program. In my automated tests I have two test suites which test for backwards compatibility with these services. The issue is that when my automated tests run, only one of the services runs correctly; I assume there is a conflict over the ports they use, so GitLab can't run all of the services at the same time.
Is there a way to get around this without making the ports specifiable in the code? That option would take the most time, and I'd rather leave it as a last resort.
After some digging, it seems that making the ports configurable is my only option. As per the GitLab Runner Kubernetes executor documentation: "You cannot use several services using the same port (e.g., you cannot have two mysql services at the same time)." https://docs.gitlab.com/runner/executors/kubernetes.html
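Since the ports have to become configurable anyway, the least invasive change is usually to let each service read its listen port from the environment. A rough sketch of a service launch script under that assumption; SERVICE_PORT, the 8080 default, and the --port flag are placeholders for whatever your program actually supports:

#!/bin/sh
# Pick the listen port from the environment so two copies of the service
# can run side by side; falls back to 8080 when nothing is set.
PORT="${SERVICE_PORT:-8080}"
exec /usr/local/bin/my-service --port "$PORT"

The CI configuration (or the image itself) can then give each copy a different SERVICE_PORT so the two versions no longer collide.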

Kubernetes cluster Nodes not creating automatically when other lost in Kubespray

I have successfully deployed a multi-master Kubernetes cluster using the repo https://github.com/kubernetes-sigs/kubespray and everything works fine. But when I stop or terminate a node in the cluster, a new node does not join the cluster. I had previously deployed Kubernetes using kOps, and there new nodes were created automatically when one was deleted. Is this the expected behaviour in Kubespray? Please help.
It is expected behaviour, because kubespray doesn't create any ASGs (Auto Scaling Groups), which are AWS-specific resources. Kubespray only deals with existing machines; the repo does offer some Terraform tooling for provisioning machines, but kubespray itself does not get into that business.
You have a few options available to you:
Post-provision using scale.yml
Provision the new Node using your favorite mechanism
Create an inventory file containing the new Node and the etcd machines (presumably so kubespray can issue etcd certificates for the new Node)
Invoke the scale.yml playbook (see the sketch after this list)
You may enjoy AWX in support of that.
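A rough sketch of that flow, assuming a kubespray checkout; the inventory path, the group names, and the node name node5 are placeholders and vary between kubespray versions:

# Inventory fragment for the new machine; the etcd hosts must also be
# present in the same inventory so certificates can be issued.
# inventory/mycluster/hosts.ini:
#   [all]
#   node5 ansible_host=10.0.0.15
#   [kube-node]
#   node5

# Run the scale playbook against the updated inventory, optionally
# limited to the new node
ansible-playbook -i inventory/mycluster/hosts.ini -b scale.yml --limit=node5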
Using plain kubeadm join
This is the mechanism I use for my clusters, FWIW
Create a kubeadm join token using kubeadm token create --ttl 0 (or whatever TTL you feel comfortable using)
You'll only need to do this once, or perhaps once per ASG, depending on your security tolerances
Use the cloud-init mechanism to ensure that docker, kubeadm, and kubelet binaries are present on the machine
You are welcome to use an AMI for doing that, too, if you enjoy building AMIs
Then invoke kubeadm join as described here: https://kubernetes.io/docs/setup/independent/high-availability/#install-workers
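A rough sketch of that flow; the API server endpoint, token, and CA hash are placeholders, and the package installation assumes the Kubernetes apt repository is already configured on the image:

# On an existing control-plane node: mint a long-lived token and print the join command
kubeadm token create --ttl 0 --print-join-command

# In the new worker's user data (cloud-init runs this once at first boot)
apt-get update
apt-get install -y docker.io kubelet kubeadm
kubeadm join 10.0.0.10:6443 \
  --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:<hash-printed-above>

The same script can be baked into an AMI or launch template for an ASG so replacement workers join automatically.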
Use a Machine Controller
There are plenty of "machine controller" components that aim to use custom controllers inside Kubernetes to manage your node pools declaratively. I don't have experience with them, but I believe they do work; there are several projects in this space to choose from.
Our friends over at Kubedex have an entire page devoted to this question

Should I configure my EC2 using user_data or Ansible?

When launching EC2 instances using Terraform (or CloudFormation), we can configure them by putting some scripts in user_data/remote-exec. Alternatively, we can configure EC2 using Ansible/Chef, etc. What is the difference between configuring EC2 in user_data/remote-exec and doing it with Ansible/Chef? When should you use the former, and when the latter (I know Ansible/Chef is idempotent)?
In my case, the EC2 instance was originally launched manually and then configured manually with a lot of Linux commands, and those commands were not written by me. Now I am the person automating the whole setup with Terraform and configuring the EC2 instances. Using user_data/remote-exec to configure them is straightforward: I just need to put all the existing Linux commands into a few scripts with minor changes. And if the resulting configuration isn't right, I can at least quickly check whether I missed some commands by comparing my script against the original ones. But if I use Ansible/Chef, I have to rewrite all the steps in a different language, and if the configuration doesn't come out as expected, it is hard to figure out which steps are wrong, because the syntax of Ansible/Chef and plain Linux commands is totally different.
My question is: in my case, should I use Ansible/Chef or user_data/remote-exec for configuration?
User Data is good for initial configuration of the system. If you need longer-term maintenance, configuration management software like Ansible/Chef/Salt/Puppet is a great option.
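As a rough illustration, a user_data bootstrap for this kind of one-time setup is just a shell script handed to the instance (e.g. via Terraform's user_data argument); the packages below are placeholders for the existing commands you'd paste in:

#!/bin/bash
# Runs once, at the first boot of the instance; it is not re-applied later,
# which is the key difference from repeated Ansible/Chef runs.
set -euxo pipefail
apt-get update
apt-get install -y nginx
systemctl enable --now nginx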
Packer can be used for immutable infrastructure, i.e. infrastructure that doesn't change after creation. You run all the scripts and installs while building the image, so the system is ready as soon as it boots; this is also faster because you don't have to wait for user data to run.
A few questions you also have to ask: how often are you going to patch these systems? Are you going to update the existing servers or replace them with new ones? Ansible is great for configuration since it's just YAML files.
Blue/Green deployments generally replace servers with all new ones and gradually move traffic over to the new servers.
There are some more things to consider with your Infrastructure as Code as well.

Windows Azure and a third-party Windows Service

I am developing a website that I intend to run within Windows Azure using a single Web Role. The site will make use of the Sphinx Search engine, which will need to run as a Windows service. So, my question is this: is it possible to install the Sphinx Search Windows service inside a Web Role?
From my initial research into Azure, I am thinking "yes", for the reason that the Web Role is a VM running IIS. Therefore I should be able to remote in, install the service, and it should work. :)
Does this sound right?
Installing software via RDP is not a viable solution with Web/Worker role instances, as these changes won't persist. You need to install it either from a startup script or from OnStart(). Since you want to install as a service, that would imply startup script, since it would need elevated permissions. Note: The installer must support unattended mode, where all parameters are specified via command line with no human interaction.
What about scalability? If you have more than one instance of your web role running, can sphinx run across two instances? From what I read, it supports ODBC-compliant databases, and you might be able to use it against Windows Azure SQL Database. If that's the case, can two sphinx engines run on two different machines accessing the same data store? If so, this sounds like a viable solution.
If installation cannot be automated, or you need something additional like MySQL, you may want to consider placing the sphinx search engine inside a Virtual Machine (new in June 2012). Now you can spin up a Windows 2008 Server, RDP into it, configure it exactly how you want it.
Strictly speaking, yes, you could do that. However, this assumes that you would be running on one VM instance and also that the instance would never need restarting.
You should consider looking at Azure worker roles for any functionality that would normally exist as a windows service.
After reading your answers, and thinking about it a bit more, I think dropping the idea of installing a service would be the best course of action. I've been looking at the API for Lucene.NET (this may be the same for Sphinx), and it's possible to encapsulate the writing and managing of indexes within code, so there is no need for a service.
For Azure, there is a library for managing index files using both local and Azure storage, which could be of use. Scenarios I've read about show that it's then possible to have a Web Role that processes HTTP requests and performs the searches, and a Worker Role that accepts DB changes via a queue and writes them to the indexes.

Recommended approach & tools to provision a VM instance(s) from node.js?

I am trying to implement a 'lab in the cloud' to allow people to have a sandbox to experiment and learn in, e.g. for DevOps (Chef/Puppet), installing or configuring software, etc.
I have a node.js server implementation to manage this and am looking for sane and reasonable ways to attack this problem.
The options are bewilderingly diverse: Puppet or Chef directly, or Vagrant, all seem appropriate, but OpenStack, Cloud Foundry, and Amazon EC2 also provide their own feature sets.
A micro-cloud solution (multiple VMs per instance) would be ideal, as there isn't going to be any large computational load.
Suggestions most appreciated.
Cheers
After some investigation, it seems that LXC on EC2 might be the way forward (see the sketch after the links below).
It gives:
lightweight instances on a single EC2 instance
support for hibernate/restore
fast stand-up
the ability to automate using Chef/Cucumber
EC2 virtualization using LXC
Chef-lxc
Testing infrastructure code in LXC using Cucumber
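A minimal sketch of that setup on an Ubuntu EC2 host, assuming the lxc userspace tools are installed; the container name and template are placeholders:

# Create and start a lightweight sandbox container
sudo lxc-create -t ubuntu -n sandbox01
sudo lxc-start -n sandbox01 -d

# "Hibernate"/restore maps roughly onto freezing and unfreezing
sudo lxc-freeze -n sandbox01
sudo lxc-unfreeze -n sandbox01

# Open a shell inside the container, e.g. to kick off a chef-solo run
sudo lxc-attach -n sandbox01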

Resources