Dynamically updating EC2 Security Groups on deploy - node.js

I have a number of node apis that run on elastic beanstalk.
We are configuring the load balancer and a number of other things using .config files in the ebextensions folder.
Is it possible to get the security group ID from the newly created security group when eb create is run and the API is pushed to Elastic Beanstalk and started, and then add it to an inbound rule on another security group that already exists?
We would like to have it all scripted, so that when we terminate and re-create the environment the rules are re-created as well.
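One way to script this (a sketch, assuming your pre-existing security group is sg-12345678 and your API listens on port 3000; both are placeholders) is an .ebextensions config with a Resources section. Elastic Beanstalk exposes the environment's new security group as the AWSEBSecurityGroup resource, which you can reference in an ingress rule on the existing group:

```yaml
# .ebextensions/sg-ingress.config (hypothetical filename)
Resources:
  ExistingSGIngress:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: sg-12345678                # the pre-existing security group (placeholder)
      IpProtocol: tcp
      FromPort: 3000                      # your API's port (placeholder)
      ToPort: 3000
      SourceSecurityGroupId: { "Ref": "AWSEBSecurityGroup" }  # the group eb create makes
```

Because the rule belongs to the environment's CloudFormation stack, it is deleted when you terminate the environment and re-created on the next eb create, which matches the scripted lifecycle you describe.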


"no data" health status while deploying node app via beanstalk

I tried to deploy a simple node application (code here) via AWS Elastic Beanstalk, going with all default options except that I uploaded the sample application, as the default one didn't work.
In all my attempts, I eventually get a "No data" error as the health status. After 12-16 minutes of waiting, the last line in the logs says
Environment health has transitioned from Pending to No Data. Initialization in progress (running for 16 minutes). There are no instances.
Could someone please help me here?
For the benefit of others: I managed to resolve the above issue by configuring the VPC, which wasn't configured by default.
I deleted the environment and the application. Then I went to the VPC console, deleted the default VPC, and created a new default VPC, which came with subnets.
Then I went back to Elastic Beanstalk and created a new application, specifically choosing Advanced Configuration. Under Network, I chose the new default VPC and configured one of the subnets to be public. For the RDS database, I chose both of the subnets to be private. This configuration got the sample app running.
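The same VPC settings chosen in the console can also be scripted in an .ebextensions config via the aws:ec2:vpc namespace; a sketch (all IDs are placeholders for your own VPC and subnets):

```yaml
# .ebextensions/vpc.config (hypothetical filename)
option_settings:
  - namespace: aws:ec2:vpc
    option_name: VPCId
    value: vpc-0abc123                    # placeholder VPC ID
  - namespace: aws:ec2:vpc
    option_name: Subnets
    value: subnet-0pub456                 # public subnet for instances (placeholder)
  - namespace: aws:ec2:vpc
    option_name: AssociatePublicIpAddress
    value: true
  - namespace: aws:ec2:vpc
    option_name: DBSubnets
    value: subnet-0priv789,subnet-0priv012  # private subnets for RDS (placeholders)
```

This keeps the fix reproducible, so a re-created environment doesn't depend on remembering the console steps.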

Is it possible to use Azure Dev Spaces with API Management?

I have an Azure AKS cluster running in the Azure cloud. It is accessed by frontend and mobile clients via Azure API Management. My frontend app is outside of the AKS cluster.
Is it possible to use Azure Dev Spaces in this setup to test my changes in an isolated environment?
I've created a new namespace in AKS and a separate deployment slot for the testing environment on the frontend app, but I can't figure out how to create isolated routing in Azure API Management.
As a result, I'd like an isolated environment that shares most of the containers on AKS but uses my local machine to host the one service currently under test.
I assume you intend to use Dev Spaces routing through a <space>.s. prefix on your domain name. For this to work, you ultimately need a Host header that includes such a prefix as part of the request to the Dev Spaces ingress controller running in your AKS cluster.
It sounds like in your case you are running your frontend as an Azure Web App and your backend services in AKS. Your frontend would therefore need to include the logic to do one of two things:
Allow the slot instance to customize the space name to use (e.g. it might call the AKS backend services using something like testing.s.default.myservice.azds.io)
Read the Host header from the frontend request and propagate it to the backend request.
In either case, you will probably need to configure Azure API Management to correctly propagate appropriate requests to the testing slot you have created. I don't know enough about how API Management configures routing rules to help on this part, but hopefully I've been able to shed some light on the Dev Spaces part.
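For the second option, one possibility is an API Management inbound policy that overrides the Host header before the request reaches the Dev Spaces ingress; a sketch, where the hostname is a placeholder following the <space>.s. pattern for a space named "testing":

```xml
<inbound>
    <base />
    <!-- Override Host so the Dev Spaces ingress routes to the "testing" space (placeholder hostname) -->
    <set-header name="Host" exists-action="override">
        <value>testing.s.default.myservice.azds.io</value>
    </set-header>
</inbound>
```

You would still need a way to apply this policy only to requests that should go to the testing environment (for example, a separate API Management API or a condition on an incoming header), which is the routing-rule part I can't speak to authoritatively.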

How does one deploy multiple micro-services in Node on a single AWS EC2 instance?

We are pretty new to AWS and looking to deploy multiple services into one EC2 instance.
Each micro-service is developed in its own repository.
Each service will have its own endpoint URL
Services may talk to each other
Services can be updated/deployed separately
Do we need a beanstalk for each? I hope not.
Thank you in advance
So the way we tackled a similar issue at our workplace was to leverage the multi-container Docker platform supported by Elastic Beanstalk in most AWS regions.
In brief: we had a dedicated repository for each of our services in ECR (Elastic Container Registry), where the different "versioned" images were pushed by a deploy script.
Once that is configured and set up, all you need to do is deploy a Dockerrun.aws.json file, which lists all the apps you want to deploy as part of the Docker cluster on one EC2 instance (make sure it is big enough to handle multiple applications). This file is also where you declare links between applications (so they can talk to one another), port configurations, logging drivers and groups (we used AWS CloudWatch for logging), and many other fields. The JSON is very similar to the docker-compose.yml used to bring up your stack for local development and testing.
I would suggest checking out the sample configuration that Amazon provides for more information. I also found the Docker documentation to be pretty helpful in this regard.
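A minimal sketch of such a Dockerrun.aws.json (version 2, with two hypothetical Node services pulled from ECR; the account ID, region, service names, and ports are all placeholders):

```json
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "users-api",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/users-api:1.0.0",
      "essential": true,
      "memory": 256,
      "portMappings": [{ "hostPort": 3000, "containerPort": 3000 }]
    },
    {
      "name": "orders-api",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/orders-api:1.0.0",
      "essential": true,
      "memory": 256,
      "links": ["users-api"],
      "portMappings": [{ "hostPort": 3001, "containerPort": 3001 }]
    }
  ]
}
```

The links entry lets orders-api reach users-api by container name, which is what makes the services-may-talk-to-each-other requirement work on a single instance.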
Hope this helps!!
It is not clear if you have a particular tool in mind, but if you can deploy a single micro-service with a given tool, deploying multiple should work the same way.
How does one deploy multiple micro-services in Node on a single AWS EC2 instance?
Each micro-service is developed in its own repository.
Services can be updated/deployed separately
This should be the same as deploying a single micro-service. As long as the services run on different paths and ports, it should be fine.
Each service will have its own endpoint URL
You can use nginx as a reverse proxy to redirect requests from port 80 to the port of the relevant micro-service.
Services may talk to each other
This again should not be an issue. Services can either call each other directly by port number, or via a fully qualified name, coming back in through nginx.
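The reverse-proxy setup described above might look like this (the path prefixes and ports are hypothetical, matching two Node services listening locally):

```nginx
# /etc/nginx/conf.d/services.conf (hypothetical filename)
server {
    listen 80;

    # Route each path prefix to the port its Node service listens on
    location /users/ {
        proxy_pass http://127.0.0.1:3000/;   # users service (placeholder port)
        proxy_set_header Host $host;
    }
    location /orders/ {
        proxy_pass http://127.0.0.1:3001/;   # orders service (placeholder port)
        proxy_set_header Host $host;
    }
}
```

The trailing slash on proxy_pass strips the location prefix before forwarding, so each service sees requests rooted at /.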

Why doesn't HTTPS work on EC2?

I have a running Elastic Beanstalk instance in a security group that has HTTP and HTTPS authorized inbound, but HTTPS doesn't seem to work. Why?
Second question:
I am currently creating an SSL certificate for my domain name. Where am I supposed to upload it on AWS?
Thank you
You can configure HTTPS for your Elastic Beanstalk environment.
Please read the following document:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/configuring-https.html
You can upload your SSL certificate to AWS IAM using the console or the CLI, whichever you prefer.
You need not modify the security group of the EC2 instance directly.
More details on Step 3 of the documentation above:
Create a file called 01-ssl.config in a folder named .ebextensions inside your app source.
Put the following inside this file.
option_settings:
  - namespace: aws:elb:loadbalancer
    option_name: LoadBalancerHTTPSPort
    value: 443
  - namespace: aws:elb:loadbalancer
    option_name: SSLCertificateId
    value: <arn of your ssl certificate>
These option settings should automatically modify your security group ingress rules to allow traffic appropriately.
You can read more about customizing your Elastic Beanstalk environment using ebextensions here.
Details about all option settings supported including the ELB ones are available here.
Let me know if you run into any issues.
Update
By default, when you create an Elastic Beanstalk environment, it creates an EC2 instance and puts it behind an Elastic Load Balancer. If you do not need a load balancer, you can create a Single Instance environment as explained here (or you may already have one).
Once you have a single instance environment you can configure SSL for your environment as explained here.
Update on how to not put your certificate in your config file
Since you do not want to put the server.crt file in your ebextensions config file, you can upload the file to S3 and have Elastic Beanstalk download it directly to your EC2 instance. The only thing that changes in the example here is that you use source instead of content to specify the contents of your file. Under source, you put the URL from which you want the file to be downloaded.
Your ebextensions will then look like:
files:
  /etc/pki/tls/certs/server.crt:
    mode: "000777"
    owner: ec2-user
    group: ec2-user
    source: <URL>
That way you don't need to put the contents in the repo. Read more about the file directive here.
In case you run into issues double check that your IAM instance profile (the one with which you run your beanstalk environment) has access to your S3 object.
If you need more details about IAM instance roles and Elastic Beanstalk read this and this.

Amazon EC2 IIS-8 User's Permission Write Access

I am using Amazon EC2 with the Elastic Beanstalk deployment process through Visual Studio, and all is working well, except that when the application is deployed it does not have write permission by default. I had to manually Remote Desktop into the individual machine and grant write permission through the IIS site's permissions.
How can I automate this process, since Amazon adds servers to the load balancer via auto-scaling, etc.?
Or if I change one server, will the others copy the exact same thing I did manually?
I am a little confused; this is my first time deploying. Please help?
Thanks
Yes, you can use an ebextensions config to set permissions on the directory after the instance spins up. Here is an example of someone creating a directory and setting permissions on it, which you should be able to adapt to your circumstances:
AWS Beanstalk ebextensions on windows
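Adapting that pattern, a sketch of an .ebextensions config that grants the IIS worker-process identities write access after each deployment (the folder path is a placeholder for whichever directory your app needs to write to):

```yaml
# .ebextensions/permissions.config (hypothetical filename)
container_commands:
  01_grant_write:
    # Grant IIS_IUSRS modify rights, inherited by subfolders and files (placeholder path)
    command: icacls "C:\inetpub\wwwroot\App_Data" /grant "IIS_IUSRS:(OI)(CI)M"
```

Because this runs as part of every deployment, new instances brought up by auto-scaling get the same permissions without any manual Remote Desktop step.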
