I have a domain name bought on GoDaddy. The site itself is hosted on Squarespace, so I don't want to forward requests from https://example.com to a site on Elastic Beanstalk.
I have an API hosted on EB and the Squarespace site makes requests to that API.
What I need to do is change the default EB URL https://dataservice-env.example.us-east-2.elasticbeanstalk.com to https://example.com/api
I'm pretty much a DNS noob here. I've found articles on forwarding GoDaddy domains to EB, but that's not what I want to do; I think that's what this answer is describing:
https://stackoverflow.com/a/38225802
EDIT -
If anyone else is trying to do something similar (make API requests from one domain to EB over HTTPS on a subdomain), here's how I did it:
Register a domain in Route 53
Create a Hosted Zone
Exported zone file from GoDaddy
Import Zone File to Route 53 Hosted Zone
Request a certificate from AWS Certificate Manager
Use the subdomain api.example.com as the domain name value
Click 'Create Record in Route 53'
In Route 53 click 'Create Record'
Name: api.css-llc.io
Type: A-IPv4 Address
Alias: Yes
Alias Target: EB URL - env.tstuff.us-east-2.elasticbeanstalk.com
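The alias record above can also be created with the AWS CLI (`aws route53 change-resource-record-sets`) using a change batch like the one below. This is only a sketch: the HostedZoneId shown is a placeholder for Elastic Beanstalk's hosted-zone ID in your region, and the DNSName is the environment URL from the answer.

```json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "api.example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z0000000000000",
          "DNSName": "env.tstuff.us-east-2.elasticbeanstalk.com",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
```

Pass this file via `--change-batch file://record.json` along with your hosted zone's ID.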
Create a load balancer. Most important is to create an HTTPS listener. This forwards requests from port 443 to port 80, where the .NET Core API is running:
Listener Port: 443
Instance Port: 80
Listener Protocol: HTTPS
Instance Protocol: HTTP
Use api.example.com cert created above
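If you'd rather keep the listener configuration in code than click through the console, Elastic Beanstalk supports the same classic-ELB settings via an `.ebextensions` config file. This is a sketch; the certificate ARN is a placeholder for the api.example.com cert created in ACM above.

```yaml
# .ebextensions/https-listener.config (sketch)
# Terminates HTTPS on the classic ELB and forwards to the instance on port 80.
option_settings:
  aws:elb:listener:443:
    ListenerProtocol: HTTPS
    InstancePort: 80
    InstanceProtocol: HTTP
    # Placeholder ARN - substitute your ACM certificate's ARN
    SSLCertificateId: arn:aws:acm:us-east-2:123456789012:certificate/00000000-0000-0000-0000-000000000000
```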
Add this load balancer to the EC2 instance. The EC2 instance should have been created when deploying the Docker image. Allow inbound HTTPS traffic on the two security groups created by the load balancer.
Add CORS support to the API server. Below is an example of .NET Core CORS. This should return the correct response headers, and requests from example.com to api.example.com via HTTPS should then work.
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    app.UseCors(builder => builder
        .AllowAnyOrigin()
        .AllowAnyMethod()
        .AllowAnyHeader());

    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }

    app.UseHttpsRedirection();
    app.UseRouting();
    app.UseAuthorization();
    app.UseEndpoints(endpoints =>
    {
        endpoints.MapControllers();
    });
}
All of this was caused by the SSL cert being registered to the wrong domain. CORS itself was functioning as it should.
Related
I have a MERN stack app, and both the React.js front end and the Node.js back end are running on the same host/IP of an EC2 instance. I have bought a domain from GoDaddy, so how can I point it at my instance? I am getting this error on GoDaddy.
Also, how can I add an SSL certificate for both (the front end and the Node.js server, both running on the same instance on different ports, e.g. 3002: React, 4000: Node.js)?
It can be achieved using Route 53. Here is a high-level overview, with a pointer to the AWS documentation:
Set a static IP for your EC2 instance (an Elastic IP)
Configure a hosted zone in Route 53
Create records in GoDaddy
Full documentation here: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-ec2-instance.html
It depends how you want to set up SSL for the front end and back end.
Ideally, for the front end you point your domain at the FE port (3002), so that opening www.mysite.com opens your front end.
For the BE, you can use your static IP or the AWS-provided hostname; to set up SSL, follow this SO post.
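As a sketch of the single-instance setup described above, one Nginx server block can front both ports, so the domain serves the React app and a path prefix reaches the Node API. The ports match the question; the /api prefix and domain are assumptions.

```nginx
server {
    listen 80;
    server_name www.mysite.com;

    # Front end: React app running on port 3002
    location / {
        proxy_pass http://localhost:3002;
    }

    # Back end: Node/Express API running on port 4000
    location /api/ {
        proxy_pass http://localhost:4000/;
    }
}
```

With this in place, only port 80 (and later 443) needs to be open to the world; 3002 and 4000 stay internal.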
Frontend (domain.com and www.domain.com)
Use AWS Amplify to host your static files; it will give you a DNS record that you can add on GoDaddy.
Backend (server.domain.com)
Put a load balancer in front of your EC2 instance; it will handle SSL and give you a DNS record that you can add on GoDaddy. The load balancer should listen on ports 80 and 443 and can forward to port 4000 on the EC2 instance.
Please note this is just a quick recommendation for your use case; there are many better ways to deploy MERN apps.
I have an EC2 instance (Ubuntu) that runs my Node.js back end: API + MongoDB. I run the Node server with PM2. I'm trying to add HTTPS to it.
Here's my node.js code:
const express = require("express");
const fs = require("fs");
const https = require("https");

const app = express();

const options = {
  key: fs.readFileSync("./privatekey.pem"),
  cert: fs.readFileSync("./server.crt"),
};

const port = process.env.PORT || 3000;

https
  .createServer(options, app)
  .listen(port, () => console.log(`listening to port ${port}`));
The HTTP version works. When I go to HTTPS on port 3000 I get this message:
"This connection is not private. This website may be impersonating 'ec2-XX-XXX-XXX-XXX' ..."
I understand that the certificates are only good for localhost? But how can I make them work in prod? From my understanding, it's not possible to buy SSL certs for an IP address?
Any suggestions?
PS: I tried port 443 but ran into a permission issue (shown in the PM2 logs)...
EDIT -
My goal was to connect my front end (on one instance) to my back end (on another instance) without any SSL security error.
I ended up creating a subdomain in an AWS hosted zone that redirects to my front-end domain. I used Nginx and Certbot to install a new SSL cert on the subdomain's instance.
I had previously created a subdomain but did it the wrong way. I followed this tutorial this time around and it took 2 minutes:
https://dev.to/arswaw/create-a-subdomain-in-amazon-route53-in-2-minutes-3hf0
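For reference, once Nginx is serving the subdomain, the Certbot step boils down to a single command. This assumes Certbot's Nginx plugin is installed and api.example.com stands in for the actual subdomain.

```shell
# Obtain and install a Let's Encrypt cert for the subdomain via the Nginx plugin.
# Certbot edits the matching server block and sets up automatic renewal.
sudo certbot --nginx -d api.example.com
```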
it's not possible to buy SSL certs for Ip address?
You should get your own domain. Buy it on Route 53 or from any other domain reseller you prefer. Then you can get your own SSL cert for your domain.
If you don't want to use certbot on the instances, you can create an SSL cert (within ACM) and assign it to an ELB from within the AWS console.
Just place the instance that needs the SSL cert inside the ELB's target group and update your DNS to reflect this. If you have two EC2 instances, you can do the same with another ELB. It costs a bit more money, but the onus is now on AWS to renew the certificates, and you won't need to use Certbot on the instances at all. I'd advise using DNS validation when creating your certificate in ACM, and using a wildcard (*.mydomain.com) to cover your backend subdomain.
You will need to create a subdomain for your backend service and update your nginx config (on the backend EC2 instance) to listen for that traffic.
The end product should be 2 instances, 2 ELBs, one wildcard SSL cert.
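The wildcard certificate mentioned above can be requested from ACM with a single CLI call. The domain and region here are assumptions; DNS validation matches the advice given.

```shell
# Request a DNS-validated wildcard cert that also covers the backend subdomain.
aws acm request-certificate \
  --domain-name "*.mydomain.com" \
  --validation-method DNS \
  --region us-east-1
```

ACM then gives you a CNAME record to add to your hosted zone to prove ownership; once validated, the cert can be attached to both ELBs.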
We are using a web app (Angular) and an API (Node.js) running on nginx.
I need to use an AWS load balancer, a domain name (Route 53), and an SSL certificate (ACM).
I have obtained the SSL from ACM.
I have created the elastic load balancer and attached the ssl certificate.
I have assigned the web servers to the AWS load balancer.
I have created the domain name in route 53 and pointed to the load balancer.
I have updated the nginx configuration file and added the domain name to the server_name directive.
I am able to access the web servers at https://example.com, but the web servers are not connecting to the API servers. I am getting the error "Failed to connect" when I try to log in to the web app.
I believe that the API server should also have the SSL certificate attached. Currently, I do not have the API server behind the load balancer.
How can I configure the web app servers and API servers with the SSL certificate using the AWS load balancer?
How do I add the SSL certificate to the API server? Do I need to create an internal load balancer and add the SSL certificate to it? If yes, how do I use the internal load balancer together with the web server's load balancer?
What would the architecture for this look like?
Without the load balancer, I was able to connect to the web app and API using the GoDaddy SSL cert and domain for both. However, I am not able to do the same with the AWS load balancer and AWS Certificate Manager.
Nginx config
Web server
server {
    listen 80;
    server_name example.com;
    root /Website/dist/;
    index index.html;
    return 301 https://example.com$request_uri;
}
API server
server {
    listen 80;
    server_name ipaddress_api;

    location / {
        proxy_pass http://localhost:7000;
    }
}
One solution would be to access the back-end APIs through the load balancer.
You can differentiate front-end and back-end requests by prefixing all back-end requests with "/api". You will have to use a separate "location" block in the nginx conf file.
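A sketch of that separate location block, applied to the web-server config from the question. The root path and upstream port come from the question; the /api prefix is the suggested convention.

```nginx
server {
    listen 80;
    server_name example.com;

    # Front-end requests: serve the Angular build
    location / {
        root /Website/dist/;
        index index.html;
    }

    # Back-end requests: anything under /api goes to the Node API
    location /api/ {
        proxy_pass http://localhost:7000/;
    }
}
```

With this, the browser only ever talks to https://example.com, so the API inherits the load balancer's certificate and no separate cert is needed on the API server.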
I have a Metabase Docker image running in my Azure container. It can be accessed through Azure's fqdn:port_number (the port is 3000) or ip:port_number. I want to give this application a nice domain name through Cloudflare. How can I do this?
Thanks in Advance!
PS: There were some topics on this in the Cloudflare community, but I couldn't find an answer relevant to my case.
You could add a CNAME record to point a subdomain such as www.example.com to the FQDN value of your Azure container, like containerdns.westus.azurecontainer.io.
Example of a CNAME record:
name: www
record type: CNAME
value: FQDN value of your Azure container
TTL: 32600
ref: https://www.cloudflare.com/learning/dns/dns-records/dns-cname-record/
https://support.cloudflare.com/hc/en-us/articles/360019093151-
Update
From your comment, you want to access myapp.com, which actually points to fqdn:port. In this case, you could create and configure an application gateway to host web sites with custom ports using the Azure portal. If you have multiple sites, you could follow this tutorial.
You could follow the steps below:
Create a public-facing application gateway with a public IP address in the same region as your container instance.
Create a backend pool with target hostname of your container FQDN like containerdns.westus.azurecontainer.io
Create a basic listener and provide name, frontend port 80 and protocol HTTP.
Create a health probe: provide protocol HTTP, check the box Pick host name from backend HTTP settings, and leave the remaining settings at their defaults.
Add an HTTP setting with the custom port 3000, check the boxes Pick host name from backend address and Use custom probe, and select the custom probe.
Create a basic rule with the backend pool and HTTP setting.
In the end, you could create a CNAME record mapping the subdomain www.myapp.com to the FQDN of your application gateway.
I validated this with my own website on an Azure VM; I hope this helps.
I'm trying to enable SSL on a subdomain of a domain I purchased from Google Domains (managed with Netlify DNS). The domain currently points to a static React app hosted by Netlify (it has SSL).
The subdomain (api.example.com), which points to an Elastic IP associated with an EC2 instance, doesn't seem to work when I try to access it over HTTPS ("api.example.com's server IP address could not be found.") but works over HTTP.
Does anyone know of a way I could use that SSL certificate I got from Netlify on the subdomain pointing to my aws instance?
I'm using only an A record for the subdomain -> Elastic IP. For the purpose of getting everything working, I've enabled all inbound/outbound ports for all traffic types in my instance's security group.