I have configured a webserver fronted by AWS ELB and CloudFront. I have deployed an SSL certificate on the load balancer (ELB) for HTTPS connections.
We have set up two listeners:
1) source -> 80 -> ELB -> 80 -> EC2 webserver
2) source -> 443 -> ELB -> 80 -> EC2 webserver
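Note that with this layout the ELB terminates SSL and the backend only ever sees plain HTTP on port 80, so any redirect-to-HTTPS rule on the webserver has to inspect the X-Forwarded-Proto header the ELB adds, or every request will be redirected again and again. A minimal Apache sketch of such a rule (assuming mod_rewrite is enabled; this is an illustration, not the exact config from the blog post):

```apache
# Redirect to HTTPS only when the ELB reports the client used plain HTTP.
# Without this condition the backend would redirect every request it
# receives (they all arrive on :80), producing an endless refresh loop.
RewriteEngine On
RewriteCond %{HTTP:X-Forwarded-Proto} =http
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
```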
The following blog post of mine describes the entire setup in detail:
http://www.cloudometry.in/2015/04/dns-entry-confusion-for-aws-elb-backed.html
The issue is that we are able to open a dummy test index.html file, but when I try to open our application's index.php, everything falls apart and the browser keeps refreshing.
I have noticed the following entry in the webserver's access log:
[26/Apr/2015:17:08:21 +0000] "OPTIONS * HTTP/1.0" 200 125 "-" "Apache/2.4.7 (Ubuntu) mod_fastcgi/mod_fastcgi-SNAP-0910052141 PHP/5.5.9-1ubuntu4.6 (internal dummy connection)"
Has anyone faced this issue? Am I missing any configuration?
Thanks in advance for your suggestions.
I am trying to set up multiple containers, each listening on a different port and hosting a site.
Example:
example1.com 8080 -> 80 container1 (apache)
example2.com 8081 -> 80 container2 (apache)
What is the correct way to do it?
I have tried HTTP redirects/rewrites (inside the containers) but cannot get it to work.
You need a reverse proxy on the server. I'd go with nginx, configured to forward traffic for example1.com to its own port 8080. This can be done in Apache as well; it's called virtual hosts.
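As a sketch of what that nginx reverse proxy could look like (hostnames and ports taken from the question; treat this as a starting point, not a drop-in config):

```nginx
# /etc/nginx/conf.d/sites.conf
server {
    listen 80;
    server_name example1.com;

    location / {
        # container1 (apache) published on host port 8080
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

server {
    listen 80;
    server_name example2.com;

    location / {
        # container2 (apache) published on host port 8081
        proxy_pass http://127.0.0.1:8081;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

With this in place the containers keep listening on 8080/8081, and nginx on port 80 picks the right one by Host header, so no redirects inside the containers are needed.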
I have followed this tutorial for deploying Docker containers on an AWS EC2 instance:
https://www.digitalocean.com/community/tutorials/how-to-secure-a-containerized-node-js-application-with-nginx-let-s-encrypt-and-docker-compose
and after reaching step 5 (where nginx is configured for HTTPS), the application just stops working. Here's my application: www.alphadevop.co
Here’s my nginx configuration:
https://github.com/cyrilcabo/alphadevelopment/blob/master/nginx-conf/nginx.conf
And here’s my docker-compose.yml:
https://github.com/cyrilcabo/alphadevelopment/blob/master/docker-compose.yml
[Here are the webserver logs][1]
[1]: https://i.stack.imgur.com/oawtD.png
Silly mistake: port 443 wasn't allowed for my application. I was confused because when I checked on my server, port 443 was open. Then I checked here, https://www.yougetsignal.com/tools/open-ports/ , which said it was closed. I then found out that the AWS EC2 instance needs an inbound rule to allow port 443.
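For reference, the same inbound rule can also be added with the AWS CLI; a hedged sketch (the security group id below is a placeholder for the group attached to your instance):

```shell
# Allow inbound HTTPS (TCP 443) from anywhere on the instance's security group.
# Replace sg-0123456789abcdef0 with your instance's actual group id.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 443 \
    --cidr 0.0.0.0/0
```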
Credits here: NGINX SSL Timeout
Hi everyone, I am using an Ubuntu VPS on Amazon EC2 for my projects. I have
3 projects on my VPS, all built on Node.js, so I run them on
3 ports: 3000, 3001, and 80.
I bought a domain from GoDaddy with a URL like abc.def. Now, when I go to the browser and type
abc.def:3000, abc.def:3001, or abc.def, all 3 projects above run fine.
The question is:
how can I configure it so that when I type
abc.def -> it serves the project on port 3000
site.abc.def -> it serves the project on port 3001
site2.abc.def -> it serves the project on port 80
Thanks for your comments.
You need to create 2 sub-domains (site.abc.def and site2.abc.def) and create an nginx configuration file covering all 3 domains, with a separate server block for each site. Use proxy_pass to direct traffic as per your requirement.
Edit:
Something similar is shown in the following answer: Nginx multiple server
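A sketch of such a configuration, under one assumption: nginx takes over port 80, so the third project has to move off port 80 to a free port (3002 is used here as a placeholder):

```nginx
server {
    listen 80;
    server_name abc.def;
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
    }
}

server {
    listen 80;
    server_name site.abc.def;
    location / {
        proxy_pass http://127.0.0.1:3001;
        proxy_set_header Host $host;
    }
}

server {
    listen 80;
    server_name site2.abc.def;
    location / {
        # formerly the app on port 80; moved so nginx can bind :80
        proxy_pass http://127.0.0.1:3002;
        proxy_set_header Host $host;
    }
}
```

Both sub-domains also need DNS records (A or CNAME) at GoDaddy pointing at the same server.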
I have deployed a Node.js app to Elastic Beanstalk. When I try to access the page via HTTP, everything works fine; when I try to access it via HTTPS, I get a "refused to connect" error. I have followed the instructions at
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/https-singleinstance-nodejs.html
I created a .ebextensions folder, and my https-instance-single.config looks like:
Resources:
  sslSecurityGroupIngress:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: {"Fn::GetAtt" : ["AWSEBSecurityGroup", "GroupId"]}
      IpProtocol: tcp
      ToPort: 443
      FromPort: 443
      CidrIp: 0.0.0.0/0
I have uploaded and deployed the new zip file with these changes included, and still the same thing: I can access via HTTP but not via HTTPS.
Any help would be greatly appreciated.
I would suggest you use LetsEncrypt for an Elastic Beanstalk single instance. Your current configuration only opens port 443; serving HTTPS also requires pointing the server at a certificate. Here is a tutorial for LetsEncrypt SSL on Elastic Beanstalk:
https://www.tutcodex.com/ssl-on-single-instance-elastic-beanstalk-tutorial/
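The certificate half of that setup boils down to running certbot on the instance and then referencing the issued files from the web server config; a hedged sketch of the certificate step (the domain, email, and webroot path are placeholders, and this is one common certbot invocation, not necessarily the exact commands in the linked tutorial):

```shell
# Obtain a certificate via the webroot plugin; certbot drops the files
# under /etc/letsencrypt/live/<domain>/ for the server config to reference.
sudo certbot certonly --webroot \
    -w /var/www/html \
    -d example.com \
    --email admin@example.com --agree-tos --non-interactive
```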
I've been trying for hours and have read what this site and the internet have to offer. I just can't quite seem to get Socket.IO to work properly here. I know nginx by default can't handle Socket.IO; however, HAProxy can. I want nginx to serve the Node apps through Unix sockets, and that works great. Each app has a subdirectory location set by nginx; however, now I need Socket.IO for the last app, and I'm at a loss on how to configure it at this point.
I have the latest socket.io, HAProxy 1.4.8, and nginx 1.2.1, running on Ubuntu.
So, reiterating: I need to get socket.io working through nginx to a Node app in a subdirectory, e.g. localhost/app/.
Diagram:
WEB => HAproxy => Nginx => {/app1 app1, /app2 app2, /app3 app3}
Let me know if I can offer anything else!
There is no reason to "get socket.io working through nginx". Instead, route HAProxy directly to Socket.IO (without nginx in the middle).
I recommend you check out the following links:
https://gist.github.com/1014904
http://blog.mixu.net/2011/08/13/nginx-websockets-ssl-and-socket-io-deployment/
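A sketch of that split in HAProxy configuration syntax (the /app path is taken from the question; the ports for the Socket.IO node process and for nginx are assumptions, since the question doesn't state them):

```
frontend http_in
    bind *:80
    # Send the Socket.IO app straight to the node process;
    # everything else still goes through nginx as before.
    acl is_app path_beg /app
    use_backend socketio if is_app
    default_backend nginx

backend socketio
    # assumed: the Socket.IO node app listens on TCP 8000
    server app1 127.0.0.1:8000

backend nginx
    # assumed: nginx listens on TCP 8080 behind HAProxy
    server web1 127.0.0.1:8080
```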
You could use HAProxy on port 80 to front several Node.js apps running on different ports.
E.g.
URL:80/app1 -> haproxy -> node app1:8080
URL:80/app2 -> haproxy -> node app2:8081
URL:80/app3 -> haproxy -> node app3:8083
UPDATE:
The following is an example HAProxy configuration that routes requests made to http://server:80/hello to localhost:20001 and http://server:80/echo to localhost:20002:
backend hello
    server hellosvr 127.0.0.1:20001

backend echo
    server echosvr 127.0.0.1:20002

frontend http_in
    option httpclose
    option forwardfor except 127.0.0.1 # stunnel already adds the header
    bind *:80
    acl rec_hello path_beg /hello/
    use_backend hello if rec_hello
    acl rec_echo path_beg /echo
    use_backend echo if rec_echo