I have a Node.js app on an Ubuntu EC2 instance with Dokku. My domain points to the server with a wildcard record, and I have a wildcard SSL certificate as well. Some time ago I added the keys to Dokku in app/tls/. Back then I had two apps online, production and staging. Whichever app was created and deployed on Dokku most recently intercepted all requests to the host, so api.my.domain, api-stage.my.domain, and everything else. If I typed http:// there was no redirect. The deadline was close, so I stopped fighting with it and just made production the app that intercepts everything. Today I had problems with deployment; I saw rejects over and over. I deleted some plugins, including dokku-domains, which wasn't used anywhere, restarted Docker a few times, and ran this command:
sudo wget -O /etc/init/docker.conf https://raw.github.com/dotcloud/docker/master/contrib/init/upstart/docker.conf
After that there were no rejects anymore, but... all requests to the host return 502 Bad Gateway, including those with the green padlock. I remember that previously, while an app was being deployed, there was some info about configuring SSL; now there is none. After deleting an app and creating it from scratch, there is no nginx.conf file and SSL doesn't work at all.
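For reference, this is the layout I mean, sketched under the assumption of the classic Dokku convention (myapp is a placeholder for the app name; paths may differ between Dokku versions):

ls /home/dokku/myapp/tls          # server.crt and server.key are expected here
cat /home/dokku/myapp/nginx.conf  # the file that is now missing entirely
git push dokku master             # nginx.conf is regenerated by the deploy hooks, so a redeploy should rebuild it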
Related
I have deployed my website to a Digital Ocean droplet (Ubuntu 20.04 server).
Everything was working fine. Today, I made some changes to the website on my local machine, pushed them to GitHub, and then cloned the GitHub repo again to the server. Then I installed the dependencies and restarted PM2.
Now, when I visit my site https://sundaray.io, I get the following error.
The following is the error log.
How can I fix the error?
Put simply: there is no HTTP server response; your Node HTTP server is not answering requests.
A 502 Bad Gateway means that Nginx is receiving your request, but there is an issue with the upstream server behind it.
You can check the state of the process with pm2 show <app-name> and its recent output with pm2 logs.
The application might be crashing, or it might be failing with an internal server error (500).
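A minimal diagnostic sequence (my-app is a placeholder for the process name shown by pm2 list):

pm2 list                      # is the app online, errored, or stopped?
pm2 show my-app               # details for one process: restarts, script path, uptime
pm2 logs my-app --lines 100   # recent output, including any crash stack trace
pm2 restart my-app            # restart once the cause is fixed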
I'm facing an issue which came up when using the proxy in the Angular CLI.
But it's not directly related to Angular, nor to Node.js... it seems to have its roots some levels deeper (namely on the operating-system level).
## Short version:
When I have a domain-to-IP mapping in my hosts file /etc/hosts and proxy it using node-http-proxy, which is the underlying layer of the Angular CLI proxy feature, there's a delay of 5000 ms before the request gets resolved and the response is delivered.
Proxying is mandatory for backend communication to avoid cross-origin errors in development, because Angular apps are served on port 4200.
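To make the setup concrete, this is the shape of proxy config meant here, a sketch only (the /api context and the target port 8080 are assumptions; the hostname is the mapping from below). It goes in a proxy.conf.json passed to ng serve --proxy-config:

{
  "/api": {
    "target": "http://service.company.local:8080",
    "secure": false,
    "changeOrigin": true
  }
}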
## Longer version:
Operating System: OSX Catalina 10.15.4
Based on a deeper analysis, it's caused neither by the Angular CLI nor by Node.js.
It seems something is going "wrong" at the system level, as I can reproduce the behavior in my terminal as well, using the arp command.
There's a mapping in the /etc/hosts file which looks like below:
127.0.0.1 service.company.local
When I then run the command arp service.company.local, it doesn't resolve, of course, as this domain isn't known to any DNS server.
It finishes with the output: arp: service.company.local: Unknown host
Even when the computer is disconnected from the internet/network (Wi-Fi off), the arp call still takes 5000 ms before it finishes with the Unknown host message, whereas for existing domains it returns Unknown host immediately (without delay).
The problem is pretty frustrating, as it heavily slows down local development of an Angular app: with cascading requests each taking this long, fluent work isn't possible.
Screenshot from Chrome Dev Tools:
Is there some known solution to get around this issue without moving away from the domain-to-IP mapping in the hosts file?
Addition (content of the hosts file)
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
127.0.0.1 service.company.local
# Added by Docker Desktop
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal
# End of section
I'm very thankful for any hints.
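A sketch of the hosts-file workaround most often suggested for this kind of delay; it assumes the 5000 ms comes from macOS treating the .local suffix as a multicast-DNS name and/or from an unanswered IPv6 lookup, which the question itself doesn't confirm:

::1        service.company.local   # hypothetical addition: also answer the IPv6 (AAAA) lookup
127.0.0.1  service.company.test    # hypothetical alternative: .test avoids the mDNS-reserved .local suffix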
I have an existing SSL certificate through Let's Encrypt for my domain. On the same server as my site I have an Express app running on port 8080. Before adding SSL to the domain I was able to make requests to http://domainname.com:8080. Now that the domain making the requests is HTTPS, it obviously can't make those plain-HTTP requests. If I instead make requests to https://domainname.com:8080, I get no response and instead get a timeout error.
I have attempted a curl -X POST on the server manually and it returns (35) gnutls_handshake() failed: The TLS connection was non-properly terminated. If I run the same command pointing at the non-HTTPS domain, however, it executes correctly. I also tried using the https module with Express and pointing it to the same certs I'm using for the domain. For all my effort I cannot get this to work. What am I missing here? I want to make requests to a port on the same server that is serving my app.
I set up a reverse proxy in my nginx site config from the domain to the address the Express server was running on. This solved all the issues I was having.
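For illustration, a minimal sketch of such a site config; the server_name, certificate paths, and the /api/ prefix are assumptions, not the answerer's actual file:

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # forward API calls to the Express app listening on plain HTTP
    location /api/ {
        proxy_pass http://127.0.0.1:8080/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

The browser then only ever talks HTTPS to nginx on 443, so the Express app needs no certificate of its own and no exposed port.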
I have the guest's HTTPS port set to 443 on its Apache 2 installation.
In Vagrantfile
I have vm.forwarded_port set to forward from 443 to 8443
I have vm.hostname set to actualdomain.org
I've also installed the vagrant-hostsupdater plugin (vagrant plugin install vagrant-hostsupdater) so that actualdomain.org is written to my hosts file, and requests pull up the development environment and not the actual site.
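In Vagrantfile terms, a sketch of the settings just listed (values taken from the description above):

Vagrant.configure("2") do |config|
  config.vm.hostname = "actualdomain.org"
  # guest port 443 (Apache HTTPS) exposed on host port 8443
  config.vm.network "forwarded_port", guest: 443, host: 8443
end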
I ran vagrant connect...
I ran vagrant share --https 443 --domain actualdomain.org but it reports the following:
==> default: Detecting network information for machine...
default: Local machine address: 192.168.xx.10
default: Local HTTPS port: 443
==> default: Checking authentication and authorization...
==> default: Creating Vagrant Share session...
There was an error returned by the Vagrant Cloud server. The
error message is shown below:
Domain cannot be used with this account
But if I run vagrant share without the --domain parameter, I end up with the following in my logs when I try to contact the site remotely:
Hostname XXXXX-YYY-ZZZZ provided via SNI and hostname XXXXX-YYY-ZZZZ.vagrantshare.com provided via HTTP are different
And in the browser I am returned an HTTP 400 Bad Request.
Is there any easy way around this? It seems to me that this didn't happen the last time I used Vagrant, and it seems as though something was added to TLS handling since then that causes it to balk with this SNI error.
I even tried adding a server alias that was the same as XXXXX-YYY-ZZZ.vagrantshare.com, and it still gives me an issue; does that mean that I have to rebuild the certificate every time the HashiCorp URL changes if I want to show it off to somebody via their browser?
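For the record, the kind of alias attempt described, sketched with a wildcard so it would match any generated share name (a hypothetical vhost, not the actual config):

<VirtualHost *:443>
    ServerName actualdomain.org
    # a wildcard alias matches whichever XXXXX-YYY-ZZZZ.vagrantshare.com name a session generates
    ServerAlias *.vagrantshare.com
    # existing SSLEngine/SSLCertificateFile directives stay as they are
</VirtualHost>

The TLS certificate would still have to cover that generated name, which is exactly the rebuilding problem raised above.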
I have spent days figuring out how to install the viral Ghost platform, and ran into numerous errors. Luckily, I have managed to install it: Ghost gives me a positive "Ghost is running..." message over SSH after I run npm start --production. However, when I browse to my website, http://nick-s.se, Apache displays its default page, and when I go to the Ghost login area, /ghost, the site returns a 403 Forbidden.
P.S. I have specifically installed Ghost on a different port than the one Apache is running on. I don't know what's going on...
Update: I have found out that I can access my Ghost installation by adding the port number 2368, which I've configured in config.js. Now, however, my problem is: how can I run Ghost without using such ports?...
Tell your browser you want to connect to the port Ghost is running on: http://nick-s.se:2368
So, a few things, based on visiting the site:
1) It seems Apache isn't proxying the request onward to Ghost. Are you sure that you've configured it properly? (See the sketch after these points.)
2) It also looks like Apache doesn't have access to the directory that you set as root. This shouldn't be necessary anyway if proxying is set up correctly, but it could become an issue later if you wanted to use Apache to serve things like static assets.
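A minimal sketch of the proxying that point 1) refers to (assumes mod_proxy and mod_proxy_http are enabled, and Ghost on port 2368 as in the question):

<VirtualHost *:80>
    ServerName nick-s.se
    ProxyPreserveHost On
    # hand every request to the Ghost process listening on 2368
    ProxyPass / http://127.0.0.1:2368/
    ProxyPassReverse / http://127.0.0.1:2368/
</VirtualHost>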
If you are open to nginx instead of Apache, I have written a how-to on this: link. You can skip the section on configuring Nginx. Otherwise, it still might be useful if you work out the conversion of the rules from nginx to Apache.
If you don't have any other sites running on your VPS, you can just turn Apache off, skip proxying the request to port 2368 altogether, and have Ghost run on port 80 directly. If your VPS is running CentOS, you can check out this how-to on disabling Apache and running Ghost on port 80.
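A sketch of that route on a systemd-based CentOS box (assumes Ghost's config.js is changed to listen on port 80; binding low ports needs root or an explicit capability):

sudo systemctl stop httpd      # stop Apache
sudo systemctl disable httpd   # keep it from coming back at boot
# let the node binary bind to ports below 1024 without running as root
sudo setcap 'cap_net_bind_service=+ep' "$(which node)"
npm start --production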