Apache2 not working after changing the default port - linux

Apache2 is not working after changing the default port from 80 to 8099
I did the following:
In /etc/apache2/ports.conf, changed the port to:
Listen 8099
In /etc/apache2/sites-enabled/000-default.conf, did this:
<VirtualHost *:8099>
Then:
sudo /etc/init.d/apache2 reload
sudo /etc/init.d/apache2 restart
I checked which ports are open using sudo netstat -plunt; here is the output:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN 1179/mysqld
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1005/sshd
tcp6 0 0 :::8099 :::* LISTEN 6337/apache2
tcp6 0 0 :::22 :::* LISTEN 1005/sshd
udp 0 0 0.0.0.0:14717 0.0.0.0:* 641/dhclient
udp 0 0 0.0.0.0:68 0.0.0.0:* 641/dhclient
udp6 0 0 :::48002 :::* 641/dhclient
Is there anything I am missing here? Thanks.

It seems to work, but strangely only on IPv6.
Here is a way to force it to use IPv4: https://unix.stackexchange.com/a/237610
Listen 0.0.0.0:8099
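For example, the Listen line in /etc/apache2/ports.conf would change from the plain port to the explicit IPv4 form (a minimal sketch of the edit):
# before
Listen 8099
# after: bind explicitly to all IPv4 interfaces
Listen 0.0.0.0:8099
Then restart and confirm that an IPv4 socket shows up:
sudo /etc/init.d/apache2 restart
sudo netstat -plnt | grep 8099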

Also double-check the default virtual host configuration; the parts to verify are the port in the <VirtualHost> line and the DocumentRoot/Directory paths:
<VirtualHost *:80>
    #ServerName www.example.com
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/'Your folder access'/public
    <Directory /var/www/'Your folder access'/public>
        AllowOverride All
    </Directory>
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
# vim: syntax=apache ts=4 sw=4 sts=4 sr noet
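Before reloading, it can also help to verify the syntax of the edited files (apache2ctl ships with Apache on Debian/Ubuntu):
# should print "Syntax OK" if ports.conf and the vhost file are valid
sudo apache2ctl configtest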

Related

Site deployed on EC2 returning ERR_CONNECTION_REFUSED when making requests to Node Express Server

I am trying to make a request to my Node Express server via the browser, and I am getting ERR_CONNECTION_REFUSED.
i.e.
POST http://localhost:9000/api/search net::ERR_CONNECTION_REFUSED
Requests from the Chrome console are also refused.
However, when I make curl requests from the EC2 terminal, the request is successful and I'm returned JSON.
My nginx.conf file is detailed below:
server {
    listen 80 default_server;
    server_name _;
    location / {
        root /usr/share/nginx/html;
        include /etc/nginx/mime.types;
        try_files $uri $uri/ /index.html;
        add_header 'Access-Control-Allow-Headers' *;
        add_header 'Access-Control-Allow-Origin' *;
        add_header 'Access-Control-Allow-Methods' *;
    }
    location /sockjs-node {
        proxy_pass http://localhost:80;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }
    location /api {
        proxy_pass http://localhost:9000;
    }
}
From within the EC2 instance, the firewall status is:
sudo ufw status
Status: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
Nginx Full ALLOW Anywhere
9000 ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
Nginx Full (v6) ALLOW Anywhere (v6)
9000 (v6) ALLOW Anywhere (v6)
netstat -tunlp returns
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:27017 0.0.0.0:* LISTEN -
tcp6 0 0 :::22 :::* LISTEN -
tcp6 0 0 :::9000 :::* LISTEN -
udp 0 0 127.0.0.53:53 0.0.0.0:* -
udp 0 0 172.31.2.45:68 0.0.0.0:* -
udp 0 0 127.0.0.1:323 0.0.0.0:* -
udp6 0 0 ::1:323 :::* -
My EC2 security group rules look like this (screenshot omitted).
I've no idea what the issue could be. Any help would be appreciated.
SOLUTION: I've managed to resolve the issue by changing all fetch requests on the front-end to use the EC2 IP address instead of localhost. This doesn't seem very optimal though. Is there some sort of wildcard operator I could use instead, as the EC2 IP address changes on restart? Any advice would be appreciated!
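For illustration, the front-end change described above amounts to something like this (a sketch only; the endpoint and address are placeholders):
// Before: "localhost" here is the visitor's own machine, not the EC2 host,
// so the browser's request never reaches the server.
fetch('http://localhost:9000/api/search', { method: 'POST' });
// After: point the request at the EC2 host instead (hypothetical address).
fetch('http://<ec2-public-ip>:9000/api/search', { method: 'POST' });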

Pointing EC2 instance via domain inside Route 53 with timeout

I've spent a lot of time looking for a solution, but this is quite a weird and tricky issue.
I have an AWS EC2 instance (Ubuntu)
I have a configured domain in AWS Route 53
Everything works properly via the EC2 instance's IP address in a web browser, but when I change nginx.conf and add a server_name with my domain, it instantly throws a timeout.
To be clear:
Route 53:
- added the proper IP as an A record
- added the proper NS records
- checked everything via dig in the terminal - it's okay
EC2 instance:
- Ubuntu instance
- Node.js app on port 8000
- configured security group with Outbound: All, Inbound: HTTP port 80 and a Custom TCP Rule for port 8000
server {
    listen 80;
    listen [::]:80;
    server_name mydomain.dev www.mydomain.dev;
    return 301 http://$server_name$request_uri;
    root /home/ubuntu/mydomainfolder_dev/;
    location / {
        proxy_pass http://localhost:8000;
        #proxy_http_version 1.1;
        #proxy_set_header Upgrade $http_upgrade;
        #proxy_set_header Connection 'upgrade';
        #proxy_set_header Host $host;
    }
}
After this nginx.conf change and a service restart (sudo service nginx restart), the EC2 address properly redirects to my domain, but then it times out... any ideas how to fix it?
also: sudo netstat -tulpn output:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 4581/nginx: master
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 1608/systemd-resolv
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 935/sshd
tcp6 0 0 :::80 :::* LISTEN 4581/nginx: master
tcp6 0 0 :::22 :::* LISTEN 935/sshd
tcp6 0 0 :::8000 :::* LISTEN 2486/node /home/ubu
SOLUTION
I guess I found something: checking sudo nano /var/log/syslog gives me a weird DNS error:
Server returned error NXDOMAIN, mitigating potential DNS violation DVE-2018-0001, retrying transaction with reduced feature level UDP.
Alter the security group to allow port 443 as well as port 80. You will also need an SSL certificate on the nginx server.
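To illustrate, a minimal HTTPS server block could look like the sketch below; the certificate paths are placeholders and would come from your CA or Let's Encrypt:
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name mydomain.dev www.mydomain.dev;

    # hypothetical certificate paths
    ssl_certificate     /etc/nginx/ssl/mydomain.dev.crt;
    ssl_certificate_key /etc/nginx/ssl/mydomain.dev.key;

    location / {
        proxy_pass http://localhost:8000;
        proxy_set_header Host $host;
    }
}
With this in place, the return 301 in the port-80 block would typically redirect to https://$server_name$request_uri rather than http://.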

Node.js server on Ubuntu not receiving requests

When I enter the server's IP address and port in the browser, there is no response and the request does not reach the server. Here are some stats from the server; I am running it on port 5403.
sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To Action From
-- ------ ----
22 ALLOW IN Anywhere
80 ALLOW IN Anywhere
443 ALLOW IN Anywhere
5403 ALLOW IN Anywhere
22 (v6) ALLOW IN Anywhere (v6)
80 (v6) ALLOW IN Anywhere (v6)
443 (v6) ALLOW IN Anywhere (v6)
5403 (v6) ALLOW IN Anywhere (v6)
Another command:
netstat -an | grep "LISTEN "
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:6379 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:25324 0.0.0.0:* LISTEN
tcp6 0 0 :::22 :::* LISTEN
tcp6 0 0 :::5403 :::* LISTEN
You are only listening on IPv6, which often poses a problem. Try disabling IPv6 and running on an IPv4 address instead.
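For example, with a plain Node.js http server, binding explicitly to IPv4 looks like the sketch below (the same host argument also works with Express's app.listen):
// minimal sketch: force an IPv4 listener instead of the default,
// which can show up as a tcp6-only socket in netstat
const http = require('http');

const server = http.createServer((req, res) => {
  res.end('ok');
});

// 0.0.0.0 binds all IPv4 interfaces; netstat should then show 0.0.0.0:5403
server.listen(5403, '0.0.0.0');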

Failing to access an HTTPS website configured with mod_ssl on CentOS

I installed and configured mod_ssl for the httpd service on CentOS following this guide:
https://www.digitalocean.com/community/tutorials/how-to-create-a-ssl-certificate-on-apache-for-centos-6
And this is my config in httpd.conf:
<VirtualHost x.x.x.x:80>
    <Directory /var/www/html/source>
        AllowOverride All
    </Directory>
    DocumentRoot /var/www/html/source
    ServerName sub.domain.com.vn
</VirtualHost>

<VirtualHost x.x.x.x:443>
    SSLEngine on
    SSLCertificateFile /etc/httpd/ssl/apache.crt
    SSLCertificateKeyFile /etc/httpd/ssl/apache.key
    DocumentRoot /var/www/html/source
    ServerName sub.domain.com.vn
    ErrorLog logs/ssl_error_log
    TransferLog logs/ssl_access_log
</VirtualHost>
x.x.x.x is my server IP
The problem is that I can access http://sub.domain.com.vn but cannot access https://sub.domain.com.vn. I can telnet to port 443 normally, so port 443 and the HTTPS protocol are ready.
netstat -tap | grep https
tcp 0 0 *:https : LISTEN 1
netstat -tlnp | grep 443
tcp 0 0 :::443 :::* LISTEN 1
But when I access https://sub.domain.com.vn it prints the error "This webpage is not available: ERR_CONNECTION_CLOSED". Can anybody help me, please? Forgive me if my English is not good enough. Port 443 on my server is open, I checked.
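A few standard checks can help narrow down an ERR_CONNECTION_CLOSED like this one (assuming a stock CentOS httpd install; adjust paths if your layout differs):
# confirm mod_ssl is actually loaded
httpd -M | grep ssl
# check the configuration for syntax problems
apachectl configtest
# attempt a TLS handshake directly; certificate or handshake errors show up here
openssl s_client -connect sub.domain.com.vn:443
# watch the SSL error log configured above while retrying in the browser
# (logs/ is relative to the ServerRoot, /etc/httpd on CentOS)
tail -f /etc/httpd/logs/ssl_error_log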

Why does Node.js work as a proxy to backend Node.js app, but not Nginx?

I have a simple nginx config file
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name ec2-x-x-x-x.compute-1.amazonaws.com;
    #root /home/ec2-user/dashboard;

    # Load configuration files for the default server block.
    # include /etc/nginx/default.d/*.conf;

    location / {
        proxy_pass http://127.0.0.1:4000;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}
But when I send the request, the browser says it cannot access the server.
The server works fine on port 4000 directly, though, and sudo netstat -tulpn gives me this:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 6512/nginx: master
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1640/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1247/master
tcp6 0 0 :::80 :::* LISTEN 6512/nginx: master
tcp6 0 0 :::22 :::* LISTEN 1640/sshd
tcp6 0 0 :::3000 :::* LISTEN 15985/node
tcp6 0 0 ::1:25 :::* LISTEN 1247/master
tcp6 0 0 :::4000 :::* LISTEN 3488/node
udp 0 0 0.0.0.0:68 0.0.0.0:* 484/dhclient
udp 0 0 127.0.0.1:323 0.0.0.0:* 451/chronyd
udp 0 0 0.0.0.0:1510 0.0.0.0:* 484/dhclient
udp6 0 0 ::1:323 :::* 451/chronyd
udp6 0 0 :::1458 :::* 484/dhclient
Also, when I use Node as a proxy server:
var http = require('http'),
    httpProxy = require('http-proxy');
httpProxy.createProxyServer({target:'http://localhost:4000'}).listen(80);
This works just fine.
Any ideas as to what I'm doing wrong?
Thanks for the useful netstat output. It appears the issue is that your Node.js app is only listening on IPv6, as shown by the tcp6 :::4000 entry in the output.
Nginx is trying to connect to it via IPv4, where it is not listening.
Your Node.js proxy probably works because it shares the same issue on both ends. :)
You didn't share which Node.js version you are using. Some versions had an issue where attempting to set up an IPv4 connection would result in an IPv6 connection. Either you've run into a bug like that, or your Node.js app is actually misconfigured to listen on IPv6.
If the Node.js app on port 4000 were correctly configured to listen on IPv4, you would see this kind of entry in the netstat output:
tcp 0 0 127.0.0.1:4000 0.0.0.0:* LISTEN 12345/node
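Alternatively, nginx can be pointed at the IPv6 loopback so it connects the way the app is currently listening; this is only a sketch of that workaround, not necessarily the preferred fix:
location / {
    # [::1] is the IPv6 loopback, matching the tcp6 :::4000 listener above
    proxy_pass http://[::1]:4000;
}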
