Cannot connect to Node.js application on shared server - node.js

I have set up a Node.js application and it's running on port 4999, but when I browse to the URL www.website.com:4999 I get a "This site can't be reached" error in Chrome and "Secure Connection Failed" in Firefox.
This is the command used over SSH to start the node app:
[~/public_html/customer_portal]# gulp serv:prod
[13:48:50] Using gulpfile ~/public_html/customer_portal/gulpfile.js
[13:48:50] Starting 'ConcatScripts'...
[13:48:50] Starting 'ConcatCss'...
[13:48:50] Starting 'CopyAssets'...
[13:48:50] Finished 'ConcatCss' after 553 ms
[13:48:50] Starting 'UglyCss'...
[13:48:50] Finished 'CopyAssets' after 855 ms
[13:48:50] Finished 'UglyCss' after 322 ms
[13:48:50] Finished 'ConcatScripts' after 925 ms
[13:48:50] Starting 'UglyScripts'...
[13:49:08] Finished 'UglyScripts' after 18 s
[13:49:08] Starting 'Inject:PROD'...
[13:49:08] gulp-inject 1 files into index.build.ejs.
[13:49:08] gulp-inject 1 files into index.build.ejs.
[13:49:08] Finished 'Inject:PROD' after 218 ms
[13:49:08] Starting 'build:prod'...
[13:49:08] Finished 'build:prod' after 61 μs
[13:49:08] Starting 'serv:prod'...
[13:49:08] Finished 'serv:prod' after 48 ms
livereload[tiny-lr] listening on 35729 ...
Mon, 25 Jul 2016 03:49:09 GMT express-session deprecated undefined saveUninitialized option; provide saveUninitialized option at app.js:58:13
XXX service has been started at port: 4999 !!!

Just compiling the solution we derived from the comments on OP's post.
So OP had tested his Node.js application locally and now wants to expose it to the world wide web. OP did not post the contents of his gulpfile, but I am guessing that he is using a development server spun up by gulp to serve his web page. Not impossible, but certainly not recommended.
A better replacement would be a real web server like nginx.
See:
https://nginx.org/en/docs/beginners_guide.html
Back to the original problem. The real reason OP is getting hit by the "This site can't be reached" error is probably that his server did not have the required port open, in this case port 4999. A temporary workaround would be to update the gulpfile to host the application on port 80 instead.
However, I am still dubious about the error message, because I would have expected OP to see something like "connection refused". Anyway, this is not important.
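If OP wants that quick workaround, here is a sketch of the port change, assuming the gulpfile serves the build with gulp-connect; this is guesswork, since the gulpfile was not posted:
// Hypothetical serv:prod task using gulp-connect; the plugin and the
// task layout are assumptions, as the real gulpfile was not shared.
var gulp = require('gulp');
var connect = require('gulp-connect');

gulp.task('serv:prod', function () {
  connect.server({
    root: 'build',   // assumed output directory
    port: 80         // was 4999; ports below 1024 usually require root
  });
});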
To sum up, OP should consider fixing his problem by:
installing a real web server on his machine
placing the application behind the installed web server (see the nginx sketch below)
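A minimal reverse-proxy sketch for the nginx route, forwarding public port 80 to the Node app on 4999 (the server name and upstream address are placeholders):
# Proxy public port 80 to the Node app listening on 4999.
server {
    listen 80;
    server_name www.website.com;

    location / {
        proxy_pass http://127.0.0.1:4999;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}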

Related

Google App Engine restarting automatically randomly

We are running a Node server on GAE, and for some reason, a few times a day, our server goes offline (sometimes it takes a few minutes to come back online).
Requests are the same throughout the day, and there is no exception that would explain a restart. There is no spike in requests or any special request that could cause it.
Log when it happens:
2020-04-18T23:48:51.881806Z GET /v1/util/example 304 35.262 ms - -
2020-04-18T23:50:17.119906Z [start] 2020/04/18 23:50:17.119185 Quitting on terminated signal A
2020-04-18T23:50:17.175632Z [start] 2020/04/18 23:50:17.175267 Start program failed: user application failed with exit code -1 (refer to stdout/stderr logs for more detail): signal: terminated
2020-04-18T23:51:38.772388Z GET 304 173 B 3.3 s Example-V2/3.1.13 (com.example.app; build:1; iOS 13.4.0) Alamofire/5.1.0 /v1/util/example GET 304 173 B 3.3 s Example-V2/3.1.13 (com.example.app; build:1; iOS 13.4.0) Alamofire/5.1.0 5e9b928a00ff0bc9244f94194c0001737e737065616b2d76322d32613166310001737065616b2d6170693a323032303034303374303630343431000100
2020-04-18T23:51:38.786760Z GET 404 324 B 2.4 s Unknown /_ah/start GET 404 324 B 2.4 s Unknown 5e9b928a00ff0c014898f5c27f0001737e737065616b2d76322d32613166310001737065616b2d6170693a323032303034303374303630343431000100
2020-04-18T23:51:39.529080Z [start] 2020/04/18 23:51:39.511828 No entrypoint specified, using default entrypoint: /serve
2020-04-18T23:51:39.529642Z [start] 2020/04/18 23:51:39.528742 Starting app
2020-04-18T23:51:39.529968Z [start] 2020/04/18 23:51:39.529100 Executing: /bin/sh -c exec /serve
2020-04-18T23:51:39.590085Z [start] 2020/04/18 23:51:39.589751 Waiting for network connection open. Subject:"app/invalid" Address:127.0.0.1:8080
2020-04-18T23:51:39.590571Z [start] 2020/04/18 23:51:39.590347 Waiting for network connection open. Subject:"app/valid" Address:127.0.0.1:8081
2020-04-18T23:51:39.764383Z [serve] 2020/04/18 23:51:39.763656 Serve started.
2020-04-18T23:51:39.764935Z [serve] 2020/04/18 23:51:39.764544 Args: {runtimeName:nodejs10 memoryMB:1024 positional:[]}
2020-04-18T23:51:39.766562Z [serve] 2020/04/18 23:51:39.765904 Running /bin/sh -c exec node server.js
2020-04-18T23:51:41.072621Z [start] 2020/04/18 23:51:41.071895 Wait successful. Subject:"app/valid" Address:127.0.0.1:8081 Attempts:296 Elapsed:1.481194491s
2020-04-18T23:51:41.072978Z Express server started on port: 8081
2020-04-18T23:51:41.073008Z [start] 2020/04/18 23:51:41.072411 Starting nginx
2020-04-18T23:51:41.085901Z [start] 2020/04/18 23:51:41.085451 Waiting for network connection open. Subject:"nginx" Address:127.0.0.1:8080
2020-04-18T23:51:41.132064Z [start] 2020/04/18 23:51:41.131572 Wait successful. Subject:"nginx" Address:127.0.0.1:8080 Attempts:9 Elapsed:45.911234ms
2020-04-18T23:51:41.170786Z GET /_ah/start 404 11.865 ms - 61
There is always more than 70% memory free, so that should not be the issue. We only noticed very high CPU utilization when the restarts occur (10x higher than normal).
In the CPU utilization graph (screenshot not included here) you can clearly see when the restarts happen.
This is my app.yaml
runtime: nodejs10
instance_class: B4
service: example-api
basic_scaling:
  max_instances: 1
  idle_timeout: 30m
handlers:
- url: .*
  secure: always
  script: auto
This is happening on our production server, so any help would be more than welcome.
Thanks!
Reading this document, it is mentioned that even though they try to keep basic and manual scaling instances running indefinitely, they are sometimes restarted for maintenance or may fail for other reasons. That is why keeping max_instances at 1 is not considered best practice: a single instance is exposed to all of these failures. As mentioned in the other answer, I would also recommend increasing the number of instances, so the likelihood of several failing or being restarted at the same time is lower.
We had the same problem when we migrated our Ruby on Rails app to Google App Engine Standard a year ago. After emailing back and forth with Google Cloud Support, they suggested: "increasing the minimum number of instances will help because you will have more “backup” instances."
At the time we had two instances, and since we upped it to three instances, we have had no downtime related to unexpected server restarts.
We are still not sure why our servers are sometimes deemed unhealthy and restarted by App Engine, but having more instances can help you to avoid downtime in the short run while you investigate the underlying issue.
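A hedged sketch of that change against the posted app.yaml (three instances is illustrative, not a tested number):
runtime: nodejs10
instance_class: B4
service: example-api
basic_scaling:
  max_instances: 3  # more than 1, so a restarted instance has backup
  idle_timeout: 30m
handlers:
- url: .*
  secure: always
  script: auto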

How to access the administration component in API Platform's distribution 2.4.2?

I tried to set up API Platform on my local machine to explore it.
I tried to perform all the operations according to API Platform's "Getting Started" page. So I downloaded the latest official distribution, which happens to be v2.4.2 (https://github.com/api-platform/api-platform/releases/tag/v2.4.2), and I started it using Docker.
However, I cannot access the administration backend at http://localhost:81; it shows "Unable to retrieve API documentation."
I searched for help at https://api-platform.com/docs/admin/getting-started/, but it describes steps that seem to be already done in the distribution.
How can I enable the admin component or debug what went wrong?
Edit (2019-04-14)
$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
40a5d5213cfe quay.io/api-platform/nginx "nginx -g 'daemon of…" 45 hours ago Up 6 minutes 0.0.0.0:8080->80/tcp apiplatformdemo_api_1
d53711c0ba0c quay.io/api-platform/php "docker-entrypoint p…" 45 hours ago Up 6 minutes 9000/tcp apiplatformdemo_php_1
2d4eb8d09e3e quay.io/api-platform/client "/bin/sh -c 'yarn st…" 45 hours ago Up 6 minutes 0.0.0.0:80->3000/tcp apiplatformdemo_client_1
abe3e3b41810 quay.io/api-platform/admin "/bin/sh -c 'yarn st…" 45 hours ago Up 6 minutes 0.0.0.0:81->3000/tcp apiplatformdemo_admin_1
4596a7f81cd8 postgres:10-alpine "docker-entrypoint.s…" 45 hours ago Up 6 minutes 0.0.0.0:5432->5432/tcp apiplatformdemo_db_1
c805fc2f11c9 dunglas/mercure "./mercure" 45 hours ago Up 6 minutes 443/tcp, 0.0.0.0:1337->80/tcp apiplatformdemo_mercure_1
Edit 2 (2019-04-14)
It is worth mentioning that although the API component at http://localhost:8080 works, the HTTPS variant at https://localhost:8443 does not. (Connection refused if I try to telnet it.)
It also turned out that it had escaped my notice earlier that there is a message in the JS console about a failed connection to https://localhost:8443. (It mentions CORS, but I think the real reason is that 8443 simply refuses the connection.) So although I opened the HTTP variant of Admin at http://localhost:81, it tried to access the API via HTTPS. What could be the reason HTTPS doesn't work?
Edit 3 (2019-04-15)
After looking into the docker-compose logs, I can see what is relevant: the Varnish container (cache-proxy) failed, and h2-proxy, which serves port 8443, depends on it.
cache-proxy_1 | Error:
cache-proxy_1 | Message from VCC-compiler:
cache-proxy_1 | Expected return action name.
cache-proxy_1 | ('/usr/local/etc/varnish/default.vcl' Line 67 Pos 13)
cache-proxy_1 | return (miss);
cache-proxy_1 | ------------####--
cache-proxy_1 |
cache-proxy_1 | Running VCC-compiler failed, exited with 2
cache-proxy_1 | VCL compilation failed
apiplatform242_cache-proxy_1 exited with code 2
h2-proxy_1 | 2019/04/15 08:09:17 [emerg] 1#1: host not found in upstream "cache-proxy" in /etc/nginx/conf.d/default.conf:58
h2-proxy_1 | nginx: [emerg] host not found in upstream "cache-proxy" in /etc/nginx/conf.d/default.conf:58
apiplatform242_h2-proxy_1 exited with code 1
I solved this error by getting API Platform from a clone of the current master instead of downloading the tar.gz release version (2.4.2):
git clone https://github.com/api-platform/api-platform.git
docker-compose build
docker-compose up -d
Works like a charm!
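For reference, the VCC compile error above points at return (miss); on line 67 of the bundled default.vcl, a return action that newer Varnish releases no longer accept from that subroutine. If you would rather patch the 2.4.2 tarball than switch to master, the change would look roughly like this; the file path, surrounding subroutine, and replacement action are assumptions based on the log and the Varnish upgrade notes, not on the actual file:
# api-platform/docker/varnish/conf/default.vcl (path assumed), around line 67
sub vcl_hit {
    if (obj.ttl >= 0s) {
        return (deliver);
    }
    # return (miss);   # rejected by newer Varnish from vcl_hit
    return (restart);  # documented replacement in the upgrade notes
}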

Varnish cache not starting up due to file storage parameter

I have Varnish 4.0 installed on my CentOS 7 box. My params file looks like:
RELOAD_VCL=1
VARNISH_VCL_CONF=/etc/varnish/default.vcl
VARNISH_LISTEN_PORT=80
VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1
VARNISH_ADMIN_LISTEN_PORT=6082
VARNISH_SECRET_FILE=/etc/varnish/secret
VARNISH_STORAGE_FILE=/var/lib/varnish/varnish_storage.bin
VARNISH_STORAGE_SIZE=5G
VARNISH_STORAGE="file,${VARNISH_STORAGE_FILE},${VARNISH_STORAGE_SIZE}"
VARNISH_TTL=120
VARNISH_USER=varnish
VARNISH_GROUP=varnish
When I start up Varnish, I get the following error:
Unit varnish.service has begun starting up.
varnishd[14888]: Error: (-sfile) "${VARNISH_STORAGE_FILE}" does not exist and could not be created
systemd[1]: varnish.service: control process exited, code=exited status=2
systemd[1]: Failed to start Varnish Cache, a high-performance HTTP accelerator.
The owner of the files is varnish. I have run out of ideas; how can I get this working?
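Not an authoritative answer, but the error message is telling: varnishd received the literal string "${VARNISH_STORAGE_FILE}" rather than the expanded path, which suggests that whatever reads this params file does not perform shell-style variable expansion (systemd's EnvironmentFile, used by the CentOS 7 unit, does not). A sketch of the workaround under that assumption is to spell the value out literally:
# /etc/varnish/varnish.params (sketch): avoid nesting variables, since
# systemd's EnvironmentFile does not expand them
VARNISH_STORAGE="file,/var/lib/varnish/varnish_storage.bin,5G"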

Zimbra is not working properly

A couple of days ago I tried this tool for a little project, and it installed without errors, but the service looks inactive (exited), so here is what I'm doing at the moment.
I'm using a CentOS 7 virtual machine on VMware and Zimbra Open Source 8.7.10. I read tutorials about setting up Zimbra, and I created a DNS name on the same virtual machine, but I'm not sure whether this works well or whether I have to install another CentOS machine to act as the DNS server.
I tested the DNS I made from Windows 8.1 (installed on my desktop machine) with nslookup, adding the IP to the DNS settings of the network adapter, and it got a response.
I can't get into the Zimbra web console; maybe I'm missing something.
[root@mail ~]# systemctl status zimbra
● zimbra.service - LSB: Zimbra mail service
Loaded: loaded (/etc/rc.d/init.d/zimbra; bad; vendor preset: disabled)
Active: active (exited) since vie 2017-06-30 11:43:34 -04; 19min ago
Docs: man:systemd-sysv-generator(8)
Process: 834 ExecStart=/etc/rc.d/init.d/zimbra start (code=exited, status=0/SUCCESS)
jun 30 11:42:03 mail zimbra[834]: Starting opendkim...Done.
jun 30 11:42:03 mail zimbra[834]: Starting snmp...Done.
jun 30 11:42:05 mail zimbra[834]: Starting spell...Done.
jun 30 11:42:10 mail zimbra[834]: Starting mta...Done.
jun 30 11:42:12 mail zimbra[834]: Starting stats...Done.
jun 30 11:42:23 mail zimbra[834]: Starting service webapp...Done.
jun 30 11:42:29 mail zimbra[834]: Starting zimbra webapp...Done.
jun 30 11:42:30 mail zimbra[834]: Starting zimbraAdmin webapp...Done.
jun 30 11:42:30 mail zimbra[834]: Starting zimlet webapp...Done.
jun 30 11:43:35 mail systemd[1]: Started LSB: Zimbra mail service.
I hope you can help me, thank you in advance.
Now I can use Zimbra. There were two errors: the first one was allowing port 7071 on the server side, and the other was the DNS config, which just needed to point to the Zimbra IP in the DNS field. I can even send mail to Gmail, which I thought was impossible, haha.
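In case it helps someone else, opening the admin console port on CentOS 7 looks roughly like this, assuming firewalld is the active firewall:
# Open the Zimbra admin console port (7071) and reload the firewall
firewall-cmd --permanent --add-port=7071/tcp
firewall-cmd --reload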
Anyway, thank you, but if someone has good practices for Zimbra, I'm all ears; I mean TLS/SSL certificates so clients can believe it's safe.
Cheers!

Why is my application not being deployed on OpenShift?

I believe I have everything set up properly for my server, but I keep getting this error:
Starting NodeJS cartridge
Tue Jan 05 2016 10:49:19 GMT-0500 (EST): Starting application 'squadstream' ...
Waiting for application port (8080) become available ...
Application 'squadstream' failed to start (port 8080 not available)
-------------------------
Git Post-Receive Result: failure
Activation status: failure
Activation failed for the following gears:
568be5b67628e1805b0000f2 (Error activating gear: CLIENT_ERROR: Failed to
execute: 'control start' for /var/lib/openshift/568be5b67628e1805b0000f2/nodejs
#<IO:0x0000000082d2a0>
#<IO:0x0000000082d228>
)
Deployment completed with status: failure
postreceive failed
I have my git repo set up with all the steps followed properly.
https://github.com/ammark47/SquadStreamServer
Edit: I have another app on OpenShift that is on port 8080. I'm not sure if that makes a difference.
If the other application is running on the same gear, then it is binding to port 8080 first, making the port unavailable for your second application. You will need to run each application on its own gear. Also, you need to make sure that you are binding to port 8080 on the correct IP address for your gear; you can't bind to 0.0.0.0 or 127.0.0.1.
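A minimal sketch of that binding for a Node app on an OpenShift v2 gear; OPENSHIFT_NODEJS_IP and OPENSHIFT_NODEJS_PORT are set by the cartridge, and the fallbacks are only for local testing:
// Bind to the gear's private IP and port from the cartridge environment.
var http = require('http');

var ip = process.env.OPENSHIFT_NODEJS_IP || '127.0.0.1';
var port = process.env.OPENSHIFT_NODEJS_PORT || 8080;

http.createServer(function (req, res) {
  res.end('ok\n');
}).listen(port, ip, function () {
  console.log('Listening on ' + ip + ':' + port);
});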
