So I'm trying to build a simple PaaS for Node apps (http://www.github.com/darrensmith/baseplatform) and I'm getting some really strange behaviour.
Basically, you can run BasePlatform on your host and it starts a proxy server on port 8080 using http-proxy and an instance of swaggerize-express on port 8180. Initially it proxies all requests on port 8080 through to 8180, which is the API used to install new apps.
You can upload an app and specify a domain name that has its DNS pointing at the same host (localhost for testing purposes), and based on that domain it will proxy requests through to the app that is running on an alternate port.
So I created a second swaggerize-express app and uploaded it into BasePlatform, running on port 8005. However, when I try to view the automatically generated Swagger JSON (http://localhost:8005/api/v1/api-docs) for the app running on port 8005, I get the JSON for the default app running on port 8180.
If I start the app independently and hit port 8005 I get the correct JSON.
I don't understand how a node process running on one port of my host is interfering with the node process running on the other, and I'm looking for some insight.
Note - this is me trying to hit the installed app's JSON directly on the port that it was started on. If I try viewing it on port 8080 (via the proxy) I get the same behaviour. My static routes that aren't automatically being handled by Swaggerize are working as expected - the crossover only seems to happen on the swaggerize-handled routes.
Any help would be greatly appreciated!
Figured it out!
I implicitly left the current working directory as that of the parent process (BasePlatform) when launching the child process:
const fork = require('child_process').fork;

// No cwd is specified, so the child inherits the parent's working directory
app.locals.settings.deployedProcesses[oldAppId+'-'+latestDeployId] = fork('./deployments/'+oldAppId+'-'+latestDeployId+'/server.js');
As a result, the Swaggerize router of the child process was picking up the parent process's swagger.yaml (because relative paths were resolved against the parent's current working directory) instead of its own.
I revised it to set the current working directory to that of the child process:
const fork = require('child_process').fork;

// Run the child with its own deployment directory as the working directory
app.locals.settings.deployedProcesses[oldAppId+'-'+latestDeployId] = fork('server.js', [], {
  cwd: './deployments/'+oldAppId+'-'+latestDeployId
});
Related
I'm trying to run two instances of Node.js on the same port and server, from different server.js files (different dir, config, etc.). My hosting provider told me that a vhost is running for a different domain, which raises the question: how do I handle this in a Node.js Express app? I've tried to use vhost from https://github.com/expressjs/vhost like this:
const app = express();
const vhost = require('vhost');
app.use(vhost('example1.org', app));
// Start up the Node server
app.listen(4100, () => {
  console.log(`Node server listening on 4100`);
});
And for the second application like this:
const app = express();
const vhost = require('vhost');
app.use(vhost('example2.org', app));
// Start up the Node server
app.listen(4100, () => {
  console.log(`Node server listening on 4100`);
});
But when I try to run the second instance I get EADDRINUSE ::: 4100, so vhost doesn't work here.
Do you know how to fix it?
You can only have one process listening on a given port, not just in Node.js, but generally (with exceptions that don't apply here).
You can achieve what you need in one of two ways:
Combine the node apps
You could make the apps into one application, listen once, and then forward requests for each host to separate bits of code. If you still want code separation, those separate bits of code could be npm modules that are written and maintained in isolation.
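For example, using the vhost middleware from the question, a single combined app could look roughly like this. This is only a sketch; example1App and example2App are hypothetical sub-apps standing in for your two existing codebases:

const express = require('express');
const vhost = require('vhost');

// Each site lives in its own Express sub-app (these could be separate npm modules).
const example1App = express();
example1App.get('/', (req, res) => res.send('Hello from example1.org'));

const example2App = express();
example2App.get('/', (req, res) => res.send('Hello from example2.org'));

// One parent app listens once and dispatches requests by Host header.
const app = express();
app.use(vhost('example1.org', example1App));
app.use(vhost('example2.org', example2App));

app.listen(4100, () => {
  console.log('Node server listening on 4100');
});

Both hostnames then resolve to the same process, which listens once on port 4100, so EADDRINUSE cannot occur.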
Use a webserver to proxy the requests
You could run the two node processes on free ports, say 5000 and 5001, and use a webserver to forward requests to them automatically based on the host. I'd recommend Nginx for this, as its proxying capabilities are both relatively easy to set up and powerful. It's also fairly good at not using too many system resources. Apache and others can also be used for this, but my personal preference would be Nginx.
Conclusion
My recommendation would be to install a webserver and forward requests on the exposed port to the separately running node processes. I'd actually recommend running node behind a proxy by default for a project, and only exposing it directly in exceptional circumstances. You get a lot of configuration options, security, and scalability benefits when your app already sits behind a well-hardened server setup.
I have just started using Google App Engine with Node.js. I have created a local project that works fine on my machine, and if I hit
http://localhost:7000/services/user/getuser
it returns a JSON object.
I have deployed the same project to Google App Engine using
gcloud app deploy
Now when I hit
http://help-coin.appspot.com/services/user/getuser
it is showing
Error: Server Error
The server encountered a temporary error and could not complete your request.
Please try again in 30 seconds.
I have checked the logs on the server:
Load controllers from path '/app/app/services' non recursive.
--------make-runnable-output--------
undefined
------------------------------------
Up and running on port 7000
Loading controller 'UserService.js'.
There is no error on the server side. What is the issue? Am I missing something?
Here is the project that I have deployed https://github.com/ermarkar/nodejs-typescript-sample
Your app must listen on port 8080, not 7000 or any other port.
See this.
Listening to port 8080
The App Engine front end will route incoming requests to the appropriate module on port 8080. You must be sure that your application code is listening on 8080.
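As a minimal sketch of what that looks like in an Express app (this assumes the standard PORT environment variable, which App Engine sets, and falls back to 8080):

const express = require('express');
const app = express();

// App Engine routes requests to your app on port 8080;
// honour PORT if it is set, otherwise default to 8080.
const PORT = process.env.PORT || 8080;

app.listen(PORT, () => {
  console.log(`Listening on port ${PORT}`);
});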
I have a full web application using NodeJS, MongoDB (Mongoose as the driver) and ExpressJS.
The project works perfectly on my local machine. Today I decided to move everything to production. I'm using Google App Engine to host my application, and Compose (formerly MongoHQ) to host my database.
App Engine serves my application perfectly, although my API does not seem to work. My API is served from example.com/api, and every request (GET, POST, DELETE and PUT) returns a 502 (Bad Gateway) error.
I tried running my application on my local machine while connected to my remote MongoDB database and that worked perfectly fine. So it must be a problem with App Engine or NodeJS, not with MongoDB.
I have tried checking all the error logs within Google Cloud, but there are no errors.
Why is App Engine/Node.js serving my application's static content perfectly fine, but not allowing any requests to my API?
Just make sure that your server listens on port 8080:
https://cloud.google.com/appengine/docs/flexible/custom-runtimes/build#listen_to_port_8080
502 Bad Gateway is usually an error on the Nginx side. Unfortunately those logs are not surfaced to Cloud Logging, yet.
A lot of times the problem is that your HTTP packets are too big for the buffers, or something similar. The way you can see the nginx log is something like this:
Use just 1 VM. This isn't strictly necessary, but a lot of times it makes it easier to debug your application if you know that your requests hit the one machine. You can accomplish this by adding this to your app.yaml:
manual_scaling:
  instances: 1
Then re-deploy.
Switch the VM from "Google owned" to self-managed. This can be done in the Cloud Console. Go to Compute Engine, instances, click on the instance name that matches the App Engine version, and you should see an option to switch it to self-managed.
gcloud compute ssh <instance name> to SSH to the machine
docker ps to see your running containers. Look for the container named nginx and grab its id.
Once you have a container ID, you should be able to docker exec -it <container id> -- cat /var/log/nginx/error.log. You might want to ls that whole log directory.
You will likely see an error there which will be a bigger hint as to what's going wrong.
I know this is way more complicated than it should be :-\ If you have any problems with the steps above, leave a comment. If you do find an error and you're not sure what to do with it, also leave a comment.
I had the same problem: I was getting an "nginx 502 bad gateway" error on the GAE standard environment. There are many reasons for this, but I finally got it working. Try these:
1) Run the app on the correct port. Google will set the PORT environment variable. I was running on port 8080, and in the Stackdriver logs I was getting this warning:
App is listening on port 8080. We recommend your app listen on the
port defined by the PORT environment variable to take advantage of an
NGINX layer on port 8080.
The code below gets the port from the environment if PORT is set, and otherwise defaults to 8080:
const PORT = process.env.PORT || 8080;
2) Go to Google Cloud console -> Logging -> Logs Viewer. Select Google App Engine and then your service from the dropdown and check your logs. Are you getting the requests at all, or does it look like the requests do not reach your server? In my case, I was not getting them even after I fixed the port:
2020-03-02 21:50:07 backend[20200302t232314] Server listening on port
8081! 2020-03-02 21:50:08 backend[20200302t232314] "GET /create-user
HTTP/1.1" 502
Fix any errors if it looks like your application is failing to start, throwing exceptions, etc.
3) Don't pass an IP when you are running your server. It seems Google runs the app at a pre-defined IP address and does not want you to modify it:
server.listen(PORT);
4) Don't try to run on https! Google is running an nginx server in front of your app; it handles the SSL and redirects to your app over http. You can use the environment variable NODE_ENV (it is set to "production" in the GAE environment) to run on http in production and https elsewhere, like this:
let https = require('https');
let http = require('http');
let fs = require('fs'); // needed to read the local certificate files below

if (process.env.NODE_ENV == "production") {
  http.createServer(app).listen(PORT, function () {
    console.log(`Server listening on port ${PORT}!`);
  });
} else {
  https.createServer({
    key: fs.readFileSync('host.key'),
    cert: fs.readFileSync('host.cert')
  }, app).listen(PORT, function () {
    console.log(`Server listening on port ${PORT}!`);
  });
}
5) I didn't need to set any handlers in my yaml file; it might be causing errors for you if you have an incorrect configuration. My yaml file is pretty straightforward:
runtime: nodejs12
env: standard
instance_class: F1
I have set up a Node.js 0.10 gear in OpenShift, to which I deployed a simple server based on peerjs-server. All I want this server to do is act as a signalling server to communicate connection info between peers connected to my application; from then on they communicate peer-to-peer using WebRTC. Everything works when pointing to the demo "PeerJS Cloud" signalling server, but when trying to use my own server setup I keep getting 503 status codes back.
Here is the server creation code I use:
var host = process.env.OPENSHIFT_NODEJS_IP;
var port = process.env.OPENSHIFT_NODEJS_PORT || 8080;
var server = new PeerServer({ port: port, host: host});
NB: I have added host to peerjs-server so I can use OpenShift's IP; not sure if this was necessary, but it wasn't working without it either.
The server peerjs-server uses is restify. Here is the server creation and listen code:
this._app = restify.createServer(this._options.ssl);
/* A lot of set up code that I have not changed from peerjs-server */
this._app.listen(this._options.port, this._options.host);
where this._options.port and this._options.host are the ones defined in the previous code segment, and I am not using SSL, so nothing is being passed in there.
When deploying this code to OpenShift I get no errors, but when accessing the site on port 80 or 8000 (the open external ports) I get 503s. I also checked rhc tail and this is what I get:
Screenshot (can't post images because I have no reputation). Not sure exactly what that means, if anything.
Any help is much appreciated; if more info is needed I can add more, I was not sure what information was important.
UPDATE: It's a scaled application using 1-3 small gears.
from https://github.com/peers/peerjs-server/blob/master/lib/server.js:
// Listen on user-specified port and IP address.
if (this._options.ip) {
  this._app.listen(this._options.port, this._options.ip);
} else {
  this._app.listen(this._options.port);
}
So, use 'ip' and not 'host'. Worked for me.
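Applied to the server creation code from the question, that change would look something like this (a sketch only, keeping the question's OpenShift environment variables):

var host = process.env.OPENSHIFT_NODEJS_IP;
var port = process.env.OPENSHIFT_NODEJS_PORT || 8080;

// Pass the OpenShift IP through the 'ip' option that lib/server.js actually reads,
// instead of the custom 'host' option added in the question.
var server = new PeerServer({ port: port, ip: host });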
I am building a Windows Azure application which is primarily based on .NET, but I also have to build a socket.io server using Node.js, hence I need to deploy a socket.io server and use its URL to connect from my .NET application.
I followed all the steps listed here. I am able to get socket.io running locally, but when I deploy to the cloud it doesn't start. Please find below a code snippet for socket.io:
var app = require('express')()
  , server = require('http').createServer(app)
  , io = require('socket.io').listen(server, { origins: '*:*' });
server.listen(4001);
When I hosted it in my local emulator, 127.0.0.1:81 was pointing to this in my browser.
But 127.0.0.1:4001 showed "Cannot GET /" in the browser, which is an indication that the socket.io server is running on that URL.
But when I deploy the same to the cloud, I get the same as the screenshot on the URL where the cloud service is hosted, but on port 4001, where the socket.io server should have started, it says the page cannot be displayed.
Please let me know if you need to see any other files, like web.config etc.
I have been stuck on this issue forever and it's really crucial for my project; any suggestions or ideas would be deeply appreciated.
Thanks
The important part that you are missing from the sample is the setting of the port number:
var port = process.env.port || 1337;
and
.listen(port)
When you are running inside the Azure environment (even emulated) the ports are assigned for you; the port environment variable will tell you where. 4001 is likely not the assigned port.
The 1337 would only be used if you are running by executing
node server.js
from the command line.
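Applied to the snippet from the question, that would look something like this (a sketch only; the socket.io setup is carried over unchanged from the question):

var app = require('express')()
  , server = require('http').createServer(app)
  , io = require('socket.io').listen(server, { origins: '*:*' });

// Use the port Azure assigns through the environment; 1337 is only the
// fallback for running `node server.js` directly from the command line.
var port = process.env.port || 1337;
server.listen(port);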