I've been migrating data for two days now, and everything is OK in AWS. I used a Bitnami MEAN machine, since it was only a very small app.
FYI, I'm moving from Heroku + Parse, and I also set up nginx on AWS to run more than one Node.js app.
I had to downgrade the default MongoDB installation due to an incompatibility with Parse (WHY?).
So, straight to the problem: I installed the Node.js Parse Server and configured it as shown on GitHub:
var api = new ParseServer({
databaseURI: 'mongodb://127.0.0.1:27017/database',
cloud: './cloud/main.js',
appId: 'my-app-id',
masterKey: 'my-master-key'
});
but when I try to execute any query I get:
Error: Protocol not supported.
at send (/opt/bitnami/apps/bellboy-admin/node_modules/xmlhttprequest/lib/XMLHttpRequest.js:299:15)
at dispatch (/opt/bitnami/apps/bellboy-admin/node_modules/parse-server/node_modules/parse/lib/node/RESTController.js:137:11)
at Object.ajax (/opt/bitnami/apps/bellboy-admin/node_modules/parse-server/node_modules/parse/lib/node/RESTController.js:139:5)
at ParsePromise.<anonymous> (/opt/bitnami/apps/bellboy-admin/node_modules/parse-server/node_modules/parse/lib/node/RESTController.js:208:29)
at ParsePromise.wrappedResolvedCallback (/opt/bitnami/apps/bellboy-admin/node_modules/parse-server/node_modules/parse/lib/node/ParsePromise.js:135:41)
at /opt/bitnami/apps/bellboy-admin/node_modules/parse-server/node_modules/parse/lib/node/ParsePromise.js:196:35
at runLater (/opt/bitnami/apps/bellboy-admin/node_modules/parse-server/node_modules/parse/lib/node/ParsePromise.js:180:12)
at ParsePromise.then (/opt/bitnami/apps/bellboy-admin/node_modules/parse-server/node_modules/parse/lib/node/ParsePromise.js:195:9)
at Object.request (/opt/bitnami/apps/bellboy-admin/node_modules/parse-server/node_modules/parse/lib/node/RESTController.js:201:8)
at Object.find (/opt/bitnami/apps/bellboy-admin/node_modules/parse-server/node_modules/parse/lib/node/ParseQuery.js:1141:27)
I've tried almost everything. Any ideas?
Did you install the dependencies for Parse Server? More specifically, is the MongoDB Node.js driver installed?
npm install mongodb
If it helps, I have a tutorial that explains how the Parse Server should be set up, provided you have MongoDB and Node.js already installed at the correct versions.
Solved
I guessed it was something involving the http/https protocols between my Node app and the Parse Server, so I just added http:// before the address in Parse.serverURL:
Parse.initialize('my-id','unused');
Parse.serverURL = 'http://localhost:3030/parse';
Maybe it defaults to https when no protocol is specified.
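For completeness, here's a minimal sketch of the full wiring, following the standard parse-server Express example; the serverURL option and the /parse mount path are my assumptions, mirroring the client-side URL above:

var express = require('express');
var ParseServer = require('parse-server').ParseServer;

var api = new ParseServer({
  databaseURI: 'mongodb://127.0.0.1:27017/database',
  cloud: './cloud/main.js',
  appId: 'my-app-id',
  masterKey: 'my-master-key',
  serverURL: 'http://localhost:3030/parse' // note the explicit http://
});

var app = express();
app.use('/parse', api); // mount the Parse API at /parse
app.listen(3030);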
Related
I have a split app using NestJS on the server and an Angular app as the client. Setting up websockets with socket.io seemed pretty easy using the @nestjs/websockets module, and on the client I used ngx-socket-io. I used this repo as a basis. Now when I update the project's @nestjs/websockets dependency to the latest version I get
CORS errors, and
an error that the client couldn't load the socket.io client JS file.
I expected CORS problems after the update, and I could fix them by adding
app.enableCors({
origin: 'http://localhost:4200',
credentials: true,
});
to my main.ts file, but I don't know why the client file is not served. With the version of the repo (5.7.x) there are neither CORS errors nor problems with serving the file.
I tried a couple of settings of @WebSocketGateway(), moving to a different port, and setting serveClient (even though it should be true by default), but nothing seemed to work. Any advice?
thanks
In my case, I replaced

app.useWebSocketAdapter(new WsAdapter(app));

(where WsAdapter comes from import { WsAdapter } from '@nestjs/platform-ws';)

with

app.useWebSocketAdapter(new IoAdapter(app));

(where IoAdapter comes from import { IoAdapter } from '@nestjs/platform-socket.io';)

in main.ts.
Worked like a charm!
The problem was that NestJS separated the lower-level platforms (socket.io, express, fastify, ...) from the NestJS modules. The websockets module requires an underlying platform to be installed; for socket.io:
npm install --save @nestjs/platform-socket.io
To serve the socket.io client file, it seems an HTTP platform also needs to be installed; for express:
npm install --save @nestjs/platform-express
More info in the migration guide for v6.
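Putting it together, a minimal main.ts could look like the sketch below; AppModule and the port number are placeholders rather than anything from the posts above:

import { NestFactory } from '@nestjs/core';
import { IoAdapter } from '@nestjs/platform-socket.io';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  // allow the Angular dev server on :4200 to talk to the API
  app.enableCors({
    origin: 'http://localhost:4200',
    credentials: true,
  });
  // use the socket.io adapter so the client file is served again
  app.useWebSocketAdapter(new IoAdapter(app));
  await app.listen(3000);
}
bootstrap();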
I had the same problem. I was opening the client side of the application in the web browser, but directly from my filesystem (I would double-click on the index.html next to the little dummy fake-front-end.js on my desktop, for example). It seems that the CORS problem would persist until I actually accessed the index.html through a proper server. So I created a route on my backend to serve the index.html and the fake-front-end.js.
There is a section about CORS in the official socket.io documentation, and there is a section on the NestJS website, but neither really helped in my case.
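For reference, serving those files from the backend can be as simple as this plain Express sketch (the public folder name is my own placeholder, not something from the post above):

var express = require('express');
var path = require('path');

var app = express();
// serve index.html and fake-front-end.js over HTTP instead of
// opening them straight from the filesystem
app.use(express.static(path.join(__dirname, 'public')));
app.listen(3000);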
I have a locally running Node.js app using Mongoose to do CRUD with mLab's MongoDB. No problems.
I push my code to GitHub and then tell Azure to host my app as a web app, which I have done many times without Mongoose.
As best I can tell, Azure does an npm install on my package.json file on my behalf as it installs my app into one of their web server hosts.
If I comment out these lines in my users.js route:
//var mongoose = require('mongoose');
//var ObjectID = require('mongodb').ObjectID;
all is well, Azure runs my app.
If I do not comment them out, I get this error written in the Azure log:
npm http GET https://registry.npmjs.org/mongoose
npm ERR! Error: SSL Error: CERT_UNTRUSTED
In an effort to fix this in my package.json file, I have:
[1] forced Azure to use this version of Node:
"engines": {
"node": "0.8.x"
},
[2] tried to force Mongoose to a current version:
"mongoose": "^5.3.11",
[3] tried to force negotiator to this version, as GitHub was complaining about a security issue with it:
"negotiator": ">=0.6.1",
I would REALLY like to continue having Azure run my web apps from GitHub, and not get into the Azure command-line stuff to install my bits, so I don't have much control over the installation. Something about Mongoose is going wrong.
thanks
One thing you could try is updating to the newer certificate definitions.
This version of config-defs.js might do the trick (usually located under /deps/npm/lib/utils).
You might also need to update that file with an additional certificate, as described here.
Hope it helps!
Neil Lunn fixed it, thanks. I was having problems running v10.x.x of Node, and tried dropping back to v8.x.x to get around a breaking change in v10, but I got the name of v8 wrong: "node": "0.8.x" does not get you Node v8, while "8.10.0" does, and all is well.
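In other words, the corrected engines block looks like this, using the 8.10.0 version mentioned above:

"engines": {
"node": "8.10.0"
},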
I am trying to assist in setting up AppDynamics with an Angular 2 app that is hosted in IIS. The app is already up and running. There is a part I am having trouble with; the instructions for that part say:
1) From the root directory of your Node.js application, run this command:
npm install appdynamics#4.3.5
For every Node.js application you are instrumenting, insert the following call in the application source code at the first line of the main module (such as the server.js file), before any other require statements:
require("appdynamics").profile({
controllerHostName: '<controller host name>',
controllerPort: <controller port number>,
controllerSslEnabled: false, // Set to true if controllerPort is SSL
accountName: '<AppDynamics_account_name>',
accountAccessKey: '<AppDynamics_account_key>',
applicationName: 'your_app_name',
tierName: 'choose_a_tier_name',
nodeName: 'choose_a_node_name'
});
2) Restart your application.
I did step 1 locally in the console, but I don't know what to do for step 2. If I add that script to the page I get "ReferenceError: require is not defined".
I learned that that function is not meant to run in the browser. It's meant to be run server-side, but I do not see Node.js or any server.js files on our dev web server.
Does anyone have any suggestions on where to put that snippet? Will it even work with the current setup?
It turns out the code I was given was completely wrong for an Angular 2 implementation. The code they gave me is for running on the web server side with Node.js. Since Angular 2 is an SPA that runs in the browser, it would never work.
I did some research and found this example application that I added a few tweaks to: https://github.com/derrekyoung/appd-sampleapp-angular2
I'm hoping to host a Node.js server at OpenShift, utilizing a MongoDB database hosted at mlab.com (the new version of mongolab.com). Here's a pretty straightforward tutorial. It may be a bit dated, but it seems to have been targeted directly at my application (less the update from MongoLab to mLab). I've used MongoLab in the past, and they provide a great service.
So I've built my database, written my Node code, and tested it from localhost, where it works great. Yes, there are a few lines of difference, but not much. I'm using the same git directory that I'm pushing to OpenShift. The code is pretty straightforward:
var MongoClient = require('mongodb').MongoClient;
var assert = require('assert');
var databaseUrl = 'mongodb://UserNameHere:PasswordHere@ds012345.mlab.com:12345/DataBaseName';
if (process.env.MLAB_URI) {
databaseUrl = process.env.MLAB_URI;
}
MongoClient.connect(databaseUrl, function(err, db) {
assert.equal(null, err, "Database Connection Troubles: " + err);
// ...
});
I test process.env.MLAB_URI from my terminal after an RHC login:
[ABC-XYZ.rhcloud.com xxxxxxxxxxx]> echo $MLAB_URI
mongodb://<username>:<password>@ds012345.mlab.com:12345/DataBaseName
[ABC-XYZ.rhcloud.com xxxxxxxxxxx]> echo $OPENSHIFT_REPO_DIR
/var/lib/openshift/xxxxxxxxxxxx/app-root/runtime/repo/
I test with $MLAB_URI in the shell and use it in code via process.env. Obviously I've changed my username, password, and OpenShift server identification here, but I've checked, and there appear to be no spelling errors. I get the same failure on OpenShift if I don't use the MLAB_URI environment variable. It's like the connection from the OpenShift server is shut off.
mLab does provide some tools to verify the connection to a MongoDB database there; here's a link to the mLab assist stuff. I can ping the mLab location from an RHC login and it works just fine. Unfortunately, I'm unable to do the % netcat -w 3 -v ds012345.mlab.com 12345 test; that tool (netcat / nc) isn't available at OpenShift.
Again, this thing works fine when I run my node file.js from localhost; I can see data being deposited at the mLab server. It fails if I run from OpenShift, with:
throw err ^
AssertionError: Database Connection Troubles: MongoError: auth failed
The code works fine if I use a MongoDB cartridge in the same gear at OpenShift. Unfortunately I've got a few different servers at different locations that are all sharing information. Anybody know what's going on here?
Update: I've done some additional testing from a terminal with an RHC login to OpenShift.
[ABC-XYZ.rhcloud.com xxxxxxxxxxx]> mongo ds012345.mlab.com:12345/dbName -u <dbuser> -p <password>;
MongoDB shell version: 2.4.9
connecting to: 127.6.xyz.xyz:27017/admin
Fri Mar 11 04:14:52.770 Error: 18 { code: 18, ok: 0.0, errmsg: "auth fails" } at src/mongo/shell/db.js:228
exception: login failed
The one surprise is that "connecting to: ...:27017/admin" line; I'd like to understand that better. Stay tuned.
Update for anybody else who may get here: I submitted a support request to mLab and received an immediate response (awesome support!):
You'll need to upgrade your mongo shell version to 3.0+ in order to
connect and authenticate to an mLab Sandbox database server. It looks
like version 2.4.9 is being used.
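A quick check confirms which shell version is in use:

mongo --version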
So I was definitely using mongo shell version 3.0 from my localhost, but I have little control at OpenShift over that command-line feature. But whoa... let's not forget the big picture here. I'm really trying to use my Node server to contact mLab via a var MongoClient = require('mongodb').MongoClient; connect call. Let's make the same check: do I have the latest version of mongodb listed in my package.json file? Oops.
Easy fix: update package.json to require a newer version of mongodb. Success at OpenShift. Yippee!
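For anybody following along, the fix amounts to a dependency bump like this in package.json; the exact version is illustrative, and any driver release recent enough to handle MongoDB 3.0's authentication should do:

"dependencies": {
"mongodb": "^2.1.0"
}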
I created a new Node.js app on Bluemix this morning and downloaded the boilerplate code. I worked on it locally and then pushed it up. On Bluemix, it refuses to start. The error according to the logs is:
Instance (index 0) failed to start accepting connections
So I Googled that, and in every case where I found the message, the answer was that the application was trying to use a specific port instead of letting Bluemix set it.
Ok, but I'm setting the host/port with the exact code the boilerplate uses:
var cfenv = require('cfenv'); // Bluemix environment helper
var appEnv = cfenv.getAppEnv();
// start server on the specified port and binding host
app.listen(appEnv.port, function() {
// print a message when the server starts listening
console.log("server starting on " + appEnv.url);
});
So if this is incorrect, it means the code Bluemix told me to download itself is incorrect as well, and I can't imagine that is the issue.
To identify whether cfenv is at fault, I've tested that piece of code with a number of more complex Node.js apps I have, and they work perfectly on Bluemix.
That message can also come when an application you've deployed to Bluemix fails to start at all. Here are a few things you can do to troubleshoot your Node.js application on Bluemix.
Tail logs in another terminal while pushing with "cf logs". Inspect the logs after the failure to see if something failed during the staging process.
Check that your start command is in one of two recommended places: scripts.start in package.json, or in a Procfile with web: node <start-script> (see the sketch after this list).
Check that your application works locally. First (optional), create a .cfignore file with "/node_modules" in it, so that when you push the app to Bluemix, the CF CLI doesn't push your entire node_modules folder (the modules will be installed dynamically). Next, wipe out your node_modules directory and do an npm install --production followed by npm start (or your custom start command). This is what Bluemix does when trying to start your application, so you should double-check that it works locally.
Finally, try bumping up your memory, although it is very unlikely that this is why your application fails to start.
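To illustrate the two recommended start-command locations from the list above (node app.js is just a placeholder entry point, not something from the original question), either of these works. In package.json:

"scripts": {
"start": "node app.js"
}

Or in a Procfile:

web: node app.js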