I used Daftmonk's Yeoman Full-stack generator as scaffolding for an app I'm making. I'd like to run it using Nodejitsu's Node.js service. But when I deploy, I get a cascading series of errors, and even once the errors stop in jitsu's CLI, the app fails to deploy and returns a "file not found" error.
I'm guessing the errors come from the fact that, on my localhost, Grunt packages my app into a dist/ folder and serves it from there, and I don't think Nodejitsu accounts for that.
Has anyone had success deploying a Node.js app that uses Grunt in this way on Nodejitsu? Sorry if this question is vague; I'd be happy to elaborate, but I'm lost!
I also use Yeoman with Angular Fullstack. It's almost funny how these tools come so close to working but leave you, the developer, stuck when it comes to deploying. This isn't actually an answer; it's more of a supportive post acknowledging the problem. Here is what I've discovered.
Yeoman likes you to run grunt serve:dist to get a peek at what the minified build will look like. This puts all your production code into /dist/public.
Nodejitsu will run node server/app.js when it starts your app.
Unfortunately, this leaves Nodejitsu looking in dist instead of dist/public, so it can't find the files.
------SEE BELOW FOR A BETTER ANSWER------
I've been playing with this more, and as it turns out, the answer is almost too easy, especially with Nodejitsu. I'm going to assume that you've already installed the Nodejitsu tools with:
[sudo] npm install jitsu -g
I'll also assume you've registered as described here:
https://www.nodejitsu.com/documentation/jitsu/
Now, it's really easy.
Run grunt. This will minify and uglify your code. I had some problems with uglify (as many people seem to have), so I added this to Gruntfile.js to fix the issue. Apparently the JS is larger as a result, but the headache factor is worth it for me:
Add it within the initConfig section (only take this step if you are getting an error about loading modules in your browser's console).
// Uglify exceptions: disable name mangling, which was causing the
// module-loading errors in the browser console after minification
uglify: {
  options: {
    mangle: false
  }
},
Running grunt puts everything into /dist/public.
Now
cd /dist
jitsu deploy
Everything worked perfectly for me. The key here is being inside the dist dir when you run jitsu deploy. This way, it will only deploy the production compiled code.
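For reference, the build should leave a package.json inside dist/ that tells Nodejitsu how to start the app (if it doesn't, add one). A minimal sketch of the relevant fields might look like this; the name, version, and Node engine are placeholders, and the server/app.js entry point assumes the layout described above:

{
  "name": "my-app",
  "version": "0.0.1",
  "scripts": {
    "start": "node server/app.js"
  },
  "engines": {
    "node": "0.10.x"
  }
}

jitsu deploy will prompt for anything it still needs (such as a subdomain) and write it back into this file.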
Related
I have a Laravel application deployed on a shared hosting server. I managed to deploy the app and install all the Composer/Node dependencies, and it all runs with no errors. I'm trying to make some minor changes to one of my components, but for some reason, after npm run dev (or production) everything seems to be compiled with no errors, yet the actual application in the browser does not reflect the changes. I tried clearing all the caches in the app and in the browsers I'm using. I also tried running npm run watch. I replaced files, and I even replaced the whole folder. If I remove something, npm does display an error about the missing files, but my changes are not compiled. I've been googling for two hours, but I cannot find anything useful. Any idea is welcome. Thanks in advance.
Using Laravel Mix's .version() could help with cache busting. Don't forget to use mix('path') instead of asset('path') in your Blade files.
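In case it helps, a minimal webpack.mix.js sketch with versioning enabled (the paths are the Laravel defaults, so adjust them to your project):

const mix = require('laravel-mix');

// Compile the assets and append a content hash to the compiled file names,
// so browsers are forced to fetch fresh copies after every build.
mix.js('resources/js/app.js', 'public/js')
   .sass('resources/sass/app.scss', 'public/css')
   .version();

Then reference the files through the mix() helper in Blade, e.g. <script src="{{ mix('js/app.js') }}"></script>, so the hashed name recorded in mix-manifest.json is resolved at runtime.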
What is the best practice for deploying a Node.js application?
1) Directly move the node_modules folder from the development server to the production server, so that the same local environment is recreated in production. Whatever changes are made to the node modules remotely will not affect our code.
2) Run the npm install command on the production server using package.json. The problem here is that any changes in the node modules will affect our code. I have faced some issues with the loopback module (issue link).
Can anyone help me?
Running npm install on the production server cannot be done in certain scenarios (lack of compiling tools, restricted internet access, etc.), and if you have to deploy the same project on multiple machines, it can be a waste of CPU, memory, and bandwidth.
You should run npm install --production on a machine with the same libraries and Node version as the production server, compress node_modules, and deploy that to the production server. You should also keep the package-lock.json file to pin versions.
This approach also allows you to build and test your code using development packages and then prune node_modules before the actual deploy.
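A rough sketch of that workflow, assuming a build machine that matches the production OS and Node version (archive and directory names are just examples):

npm install                      # full install, including devDependencies
npm test                         # build/test with the dev packages available
npm prune --production           # strip devDependencies out of node_modules
tar -czf node_modules.tar.gz node_modules

Copy the archive to the production server alongside your code (plus package.json and package-lock.json for reference) and extract it there; no npm or internet access is needed on the server itself.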
Moving the node_modules folder is overkill.
Running npm install might break version dependencies.
The best approach is npm ci. It uses the package-lock.json file and installs the required dependencies without modifying their versions.
npm ci is meant for continuous integration projects. LINK
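For example, on the production server (or in CI), with both package.json and package-lock.json present:

npm ci                 # removes any existing node_modules and installs exactly what the lock file specifies
npm ci --production    # same, but skips devDependencies

Unlike npm install, npm ci fails outright if package.json and package-lock.json disagree, instead of silently updating the lock file.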
I am an ASP.NET Core developer, but I recently started working with Node.js apps. For me, one of the challenges you mentioned was moving the node_modules folder to production. Instead of moving the whole folder to production or only running npm install on the production server, I figured out a way of bundling my Node.js app using Webpack into a single bundle (or several), and I got rid of the mess of managing the node_modules folder. Webpack only picks up the node_modules packages that are actually used/referenced in my app and bundles them into a single file along with my app code, and I deploy that single file to production without moving the entire node_modules folder.
I found this approach useful in my case, but please tell me if it is not the correct way with regard to app performance, or if there are any cons to this approach.
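To illustrate, a minimal sketch of the kind of Webpack config I mean (the entry point, output names, and paths are just examples):

// webpack.config.js -- bundle a Node.js app plus the packages it actually uses into one file
const path = require('path');

module.exports = {
  mode: 'production',
  target: 'node',                         // keep Node built-ins (fs, http, ...) out of the bundle
  entry: './src/server.js',               // example entry point
  output: {
    path: path.resolve(__dirname, 'build'),
    filename: 'server.bundle.js'          // the single file that gets deployed
  }
};

One known limitation: packages that rely on native addons (.node binaries) generally can't be bundled this way and still need to be installed on the server.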
Definitely npm install. But you shouldn't do this by hand when it comes to deploying your app.
Use a tool made for this, like PM2.
As for your concern about changes in packages, the short answer is package-lock.json.
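For instance, PM2's deploy workflow is driven by an ecosystem file; a minimal sketch (the user, host, repo, and paths are placeholders):

// ecosystem.config.js -- app definition plus a deploy recipe for PM2
module.exports = {
  apps: [{
    name: 'my-app',                      // placeholder app name
    script: 'server.js',                 // placeholder entry point
    env_production: { NODE_ENV: 'production' }
  }],
  deploy: {
    production: {
      user: 'deploy',                                   // placeholder SSH user
      host: 'example.com',                              // placeholder host
      ref: 'origin/master',
      repo: 'git@example.com:me/my-app.git',            // placeholder repository
      path: '/var/www/my-app',
      'post-deploy': 'npm install && pm2 reload ecosystem.config.js --env production'
    }
  }
};

Running pm2 deploy production then pulls the code on the server and lets the post-deploy step run npm install for you, so the dependency install is automated rather than done by hand.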
My guess is that by asking this question you don't really understand the point of the package.json file.
The package.json file is explicitly intended for this purpose (that, and publishing to the npm registry): transferring a Node package without having to transfer the sizeable tree of dependencies along with it.
I would go as far as to say that one should never manually move the node_modules directory at all.
Definitely use the npm install command on your production server; this is the proper way of doing it. To avoid any changes to the node_modules directory compared to your local environment, use the package-lock.json file. That should help minimise changes to the source code in node_modules.
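As a trivial illustration, this is all you need to ship instead of node_modules (the package names and versions here are arbitrary):

{
  "name": "my-app",
  "version": "1.0.0",
  "dependencies": {
    "express": "^4.17.1",
    "lodash": "^4.17.21"
  }
}

Running npm install on the server rebuilds node_modules from these declarations; pair it with package-lock.json (or use npm ci) if you want the installed versions to match your local environment exactly.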
I mean no bad intent by saying this.
I want to run https://gitlab.com/jmis/exilecraft on my own Linux server; sadly, there is no documentation about it and I can't reach the author.
I'm completely new to Node.js, TypeScript, webpack, etc.
But this is what I figured out:
I need npm, webpack, and webpack-dev-server.
I had to change package.json and remove all the "^" characters so I can run
npm install
without any dependency or compatibility problems; all modules are now installed at their minimum required versions.
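For example, a dependency entry that looked like this (the version number is just illustrative):

"webpack": "^4.46.0"

became

"webpack": "4.46.0"

so npm install resolves exactly that version instead of the newest compatible one.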
There is a "start" script, and after running it the site responds: there is some loading, the site turns gray, and then nothing.
It looks almost the same as www.exilecraft.org (the original site) before it shows any content.
I see a tsconfig.json there, so I understand this is a TypeScript project.
I need some directions on how to make it work.
What are some pros/cons of pushing built code vs. having the server build it?
This is a general question, but here's my specific scenario to illustrate what I'm talking about:
On Heroku I've got a React app that has a simple Express server to do OAuth. Currently, I have a postinstall hook in my package.json that runs a webpack production config to do some extract-text stuff and create a dist/ directory with my bundled, uglified code. To get all of this to run on Heroku, I had to mark pretty much all of my dependencies as 'dependencies' instead of 'devDependencies'.
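For context, the relevant part of my package.json looks roughly like this (the script names and config path are approximate):

"scripts": {
  "start": "node server.js",
  "postinstall": "webpack --config webpack.prod.config.js"
},

Heroku runs the install step (and therefore the postinstall hook) on every deploy, which, at least in my setup, is why webpack and its plugins had to live under dependencies rather than devDependencies.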
I know it's bad practice to check my dist/ into Git, but it would save me from having to install a dozen-plus node_modules on the server. Any thoughts?
Having gone through and used demeteorizer, I wonder: what are the main differences between setting up Meteor vs. demeteorizer and running it via Node on my own server?
Meteor only:
hot-swappable code?
problems keeping packages the same between production and dev
same Meteor versions running on prod and dev
hardcoded environment settings (i.e. Mongo)

Demeteorizer:
platform independent, as this auto-bundles dependencies and uses pure Node.js
organise and maintain MongoDB how you like (backup scripts, etc.)
I have been using demeteorizer (packaging -> upload -> running with forever), but I wonder if there are any performance problems or other issues in the long run.
I have seen packages such as "authentication" running okay locally but very slowly on the test server (it hangs on submit, indicating sync problems?).
Thanks in advance.
ref: https://twitter.com/SachaGreif/status/424908644590030848
Demeteorizer builds on top of meteor bundle with one small difference: Demeteorizer builds a package.json for you and deletes the node_modules directories.
Without demeteorizer you would have a bit of trouble deploying your app, particularly if it is on a different platform from the one you built your app on.
If you follow Meteor's own docs, you have to remove fibers and manage your npm modules yourself, manually. With a package.json, you can run npm install and have them all installed for you, including the ones that come from packages.
Why is this useful? For services like Modulus it means you can upload an app and have it install all your dependencies for you, without you having to think about it, as if it were an ordinary Node.js app.
Everything that applies to meteor bundle will also apply to demeteorizer, as it is still the same Meteor-bundled app, just with the package.json. So you can use forever, hard-coded/environment-based settings, etc. the same way.
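For anyone else trying this, a rough sketch of what the workflow looks like (directory names, URLs, and the database name are placeholders, and the entry point file name can vary between Meteor versions):

demeteorizer -o ../my-app-converted     # convert the app; -o sets the output directory
# copy ../my-app-converted to the server, then on the server:
cd my-app-converted
npm install
export MONGO_URL=mongodb://localhost:27017/my-app
export ROOT_URL=http://example.com
export PORT=3000
node main.js                            # or: forever start main.js

The MONGO_URL/ROOT_URL/PORT environment variables are the same ones a plain meteor bundle expects, which is what the paragraph above means by using environment-based settings the same way.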