I'm building an IRC bot (backed by MongoDB) and I want to give it a web interface on my own server (not meteor.com). I want to use Meteor because of the live updating, and because I want to learn how to use it.
For Meteor to recognize that I have other subfolders that might have web interfaces (some modules will have web interfaces, and some will not), I need to run meteor from the application root. When I do that, Meteor scans my node_modules folder, decides that it can't load the same file twice (some duplicate dependency), and crashes.
I either need to make Meteor ignore node_modules when it runs, or move node_modules to a subdirectory (like Meteor's lib/). I'm pretty sure I can't make npm install do the latter, though, so what can I do?
I keep seeing people wishing for a .meteorignore, and I'm feeling that too.
For now I think the best solution is to introduce another folder between the root and the modules and set up Meteor there, but I'll leave this open in case others have better ideas.
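To illustrate, the layout I'm considering looks something like this (the folder names are my own placeholders):

/project-root
    /node_modules      <- bot dependencies, outside the Meteor app's scan path
    bot.js             <- IRC bot entry point
    /web               <- run `meteor` from in here, one level down
        /client
        /server
        /.meteor

Since Meteor only scans the directory it runs from, node_modules at the project root should no longer trip it up.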
One day I got curious about node_modules in frameworks and UI libraries such as React. After searching around, I found that there should be no changes in node_modules unless the user really needs them, so here are my questions.
Why shouldn't there be changes in node_modules?
Even when I changed the code, there was no change in the result. Why does this happen? Even after deleting a file or folder inside node_modules, nothing changed. (I thought it would show an error, but it worked OK...)
When we start the framework (like npm start in React), does npm download the external files (for example from GitHub) every time and place them in the DOM? If that's right, are the files in node_modules just read-only?
Could someone give me an answer?
node_modules contains the libraries / packages / modules (whatever you want to call them) written by the open source community. They can depend on each other, so if you change one of those files without reviewing the impact on its dependents, the execution of your code may crash.
However, not every single file or every single line of code is required for each execution. Most of the time, a package can do far more than what your code truly needs. If your code doesn't depend on the files that you changed, your project can still run happily.
npm start doesn't download files automatically; npm install does. So the files in node_modules are not read-only. However, in many cases node_modules is excluded from git commits. In a server environment, packages are freshly pulled from the registry instead of from your local machine, so your changes to packages would not be deployed unless you explicitly deploy them.
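For example, a typical project's .gitignore contains a line like the following, which is why local edits to installed packages never leave your machine:

# .gitignore
node_modules/

On the server, npm install then rebuilds node_modules from scratch using only what package.json declares, without any of your local edits.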
Technically you can modify the files in node_modules and just never run npm update - not a good professional practice, but acceptable for a personal project if you are the sole programmer and can fully control when packages are updated.
Well, if you change your node module, an npm update will eventually overwrite your code and you will lose your functionality, possibly without even knowing where the problem is.
I have a repo which consists of several "micro-services" which I upload to AWS Lambda. In addition I have a few shared libraries that I'd like to package up when deploying to AWS.
Therefore my directory structure looks like:
/micro-service-1
    /dist
    package.json
    index.js
/micro-service-2
    /dist
    package.json
    index.js
/shared-component-1
    /dist
    package.json
    component-name-1.js
/shared-component-2
    /dist
    package.json
    component-name-2.js
The basic deployment leverages the handy node-lambda npm module but when I reference a local shared component with a statement like:
var sharedService = require('../../shared-component-1/dist/index');
This works just fine with the node-lambda run command, but node-lambda deploy drops this local dependency. That probably makes sense, because the require reaches outside the micro-service's "root" directory. So I thought maybe I'd leverage gulp to make this work, but I'm pretty darn new to it and may be doing something dumb. My strategy was to:
Have gulp deploy depend on a local-deps task
the local-deps task would:
npm build --production to a directory
then pipe this directory over to the micro-service under the /local directory
clean up the install in the shared component
I would then refer to all shared components like so:
var sharedService = require('local/component-name-1');
Hopefully this makes clear what I'm trying to achieve. Does this strategy make sense? Is there a simpler way I should be considering? Does anyone have any examples of anything like this in "gulp speak"?
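For concreteness, here is a rough sketch of what I had in mind; the task names, paths, and the /local directory are my own placeholders, not working code:

// gulpfile.js (sketch; gulp 3 syntax)
var gulp = require('gulp');
var del = require('del');

// copy a shared component's built output into this micro-service's /local folder
gulp.task('local-deps', function () {
    return gulp.src('../shared-component-1/dist/**/*')
        .pipe(gulp.dest('micro-service-1/local/component-name-1'));
});

// remove the copied files once the deploy is done
gulp.task('clean-local', function () {
    return del(['micro-service-1/local']);
});

// 'deploy' runs local-deps first, then would invoke node-lambda deploy
gulp.task('deploy', ['local-deps'], function () {
    // ... shell out to `node-lambda deploy` here ...
});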
I have an answer to this! :D
TL;DR - Use npm link to create a symbolic link between your common component and the dependent component.
So, I have a project with only two modules:
- main-module
- referenced-module
Each of these is a node module. If I cd into referenced-module and run npm link, then cd into main-module and run npm link referenced-module, npm will 'install' my referenced-module into main-module by placing a symlink in its node_modules folder. NOTE: when running the second npm link, the name to use is the one in the package's package.json, not the name of the directory (see the npm link documentation, previously linked).
Now, in my main-module all I need to do is var test = require('referenced-module') and I can use that to my heart's content. Be sure to module.exports your code from your referenced-module!
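A minimal sketch of the two files (the greet function is just an illustration):

// referenced-module/index.js
module.exports = {
    greet: function (name) {
        return 'Hello, ' + name + '!';
    }
};

// main-module/index.js
var test = require('referenced-module');
console.log(test.greet('Lambda'));   // -> Hello, Lambda!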
Now, when you zip up main-module to deploy it to AWS Lambda, the links are resolved and the real modules are put in their place! I've tested this and it works, although not with node-lambda yet; I don't see why that should be a problem (unless it does something different with the package restores).
What's nice about this approach as well is that any changes I make to my referenced-module are automatically picked up by my main-module during development, so I don't have to run any gulp tasks or anything to sync them.
I find this is quite a nice, clean solution and I was able to get it working within a few minutes. If anything I've described above doesn't make any sense (as I've only just discovered this solution myself!), please leave a comment and I'll try and clarify for you.
UPDATE FEB 2016
Depending on your requirements and how large your application is, there may be an interesting alternative that solves this problem even more elegantly than using symlinking. Take a look at Serverless. It's quite a neat way of structuring serverless applications and includes useful features like being able to assign API Gateway endpoints that trigger the Lambda function you are writing. It even allows you to script CloudFormation configurations, so if you have other resources to deploy then you could do so here. Need a 'beta' or 'prod' stage? This can do it for you too. I've been using it for just over a week and while there is a bit of setup to do and things aren't always as clear as you'd like, it is quite flexible and the support community is good!
While using Serverless we faced a similar issue: the need to share code between AWS Lambdas. Initially we duplicated the code across each microservice, but as always that became difficult to manage.
Since development was done in a Windows environment, using symbolic links was not an option for us.
Then we came up with a solution: use a shared folder to keep the local dependencies, and use a custom-written gulp task to copy these dependencies across each of the microservice endpoints, so that each dependency can be required just like an npm package.
One of the decisions we made was not to keep two places for defining a microservice's dependencies, so we used the same package.json to define the local shared dependencies; the gulp task parses this file and copies the shared dependencies accordingly, also installing the npm dependencies with a single command.
Later we made the code open source as npm modules serverless-dependency-install and gulp-dependency-install.
To be completely specific:
I am writing a Node.js app that is intended to be a websocket bot for Slack.
A Node project exists that abstracts the majority of the Slack API. (It is NOT an npm module.)
I'm not overly familiar with grunt, etc., but I can get the dependencies to install and utilize all this code by placing my own mybot.js in the root folder of this git clone and running node mybot.js, with mybot.js based on the files in the example folder.
Committing to my own repository, I don't want to commit any of the aforementioned project code -- it's not mine! I do, however, want it as a dependency. Unfortunately, this code by Slack is not an npm module, which would make this easy. The project has a /bin folder and a /src folder full of CoffeeScript, etc., that grunt builds into .js files.
The Slack project code has its own dependencies. In my way of thinking, those are sub-dependencies for me, or cascading dependencies. My project only depends on whatever the Slack project depends on.
I would like to be able to update my project with updates (manually, or via build) from the git repo of the Slack project as needed.
It seems there must be a way for me to include this project as a dependency and, once built, properly reference its bin and src folder objects (bin/slack, src/message, client, channel, user, etc.) without committing it to my own repository. It would be especially great if it could live in a subfolder separate from my own model definitions. In a way, this seems no different to me than including jQuery in my website layout via a CDN: I'm only asking for the jQuery project, and depending on my link flavor, I can get a specific version or the latest version, etc.
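(For reference: npm can install a dependency directly from a git repository, as sketched below with an illustrative URL. But that only works if the repo is itself a valid npm package, i.e. has a package.json pointing at built files, which is exactly what seemed to be missing here.)

// package.json (the URL is illustrative, not necessarily the real Slack repo)
"dependencies": {
    "slack-client": "git+https://github.com/slackhq/node-slack-client.git"
}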
So, it turns out the comment by Ben pointing me to the npmjs.com slack-client npm module was the help I really needed. I just didn't really know how to ask the right question, I think.
And while I hate to look a gift horse in the mouth, a little more than a link, Ben, would've saved me another three hours, probably. Perhaps: "It is an npm module, not just a project from GitHub." But thank you, even if it took me a while to decipher what you were saying.
I have a lot of files in my assets/js directory. At first I thought I was somehow losing the ability to see/serve files from Sails. But after I let Sails run for a little while, it seems Sails found my files in the assets/js directory and I was able to run my Intern tests. I'm assuming there is some type of behind-the-scenes cache that must run before I can successfully make a request. Is that the reason, and if so, how can I disable it for more instant access to my files?
Sails.js needs to do several things before lifting the server; you can try sails lift --verbose to see what's happening.
Also, if you don't mind, take a look at the .js files under tasks/config/; Sails.js uses them to link/copy/build assets before starting.
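For instance, the copy task in a generated Sails project looks roughly like this (paraphrased from a Sails 0.10-era app; details vary by version). It stages everything from assets/ into the hidden .tmp/public folder that Sails actually serves, which is why there's a delay before your files become reachable:

// tasks/config/copy.js (sketch)
module.exports = function (grunt) {
    grunt.config.set('copy', {
        dev: {
            files: [{
                expand: true,
                cwd: './assets',
                src: ['**/*'],
                dest: '.tmp/public'
            }]
        }
    });
    grunt.loadNpmTasks('grunt-contrib-copy');
};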
I'm working on including Ember in an already deployed Node/Express/EJS application. I don't want to disrupt any of the existing application behavior; instead, I want to build out any additional features of the app using Ember. The server-side code for these new features has already been built, and each endpoint returns the JSON format that Ember Data expects. I've been looking into Ember App Kit and ember-cli, but I'm not sure how to include these tools in my existing directory structure, and I'm not certain these are in fact the right tools for my use case. Does anyone have any experience with this particular use case?
For example, navigating to /foo returns the existing express route that renders an ejs template, but /bar would be an Ember route that hits the api endpoint of the same name.
Use ember-cli (ember-cli.org). It's perfect for this situation, as it allows you to rapidly prototype your Ember app. It even comes with an expressJS-based testing suite and mock server.
Once you are ready to incorporate it into your NodeJS, Flask, or whatever other application, all static files are available in the ember-cli dist directory.
Just don't forget to build the ember-cli project before porting, by means of ember build. After that it's just a simple matter of moving the files in the Ember project's dist folder to wherever you need them in your environment.
Just to embellish a bit: ember-cli has a great workflow for use while building your Ember app. Try ember serve for a quick example. I mention this because it speaks to your question of how to incorporate this into your existing project (by project, I assume you may mean workflow). I typically build Ember projects purely using ember-cli and consider the back-end (usually a REST API exposed via either Flask or NodeJS) a separate concern. When importing the app, all I have to concern myself with is making sure my server serves the correct static dist files.
I would not recommend using the Ember App Kit (EAK) as it has been deprecated in favor of ember-cli. It really is... much, much better.
OK, so I'm going to try to be more complete in this answer. Let's start with the isolated question: ember-cli or EAK? Definitely ember-cli, but why?
EAK is officially heading for deprecation in favor of ember-cli.
Ember-cli produces more structured, cleaner, maintainable ember code.
Ember-cli integrates your entire ember-app workflow.
Managing all types of dependencies and assets is made simple via bower install --save and Brocfile.js edits (see the ember-cli docs for an explanation; a quick sketch follows this list).
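For example, pulling a bower-installed library into the build was roughly this at the time (the moment.js import is just an illustration; exact paths depend on your app):

// Brocfile.js (ember-cli of this era; sketch)
var EmberApp = require('ember-cli/lib/broccoli/ember-app');
var app = new EmberApp();

// make a bower dependency part of the built vendor bundle
app.import('bower_components/moment/moment.js');

module.exports = app.toTree();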
Now the more complicated part of the question: how do I integrate this with an existing workflow? I recently ran into this when building a WebRTC-enabled Ember app. It just so happened that this was my first real use of Ember as well. So, not yet realizing the full potential of my new hammer, I wrote the REST API, the backend ORM layer, the signalling service, the session cache, and a complete CI workflow first. Then I was ready to build my Ember app and ended up in your exact position.
To cut a long story short, the lesson I learned was that I should treat my ember-cli app as a completely separate concern. What I mean here is: there's my backend (NodeJS, Apache, Nginx... whatever), and what I code there is built, tested and integrated separately. It normally even lives in its own git repository. It's a separate concern from my front-end equation, which typically consists of several components itself. My iPhone native app would have its own workflow from build to test, and integrate with my backend via a REST API. My Android native app another. My web app another. For all intents and purposes, in my workflow these are entirely separate workflows that only tie together when we start talking continuous integration.
There are a lot of arguments for why you'd want to do this. Most importantly: it scales.
The beauty of ember-cli is that it makes it fairly trivial to get a workflow for your Ember app going, and trivial to redeploy your app + workflow on a new box/instance. I would certainly recommend referring to the official ember-cli setup instructions, but I'm going to include them here in case the URL goes bad one day:
No really, refer to the link; my instructions will suck in comparison...
Deploying a new Ember App
Install NodeJS, npm and Git on your system via sudo apt-get install nodejs, sudo apt-get install npm and sudo apt-get install git. (By default, ember-cli will set up git for you on new apps.)
Note: on Ubuntu 14.04 and some other Debian systems, use sudo apt-get install nodejs-legacy instead. If in doubt, use legacy. If you experience problems using the node command after install, it's definitely because you need nodejs-legacy. Don't bother trying to do the linking manually.
Install required node modules globally: sudo npm install -g ember-cli, sudo npm install -g bower, sudo npm install -g phantomjs
Create new ember-cli app: cd <Desired Directory>, ember new my-app-name
Now you can look at ember help to begin learning how to use ember-cli. Hint: the --dry-run flag is your friend. You'll notice that when you installed ember-cli, all the scaffolding was taken care of for you. You'll see that you can add things with simple ember generate commands, and they will create not only the required objects but the test files as well. Best of all, using ember serve you can start scaffolding your app, and via simple flags you can configure the test server to proxy to your already-existing REST API (if you have one) or to the expressJS mock server to build a pseudo-API.
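For instance (assuming your existing API listens on port 3000; the port and route name here are placeholders):

ember generate route bar --dry-run      (preview which files would be created)
ember serve --proxy http://localhost:3000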
Integrating it with your larger workflow from here is a simple matter of configuring whatever tools you use (I use Jenkins and Ansible for this kind of stuff) to distribute ember-cli's dist folder to wherever it should go to be served as static content (it is just a single-page webapp in the end).
If you want to instead play with an existing ember-cli app that operates in an isolated workflow and already makes use of most of the goodies in order to get some familiarity - as I suspect you'll quickly realize how to fit this into whatever your current structure is - feel free to clone and play with this one here.
And so finally, to answer the more specific question of how this might fit into an existing directory structure, I would break this down into two categories. When we're talking src, I would keep it in its own structure, separated at least by being in a sub-directory of its own. When we're talking built-and-deliverable, I would include the contents of the /dist folder in whatever static web server directory you want to serve your Ember app from.
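As a concrete sketch of that last point, here is roughly how an Express app might serve the built Ember app alongside the existing EJS routes (all paths, route names and the port are assumptions, not from the question):

// server.js (sketch)
var express = require('express');
var path = require('path');
var app = express();

// existing EJS route keeps rendering as before
app.get('/foo', function (req, res) {
    res.render('foo');
});

// serve the built Ember assets copied from the ember-cli dist folder
app.use('/assets', express.static(path.join(__dirname, 'public/assets')));

// let Ember handle /bar client-side by always returning its index.html
app.get('/bar*', function (req, res) {
    res.sendFile(path.join(__dirname, 'public/index.html'));
});

app.listen(3000);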
EDIT: I added some more detail, hopefully useful, below the line break. Let me know if you have more questions or if I can explain anything better.
I am facing a similar situation. I am planning to use EAK as a "prototyping tool" in a separate project folder, then build the distribution directory from EAK using grunt dist and insert that into the assets folder of my main Node.js project.