I have a Windows laptop and a Linux desktop.
Whenever I switch from my laptop to my desktop (my Meteor project lives in Dropbox, so it syncs between the two), I can't get the project to run; I get the following error:
Error: Can't find npm module 'double-ended-queue'. Did you forget to
call 'Npm.depends' in package.js within the 'meteor' package?
The odd thing is, I've tried removing .meteor/local, and since my project is a git repo I can easily check that git diff shows nothing, so I'm not sure what changed.
Any ideas about how I could solve this issue?
This happens because Meteor builds your project for the OS it is running on, so it is recommended that you exclude the .meteor/local directory from syncing. That is also why Meteor adds it to your .gitignore automatically, hence the empty git diff.
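For reference, the ignore file Meteor generates lives inside the .meteor directory itself; in a standard project it contains just this one entry (shown here for illustration):

    local

That single line is what keeps the machine-specific build output out of git.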
Dropbox isn't really a suitable place to store code. Meteor uses a .gitignore file to avoid the problem you're having, which implies that they expect you to be using git repositories. Depending on your needs you could try GitHub (https://github.com) or BitBucket (https://bitbucket.org). BitBucket has the benefit of allowing private repositories free of charge.
If you really want to use Dropbox, you should be able to exclude the .meteor/local folder with selective sync. More info at https://www.dropbox.com/en/help/175
As for why the synced code stops working when you switch OS: Meteor compiles packages into the .meteor/local directory, and some (but not all) of that output is OS-specific. Basically, any binary package will fail after a switch because it was compiled for your specific OS and processor architecture.
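If a stale build does get synced across, the usual recovery (a hedged sketch; it assumes a standard Meteor project layout and that Meteor itself is installed on both machines) is to throw the compiled output away and rebuild for the OS you are on:

    rm -rf .meteor/local    # discard the machine-specific build output
    meteor                  # rebuild from source for the current OS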
Related
I am trying to get some git information (for instance the commit hash) from an Electron app. I read a file from a certain path, and I want to check whether that file is inside a repo; if it is, I will find the superproject and get the commit hash.
This is doable if git is installed on the machine, but if not, it will fail.
I can work with file paths, get to the .git folder, and read information from it directly, but I would still prefer another way if one exists.
I tried to install the nodegit npm package, but it has problems with Electron and webpack.
There is also git-revision, but its methods are simple and I need more functionality (like getting the superproject).
Is there a way, or a library, to achieve this when the project has to run on a machine with no git installed on it?
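For what it's worth, the read-the-.git-folder approach mentioned above needs nothing but Node's built-in fs module, so it works in Electron without a git binary. A minimal sketch (it assumes the standard .git layout and a loose ref; a full solution would also have to handle packed-refs, submodules, and worktrees):

    const fs = require('fs');
    const path = require('path');

    // Return the commit hash that HEAD points at, without invoking git.
    function getCommitHash(repoPath) {
      const head = fs.readFileSync(path.join(repoPath, '.git', 'HEAD'), 'utf8').trim();
      if (!head.startsWith('ref: ')) return head;   // detached HEAD: the hash is stored inline
      const ref = head.slice(5).trim();             // e.g. "refs/heads/master"
      return fs.readFileSync(path.join(repoPath, '.git', ref), 'utf8').trim();
    }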
When I run npm install in a project, it often modifies package-lock.json, for example when I work on the same project from another computer (with a different Node or npm version).
But at the same time the documentation suggests that the file is supposed to be added to version control (git in my case):
https://docs.npmjs.com/files/package-lock.json
This file is intended to be committed into source repositories, and
serves various purposes: ...
So should I commit the changes npm makes, back and forth, every time I switch machines or somebody else runs npm install? That would be a nightmare.
Currently I just discard any changes npm makes to package-lock.json, and it's been working fine. So I might as well add it to .gitignore...
Am I doing it wrong? Should I use npm ci instead? I wouldn't call my computer a "CI"; it's just a development machine, so why should I use it there?
Basically I have the same question as this gentleman:
https://github.com/npm/npm/issues/18103#issuecomment-370401935
(Sadly I can't add a comment on that issue or create a new one at all; the npm repo has issues disabled.)
Yes, you want to commit your package-lock.json file to source control. The reasoning is that it guarantees the exact same version of every package is downloaded and installed for each user who pulls down the code. There are other reasons to include the file as well, such as tracking changes to your package tree for auditing.
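As for npm ci: despite the name, it is not restricted to CI servers. It is simply the "install exactly what the lockfile says" command, so it fits a second development machine just as well. A sketch of the workflow the lockfile is designed for:

    git pull    # brings your teammate's package-lock.json changes with it
    npm ci      # wipes node_modules, installs exactly what the lockfile
                # specifies, and never rewrites package-lock.json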
I'm trying to write an npm package that will be published and used as a framework in other projects. The problem is that I can't figure out a solid workflow for working on it and on the projects that depend on it at the same time.
I know this seems super basic and that npm link solves the issue, but this is a bigger one than just being able to import one local package from another.
I have my framework package scaffolded out; let's call it gumby. It exports a function that does console.log('hello from gumby'), and that's all that matters for right now.
Now I'm ready to create a project that will use gumby. Let's call this one client. I set that up too and npm link gumby so client can import from it, etc. OK cool, it's working as expected.
So now it's time to publish gumby. I run npm publish and it goes out to npm as version 0.0.1.
At this point, how do I get the published, npm-hosted version of gumby into the package.json for client? I mean, I could just delete the symlinked copy from my node_modules and then yarn add gumby, but what if I want to go back and work on it locally again? And then run it against the npm version again? And then work on it some more? And then...
You get the point, I imagine. There's no obvious way to switch between the npm copy of a package you're working on and the local one. There's the additional problem of doing that without messing with your package.json too much, e.g. what if I accidentally commit it to version control with some weird file:// dependency path? Any suggestions would be much appreciated.
For local development, having the package symlinked is definitely the way to go; the idea of constantly publishing and re-installing the package sounds like a total pain.
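The round trip between the two states is only a few commands (a sketch using the gumby and client names from the question):

    # in the gumby folder: register the local copy once
    npm link

    # in the client folder: point node_modules/gumby at the local copy
    npm link gumby

    # in the client folder: switch back to the published npm version
    npm unlink --no-save gumby
    npm install

Because npm link leaves package.json untouched by default, nothing dependency-related needs to be committed when you switch.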
The real issue sounds more like you're concerned about committing a dev configuration to production. You could address that with something as simple as a pre-commit hook in your VCS, e.g. block the commit if it detects any local file references in package.json (see the sketch below).
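A minimal sketch of such a hook, written as an executable Node script saved to .git/hooks/pre-commit (the file: check and the message are just illustrative; adapt them to your setup):

    #!/usr/bin/env node
    // Abort the commit if any dependency in package.json still resolves
    // to a local path instead of a published version.
    const fs = require('fs');
    const pkg = JSON.parse(fs.readFileSync('package.json', 'utf8'));
    const deps = Object.assign({}, pkg.dependencies, pkg.devDependencies);
    for (const name of Object.keys(deps)) {
      if (String(deps[name]).startsWith('file:')) {
        console.error('pre-commit: ' + name + ' still points at ' + deps[name]);
        process.exit(1); // a non-zero exit makes git abort the commit
      }
    }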
I have a very special requirement from my client. We had been using npm to install karma and phantomjs for quite a while, and everything worked fine until we had to move everything off the cloud onto internal infrastructure. Now things get complicated: the internal machines have no internet access, so we can no longer use npm to resolve dependencies. We tried moving the node_modules folder from a dev machine to the internal machine. That didn't work, because our dev machines run OS X and Windows while the server runs CentOS, and phantomjs is OS-specific (something npm normally works out during install). What options do we have for resolving dependencies? I just learned that the node_modules folder name cannot be changed; I was thinking of checking in OS-specific copies of node_modules, but that wouldn't work since npm only looks for a folder named node_modules.
I got the same error as in this thread, PhantomJS Crash - Exit Code 126, when I tried to use a node_modules folder from OS X on CentOS.
Install all dependencies on the first OS (e.g. OS X), assuming you have a package.json listing them all:
npm install
Rename the created node_modules folder to node_modules_mac.
Repeat the steps above on the other OS (e.g. Windows), renaming node_modules to something like node_modules_windows.
On the target OS, move the folders created above into your app folder and create a symbolic link named node_modules that points to the appropriate folder (node_modules -> node_modules_mac on OS X).
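As a command-line sketch of those steps (folder names as used above; run the first two on each OS against the same package.json):

    npm install                          # build the dependencies for this OS
    mv node_modules node_modules_mac     # or node_modules_windows, etc.
    ln -s node_modules_mac node_modules  # on each machine, point node_modules at its own copy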
Why don't you just host your own private registry? You can keep the registry inside the internal infrastructure.
The de facto registry is isaacs' own npmjs.org. It can be found here:
https://github.com/isaacs/npmjs.org
It does require CouchDB as the database, however, and that can be daunting. There are lighter-weight alternatives that let you do the same thing, for example reggie:
https://github.com/mbrevoort/node-reggie
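Whichever registry you end up hosting, pointing the internal machines at it is a single configuration change (the URL below is a hypothetical internal host, not a real one):

    npm config set registry http://npm.internal.example.com/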
I've been using Node.js and npm for a few weeks with great success, and I've started to question the best practice for installing local modules. I understand the global vs. local argument, but my question is more about where to place a local install. Let's say a project is located at ~/ProjectA/, is version controlled, and is worked on by multiple developers. When initially playing with Node.js and npm I wasn't aware of the default local installation paths and simply installed the necessary modules from a default terminal, which resulted in an installation path of ~/node_modules. This ended up requiring all the other developers working on the project to install the modules on their own machines in order to run the application. Having seen where some of them ran npm install, I'm still surprised it worked on their machines at all (I guess it relates to how Node.js and require() look for modules), but needless to say, it worked.
Now that the project is getting past the "toying around" stage, I would like to set up the project folder correctly. So, my question is: should the modules be installed at ~/ProjectA/node_modules and therefore be part of the version-controlled project files, or should they continue to live in a developer-machine-specific location... or does it not really matter at all?
I'm just looking for a little best-practice guidance here, and for what others do when setting up their projects.
I think that the "best practice" here is to keep the dependencies within the project folder.
Almost all Node projects I've seen so far (I've been a Node developer for about 8 months now) do that.
You don't need to version control the dependencies themselves, though. Here's how I manage my Node projects:
Keep the versions locked in the package.json file so everyone gets the same working version, or run the npm shrinkwrap command in your project root (see the example below).
Add the node_modules folder to your VCS ignore file (I use git, so mine is .gitignore)
Be happy, you're done!
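As an illustration of step 1 (package name and version are placeholders, not a recommendation), locking a version just means using an exact number rather than a ^ or ~ range in package.json, and npm shrinkwrap goes one step further by pinning the whole transitive tree:

    "dependencies": {
      "express": "4.17.1"
    }

    npm shrinkwrap    # writes npm-shrinkwrap.json with every nested version pinned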