I've been looking around for the best/most appropriate way to install Node.js/npm so that commands like npm install -g bower do not require sudo, since using sudo for such commands can cause issues later on. Initially I followed this answer: Installing with nvm. But that installs Node into the user's home directory, and I read that having Node there may not be a good idea in production, so I followed an expansion on the above tutorial: Installing with NVM (digital ocean). However, this still left me requiring sudo.
On a side note: on my MacBook I installed Node with Homebrew. Is this a good idea, or is there a more standard approach?
Thanks for all your help, feel free to ask for clarifications.
I forgot to say: the machine I am planning to install this on runs Xubuntu 14.04. (I also have my MacBook running Mavericks, but that is just an addition.)
Sudo gives you permission to change/add/remove files not owned by your user. As a rule, those files are everything except /home/YOU (on macOS: /Users/YOU).
Your desire is to have Node installed appropriately (system-wide rather than in your home directory), and that is good. As you guessed, you need sudo to install it on a system path in the first place.
But you then wish to have modules installed without sudo, meaning you want modules to be located in a directory your user has write access to. That would be the case by default if Node were installed in your home directory.
To enforce your wish on a system path, you will need write permission on the folders where global modules live, that is, change the write permissions or ownership of:
/usr/local/share/npm/lib/node_modules, so that modules can be saved to your disk.
/usr/local/share/npm/bin, so that module executables are reachable.
You might have to alter a few other folders as well; a sketch of the ownership change follows.
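For example (a sketch, assuming your global prefix really is /usr/local/share/npm; check yours with npm config get prefix):

# take ownership of the global module and bin folders
sudo chown -R $(whoami) /usr/local/share/npm/lib/node_modules /usr/local/share/npm/bin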
That answers your question, but I strongly recommend against doing so. Instead, I suggest you stick to the default methodology. Everyone here will no doubt tell you that using sudo when installing modules globally is a perfectly safe approach; it is even safer not to have write permission to your install's global infrastructure without superuser privileges.
Related
I haven't been able to find any questions/answers regarding how to install two versions of Node.js (such as v10 and v14) on the same computer without using NVM. I can't use windows-nvm because it requires admin privileges, and I'm working on a company laptop as a standard user.
I need to be able to install multiple different versions of Node.js because different projects under the same company use different versions of Node.js as a necessity.
Is the only way to uninstall the installed version and install a new version every time? Is there any way I can have v10 under C:\Program Files\node10, and v14 under C:\Program Files\node14?
To be clear, the admins are willing to grant me specific privileges or install any software needed in order to get this working. We have tried using something called RunAsTool to try to let me run NVM as an admin, but this doesn't work because of its limitations.
Another option would be to grant me admin rights to any files and directories needed for NVM to function, but there is no list of those files/folders that I can find.
A third option would be to simply install two different versions, but when you install a new version, the previous version gets removed, even if it's installed under an unusual path like C:\Program Files\node16.
There's no easy way to do this, I think. Broadly, you need two things to get Node working on Windows: the nodejs folder with the executable in it, by default c:\Program Files\nodejs, and that folder's path on the system Path ahead of any other Node paths.
Unfortunately both writing to c:\Program Files and changing the system path require admin rights.
However, there is a somewhat clunky workaround. The overall idea is to put the nodejs folder somewhere where you have write access, point the system path at it, and it should run. Then you can switch versions without admin rights by replacing the folder. To do this:
With admin, install the first version you want to use. Copy the c:\Program Files\nodejs folder somewhere where it won't get deleted on a new install: say c:\nodejsbackups\v10\nodejs if it's version 10.
Install the second version you want to use, and copy the nodejs folder to the same place, say c:\nodejsbackups\v14\nodejs.
Also copy it to a place you will run it from and where you have write access, say c:\nodejs if you have write access on the c: drive, or your user profile somewhere if not.
Still with admin rights, edit the system Path environment variable (NOT the user path). Find the entry for c:\Program Files\nodejs and remove it, then add an entry for c:\nodejs. Or simply edit the existing entry to point there.
I found that to get Visual Studio node apps to work I then had to also uninstall the original node using Control Panel/Programs and Features.
Now fire up a command prompt and do node --version and npm --version and you should see the second version is working.
To switch versions without admin rights, delete c:\nodejs and then copy the first version there from c:\nodejsbackups\v10\nodejs. Restart your command prompt, issue the same commands, and you should see the first version is now working.
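For example, switching back to v10 might look like this in a non-admin command prompt (a sketch; adjust the paths to wherever you kept your backups):

rem remove the currently active version (no admin needed, since you own c:\nodejs)
rmdir /s /q c:\nodejs
rem copy the saved v10 folder into place (/e includes subdirectories, /i treats the destination as a directory)
xcopy /e /i c:\nodejsbackups\v10\nodejs c:\nodejs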
This seems to work in some very limited testing, but you need to test that it all works for your use cases. There may be programs like Visual Studio that assume Node is under c:\Program Files without using the path. In the end it may be better to beg for admin rights.
Install NVM for Windows into a location your user can write to.
Delete the system-level %NVM_HOME% and %NVM_SYMLINK% environment variables.
Add them back as user-level variables pointing at your install path.
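For example (a sketch, assuming nvm-windows was unpacked to %USERPROFILE%\nvm; setx without the /M flag writes user-level variables, which needs no admin rights):

rem point NVM at user-writable install and symlink locations
setx NVM_HOME "%USERPROFILE%\nvm"
setx NVM_SYMLINK "%USERPROFILE%\nodejs"

Then add both directories to your user Path through the environment variables dialog.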
Enjoy :)
I'm configuring software on my first web server, so I am not totally familiar with how everything works, but here is the basic problem:
I have purchased hosting on a web server that runs CentOS. I have been able to install PostgreSQL via an install process that the host provides, so that my database is local to my home folder. That is working fine.
However, I am trying to install a PostgreSQL extension called PostGIS. I have tried to compile it from source on my web server, but it now requires an additional library called GEOS. I downloaded the library from http://download.osgeo.org/geos/geos-3.6.2.tar.bz2, extracted it, and ran make install.
Now the problem is that it fails due to this error:
/bin/mkdir: cannot create directory /usr/local/include/geos: Permission Denied
It's not really a surprise, because it is trying to make a new directory in the system root folders rather than within my personal home folder, which is the only one I have access to. I can't think of any way around this. Am I just unable to install this library? Or can I "trick" it somehow into installing it in my home directory, where I have full admin rights?
I think you need to execute the command with root privileges, because make install writes to system directories. For example:
sudo make install
or as the root user:
sudo su
make install
I'm writing a program that requires LLVM, and thinking of using autotools to ship it on Linux, so from the user's viewpoint the process would look like the well-known ./configure && make && sudo make install.
With autotools, one normally relies on the system package manager to install dependencies. The problem is that, for whatever reason, this doesn't work with LLVM; on Ubuntu 14.04, apt-get thinks the latest version is 3.4, whereas a more recent version would actually be needed. Thus, I need to supply a script to download and build LLVM first (a local copy thereof, not interfering with any older version that might be on the system), a process which takes a few hours.
The most obvious place to put this process is at the start of configure. Is this considered normal and reasonable? Or is there a convention that configure should only contain the things autotools normally puts in it, and installing dependencies should be another script that the user runs first and separately? In the latter case, is there a convention regarding what that separate script should be called?
Don't install anything during configure. The script's name is "configure", not "install-dependencies".
Write a configure check, and if LLVM is missing, give the user an explanation of how to install it. If necessary, provide a separate script to download LLVM.
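For example, a minimal check in configure.ac might look like this (a sketch; the ./fetch-llvm.sh helper name is made up for illustration):

# look for llvm-config on the user's PATH
AC_PATH_PROG([LLVM_CONFIG], [llvm-config], [no])
AS_IF([test "x$LLVM_CONFIG" = "xno"],
  [AC_MSG_ERROR([llvm-config not found; install LLVM development packages or run ./fetch-llvm.sh first])])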
It is good practice to run configure (and make) as a normal unprivileged user, not as root, so you may not even have permission to install anything; you would have to check whether sudo is installed, and so on.
It may also happen that the user's system has no network connectivity (firewall, etc.), so your download would fail.
Once a program is installed on Linux, I sometimes find out that it would be easier to have it in a different location. In general, what is the significance of the location of an installed program's files on Linux?
Often the advice on the internet is to add the (wrong or inconvenient) paths to environment variables. I'd much rather move the files to locations where commands and programs find them automatically.
One recent example is Python's site-packages. My Python did not appear to check the PYTHONPATH variable; moving the libraries to the Python2.7/ directory worked well.
Now I am facing the same issue with OpenCV.
I also wonder why Linux installations do not prompt for the desired installation directory (as Windows does), and why things so often wind up in places where they don't work.
In general, programs are installed in /usr/bin (for binaries) and /usr/lib, or in a path specific to that Linux distro, so that any program that uses a specific library/program will search that path for it. If you install a program in a different path, let's say /home/user/program, it will be installed locally, and other programs won't be able to access it by default.
You can install any program wherever you want. However, it is good practice to use the repos and install into the standard paths.
I don't know how you install programs, but I use apt-get and dpkg on Ubuntu. You can also install some Python modules this way.
Generally you are supposed to use the package system provided by your distro (IMHO).
If you do not use packages then you are on your own.
About PYTHONPATH: did you add it to your .bashrc and make sure it was set in the terminal you are using?
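For example, in ~/.bashrc (a sketch; ~/mylibs is a made-up path, substitute wherever your modules actually live):

# prepend your own module directory to Python's import search path
export PYTHONPATH="$HOME/mylibs${PYTHONPATH:+:$PYTHONPATH}"

Then run source ~/.bashrc or open a new terminal before testing.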
Also please see:
http://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard
I have a problem when installing npm modules. NodeJS is installed on Ubuntu 11.10 running in VirtualBox on a Windows host. My project files are on an NTFS partition (I have to share them with Windows). When I try to install an npm module I get an error, and the module is not installed. I've found out that the problem occurs when npm tries to create symbolic links.
Probably you cannot create symlinks on an NTFS partition; when I install a module "inside" the Linux file system, everything works fine.
How can I fix this? I don't want to resolve dependencies manually :/
Since version 1.2.21, npm has a new option for the install command: --no-bin-links
You can use it for installing a specific node module
npm install express --no-bin-links
and also for a package.json install
npm install --no-bin-links
With this option I've been able to install many npm modules without problems in my shared folder inside the VM (Ubuntu guest, Windows host).
The commit where the option was added to the npm code is b4c58617039c21c10889a9869f8e86a23e17d3a0
Try this - http://ahtik.com/blog/2012/08/16/fixing-your-virtualbox-shared-folder-symlink-error/
Works for me!
Basically you set a parameter
VBoxManage setextradata YOURVMNAME VBoxInternal2/SharedFoldersEnableSymlinksCreate/YOURSHAREFOLDERNAME 1
And then run the VM as an administrator....
Neither the symlink permissions fix nor --no-bin-links worked for us. Instead we opted to move our node_modules away from the /vagrant share: we created a symlink from /vagrant/node_modules to /tmp/node_modules. You can only do this if your node_modules is not in version control. Check this first!
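Roughly, the commands were as follows (a sketch, assuming the project is mounted at /vagrant and node_modules is git-ignored):

mkdir -p /tmp/node_modules
rm -rf /vagrant/node_modules   # double-check it is not under version control first!
ln -s /tmp/node_modules /vagrant/node_modules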
Also see http://kmile.nl/post/73956428426/npm-vagrant-and-symlinks-on-windows
I am pretty certain symlinks can't be created on the shared drive ("shared folder"), and even less so with a Windows host machine and a Linux guest.
The host machine is not aware of the guest's filesystem; a guest machine is a black box to the host. You can't tell the host "well, this links to /etc/..." when the host doesn't know where that /etc is :).
So in short: unfortunately no.
In some more detail:
I would be really happy if I am wrong! It is a major pain in my development process.
I tried so many options. By default the filesystem that "shared folders" use is vboxsf, which is similar to, if not the same as, Samba (the default network sharing protocol for Windows), so:
I tried using native Windows network sharing and then mounting the network drive in the guest, since the guest and host are on the same network. The problem was still there.
I tried running an NFS server on Windows (Hanewin NFS Server) along with SFU/SUA (Windows Services for UNIX), but this has problems with Git locks. Probably other problems as well; it was a while ago and I don't clearly remember.
I tried the reverse: sharing a directory on the virtual machine to Windows. But that is pointless, as all the files then live on the virtual box and are really slow to access from Windows.
I was being stupid and thought "well, let's mount a virtual drive on both Windows and Linux". Don't try this, it corrupts the virtual disk. Something I should have known.
There might be a network sharing protocol other than Samba and NFS that would perhaps copy the files whenever "symlink" creation is attempted? I don't really know.
However, I haven't found one yet, and "locking" also seems to be a task of the filesystem itself, so I doubt any network protocol (unless it keeps a dedicated registry of locks of some sort) can do this.
For anyone still having this problem after trying npm install --no-bin-links.
I wasn't able to get any of the above solutions to work when I came across a similar issue running npm install on a Laravel Homestead Vagrant box on a Windows 7 host using VirtualBox. The guest box has a directory mapped to the Windows file system.
The problem was causing various error messages and failed package installations. The one that is most relevant to the question was npm ERR! UNKNOWN, symlink '<some filename>'.
To fix this, I was able to successfully run npm install from the Git Bash command line on Windows, rather than from bash on the guest Linux.
To do this, you will need to install Git for Windows and NodeJS (both on your Windows box).
e.g.
Install Chocolatey https://chocolatey.org/
choco install nodejs.install
choco install git.install
Run C:\Program Files (x86)\Git\Git Bash.vbs
In the Git Bash command line, change directory to the location of your package.json file e.g. cd /c/projects/projectname
Run npm install
Everything appears to install successfully.
If you don't use native modules (compiled from C/C++), you can just use npm in your Ubuntu VM and copy the node_modules folder to your Windows drive.
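For example, inside the Ubuntu VM (a sketch; /media/sf_project is a made-up mount point, use wherever your shared folder is actually mounted):

# copy the resolved dependency tree onto the Windows-visible share
cp -r node_modules /media/sf_project/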
fsutil behavior set SymlinkEvaluation L2L:1 R2R:1 L2R:1 R2L:1
This command enables symlinks on Windows. For a better explanation of the cryptic codes at the end, see: How do I overcome the "The symbolic link cannot be followed because its type is disabled." error when getting the target of a symbolic link on Server 2008?
In summary:
The behavior codes for fsutil behavior set SymlinkEvaluation - namely L2L, L2R, R2L, and R2R - mean the following:
L stands for "Local", and R for "Remote" (who would've thunk?)
The FIRST L or R - before the 2 - refers to the location of the link itself (as opposed to its target) relative to the machine ACCESSING the link.
The SECOND L or R - after the 2 - refers to the location of the link's target relative to the machine where the LINK itself is located.
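You can verify the resulting settings with the query subcommand:

fsutil behavior query SymlinkEvaluation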