R package management on Linux

I have two accounts on a Linux server, one with sudo rights and one without.
When I install packages from the account with sudo rights, everything works fine.
But when I log in with the account without sudo rights, R tells me the library is not found.
Is there a way to solve this, such as changing the permissions on the library, or installing it globally?
I have to use that account, since all the apps are running on it.

After checking my R package locations, I found that all the new packages had been installed under my personal library directory. Once I moved them all to /usr/share/R/library, the problem was solved.
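For anyone hitting the same issue, a minimal sketch of the diagnosis and the move (the personal library path below is the common default and varies by R version and architecture):
Rscript -e '.libPaths()'
sudo mv ~/R/x86_64-pc-linux-gnu-library/3.*/* /usr/share/R/library/
The first path printed by .libPaths() is where install.packages() writes by default; alternatively, installing packages from an R session started with sudo writes straight to the system library.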

Related

Installing software on server machine - the install process wants to modify root folders which I cannot access

I'm configuring software on my first web server, so I am not totally familiar with how everything works, but here is the basic problem:
I have purchased hosting on a web server that runs CentOS. I have been able to install PostgreSQL via an install process that the host provides, so that my database is local to my home folder. That is working fine.
However, I am trying to install a PostgreSQL extension called PostGIS. I have tried to compile it from source on my web server, but it requires an additional library called GEOS. I downloaded the library from http://download.osgeo.org/geos/geos-3.6.2.tar.bz2, extracted it, and ran make install.
Now the problem is that it fails due to this error:
/bin/mkdir: cannot create directory /usr/local/include/geos: Permission Denied
It's not really a surprise, because it is trying to create a new directory in the system root folders rather than within my personal home folder, which is the only one I have access to. I can't think of any way around this. Am I just unable to install this library? Or can I "trick" it somehow into installing it in my home directory, where I have full admin rights?
I think you need to execute the command with root privileges, because make install needs root privileges. Like:
sudo make install
or as the root user, like:
sudo su
make install
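If you have no root access at all (as in the question), autotools-based sources like GEOS can usually be installed under a prefix you own instead; a minimal sketch, assuming $HOME/local as the target:
./configure --prefix=$HOME/local
make
make install
PostGIS's build can then be pointed at that copy, e.g. via its --with-geosconfig option referencing $HOME/local/bin/geos-config.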

Best Approach to installing Node.js/npm without sudo

I've been looking around for the best/most appropriate way to install Node.js/npm such that commands like npm install -g bower do not require sudo, as using sudo for such a command can cause issues later on. Initially I followed this answer: Installing with nvm. But that installs Node into the user's home directory, which I read may not be a good idea in production, so I followed an expansion of that tutorial: Installing with NVM (Digital Ocean). However, this left me still requiring sudo.
On a side note: on my MacBook I installed Node with Homebrew. Is this a good idea, or is there a more standard approach?
Thanks for all your help, feel free to ask for clarifications.
I forgot to say: the machine I am planning to install this on is running Xubuntu 14.04. (I also have my MacBook running Mavericks, but that is just an aside.)
Sudo gives you permission to change/add/remove files not owned by your user. As a rule, those files are everything except /home/YOU (on macOS: /Users/YOU).
Your desire is to have Node installed properly (system-wide rather than in your home directory), and that is good. As you guessed, you need sudo to install it on a system path in the first place.
But then you wish to have modules installed without sudo, meaning you want modules located in a directory your user has write access to. That would be available by default if Node were installed in your home.
To get your wish on a system path, you will need to give your user write permission to the folders where modules are located, that is, change the write permissions or ownership of:
/usr/local/share/npm/lib/node_modules, so that modules can be saved on your disk.
/usr/local/share/npm/bin, so that module executables are reachable.
You might have to alter a few other folders as well; see the sketch below.
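A hedged sketch of that change, assuming npm's prefix really is /usr/local/share/npm (check yours with npm config get prefix):
npm config get prefix
sudo chown -R "$USER" /usr/local/share/npm/lib/node_modules /usr/local/share/npm/bin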
That answers your question, but I strongly recommend against doing so. Instead, I suggest you stick to the default methodology: everyone here will no doubt agree that using sudo when installing modules globally is an absolutely safe approach, and it is even safer not to have write permission to your install's global infrastructure without superuser privileges.

Installing software without sudo in CentOS for a single user only?

I am using CentOS 6, for which I do not have sudo access. I have a user account and full access to that account. Is there a way to install packages/software for a particular user on CentOS?
Just copy the executables into your home directory. You may also add that directory to your PATH variable. Many people have a ~/bin directory for this kind of stuff.
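For example, a minimal sketch (some-program is just a placeholder name):
mkdir -p ~/bin
cp ./some-program ~/bin/
echo 'export PATH="$HOME/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc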

Azure Regain sudo/su Access

While trying to install a GUI application today, I found myself having to disable waagent. It turns out the NetworkManager packages are not compatible with waagent. A few, obviously outdated, posts on the Microsoft forums had me run the following process to get the packages to install:
# yum remove WALinuxAgent
# yum install NetworkManager
... do desktop installs ...
# yum remove NetworkManager
# yum install WALinuxAgent
/usr/sbin/waagent --install
So I did that.
Now I can no longer access root or run any sudo commands from my default login.
Without higher-level privileges I cannot perform any of the "fixes" noted in multiple forum posts.
Is there any way to find a hint as to what the default root password is on my Azure CentOS 6.4 image? Or to restore sudo access to my default login without sudo commands?
Is this image hosed? It is running, but without elevated privileges it is kind of useless, as I cannot maintain the system.
Suggestions?
After multiple discussions, at length, with a senior Azure engineer at Microsoft, the bottom line is "the image is hosed". Because of the way the CentOS (and other Linux) images are built, if you lose waagent without deprovisioning it first, you obliterate all access to any elevated-privilege account.
Microsoft's Azure team has escalated the request to allow console-level access to the Linux images, but there is no ETA, or even confirmation that this feature will ever be considered.
For now the only answer is "rebuild the system on a new image".

Node.js + npm, installing modules on an NTFS partition

I have a problem installing npm modules. Node.js is installed on Ubuntu 11.10 running in VirtualBox on a Windows host. My project files are on an NTFS partition (I have to share them with Windows). When I try to install an npm module, I get an error and the module is not installed. I've found that the problem occurs when npm tries to create symbolic links.
Apparently you cannot create symlinks on the NTFS partition; when I install a module "inside" the Linux file system, everything works fine.
How can I fix this? I don't want to resolve dependencies manually :/
Since version 1.2.21, npm has a new option for the install command: --no-bin-links.
You can use it for installing a specific node module
npm install express --no-bin-links
and also for a package.json install
npm install --no-bin-links
With this option I've been able to install many npm modules without problems in my shared folder inside the VM (Ubuntu guest, Windows host).
The commit where the option was added to the npm code is b4c58617039c21c10889a9869f8e86a23e17d3a0
Try this - http://ahtik.com/blog/2012/08/16/fixing-your-virtualbox-shared-folder-symlink-error/
Works for me!
Basically you set a parameter
VBoxManage setextradata YOURVMNAME VBoxInternal2/SharedFoldersEnableSymlinksCreate/YOURSHAREFOLDERNAME 1
And then run the VM as an administrator.
The symlink permissions and the --no-bin-links option didn't work for us. Instead we opted to move our node_modules away from the /vagrant share: we created a symlink from /vagrant/node_modules to /tmp/node_modules. You can only do this if your node_modules is not in version control. Check this first!
Also see http://kmile.nl/post/73956428426/npm-vagrant-and-symlinks-on-windows
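In shell terms, the relocation described above is roughly the following (run inside the VM, and only if node_modules is git-ignored):
mkdir -p /tmp/node_modules
ln -s /tmp/node_modules /vagrant/node_modules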
I am pretty certain symlinks can't be created on the shared drive ("shared folder"), and even less so with a Windows host machine and a Linux guest.
The host machine is not aware of the guest's filesystem; a guest machine is a black box to the host. You can't tell the host "well, this links to /etc/..." when the host doesn't know where that /etc is :).
So in short: unfortunately, no.
In some more detail:
I would be really happy to be wrong! It is a major pain in my development process.
I tried many options. By default, the filesystem that the "shared folders" use is vboxsf, similar to if not the same as Samba (the default network sharing protocol for Windows), so:
I tried using native Windows network sharing and then mounting the network drive in the guest, since the guest and host are on the same network. The problem was still there.
I tried running an NFS server on Windows (Hanewin NFS Server) along with SFU/SUA (Windows Services for UNIX), but this has problems with Git locks. Probably other problems as well; it was a while ago and I don't remember clearly.
I tried the reverse: sharing a directory on the virtual machine to Windows. But that is pointless, as all the files would live on the VirtualBox machine and are really slow to access from Windows.
I was being stupid and thought "well, let's mount a virtual drive on both Windows and Linux". Don't try this; it corrupts the virtual disk. Something I should have known.
There might be a network sharing protocol other than Samba and NFS that copies the files whenever "symlink" creation is attempted? I don't really know.
However, I haven't found one yet, and "locking" seems to be a task of the filesystem itself, so I doubt any network protocol (unless it keeps a dedicated registry of locks of some sort) can do this.
For anyone still having this problem after trying npm install --no-bin-links:
I wasn't able to get any of the above solutions to work when I came across a similar issue running npm install in a Laravel Homestead Vagrant box on a Windows 7 host using VirtualBox. The guest box has a directory mapped to the Windows file system.
The problem caused various error messages and failed package installations. The one most relevant to the question was npm ERR! UNKNOWN, symlink '<some filename>'.
To fix this, I successfully ran npm install from the Git Bash command line on Windows rather than from bash on the Linux guest.
To do this, you will need to install Git for Windows and NodeJS (both on your Windows box).
e.g.
Install Chocolatey https://chocolatey.org/
choco install nodejs.install
choco install git.install
Run C:\Program Files (x86)\Git\Git Bash.vbs
In the Git Bash command line, change directory to the location of your package.json file e.g. cd /c/projects/projectname
Run npm install
Everything appears to install successfully.
If you don't use native modules (compiled from C/C++), you can just run npm on your Ubuntu VM and copy the node_modules folder to your Windows drive, as sketched below.
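Roughly, with both paths as placeholder assumptions:
cd ~/myproject && npm install
cp -r node_modules /media/sf_myproject/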
fsutil behavior set SymlinkEvaluation L2L:1 R2R:1 L2R:1 R2L:1
This command enables symlinks on Windows. For a better explanation of the cryptic codes at the end, see: How do I overcome the "The symbolic link cannot be followed because its type is disabled." error when getting the target of a symbolic link on Server 2008?
In summary:
The behavior codes for fsutil behavior set SymlinkEvaluation - namely L2L, L2R, R2L, and R2R - mean the following:
L stands for "Local", and R for "Remote" (who would've thunk?)
The FIRST L or R - before the 2 - refers to the location of the link itself (as opposed to its target) relative to the machine ACCESSING the link.
The SECOND L or R - after the 2 - refers to the location of the link's target relative to the machine where the LINK itself is located.
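To verify the setting before or after the change, the same tool can query the current behavior:
fsutil behavior query SymlinkEvaluation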
