(Why) is it a bad idea to allow node sudo access with third party packages?

(I am not sure if this is the correct stack exchange forum to post this question; please let me know if you know a more appropriate forum.)
I am trying to install react-native on macOS and am receiving an error. This specific question is asked and answered here, and is also solved here. I am already using Homebrew (per the example in the docs) and solution #3 failed for me. I am a bit hesitant to use sudo (as suggested in the first link) due to my inexperience. In the second link (above), a user stated:
sudo will solve this, but you should not be using that. This means
node was installed in the wrong way. You should try to uninstall node
and install it via homebrew. Allowing node sudo access with third
party packages is just stupid.
Instead of taking this statement at face value, I wanted to understand why this could be the case. What exactly does it mean to allow node sudo access with third party packages, and why is this a "stupid" idea?

(I think this question might be a better fit for Unix & Linux Stack Exchange, but I am not sure either.)
The sudo command runs the given command with temporary root privileges, so the executed program has full control of the device. If a third-party package does something harmful, it can cause a disaster when you have given it full access with sudo.
Besides, there have already been reports (a news report, for example) of crypto-mining code hidden inside several Node.js npm packages, so this warning is grounded in real incidents.
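To make the risk concrete: npm packages can declare lifecycle scripts (such as postinstall) that run arbitrary commands at install time, and under "sudo npm install" those commands run as root. A common way to avoid needing sudo for global installs is to point npm's prefix at a directory you own. This is only a minimal sketch; the directory name and the react-native-cli package are illustrative choices, not the only valid ones:

# point global installs at a user-owned directory instead of /usr/local
mkdir -p ~/.npm-global
npm config set prefix ~/.npm-global
export PATH="$HOME/.npm-global/bin:$PATH"   # persist this line in ~/.zshrc or ~/.profile
npm install -g react-native-cli             # now installs under $HOME, no sudo needed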

Related

How to install a full Node.js package, to avoid using npm for installing modules/packages?

I want to learn and use Node.js at work, but there I have network issues when using the npm command to install modules/packages. Is it possible to build, using my home computer, a full Node.js package, and then install it on another computer (my workplace computer) so I do not need to use npm at all? Both computers run the Windows 7 operating system.
Node applications don’t need installation the way you’re thinking of it. So long as you have the same node runtime installed on both computers and all the packages are installed locally (i.e. without the -g flag), you can just copy the directory the project is in to the new computer and run it there in most cases.
The exceptions will be if your systems are radically different and depend on binaries (e.g. if you’re using a module like ffmpeg that pulls down an OS-appropriate binary and your home and work computers run different OSes).
The way around that would be to package using Docker and run in a container on both systems.
That said, I wouldn’t do that. Depending on company policies you might still get in trouble, and it’ll be a lot harder to maintain.
Instead, I’d look at the variety of posts here about getting npm to work behind corporate proxies (you may just be doing it wrong). In my company it just took persistence with the InfoSec people to prove there was a business need before they made changes that made it easier.
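If the blocker is indeed a corporate proxy, npm can usually be pointed at it directly. A minimal sketch, assuming a plain HTTP proxy; the host, port, and whether authentication is required are assumptions you would replace with your site's values:

npm config set proxy http://proxy.example.com:8080
npm config set https-proxy http://proxy.example.com:8080
npm config get proxy            # confirm the setting took effect
npm install express             # any package (express here) should now resolve through the proxy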

What is the safest way to deliver an Application to novice Linux users?

My customers are novice Linux users, and so am I.
When I gave them my app packaged with Ansible, they ran into Ansible problems; when I gave them manual steps, they screwed those up too. Now I have three remaining options: a Perl/Bash script, a snappy/deb/rpm package, or Linux containers. Can anyone share their experience on the safest way to see fewer problems when installing my app (written in C)?
This depends on the nature of your application. Debs, rpms etc. are all fine but depend on which distro you're using.
If it's a C application, it might make sense to make it a static binary. That way, they'll only have to download a single file and run it. It will be big, but it should work fine regardless of what else is on the system. Otherwise, you'll have to worry about dependencies etc.
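A minimal sketch of what building such a binary can look like; the file names are illustrative, and on glibc-based systems a fully static link may require an alternative toolchain such as musl:

gcc -O2 -static -o myapp main.c
file myapp      # should report "statically linked"
ldd myapp       # "not a dynamic executable" confirms there are no runtime library dependencies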
As was commented before, it depends on how you deploy the product.
In general, if you have dependencies (packages that you assume are already installed) or your installation is complex, use rpm or deb.
However, if you target multiple platforms, bear in mind you will have at least two releases (one rpm and one deb...).
If configuration and installation are simple, you can just give them an install script.
If your application requires a specific environment with specific configuration/packages, I'd consider containers, although I have never done that personally.
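For the container route, a minimal sketch of packaging the app as an image; the base image, binary name, and tag are illustrative, and the binary is assumed to already be built (statically linked, or alternatively built inside the image):

cat > Dockerfile <<'EOF'
FROM debian:stable-slim
COPY myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]
EOF
docker build -t myapp:latest .
docker run --rm myapp:latest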

Ubuntu apt-get with .pac files

I would like to use Ubuntu's apt-get on my computer, which is behind a company proxy server. Our browsers use a .pac file to figure out the right proxy. In the past, I browsed through that file and manually picked a proxy, which I used to configure apt, and that usually worked. However, they are constantly changing this file, and I'm getting so tired of this that I would love if I could just somehow tell apt to use the .pac file.
I've done some research, but every thread I found regarding this topic so far ended in: 'it's probably not possible, but why don't you just read the .pac file and manually pick a proxy'. It's a pain in the neck and sounds like computer stone age, that's why.
I find it hard to believe that something as - what I believe to be, but I may be wrong - ubiquitous as the .pac method has not been addressed yet by Ubuntu. Can someone give me a definitive answer for that? Are there other distributions that allow that sort of thing?
The console’ish way
If you don’t want to use dconf-editor, or if you use another flavor of Linux, you can use this second method.
Create a .proxy file in your home directory. Make it readable and writable only by yourself, as we will store your credentials in there (including your password).
...
Optional step: if you want your proxy setting to be propagated when you’re using sudo, open the sudo config file with sudo visudo and add the following line after the other Defaults lines:
Defaults env_keep += "http_proxy https_proxy no_proxy"
From http://nknu.net/ubuntu-14-04-proxy-authentication-config/
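As a related workaround, once you have picked a proxy out of the .pac file, you can tell apt about it in a drop-in configuration file rather than exporting environment variables each time. A minimal sketch; the proxy host and port are assumptions:

echo 'Acquire::http::Proxy "http://proxy.example.com:8080/";'  | sudo tee /etc/apt/apt.conf.d/95proxy
echo 'Acquire::https::Proxy "http://proxy.example.com:8080/";' | sudo tee -a /etc/apt/apt.conf.d/95proxy
sudo apt-get update    # should now go through the configured proxy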

How to verify whether a disabled or stopped Unix/Linux service has a valid binary path?

For a given disabled or stopped Unix/Linux service, can we verify whether it has a valid binary path?
I am not sure whether it is even necessary to check this; I am a newbie to the Unix/Linux world.
Or are all installed Unix/Linux services verified by default for the existence of their binary and its path?
For example, take an installed Unix/Linux service such as sshd.
Check its status with the command "service sshd status".
Then, check its binary path with the command "which sshd".
Is there a better way to check this?
And what if I would like to check all Unix/Linux services that are either stopped or disabled?
Modern GNU/Linux distributions manage software installations using a package management system. Different such systems are in use, but they all share a common principle: a package carries within itself a description of all of its dependencies, for installation, configuration, and runtime. All of these dependencies are checked prior to installation. So unless one forces any actions here, one has the guarantee that if a package is installed, then all of its requirements are fulfilled.
Looking at your example of the sshd (the openssh daemon) this means:
if you installed this package, then you can rely on the fact that all required components are installed and match in their versions, and so on. In this case the daemon control scripts and the main executable are actually contained in the same package, which is typical for daemon packages. So if that package is installed, then all of its content is installed.
You could manually check whether all files contained in a package actually exist. But if you really feel this has to be done, for example if you suspect some files were somehow deleted by accident (which is actually pretty hard to do for such system files), then you can ask the software management system to verify the package integrity. For example, on an openSUSE system you can say zypper verify sshd. That command verifies that all files listed in the package actually exist, are the ones originally installed (unaltered), and have correct permissions. In addition, all package dependencies are checked as well. So if no error is thrown, you can rely pretty much on the fact that everything is fine.
This is obviously just an example; different distributions use different management systems, as mentioned above. But they all offer more or less the same features. And once you understand the idea behind this approach, you will probably never want to miss that elegance and security any more...
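For other distributions, a minimal sketch of the same kind of check; the package and unit names are examples, and you would use whichever tool matches your distro:

rpm -V openssh-server        # RPM-based distros: lists missing or altered files from the package
debsums -s openssh-server    # Debian/Ubuntu: needs the debsums package; silent unless something is wrong
systemctl cat sshd           # on systemd, shows the unit file including its ExecStart= binary path
                             # (the unit is named "ssh" rather than "sshd" on Debian/Ubuntu)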

Should I use a user other than root when installing NGiNX?

I heard that it would be better to use a sub-user for installing NGiNX. Is that true? I am thinking of using NGiNX to set up virtual hosts that my clients could use for their websites, and I don't want them to have too much control over NGiNX...
I am using Ubuntu Linux distro.
Thanks in advance for any help and/or tips.
How are you planning to install these applications? Since you say you're using Ubuntu, then I would assume that you'll be installing apps via either the graphical manager or by apt-get or aptitude.
If you're using the graphical program manager, then it should prompt you for your password; this performs a sudo under the hood.
If you're using either apt-get or aptitude or something similar, those programs need to be run as root to install.
In both instances above, the installation scripts for the packages will (should) handle any user-related issues that are necessary for the program you're installing to function properly. For example, when I did an apt-get install jenkins, the installation scripts automatically created a jenkins user for me, and my Jenkins CI server runs as the jenkins user automatically.
Of course, if you're compiling all of these programs by hand, all bets are off and you'll need to figure out how best to do all of this yourself. And if you're compiling these programs by hand to get them installed, I'd have to question why you're using Ubuntu in the first place; one of the best parts of using a Linux distribution with sane package management capabilities is actually USING said package management! (Note: by this statement, I mean anything Debian-based for sure; and I understand that Red Hat's yum provides very similar capabilities, but I haven't used anything Red Hat-based since around 2003.)
You don't want a process to have any more access than it needs. So yes, you should use a user besides root -- one that has the minimal privileges required to read the files it needs. Typically this involves creating a new nginx (or www or similar) user specifically for the task.
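On Ubuntu the packaged nginx already runs its worker processes as the unprivileged www-data user, so you usually get this for free. If you were building nginx by hand, a minimal sketch of the same idea; the user name here is just a common convention:

sudo adduser --system --no-create-home --group nginx   # dedicated unprivileged user
# then in nginx.conf:
#   user nginx;
sudo nginx -t && sudo systemctl reload nginx           # validate the config and apply it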
