I feel like this is a silly question, but I can't get node to run a .noderc file, not even to just log to the console, and not even on Linux, where I would expect everything to work.
My use case is that for work I have to use Windows, and npm has installed modules to a particular location (%HOME%\AppData\Roaming\npm\node_modules\), whereas the default module.paths in node does not include that directory. I am fine with the location, so I don't want to fix this on the npm end of things. I have easily fixed the issue by appending this path to module.paths, so the real solution should be for me to add that to an rc file.
I tried making a .noderc in my Windows home directory, and to my surprise it seems not to be running. I did the same on my personal laptop running a Linux distro (~/.noderc), and the same thing happens. A log to the console or a definition of a test var does not show up in the REPL.
Is there something obvious I am missing? Usually programs have a hierarchy they run through: default configs, a system-level config file (if it exists), and a user-level config file (if it exists). In the case of a program like X, they are executed in order and overwrite options, whereas in something like bash, they are checked in reverse order and the first one found is executed (it is common for the first line of a user-level bash config to source the system-level one). How does node function?
EDIT:
In the comments below, where I link to an old SO thread, I noticed that there is a bit of a hack involving an alias to get the .noderc to work. So I guess a better question is: how are things like module.paths configured in node? There must be a way not involving a full rebuild.
As there has been no answer for over 10 days, I am going to just post my workaround from the comment above. It looks like there is no node config file; any further info on that is welcome. In order to solve my particular problem, I used the NODE_PATH environment variable.
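For reference, a sketch of the workaround. The Windows path is the one from my question; the Linux line is an assumption on my part, using the fact that npm root -g prints the global module directory:

    REM Windows (cmd.exe): persist the variable for future sessions
    setx NODE_PATH "%HOME%\AppData\Roaming\npm\node_modules"

    # Linux (~/.profile or similar): point NODE_PATH at the global module dir
    export NODE_PATH="$(npm root -g)"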
I personally prefer to use config files and not environment variables for scripting issues that need to be addressed every time. Config files are always read automatically, while using an environment variable requires you to always remember to add the variable, or to permanently add it to your environment (which clutters the environment). I prefer to restrict environment variables to specific variations from the default. However, as I said, I can't find a config file for node.
Related
On my PC I have quite a few aliases, path variables, and modules: npm packages, scoop shims, Go modules, PowerShell/bash functions and modules. My question is: does the PC search through all of these things the moment I run a command, or is there some kind of registry that stores all of these values so they are quickly accessible? That would be my guess, but on both my Linux machine and my Windows PC I have syntax highlighting on, and it "knows" that a command is valid even before I run it.
I got really curious about what process is taking place here earlier today, when I installed gum (Charmbracelet's Go TUI tool) and the shell automatically recognized "gum -file" as a valid command, even though I hadn't explicitly defined it anywhere and it isn't prefixed with "go" or "scoop" (assuming I used one or the other to install it).
I tried googling this question, but I was inundated with pages of irrelevant results about basic path errors and bloated articles about how to add things to your PATH.
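The closest I've gotten to inspecting this myself is from inside bash, where you can at least see the search path and the shell's cache of resolved commands (gum is just my example name here):

    type -a gum    # every match for 'gum' on $PATH, in search order
    hash           # bash's cache of commands already resolved this session
    echo "$PATH"   # the directories searched, in order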
In a graphical DE, like KDE, what command can be used to add a new environment variable that can be used by any other process?
Note:
1) I'm aware of export A=B, but it only works for processes subsequently started in the same shell that executed the export; processes started elsewhere, like a graphical application such as Chrome, won't be aware of the export.
2) I'm also aware that you can put it into ~/.bash_profile or the like, but that would need a restart/relogin for the setting to take effect.
Is there something like export that takes effect for all applications and doesn't require a significant restart?
Your assumption that you need to restart after placing a variable definition (whether through an export statement or otherwise) in ~/.bash_profile is flawed. You only need to source the file again after making modifications:
source ~/.bash_profile
or the more portable version:
. ~/.bash_profile
Either statement will (re)load any definitions in that file into your current shell. Sourcing is not the same as executing the script: it will modify the environment in the calling shell itself, not a subshell running the script.
A file like ~/.bash_profile may have many other definitions and settings in it that will mess with the shell when re-sourced. It is better to create a small (temporary) snippet with just the variables you want, and source that instead, as @JeremiahMegel suggests.
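For example, with a hypothetical snippet file holding just the one variable:

    # ~/myvars.sh -- contains only the definitions you want to (re)load
    export VAR=value

    # load it into the current shell
    . ~/myvars.sh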
If you want to change the environment for a single process you run from the command line, you can set the variables on the same command line:
VAR=value /usr/bin/gedit
This will run gedit with the environment variable VAR set to value, but only for that one child process.
Unfortunately, your desktop applications are a bit more static than that. Most of the graphical applications you see in the menus are probably represented by .desktop files in a folder like /usr/share/applications. These files are launched in an environment that has almost none of the variables you are expecting. They rely on absolute paths, and most of the configuration is done by pointing the .desktop file to a script that performs its own setup. You can modify some of these files on an individual basis if you absolutely have to, but I would not recommend doing that. If you do insist on messing around with the graphical apps on your desktop, I would recommend making a copy of the desktop files you plan to modify into ~/.local/share/applications, or whatever the equivalent is on your system. Those files will override anything found in /usr/share/applications and will only affect you.
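As a sketch, assuming gedit as the application: copy the launcher so the original stays untouched, then prefix the Exec= line with env to set the variable for just that app:

    cp /usr/share/applications/gedit.desktop ~/.local/share/applications/
    # then edit the Exec= line in the copy, e.g.:
    #   Exec=env VAR=value gedit %U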
I am trying to run my Expect script from two different locations, and it will work with the following Expect executables referenced:
My Linux home directory (#!/usr/bin/expect)
A clearcase view on another server (#!/clearlib/vobs/otherdir/bin/expect)
The problem is that I cannot run the script in both places unless I change the first line of the file to reference the correct Expect executable location.
How can I get the correct instance of the Expect executable for the corresponding directory?
If your PATH is correctly set on both servers, you could use /usr/bin/env:
#!/usr/bin/env expect
That would use the expect found in the PATH (/usr/bin in one case, /clearlib/vobs/otherdir/bin in the other).
By instead using env as in the example, the interpreter is searched for and located at the time the script is run.
This makes the script more portable, but also increases the risk that the wrong interpreter is selected because it searches for a match in every directory on the executable search path.
It also suffers from the same problem in that the path to the env binary may also be different on a per-machine basis.
And if you have issues setting the right PATH, "/usr/bin/env questions regarding shebang line peculiarities" can help.
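To see which interpreter the env shebang will pick on a given machine, ask the shell directly:

    command -v expect   # the first expect on $PATH, i.e. what env will run
    type -a expect      # every match, in search order (bash)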
Hi, I'm sure there is some way of doing what I want, but maybe I'm just attacking it the wrong way. I hope someone can help.
I have a dev box that I SSH in to from several other machines. In order to debug remotely, I need to configure my debugger with my client machine's IP, which changes when I log in from different machines. I'm getting bored of doing this manually all the time, so I thought I'd try to automate it.
I'm creating a script that is automatically run upon SSH connection that will modify a configuration setting in a PHP ini file. The problem is the PHP ini files are all owned by root so I'm not sure how to modify those files if I'm just logging in as a normal user.
There's not really a security concern with my dev box so I could just change the owner of the ini file, but I wanted it to be more automated than that.
My current attempt is a Python script located in my home dir, which is called from .bashrc when I connect via SSH. I don't see how I can gain root privileges from there; I am pretty new to Linux, though. I thought maybe there would be some other method I'm not aware of.
You have a file that is owned by root. You clearly need to either find a way to mark the file as modifiable by you; or a way for you to elevate your privileges so that you are allowed to modify it.
This leads to the two traditional unix approaches to doing this. They are:
To create a group with which to mark the file, e.g. initdebug; chgrp/chmod the file so it has the initdebug group and is group-writable; and add yourself to the initdebug group so you can use the group write permission to modify the file. (A setup sketch for both approaches follows this list.)
To create a very small, audited binary executable (this won't work with a script) that performs the specific modifications you desire (for simplicity, I would suggest copying one of a selection of root-owned PHP ini files into the right place). Then chown the file so it is owned by root, and set the suid bit so the executable runs as root.
You can also combine the two approaches, either:
Not making yourself a member of the initdebug group or setting suid on the executable, but rather setting the group of the executable to initdebug and setting its sgid bit; or,
Keeping the executable suid root, but making it executable only by the initdebug group and therefore only by users added to that group.
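A minimal setup sketch for both approaches; the php.ini path and the helper name update-ini are assumptions:

    # group approach
    sudo groupadd initdebug
    sudo chgrp initdebug /etc/php5/php.ini   # path is an assumption
    sudo chmod g+w /etc/php5/php.ini
    sudo usermod -aG initdebug youruser      # re-login for the group to take effect

    # suid approach, given a compiled helper called update-ini
    sudo chown root:initdebug update-ini
    sudo chmod 4750 update-ini               # suid root, executable only by the group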
The security trade-off is in the ease/risk of privilege escalation should someone hack your account. If there is a stack/heap overflow or similar vulnerability in the executable and it is executing as root, then you are gone. If the PHP ini file can be modified to open a remote vulnerability, then giving your account direct write access to the ini file means you are gone as well.
As I suspect the latter is possible, you are probably best off with a small executable.
Note: As I alluded to above, unix does not acknowledge the s[ug]id bits on scripts (defined as anything using the #!... interpreter syntax). Hence, you will have to use something you can compile down to a binary. That probably means C, C++, Java (using gcj), ML, Scheme (MIT), or Haskell (GHC).
If you haven't done any C or C++ programming before, I would recommend one of the others, as a suid binary is not a project with which to learn C/C++. If you don't know any of the other languages, I would recommend either ML or Java as the easiest for writing something small and simple.
(BTW, http://en.wikipedia.org/wiki/List_of_compilers includes a list of alternative compilers you can use. Just make sure it compiles to native code, not bytecode; as far as the OS is concerned, a bytecode VM is just another interpreter.)
You can do it by adding your user to the sudoers file on the machine you want to access remotely. For an example, see my blog: http://nanamo3lyana.blogspot.com/2012/06/give-priviledge-normal-user-as-root.html. Then, in your automated script, prefix your command with sudo.
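A sketch of that sudoers route; the username and helper path are assumptions, and the file must be edited with visudo:

    # create /etc/sudoers.d/debug-ini with: sudo visudo -f /etc/sudoers.d/debug-ini
    youruser ALL=(root) NOPASSWD: /usr/local/bin/update-ini

    # then the .bashrc-triggered script can run this without a password prompt:
    sudo /usr/local/bin/update-ini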
I am trying to write a bash script to do a task. I have done pretty well so far and have it working to an extent, but I want to set it up so it's distributable to other people, and I will be opening it up as open source, so I want to start doing things the "conventional" way. Unfortunately, I'm not all that sure what the conventional way is.
Ideally, I want a link to an online resource that discusses this and surrounding topics in depth, but I'm having difficulty finding keywords that will locate it on Google.
At the start of my script I set a bunch of global variables that store the names of the directories it will be accessing. This means I can modify the dirs quickly, but it is a programming shortcut, not a user setting; I can't tell users they have to fiddle with this stuff. Also, I need individual users' settings not to get wiped out on every upgrade.
Questions:
Name of settings folder: ~/.foo/ -- this is well and good, but how do I keep my working copy and my development copy separate? Tweak the reference in the source of the dev version?
If my program needs to maintain and update a library of data (GPS tracklog data in this case), where should this directory be? The user will need to access some of this data, but it's mostly for internal use. I personally work in Cygwin, and I like to keep this data on a separate drive, so my path is weird; I suspect many users are in a similar situation. As a default, however, I'm thinking ~/gpsdata/ -- would this be normal, or should I hard-code a system that asks the user at first run where to put it, and stores this in the settings folder? Whatever happens, I'm going to have to store the directory reference in a file in the settings folder.
The program needs a data "inbox": a folder where the user can dump files, then run the script to process them. I was thinking ~/gpsdata/in/? There will always be an option to add a file or folder on the command line as well (it processes files in all locations listed, including the "inbox").
Where should the script itself go? It's already smart enough to create all of its ancillary/settings files (once I figure out the "correct" directory) if run with "./foo --setup". I could shove it in /usr/bin/, /bin, or ~/.foo/bin (and add that to the path) -- what's normal?
I need to store login details for a web service that it will connect to (using curl -u, if it matters). I plan on including a setting whereby it asks for a username and password on every execution, but it currently stores them in plain text in a file in ~/.foo/ -- I know, this is not good. The web service (osm.org) does support OAuth, but I have no idea how to get curl to use it; getting curl to speak to the service in the first place was a hack. Is there a simple way to do a really basic encryption on a file like this to deter idiots armed with Notepad?
Sorry for the list of questions; I believe they are closely related enough for a single post. This is all stuff I have been stabbing at, but I would like clarification/confirmation.
Name of settings folder: ~/.foo/ -- this is well and good, but how do I keep my working copy and my development copy separate?
Have a default of ~/.foo, and an option (for example --config-directory) that you can use to override the default while developing.
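A minimal sketch of that pattern in bash; the flag name is just a suggestion:

    # default config dir, overridable while developing
    CONFIG_DIR="$HOME/.foo"
    while [ $# -gt 0 ]; do
        case "$1" in
            --config-directory) CONFIG_DIR="$2"; shift 2 ;;
            *) break ;;
        esac
    done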
If my program needs to maintain and update library of data (gps tracklog data in this case) where should this directory be?
If your script is running under a normal user account, this will have to be somewhere in the user's home directory; elsewhere, you'll have no write permissions. Perhaps ~/.foo/tracklog or something? Again, add a command line option, and also an option in the configuration file, to override this.
I'm not a fan of your ~/gpsdata default; I don't want my home directory cluttered with all sorts of directories that programs created without my consent. You see this happen on Windows a lot, and it's really annoying. (Saved games in My Documents? Get out of here!)
The program needs a data "inbox" that is a folder that the user can dump files, then run the script to process these files. I was thinking ~/gpsdata/in/ ?
As stated above, I'd prefer ~/.foo/inbox. Also with command-line option and configuration file option to change this.
But do you really need an inbox? If the user needs to run the script manually over some files, it might be better just to accept those file names on the command line. They could just be processed wherever, without having to move them to a "magic" location.
Where should the script itself go?
This is usually up to the packaging system of the particular OS you're running on. When installing from source, /usr/local/bin is a sensible default that won't interfere with package managers.
Is there a simple way to do a really basic encryption on a file like this to deter idiots armed with notepad?
Yes, there is. But it's better not to, because it creates a false sense of security. Without a master password or something, secure storage is not possible! Pidgin, for example, explicitly stores passwords in plain text, so that users won't make any false assumptions about their passwords being stored "securely". So it's best just to store them in plain text, complain if the file is world-readable, and add a clear note to the manual to warn the user what's going on.
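A sketch of that "complain if readable" check, assuming the credentials live in ~/.foo/credentials (GNU stat shown; BSD stat takes different flags):

    chmod 600 ~/.foo/credentials   # user-only; do this once at setup
    # in the script, warn if the file has been loosened since:
    if [ "$(stat -c %a ~/.foo/credentials)" != "600" ]; then
        echo "warning: ~/.foo/credentials is readable by others (chmod 600 it)" >&2
    fi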
Bottom line: don't try to reinvent the wheel. Thousands of scripts and programs have faced the same issues, and most of them ended up adopting the same conventions, for good reasons. Look at what they do, and mimic them.
You can start with the Filesystem Hierarchy Standard. I'm not sure how closely it is followed, but it does provide some guidance. In general, I try to use the following:
$HOME/.foo/ is used for user-specific settings - it is hidden
$PREFIX/etc/foo/ is for system-wide configuration
$PREFIX/foo/bin/ is for system-wide binaries
sym-links from $PREFIX/foo/bin are added to $PREFIX/bin/ for ease of use
$PREFIX/foo/var/ is where variable data would live - this is where your input spools and log files would live
$PREFIX should default to /opt/foo even though almost everyone seems to plop stuff in /usr/local by default (thanks GNU!). If someone wants to install the package in their home directory, then substitute $HOME for $PREFIX. At least that is my take on how this should all work.
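As a sketch, an install step following that layout; the file names and the PREFIX default are mine:

    PREFIX="${PREFIX:-/opt/foo}"
    install -d "$PREFIX/bin" "$PREFIX/foo/bin" "$PREFIX/etc/foo" "$PREFIX/foo/var"
    install -m 755 foo.sh "$PREFIX/foo/bin/foo"
    ln -sf "$PREFIX/foo/bin/foo" "$PREFIX/bin/foo"   # convenience symlink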