We have a few ksh scripts running as user cron jobs on RHEL 7. The user's shell is set to /sbin/nologin.
What is the proper way to export environment variables so they are available to the ksh scripts running from cron?
/etc/environment
/etc/profile
.profile
.bashrc
another option?
I understand that some of these are only read in interactive sessions and some are bash-specific; I'm just not sure how that translates to a ksh script running via cron.
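For example, I could imagine defining the variables at the top of the crontab itself, since cronie on RHEL 7 accepts plain NAME=value lines (with no $VAR expansion); the paths and names below are only placeholders:

SHELL=/bin/ksh
PATH=/usr/local/bin:/usr/bin:/bin
ORACLE_HOME=/opt/oracle
15 2 * * * /opt/scripts/nightly.ksh >>/var/log/nightly.log 2>&1

Or I could source a dedicated environment file at the top of each ksh script, so the same settings apply interactively and under cron:

#!/bin/ksh
. /opt/scripts/env.ksh    # hypothetical file holding the export statements

But I'm not sure which of these, if any, is the proper approach.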
Related
I'm trying to run a crawler from the Linux crontab.
It needs to switch to the right Python environment with
pyenv shell jake-crawler
Here is my crontab -e entry:
*/10 * * * * /home/ammt/apps/crawler/scripts/bat_start.sh
This runs every 10 minutes. The command works fine when I type it myself:
(jake-crawler) [jake@KIBA_OM crawler]$ /home/jake/apps/crawler/scripts/bat_start.sh
[DEBUG|run.py:30] 2017-09-24 19:55:49,980 > BATCH_SN:1, COLL_SN:1, 1955 equal 0908 = False
Inside bat_start.sh I call init.sh, which switches to the Python environment.
Here is my init.sh:
#!/usr/bin/env bash
export PATH="${HOME}/.pyenv/scripts:$PATH"
eval "$(pyenv init -)"
eval "$(pyenv virtualenv-init -)"
pyenv shell jake-crawler
This works without any problem when I run it from the command line myself, but when cron runs it, it cannot find the pyenv command.
I think you can specify which user should run the script in the cron configuration file.
So, if the script works when run as your user, define it there in your cron configuration file.
See this answer for an example: https://stackoverflow.com/a/8475757/3827004.
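For instance, the system-wide crontab (/etc/crontab, or a file dropped under /etc/cron.d/) takes an extra user field before the command, so a hypothetical entry for the crawler might look like:

*/10 * * * * jake /home/jake/apps/crawler/scripts/bat_start.sh

(The per-user crontab edited with crontab -e does not take that user field; it always runs as the owning user.)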
There are two things that differ between launching an application from the terminal and launching it from a crontab entry:
The environment is not the same, at least unless you execute your .profile script from your cron job.
You don't have access to a terminal. Cron jobs don't run attached to one, so you will not be able, for example, to open /dev/tty. You also have to be very careful about how redirections are handled: in an interactive session they are all directed to your tty, but when run from cron(8) they may all be redirected to a pipe.
This makes your environment quite different and is a common source of errors. Read the crontab(5) man page for details.
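For the pyenv case specifically, one sketch of a cron-safe init.sh is to put pyenv on the PATH explicitly instead of relying on whatever the interactive shell set up; this assumes pyenv is installed under ~/.pyenv, so adjust to your installation:

#!/usr/bin/env bash
# Make pyenv visible even in cron's minimal environment.
export PYENV_ROOT="${HOME}/.pyenv"
export PATH="${PYENV_ROOT}/bin:${PATH}"
eval "$(pyenv init -)"
eval "$(pyenv virtualenv-init -)"
pyenv shell jake-crawler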
With bash, you can use your ~/.bashrc file to run something every time a new bash shell starts. Is it possible to do the same thing with /bin/sh? (This is on Debian, by the way).
For now, I just want to echo 'I am sh' when /bin/sh is invoked. It's easy to do in bash ("echo 'I am bash'" placed at the top of the file).
Thanks!
When dash, which is /bin/sh on Debian-like systems, starts as a login shell, it reads ~/.profile. If you also want a configuration file read for interactive non-login shells, add the following line to your ~/.profile file:
ENV=$HOME/.shinit; export ENV
Then, with the variable ENV appearing in the environment, the file $HOME/.shinit will be sourced with every new interactive (dash) shell.
You may change the file name specified by ENV to any file name you prefer.
To ensure that a dash login shell has added ENV to the environment, you may need to log out and back in, or possibly reboot, depending on your system setup.
Documentation
This is documented in man dash:
A login shell first reads commands from the files /etc/profile and .profile if they exist. If the environment variable ENV is set on entry to an interactive shell, or is set in the .profile of a login shell, the shell next reads commands from the file named in ENV. Therefore, a user should place commands that are to be executed only at login time in the .profile file, and commands that are executed for every interactive shell inside the ENV file.
Example
Suppose that we have files set up like:
$ echo "ENV=$HOME/.shinit; export ENV" >>~/.profile
$ cat .shinit
echo FOUND ME
Since I just added the ENV line to the ~/.profile file, ENV is not yet in the environment. If we run dash:
$ dash
$
Nothing happened because this is a non-login shell and ENV is not yet in the environment.
If we start a login shell, ENV is placed in the environment and ~/.shinit is run:
$ dash -l
FOUND ME
If, as a child of that shell, we run an interactive non-login shell, then ~/.shinit will be run because the parent shell created the ENV variable:
$ dash
FOUND ME
The environment created by the login shell above only affects its children. Ensuring that all interactive dash shells have ENV in their environment may, as mentioned above, require logging out and back in, or a reboot.
I am running a shell script that schedules a task using the at command. The at task gets scheduled, but it does not actually run, because it uses the shell /sbin/nologin when we call it from PHP code. It works fine if we run it from a terminal.
You should check the $PATH environment variable. When you are logged in from a terminal, the shell has initialized its search path via .bashrc etc.; cron or at jobs don't do that.
So try logging the environment variables to a file from your at job and check whether they are set up correctly.
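One quick way to do that (the file path is just an example) is to submit a throwaway job that dumps its environment:

echo 'env > /tmp/at_env.txt 2>&1' | at now + 1 minute

Comparing /tmp/at_env.txt with the output of env in your terminal usually shows straight away which variables, PATH in particular, are missing.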
I'm trying to test code interactively before I put it into a script, and I was wondering: is there anything that behaves differently in a script?
When you execute a script, it has its own environment, inherited from the parent process (the shell from which you executed the command). Only exported variables are visible to the child script.
More information:
http://en.wikipedia.org/wiki/Environment_variable
http://www.kingcomputerservices.com/unix_101/understanding_unix_shells_and_environment_variables.htm
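A quick demonstration of that difference (the variable names are arbitrary):

$ FOO=1            # plain shell variable, not exported
$ export BAR=2     # exported, so child processes inherit it
$ sh -c 'echo "FOO=$FOO BAR=$BAR"'
FOO= BAR=2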
By the way, if you want your script to run in the same environment as the shell it is executed from, you can do so with the dot command:
. script.sh
This avoids creating a new process for your shell script.
A script runs in exactly the same way as if you typed the content in at a shell prompt. Even loops and if statements can be typed in at the shell prompt. The shell will keep asking for more until it has a complete statement to execute.
As David rightly pointed out, watch out for environment variables.
Depending on how you launch your script, variables set in .profile and .bashrc may not be available; it comes down to whether the shell running the script is interactive and whether it is a login shell. See the Quick Startup File Reference.
A common problem I see is scripts that work when run from the shell but fail when run from another application (cron, nagios, buildbot, etc.) because $PATH was not set.
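A defensive habit that avoids this (the directories below are just a common default, not something your environment requires) is to set PATH explicitly at the top of the script instead of trusting whatever the caller provides:

#!/bin/sh
PATH=/usr/local/bin:/usr/bin:/bin
export PATH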
To test if a command/script would work in a clean session, you can login using:
ssh -t localhost "/bin/bash --noprofile --norc"
This ensures that we don't inherit any exported variables from the parent shell, and that nothing from .profile or rc files is read.
If it works in a clean session and none of your commands expect to run in interactive mode, then you're good to go!
I am currently looking for a way to set environment variables in Linux via a simple shell script. Within the script I am currently using the export command; however, this only has scope within the script, while system-wide scope is needed.
Is there any way I can do this via a shell script, or will another method need to be used?
When you run a shell script, it executes in its own child shell (a separate process). What you need is to execute it in the context of the current shell, by sourcing it with:
source myshell.sh
or:
. myshell.sh
The latter is my preferred approach since I'm inherently lazy.
If you're talking about system-wide scope in the sense that you want to affect everybody, you'll need to put your commands in a place where they're sourced at login time (or shell-creation time), /etc/profile for example. Where you put your commands depends on the shell being used.
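On many distributions the conventional place is a small drop-in file under /etc/profile.d/, which /etc/profile sources for login shells. A minimal sketch with made-up names:

# /etc/profile.d/myapp.sh
export MYAPP_HOME=/opt/myapp
export PATH="$MYAPP_HOME/bin:$PATH"

New login sessions will pick these up; shells that are already running will not.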
You can find out what scripts get executed by examining the man page for your shell:
man bash
The bash shell, when invoked as a login shell (including as a non-login shell but with the --login parameter), will use /etc/profile and the first of ~/.bash_profile, ~/.bash_login or ~/.profile.
Interactive non-login bash shells will, unless invoked with --norc or --rcfile <filename>, read /etc/bash.bashrc and ~/.bashrc.
I'm pretty certain it's even more convoluted than that depending on how the shell is run, but that's as far as my memory stretches. The man page should detail it all.
You could have your script check for the existence of something like /var/myprog/env-vars-to-load, source it, and then unlink it if it exists, perhaps triggered by trap and a signal. It's hard to say; I'm not familiar with your program.
There is no way to 'inject' environment variables into another process's address space, so you'll have to find some method of IPC that can instruct the process on what to set.
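As a rough sketch of that idea (only the /var/myprog/env-vars-to-load path comes from the suggestion above; the signal choice and the loop are made up), the long-running script could re-read the file whenever it receives SIGUSR1:

#!/bin/sh
ENV_FILE=/var/myprog/env-vars-to-load

reload_env() {
    # Source the new variables, then remove the file.
    if [ -f "$ENV_FILE" ]; then
        . "$ENV_FILE"
        rm -f "$ENV_FILE"
    fi
}
trap reload_env USR1

while :; do
    sleep 60 &   # sleep in the background so the signal
    wait $!      # interrupts wait and the trap runs promptly
done

Another process would then write the variables into that file and run kill -USR1 <pid> to have them picked up.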
A fundamental aspect of environment variables is that you cannot affect the environment for any process but your own and child processes that you spawn. You can't create a script that sets "system wide" environment variables that somehow become usable by other processes.
At the shell prompt:
$ source script.sh
and set the env vars inside script.sh.
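For example, script.sh might contain nothing but export statements (the names here are arbitrary):

export DEPLOY_ENV=staging
export APP_HOME=/opt/app

Because source (or .) runs the file in the current shell rather than in a child process, the variables remain set at your prompt afterwards.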
test.sh
#!/bin/bash
echo "export MY_VAR=STACK_OVERFLOW" >> $HOME/.bashrc
. $HOME/.bashrc
sh task.sh
task.sh
#!/bin/sh
echo $MY_VAR
Make both scripts executable:
chmod +x test.sh task.sh
And launch test.sh:
./test.sh
Result:
STACK_OVERFLOW