Persist environment variable value changed from Ant sshexec task - Linux

I have created an environment variable named "COUNTER" in "/etc/environment" and assigned it the value 0. I want to increment its value and persist the change using the Ant script's sshexec task. I wrote the following code to increment it:
<target name="incrementCounter">
    <sshexec
        host="${remote.host.ip}"
        username="${remote.user.id}"
        password="${remote.user.ssh.password}"
        command="((++COUNTER))"
        trust="true"
        useSystemIn="true"
    />
</target>
After the command executed successfully, I logged into the Linux machine through a Secure Shell client and printed the variable's value; it showed "0". Is there any way I can achieve this?

Environment variable changes are local to the process that makes them, so this cannot possibly work.
You can hack up an ever-increasing counter just by maintaining an appropriate file and sourcing it, much like /etc/environment is sourced.
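For example, a minimal sketch of the file-based approach, assuming a writable location such as ~/counter on the remote host (the path and file name are illustrative):

#!/usr/bin/env bash
# read the previous value (defaulting to 0 on the first run), write back value+1
counter_file="$HOME/counter"
count=$(cat "$counter_file" 2>/dev/null || echo 0)
echo $(( count + 1 )) > "$counter_file"

Pointing the sshexec command attribute at a script like this gives you a counter that survives across SSH sessions.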
However, the real question is what you are really trying to achieve.

Related

Windows Environment Variables - Trouble accessing updated environment variables in program

I have written an initialization script that sets user environment variables which are keys that have been hashed and encrypted... Once the keys have been created, the key-encryption exe is no longer required. I want to launch the main application and remove the init file containing the hashing and key-encryption functions.
I am not having any trouble with any of the above... Everything works as it should when run independently. The problem is that in order for the main application to have access to the newly created environment variables, I need the init script to exit completely...
Everything I have tried (Popen with flags, os.system() and others) has still left me in a situation where the parent process ends and the main application launches, yet the environment variables have not updated... I close and relaunch main.py and... boom, the program sees the updated variables and all is fine.
All I want is for the init script to run, spawn a new process that is not linked at all with init.py, and then exit so it can be removed. I thought this would be simple, but after many hours of head scratching and trying numerous things, I am still no closer.
If I have to I will simply bundle it as two separate .exe files, but I wanted it to be a one-click-install type of thing.
I am running Windows 10, and the solution can be platform-specific.
Links looked at:
How to stop/terminate a python script from running?
Using a Python subprocess call to invoke a Python script
Starting a separate process
https://docs.python.org/2/library/subprocess.html
Python: Howto launch a full process not a child process and retrieve the PID
And more...
Current closest result
from subprocess import Popen, PIPE, DETACHED_PROCESS, CREATE_NEW_PROCESS_GROUP  # Python 3.7+

p = Popen(["python", "UserInterface.py"], stdin=PIPE, stdout=PIPE, stderr=PIPE,
          creationflags=DETACHED_PROCESS | CREATE_NEW_PROCESS_GROUP)
Create an environment block, set the environment variable using SetEnvironmentVariable, and use CreateProcess to specify this environment block for the created process.
MSDN DOC:
To specify a different environment for a process, create a new environment block and pass the pointer to it as a parameter to the CreateProcess function.
...
To programmatically add or modify system environment variables, add them to the HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager\Environment registry key, then broadcast a WM_SETTINGCHANGE message with lParam set to the string "Environment". This allows applications, such as the shell, to pick up your updates.
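In Python terms, a hedged sketch of the first suggestion: on Windows, Popen's env argument is handed to CreateProcess as the new process's environment block (the variable name and value below are illustrative):

import os
from subprocess import Popen, DETACHED_PROCESS, CREATE_NEW_PROCESS_GROUP  # Python 3.7+

# build a fresh environment block for the child instead of mutating our own
env = os.environ.copy()
env["MY_APP_KEY"] = "hashed-value"  # hypothetical variable created by init.py

# the child starts with the updated block, detached from the init script
Popen(["python", "UserInterface.py"],
      creationflags=DETACHED_PROCESS | CREATE_NEW_PROCESS_GROUP,
      env=env)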

Maintain a session across multiple instances of app when called from same shell

I'm trying to have data (generated by an application only after its launch) persisted across multiple invocations of an application, but only when they're started from the same shell session.
One possible way to do that would be to pass the data back from the application to the calling shell, but since environment variable changes are only passed from parent to child, I don't know how to implement that.
Practical example:
There is a job command that creates a subdirectory named with the current datetime and does its work inside. Sometimes the job needs to be killed and restarted, so it needs the directory where it finished, like job --resume 21Jan_1849/data. I would like to save 21Jan_1849/data so I don't have to check and type it each time I need to resume a job. If I created something like .last_job and wanted to restart a job in another session, it could resume the wrong (last) job, so plain files are not a solution (AFAIK).
How can this be done?
Since you're only trying to target Linux, there are a fair number of tricks available here. Consider this one:
#!/usr/bin/env bash
current_boot_id=$(</proc/sys/kernel/random/boot_id)
# honor myprog_shell_pid if set and valid, fall back to PPID otherwise
if [[ $myprog_shell_pid ]] && [[ -e /proc/$myprog_shell_pid/stat ]]; then
    parent_pid=$myprog_shell_pid
else
    parent_pid=$PPID
fi
parent_start_time=$(awk '{print $22}' "/proc/$parent_pid/stat")
mkdir -p "$HOME/.cache/myscript-sessions"
data=$HOME/.cache/myscript-sessions/${current_boot_id}:${parent_pid}:${parent_start_time}
Now, we have a data file name that changes:
When we're rebooted (because current_boot_id is updated)
If we're run from a different shell (because our PPID changes).
If we're run from a different shell with the same PID (because the start time for the parent PID will be different).
...and you can easily delete files with the wrong boot id (because the system rebooted), or with names that refer to PID/start-time combinations that don't exist.
One caveat is that by default, this is sensitive to being called by subshells (output=$(./yourprog) will have a different PPID than ./yourprog will), but if the parent shell runs export myprog_shell_pid=$$, that issue goes away.
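Putting the pieces together, a hedged usage sketch (continuing the script above; the cleanup loop matches the earlier point about deleting stale files):

# in the parent shell's startup file, so subshell invocations share a session:
export myprog_shell_pid=$$

# inside the script: prune session files whose boot id, PID, or start time
# no longer correspond to a live shell
for f in "$HOME/.cache/myscript-sessions"/*; do
    [[ -e $f ]] || continue                     # glob matched nothing
    IFS=: read -r boot_id pid start_time <<<"${f##*/}"
    [[ $boot_id == "$current_boot_id" ]] \
        && [[ -e /proc/$pid/stat ]] \
        && [[ $(awk '{print $22}' "/proc/$pid/stat") == "$start_time" ]] \
        && continue                             # still a live session; keep it
    rm -f -- "$f"
done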
You're crossing over to where you need a simple job-management engine instead of just the shell. Using make and writing Makefiles is probably the simplest way to set this up. You can write a rule that tells how to turn a stage 1 file into a stage 2 file based on file extension, and then make will know how far things got and how to resume next time you run it.
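A hedged sketch of the kind of rule meant here (the extensions and the processing command are hypothetical):

# turn any .stage1 file into the matching .stage2 file; on a rerun, make
# skips targets that are already newer than their inputs, resuming the job
%.stage2: %.stage1
	process-stage2 $< > $@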

Passing and updating an environment variable constantly

I need a way to constantly update an environment variable from a script, and I need to use that environment variable in another program in almost real time.
What I have is this code:
#!/bin/bash
while :
do
    AMA="$(cut -c 1-13 text.txt)"
    source directory/this_script
done
I am getting the right information from my file when running cut this way. I just need this variable to keep updating permanently, if possible, and a way to actually get it into the environment.
Slackware Linux
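As the answers elsewhere on this page explain, a running program's environment cannot be changed from the outside, so one hedged workaround (the file path is illustrative) is to let the other program re-read the value from a file that the script keeps current:

#!/bin/bash
# producer: rewrite the current value on every iteration instead of exporting it
while :
do
    cut -c 1-13 text.txt > /tmp/ama.value
    sleep 1   # avoid a busy loop
done

The consuming program then fetches the value whenever it needs it, e.g. AMA=$(cat /tmp/ama.value) in a shell script.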

Read file contents to variable in grub.cfg file

Q1. How do you read the contents of a file into a variable at boot time in grub.cfg?
Q2. Could that be extended to read an .ini-type file, where you can read the values for various name entries?
[section]
nothisone=whatever
thisone=this is what I want to get
TIA!!
In order to do exactly what you are asking for, you would probably need to write your own GRUB module.
However, you should be able to achieve what you're after either using the configfile command, or with some clever application of the environment block feature.
Use "source" command to include another config file but unlike "configfile" which will change context.
Source is like an online macro while configfile likes a function - environment changes in configfile will not be preserved but source is expanding whatever in the source file and put in the main block, environment variable can be changed in this way.
https://www.gnu.org/software/grub/manual/grub/grub.html#source
https://www.gnu.org/software/grub/manual/grub/grub.html#configfile
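A minimal sketch of the source approach (the device and file names are illustrative; note that the sourced file must use GRUB's own set syntax, which is why a raw name=value .ini file would need preprocessing or a custom module):

# vars.cfg on (hd0,1) might contain:  set thisone="this is what I want to get"
source (hd0,1)/vars.cfg
echo "thisone is: $thisone"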

Change environment variable value during execution [duplicate]

This question already has answers here:
Is there a way to change the environment variables of another process in Unix?
(12 answers)
Closed 9 years ago.
Consider the following Ruby code
sleep 10
puts "Foo is #{ENV['foo']}"
Save this file as envtest.rb, then run the following from the shell:
export foo=bar
ruby envtest.rb &
export foo=baz
( ... 10 seconds later ... )
=> Foo is bar
It appears that the environment is evaluated when the ruby interpreter is launched. Is it possible to update environment variables during execution and have those changes reflected in running processes? If so, how?
You can change the value during runtime - from inside the ruby script - using:
ENV['VARIABLE_NAME'] = 'value'
There is no option to change environment values from outside the process after it has been started. That's by design: the environment is passed to the process once, at startup.
No. This is not possible. One process can never directly manipulate the environment of a different already-running process. All you can ever do is set the environment on unborn children, then create them.
The only other approach is via active, negotiated communication back to the parent. That’s why the output from tset(1) (that is, of tset -s) is always evaluated by the parent.
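In shell terms, both patterns from these answers (the ruby one-liner is purely illustrative):

# 1. set the variable on the unborn child, then create it
foo=baz ruby envtest.rb

# 2. tset(1)-style: the child prints shell code that the parent evaluates
eval "$(ruby -e 'puts %(export foo=baz)')"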
