I have written an initialization script that sets user environment variables which are keys that have been hashed and encrypted... Once the keys have been created, the key-encryption exe is no longer required. I want to launch the main application and remove the init file containing the hashing and key-encryption functions.
I am not having any trouble with any of the above... Everything works as it should when each part runs independently. The problem is that in order for the main application to have access to the newly created environment variables, I need the init script to completely exit...
Everything I have tried (Popen with flags, os.system() and others) still leaves me in a situation where the parent process ends and the main application launches, but the environment variables have not updated... If I close and relaunch main.py, boom, the program sees the updated variables and all is fine.
All I want is for the init script to run, spawn a new process that is not linked to init.py at all, and then exit so it can be removed. I thought this would be simple, but after many hours of head scratching and trying numerous things, I am still no closer.
If I have to, I will simply bundle it as two separate .exe files, but I wanted it to be a one-click-install type of thing.
I am running Windows 10 and the solution can be platform-specific.
Links looked at:
How to stop/terminate a python script from running?
Using a Python subprocess call to invoke a Python script
Starting a separate process
https://docs.python.org/2/library/subprocess.html
Python: Howto launch a full process not a child process and retrieve the PID
And more...
Current closest result
p = Popen(["python","UserInterface.py"], stdin=PIPE, stdout=PIPE, stderr=PIPE,
creationflags=DETACHED_PROCESS | CREATE_NEW_PROCESS_GROUP)
Create an environment block, set the environment variable using SetEnvironmentVariable, and use CreateProcess to specify this environment block for the created process.
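In Python terms, a rough equivalent of that environment block is the env argument of Popen. A minimal sketch (assuming Python 3.7+ on Windows, where subprocess exposes DETACHED_PROCESS; the variable name MY_APP_KEY is made up):

import os
from subprocess import Popen, DETACHED_PROCESS, CREATE_NEW_PROCESS_GROUP

# Copy the parent's environment and add the newly created values to the copy.
child_env = os.environ.copy()
child_env["MY_APP_KEY"] = "encrypted-key-value"  # hypothetical variable/value

# The child receives child_env as its environment block, regardless of what
# the parent's own environment looks like when it exits.
Popen(["python", "UserInterface.py"],
      env=child_env,
      creationflags=DETACHED_PROCESS | CREATE_NEW_PROCESS_GROUP)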
MSDN DOC:
To specify a different environment for a process, create a new environment block and pass the pointer to it as a parameter to the CreateProcess function.
...
To programmatically add or modify system environment variables, add them to the HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager\Environment registry key, then broadcast a WM_SETTINGCHANGE message with lParam set to the string "Environment". This allows applications, such as the shell, to pick up your updates.
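Since the question mentions user environment variables, here is a hedged sketch of that registry route in Python, writing to HKEY_CURRENT_USER\Environment rather than the machine-wide key (assuming the standard winreg and ctypes modules; MY_APP_KEY is a made-up name):

import ctypes
import winreg

def set_user_env(name, value):
    # Persist the variable under HKEY_CURRENT_USER\Environment.
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, "Environment", 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, name, 0, winreg.REG_SZ, value)

    # Broadcast WM_SETTINGCHANGE so other applications notice the update.
    HWND_BROADCAST = 0xFFFF
    WM_SETTINGCHANGE = 0x001A
    SMTO_ABORTIFHUNG = 0x0002
    result = ctypes.c_ulong()
    ctypes.windll.user32.SendMessageTimeoutW(
        HWND_BROADCAST, WM_SETTINGCHANGE, 0, "Environment",
        SMTO_ABORTIFHUNG, 5000, ctypes.byref(result))

set_user_env("MY_APP_KEY", "encrypted-key-value")  # hypothetical name/value

New processes started after the broadcast see the updated value; a process that is already running still has to be restarted or handed the value explicitly.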
Related
I don't know if being inside a Docker container has any bearing on the problem, but for the record I am running everything inside a container.
I tried running this script
import os
os.environ['A_VAR']='aValue'
thevalue=os.environ.get('A_VAR',None)
print(thevalue)
With this I set the environment variable A_VAR to some value, and I can see from the print that it is set.
Then I run the following:
import os
thevalue=os.environ.get('A_VAR',None)
print(thevalue)
and no, the value is not set.
Running `printenv` also shows that the value is not set.
Why is setting the environment variable not working, and how should it be done?
Environment variables are local to the process that sets them and to all processes spawned from it (children inherit the environment). So you can set the environment for child processes, but not for the parent.
Your Python script runs as its own process, so any changes it makes to the environment disappear when that process exits.
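A minimal sketch of that inheritance (assuming the standard subprocess module): the modified variable is visible in a child started by the script, but never in the shell that launched the script.

import os
import subprocess

os.environ['A_VAR'] = 'aValue'  # visible to this process and its children only

# The child inherits the parent's current environment, so it prints "aValue".
subprocess.run(['python', '-c', "import os; print(os.environ.get('A_VAR'))"])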
I'm working on a server bot in python3 (using asyncio), and I would like to incorporate an update function for collaborators to instantly test their contributions. It is hosted on a VPS that I access via ssh. I run the process in tmux and it is often difficult for other contributors to relaunch the script once they have made a commit, etc. I'm very new to python, and I just use what I can find. So far I have used subprocess.Popen to run git pull, but I have no way for it to automatically restart the script.
Is there any way to terminate a running asyncio loop (ideally without errors) and restart it again?
You cannot start an event loop that has been stopped by event_loop.stop().
And in order to incorporate the changes you have to restart the script anyway (some methods might not exist on the objects you already have, etc.).
I would recommend something like:
import asyncio
import sys

async def git_tracker():
    # Check for changes in version control, maybe wait for a sync point, and then:
    sys.exit(0)

asyncio.ensure_future(git_tracker())
This raises SystemExit, but despite that the program exits cleanly.
And around the `python $file.py` invocation, a shell loop: `while true; do git pull && python $file.py; done`.
This is (as far as I know) the simplest approach to solve your problem.
For your use case, to stay on the safe side, you would probably need to kill the process and relaunch it.
See also: Restart process on file change in Linux
As a necromancer, I thought I'd give an up-to-date solution, which we use in our UNIX system.
Using the os.execl function you can tell Python to replace the current process with a new one:
These functions all execute a new program, replacing the current process; they do not return. On Unix, the new executable is loaded into the current process, and will have the same process id as the caller. Errors will be reported as OSError exceptions.
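A generic self-restart built on that call might look like the sketch below (an assumption on my part, not the restart.sh approach described next; it simply re-executes the current script with the same arguments):

import os
import sys

# Replace the current process image with a fresh run of the same script.
# Nothing after this call runs; the new process keeps the caller's PID.
os.execl(sys.executable, sys.executable, *sys.argv)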
In our case, we have a bash script which executes `killall python3.7`, sending the SIGTERM signal to our Python apps, which in turn listen for it via the signal module and gracefully shut down:
loop = asyncio.get_event_loop()
loop.call_soon_threadsafe(loop.stop)
sys.exit(0)
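For completeness, a sketch of how such a handler might be registered (assuming a Unix host where loop.add_signal_handler is available; the shutdown name is made up):

import asyncio
import signal
import sys

def shutdown():
    # Same body as the snippet above: stop the running loop, then exit.
    loop.call_soon_threadsafe(loop.stop)
    sys.exit(0)

loop = asyncio.get_event_loop()
loop.add_signal_handler(signal.SIGTERM, shutdown)
loop.run_forever()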
The script then starts the apps in the background and finishes.
Note that `killall python3.7` will send the SIGTERM signal to every python3.7 process!
When we need to restart, we just run the following command:
os.execl("./restart.sh", 'restart.sh')
The first parameter is the path to the file and the second is the name of the process.
I need a way to constantly update an environment variable from a script, and I need to use that environment variable in another program in near real time.
What I have is this code:
#!/bin/bash
while :
do
    AMA="$(cut -c 1-13 text.txt)"
    source directory/this_script
done
I am getting the right information from my file when running cut this way. I just need this variable to keep updating continuously if possible, and a way to actually get it into the environment.
Slackware Linux
Now, I am writing a Groovy script to invoke someone else's interface, but I need to change my current working directory while running the script. I know it is not possible in Java. Is it possible in Groovy?
If you can run the other script as a separate process, you can give ProcessBuilder the working dir as a parameter:
def processBuilder=new ProcessBuilder(command)
processBuilder.directory(new File("Working dir"))
def process = processBuilder.start()
or
command.execute(null, new File("Working dir"))
so that process will switch to your new folder and execute it there.
As Groovy runs on the JVM, the same restrictions apply. Unfortunately it is not possible.
Changing the current working directory in Java?
JDK bug
Java/Groovy doesn't really "have" a working directory as far as I can tell. The shell that launched Groovy has one, and any child "commands" inherit it from that shell directly.
Java also seems to read the current directory of the shell and store it in "user.dir". This is used as the base for relative "File" objects, so if you call System.setProperty("user.dir", "c:/windows") it will change future invocations of new File(".") but will not change the parent shell's directory (and therefore not the child processes' directories either).
Here are three "Work-Arounds" that may work for different scenarios:
1) I KIND OF overcame this for a very specific task... I wanted to implement "cd" as a groovy script. It was only possible because all my scripts were already being "wrapped" in a batch file. I made it so that my script could create a file called "afterburner.cmd" that, if it existed, would be executed when the script exits. There was some batch file trickery to make this work.
A startup cmd file could also "Set" the current directory before invoking your groovy script/app.
By the way, Having a startup cmd has been much more helpful than I'd thought it would be--It makes your environment constant and allows you to more easily deploy your "Scripts" to other machines. I even have mine compile my scripts to .classes because it turned out to be faster to compile a .groovy to a .class and start the .class with "Java" than it was to just run the script with "groovy"--and usually you can skip the compile step which makes it a LOT faster!
2) For a few small commands, you might write a method like this:
currentDir = "C:\\"   // no 'def' so the method below can see it via the script binding

def exec(command, dir = null) {
    "cmd /c cd /d ${dir ?: currentDir} && $command".execute().text
}
// Default dir is currentDir
assert exec("dir").endsWith("C:\\>")
// different dir for this command only
assert exec("dir", "c:\\users").endsWith("C:\\users")
// Change default dir
currentDir = "C:\\windows"
assert exec("dir").endsWith("C:\\windows")
It will be slower than a plain "".execute() when "cmd" is not actually required.
3) Code a small class that maintains an "Open" command shell (I did this once, there is a bit of complexity), but the idea is:
def process="cmd".execute()
def in=process.in
def out=process.out
def err=process.err
Now "in" is an input stream that you could spin off/read from and "out" is an output stream that you can write commands to, keep an eye on "err" to detect errors.
The class should write a command to the output, read the input until the command has completed then return the output to the user.
The problem is detecting when the output of any given command is complete. In general you can detect a "C:..." prompt and assume that this means that the command has finished executing. You could also use a timeout. Both are pretty fallible. You can set that shell's prompt to something unique to make it much less fallible.
The advantage is that this shell can remain open for the entire life of your app and can significantly increase speed since you aren't repeatedly creating "cmd" shells. If you create a class (let's call it "CommandShell") that wraps your Process object then it should be really easy to use:
def cmd=new CommandShell()
println cmd.execute("cd /d c:\\")
println cmd.execute("dir") // Will be the dir of c:\
I wrote a Groovy class like this once; it takes a lot of experimenting, and your instance can be trashed by commands like "exit", but it's possible.
You can wrap it up in a dir block, e.g.:
dir('yourdirectory') {
    codeblock
}
I am using Visual Studio 2008 and MFC. I accept arguments using a subclass of CCommandLineInfo and overriding ParseParam().
Now I want to pass these arguments to the application while it is running. For example, I start it with "test.exe /start" and then type "test.exe /initialize" in the console to initialize it again.
Is there any way to do that?
Edit 1: Some clarifications. My program starts with "test.exe /start". I want to type "test.exe /initialize" and initialize the one and only running process (without closing and reopening it). And by initialize I mean reading a different XML file, changing some values of the interface, and other things.
I cannot think of an easy way to accomplish what you're asking about.
However, you could develop your application to specifically receive commands, and take whatever actions you want when it receives them. Since you're already using MFC, you can do this rather easily. Create a window (HWND) for your application and register it. It doesn't have to be visible (this won't necessarily make your application a GUI application). Implement a WndProc, and define specific messages that you will receive based on WM_USER + <xxx>.
The first and obvious question is why you want threads instead of processes.
You may use GetCommandLine and CommandLineToArgvW to get the fully formatted command line. Detect the arguments, then call CreateProcess or ShellExecute passing /whatever to spawn the process. You may also want to use GetModuleBaseName to get the name of your own EXE.