I'm writing a small application for my Raspberry Pi to surveil my surroundings, and I want to know when the Raspberry Pi loses power, using Node.js.
I've read that you can use a signal, SIGPWR, that the Pi sends when it loses power.
I made this little test script:
// test.js
var fs = require('fs');
var path = '/home/pi/Documents/Code/surveillanceCam/log/logfile.txt';
fs.writeFileSync(path, new Date().toString() + ': Start\n');
process.on('exit', function () {
fs.appendFileSync(path, new Date().toString() + ': exit\n');
});
process.on('SIGPWR', function () {
fs.appendFileSync(path, new Date().toString() + ': SIGPWR\n');
});
process.stdin.resume();
If I run the script with node test.js, I get a line in logfile.txt that ends with Start, so that works. If I remove the last line (so the script doesn't keep running until I stop it), I also get a new line ending with exit.
But if I keep the last line so the script keeps running, and then pull the Pi's power cable and plug it back in, when I look at the file after it boots I only see the line ending with Start.
I want two lines in my logfile.txt: one with the time of start, and one with the time of power loss.
From what I've read, the SIGPWR signal is sent when power is lost. Does the script not have enough time to write to the file, or is there something else I can do?
EDIT: What I want to do is simulate a power loss and, when the power is lost, write to the file.
EDIT2: I think I will solve this by adding a process.on('SIGINT', ...) handler. When the user ends the program with Ctrl+C, I will then and only then write to the file. Then, when the Node server starts, I will check whether there is something in the file; if there isn't, the server didn't shut down gracefully and I should display an error.
Unplugging the power is not a graceful shutdown, and I doubt you can depend on any event firing reliably to detect it. This is why so many things have trouble recovering after a power outage: nothing gets shut down gracefully. Everything dies before the OS even has a chance to react.
The standard way to detect this sort of situation is to create a flag file when the Pi boots up and remove it on a normal shutdown.
If the flag file already exists the next time the Pi starts up (you check before creating it), you know the Pi crashed for some reason (you don't know whether it was a power loss, a kernel panic, or something else).
There is no way to respond directly to a power loss in your code unless you add some sort of battery backup unit that sends the appropriate signal.
Related
I am new to Odoo and to code profiling. I am using py-spy to profile my Odoo code, as I need a flame graph as the profiling output. Everything works fine with py-spy, but it needs to be stopped by pressing Ctrl+C in the terminal where it is running, or by shutting the Odoo server down. I can't stop or reset the Odoo server, nor can I press Ctrl+C on the server.
I tried to do this.
To start py-spy:
def start_pyflame(self):
    pyflame_started = self.return_py_spy_pid('py-spy')
    error = False
    if not pyflame_started:
        self.start_pyflame()
    else:
        error = 'PyFlame Graph process already created. Use Stop button if needed.'
        _logger.error(error)
which is working fine, the problem is with this one
def stop_pyflame_and_download_graph(self):
    pyflame_running = self.return_py_spy_pid('py-spy')
    if pyflame_running:
        subprocess.run(["sudo", "pkill", "py-spy"])
Now the issue is that when I kill the process with pkill or kill, it terminates py-spy outright, so the output file is not generated.
Is there any way to stop or soft-kill py-spy so that the output file is still created?
Thanks in advance for any help.
After some research, I learned that these kill commands simply terminate the process, whereas in this case we need the process to stop and finish up.
I achieved this with:
sudo kill -SIGINT <pid>
As the name suggests, this does not kill/terminate the process; it asks the process to stop working by sending it an interrupt signal (the same signal Ctrl+C sends).
This worked for me.
I'm using fluent-ffmpeg in a Node application, recording from the screen/camera to an mp4 file. I would like one server request to start recording and another to stop it (it hooks into a web interface; I'm testing the tech with a view to making an Electron app later).
Starting is fine, but I cannot figure out how to stop it.
This is the code to start (to run on MacOS):
recordingProcessVideo = ffmpeg(`${screenID}:none`)
.inputFormat('avfoundation')
.native()
.videoFilters(`crop=${width}:${height}:${x}:${y}`)
.save(filePath);
Based on the documentation and reading around the subject, this is what I thought would stop it:
recordingProcessVideo.kill('SIGINT');
However, when I call this command, the server quits with the following ambiguous message:
code ELIFECYCLE
errno 1
Also, the video file produced will not open, as if recording quit before it completed. From the docs and what people have written, starting and stopping the recorder should be a matter of creating the process and then killing it when ready, but I can't work it out. Does anyone know the correct way? I've been looking for ages but can't find any answers.
Using Node v10.15.2 and Ffmpeg version V92718-g092cb17983 running on MacOS 10.14.3.
Thanks for any help.
I solved the issue by tracing all the messages FFmpeg prints in the terminal. For some unknown reason, my installation of FFmpeg throws an error when finalizing the video and does not correctly close the file. The same thing happens in the terminal (though the error doesn't really display), and the resulting MP4 actually works in every video player, even the browser, with the exception of QuickTime, which is what I was using on this occasion.
To stop the error from crashing my Node application, I just needed to add an error handler to the video call. I was adding a handler in my original code, but I was attaching it to the process and NOT to the original call to FFmpeg. So the code that works looks like this (I catch all of the end events and log them in this example):
recordingProcessVideo = ffmpeg(`${screenID}:none`)
.inputFormat('avfoundation')
.videoFilters(`crop=${width}:${height}:${x}:${y}`)
.native()
.on('error', error => console.log(`Encoding Error: ${error.message}`))
.on('exit', () => console.log('Video recorder exited'))
.on('close', () => console.log('Video recorder closed'))
.on('end', () => console.log('Video Transcoding succeeded !'))
.save(file.video);
I have two versions of FFmpeg on my laptop and both fail: the official release installed on my computer (v4.1.1) and the Node-packaged version my app uses (@ffmpeg-installer/ffmpeg), which will make distribution via Electron easier since it avoids a dependency on a locally installed FFmpeg. So the export failure is down to some mystery on my laptop, which I still have to figure out; the important thing is that my code now works and is resilient to that failure.
Maybe it will help someone in the future.
To complete the ffmpeg conversion process, you need to run the conversion as follows:
recordingProcessVideo = ffmpeg(`${screenID}:none`)
.inputFormat('avfoundation')
.native()
.videoFilters(`crop=${width}:${height}:${x}:${y}`)
.save(filePath);
recordingProcessVideo.run();
Then you can stop the ffmpeg conversion:
recordingProcessVideo.kill();
The key point is the launch method .run(): call it only after the command has been assigned to the variable recordingProcessVideo.
After launching with recordingProcessVideo.run();
you will be able to stop it with recordingProcessVideo.kill();
The bottom line is that ffmpeg() only builds the command and assigns it to your variable recordingProcessVideo. If instead you call .run() immediately while creating the command, for example:
ffmpeg(`${screenID}:none`)
    .inputFormat('avfoundation')
    .save(filePath)
    .run();
then the variable recordingProcessVideo will be empty.
This is my first answer on this site, so please don't judge the mistakes too harshly :)
I'm working on a server bot in python3 (using asyncio), and I would like to incorporate an update function for collaborators to instantly test their contributions. It is hosted on a VPS that I access via ssh. I run the process in tmux and it is often difficult for other contributors to relaunch the script once they have made a commit, etc. I'm very new to python, and I just use what I can find. So far I have used subprocess.Popen to run git pull, but I have no way for it to automatically restart the script.
Is there any way to terminate a running asyncio loop (ideally without errors) and restart it again?
You cannot restart an event loop that has been stopped with event_loop.stop().
And in order to pick up the changes you have to restart the script anyway (some methods might not exist on the objects you have, etc.).
I would recommend something like:
async def git_tracker():
    # check for changes in version control, maybe wait for a sync point, and then:
    sys.exit(0)

asyncio.ensure_future(git_tracker())
This raises SystemExit, but despite that it exits the program cleanly.
Then wrap the python $file.py invocation in a shell loop: while true; do git pull && python $file.py; done
This is (as far as I know) the simplest approach to solving your problem.
For your use case, to stay on the safe side, you would probably need to kill the process and relaunch it.
See also: Restart process on file change in Linux
As a necromancer, I thought I'd give an up-to-date solution, which we use on our UNIX systems.
Using the os.execl function you can tell python to replace the current process with a new one:
These functions all execute a new program, replacing the current process; they do not return. On Unix, the new executable is loaded into the current process, and will have the same process id as the caller. Errors will be reported as OSError exceptions.
In our case, we have a bash script that runs killall python3.7, sending the SIGTERM signal to our Python apps, which in turn listen for it via the signal module and shut down gracefully:
loop = asyncio.get_event_loop()
loop.call_soon_threadsafe(loop.stop)
sys.exit(0)
The script then starts the apps in the background and finishes.
Note that killall python3.7 sends the SIGTERM signal to every python3.7 process!
When we need to restart, we just run the following:
os.execl("./restart.sh", 'restart.sh')
The first parameter is the path to the file and the second is the name of the process.
I'm writing a CGI script that is supposed to send data to a user until they disconnect, then run logging tasks afterwards.
THE PROBLEM: Instead of the break executing and the logging completing when the client disconnects (detected by the inability to write to the stdout buffer), the script ends or is killed (I cannot find any logs anywhere showing how this exit occurs).
Here is a snippet of the code:
for block in r.iter_content(262144):
    if stopRecord == True:
        r.close()
    if not block:
        break
    if not sys.stdout.buffer.write(block):  # the code fails here after a client disconnects
        break
cacheTemp.close()
#### write data to other logs and exit gracefully ####
I have tried using except: as well as except SystemExit:, but to no avail. Has anyone solved this problem? (It is for a CGI script that is supposed to log when the client terminates their connection.)
UPDATE: I have now tried using signal to intercept the kill in the script, which also didn't work. Where can I see an error log? I know exactly which line fails and under which conditions, but there is no error log or anything like what I would get if a script failed in a terminal.
When you say it kills the program, do you mean the main Python process exits, and not via a thrown exception? That's kind of weird. A workaround might be to run the task in a separate thread or process, then monitor it until it dies and subsequently execute the second task.
I have a Node.js application that connects to a server every day.
On this server, a new version of the app may be available. If so, the installed app downloads it, checks that the download is complete, and then stops itself by calling a shell script, which replaces the old app with the new one and starts it.
I'm struggling to start the update script.
I know I can start it with the child_process.execFile function, which I do:
var execF = require('child_process').execFile;
var PATH = process.argv[1].substr(0, process.argv[1].lastIndexOf('/') + 1),
    filename = 'newapp.js';
execF(PATH + 'up.sh', [PATH + filename], function () { console.log('done'); });
up.sh, for now is just:
cat $1 > /home/pi/test
I get 'done' printed in the console, but test isn't created.
I know that execFile creates a subprocess; is that what prevents the script from doing this?
If I can get this to run, I know I only need some cp commands in the script to have my app auto-update.
EDIT:
Started as usual (calling the script from the console), it works well. Is there a reason for the script not to execute when called from Node.js?
I'd suggest considering a module that can do this for you automatically rather than duplicating the effort. Or at least use its technique as inspiration for your own requirements.
One example is: https://github.com/edwardhotchkiss/always
It's simple to use:
Usage: always <app.js>
=> always app.js
Then, anytime your code changes, the app is killed, and restarted.
As you can see in the source, it uses the Monitor class to watch a specified file, and then uses spawn to kick it off (and of course kill to end the process when a change has happened).
Unfortunately, the [always] output is currently hardcoded into the code, but I'm sure it would be a simple change/pull request to make it optional/configurable. If the author doesn't accept your change, you could just modify a local copy of the code (as it's quite simple overall).
Make sure that when you spawn/exec the process you are executing the shell that will process the script, and not the script itself.
It should be something like:
execF("/usr/bin/sh", [PATH + 'up.sh', PATH + filename]);