What happens when a PHP-FPM process crashes? - linux

For example, I import a CSV file and for some reason the process crashes. In this case the process is restarted, so I want to know: will the restarted process import the file again?

Since the process crashed, you will need to start the import again. It's possible that only part of the data was imported before the crash, so you'll also want to check for that and either clean up or skip the rows that already made it in.
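The check-and-resume idea is language-agnostic; here is a minimal sketch in Python (the database, table, and CSV layout are placeholders, it assumes rows are imported in file order, and the real application would do the same thing in PHP):

import csv
import sqlite3

# Placeholder database and table; a real import would use the application's own schema.
conn = sqlite3.connect("import.db")
conn.execute("CREATE TABLE IF NOT EXISTS rows (col_a TEXT, col_b TEXT)")

# How many rows made it in before the crash?
already = conn.execute("SELECT COUNT(*) FROM rows").fetchone()[0]

with open("data.csv", newline="") as f:
    for i, row in enumerate(csv.reader(f)):
        if i < already:
            continue  # this row was imported before the crash, skip it
        conn.execute("INSERT INTO rows VALUES (?, ?)", row[:2])

conn.commit()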

Related

Does Python 3 multiprocessing freeze_support() set the start method to spawn?

Recently I encountered some freezing in my application during long runs.
My program uses an infinite while loop to constantly check for new jobs in a Redis DB, and if there is any job to work on it spawns a new process to run it in the background.
It kept freezing after 20 minutes, sometimes 10. It took me a week to figure out that the problem came from the lack of this line before my while loop:
multiprocessing.set_start_method('spawn')
It looks like Python does not do that on Windows, and since Windows does not support fork it gets stuck.
Anyway, it seems this solves my problem, but I have another question.
In order to make an exe file for this program with something like PyInstaller, I need to add another line, shown below, to make sure it does not freeze when the exe runs:
multiprocessing.freeze_support()
I want to know: does freeze_support() automatically set the start method to 'spawn' too? Should I use both of these lines, or is running just one of them enough? If so, which one should I use from now on?
On Windows, spawn is already the default start method, so it is not necessary to call set_start_method('spawn').
freeze_support() is a different thing that does not affect the start method. You must call it in this scenario to generate a working .exe.
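As a minimal sketch of the usual pattern for a script that will be frozen into a Windows .exe (the worker function here is just a placeholder for the real job handler):

import multiprocessing

def worker(job_id):
    # Placeholder for the real job handler spawned from the Redis loop.
    print("working on job", job_id)

if __name__ == "__main__":
    # Needed when the script is frozen into an .exe (e.g. with PyInstaller);
    # it is a no-op when running as a normal script.
    multiprocessing.freeze_support()
    # On Windows 'spawn' is already the default, so this call is optional there:
    # multiprocessing.set_start_method("spawn")
    p = multiprocessing.Process(target=worker, args=(1,))
    p.start()
    p.join()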

Tailer following changed file

I've been using tailer to tail a log file for a program.
I've been running into some issues because the program whose log I am reading creates a new log file (with the same name) when it restarts; this is a major problem, since tailer will not follow the new log file when that happens. The tailing runs within a thread and shares memory with several other parts of the program, including code that was not started through threading. Since tailer has an active thread open and running, I can't just join the thread: it is still executing code, so it's stuck. Is there a way around this (without using multiprocessing and killing it through that)?
import tailer
for line in tailer.follow(open("mytestfile.log", encoding="utf-8")):
    # do some stuff with the line
    pass
That is an example of the follow call. Any recommendations to get around this?
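One common workaround (nothing tailer provides out of the box, as far as I know) is to drop tailer.follow in favor of a hand-rolled follow loop that watches the file's inode and reopens the path when the file is replaced. A minimal sketch, assuming the log lives at mytestfile.log:

import os
import time

path = "mytestfile.log"

while True:
    try:
        f = open(path, encoding="utf-8")
    except FileNotFoundError:
        time.sleep(0.5)  # log momentarily missing during the restart
        continue
    with f:
        inode = os.fstat(f.fileno()).st_ino
        while True:
            line = f.readline()
            if line:
                # do some stuff with the line
                print(line, end="")
                continue
            try:
                replaced = os.stat(path).st_ino != inode
            except FileNotFoundError:
                replaced = True
            if replaced:
                break  # same name, new file: break so the outer loop reopens it
            time.sleep(0.5)

Unlike tailer.follow, this reads each file from the beginning; seek to the end of the file after the very first open if you only want lines written from now on.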

Datalab occasionally very slow on simple tasks

I've noticed that datalab is occasionally extremely slow (to the point where I believe it's just hanging).
import json
import pprint
import subprocess
import pyspark
For instance, even this really trivial code block takes forever to run. If I keep refreshing the page and rerunning it, sometimes it works. What can cause this?
Resize your VM, or create a new one with more memory/CPUs. You can do this by stopping the VM and then editing its machine type in Compute Engine.
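If you prefer the command line, the same stop/resize/start cycle can be done with gcloud (the instance name, zone, and machine type here are placeholders):

gcloud compute instances stop my-datalab-vm --zone=us-central1-a
gcloud compute instances set-machine-type my-datalab-vm --machine-type=n1-standard-8 --zone=us-central1-a
gcloud compute instances start my-datalab-vm --zone=us-central1-a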

Node.js library to start tailing a file from a specific line or line number?

I am searching for a library/module that starts tailing from a specific line number, so that even if my server dies or restarts it can resume from the last read line. I am a little new to Node.js.
I am unaware of such a module, but all you actually need is some npm library for file read access, plus persisting data about the last read line so the tail can resume after a Node.js restart. Or, even better, you fork a child process and let that process read the data. If it hits an error, the child process terminates with a message to the main Node.js process about the last read line. I hope that helps!
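The question is about Node.js, but the persist-and-resume idea is language-agnostic. Here is a minimal sketch in Python that stores the last read byte offset in a small state file (both file names are placeholders); in Node.js the same approach works with a state file plus fs.createReadStream and its start option.

LOG_PATH = "app.log"            # log being tailed (placeholder)
STATE_PATH = "tail_offset.txt"  # persisted byte offset of the last read position (placeholder)

def load_offset():
    try:
        with open(STATE_PATH) as f:
            return int(f.read().strip() or 0)
    except (FileNotFoundError, ValueError):
        return 0

def save_offset(offset):
    with open(STATE_PATH, "w") as f:
        f.write(str(offset))

def read_new_lines():
    offset = load_offset()
    with open(LOG_PATH, "rb") as f:
        f.seek(offset)
        while True:
            line = f.readline()
            if not line:
                break
            # do something with the decoded line
            print(line.decode("utf-8"), end="")
        save_offset(f.tell())

if __name__ == "__main__":
    # Each run resumes from wherever the previous run stopped.
    read_new_lines()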
This is how I tackled the problem.
I maintained two log files, one of which stores the lines already read. Even if my server dies while reading the main log file, I can skip all the lines I have already processed by comparing the timestamp of the last line in my own log with the timestamps in the main log, skipping lines of the main log until my own log's time is greater than or equal to the main log's. Once the main log's time is greater, I start reading from that point again.

Running a method in the background of the Python interpreter

I've made a class in Python 3.x that acts as a server. One method manages sending and receiving data via UDP/IP using the socket module (the data is stored in self.cmd and self.msr, respectively). I want to be able to modify the self.msr and self.cmd variables from within the Python interpreter while it is running. For example:
>>> from myserver import MyServer
>>> s = MyServer()
>>> s.background_recv_send() # runs in the background, constantly calling s.recv_msr(), s.send_cmd()
>>> process_data(s.msr) # I use the latest received data
>>> s.cmd[0] = 5 # this will be sent automatically
>>> s.msr # I can see what the newest data is
So far, s.background_recv_send() does not exist. I need to manually call s.recv_msr() each time I want to update the value of s.msr (s.recv_msr uses a blocking socket), and then call s.send_cmd() to send s.cmd.
In this particular case, which module makes more sense: multiprocessing or threading?
Any hints on how I could best solve this? I have no experience with either processes or threads (I've just read a lot, but I am still unsure which way to go).
In this case, threading makes the most sense. In short, multiprocessing is for running work in parallel on separate processors, while threading is for doing things in the background within the same process.
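A minimal sketch of what background_recv_send() could look like with threading, assuming the class already has the recv_msr() and send_cmd() methods described in the question (the placeholder bodies here just stand in for the real socket code):

import threading
import time

class MyServer:
    def __init__(self):
        self.cmd = [0]
        self.msr = None
        self._running = False

    def recv_msr(self):
        # Placeholder for the blocking socket receive described in the question.
        time.sleep(0.1)
        self.msr = "latest measurement"

    def send_cmd(self):
        # Placeholder for the socket send described in the question.
        pass

    def background_recv_send(self):
        # A daemon thread keeps receiving and sending without blocking the interpreter prompt.
        self._running = True
        self._worker = threading.Thread(target=self._loop, daemon=True)
        self._worker.start()

    def stop(self):
        self._running = False

    def _loop(self):
        while self._running:
            self.recv_msr()
            self.send_cmd()

After s.background_recv_send() returns, s.msr keeps updating in the background, and any value assigned to s.cmd is picked up by the next send_cmd() call.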

Resources