I am using docker-py to read container logs as a stream by setting the stream flag to True, as indicated in the docs. Basically, I am iterating through all my containers, reading their logs in as a generator, and writing them out to a file like the following:
for service in service_names:
    dkg = self.container.logs(service, stream=True)
    with open(path, 'wb') as output_file:
        try:
            while True:
                line = next(dkg).decode("utf-8")
                print('line is: ' + str(line))
                if not line or "\n" not in line:  # none of these work
                    print('Breaking...')
                    break
                output_file.write(str(line.strip()))
        except Exception as exc:  # nor this
            print('an exception occurred: ' + str(exc))
However, it only reads the first service and hangs at the end of its log. It doesn't break out of the loop or raise an exception (e.g. StopIteration). According to the docs, stream=True should return a generator; when I printed the type of the generator it showed up as docker.types.daemon.CancellableStream, so I don't think it follows the traditional Python generator behaviour of raising StopIteration when next() is called past the end of the container log.
As you can see, I've tried checking whether the line is falsy or contains a newline, and even tried catching any type of exception, but no luck. Is there another way I can determine when the stream for a service has ended, break out of the while loop, and continue writing the next service? The reason I wanted to use a stream is that the large amount of data was causing my system to run low on memory, so I prefer to use a generator.
The problem is that the stream doesn't really stop until the container is stopped; it is just paused, waiting for the next data to arrive. To illustrate this, when it hangs on the first container, if you do docker stop on that container, you'll get a StopIteration exception and your for loop will move on to the next container's logs.
You can tell .logs() not to follow the logs by using follow = False. Curiously, the docs say the default value is False, but that doesn't seem to be the case, at least not for streaming.
I experienced the same problem you did, and this excerpt of code using follow = False does not hang on the first container's logs:
import docker

client = docker.from_env()
container_names = ['container1', 'container2', 'container3']

for container_name in container_names:
    dkg = client.containers.get(container_name).logs(stream=True, follow=False)
    try:
        while True:
            line = next(dkg).decode("utf-8")
            print(line)
    except StopIteration:
        print(f'log stream ended for {container_name}')
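If you also want to write each service's log to a file, as in the original snippet, the same follow=False stream can simply be iterated with a for loop, which ends on its own when the log is exhausted. A minimal sketch, assuming a per-container output path chosen here just for illustration:

import docker

client = docker.from_env()
container_names = ['container1', 'container2', 'container3']

for container_name in container_names:
    # follow=False means the stream stops at the current end of the log
    # instead of waiting for new output from a running container
    log_stream = client.containers.get(container_name).logs(stream=True, follow=False)
    with open(f'{container_name}.log', 'wb') as output_file:
        # iterating the stream needs no manual StopIteration handling;
        # the for loop simply finishes when the log is exhausted
        for chunk in log_stream:
            output_file.write(chunk)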
First, thanks for fixing my post. I'm still not sure how to include a sketch. I've been reading posts here for many months, but I've never posted one before.
My headless RasPi is running two sketches of mine: one reads data from a PM2.5 sensor (PMS7003), and the other is the program listed above, which sends information to another Pi, the client, that turns on a PM2.5-capable air filter. (I live in California.) The program that reads the PMS7003 sorts the data, called max_index, into one of six categories, 0 through 5, and saves the current category to a text file. I'm using 'w' mode during the write operation, so there is only one character in the text file at any time. The server program listed above reads the text file and sends it to a client that turns on the air filter for categories above 2. The client sends the word "done" back to the server to end the transaction.
Until you mentioned it, I didn't realize my mistake, clientsocket.recv(2). I'll fix that and try again.
So, the listener socket should go outside the while loop, leaving the send and receive inside?
Troubleshooting: I start the two programs using nice nohup python3 xxx.py & nice nohup python3 yyy.py. The program that reads the PMS7003 keeps running and updating the text file with the current category, but the server program dies after a few days. top -c -u pi shows only the PMS7003 program running, while the server program is missing. Also, there's nothing in nohup.out or in socketexceptions.txt, and I tried looking through the system logs in /var/log but was overwhelmed by information and found nothing that made sense to me.
Since writing to the socketexceptions.txt file is not in a try/except block, the crash might be happening there.
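One way to rule that out is to make the exception logging itself unable to crash the program. A minimal sketch, assuming the same socketexceptions.txt path used in the code below (the helper name log_error is made up for illustration):

def log_error(message):
    # Append to the log file, but swallow any I/O error so that logging
    # a failure can never itself take the server down.
    try:
        with open("/home/pi/pm25/socketexceptions.txt", "a") as f:
            f.write(message + "\n")
    except OSError:
        pass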
import socket
import time

index = " "
clientsocket = ""

def getmaxindex():
    try:
        with open('/home/pi/pm25/fan.txt', 'r') as f:
            stat = f.read()  # gets max_index from pm25b.py
        return stat
    except:
        with open("/home/pi/pm25/socketexceptions.txt", 'a') as f:
            f.write("Failed to read max index")

def setup(index):
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("192.168.1.70", 5050))
    except:
        with open("/home/pi/pm25/socketexceptions.txt", 'a') as f:
            f.write("Failed to bind")
    try:
        s.listen(1)
        clientsocket, address = s.accept()
        clientsocket.send(index)
        rx = clientsocket.recv(2)
        if rx == "done":
            clientsocket.close()
    except:
        with open("/home/pi/pm25/socketexceptions.txt", 'a') as f:
            f.write("Failed to communicate with client")

while True:
    index = getmaxindex().encode('utf-8')
    setup(index)
    time.sleep(5)
It is unclear what the program is supposed to do and where exactly you run into problems, since there is only a code dump and no useful error description (what does "stop" mean: hang or exit, and where exactly does it stop?). But the following condition can never be met:
rx = clientsocket.recv(2)
if rx == "done":
This will receive at most 2 bytes (recv(2)), which is definitely not enough to hold the value "done". (Note also that recv() returns bytes, so even with enough bytes read, the comparison would have to be against b"done".)
Apart from that, it makes no real sense to recreate the same listener socket again and again just to accept a single client and exchange some data. Instead, the listener should be created once, and accept should be called repeatedly on that same listener socket, where each call results in a new client connection.
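A minimal sketch of that structure, assuming the same address, port, and "done" handshake as the code above (getmaxindex is stubbed out here just to keep the example self-contained):

import socket

def getmaxindex():
    # stub: in the real program this reads the current category from fan.txt
    return "3"

# Create and bind the listener exactly once.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("192.168.1.70", 5050))
listener.listen(1)

while True:
    # Each accept() call yields a new client connection on the same listener.
    clientsocket, address = listener.accept()
    try:
        clientsocket.send(getmaxindex().encode('utf-8'))
        rx = clientsocket.recv(16)   # large enough to hold b"done"
        if rx == b"done":
            pass                     # transaction complete
    finally:
        clientsocket.close()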
I'm using the output-suppression contextmanager described in charleslparker's answer to How to suppress console output in Python? It works beautifully, but it's nested in a larger block of code (snippet below) dedicated to downloading files from a specific website, which uses a higher-level try/except block to catch connection errors. This should be simple, as described in a_guest's answer to the similar question Catching an exception while using a Python 'with' statement.

However, my problem is that I have an extra block of code (specifically, checking that the file sizes match locally and on the website) that should only execute upon successful completion of the download, yet it gets called every time the ConnectionError exception is raised. Basically, the exception raised within the contextmanager does not propagate upwards correctly, and the code deletes the partial files before restarting the process. I want the exception encountered by the download inside the context manager to skip straight to the explicit except block, and I am totally stuck on how to force that.
import os
import sys
from contextlib import contextmanager

@contextmanager
def suppress_stdout():
    with open(os.devnull, "w") as devnull:
        old_stdout = sys.stdout
        sys.stdout = devnull
        try:
            yield
        except (ConnectionErrorType1, ConnectionErrorType2):
            raise ConnectionError()
        finally:
            sys.stdout = old_stdout
while True:
    try:
        with suppress_stdout():
            <download from the website>
        if check_file_sizes:
            <get target file size>
            if download_size != target_size:
                <delete files and try again>
    except ConnectionErrorType1:
        <print error-specific message>
        raise ConnectionError()
    except ConnectionErrorType2:
        <print error-specific message>
        raise ConnectionError()
    except:
        print("Something happened but I don't know what")
        raise ConnectionError()
Any and all insight is appreciated, and I'm happy to provide further context or clarification as needed.
I am trying to automate a long-running job, and I want to be able to upload all console output to another log, such as CloudWatch Logs. For the most part this can be done by making and using a custom function instead of print. But there are machine-learning functions like Model.summary(), or progress bars during training, that write to stdout on their own.
I can get all console output at the very end, via an internal console log. But what I need is real-time uploading of stdout as it is written, by whatever code writes it, so that one can check the progress by looking at the logs on CloudWatch instead of having to log into the machine and check the internal console logs.
Basically what I need is:
From: call_to_stdout -> Console(and probably other stuff)
To: call_to_stdout -> uploadLog() -> Console(and probably other stuff)
pseudocode of what I need
class stdout_PassThru:
    def __init__(self, in_old_stdout):
        self.old_stdout = in_old_stdout

    def write(self, msg):
        self.old_stdout.write(msg)
        uploadLogToCloudwatch(msg)

def uploadLogToCloudwatch(msg):
    # Botocore stuff to upload to Cloudwatch
    pass

myPassThru = stdout_PassThru(sys.stdout)
sys.stdout = myPassThru
I've tried googling this, but the best I ever get is StringIO stuff, where I can capture stdout but cannot do anything with it until the function I called ends and I can insert code again. I would like to run my upload-log code every time stdout is used.
Is this even possible?
Please and thank you.
EDIT: Someone suggested redirecting output to a file. The problem is that this just streams/writes to the file as things are output. I need to call a function that does work on each call to stdout, which is not a stream. If stdout writes out every time it flushes itself, then having the function called at that point would be fine too.
I solved my problem; the solution was sort of hidden in some other answers.
The initial problem I had with this solution is that when it is tested within a Jupyter Notebook, the sys.stdout = myClass(sys.stdout) causes Jupyter to... wait? I'm not sure why, but it never finishes executing the cell.
But when I put it into a python file and ran it with python test.py, it ran perfectly and as expected.
This allows me, in a sense, to pass through calls to print while executing my own function on every call to print.
import sys

def addLog(message):
    # my boto function to upload Cloudwatch logs
    pass

class sendToLog:
    def __init__(self, stream):
        self.stream = stream

    def write(self, o):
        self.stream.write(o)
        addLog(o)
        self.stream.flush()

    def writelines(self, o):
        self.stream.writelines(o)
        addLog(o)
        self.stream.flush()

    def __getattr__(self, attr):
        return getattr(self.stream, attr)

sys.stdout = sendToLog(sys.stdout)
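For example, once sys.stdout has been swapped out this way, an ordinary print call reaches both the real console stream and addLog (assuming addLog is filled in with the actual boto upload):

print("epoch 1/10 finished")   # written to the console and passed to addLog()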
I'm using Python 3.7.4 and I have created two functions: the first executes a callable using multiprocessing.Process, and the second just prints "Hello World". Everything seems to work fine until I try redirecting stdout; doing so prevents me from getting any printed values during the process execution. I have simplified the example as much as possible, and this is the current code that reproduces the problem.
These are my functions:
import io
import multiprocessing
from contextlib import redirect_stdout

def call_function(func: callable):
    queue = multiprocessing.Queue()
    process = multiprocessing.Process(target=lambda: queue.put(func()))
    process.start()
    while True:
        if not queue.empty():
            return queue.get()

def print_hello_world():
    print("Hello World")
This works:
call_function(print_hello_world)
The previous code works and successfully prints "Hello World"
This does not work:
with redirect_stdout(io.StringIO()) as out:
    call_function(print_hello_world)
print(out.getvalue())
With the previous code I do not get anything printed in the console.
Any suggestion would be very much appreciated. I have been able to narrow the problem down to this point, and I think it is related to the process ending after the io.StringIO() is already closed, but I have no idea how to test my hypothesis, much less how to implement a solution.
This is the workaround I found. It seems that if I use a file instead of a StringIO object, I can get things to work.
with open("./tmp_stdout.txt", "w") as tmp_stdout_file:
with redirect_stdout(tmp_stdout_file):
call_function(print_hello_world)
stdout_str = ""
for line in tmp_stdout_file.readlines():
stdout_str += line
stdout_str = stdout_str.strip()
print(stdout_str) # This variable will have the captured stdout of the process
Another thing that might be important to know is that the child process buffers its stdout, meaning that the prints only show up after the function has executed or failed. To solve this you can force stdout to flush when needed within the function that is being called, which in this case would be inside print_hello_world (I actually had to do this for a daemon process that needed to be terminated if it ran for more than a specified time):
sys.stdout.flush() # This will force the stdout to be printed
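Applied to the example above, the flush goes at the end of the function that runs in the child process; a minimal sketch (using print("...", flush=True) instead would work just as well):

import sys

def print_hello_world():
    print("Hello World")
    sys.stdout.flush()  # push the buffered output to the redirected file right away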
I am trying to update the firmware of a controller through a serial interface. To do this, I must send a reboot message to the controller (no problem there) and then send another message (the character 'w') THE MOMENT it starts up so that it may start up in write mode. This is easily done with the minicom utility by pressing w continuously while the device restarts.
I want to achieve this functionality using Python code instead, but I can't figure out how to keep sending the message until the device is up without raising exceptions (since the device is not yet connected).
This is what I have tried, but it does not work (with pyserial):
import time
import serial

def send_w(serial_port, baud_rate):
    msgw = "w_"
    ans = ""
    ser = serial.Serial(port=serial_port, baudrate=baud_rate, timeout=10)
    ser.write(msgw)
    ans = ser.read(24)
    ser.close()
    print(ans)
    return ans

def set_firmware_version(serial_port, baud_rate):
    s = ""
    try:
        with serial.Serial(serial_port, baud_rate, timeout=1) as ser:
            msgr = "%reset " + sk + "_"   # sk is defined elsewhere in the script
            ser.write(msgr)
            ser.close()
            print("reset")
    except (IOError) as e:
        print("Error in: set_firmware_version")
        print(e)
        return s
    time.sleep(1)
    send_w(serial_port, baud_rate)

set_firmware_version(sp, br)   # sp and br (port and baud rate) are set elsewhere
This gives the following error:
serial.serialutil.SerialException: device reports readiness to read but returned no data (device disconnected or multiple access on port?)
I also tried sending the messages in a loop with a short timeout, but had the same problem. Is there any way to send a message continuously and disregard exceptions if the device is not found?
Any help will be greatly appreciated.
Full Traceback:
Traceback (most recent call last):
  File "mc_config.py", line 69, in <module>
    set_firmware_version(sp,br)
  File "mc_config.py", line 64, in set_firmware_version
    send_w(serial_port, baud_rate)
  File "mc_config.py", line 46, in send_w
    ans = ser.read(24)
  File "/home/avidbots/.local/lib/python2.7/site-packages/serial/serialposix.py", line 501, in read
    'device reports readiness to read but returned no data '
serial.serialutil.SerialException: device reports readiness to read but returned no data (device disconnected or multiple access on port?)
(I am using ubuntu 16.04 and python 3)
What if you put the code that raises into a try block and then catch the exception with an except serial.serialutil.SerialException: clause?
Clearly there's a significant window of time in which to submit the w (otherwise the "press w" method wouldn't work as often as it does). Your requirement, then, is to retry only the part of the code that's absolutely necessary to send the w, so that you send it quickly enough to "catch" the system in its boot-up state. Since the backtrace shows that the exception occurs in send_w, you can add a try/except block and a while loop around what is now a single line at the end of set_firmware_version.
Instead of just this:
send_w(serial_port, baud_rate)
Something like this might solve the problem:
while True:
    try:
        send_w(serial_port, baud_rate)
        break
    except serial.serialutil.SerialException:
        pass  # retry
You may need to change your imports to expose that exception, FYI. You should also consider whether you're catching too many possible exceptions; that exception might represent other errors that shouldn't be retried. And you might need to add a small sleep in the loop, since as written this is essentially a busy-wait loop (https://en.wikipedia.org/wiki/Busy_waiting).
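Putting those suggestions together, a hedged sketch of the retry with a short delay and an attempt limit, assuming the send_w function from the question (the 0.05-second delay and 200-attempt cap are arbitrary values chosen for illustration):

import time
import serial

def send_w_with_retry(serial_port, baud_rate, max_attempts=200, delay=0.05):
    # Keep trying to send 'w' while the controller reboots; give up after
    # max_attempts so a missing device doesn't loop forever.
    for attempt in range(max_attempts):
        try:
            return send_w(serial_port, baud_rate)
        except serial.serialutil.SerialException:
            time.sleep(delay)  # brief pause instead of a pure busy-wait
    raise RuntimeError("device never entered write mode")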