How to combine multiple custom management commands in Django?

I wrote a set of commands for my Django project in the usual way. Is it possible to combine multiple commands into one command?
class Command(BaseCommand):
    """ import all files in the media dir and add them to the db,
    then call the process to generate thumbnails """
    def handle(self, *args, **options):
        ...
To keep the steps simple I have separate commands: import files, read metadata from files, create thumbnails, etc. The goal now is to create a "do all" command that somehow imports these commands and executes them one after another.
How to do that?

You can define a "do all" command and use django.core.management.call_command to run all your subcommands. Try something like the code below:
from django.core.management import call_command
from django.core.management.base import BaseCommand

class FoobarCommand(BaseCommand):
    """ call foo and bar commands """
    def handle(self, *args, **options):
        call_command("foo_command", *args, **options)
        call_command("bar_command", *args, **options)


Python unittest framework, unfortunately no built-in timeout possibility

I'm using the Python unittest framework to perform unit tests. The Python version I'm using is 3.6, on the Windows OS.
My production code is currently a little bit unstable (as I added some new functionality) and tends to hang up in some internal asynchronous loops. I'm currently working on fixing those hangups.
However, tracking those bugs down is hindered by the corresponding test cases hanging up, too.
I would simply like the corresponding test cases to stop after e.g. 500 ms if they do not run through, and to be marked as FAILED, so that all the other test cases can continue.
Unfortunately, the unittest framework does not support timeouts (if I had known in advance ...). Therefore I'm searching for a workaround. If some package added that missing functionality to the unittest framework, I would be willing to try it. What I don't want is for my production code to rely on too many non-standard packages; for the unit tests this would be OK.
I'm a little lost on how to add such functionality to the unit tests.
Therefore I just tried out some code from here: How to limit execution time of a function call?. As it was said somewhere that threading should not be used to implement timeouts, I tried using multiprocessing, too.
Please note that the solutions proposed in How to specify test timeout for python unittest? do not work either; they are designed for Linux (using SIGALRM).
import multiprocessing
# import threading
import time
import unittest

class InterruptableProcess(multiprocessing.Process):
# class InterruptableThread(threading.Thread):
    def __init__(self, func, *args, **kwargs):
        super().__init__()
        self._func = func
        self._args = args
        self._kwargs = kwargs
        self._result = None

    def run(self):
        self._result = self._func(*self._args, **self._kwargs)

    @property
    def result(self):
        return self._result

class timeout:
    def __init__(self, sec):
        self._sec = sec

    def __call__(self, f):
        def wrapped_f(*args, **kwargs):
            it = InterruptableProcess(f, *args, **kwargs)
            # it = InterruptableThread(f, *args, **kwargs)
            it.start()
            it.join(self._sec)
            if not it.is_alive():
                return it.result
            # it.terminate()
            raise TimeoutError('execution expired')
        return wrapped_f

class MyTests(unittest.TestCase):
    def setUp(self):
        # some initialization
        pass

    def tearDown(self):
        # some cleanup
        pass

    @timeout(0.5)
    def test_XYZ(self):
        # looong running
        self.assertEqual(...)
The code behaves very differently using threads vs. processes.
In the first case it runs through, but execution of the function continues despite the timeout (which is unwanted). In the second case it complains about unpicklable objects.
In both cases I would like to know how to do proper cleanup, e.g. how to call the unittest.TestCase.tearDown method on timeout from within the decorator class.

Scrapy Spider Close

I have a script that I need to run after my spider closes. I see that Scrapy has a signal called spider_closed(), but what I don't understand is how to incorporate it into my script. What I am looking to do is: once the scraper is done crawling, I want to combine all my CSV files and then load them to sheets. If anyone has any examples of how this can be done, that would be great.
As per the example in the documentation, you add the following to your Spider:
# This function remains as-is.
@classmethod
def from_crawler(cls, crawler, *args, **kwargs):
    spider = super().from_crawler(crawler, *args, **kwargs)
    crawler.signals.connect(spider.spider_closed, signal=signals.spider_closed)
    return spider

# This is where you do your CSV combination.
def spider_closed(self, spider):
    # Whatever is here will run when the spider is done.
    combine_csv_to_sheet()
As per the comments on my other answer about a signal-based solution, here is a way to run some code after multiple spiders are done. This does not involve using the spider_closed signal.
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings
process = CrawlerProcess(get_project_settings())
process.crawl('spider1')
process.crawl('spider2')
process.crawl('spider3')
process.crawl('spider4')
process.start()
# CSV combination code goes here. It will only run when all the spiders are done.
# ...

pyqt5, looping over QtWidgets

I have created a dialog with Qt Designer. I want to modify certain widgets (e.g. buttons) programmatically.
I have my pyuic-generated gui.py and have created a guiLogic.py defining the class "myGui".
class myGui(QtWidgets.QDialog, inmoovServoGui.Ui_Dialog):
The class's __init__ then calls the "setupUi" function from the generated gui.py:
def __init__(self, *args, **kwargs):
    QtWidgets.QDialog.__init__(self, *args, **kwargs)
    self.setupUi(self)
    ...
In my Python IDE (PyCharm), in the Evaluate Expression window, I can now type my class name followed by a dot ("myGui.") and it pops up a list of all my QWidgets (among others).
I have tried to get the same list but
myGui.__dict__.items()
does not contain the QWidgets. What am I doing wrong?
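The likely cause: setupUi assigns the widgets to the instance (self.someButton = ...), so they live in the instance's __dict__, while myGui.__dict__ only holds what is defined in the class body. A framework-free sketch of the difference (the Demo class is purely illustrative):

```python
class Demo:
    class_attr = 1                 # lives in Demo.__dict__

    def __init__(self):
        self.instance_attr = 2     # lives in the instance's __dict__,
                                   # like widgets created in setupUi(self)

d = Demo()
print('class_attr' in Demo.__dict__)      # True
print('instance_attr' in Demo.__dict__)   # False
print('instance_attr' in d.__dict__)      # True
```

So iterate over an instance, e.g. vars(my_dialog).items(); inside a Qt dialog you could also collect the widgets directly with self.findChildren(QtWidgets.QWidget).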

Python - Multiprocessing vs Multithreading for file-based work

I have created a GUI (using wxPython) that allows the user to search for files and their content. The program consists of three parts:
The main GUI window
The searching mechanism (separate class)
The output window that displays the results of the file search (separate class)
Currently I'm using Python's threading module to run the searching mechanism (2) in a separate thread, so that the main GUI can still work flawlessly. I'm passing the results to the output window (3) during runtime using a Queue. This works fine for less performance-demanding file-reading actions, but as soon as the searching mechanism requires more performance, the main GUI window (1) starts lagging.
This is roughly the schematic:
import threading
import os
import wx

class MainWindow(wx.Frame):  # this is point (1)
    def __init__(self, parent, ...):
        # Initialize frame and panels etc.
        self.button_search = wx.Button(self, label="Search")
        self.button_search.Bind(wx.EVT_BUTTON, self.onSearch)

    def onSearch(self, event):
        """Run file search in a separate thread"""  # Here is the kernel of my question
        filesearch = FileSearch()
        searcher = threading.Thread(target=filesearch.Run, args=(args,))
        searcher.start()

class FileSearch():  # this is point (2)
    def __init__(self):
        ...

    def Run(self, args):
        """Performs file search"""
        for root, dirs, files in os.walk(...):
            for file in files:
                ...

    def DetectEncoding(self):
        """Detects encoding of file for reading"""
        ...

class OutputWindow(wx.Frame):  # this is point (3)
    def __init__(self, parent, ...):
        # Initialize output window
        ...

    def AppendItem(self, data):
        """Appends a file item to the list"""
        ...
My questions:
Is Python's multiprocessing module better suited for this specific performance-demanding job?
If yes, which way of interprocess communication (IPC) should I choose to send the results from the searching mechanism class (2) to the output window (3), and how should I implement it schematically?

Porting a sub-class of python2 'file' class to python3

I have legacy code that declares class TiffFile(file). What is the Python 3 way to write it?
I tried to replace the following Python 2 code:
class TiffFile(file):
    def __init__(self, path):
        file.__init__(self, path, 'r+b')
with this in Python 3:
class TiffFile(RawIOBase):
    def __init__(self, path):
        super(TiffFile, self).__init__(path, 'r+b')
But now I am getting TypeError: object.__init__() takes no parameters
RawIOBase.__init__ does not take any arguments; that is where your error is.
Your TiffFile implementation also inherits from file, which is not a class, but a constructor function, so your Python 2 implementation is non-idiomatic, and someone could even claim it is wrong-ish. You should use open instead of file, and in a class context you should use an io module class for input and output.
You can use open to return a file object to use as you would use file in Python 2.7, or you can use io.FileIO in both Python 2 and Python 3 for accessing file streams, like you would do with open.
So your implementation would be more like:
import io
import io

class TiffFile(io.FileIO):
    def __init__(self, name, mode='r+b', *args, **kwargs):
        super(TiffFile, self).__init__(name, mode, *args, **kwargs)
This should work in all currently supported Python versions and allow you the same interface as your old implementation while being more correct and portable.
Are you actually using r+b for opening the file in read-write-binary mode on Windows? You should probably be using rb mode if you are not writing into the file, but just reading the TIFF data. rb would open the file in binary mode for reading only. The appended + sets the file to open in read-write mode.
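A quick self-contained check of the ported class; the throwaway file and its four magic bytes are just for the demo, not real TIFF handling:

```python
import io
import os
import tempfile

class TiffFile(io.FileIO):
    def __init__(self, name, mode='r+b', *args, **kwargs):
        super(TiffFile, self).__init__(name, mode, *args, **kwargs)

# Create a throwaway file to open; 'r+' requires an existing file.
fd, path = tempfile.mkstemp()
os.write(fd, b'II*\x00')  # little-endian TIFF magic bytes, for flavour
os.close(fd)

with TiffFile(path) as tif:
    print(tif.read(4))  # b'II*\x00'
    tif.seek(0)
    tif.write(b'MM')    # the '+' in the mode allows writing, too

os.remove(path)
```

FileIO accepts and ignores the 'b' in the mode string, since it always operates in binary, so 'r+b' works as a drop-in for the old Python 2 call.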
