crontab Failed to import the site module - python-3.x

I want crontab to run a Python script for a public welfare project.
The script runs successfully in PyCharm, but when I run it with crontab there is an error.
Environment: macOS, Python 3.5
My crontab (after typing 'crontab -e') looks like this:
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/Users/yy/anaconda/bin/python3:/Users/yy/anaconda/bin/
32 14 * * * PATH=$PATH:/Users/yy/anaconda/bin/ cd /Users/yy/PycharmProjects/selenium_test/ && /Users/yy/anaconda/bin/python3 /Users/yy/PycharmProjects/selenium_test/selenium_test.py >> /Users/yy/PycharmProjects/selenium_test/log.txt
I got the following error in /var/mail/username:
From yy@YY.local Thu Jun 8 14:32:00 2017
Return-Path: <yy@YY.local>
X-Original-To: yy
Delivered-To: yy@YY.local
Received: by YY.local (Postfix, from userid 501)
        id A7F1F38FFFCC; Thu, 8 Jun 2017 14:32:00 -0500 (CDT)
From: yy@YY.local (Cron Daemon)
To: yy@YY.local
Subject: Cron <yy@YY> PATH=$PATH:/Users/yy/anaconda/bin/ cd /Users/yy/PycharmProjects/selenium_test/ && /Users/yy/anaconda/bin/python3 /Users/yy/PycharmProjects/selenium_test/selenium_test.py >> /Users/yy/PycharmProjects/selenium_test/log.txt
X-Cron-Env: <SHELL=/bin/sh>
X-Cron-Env: <PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/Users/yy/anaconda/bin/python3:/Users/yy/anaconda/bin/>
X-Cron-Env: <LOGNAME=yy>
X-Cron-Env: <USER=yy>
X-Cron-Env: <HOME=/Users/yy>
Message-Id: <20170608193200.A7F1F38FFFCC@YY.local>
Date: Thu, 8 Jun 2017 14:32:00 -0500 (CDT)
Failed to import the site module
Traceback (most recent call last):
  File "/Users/yy/anaconda/lib/python3.5/site.py", line 567, in <module>
    main()
  File "/Users/yy/anaconda/lib/python3.5/site.py", line 550, in main
    known_paths = addsitepackages(known_paths)
  File "/Users/yy/anaconda/lib/python3.5/site.py", line 327, in addsitepackages
    addsitedir(sitedir, known_paths)
  File "/Users/yy/anaconda/lib/python3.5/site.py", line 206, in addsitedir
    addpackage(sitedir, name, known_paths)
  File "/Users/yy/anaconda/lib/python3.5/site.py", line 162, in addpackage
    for n, line in enumerate(f):
  File "/Users/yy/anaconda/lib/python3.5/encodings/ascii.py", line 26, in decode
    return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe7 in position 127: ordinal not in range(128)
I have spent two hours on this error, but no solution I found works.
Please help. Thanks!

Edit: I use Python 3.5, so the default encoding should be UTF-8, which makes the UnicodeDecodeError strange...

The problem is one of encoding. Python 3.5 defaults to UTF-8 in a normal shell, but cron runs without your locale settings, so site.py falls back to ASCII, while the .pth files of some packages I installed contain non-ASCII characters.
I solved the problem by modifying the site.py file at /Users/user_name/anaconda/lib/python3.5/site.py:
Line 158:
f = open(fullname, "r") -> f = open(fullname, "rb")
Line 163:
if line.startswith("#"): -> if line.startswith(b"#"):
Line 166:
if line.startswith(("import ", "import\t")): -> if line.startswith((b"import ", b"import\t")):
Line 170:
dir, dircase = makepath(sitedir, line) -> dir, dircase = makepath(sitedir, line.decode("utf-8"))
I don't think it's a good idea to modify Anaconda's site.py...
But this fixes the problem.
Hope it will be helpful.
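The edits above amount to reading each .pth file as raw bytes and decoding explicitly, so the result no longer depends on cron's (missing) locale. A minimal sketch of that logic is below; read_pth_lines is a hypothetical helper of mine, not part of site.py. A less invasive alternative may be to put LANG=en_US.UTF-8 at the top of the crontab, so Python inherits a UTF-8 locale instead of ASCII.

```python
def read_pth_lines(fullname):
    """Read a .pth file the way the patched addpackage() does:
    as bytes, decoded explicitly as UTF-8, independent of locale."""
    lines = []
    with open(fullname, "rb") as f:
        for raw in f:
            line = raw.decode("utf-8")  # explicit, locale-independent
            if line.startswith("#"):    # skip comments, as site.py does
                continue
            lines.append(line.rstrip("\n"))
    return lines
```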

Related

Using Python 3 "requests" package to access web server with IP V6 address: works with W10, not with Raspberry Pi OS

We have a device currently under development that we want to test for different features and protocols. One of its features is an embedded HTTP(S) server, and we want to perform regression testing when we change the device source code. For that purpose we use Jenkins and a Raspberry Pi 4 (4 GB) as a slave, with PyTest as the test framework, although our development environment is Windows 10.
We noticed a problem: when trying to connect to the device HTTP server with the IP V6 address on the RPi, we get a "[Errno 22] Invalid argument" error.
We reduced the test to a minimal Python 3 script to no avail: the same script works on Windows 10 but not on RPi.
Here is the script:
import os
import requests
ipv6_address = 'fe80::211:ff:fe4c:e565'
os.environ['no_proxy'] = 'localhost,foobar.com,192.168.10.*,{},[{}]'.format(ipv6_address, ipv6_address)
r = requests.get('https://[{}]/'.format(ipv6_address), verify=False, timeout=5)
print(r.status_code)
print(r.headers)
print(r.url)
Here are the results for Windows 10 (version 2004 19041.867):
$ python --version
Python 3.7.7
$ python ~/test_req.py
C:\Python37-32\lib\site-packages\urllib3\connectionpool.py:1020: InsecureRequestWarning: Unverified HTTPS request is being made to host 'fe80::211:ff:fe4c:e565'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
InsecureRequestWarning,
200
{'Server': '', 'Connection': 'Keep-Alive', 'Keep-Alive': '', 'Persist': '', 'Content-Type': 'text/html', 'Content-Length': '1429', 'Last-Modified': 'FRI JUL 07 16:43:06 2017', 'Etag': '"21D47F50-00000595-595FBA1A"'}
https://[fe80::211:ff:fe4c:e565]/
Result for Raspberry Pi (Linux raspberrypi 5.4.83-v7l-ALADDIN+ #1 SMP Tue Feb 9 14:23:45 CET 2021 armv7l GNU/Linux) :
[NOTE: we have compiled a specific version of Linux for the Ixxat SocketCAN driver from the default kernel configuration, except the name]
$ python3 --version
Python 3.7.3
$ python3 ~/test_req.py
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 159, in _new_conn
    (self._dns_host, self.port), self.timeout, **extra_kw)
  File "/usr/lib/python3/dist-packages/urllib3/util/connection.py", line 80, in create_connection
    raise err
  File "/usr/lib/python3/dist-packages/urllib3/util/connection.py", line 70, in create_connection
    sock.connect(sa)
OSError: [Errno 22] Invalid argument

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 600, in urlopen
    chunked=chunked)
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 343, in _make_request
    self._validate_conn(conn)
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 841, in _validate_conn
    conn.connect()
  File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 301, in connect
    conn = self._new_conn()
  File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 168, in _new_conn
    self, "Failed to establish a new connection: %s" % e)
urllib3.exceptions.NewConnectionError: <urllib3.connection.VerifiedHTTPSConnection object at 0xb6369c30>: Failed to establish a new connection: [Errno 22] Invalid argument

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/pi/.local/lib/python3.7/site-packages/requests/adapters.py", line 449, in send
    timeout=timeout
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 638, in urlopen
    _stacktrace=sys.exc_info()[2])
  File "/usr/lib/python3/dist-packages/urllib3/util/retry.py", line 398, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='fe80::211:ff:fe4c:e565', port=443): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0xb6369c30>: Failed to establish a new connection: [Errno 22] Invalid argument'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/pi/test_req.py", line 9, in <module>
    r = requests.get('https://[{}]/'.format(ipv6_address), verify=False, timeout=5)
  File "/home/pi/.local/lib/python3.7/site-packages/requests/api.py", line 76, in get
    return request('get', url, params=params, **kwargs)
  File "/home/pi/.local/lib/python3.7/site-packages/requests/api.py", line 61, in request
    return session.request(method=method, url=url, **kwargs)
  File "/home/pi/.local/lib/python3.7/site-packages/requests/sessions.py", line 542, in request
    resp = self.send(prep, **send_kwargs)
  File "/home/pi/.local/lib/python3.7/site-packages/requests/sessions.py", line 655, in send
    r = adapter.send(request, **kwargs)
  File "/home/pi/.local/lib/python3.7/site-packages/requests/adapters.py", line 516, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='fe80::211:ff:fe4c:e565', port=443): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0xb6369c30>: Failed to establish a new connection: [Errno 22] Invalid argument'))
We would like to know whether someone has an idea of the source of this "invalid argument" error.
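One avenue worth checking (an assumption on my part, not confirmed in the thread): fe80::/10 addresses are link-local, and on Linux connecting to one generally requires a zone index naming the outgoing interface, while Windows can often pick a default. In a URL, the '%' of the zone index must itself be percent-encoded as '%25'; whether requests/urllib3 accepts this form depends on the installed version. A sketch of the URL construction:

```python
ipv6_address = 'fe80::211:ff:fe4c:e565'
zone = 'eth0'  # hypothetical interface name; check with `ip link` on the Pi

# RFC 6874 form: the zone index is appended after a percent-encoded '%'
url = 'https://[{}%25{}]/'.format(ipv6_address, zone)
print(url)  # https://[fe80::211:ff:fe4c:e565%25eth0]/
```

At the raw socket level the same zone appears as the scope id, the fourth element of the sockaddr tuple that socket.getaddrinfo returns for 'fe80::...%eth0'.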

Using dateutil.parser to account for timezones, but parse won't recognize tzinfos?

I'm trying to extract timestamps from a list that contains different timezones, using dateutil.parser.
I believe I want the parse function for this, passing in timezone information, but it doesn't seem to accept it. Can someone tell me where I'm going wrong?
from dateutil.parser import parse

timezone_info = {
    "PDT": "UTC -7",
    "PST": "UTC -8",
}

date_list = ['Oct 21, 2019 19:30 PDT',
             'Nov 4, 2019 18:30 PST']

for dates in date_list:
    print(parse(dates))
# This gives:
# 2019-10-21 19:30:00
# 2019-11-04 18:30:00

for dates in date_list:
    print(parse(dates, tzinfos = timezone_info))
This is the output:
2019-10-21 19:30:00
2019-11-04 18:30:00
C:\Users\mbsta\Anaconda3\envs\untitled2\lib\site-packages\dateutil\parser\_parser.py:1218: UnknownTimezoneWarning: tzname PDT identified but not understood. Pass `tzinfos` argument in order to correctly return a timezone-aware datetime. In a future version, this will raise an exception.
  category=UnknownTimezoneWarning)
C:\Users\mbsta\Anaconda3\envs\untitled2\lib\site-packages\dateutil\parser\_parser.py:1218: UnknownTimezoneWarning: tzname PST identified but not understood. Pass `tzinfos` argument in order to correctly return a timezone-aware datetime. In a future version, this will raise an exception.
  category=UnknownTimezoneWarning)
Traceback (most recent call last):
  File "C:/Users/mbsta/PycharmProjects/untitled2/tester.py", line 16, in <module>
    print(parse(dates, tzinfos = timezone_info))
  File "C:\Users\mbsta\Anaconda3\envs\untitled2\lib\site-packages\dateutil\parser\_parser.py", line 1374, in parse
    return DEFAULTPARSER.parse(timestr, **kwargs)
  File "C:\Users\mbsta\Anaconda3\envs\untitled2\lib\site-packages\dateutil\parser\_parser.py", line 660, in parse
    ret = self._build_tzaware(ret, res, tzinfos)
  File "C:\Users\mbsta\Anaconda3\envs\untitled2\lib\site-packages\dateutil\parser\_parser.py", line 1185, in _build_tzaware
    tzinfo = self._build_tzinfo(tzinfos, res.tzname, res.tzoffset)
  File "C:\Users\mbsta\Anaconda3\envs\untitled2\lib\site-packages\dateutil\parser\_parser.py", line 1175, in _build_tzinfo
    tzinfo = tz.tzstr(tzdata)
  File "C:\Users\mbsta\Anaconda3\envs\untitled2\lib\site-packages\dateutil\tz\_factories.py", line 69, in __call__
    cls.instance(s, posix_offset))
  File "C:\Users\mbsta\Anaconda3\envs\untitled2\lib\site-packages\dateutil\tz\_factories.py", line 22, in instance
    return type.__call__(cls, *args, **kwargs)
  File "C:\Users\mbsta\Anaconda3\envs\untitled2\lib\site-packages\dateutil\tz\tz.py", line 1087, in __init__
    raise ValueError("unknown string format")
ValueError: unknown string format
Process finished with exit code 1
I believe the issue here is that the offsets you are specifying are not a valid format for tzstr, which expects something that looks like a TZ variable. If you change the strings to "PST+8" and "PDT+7", respectively, it will work as intended.
That said, I think you'd be much better off using a tzfile, which is one of the main things that tzinfos is for:
from dateutil import parser
from dateutil import tz

PACIFIC = tz.gettz("America/Los_Angeles")
timezone_info = {"PST": PACIFIC, "PDT": PACIFIC}

date_list = ["Oct 21, 2019 19:30 PDT",
             "Nov 4, 2019 18:30 PST"]

for dtstr in date_list:
    print(parser.parse(dtstr, tzinfos=timezone_info))
This prints:
2019-10-21 19:30:00-07:00
2019-11-04 18:30:00-08:00
And since it attaches a full time zone offset, you can do arithmetic on the results without worrying (since it's a full time zone, not a fixed offset).
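To illustrate that last point with only the standard library (zoneinfo, Python 3.9+, rather than dateutil): arithmetic on an aware datetime keeps the wall-clock time, and the UTC offset is recomputed when the result lands on the other side of a DST transition.

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

PACIFIC = ZoneInfo("America/Los_Angeles")

dt = datetime(2019, 11, 2, 18, 30, tzinfo=PACIFIC)  # still PDT
later = dt + timedelta(days=2)  # crosses the Nov 3, 2019 fall-back

print(dt.utcoffset())     # -1 day, 17:00:00  (UTC-07:00)
print(later.utcoffset())  # -1 day, 16:00:00  (UTC-08:00)
```

With a fixed-offset tzinfo, both lines would show the same offset and the second timestamp would be an hour off.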

HTTPError: Forbidden - File "/home/evans/anaconda3/envs/myenv/lib/python3.8/urllib/request.py",

Hi, I'm trying to run the code below and I get an error, HTTPError: Forbidden. It tells me that the problem line is in the request.py file in the urllib folder. I want to extract data from an online website.
This is the code I am trying to run:
import pandas as pd
import geopandas as gpd
data = pd.read_html('https://www.worldometers.info/coronavirus/')
And this is the response I get from the Spyder console:
Python 3.8.2 (default, Mar 26 2020, 15:53:00)
Type "copyright", "credits" or "license" for more information.
IPython 7.13.0 -- An enhanced Interactive Python.
runfile('/home/evans/Desktop/GIS DEVELOPMENTS/PROJECTS/Coronavirus2020.py', wdir='/home/evans/Desktop/GIS DEVELOPMENTS/PROJECTS')
Traceback (most recent call last):
  File "/home/evans/Desktop/GIS DEVELOPMENTS/PROJECTS/Coronavirus2020.py", line 5, in <module>
    data = pd.read_html('https://www.worldometers.info/coronavirus/')
  File "/home/evans/anaconda3/envs/myenv/lib/python3.8/site-packages/pandas/io/html.py", line 1085, in read_html
    return _parse(
  File "/home/evans/anaconda3/envs/myenv/lib/python3.8/site-packages/pandas/io/html.py", line 895, in _parse
    tables = p.parse_tables()
  File "/home/evans/anaconda3/envs/myenv/lib/python3.8/site-packages/pandas/io/html.py", line 213, in parse_tables
    tables = self._parse_tables(self._build_doc(), self.match, self.attrs)
  File "/home/evans/anaconda3/envs/myenv/lib/python3.8/site-packages/pandas/io/html.py", line 733, in _build_doc
    raise e
  File "/home/evans/anaconda3/envs/myenv/lib/python3.8/site-packages/pandas/io/html.py", line 714, in _build_doc
    with urlopen(self.io) as f:
  File "/home/evans/anaconda3/envs/myenv/lib/python3.8/site-packages/pandas/io/common.py", line 141, in urlopen
    return urllib.request.urlopen(*args, **kwargs)
  File "/home/evans/anaconda3/envs/myenv/lib/python3.8/urllib/request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "/home/evans/anaconda3/envs/myenv/lib/python3.8/urllib/request.py", line 531, in open
    response = meth(req, response)
  File "/home/evans/anaconda3/envs/myenv/lib/python3.8/urllib/request.py", line 640, in http_response
    response = self.parent.error(
  File "/home/evans/anaconda3/envs/myenv/lib/python3.8/urllib/request.py", line 569, in error
    return self._call_chain(*args)
  File "/home/evans/anaconda3/envs/myenv/lib/python3.8/urllib/request.py", line 502, in _call_chain
    result = func(*args)
  File "/home/evans/anaconda3/envs/myenv/lib/python3.8/urllib/request.py", line 649, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
HTTPError: Forbidden
The problem at first was that lxml was missing, so I decided to install it from my environment using pip3 install lxml, but this is the message I got:
Requirement already satisfied: lxml in /usr/lib/python3/dist-packages (4.4.1)
But that is not in my environment folder; it is in the base/root folder. So I just used pip install lxml instead, and it worked. Then when I executed the script, it returned the above error.
I will appreciate any guidance to help me overcome this problem.
It's probably the site blocking the scraping; see HTTP error 403 in Python 3 Web Scraping. Hence try:
import pandas as pd
from urllib.request import Request, urlopen

req = Request('https://www.worldometers.info/coronavirus/', headers={'User-Agent': 'Mozilla/5.0'})
webpage = urlopen(req).read()
tables = pd.read_html(webpage)
df = tables[0]
print(df.head())
Outputs:
Country,Other TotalCases NewCases TotalDeaths NewDeaths TotalRecovered \
0 USA 123781 +203 2229.0 +8 3238.0
1 Italy 92472 NaN 10023.0 NaN 12384.0
2 Spain 78797 +5,562 6528.0 +546 14709.0
3 Germany 58247 +552 455.0 +22 8481.0
4 Iran 38309 +2,901 2640.0 +123 12391.0
ActiveCases Serious,Critical Tot Cases/1M pop Deaths/1M pop 1stcase
0 118314 2666.0 374.0 7.0 Jan 20
1 70065 3856.0 1529.0 166.0 Jan 29
2 57560 4165.0 1685.0 140.0 Jan 30
3 49311 1581.0 695.0 5.0 Jan 26
4 23278 3206.0 456.0 31.0 Feb 18

Running clustalw on Google Cloud Platform with an error generating the .aln file on Ubuntu

I was trying to run clustalw from the Biopython library of Python 3 on Google Cloud Platform, then generate a phylogenetic tree from the .dnd file using the Phylo library.
The code runs perfectly with no errors on my local system. However, when it runs on Google Cloud Platform it produces the following error:
python3 clustal.py
Traceback (most recent call last):
  File "clustal.py", line 9, in <module>
    align = AlignIO.read("opuntia.aln", "clustal")
  File "/home/lhcy3w/.local/lib/python3.5/site-packages/Bio/AlignIO/__init__.py", line 435, in read
    first = next(iterator)
  File "/home/lhcy3w/.local/lib/python3.5/site-packages/Bio/AlignIO/__init__.py", line 357, in parse
    with as_handle(handle, 'rU') as fp:
  File "/usr/lib/python3.5/contextlib.py", line 59, in __enter__
    return next(self.gen)
  File "/home/lhcy3w/.local/lib/python3.5/site-packages/Bio/File.py", line 113, in as_handle
    with open(handleish, mode, **kwargs) as fp:
FileNotFoundError: [Errno 2] No such file or directory: 'opuntia.aln'
If I run sudo python3 clustal.py, the error is:
  File "clustal.py", line 1, in <module>
    from Bio import AlignIO
ImportError: No module named 'Bio'
If I run it interactively in Python, the following happens:
Python 3.5.3 (default, Sep 27 2018, 17:25:39)
[GCC 6.3.0 20170516] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from Bio.Align.Applications import ClustalwCommandline
>>> in_file = "opuntia.fasta"
>>> clustalw_cline = ClustalwCommandline("/usr/bin/clustalw", infile=in_file)
>>> clustalw_cline()
('\n\n\n CLUSTAL 2.1 Multiple Sequence Alignments\n\n\nSequence format is Pearson\nSequence 1: gi|6273291|gb|AF191665.1|AF191665 902 bp\nSequence 2: gi|6273290|gb
|AF191664.1|AF191664 899 bp\nSequence 3: gi|6273289|gb|AF191663.1|AF191663 899 bp\nSequence 4: gi|6273287|gb|AF191661.1|AF191661 895 bp\nSequence 5: gi|627328
6|gb|AF191660.1|AF191660 893 bp\nSequence 6: gi|6273285|gb|AF191659.1|AF191659 894 bp\nSequence 7: gi|6273284|gb|AF191658.1|AF191658 896 bp\n\n', '\n\nERROR:
Cannot open output file [opuntia.aln]\n\n\n')
Here is my clustal.py file:
from Bio import AlignIO
from Bio import Phylo
import matplotlib
from Bio.Align.Applications import ClustalwCommandline
in_file = "opuntia.fasta"
clustalw_cline = ClustalwCommandline("/usr/bin/clustalw", infile=in_file)
clustalw_cline()
tree = Phylo.read("opuntia.dnd", "newick")
tree = tree.as_phyloxml()
Phylo.draw(tree)
I just want to know how to create an .aln and a .dnd file on Google Cloud Platform, as I can in my local environment. I guess it is probably because I don't have permission to create a new file on the server with Python. I tried f = open('test.txt', 'w') on Google Cloud and it didn't work until I added sudo to the terminal command, as in sudo python3 test.py. However, as you can see, for clustalw, adding sudo makes the whole Biopython library go missing (presumably because it is installed per-user under ~/.local, which root's Python does not see).
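A first thing to check (my assumption, not confirmed by the thread): clustalw writes opuntia.aln and opuntia.dnd next to the input file, so the directory the script runs in must be writable by the user running it. A small diagnostic sketch (can_write is a hypothetical helper):

```python
import os

def can_write(path):
    """Return True if the current user may create files in `path`."""
    return os.access(path, os.W_OK)

workdir = os.getcwd()
print("working directory:", workdir)
print("writable:", can_write(workdir))
# If this prints False, fix the directory's ownership (e.g. chown it to
# your user) rather than running the script with sudo; sudo switches to
# root's Python and loses the per-user ~/.local Biopython install.
```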

shutil.copystat() fails inside Docker on Azure

The failing code runs inside a Docker container based on the python:3.6-stretch Debian image.
It fails while Django moves a file from one Docker volume to another.
When I test on macOS 10, it works without error; there, the Docker containers are started with docker-compose and use regular Docker volumes on the local machine.
Deployed to Azure (AKS, Kubernetes on Azure), moving the file succeeds but copying the stats fails with the following error:
  File "/usr/local/lib/python3.6/site-packages/django/core/files/move.py", line 70, in file_move_safe
    copystat(old_file_name, new_file_name)
  File "/usr/local/lib/python3.6/shutil.py", line 225, in copystat
    _copyxattr(src, dst, follow_symlinks=follow)
  File "/usr/local/lib/python3.6/shutil.py", line 157, in _copyxattr
    names = os.listxattr(src, follow_symlinks=follow_symlinks)
OSError: [Errno 38] Function not implemented: '/some/path/file.pdf'
The volumes on Azure are persistent volume claims with ReadWriteMany access mode.
Now, copystat is documented as:
copystat() never returns failure.
https://docs.python.org/3/library/shutil.html
My questions are:
Is this a "bug", given the documentation says copystat() should "never return failure"?
Can I safely try/except this error, since the file in question has already been moved? (It only fails later, while copying the stats.)
Can I change something in the Azure settings to fix this? (probably not)
Here is a small test on the machine in Azure itself:
root:/media/documents# ls -al
total 267
drwxrwxrwx 2 1000 1000 0 Jul 31 15:29 .
drwxrwxrwx 2 1000 1000 0 Jul 31 15:29 ..
-rwxrwxrwx 1 1000 1000 136479 Jul 31 16:48 orig.pdf
-rwxrwxrwx 1 1000 1000 136479 Jul 31 15:29 testfile
root:/media/documents# lsattr
--S-----c-jI------- ./orig.pdf
--S-----c-jI------- ./testfile
root:/media/documents# python
Python 3.6.6 (default, Jul 17 2018, 11:12:33)
[GCC 6.3.0 20170516] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import shutil
>>> shutil.copystat('orig.pdf', 'testfile')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.6/shutil.py", line 225, in copystat
    _copyxattr(src, dst, follow_symlinks=follow)
  File "/usr/local/lib/python3.6/shutil.py", line 157, in _copyxattr
    names = os.listxattr(src, follow_symlinks=follow_symlinks)
OSError: [Errno 38] Function not implemented: 'orig.pdf'
>>> shutil.copystat('orig.pdf', 'testfile', follow_symlinks=False)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.6/shutil.py", line 225, in copystat
    _copyxattr(src, dst, follow_symlinks=follow)
  File "/usr/local/lib/python3.6/shutil.py", line 157, in _copyxattr
    names = os.listxattr(src, follow_symlinks=follow_symlinks)
OSError: [Errno 38] Function not implemented: 'orig.pdf'
>>>
The following solution is a hotfix. It would have to be applied to any method that calls copystat directly or indirectly (or any shutil method that produces an ignorable errno.ENOSYS).
# `mock` here is unittest.mock; LOGGER, file_field, name, fresh and
# content_type come from the surrounding Django code.
if hasattr(os, 'listxattr'):
    LOGGER.warning('patching listxattr to avoid ERROR 38 (errno.ENOSYS)')
    # avoid "ERROR 38 function not implemented on Azure"
    with mock.patch('os.listxattr', return_value=[]):
        file_field.save(name=name, content=GeneratedFile(fresh, content_type=content_type), save=True)
else:
    file_field.save(name=name, content=GeneratedFile(fresh, content_type=content_type), save=True)
file_field.save is the Django method that calls the shutil code in question. It's the last location in my code before the error.
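Regarding question 2: wrapping the call and swallowing only ENOSYS looks safe here, since the file has already been moved and copystat copies the timestamps before the xattr step; only later steps (such as mode bits, depending on the Python version) may be lost. A sketch under that assumption (copystat_tolerant is my own name, not a stdlib helper):

```python
import errno
import shutil

def copystat_tolerant(src, dst):
    """shutil.copystat, but ignore OSError 38 (ENOSYS), which is raised
    when the filesystem has no extended-attribute support (e.g. some
    Azure mounts).

    Note: when the error fires, copystat has completed only partially;
    timestamps are copied first, later steps may be skipped.
    """
    try:
        shutil.copystat(src, dst)
    except OSError as exc:
        if exc.errno != errno.ENOSYS:
            raise  # anything else is still a real error
```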
