Displaying PIL Images in Dash/Plotly

I've been developing my Dash web application and am now looking into hosting it on my VM.
After setting up my environment there, I'm unable to load PIL Image objects directly into html.Img elements.
When they are rendered, an error pops up telling me that my PIL Image is not serializable.
This strikes me as odd, and possibly not a Plotly error, since the exact same code, libraries, and images cause the error on my VM but run smoothly on my workstation.
After loading and doing some preprocessing, my Image object is passed to the html component as shown:
grid_main_images = <PIL.Image.Image image mode=RGB size=482x542 at 0x7FE88C04CD90>
html.Img(src=grid_main_images)
Again, the serialization error occurs only on my VM, not on my local machine.
Here is the full error/traceback:
Traceback (most recent call last):
File "/home/aegilsson/anaconda3/envs/diamond/lib/python3.7/site-packages/dash/dash.py", line 1227, in add_context
cls=plotly.utils.PlotlyJSONEncoder
File "/home/aegilsson/anaconda3/envs/diamond/lib/python3.7/json/__init__.py", line 238, in dumps
**kw).encode(obj)
File "/home/aegilsson/anaconda3/envs/diamond/lib/python3.7/site-packages/_plotly_utils/utils.py", line 49, in encode
encoded_o = super(PlotlyJSONEncoder, self).encode(o)
File "/home/aegilsson/anaconda3/envs/diamond/lib/python3.7/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/home/aegilsson/anaconda3/envs/diamond/lib/python3.7/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "/home/aegilsson/anaconda3/envs/diamond/lib/python3.7/site-packages/_plotly_utils/utils.py", line 119, in default
return _json.JSONEncoder.default(self, obj)
File "/home/aegilsson/anaconda3/envs/diamond/lib/python3.7/json/encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type Image is not JSON serializable
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/aegilsson/anaconda3/envs/diamond/lib/python3.7/site-packages/flask/app.py", line 2463, in __call__
return self.wsgi_app(environ, start_response)
File "/home/aegilsson/anaconda3/envs/diamond/lib/python3.7/site-packages/flask/app.py", line 2449, in wsgi_app
response = self.handle_exception(e)
File "/home/aegilsson/anaconda3/envs/diamond/lib/python3.7/site-packages/flask/app.py", line 1866, in handle_exception
reraise(exc_type, exc_value, tb)
File "/home/aegilsson/anaconda3/envs/diamond/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/home/aegilsson/anaconda3/envs/diamond/lib/python3.7/site-packages/flask/app.py", line 2446, in wsgi_app
response = self.full_dispatch_request()
File "/home/aegilsson/anaconda3/envs/diamond/lib/python3.7/site-packages/flask/app.py", line 1951, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/aegilsson/anaconda3/envs/diamond/lib/python3.7/site-packages/flask/app.py", line 1820, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/aegilsson/anaconda3/envs/diamond/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/home/aegilsson/anaconda3/envs/diamond/lib/python3.7/site-packages/flask/app.py", line 1949, in full_dispatch_request
rv = self.dispatch_request()
File "/home/aegilsson/anaconda3/envs/diamond/lib/python3.7/site-packages/flask/app.py", line 1935, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/aegilsson/anaconda3/envs/diamond/lib/python3.7/site-packages/dash/dash.py", line 1291, in dispatch
response.set_data(self.callback_map[output]['callback'](*args))
File "/home/aegilsson/anaconda3/envs/diamond/lib/python3.7/site-packages/dash/dash.py", line 1242, in add_context
).replace(' ', ''))
dash.exceptions.InvalidCallbackReturnValue:
The callback for property `children`
of component `tabs-content` returned a value
which is not JSON serializable.
In general, Dash properties can only be
dash components, strings, dictionaries, numbers, None,
or lists of those.

You need to base64-encode the image and prepend a data URI header.
import base64
from io import BytesIO

def pil_to_b64(im, enc_format="png", **kwargs):
    """
    Converts a PIL Image into a base64 string for HTML display.

    :param im: PIL Image object
    :param enc_format: The image format for display. If saved, the image will have that extension.
    :return: base64 encoding
    """
    buff = BytesIO()
    im.save(buff, format=enc_format, **kwargs)
    encoded = base64.b64encode(buff.getvalue()).decode("utf-8")
    return encoded

html.Img(id="my-img", className="image", src="data:image/png;base64," + pil_to_b64(pil_img))
Credit: Atli's comment pointed me in the right direction.

Not sure when it was introduced, but both plotly.express.imshow() and plotly.graph_objects.Figure().add_layout_image() accept PIL images out of the box.
In fact, if you provide an incompatible input such as an np.array to fig.add_layout_image({"source": np.array(my_pil_img)}), you get the following ValueError:
The 'source' property is an image URI that may be specified as:
  - A remote image URI string (e.g. 'http://www.somewhere.com/image.png')
  - A data URI image string (e.g. 'data:image/png;base64,iVBORw0KGgoAAAANSU')
  - A PIL.Image.Image object which will be immediately converted to a data URI image string
    See http://pillow.readthedocs.io/en/latest/reference/Image.html
Example:
from PIL import Image
# import pathlib

mysize = (512, 512)

# if you have a real image:
# path_to_file = pathlib.Path().cwd() / 'dummy.png'
# img = Image.open(path_to_file)

# use a dummy image here for the example:
img = Image.new('RGBA', size=(1024, 1024), color=(155, 0, 0))

# I encountered issues with very big images, so it's best to make a thumbnail first.
# Note: thumbnail() resizes in place and returns None, so don't reassign its result.
img.thumbnail(mysize, Image.ANTIALIAS)
Now for plotly.express:
import plotly.express as px
px.imshow(img)
or a bit more elaborate for plotly.graph_objects:
from plotly import graph_objects as go

fig = go.Figure()

# based on https://plotly.com/python/images/#zoom-on-static-images
# Constants
img_width, img_height = img.size
scale_factor = 1

# Add invisible scatter trace.
# This trace is added to help the autoresize logic work.
fig.add_trace(
    go.Scatter(
        x=[0, img_width * scale_factor],
        y=[0, img_height * scale_factor],
        mode="markers",
        marker_opacity=0
    )
)

# Configure axes
fig.update_xaxes(
    visible=False,
    range=[0, img_width * scale_factor]
)
fig.update_yaxes(
    visible=False,
    range=[0, img_height * scale_factor],
    # the scaleanchor attribute ensures that the aspect ratio stays constant
    scaleanchor="x"
)

# Add image
fig.add_layout_image(
    dict(
        x=0,
        sizex=img_width * scale_factor,
        y=img_height * scale_factor,
        sizey=img_height * scale_factor,
        xref="x",
        yref="y",
        opacity=1.0,
        layer="below",
        sizing="stretch",
        source=img
    )
)

# Configure other layout
fig.update_layout(
    width=img_width * scale_factor,
    height=img_height * scale_factor,
    margin={"l": 0, "r": 0, "t": 0, "b": 0},
)

# Disable the autosize on double click because it adds unwanted margins around the image
# More detail: https://plotly.com/python/configuration-options/
fig.show(config={'doubleClick': 'reset'})
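To put this back in the Dash context of the original question, the figure can be handed to a dcc.Graph component. A minimal sketch, assuming a Dash 1.x-era setup matching the traceback above:
import dash
import dash_core_components as dcc
import dash_html_components as html

app = dash.Dash(__name__)
app.layout = html.Div([dcc.Graph(figure=fig)])  # fig built as in the snippet above

if __name__ == "__main__":
    app.run_server(debug=True)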

Related

Getting TypeError: integer argument expected, got float while using pytorch transforms

I cloned the transfer-learning-library repo and am working on maximum classifier discrepancy. I am trying to change the augmentation but am getting the following error:
Traceback (most recent call last):
File "mcd.py", line 378, in <module>
main(args)
File "mcd.py", line 145, in main
results = validate(val_loader, G, F1, F2, args)
File "mcd.py", line 290, in validate
for i, (images, target) in enumerate(val_loader):
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 521, in __next__
data = self._next_data()
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
return self._process_data(data)
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
data.reraise()
File "/usr/local/lib/python3.7/dist-packages/torch/_utils.py", line 425, in reraise
raise self.exc_type(msg)
TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "../../../common/vision/datasets/imagelist.py", line 48, in __getitem__
img = self.transform(img)
File "/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py", line 60, in __call__
img = t(img)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py", line 750, in forward
return F.perspective(img, startpoints, endpoints, self.interpolation, fill)
File "/usr/local/lib/python3.7/dist-packages/torchvision/transforms/functional.py", line 647, in perspective
return F_pil.perspective(img, coeffs, interpolation=pil_interpolation, fill=fill)
File "/usr/local/lib/python3.7/dist-packages/torchvision/transforms/functional_pil.py", line 289, in perspective
return img.transform(img.size, Image.PERSPECTIVE, perspective_coeffs, interpolation, **opts)
File "/usr/local/lib/python3.7/dist-packages/PIL/Image.py", line 2371, in transform
im = new(self.mode, size, fillcolor)
File "/usr/local/lib/python3.7/dist-packages/PIL/Image.py", line 2578, in new
return im._new(core.fill(mode, size, color))
TypeError: integer argument expected, got float
The previous code was:
# Data loading code
normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
if args.center_crop:
    train_transform = T.Compose([
        ResizeImage(256),
        T.CenterCrop(224),
        T.RandomHorizontalFlip(),
        T.ToTensor(),
        normalize
    ])
else:
    train_transform = T.Compose([
        ResizeImage(256),
        T.RandomResizedCrop(224),
        T.RandomHorizontalFlip(),
        T.ToTensor(),
        normalize
    ])
val_transform = T.Compose([
    ResizeImage(256),
    T.CenterCrop(224),
    T.ToTensor(),
    normalize
])
I just added T.RandomPerspective(distortion_scale=0.8, p=0.5, fill=0.6) to val_transform.
Before this I also added a few other transforms to train_transform but still got the same error.
What could be the problem?
The fill argument needs to be an integer.
This transform does not support the fill parameter for tensor inputs, so if you want to use fill, apply the transform before ToTensor, while the data is still a PIL image with integer pixel values.
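A hedged sketch of the fix for val_transform, assuming the setup above (fill=0 is just an example integer value; pick whatever integer fill suits your data):
val_transform = T.Compose([
    ResizeImage(256),
    T.CenterCrop(224),
    # applied while the input is still a PIL image, so an integer fill is accepted
    T.RandomPerspective(distortion_scale=0.8, p=0.5, fill=0),
    T.ToTensor(),
    normalize
])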

WSQ files not opening with Pillow/wsq when using joblib.Parallel

I am trying to preprocess large numbers of WSQ images for model training using both the Pillow and wsq libraries. To speed up my code, I am trying to use Parallel, but this causes an UnidentifiedImageError.
I verified that the files are where they should be, and that the function runs without errors when used in a regular for-loop. Other files (e.g. csv files) can be opened inside the function without errors, so I presume the error lies in the combination of Parallel and Pillow/wsq. All libraries are up to date. As I am just starting out with Pillow and multiprocessing, I have no idea yet how to fix this, and any help would be highly appreciated.
Code:
from joblib import Parallel, delayed
from PIL import Image
import multiprocessing
import wsq
import numpy as np

def process_image(i):
    path = "/home/user/project/wsq/image_" + str(i) + ".wsq"
    img = np.array(Image.open(path))
    # some preprocessing, saving as npz
    output_path = "/home/user/project/npz/image_" + str(i) + ".npz"
    np.savez_compressed(output_path, img)
    return None

inputs = range(100000)
num_cores = multiprocessing.cpu_count()
Parallel(n_jobs=num_cores)(delayed(process_image)(i) for i in inputs)
Output:
joblib.externals.loky.process_executor._RemoteTraceback:
"""
Traceback (most recent call last):
File "/home/user/.local/lib/python3.8/site-packages/joblib/externals/loky/process_executor.py", line 431, in _process_worker
r = call_item()
File "/home/user/.local/lib/python3.8/site-packages/joblib/externals/loky/process_executor.py", line 285, in __call__
return self.fn(*self.args, **self.kwargs)
File "/home/user/.local/lib/python3.8/site-packages/joblib/_parallel_backends.py", line 595, in __call__
return self.func(*args, **kwargs)
File "/home/user/.local/lib/python3.8/site-packages/joblib/parallel.py", line 262, in __call__
return [func(*args, **kwargs)
File "/home/user/.local/lib/python3.8/site-packages/joblib/parallel.py", line 262, in <listcomp>
return [func(*args, **kwargs)
File "preprocess_images.py", line 9, in process_image
img = np.array(Image.open(path))
File "/home/user/.local/lib/python3.8/site-packages/PIL/Image.py", line 2967, in open
raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file '/home/user/project/wsq/image_1.wsq'
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "preprocess_images.py", line 18, in <module>
Parallel(n_jobs=num_cores)(delayed(process_image)(i) for i in inputs)
File "/home/user/.local/lib/python3.8/site-packages/joblib/parallel.py", line 1054, in __call__
self.retrieve()
File "/home/user/.local/lib/python3.8/site-packages/joblib/parallel.py", line 933, in retrieve
self._output.extend(job.get(timeout=self.timeout))
File "/home/user/.local/lib/python3.8/site-packages/joblib/_parallel_backends.py", line 542, in wrap_future_result
return future.result(timeout=timeout)
File "/usr/lib/python3.8/concurrent/futures/_base.py", line 439, in result
return self.__get_result()
File "/usr/lib/python3.8/concurrent/futures/_base.py", line 388, in __get_result
raise self._exception
PIL.UnidentifiedImageError: cannot identify image file '/home/user/project/wsq/image_1.wsq'
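No accepted fix is recorded here, but one hedged guess: Image.open only recognizes the WSQ format after the wsq plugin has registered itself with Pillow on import, and the loky worker processes may start without that registration. A sketch of that guess (untested; paths as in the question):
def process_image(i):
    import wsq  # re-import inside the worker so the WSQ plugin registers with Pillow in this process
    path = "/home/user/project/wsq/image_" + str(i) + ".wsq"
    img = np.array(Image.open(path))
    # some preprocessing, saving as npz
    output_path = "/home/user/project/npz/image_" + str(i) + ".npz"
    np.savez_compressed(output_path, img)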

Why isn't an image from a byte stream being rendered?

I'm working with the base64 module for image manipulation.
I've got this code:
import flask, base64, webbrowser, PIL.Image
...
...
image = PIL.Image.frombytes(mode='RGBA', size=(cam_width, cam_height), data=file_to_upload)
im_base64 = base64.b64encode(image.tobytes())
html = '<html><head><meta http-equiv="refresh" content="0.5"><title>Displaying Uploaded Image</title></head><body><h1>Displaying Uploaded Image</h1><img src="data:;base64,{}" alt="" /></body></html>'.format(im_base64.decode('utf8'))
html_url = '/home/mark/Desktop/FlaskUpload/test.html'
with open(html_url, 'w') as f:
    f.write(html)
webbrowser.open(html_url)
I've also tried:
html = '<html><head><meta http-equiv="refresh" content="0.5"><title>Displaying Uploaded Image</title></head><body><h1>Displaying Uploaded Image</h1><img src="data:;base64,"'+im_base64.decode('utf8')+'" alt="" /></body></html>'
The heading is being rendered just fine, but not the image.
Have I missed anything?
Update:
cam_width is 720
cam_height is 1280
file_to_upload is 3686400 bytes long
first 10 bytes of the file_to_upload:
b'YPO\xffYPO\xffVQ'
I can't seem to get the first 10 bytes of im_base64 with print(image.tobytes()[:10]) as it throws an error.
I got a little bit closer to determining what's wrong. Once I fixed the quotes, I got this error:
Traceback (most recent call last):
File "/home/mark/venv/lib/python3.7/site-packages/flask/app.py", line 2464, in __call__
return self.wsgi_app(environ, start_response)
File "/home/mark/venv/lib/python3.7/site-packages/flask/app.py", line 2450, in wsgi_app
response = self.handle_exception(e)
File "/home/mark/venv/lib/python3.7/site-packages/flask/app.py", line 1867, in handle_exception
reraise(exc_type, exc_value, tb)
File "/home/mark/venv/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/home/mark/venv/lib/python3.7/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/home/mark/venv/lib/python3.7/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/mark/venv/lib/python3.7/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/mark/venv/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/home/mark/venv/lib/python3.7/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/home/mark/venv/lib/python3.7/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/mark/venv/server.py", line 28, in upload_file
image = PIL.Image.frombytes(mode='RGBA', size=(cam_width, cam_height), data=file_to_upload)
File "/home/mark/venv/lib/python3.7/site-packages/PIL/Image.py", line 2650, in frombytes
im.frombytes(data, decoder_name, args)
File "/home/mark/venv/lib/python3.7/site-packages/PIL/Image.py", line 797, in frombytes
d.setimage(self.im)
ValueError: tile cannot extend outside image
I'm working with image manipulation for the very first time, so I don't know what I'm doing. What does ValueError: tile cannot extend outside image mean?
To see where you are going wrong, you need to differentiate between:
RGB "pixel data", and
JPEG/PNG encoded images.
"Pixel data" is a bunch of RGB/RGBA bytes and that is all. There is no height or width information to know how to interpret or lay out the pixels on a screen. The data is just 4 RGBA bytes for each pixel. If you know your image is 720x1280 RGBA pixels, you will have 720x1280x4, or 3686400 bytes. Notice there's no room in there for height and width or the fact it's RGBA. That's what you have in your variable file_to_upload. Note that you had to additionally tell PIL Image the height and width and the fact it is RGBA for PIL to understand the pixel data.
A JPEG/PNG encoded image is very different. Firstly, it starts with a magic number, which is ff d8 for JPEG, and the 3 letters PNG and some other bits and pieces for PNG. Then it has the height and width, the bytes/pixel and colourspace and possibly the date and GPS location you took the photo, your copyright, the camera manufacturer and lens and a bunch of other stuff. Then it has the compressed pixel data. In general, it will be smaller than the corresponding pixel data. A JPEG/PNG is self-contained - no additional data is needed.
Ok, you need to send a base64-encoded JPEG or PNG to the browser. Why? Because the browser needs an image with dimensions in it; otherwise it can't tell whether it is 720 px wide and 1280 px tall, a single straight line of 921,600 RGBA pixels, or a single straight line of 1,228,800 RGB pixels. Your image is RGBA, so you should probably send a PNG, because JPEGs cannot contain transparency.
So, where did you go wrong? You started with "pixel data", added in your knowledge of height and width, and made a PIL Image. So far so good. But then you went wrong, because you called tobytes() and turned the image back into exactly what you started with: "pixel data" with the same length and content, and no width or height info. Instead, you should have created an in-memory PNG-encoded image with the height and width embedded in it, so that the browser knows its shape, then base64-encoded and sent that. So you needed something like:
import io

image = PIL.Image.frombytes(mode='RGBA', size=(cam_width, cam_height), data=file_to_upload)
buffer = io.BytesIO()
image.save(buffer, format="PNG")
PNG = buffer.getvalue()
Also, have a read here about checking the first few bytes of your data so you can readily check if you are sending the right thing.
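For instance, a quick sanity check of the leading "magic" bytes might look like this (a minimal sketch; the JPEG and PNG signatures are standard):
def sniff_image_bytes(data):
    """Guess what kind of image data this is from its first few bytes."""
    if data[:2] == b'\xff\xd8':
        return "JPEG"
    if data[:8] == b'\x89PNG\r\n\x1a\n':
        return "PNG"
    return "raw pixel data or something else"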
So, here's the complete code:
#!/usr/bin/env python3
import base64
import numpy as np
from PIL import Image
from io import BytesIO
cam_width, cam_height = 640, 480
# Simulate some semi-transparent red pixel data
PixelData = np.full((cam_height,cam_width,4), [255,0,0,128], np.uint8)
# Convert to PIL Image
im = Image.frombytes(mode='RGBA', size=(cam_width, cam_height), data=PixelData)
# Create in-memory PNG
buffer = BytesIO()
im.save(buffer, format="PNG")
PNG = buffer.getvalue()
# Base64 encode
b64PNG = base64.b64encode(PNG).decode("utf-8")
# Create HTML
html = f'<html><head><meta http-equiv="refresh" content="0.5"><title>Displaying Uploaded Image</title></head><body><h1>Displaying Uploaded Image</h1><img src="data:;base64,{b64PNG}" alt="" /></body></html>'
# Write HTML
with open('test.html', 'w') as f:
    f.write(html)
And the result is a semi-transparent red image (image not reproduced here).

A task has failed to un-serialize. Please ensure that the arguments of the function are all picklable

I am using the following code for machine-learning purposes (I am also quite new to Python and PyTorch). Basically, I think the problem is that the multiprocessing is failing for some reason.
I am using code from here: https://raw.githubusercontent.com/harryhan618/LaneNet/master/demo_test.py
The purpose of the code is to draw lane markings on an image.
import cv2
import torch
import os
os.environ['KMP_DUPLICATE_LIB_OK'] = 'True'
from lane_files.model import LaneNet
from lane_files.utils.transforms import *
from lane_files.utils.postprocess import embedding_post_process

if __name__ == '__main__':
    net = LaneNet(pretrained=False, embed_dim=7, delta_v=.5, delta_d=3.)
    transform = Compose(Resize((800, 288)), ToTensor(),
                        Normalize(mean=(0.3598, 0.3653, 0.3662), std=(0.2573, 0.2663, 0.2756)))
    img = cv2.imread('data/train_images/frame0.png')
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # RGB for net model input
    x = transform(img)[0]
    x.unsqueeze_(0)

    save_dict = torch.load('lane_files/experiments/exp0/exp0_best.pth', map_location='cpu')
    net.load_state_dict(save_dict['net'])
    net.eval()

    output = net(x)
    embedding = output['embedding']
    embedding = embedding.detach().cpu().numpy()
    embedding = np.transpose(embedding[0], (1, 2, 0))
    binary_seg = output['binary_seg']
    bin_seg_prob = binary_seg.detach().cpu().numpy()
    bin_seg_pred = np.argmax(bin_seg_prob, axis=1)[0]
    seg_img = np.zeros_like(img)
    lane_seg_img = embedding_post_process(embedding, bin_seg_pred, 0.5)

    color = np.array([[255, 125, 0], [0, 255, 0], [0, 0, 255], [0, 255, 255]], dtype='uint8')
    for i, lane_idx in enumerate(np.unique(lane_seg_img)):
        seg_img[lane_seg_img == lane_idx] = color[i]
    img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
    img = cv2.resize(img, (800, 288))
    img = cv2.addWeighted(src1=seg_img, alpha=0.8, src2=img, beta=1., gamma=0.)

    cv2.imshow("", img)
    cv2.waitKey(5000)
    cv2.destroyAllWindows()
Expected result: An image displayed with lane markings on it
Actual result:
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:/Users/sarim/PycharmProjects/thesis/pytorch_learning.py", line 36, in <module>
lane_seg_img = embedding_post_process(embedding, bin_seg_pred, 0.5)
File "C:\Users\sarim\PycharmProjects\thesis\lane_files\utils\postprocess.py", line 29, in embedding_post_process
mean_shift.fit(embedding_reshaped)
File "C:\Users\sarim\AppData\Local\Programs\Python\Python37\lib\site-packages\sklearn\cluster\mean_shift_.py", line 424, in fit
cluster_all=self.cluster_all, n_jobs=self.n_jobs)
File "C:\Users\sarim\AppData\Local\Programs\Python\Python37\lib\site-packages\sklearn\cluster\mean_shift_.py", line 204, in mean_shift
(seed, X, nbrs, max_iter) for seed in seeds)
File "C:\Users\sarim\AppData\Local\Programs\Python\Python37\lib\site-packages\joblib\parallel.py", line 934, in __call__
self.retrieve()
File "C:\Users\sarim\AppData\Local\Programs\Python\Python37\lib\site-packages\joblib\parallel.py", line 833, in retrieve
self._output.extend(job.get(timeout=self.timeout))
File "C:\Users\sarim\AppData\Local\Programs\Python\Python37\lib\site-packages\joblib\_parallel_backends.py", line 521, in wrap_future_result
return future.result(timeout=timeout)
File "C:\Users\sarim\AppData\Local\Programs\Python\Python37\lib\concurrent\futures\_base.py", line 435, in result
return self.__get_result()
File "C:\Users\sarim\AppData\Local\Programs\Python\Python37\lib\concurrent\futures\_base.py", line 384, in __get_result
raise self._exception
File "C:\Users\sarim\AppData\Local\Programs\Python\Python37\lib\site-packages\joblib\externals\loky\_base.py", line 625, in _invoke_callbacks
callback(self)
File "C:\Users\sarim\AppData\Local\Programs\Python\Python37\lib\site-packages\joblib\parallel.py", line 309, in __call__
self.parallel.dispatch_next()
File "C:\Users\sarim\AppData\Local\Programs\Python\Python37\lib\site-packages\joblib\parallel.py", line 731, in dispatch_next
if not self.dispatch_one_batch(self._original_iterator):
File "C:\Users\sarim\AppData\Local\Programs\Python\Python37\lib\site-packages\joblib\parallel.py", line 759, in dispatch_one_batch
self._dispatch(tasks)
File "C:\Users\sarim\AppData\Local\Programs\Python\Python37\lib\site-packages\joblib\parallel.py", line 716, in _dispatch
job = self._backend.apply_async(batch, callback=cb)
File "C:\Users\sarim\AppData\Local\Programs\Python\Python37\lib\site-packages\joblib\_parallel_backends.py", line 510, in apply_async
future = self._workers.submit(SafeFunction(func))
File "C:\Users\sarim\AppData\Local\Programs\Python\Python37\lib\site-packages\joblib\externals\loky\reusable_executor.py", line 151, in submit
fn, *args, **kwargs)
File "C:\Users\sarim\AppData\Local\Programs\Python\Python37\lib\site-packages\joblib\externals\loky\process_executor.py", line 1022, in submit
raise self._flags.broken
joblib.externals.loky.process_executor.BrokenProcessPool: A task has failed to un-serialize. Please ensure that the arguments of the function are all picklable.
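No answer is recorded for this one either, but the traceback shows the pool breaking while sklearn's MeanShift dispatches joblib worker processes. A hedged workaround, assuming postprocess.py constructs the clusterer with a parallel n_jobs setting (the band_width name is a placeholder): force single-process clustering so no worker process has to unpickle the task.
# hypothetical sketch of the relevant lines in lane_files/utils/postprocess.py
from sklearn.cluster import MeanShift

mean_shift = MeanShift(bandwidth=band_width, bin_seeding=True, n_jobs=1)  # n_jobs=1 avoids spawning workers
mean_shift.fit(embedding_reshaped)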

How to response with PIL image in Cherrypy dynamically (Python3)?

It seems the task should be easy but...
I have a simple PIL.Image object. How do I make CherryPy respond with this image dynamically?
def get_image(self, data_id):
    cherrypy.response.headers['Content-Type'] = 'image/png'
    img = PIL.Image.frombytes(...)
    buffer = io.StringIO()
    img.save(buffer, 'PNG')
    return buffer.getvalue()
This code gives me:
500 Internal Server Error
The server encountered an unexpected condition which prevented it from fulfilling the request.
Traceback (most recent call last):
File "C:\Users\Serge\AppData\Local\Programs\Python\Python36\lib\site-packages\cherrypy\_cprequest.py", line 631, in respond
self._do_respond(path_info)
File "C:\Users\Serge\AppData\Local\Programs\Python\Python36\lib\site-packages\cherrypy\_cprequest.py", line 690, in _do_respond
response.body = self.handler()
File "C:\Users\Serge\AppData\Local\Programs\Python\Python36\lib\site-packages\cherrypy\_cpdispatch.py", line 60, in __call__
return self.callable(*self.args, **self.kwargs)
File "D:\Dev\Bf\webapp\controllers\calculation.py", line 69, in get_image
img.save(buffer, 'PNG')
File "C:\Users\Serge\AppData\Local\Programs\Python\Python36\lib\site-packages\PIL\Image.py", line 1930, in save
save_handler(self, fp, filename)
File "C:\Users\Serge\AppData\Local\Programs\Python\Python36\lib\site-packages\PIL\PngImagePlugin.py", line 731, in _save
fp.write(_MAGIC)
TypeError: string argument expected, got 'bytes'
Can someone help me please?
Use io.BytesIO() instead of io.StringIO(). (From this answer.)
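Put together, a minimal corrected sketch of the handler (the @cherrypy.expose decorator and the stand-in image are assumptions added to make it self-contained; the real code would keep its frombytes(...) call):
import io
import cherrypy
import PIL.Image

class ImageServer:
    @cherrypy.expose
    def get_image(self, data_id):
        cherrypy.response.headers['Content-Type'] = 'image/png'
        img = PIL.Image.new('RGB', (64, 64), 'red')  # stand-in for the real frombytes(...) call
        buffer = io.BytesIO()  # binary buffer: PIL writes PNG bytes, not str
        img.save(buffer, 'PNG')
        return buffer.getvalue()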
