I'm trying to get the size of an image that is sent via HTTP request encoded as base64. On the file system the image is a .png of around 1,034,023 bytes; however, when I receive the image as base64 and compute its size in bytes, it is smaller: 840,734 bytes.
Is this correct, and is it due to the PNG compression on disk differing from the image loaded in memory? And if I want the size of the image as it exists on the file system, will I have to re-save the image to disk when I receive it?
To get the size of the image in bytes I have the following functions (both return the same value). I'm using Python 3.
def image_size(imageb64):
    character_count = len(imageb64)
    # Padding, if present, is always in the last two characters of the string
    # (the original slice imageb64[character_count:None] was empty and always counted 0)
    padding_count = imageb64[-2:].count('=')
    count = (3 * (character_count // 4)) - padding_count
    print(f'Image size count: {count}')
from io import BytesIO
from PIL import Image

def image_to_size_in_bytes(numpy_img):
    img = Image.fromarray(numpy_img)
    buffered = BytesIO()
    img.save(buffered, format='PNG')
    contents = buffered.getvalue()
    print(f'IMAGE SIZE: {len(contents)}')
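For a base64 payload, the decoded byte count, not the string length, is what should match the file on disk; if the decoded count comes out smaller than the .png, the bytes received really are a different (re-encoded) file. A minimal sanity check of the padding formula, using stand-in bytes rather than a real PNG:

```python
import base64

payload = b'x' * 1_034_023                    # stand-in for the PNG file's bytes
encoded = base64.b64encode(payload).decode()  # what would arrive over HTTP

character_count = len(encoded)
padding_count = encoded[-2:].count('=')       # padding only ever appears in the last two chars
decoded_size = (3 * (character_count // 4)) - padding_count

# The formula agrees with an actual decode and with the original size
assert decoded_size == len(base64.b64decode(encoded)) == len(payload)
```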
I tried using -density but it didn't help. The original TIFF image is 459 kB, but when it gets converted to PDF the size changes to 8,446 kB.
from subprocess import Popen, PIPE

commands = ['magick', 'convert']
commands.extend(waiting_list["images"][2:])
commands.append('-adjoin')
commands.append(combinedFormPathOutput)
# Note: shell=True combined with a list of arguments is unreliable on POSIX,
# so it is omitted here
process = Popen(commands, stdout=PIPE, stderr=PIPE)
process.communicate()
https://drive.google.com/file/d/14V3vKRcyyEx1U23nVC13DDyxGAYOpH-6/view?usp=sharing
It's not the above code but the PIL code below which is causing the image to increase:
from PIL import Image, ImageSequence

images = []
filepath = 'Multi_document_Tiff.tiff'
image = Image.open(filepath)
if filepath.endswith('.tiff'):
    imagepath = filepath.replace('.tiff', '.pdf')
for i, page in enumerate(ImageSequence.Iterator(image)):
    page = page.convert("RGB")
    images.append(page)
if len(images) == 1:
    images[0].save(imagepath)
else:
    images[0].save(imagepath, save_all=True, append_images=images[1:])
image.close()
When I run
convert Multi_document_Tiff.tiff -adjoin Multi_document.pdf
I get a 473881 bytes PDF that contains the 10 pages of the TIFF. If I run
convert Multi_document_Tiff.tiff Multi_document_Tiff.tiff Multi_document_Tiff.tiff -adjoin Multi_document.pdf
I get a 1420906 bytes PDF that contains 30 pages (three copies of your TIFF).
So obviously, if you pass several input files to IM, it will coalesce them into the output file.
Your code does:
commands.extend(waiting_list["images"][2:])
So it seems it is passing a list of files to IM, and the output should be the accumulation of all these files, which can be a lot bigger than the size of the first file.
So:
did you check the content of the output PDF?
did you check the list of files which is actually passed?
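A quick way to check the second point is to print the argument list before handing it to Popen. A sketch with a hypothetical waiting_list standing in for the one in the question:

```python
# hypothetical stand-in for the question's waiting_list structure
waiting_list = {"images": ["meta0", "meta1", "a.tiff", "b.tiff", "c.tiff"]}

commands = ['magick', 'convert']
commands.extend(waiting_list["images"][2:])  # every entry from index 2 onward
commands.append('-adjoin')
commands.append('combined.pdf')

print(commands)
# ['magick', 'convert', 'a.tiff', 'b.tiff', 'c.tiff', '-adjoin', 'combined.pdf']
```

Every .tiff that shows up in this list contributes its pages to the output PDF, which is why the result can be several times the size of a single input file.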
So I want to create a matplotlib pie chart from some data, save it to a BytesIO, and send it to my S3 bucket to store and use this picture later. No errors appear during the operation, and it is successfully uploaded to my S3 with the correct name, but the file's size is 0 and it is completely empty, even though buf is not empty if I check it via print() before uploading.
from io import BytesIO
from matplotlib.figure import Figure

async def generate_pie_chart():
    slices = [33, 33, 33]
    fig = Figure()
    pie = fig.subplots()
    pie.pie(slices)
    buf = BytesIO()
    fig.savefig(buf, format="png")
    S3_CLIENT.upload_fileobj(buf, S3_BUCKET, 'piee')
Just add the following line right above upload_fileobj():
buf.seek(0)
It moves the "cursor" to the beginning of the BytesIO. When upload_fileobj() is called, it can read/upload from the beginning of the file.
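The effect is easy to see with a plain BytesIO, using stand-in bytes in place of savefig's output:

```python
from io import BytesIO

buf = BytesIO()
buf.write(b'fake png bytes')   # stand-in for fig.savefig(buf, format="png")

# The cursor now sits at the end of the buffer, so any read from here
# returns nothing, and upload_fileobj() would upload an empty file
assert buf.read() == b''

buf.seek(0)                    # rewind to the start
assert buf.read() == b'fake png bytes'
```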
I mean I don't want the file to be downloaded onto the HDD; just the string has to be returned in the form of bytes so that it can later be passed to some other function.
Here is one way:
url = 'https://m.media-amazon.com/images/M/MV5BMTY5MTY3NjgxNF5BMl5BanBnXkFtZTcwMDExMTQyMw##._V1_SX1777_CR0,0,1777,987_AL_.jpg'
import requests
# Return data as a string
output = requests.get(url).text
# Return data as bytes
output = requests.get(url).content
You could also use urllib (or urllib2 on Python 2).
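Since .content is plain bytes, any downstream function that expects a file-like object can get one via BytesIO without anything touching the disk. A sketch with stand-in bytes in place of the HTTP response:

```python
from io import BytesIO

data = b'\xff\xd8\xff\xe0 fake jpeg payload'  # stand-in for requests.get(url).content

buf = BytesIO(data)  # in-memory file-like object to pass to other functions
assert buf.read(4) == b'\xff\xd8\xff\xe0'
```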
I am trying to send a picture from my Pi camera to a flask web server. When I encode the image Base64 it doesn't seem to produce a valid image.
I can take the picture and process it through opencv. The Base64 encoded image is passed to the web page, but the string sent is not a valid image. To prove this I have saved the image and processed it with an online Base64 converter. Pasting this string into the web page shows the image.
import io
import base64
import numpy as np
import cv2

def Take_Picture(camera):
    stream = io.BytesIO()           # saving the picture to an in-program stream
    camera.resolution = (160, 120)  # set resolution
    camera.capture(stream, format='jpeg', use_video_port=True)  # capture into stream
    mydata = np.frombuffer(stream.getvalue(), dtype=np.uint8)   # JPEG bytes as a numpy array
    img = cv2.imdecode(mydata, -1)  # to opencv image (a raw-pixel ndarray)
    data = base64.b64encode(img).decode('utf-8')  # encodes the raw pixel array
    print(data)
    cv2.imwrite("test.jpg", img)
    return data
HTML
<img src="data:image/jpeg;charset=utf-8;base64,{{img}}" alt="Camera View" width="640" height="480">
I get a result of
b'AAIAAAIAAAIAAAIAAAIAAAIAAAIAAAIAAAIAAAIAAAIAAAMAAAIAAAIAAA...
from data above.
But I get
/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAIBAQEBAQIBAQECAgICAgQDAgICAgUEBAMEBgUG...
from test.jpg from an online Base64 conversion. Putting this string in the web page displays the image.
You have to convert your image back from a numpy array to an encoded image, which can then be correctly encoded to Base64!
What you are doing now is base64-encoding a raw numpy array, which surely can't give the same result the online base64 tool gives!
What you need to do is pass your numpy array to cv2.imencode, which returns an encoded image buffer, and then convert that to base64:
retval, buffer_img= cv2.imencode('.jpg', img)
data = base64.b64encode(buffer_img)
OR
you can skip img = cv2.imdecode(mydata, -1) and pass mydata directly to base64.b64encode(mydata), since the JPEG-encoded image is already in memory!
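That shortcut works because base64 only cares about bytes: encoding the JPEG bytes from the stream yields exactly the kind of string the online converter produced. A sketch with a stand-in JPEG header in place of the camera stream:

```python
import base64
import io

# stand-in for the camera's in-memory stream; real JPEG data starts with these bytes
stream = io.BytesIO(b'\xff\xd8\xff\xe0' + b'rest of the jpeg payload')

data = base64.b64encode(stream.getvalue()).decode('utf-8')
print(data[:4])  # /9j/ -- the same prefix the online converter produced
assert base64.b64decode(data) == stream.getvalue()
```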
There is no separate OpenCV image type; the OpenCV image is an ndarray. When you execute print(type(img)) you will get <class 'numpy.ndarray'>.
The following solved it for me:
import cv2
import base64
# img is a numpy array / opencv image
_, encoded_img = cv2.imencode('.png', img) # Works for '.jpg' as well
base64_img = base64.b64encode(encoded_img).decode("utf-8")
I am attempting to download the MNIST dataset and decode it without writing it to disk (mostly for fun).
import struct
from gzip import GzipFile
from urllib.request import urlopen

import numpy as np

def fetch_images():
    request_stream = urlopen('http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz')
    zip_file = GzipFile(fileobj=request_stream, mode='rb')
    with zip_file as fd:
        magic, numberOfItems = struct.unpack('>ii', fd.read(8))
        rows, cols = struct.unpack('>II', fd.read(8))
        images = np.fromfile(fd, dtype='uint8')  # < here be dragons
        images = images.reshape((numberOfItems, rows, cols))
        return images
This code fails with OSError: obtaining file position failed, an error that seems to be ungoogleable. What could the problem be?
The problem seems to be that what gzip and similar modules provide aren't real file objects (unsurprisingly), but numpy attempts to read through the actual FILE* pointer, so this cannot work.
If it's OK to read the entire file into memory (which it might not be), then this can be worked around by reading all non-header information into a bytearray and deserializing from that:
rows, cols = struct.unpack('>II', fd.read(8))
b = bytearray(fd.read())                  # read the remaining payload into memory
images = np.frombuffer(b, dtype='uint8')  # deserialize from the in-memory buffer
images = images.reshape((numberOfItems, rows, cols))
return images
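The workaround can be exercised end-to-end without the network by gzip-compressing a small IDX-style payload in memory (hypothetical data standing in for the real MNIST file):

```python
import gzip
import io
import struct

import numpy as np

# Build a tiny IDX-style payload: four big-endian header ints, then pixel bytes
magic, n_items, rows, cols = 2051, 2, 3, 3
raw = struct.pack('>iiii', magic, n_items, rows, cols) + bytes(range(n_items * rows * cols))
compressed = gzip.compress(raw)

# Read it back through GzipFile, just like the HTTP response stream
with gzip.GzipFile(fileobj=io.BytesIO(compressed), mode='rb') as fd:
    magic, n_items = struct.unpack('>ii', fd.read(8))
    rows, cols = struct.unpack('>II', fd.read(8))
    b = bytearray(fd.read())              # succeeds where np.fromfile fails
    images = np.frombuffer(b, dtype='uint8').reshape((n_items, rows, cols))

print(images.shape)  # (2, 3, 3)
```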