How to read an image from Google Drive using Python? - python-3.x

I want to read an image from Google Drive and convert it to binary. How can I do that? I used this code, but it does not get the actual image.
link = urllib.request.urlopen("https://drive.google.com/file/d/1CT12YIeF0xcc8cwhBpvR-Oq0AFOABwsw/view?usp=sharing").read()
image_base64 = base64.encodestring(link)

1. Download the image to your computer.
2. You can use cv2 to convert an image to binary like so:
import cv2

# Load the image as a single-channel grayscale image.
img = cv2.imread('imgs/mypic.jpg', cv2.IMREAD_GRAYSCALE)
# Pixels above 127 become 255 (white), everything else becomes 0 (black).
ret, bw_img = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
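The share link in the question returns the Drive preview page (HTML), not the file itself, which is why the result never looks like an image. Below is a minimal sketch that assumes the file is shared as "anyone with the link" and uses Drive's direct-download endpoint (uc?export=download); very large files may additionally require a confirmation step that this sketch does not handle:

import base64
import urllib.request

import cv2
import numpy as np

# File ID taken from the share link in the question.
FILE_ID = "1CT12YIeF0xcc8cwhBpvR-Oq0AFOABwsw"
# Direct-download endpoint; assumes the file is publicly shared.
url = "https://drive.google.com/uc?export=download&id=" + FILE_ID

raw_bytes = urllib.request.urlopen(url).read()

# Base64 of the real image bytes (encodestring was removed in Python 3.9; use b64encode).
image_base64 = base64.b64encode(raw_bytes)

# Decode the downloaded bytes into an OpenCV image and threshold it to black and white.
img = cv2.imdecode(np.frombuffer(raw_bytes, np.uint8), cv2.IMREAD_GRAYSCALE)
ret, bw_img = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
cv2.imwrite("mypic_binary.png", bw_img)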

Related

Using PDF file in Keras OCR or PyTesseract - Python, is it possible?

I am using Keras OCR and PyTesseract and was wondering if it is possible to use PDF files as the image input.
If not, does anyone have a suggestion as to how to convert a very massive PDF file into PNG or another acceptable format?
Thank you!
No, as far as I know PyTesseract works only with images. You'll need to convert your PDF to images first.
By "very massive PDF" I'm assuming you mean a PDF with lots of pages. That is not an issue. You can use the pdf2image library (see the docs here). The method convert_from_path has an output_folder argument that lets you specify the folder where all the generated images will be saved:
Output directory for the generated files, should be seen more as a
“working directory” than an output folder. The converted images will
be written there to save system memory.
You can later feed those images to PyTesseract one by one instead of your PDF. As long as you don't keep the list returned by convert_from_path, you don't risk filling up your memory.
Otherwise, if you are willing to keep everything in memory you can use the returned pages directly, like so:
pages = convert_from_path(pdf_path)
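If you go with the disk-backed approach described above, a minimal sketch might look like this (it assumes poppler and pytesseract are installed, uses a hypothetical big_document.pdf, and relies on the paths_only argument available in recent pdf2image releases):

import tempfile

import pytesseract
from pdf2image import convert_from_path

pdf_path = "big_document.pdf"  # hypothetical input file

with tempfile.TemporaryDirectory() as workdir:
    # Pages are written to disk instead of being kept in memory;
    # paths_only=True returns file paths rather than PIL images.
    page_paths = convert_from_path(
        pdf_path,
        dpi=300,
        output_folder=workdir,
        fmt="png",
        paths_only=True,
    )

    # OCR one page at a time so only a single image is ever loaded.
    for page_path in page_paths:
        text = pytesseract.image_to_string(page_path, lang="eng")
        print(text)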
For example, here is my code (Python 3.9, macOS Big Sur):
import cv2
import pytesseract
from PIL import Image
from pdf2image import convert_from_path

from fonctions_images import *  # my own helper module; provides del_lines()

path = '/Users/yves/documents_1/'
fichier = path + 'TOUTOU.pdf'

images = convert_from_path(fichier, 500, transparent=True, grayscale=True,
                           poppler_path='/usr/local/Cellar/poppler/21.12.0/bin')

pytesseract.pytesseract.tesseract_cmd = "/usr/local/bin/tesseract"

for v in range(len(images)):
    image = images[v]
    image.save(path + "image.png", format="png")
    test = path + "image.png"
    img = cv2.imread(test)          # load the saved page into memory
    img = del_lines(path, img)      # remove the ruled lines
    img = cv2.imread(path + "img_final_bin_1.png")
    # custom_config is a Tesseract configuration string defined elsewhere in my code
    d = pytesseract.image_to_data(img[3820:4050, 2340:4000], lang='fra',
                                  config=custom_config, output_type='data.frame')

PIL Python3 - How can I open a GIF file using Pillow?

Currently, I can open an image normally using a really short piece of code like this:
from PIL import Image
x = Image.open("Example.png")
x.show()
But when I try to open a GIF instead of a PNG, it shows the file but does not load the GIF's frames. Is there any way to load them?
Here is my current code:
from PIL import Image
a = Image.open("x.gif").convert("RGBA")  # if I don't convert it to RGBA, it raises an error
a.show()
Refer to Reading Sequences in the documentation:
from PIL import Image

with Image.open("animation.gif") as im:
    im.seek(1)  # skip to the second frame
    try:
        while 1:
            im.seek(im.tell() + 1)
            # do something to im
    except EOFError:
        pass  # end of sequence
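If you want every frame rather than manual seeking, Pillow's ImageSequence iterator walks the animation for you. A small sketch, reusing the x.gif file from the question and saving each frame to a placeholder frame_NNN.png:

from PIL import Image, ImageSequence

with Image.open("x.gif") as im:
    # Iterate over every frame of the animation in order.
    for index, frame in enumerate(ImageSequence.Iterator(im)):
        # Convert to RGBA so each frame can be saved or shown consistently.
        frame.convert("RGBA").save("frame_%03d.png" % index)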

Trying to pass a base_64 encoded image to google authentication

I am working on a small project (a RESTful service, so JSON format, which is not shown in the code) in which the code accepts base64 image data and decodes it back into an image. I am able to convert it back to an image, but I am not able to run Google Vision (Google OCR) on that image to extract the text. The only part that isn't working is the following block of code:
from flask import Flask, request, jsonify
import os, io, re, glob, base64
from google.cloud import vision
from google.cloud.vision import types
from PIL import Image

app = Flask(__name__)
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = r'date_scanner.json'

@app.route('/jason_example', methods=['POST'])
def jason_example():
    req_data = request.get_json()
    base_64_image_content = req_data['imgcnt']
    # the issue starts from here
    image = base64.b64decode(base_64_image_content)
    image = Image.open(io.BytesIO(image))
    image = vision.types.Image(content=content)
    response = client.text_detection(image=image)
    texts = response.text_annotations
There is no need to use Image.open, which is a PIL method anyway. You should be able to decode this straight to a byte string with base64.decodebytes, as outlined in this answer.
The code should look like:
# the issue starts from here
client = vision.ImageAnnotatorClient()  # create the Vision client (missing from the original snippet)
# the JSON value is a str, so encode it to bytes before decoding
image_bytes = base64.decodebytes(base_64_image_content.encode('ascii'))
image = vision.types.Image(content=image_bytes)
response = client.text_detection(image=image)
texts = response.text_annotations
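For a quick test of the route, a small client-side sketch that posts a base64-encoded image (it assumes the Flask app is running locally on port 5000 and reuses the imgcnt JSON key from the question; receipt.jpg is a placeholder file name):

import base64

import requests

# Encode a local image and send it to the Flask route above.
with open("receipt.jpg", "rb") as f:
    payload = {"imgcnt": base64.b64encode(f.read()).decode("ascii")}

resp = requests.post("http://localhost:5000/jason_example", json=payload)
print(resp.status_code, resp.text)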

Raw Images from rawpy darker than their thumbnails

I want to convert '.NEF' to '.png' using the rawpy, imageio and opencv libraries in Python. I've tried a variety of flags in rawpy to produce the same image that I see when I just open the NEF, but all of the output images are extremely dark. What am I doing wrong?
My current version of the code is:
import rawpy
import imageio
from os.path import *
import os
import cv2
def nef2png(inputNEFPath):
    parent, filename = split(inputNEFPath)
    name, _ = splitext(filename)
    pngName = str(name + '.png')
    tempFileName = str('temp%s.tiff' % (name))
    with rawpy.imread(inputNEFPath) as raw:
        rgb = raw.postprocess(gamma=(2.222, 4.5),
                              no_auto_bright=True,
                              output_bps=16)
    imageio.imsave(join(parent, tempFileName), rgb)
    image = cv2.imread(join(parent, tempFileName), cv2.IMREAD_UNCHANGED)
    cv2.imwrite(join(parent, pngName), image)
    os.remove(join(parent, tempFileName))
I'm hoping to get this result:
https://imgur.com/Q8qWfwN
But I keep getting dark outputs like this:
https://imgur.com/0jIuqpQ
As for the actual NEF file, I uploaded it to my Google Drive if you want to experiment with it: https://drive.google.com/drive/folders/1DVSPXk2Mbj8jpAU2EeZfK8d2HZM9taiH?usp=sharing
You're not doing anything wrong, it's just that the thumbnail was generated by Nikon's proprietary in-camera image processing pipeline. It's going to be hard to get the exact same visual output from an open source tool with an entirely different set of algorithms.
You can make the image brighter by setting no_auto_bright=False. If you're not happy with the default brightening, you can play with the auto_bright_thr parameter (see documentation).
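As a sketch of that suggestion (no_auto_bright, auto_bright_thr, use_camera_wb and output_bps are rawpy postprocess parameters; the file names here are placeholders):

import imageio
import rawpy

with rawpy.imread("mypic.NEF") as raw:
    rgb = raw.postprocess(
        gamma=(2.222, 4.5),
        no_auto_bright=False,   # let rawpy auto-brighten the image
        auto_bright_thr=0.01,   # fraction of pixels allowed to clip; tweak to taste
        use_camera_wb=True,     # use the white balance recorded by the camera
        output_bps=16,
    )

imageio.imsave("mypic.png", rgb)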

How to overlay images on each other in python and opencv?

I am trying to write images over each other. Ideally, what I want to do is write every image in one folder over every image in another folder and output every unique combination to a third folder. So far, I am just working on having one image written over another, but I can't seem to get that to work.
import numpy as np
import cv2
import matplotlib
def opencv_createsamples():
    mask = ('resized_pos/2')
    img = cv2.imread('neg/1')
    new_img = img * (mask.astype(img.dtype))
    cv2.imwrite('samp', new_img)

opencv_createsamples()
It would be helpful to have more information about your errors.
Something that stands out immediately is the lack of file-type extensions, so your images are probably not being read correctly to begin with. Also, image sizes are worth checking so you can resize as required.
If the goal is to blend images, the alpha channel is important to consider. Here is a relevant question on Stack Overflow: How to overlay images in python
Some other OpenCV docs that have helped me in the past: https://docs.opencv.org/trunk/d0/d86/tutorial_py_image_arithmetics.html
https://docs.opencv.org/3.1.0/d5/dc4/tutorial_adding_images.html
Hope this helps!
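To get a single pair of images blended, a simple alpha blend with cv2.addWeighted might look like the sketch below (the file names, the output path and the 50/50 weights are placeholders to adapt):

import cv2

# Read both images; note the explicit file extensions.
foreground = cv2.imread("resized_pos/2.jpg")
background = cv2.imread("neg/1.jpg")

# The two inputs must have the same size and type before blending.
foreground = cv2.resize(foreground, (background.shape[1], background.shape[0]))

# 50/50 blend; adjust the weights to make one image more prominent.
blended = cv2.addWeighted(background, 0.5, foreground, 0.5, 0)
cv2.imwrite("blended.jpg", blended)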
