Is noLoop stopping execution of draw? - python-3.x

This is my first post here, so I apologize if I'm making any mistakes.
I recently started to study Processing in Python mode and I'm trying to develop a code that, after selecting an image from your computer, reads the colors and inserts them in a list. The final idea is to calculate the percentage of certain colors in the image. For this I am using the following code:
img = None
tam = 5
cores_img = []

def setup():
    size(500, 500)
    selectInput(u"Escolha a ilustração para leitura de cores", "adicionar_imagens")
    noLoop()

def adicionar_imagens(selection):
    global img
    if selection == None:
        print(u"Seleção cancelada")
    else:
        print(u"Você selecionou " + selection.getAbsolutePath())
        img = loadImage(selection.getAbsolutePath())

def draw():
    if img is not None:
        image(img, 0, 0)
        for xg in range(0, img.width, tam):
            x = map(xg, 0, img.width, 0, img.width)
            for yg in range(0, img.height, tam):
                y = map(yg, 0, img.height, 0, img.height)
                cor = img.get(int(x), int(y))
                cores_img.append(cor)
        print(cores_img)
I'm using noLoop() so that the colors are added to the list only once. However, it seems that draw() is not running. It performs the setup actions, but when the image is selected, nothing happens. There is also no error message.
I'm completely lost about what might be happening. If anyone has any ideas and can help, I really appreciate it!

Calling noLoop() indeed stops the draw() loop from running, which means that by the time you've selected an image, nothing would happen.
You can, however, manually call draw() (or redraw()) once the image is loaded:
img = None
tam = 5
cores_img = []

def setup():
    size(500, 500)
    selectInput(u"Escolha a ilustração para leitura de cores", "adicionar_imagens")
    noLoop()

def adicionar_imagens(selection):
    global img
    if selection == None:
        print(u"Seleção cancelada")
    else:
        print(u"Você selecionou " + selection.getAbsolutePath())
        img = loadImage(selection.getAbsolutePath())
        redraw()

def draw():
    if img is not None:
        image(img, 0, 0)
        for xg in range(0, img.width, tam):
            x = map(xg, 0, img.width, 0, img.width)
            for yg in range(0, img.height, tam):
                y = map(yg, 0, img.height, 0, img.height)
                cor = img.get(int(x), int(y))
                cores_img.append(cor)
        print(cores_img)
You should pay attention to a few details:
As the reference mentions, calling get() is slow: pixels[x + y * width] is faster (just remember to call loadPixels() if the array doesn't look right)
PImage already has a pixels array: calling img.resize(img.width / tam, img.height / tam) should downsample the image so you can read that same list
x = map(xg, 0, img.width, 0, img.width) (and similarly y) maps from one range to the same range, which has no effect
e.g.
img = None
tam = 5
cores_img = None

def setup():
    size(500, 500)
    selectInput(u"Escolha a ilustração para leitura de cores", "adicionar_imagens")
    noLoop()

def adicionar_imagens(selection):
    global img, cores_img
    if selection == None:
        print(u"Seleção cancelada")
    else:
        print(u"Você selecionou " + selection.getAbsolutePath())
        img = loadImage(selection.getAbsolutePath())
        print("total pixels", len(img.pixels))
        img.resize(img.width / tam, img.height / tam)
        cores_img = list(img.pixels)
        print("resized pixels", len(img.pixels))
        print(cores_img)

def draw():
    pass
Update
I thought that calling noLoop on setup would make draw run once. Still
it won't print the image... I'm calling 'image(img, 0, 0)' at the end
of 'else', in 'def adicionar_imagens(selection)'. Should I call it
somewhere else?
Think of adicionar_imagens time-wise, running separately from setup() and draw().
You are right: draw() should be called once (because of noLoop()). However, it's called as soon as setup() completes, but not later (as navigating the file system, selecting a file and confirming takes time).
draw() would need to be forced to run again after the image was loaded.
Here's an updated snippet:
img = None
# optional: potentially useful for debugging
img_resized = None
tam = 5
cores_img = None

def setup():
    size(500, 500)
    selectInput(u"Escolha a ilustração para leitura de cores", "adicionar_imagens")
    noLoop()

def adicionar_imagens(selection):
    global img, img_resized, cores_img
    if selection == None:
        print(u"Seleção cancelada")
    else:
        print(u"Você selecionou " + selection.getAbsolutePath())
        img = loadImage(selection.getAbsolutePath())
        # make a copy of the original image (to keep it intact)
        img_resized = img.get()
        # resize
        img_resized.resize(img.width / tam, img.height / tam)
        # convert the resized pixels array to a Python list
        cores_img = list(img_resized.pixels)
        # force redraw
        redraw()
        # print data
        print("total pixels", len(img.pixels))
        print("resized pixels", len(img_resized.pixels))
        # print(cores_img)

def draw():
    print("draw called " + str(millis()) + "ms after sketch started")
    # if an img was selected and loaded, display it
    if img != None:
        image(img, 0, 0)
    # optionally display the resized image
    if img_resized != None:
        image(img_resized, 0, 0)
Here are a couple of notes that may be helpful:
each pixel in the list is a 32-bit ARGB colour (i.e. all four channels are packed into a single int value). If you need individual colour channels, remember you have functions like red(), green() and blue() available. (Also, if that gets slow, note that the reference examples include faster versions using bit shifting and masking)
the Histogram example could be helpful. You would need to port it from Java to Python syntax and use three histograms (one for each colour channel), but the principle of counting intensities is nicely illustrated
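As a rough illustration of the bit-shifting idea combined with per-channel counting, here is a plain-Python sketch. unpack_argb and channel_histograms are hypothetical helper names, not part of the Processing API; inside a sketch you could use red()/green()/blue() instead:

```python
# Sketch of extracting channels from a 32-bit ARGB int by shifting and masking.
def unpack_argb(c):
    a = (c >> 24) & 0xFF  # alpha
    r = (c >> 16) & 0xFF  # red
    g = (c >> 8) & 0xFF   # green
    b = c & 0xFF          # blue
    return a, r, g, b

def channel_histograms(cores):
    # one 256-bin histogram per colour channel, as in the Histogram example
    hist_r, hist_g, hist_b = [0] * 256, [0] * 256, [0] * 256
    for cor in cores:
        _, r, g, b = unpack_argb(cor)
        hist_r[r] += 1
        hist_g[g] += 1
        hist_b[b] += 1
    return hist_r, hist_g, hist_b
```

Note that in Python mode the pixel values come from Java ints and can be negative, so masking with c & 0xFFFFFFFF first keeps the shifts well-behaved.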

Related

A QGraphicsPixmapItem with a transparency gradient

I would like to apply a transparency gradient to a QGraphicsPixmapItem, but I don't know how to go about it; for now I can only apply transparency to the whole QGraphicsPixmapItem (not a gradient). The item undergoes a mirror effect (like an element on a wall reflected on the ground), then I would like to set up this transparency gradient (from top to bottom: quite opaque at the top and, through several stages of transparency, completely transparent at the bottom of the item), then I add blur. It's the transparency-gradient part that I can't seem to do... I don't understand how it can work. Everything I've tried doesn't work. Can you help me?
Thanks in advance.
Here is a fairly short code:
for item in self.scene.selectedItems():
    # height
    item_height = item.boundingRect().height()
    # mirror
    item.setTransform(QTransform(1, 0, 0, 0, -1, 0, 0, 0, 1))
    # opacity
    item.setOpacity(0.7)
    # blur
    blur = QGraphicsBlurEffect()
    blur.setBlurRadius(8)
    item.setGraphicsEffect(blur)
    item.setPos(100, 100 + 2 * int(item_height))
    ##############################################
    """
    alphaGradient = QLinearGradient(item.boundingRect().topLeft(), item.boundingRect().bottomLeft())
    alphaGradient.setColorAt(0.0, Qt.transparent)
    alphaGradient.setColorAt(0.5, Qt.black)
    alphaGradient.setColorAt(1.0, Qt.transparent)
    effect = QGraphicsOpacityEffect()
    effect.setOpacityMask(alphaGradient)
    item.setGraphicsEffect(effect)
    """
    ##############################################
    """
    opacity = QGraphicsOpacityEffect()
    lg = QLinearGradient()
    lg.setStart(0, 0)
    lg.setFinalStop(0, 100)
    lg.setColorAt(0, Qt.transparent)
    lg.setColorAt(0.5, Qt.transparent)
    lg.setColorAt(1.0, Qt.transparent)
    opacity.setOpacityMask(lg)
    # opacity.setOpacity(1)
    item.setGraphicsEffect(opacity)
    """
    ##############################################
Only a single QGraphicsEffect can be applied to each target item (or widget), so you have two options:
create a new image that applies the opacity gradient (using QPainter composition modes) and only set the blur effect;
set the opacity effect on a parent item;
In both cases, I'd suggest creating the "mirror" item as a child of the actual QGraphicsPixmapItem, which ensures that it always follows what happens to the parent, including visibility and geometry changes, transformations, etc.
Generate the mirror image with a transparency gradient
For this, we use the composition features of QPainter, specifically CompositionMode_SourceIn, which allows using a gradient as the source for the alpha channel of the image being drawn.
The procedure is the following:
create a new image based on the size of the mirrored one;
create a gradient that has full opacity at 0 and is transparent at 1 (or less);
draw that gradient on the image;
set the composition mode;
draw the mirrored image;
create a QGraphicsPixmapItem based on that image;
class MirrorPixmapItem(QtWidgets.QGraphicsPixmapItem):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.mirror = QtWidgets.QGraphicsPixmapItem(self)
        self.blurEffect = QtWidgets.QGraphicsBlurEffect(blurRadius=8)
        self.mirror.setGraphicsEffect(self.blurEffect)
        if not self.pixmap().isNull():
            self.createMirror()

    def setPixmap(self, pixmap):
        old = self.pixmap()
        super().setPixmap(pixmap)
        if old == pixmap:
            return
        self.createMirror()

    def setOffset(self, *args):
        super().setOffset(*args)
        if not self.mirror.pixmap().isNull():
            self.mirror.setOffset(self.offset())

    def createMirror(self):
        source = self.pixmap()
        if source.isNull():
            self.mirror.setPixmap(source)
            return
        scale = .5
        height = int(source.height() * scale)
        output = QtGui.QPixmap(source.width(), height)
        output.fill(QtCore.Qt.transparent)
        qp = QtGui.QPainter(output)
        grad = QtGui.QLinearGradient(0, 0, 0, height)
        grad.setColorAt(0, QtCore.Qt.black)
        grad.setColorAt(1, QtCore.Qt.transparent)
        qp.fillRect(output.rect(), grad)
        qp.setCompositionMode(qp.CompositionMode_SourceIn)
        qp.drawPixmap(0, 0,
            source.transformed(QtGui.QTransform().scale(1, -scale)))
        qp.end()
        self.mirror.setPixmap(output)
        self.mirror.setY(self.boundingRect().bottom())
        self.mirror.setOffset(self.offset())
Set the opacity on the parent item
In this case, we still have the blur effect on the mirror item, but we apply a QGraphicsOpacityEffect on the parent (the original image) and use an opacityMask that covers both the original image and the mirror: it is at full opacity down to the height of the original, then begins to fade out to full transparency over the mirror.
class MirrorPixmapItem2(QtWidgets.QGraphicsPixmapItem):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.opacity = QtWidgets.QGraphicsOpacityEffect(opacity=1)
        self.setGraphicsEffect(self.opacity)
        self.mirror = QtWidgets.QGraphicsPixmapItem(self)
        self.blurEffect = QtWidgets.QGraphicsBlurEffect(blurRadius=8)
        self.mirror.setGraphicsEffect(self.blurEffect)
        if not self.pixmap().isNull():
            self.createMirror()

    # ... override setPixmap() and setOffset() as above

    def createMirror(self):
        source = self.pixmap()
        if source.isNull():
            self.mirror.setPixmap(source)
            return
        scale = .5
        self.mirror.setPixmap(
            source.transformed(QtGui.QTransform().scale(1, -scale)))
        self.mirror.setPos(self.boundingRect().bottomLeft())
        self.mirror.setOffset(self.offset())
        height = source.height()
        gradHeight = height + height * scale
        grad = QtGui.QLinearGradient(0, 0, 0, gradHeight)
        grad.setColorAt(height / gradHeight, QtCore.Qt.black)
        grad.setColorAt(1, QtCore.Qt.transparent)
        self.opacity.setOpacityMask(grad)
This is the result; as you can see, there is no visible difference. Which approach to use is a matter of choice, but the first one might be slightly better performance-wise, since no additional effect has to be applied when the items are drawn. That can matter if you are going to have many (hundreds or thousands of) image items and/or high resolutions.
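The gradient-stop arithmetic used in the second approach can be sketched in isolation (plain Python; fade_start is a hypothetical helper using the same height/scale names as createMirror above):

```python
def fade_start(height, scale):
    # the opacity mask spans the original image plus the mirror below it
    grad_height = height + height * scale
    # fraction of the mask at which the fade to transparent begins
    return height / grad_height
```

For a 100px image with a half-height mirror, the fade starts two thirds of the way down the combined mask.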

I need to make pytesseract.image_to_string faster

I'm capturing the screen and then reading text from it using Tesseract to turn it into a string. The problem is that it's too slow for what I need: I'm getting about 5.6 fps and I need more like 10-20. (I didn't include the imports because you can see them from the code.)
I tried everything I know and nothing helped.
pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe"
time.sleep(7)

def getDesiredWindow():
    """Returns the top-left and bottom-right of the desired window."""
    print('Click the top left of the desired region.')
    pt1 = detectClick()
    print('First position set!')
    time.sleep(1)
    print('Click the bottom right of the desired region.')
    pt2 = detectClick()
    print('Got the window!')
    return pt1, pt2

def detectClick():
    """Detects and returns the click position"""
    state_left = win32api.GetKeyState(0x01)
    print("Waiting for click...")
    while True:
        a = win32api.GetKeyState(0x01)
        if a != state_left:  # button state changed
            state_left = a
            if a < 0:
                print('Detected left click')
                return win32gui.GetCursorPos()

def gettext(pt1, pt2):
    """This is the part where I need it to be faster."""
    # From the two input points, define the desired box
    box = (pt1[0], pt1[1], pt2[0], pt2[1])
    image = ImageGrab.grab(box)
    return pytesseract.image_to_string(image)
Hi, my solution was to make the image smaller.
Yes, it might affect the image_to_string result and make it less accurate, but in my case, since my images were 1500px wide, I managed to get a 3x speedup this way.
Try changing basewidth and measuring again:
from PIL import Image

basewidth = 600
img = Image.open('yourimage.png')
wpercent = basewidth / float(img.size[0])
hsize = int(float(img.size[1]) * wpercent)
# note: Image.ANTIALIAS was removed in Pillow 10; use Image.LANCZOS there
img = img.resize((basewidth, hsize), Image.ANTIALIAS)
img.save('yourimage.png')
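The resize arithmetic can be factored into a small helper and checked in isolation (scaled_size is a hypothetical name; same math as the snippet above):

```python
def scaled_size(width, height, basewidth):
    # force the target width while preserving the aspect ratio
    wpercent = basewidth / float(width)
    return basewidth, int(height * wpercent)
```

For example, a 1500x900 capture shrinks to 600x360 with basewidth = 600.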

Reading a barcode using OpenCV QRCodeDetector

I am trying to use OpenCV on Python3 to create an image with a QR code and read that code back.
Here is some relevant code:
def make_qr_code(self, data):
    qr = qrcode.QRCode(
        version=2,
        error_correction=qrcode.constants.ERROR_CORRECT_H,
        box_size=10,
        border=4,
    )
    qr.add_data(data)
    return numpy.array(qr.make_image().get_image())

# // DEBUG
img = numpy.ones([380, 380, 3]) * 255
index = self.make_qr_code('Hello StackOverflow!')
img[:index.shape[0], :index.shape[1]][index] = [0, 0, 255]
frame = img
# // DEBUG
self.show_image_in_canvas(0, frame)

frame_mono = cv.cvtColor(numpy.uint8(frame), cv.COLOR_BGR2GRAY)
self.show_image_in_canvas(1, frame_mono)

qr_detector = cv.QRCodeDetector()
data, bbox, rectifiedImage = qr_detector.detectAndDecode(frame_mono)
if len(data) > 0:
    print("Decoded Data : {}".format(data))
    self.show_image_in_canvas(2, rectifiedImage)
else:
    print("QR Code not detected")
(the calls to show_image_in_canvas are just for showing the images in my GUI so I can see what is going on).
When inspecting the frame and frame_mono visually, it looks OK to me
However, the QR Code Detector doesn't return anything (going into the else: "QR Code not detected").
There is literally nothing else in the frame than the QR code I just generated. What do I need to configure about cv.QRCodeDetector or what additional preprocessing do I need to do on my frame to make it find the QR code?
OP here; solved the problem by having a good look at the generated QR code and comparing it to some other sources.
The problem was not in the detection, but in the generation of the QR codes.
Apparently the array that qrcode.QRCode returns has False (or maybe it was 0 and I assumed it was a boolean) in the grid squares that are part of the code, and True (or non-zero) in the squares that are not.
So when I did img[:index.shape[0], :index.shape[1]][index] = [0, 0, 255] I was actually creating a negative image of the QR code.
When I inverted the index array the QR code changed from the image on the left to the image on the right and the detection succeeded.
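The inversion itself is just a boolean flip: with numpy you can write index = ~index (or numpy.logical_not(index)). A dependency-free sketch of the same idea on a plain 2D list (invert_mask is a hypothetical helper, not from the question's code):

```python
def invert_mask(mask):
    # flip every cell so "part of the code" and "background" swap roles
    return [[not cell for cell in row] for row in mask]
```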
In addition I decided to switch to the ZBar library because it's much better at detecting these codes under less perfect circumstances (like from a webcam image).
import cv2
import sys

filename = sys.argv[1]
# Or you can point at a file directly:
# filename = f'images/filename.jpg' where images is the folder for the files you are trying to read

# read the QR code image
# in case the QR code is not black/white it is better to convert it into grayscale
# zero means grayscale
img = cv2.imread(filename, 0)
img_origin = cv2.imread(filename)

# initialize the cv2 QRCode detector
detector = cv2.QRCodeDetector()

# detect and decode
data, bbox, straight_qrcode = detector.detectAndDecode(img)

# if there is a QR code
if bbox is not None:
    print(f"QRCode data:\n{data}")
    # display the image with the bounding box lines;
    # bbox has shape [[[float, float], ...]], so convert the floats to int
    # and loop over the points of the first element
    n_lines = len(bbox[0])
    bbox1 = bbox.astype(int)  # float to int conversion
    for i in range(n_lines):
        # draw all lines
        point1 = tuple(bbox1[0, i])
        point2 = tuple(bbox1[0, (i + 1) % n_lines])
        cv2.line(img_origin, point1, point2, color=(255, 0, 0), thickness=2)

    # display the result
    cv2.imshow("img", img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
else:
    print("QR code not detected")
To re-state the accepted answer, the background of the QRcode must be white and the foreground must be black. So if the generated QRcode has a white foreground you must invert the colors, e.g.:
from cv2 import cv2
img = cv2.imread('C:/Users/N/qrcode.jpg')
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Invert colors so foreground is black
img_invert = cv2.bitwise_not(img_gray)
cv2.imshow('gray', img_gray)
cv2.imshow('inverted', img_invert)
cv2.waitKey(1)
qr_detector = cv2.QRCodeDetector()
text, _, _ = qr_detector.detectAndDecode(img_invert)
print(text)

OpenGL error 1286 when window is minimised

Some information about what I'm using:
Ancient Potato integrated GPU (Intel(R) HD Graphics Family)
Windows 7
OpenGL <=3.1
Python 3.7.0
Error that I get instantly the moment I simply minimise the window:
$ python main.py
Traceback (most recent call last):
  File "main.py", line 71, in <module>
    main()
  File "main.py", line 60, in main
    renderer.render(mesh)
  File "...\myproject\renderer.py", line 22, in render
    glDrawElements(GL_TRIANGLES, mesh.indices, GL_UNSIGNED_INT, ctypes.c_void_p(0))
  File "...\OpenGL\latebind.py", line 41, in __call__
    return self._finalCall( *args, **named )
  File "...\OpenGL\wrapper.py", line 854, in wrapperCall
    raise err
  File "...\OpenGL\wrapper.py", line 847, in wrapperCall
    result = wrappedOperation( *cArguments )
  File "...\OpenGL\error.py", line 232, in glCheckError
    baseOperation = baseOperation,
OpenGL.error.GLError: GLError(
    err = 1286,
    baseOperation = glDrawElements,
    pyArgs = (
        GL_TRIANGLES,
        6,
        GL_UNSIGNED_INT,
        c_void_p(None),
    ),
    cArgs = (
        GL_TRIANGLES,
        6,
        GL_UNSIGNED_INT,
        c_void_p(None),
    ),
    cArguments = (
        GL_TRIANGLES,
        6,
        GL_UNSIGNED_INT,
        c_void_p(None),
    )
)
When I googled OpenGL error code 1286, I found that it occurs when something is wrong with a framebuffer. That really doesn't tell me anything...
# renderer.py
class Renderer:
    def __init__(self, colour=(0.0, 0.0, 0.0)):
        self.colour = colour

    @property
    def colour(self):
        return self._colour

    @colour.setter
    def colour(self, new_colour):
        glClearColor(*new_colour, 1.0)
        self._colour = new_colour

    def render(self, mesh):
        glBindVertexArray(mesh.vao_id)
        glBindTexture(GL_TEXTURE_2D, mesh.texture)
        glDrawElements(GL_TRIANGLES, mesh.indices, GL_UNSIGNED_INT, ctypes.c_void_p(0))

    def clear(self):
        glClear(GL_COLOR_BUFFER_BIT)
As I am using framebuffers, I could have done something wrong, but I got everything to work the way I wanted (render to texture, then use the texture as the source for rendering on a quad and also as the source for the texture that will be rendered next frame, basically using the GPU to manipulate grids of arbitrary data). I do it by swapping FBOs instead of swapping textures, if that's unclear from the code:
# bufferedtexture.py (the place where I use a framebuffer)
class BufferedTexture:
    def __init__(self, width, height):
        self._width = width
        self._height = height
        self._textures = glGenTextures(2)
        self._buffers = glGenFramebuffers(2)
        self._previous_buffer = 0
        self._current_buffer = 1
        self.init_buffer(0)
        self.init_buffer(1)

    @property
    def width(self):
        return self._width

    @property
    def height(self):
        return self._height

    @property
    def buffer(self):
        return self._buffers[self._current_buffer]

    @property
    def texture(self):
        return self._textures[self._previous_buffer]

    def init_buffer(self, index):
        glBindFramebuffer(GL_FRAMEBUFFER, self._buffers[index])
        glBindTexture(GL_TEXTURE_2D, self._textures[index])
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST)
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, self.width, self.height, 0, GL_RGBA, GL_UNSIGNED_BYTE, ctypes.c_void_p(0))
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, self._textures[index], 0)

    def set_texture_data(self, image_data):
        glBindTexture(GL_TEXTURE_2D, self.texture)
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, self.width, self.height, 0, GL_RGBA, GL_UNSIGNED_BYTE, image_data)

    def swap_buffers(self):
        self._previous_buffer = self._current_buffer
        self._current_buffer = (self._current_buffer + 1) % 2

    def enable(self):
        glBindFramebuffer(GL_FRAMEBUFFER, self.buffer)

    def disable(self):
        glBindFramebuffer(GL_FRAMEBUFFER, 0)

    def destroy(self):
        glDeleteFramebuffers(self._buffers)
        glDeleteTextures(self._textures)
And I use everything like this:
# main loop
while not window.should_close:  # glfwWindowShouldClose(self.hwnd)
    shader.enable()             # glUseProgram(self._program)
    buff.enable()               # BufferedTexture object, source above
    renderer.clear()            # Renderer object, source above
    renderer.render(mesh)       # by the way, mesh is just a quad, nothing fancy
    buff.disable()              # tells the driver that we will be drawing to the screen again
    buff.swap_buffers()
    mesh.texture = buff.texture  # give the quad the texture that was rendered off-screen
    renderer.clear()
    renderer.render(mesh)
    window.swap_buffers()       # glfwSwapBuffers(self.hwnd)
    window.poll_events()        # glfwPollEvents()
I don't even know what could be wrong, again, this only happens when I minimise the window, otherwise I can leave it to run for hours and it's fine, but the moment I minimise it dies...
I even tried to add
assert(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE)
assert(glGetError() == GL_NO_ERROR)
at the end of BufferedTexture.init_buffer to quickly check whether it's a problem with FBO itself, but...
$ python main.py
<no assertion errors to be found>
<same error once I minimise>
TL;DR
Everything renders properly as intended;
I haven't noticed any problems performance-wise; no errors are being thrown or swallowed while I initialize the GLFW and OpenGL stuff (I raise RuntimeError myself where PyOpenGL would otherwise be fine with something going wrong, for some reason, without ever catching it);
The program crashes with OpenGL error 1286 when I minimise the window; losing focus does nothing, only when I minimise it...
Send help.
EDIT:
mesh = Mesh(indices, vertices, uvs)
buff = BufferedTexture(800, 600)
with Image.open("cat.jpg") as image:
    w, h = image.size  # the image is 800x600
    img_data = np.asarray(image.convert("RGBA"), np.uint8)
buff.set_texture_data(img_data[::-1])
buff.swap_buffers()
buff.set_texture_data(img_data[::-1])
mesh.texture = buff.texture  # this is just a GL_TEXTURE_2D object ID
buff.disable()

while not window.should_close:
    shader.enable()
    #buff.enable()
    #renderer.clear()
    #renderer.render(mesh)
    #buff.disable()
    #buff.swap_buffers()
    #mesh.texture = buff.texture
    renderer.clear()
    renderer.render(mesh)
    window.swap_buffers()
    window.poll_events()
Once I stop using buffers completely, it works as intended. So there's something wrong with my code, I hope.
Someone pointed out (but deleted their answer/comment for whatever reason) that the window size becomes 0, 0 when minimized, and that was indeed the case. To prevent crashing and wasting resources while the window is minimized, I did this:
I created and registered a callback for window resize. This was very easy, as I'm using only a single window and already had it set up as a singleton; the callback just tells the window whether it should sleep or not.
def callback(window, width, height):
    Window.instance.sleeping = width == 0 or height == 0
(obviously) registered the callback for my own window:
glfwSetFramebufferSizeCallback(self.handle, callback)
I don't do anything besides polling events when window is "sleeping":
while not window.should_close:
    window.poll_events()
    if window.sleeping:
        time.sleep(0.01)
        continue
    shader.enable()
    buff.enable()
    renderer.clear()
    renderer.render(mesh)
    buff.disable()
    buff.swap_buffers()
    mesh.texture = buff.texture
    renderer.clear()
    renderer.render(mesh)
    window.swap_buffers()

tkinter bitmapimage "image doesn't exist"

So I had it working; I'm not sure what I changed, but maybe someone else can take a peek and see where the error is.
Here is the code where the image is made and set to display:
def app_openfile(self, w, o, s):
    io_file = filedialog.askopenfilename()
    self.hex_data = Util.openFile(io_file)
    self.all_bin_data = Util.convertToBinary(self.hex_data)
    self.bin_data = self.all_bin_data
    ## info bar call goes here
    self.bit_image = Util.makeImage(self.bin_data, w, o, s)
    print(self.bit_image.size)
    print(self.bit_image)
    self.photo = ImageTk.BitmapImage(self.bit_image, background='white')
    print(self.photo)
    self.binViewBox.create_image((0, 0), image=self.photo, anchor=N)
    self.binViewBox.config(xscrollincrement=self.Scale,
                           yscrollincrement=self.Scale,
                           scrollregion=(0,
                                         0,
                                         math.ceil(int(w) * int(self.Scale)),
                                         math.ceil(len(self.binData) / int(w) * int(self.Scale))))
Now in the Util.makeImage bit, I have it set to show the image before it returns, to verify that it makes an image, and it does. But for some reason I haven't figured out yet, it's now throwing this error:
_tkinter.TclError: image "pyimage11" doesn't exist
this is the return of the print statement before the exception is thrown
making image 1000 0 1
(1000, 96)
"PIL.Image.Image image mode=1 size=1000x96 at 0xE1ADD30"
pyimage11
EDIT:
This is an example of what it should produce, but it isn't pushed to the tkinter canvas.
Update as of 08/14/2018
def app_openfile():
    io_file = filedialog.askopenfilename()
    h_data = Util.openFile(io_file)
    all_b_data = b_data = Util.convertToBinary(h_data)
    viewtest = Canvas(root, width=500, height=500, bd=0)
    bimage = Util.makeImage(b_data, 800, 0, 5)
    bimage = ImageTk.BitmapImage(bimage)
    viewtest.create_image((0, 0), image=bimage)
app_openfile()
root = Tk()
root.wm_title('PyDMT')
root.resizable(width=True, height=True)
root.mainloop()
I took out the class stuff to see if that was the issue, but it still throws the error.
