Creating a Retro-themed loading screen in Python 3.x

I have an idea to include a loading screen, just for a fun project I'm working on, before the actual program starts in Python.
Back in the 80s, we had loading screens on the ZX Spectrum and other 8-bit computers that displayed an image one line at a time, then coloured the image in - something to look at while the game loaded. You can find an example at: https://youtu.be/MtBoRp_cSxQ
What I'd like to do is partly replicate this feature. I'd like to be able to take a JPG or PNG and have Python code load it one line at a time, much in the same way as in the video linked above. I don't mind if it can't be coloured in afterwards, or if there are no funky raster bars in the borders. Just an image 'loading' in one line at a time (if you see what I mean).
I can achieve much the same effect in the console with some basic code, using a text file with ASCII art, but it'd be great with an actual image.
Any help will be greatly appreciated.
As always, thanks in advance.
Dave.
import os
import time

def splash_screen(seconds):
    # Print the ASCII-art splash file one line at a time
    splash = open("test.txt", 'r')
    for lines in splash:
        print(lines)
        time.sleep(seconds)
    splash.close()

# Main Code Start
os.system('cls' if os.name == 'nt' else 'clear')
splash_screen(.5)
username = input("Type your username:")
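One way to get the effect with an actual image is pygame. Below is a minimal sketch, assuming a local image file named loading.png (the filename and timings are placeholders); it reveals the picture one scanline at a time:
import time
import pygame

# A ZX Spectrum style line-by-line reveal.
# "loading.png" and the timings are assumptions, not fixed values.
pygame.init()
image = pygame.image.load("loading.png")
width, height = image.get_size()
screen = pygame.display.set_mode((width, height))
screen.fill((0, 0, 0))

for y in range(height):
    row = pygame.Rect(0, y, width, 1)     # one scanline of the source image
    screen.blit(image, (0, y), area=row)  # copy just that row onto the screen
    pygame.display.update(row)
    pygame.event.pump()                   # keep the window responsive
    time.sleep(0.005)                     # tune for the desired loading speed

time.sleep(2)
pygame.quit()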

Related

showing PIL ImageGrab as numpy Array in Kivy Window

I am trying to capture a part of my screen with PIL.ImageGrab and then convert it to a numpy.array, as you can see below.
np.array(ImageGrab.grab(bbox=(0, 0, 720, 480)))
What I can't figure out is, how to get this data into a Kivy window or a widget?
All my Widgets are in separate classes so I would like to have one Screen class with all the code necessary for this part of my Window.
I have already tried methods like this one, but I don't know how to implement it since Kivy confuses me a lot sometimes.
If my idea just isn't realizable, I guess you could save each captured frame as a .jpg and then load it into Kivy. But I imagine this is not the most efficient way.
Thanks a lot in advance! Dominik
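For what it's worth, a minimal sketch of one way to do it: convert each grabbed frame to a Kivy Texture with blit_buffer() and hand it to an Image widget. The 30 fps interval and the bbox are assumptions:
import numpy as np
from PIL import ImageGrab
from kivy.app import App
from kivy.clock import Clock
from kivy.graphics.texture import Texture
from kivy.uix.image import Image

class ScreenCaptureApp(App):
    def build(self):
        self.img = Image()
        Clock.schedule_interval(self.update, 1 / 30)  # ~30 fps, adjust as needed
        return self.img

    def update(self, dt):
        frame = np.array(ImageGrab.grab(bbox=(0, 0, 720, 480)))
        # Kivy textures have their origin at the bottom-left, so flip vertically
        buf = np.flipud(frame).tobytes()
        texture = Texture.create(size=(frame.shape[1], frame.shape[0]), colorfmt='rgb')
        texture.blit_buffer(buf, colorfmt='rgb', bufferfmt='ubyte')
        self.img.texture = texture

ScreenCaptureApp().run()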

Open CV, find blurry images

I would like help with Python code that tells me whether the images in a folder are blurry or not.
I found this article useful: https://www.pyimagesearch.com/2015/09/07/blur-detection-with-opencv/
but its output draws text with the blurriness value on top of each image.
Instead, I want the result to be a text file (output.txt) that lists each image's path, its blurriness value, and whether it is blurry or not, rather than writing these things on top of the image.
I am using Anaconda 3 and installed cv2, argparse and imutils as explained in the article.
In output.txt it should look something like this:
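A minimal sketch of one way to produce such a file, using the variance-of-the-Laplacian measure from the article (the images/ folder name and the threshold of 100 are assumptions; 100 is the article's default):
import os
import cv2

folder = "images"     # assumed input folder
threshold = 100.0     # the article's default blur threshold

with open("output.txt", "w") as out:
    for name in sorted(os.listdir(folder)):
        path = os.path.join(folder, name)
        image = cv2.imread(path)
        if image is None:
            continue  # skip files OpenCV cannot read
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        score = cv2.Laplacian(gray, cv2.CV_64F).var()
        label = "blurry" if score < threshold else "not blurry"
        out.write(f"{path}\t{score:.2f}\t{label}\n")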

PIL -> PyGame image conversion: Partial data loss

I'm using the C-based screenshot concept from JHolta's answer in Take a screenshot via a python script. [Linux] in order to generate screenshots I'd like to display in PyGame. With some minor tweaks (prepending extern "C" to the functions and importing Xutil instead of Xlib), the provided code works amazingly well. In short, it uses Image.frombuffer on a byte array returned by the C library. Calling show() displays the image, along with anything I do to manipulate it, via ImageMagick.
However, if I convert it to Python 3's PyGame as per PIL and pygame.image, I only get a black surface. It's not a straightforward issue, though: if I draw onto the image before converting it into a PyGame image (like in the OP of the latter link), that does show up on a black background when blitting the result. Furthermore, printing the byte objects from PILImage.tobytes and pygame.image.tostring shows they both contain data and their lengths are identical.
What am I doing wrong here? I'll gladly provide code if necessary, but I think it's more of a conceptual issue and I didn't really change the snippets from these answers a lot.
(Similar issue in Python 2, by the way, but there PyGame uses str instead of bytes for tostring / fromstring, and printing the tostring result appears to yield an empty string.)
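For reference, the conversion step under discussion looks roughly like this (a minimal sketch, assuming an RGB PIL image; "screenshot.png" is a placeholder):
import pygame
from PIL import Image

# Convert a PIL image to a pygame surface via raw bytes
pil_image = Image.open("screenshot.png").convert("RGB")
data = pil_image.tobytes()
surface = pygame.image.fromstring(data, pil_image.size, "RGB")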
It turns out that a buggy trigger caused the screenshot to be taken again while the fullscreen window displaying it was opening. I suppose there are a few milliseconds of blackness, or of an undefined state (in the context of the screenshot function), at that moment, and the library is fast enough to catch that.
I'm not sure if this should stay up because it's basically a reminder to check for things that a human can't perceive. Feel free to delete if it's not appropriate.

How do you create a perfect pygame fullscreen?

I am making a game, and I want it to be fullscreen. However, pygame's fullscreen mode is strange, creating a screen that is too large. So I referred to this: Pygame FULLSCREEN Display Flag Creates A Game Screen That Is Too Large For The Screen. However, when I followed these instructions
import ctypes
import pygame

ctypes.windll.user32.SetProcessDPIAware()  # must run before the window is created
pygame.init()
true_res = (ctypes.windll.user32.GetSystemMetrics(0), ctypes.windll.user32.GetSystemMetrics(1))
pygame.display.set_mode(true_res, pygame.FULLSCREEN)
from an answer (but using pywin32 instead of ctypes, like this: win32api.GetSystemMetrics(0)).
I used this, and while it does create a fullscreen, it also creates a black border around my screen and enlarges everything a slight bit, including my cursor. How can I get rid of this black border and get all shapes to normal size? Or is there a better way to create a good fullscreen?
If it helps, I use Windows 10.
Thanks in advance!
I think the problem of everything being enlarged arose from the use of the ctypes module, because it calls a function named GetSystemMetrics() whose job is to get the size of your system's screen.
It might also be that importing pygame loads a DLL that is not compatible with a DLL that windll needs.
So I suggest you either update the ctypes library, the pygame library, or both, or set the screen size yourself by providing custom width and height values matching a resolution your system supports, as sketched below.
Hope this helps!
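A minimal sketch of that last suggestion, with 1920x1080 as an assumed example of a resolution the display natively supports:
import pygame

pygame.init()
# Pass an explicit mode the monitor supports instead of relying on
# the size reported by GetSystemMetrics()
screen = pygame.display.set_mode((1920, 1080), pygame.FULLSCREEN)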

Move to searched text on active screen with pyautogui

I am trying to make a program that searches for a text on a web-page, then places the mouse cursor on the highlighted text after it has been found. Is this possible using pyautogui? If so, how. If not, are there any other alternatives to do this?
Example code below:
import time
import webbrowser
import pyautogui

var = 'Filtered Questions'
webbrowser.open('https://stackexchange.com/')
time.sleep(2)
pyautogui.hotkey('ctrl', 'f')
pyautogui.typewrite(var)
#code to place mouse cursor on the occurrence of var
I would prefer not to use pyautogui.moveTo() or pyautogui.moveRel(), because the text I am searching for on the website is not static. The position of the searched text varies when the web page loads. Any help would be highly appreciated.
When you use Chrome or Chromium as a browser there is a much easier and much more stable approach using ONLY pyautogui:
Perform Ctrl + F with pyautogui
Perform Ctrl + Enter to 'click' on the search result / open the link related to the result
With other browsers, you would have to check whether such keyboard shortcuts also exist.
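A minimal sketch of those two steps (the search term and the delay are placeholders):
import time
import pyautogui

pyautogui.hotkey('ctrl', 'f')              # open Chrome's find bar
pyautogui.typewrite('Filtered Questions')  # type the search term
time.sleep(0.5)                            # let the browser highlight the hit
pyautogui.hotkey('ctrl', 'enter')          # jump to / open the highlighted result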
Yes, you can do that, but you additionally need Tesseract (and the Python module pytesseract) for text recognition, and PIL for taking screenshots.
Then perform the following steps:
Open the page
Open and perform the search (ctrl+f with pyautogui) - the view changes to the first result
Take a screenshot (with PIL)
Convert the image to text and data (with Tesseract) and find the text and the position
Use pyautogui to move the mouse and click on it
Here is the needed code for getting the image and the related data:
import time
from PIL import ImageGrab # screenshot
import pytesseract
from pytesseract import Output
pytesseract.pytesseract.tesseract_cmd = (r"C:\...\AppData\Local\Programs\Tesseract-OCR\tesseract") # needed for Windows as OS
screen = ImageGrab.grab() # screenshot
cap = screen.convert('L') # make grayscale
data = pytesseract.image_to_boxes(cap, output_type=Output.DICT)
print(data)
In data you find all the information required to move the mouse and click on the text.
The downside of this approach is the resource-consuming OCR part, which takes a few seconds on slower machines.
I stumbled upon this question while researching the topic. Basically the answer is no. Two major points:
1) Pyautogui has the option of searching using images. Using this you could, for example, screenshot all the text you want to find, save the screenshots as individual image files, and then use those to search dynamically and move the mouse there/click/do whatever you need to. However, as explained in the docs, it takes 1-2 seconds for each search, which is rather impractical.
2) In some cases, but not always, using ctrl+f on a website and searching for the text will scroll so that the result is in the middle (vertically) of the page. However, that relies on some heavy assumptions about where the text you are searching for is. If it's at the top of the page you obviously won't be able to use that method, same as if it's at the bottom.
If you're trying to automate clicks and have links with distinguishable names, my advice would be to parse the source code and click the link artificially. Otherwise you're probably better off with an automation suite like Blue Prism.
pyautogui is for controlling the mouse and keyboard and for automating other GUI applications. If your need is to find text on a webpage, you may want to look at better options that are intended for scraping webpages, for instance Selenium.
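For example, a minimal Selenium sketch that clicks the link directly instead of steering the mouse (assumes a matching chromedriver is installed):
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get('https://stackexchange.com/')
# Locate the element by its visible link text and click it
driver.find_element(By.LINK_TEXT, 'Filtered Questions').click()
driver.quit()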
If you are a newcomer looking for how to find a string of text anywhere on your screen and stumbled upon this old question through a Google search, you can use the following snippet, which I have used in my own projects. It takes a raw string of text as input; if the text is found on the screen it returns the coordinates, and if not it returns None:
import pyautogui
import pytesseract
import cv2
import numpy as np

# In case you're on Windows and pytesseract doesn't
# find your installation (happened to me)
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'

def find_coordinates_text(text, lang='eng'):  # Tesseract expects 'eng', not 'en'
    # Take a screenshot of the main screen
    screenshot = pyautogui.screenshot()
    # Convert the screenshot to grayscale
    img = cv2.cvtColor(np.array(screenshot), cv2.COLOR_RGB2GRAY)
    # Find the provided text (text) on the grayscale screenshot
    # using the provided language (lang)
    data = pytesseract.image_to_data(img, lang=lang, output_type='data.frame')
    # Find the coordinates of the provided text (text)
    try:
        x = data[data['text'] == text]['left'].iloc[0]
        y = data[data['text'] == text]['top'].iloc[0]
    except IndexError:
        # The text was not found on the screen
        return None
    # Text was found, return the coordinates
    return (x, y)
Usage:
text_to_find = 'Filtered Questions'
coordinates = find_coordinates_text(text_to_find)
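And to actually place the cursor on the match, as the original question asks:
if coordinates is not None:
    pyautogui.moveTo(coordinates[0], coordinates[1])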
