How to use Python to automate clicking on an icon? - python-3.x

I have a desktop application installed on my PC, and I wish to use Python code to auto click an icon after the application opens. Below is what my application looks like:
In this case I wish to use code to look for the Excel icon and auto click it; the purpose of this icon is to download the data.
I cannot use a webdriver function because this is not a web application and it has no code behind it; that means when I run Sample.exe it directly pops up the window shown in the picture here.
Does anyone have any idea on this?

import pyautogui

# Wait until the icon appears on screen, then double-click it once.
while True:
    location = pyautogui.locateCenterOnScreen('icon.png')
    if location is not None:
        pyautogui.doubleClick(location.x, location.y)
        break
There are many different modules you can use to achieve this, but the one I would recommend is PyAutoGUI. It allows for screen searches and control over peripherals such as the mouse and keyboard. The code is relatively simple and very basic. You can install it with pip: pip install pyautogui.
Something like what you describe is one of the first things you will be able to do with it, and if you need further help feel free to ask, but for convenience I will drop the project page with the relevant functions.
https://pypi.org/project/PyAutoGUI/
A demo is provided in the code above.
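Beyond locating images, PyAutoGUI also drives the mouse and keyboard directly. As a rough illustration, here is a minimal sketch; the coordinates and text below are placeholders, not taken from your application:
import pyautogui

# Move the mouse to an absolute screen position and click there.
pyautogui.moveTo(100, 200, duration=0.5)
pyautogui.click()

# Type text and press Enter, e.g. to confirm a save dialog.
pyautogui.write('report.xlsx', interval=0.05)
pyautogui.press('enter')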

Related

Is there a way to take control of an open Edge or Chrome browser window using Excel VBA?

I'm attempting to use VBA to transfer data from an Excel sheet to a number of input boxes on a webpage. Due to a couple of considerations specific to this site it would be much, much easier if I could simply log in to the webpage manually then use Selenium (or whatever) to take control of the browser window and execute the data entry tasks. My impression is that this was quite doable with Internet Explorer, but that's been phased out, and it looks like there's not a very robust method for doing this with Google Chrome. Is there a way to do this with Edge?
I managed to get half of the data entered using the AppActivate statement and a series of SendKeys commands - this was with Chrome - but that approach won't work for the rest of it, since AppActivate doesn't seem to really commandeer the browser window in a way that allows Selenium to click buttons etc.
Yes, you can control the browser with SeleniumBasic. You can follow the instructions on how to get it working with VBA here: https://simpleexcelvba.com/google-chrome-automation-with-selenium-basic/#:~:text=Selenium%20Basic%20is%20a%20Selenium,by%20double%2Dclicking%20on%20it.

Pyautogui image recognition returns None no matter what I do

I have been trying to get pyautogui to graphically locate a folder on my desktop, but it fails no matter what I do:
This is my code (int_auto.py)
import pyautogui
button = pyautogui.locateCenterOnScreen('/Users/cadellteng/Desktop/Test-ground/randompy/pyxell2.png')
print(button)
I literally read through every SO thread I could find on this topic and some suggested that the photo needs to be lossless, so I used the cmd+shift+4 command on Mac to achieve this. A screenshot of my desktop and the reference image I used is attached here.
Note: This photo was converted to jpg because SO only allows photos up to 2MiB to be uploaded
Other things I tried are:
1. using locateAllOnScreen
2. grayscale = True
3. uninstall pyautogui and reinstalling pyautogui
But no matter what I did, I couldn't seem to get the outcome I want. If it helps, I use a dual-screen setup and my code editor (VS Code) is on the secondary screen. What you see here as my desktop is my primary screen.
When the program is running nothing is blocking the folder.
There is also a warning on my terminal which I'm not sure if it's going to be useful:
/Users/cadellteng/Desktop/Test-ground/randompy/env/lib/python3.8/site-packages/rubicon/objc/ctypes_patch.py:21:
UserWarning: rubicon.objc.ctypes_patch has only been tested with Python 3.4 through 3.7.
You are using Python 3.8.2. Most likely things will work properly, but you may experience crashes if Python's internals have changed significantly. warnings.warn(
Do let me know if you require additional information.
EDIT June 01, 2020: So I thought that maybe pyautogui was not able to detect the folder because there are multiple folders that all look the same. So I got this image off the web and tried to search for it with the image open on my desktop and nothing else blocking it. But even then it was not able to find it.
I'm not sure if I am doing something wrong here, but with each try I am becoming more convinced that this API is broken.
The size of the picture might be a problem. When I use locateOnScreen to find an element in a browser and zoom out by, for example, 10%, it will not locate it on screen even if it looks almost the same.
You have a MacBook, so I think it's because of the pixel density of the screen and the resolution pyautogui detects. You can check with:
image = pyautogui.locateCenterOnScreen('image.png')
print(image)
If you see coordinates above 2000, it's probably because of the resolution. If you want to click the image, I suggest a workaround like:
pyautogui.click(image.x/2, image.y/2, clicks=1, interval=10, button='left')
Also try cropping the images as small as possible!
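To make that concrete, here is a rough sketch of the Retina workaround described above; the image file name is a placeholder, and the factor of 2 assumes the usual 2x pixel density of a MacBook screen:
import pyautogui

# On a Retina display the screenshot is captured at twice the logical
# resolution, so the coordinates returned may need to be halved.
point = pyautogui.locateCenterOnScreen('image.png', grayscale=True)
if point is not None:
    pyautogui.click(point.x // 2, point.y // 2, clicks=1, button='left')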

GTK+, GLADE, PYTHON 3.6 Windows

I am an electrical engineer and completely new to programming and coding.
I am working as an estimation engineer, where I do the same estimation again and again in Excel.
Likewise, in the design stage I am doing the same work again and again.
I thought I would create an application that automatically does the estimation based on my input, and at the design stage takes input from my estimation sheet and generates the drawing automatically.
(I do the estimation in Excel and prepare the drawings in AutoCAD.)
From Google, I found the following:
To write a program I need to know a language, so I selected Python version 3.6.1.
To create a standalone Microsoft Windows installer I selected cx_Freeze, which converts my code to .exe files.
To create a user interface I selected Glade.
But I don't know how to install GTK Builder to link Python code and Glade files.
Hopefully you get the point of what I am looking for.
I am using Windows 10, 64-bit.
Thanks in advance to all.
All the information you need on how to install GTK+ and Glade on Windows is on the GTK+ Website.
However, if you're a total beginner, then I suggest you start small, as the kind of work you're describing seems like a lot for someone who has never written a single line of code.
As suggested by @liberforce, install GTK+ and Glade on Windows from the GTK+ website. This way you will have an environment (MINGW64) inside which you can practice.
Install Python from their download site.
Check also these instructions for how to use it.
The key points (at least for me) are:
Copy the hello.py script you created to C:\msys64\home\<username>
In the mingw32 terminal execute python3 hello.py - a window should appear.
After that, if you want to make an executable, follow these instructions. Try their example for gedit to get some ideas on how to create one and how PKGBUILD works (i.e. follow the instructions for gedit to see what's happening).
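As a rough idea of what such a hello.py can look like once GTK+ and PyGObject are installed, here is a minimal sketch that loads a Glade file with Gtk.Builder; the file name hello.glade and the widget id main_window are assumptions and must match whatever you set in Glade:
import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk

# Load the interface designed in Glade and fetch the top-level window.
builder = Gtk.Builder()
builder.add_from_file('hello.glade')        # assumed file name
window = builder.get_object('main_window')  # assumed widget id set in Glade

window.connect('destroy', Gtk.main_quit)
window.show_all()
Gtk.main()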

PyQt: Simulate real mouse click on "desktop"?

Is there any way to simulate a real mouse click (press + release) at an absolute position on the current desktop with PyQt, without another external library like PyUserInput?
I searched around and just found this and this. But if I don't misunderstand, they seem to send their click event to the Qt application itself, instead of the desktop?
Use PyQt's QTest, together with unittest or such. See also for example http://www.voom.net/pyqt-qtest-example.
If this is not for unit testing, look at sendEvent and postEvent (See http://doc.qt.digia.com/qq/qq11-events.html#syntheticevents). There are some limitations to Qt's mechanism for generating "artificial" events but based on what you describe, it is likely to work. If you have tried those and it doesn't work, please post the code you tried.
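For the unit-testing route, a minimal sketch (assuming PyQt5) looks roughly like this; note that QTest clicks a widget inside your own Qt application, not an arbitrary point on the desktop:
import sys
from PyQt5.QtCore import Qt
from PyQt5.QtTest import QTest
from PyQt5.QtWidgets import QApplication, QPushButton

app = QApplication(sys.argv)
button = QPushButton('Click me')
button.clicked.connect(lambda: print('clicked'))
button.show()

# Simulate a left-button press + release on the widget itself.
QTest.mouseClick(button, Qt.LeftButton)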

Tool to identify items in Qt interface

I'm trying to use LDTP to automate Skype on Ubuntu, which has a GUI written with Qt. LDTP requires that I know the names of the frames I'm interacting with and their objects. I don't have the Skype source code, but I was hoping there was some tool for extracting information about a Qt window, or one that might at least confirm for me that automation is impossible for the window I'm trying to play with.
The reason I think this exists in the first place is that AutoIT had a similar application on Windows.
To find out if a window can be automated using LDTP, you can use the getapplist() and getwindowlist() functions as shown in the tutorial, which can be found under doc in the GitHub repository. To list the objects of a window, you can use getobjectlist().
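For example, with the LDTP Python client you can list what is automatable roughly like this; the 'frmSkype' window name is only a guess, so use whatever getwindowlist() actually reports:
import ldtp

# List the applications and windows visible on the accessibility bus.
print(ldtp.getapplist())
print(ldtp.getwindowlist())

# Once you know the window name, list the objects inside it.
print(ldtp.getobjectlist('frmSkype'))  # window name is a guess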
