I'm writing a script to automate some tasks at my job. However, I need to make my script portable and test it on different screen resolutions.
So far I've tried multiplying my coordinates by the ratio between the old and new resolutions, but this doesn't work properly.
Do you know how I can convert the X, Y coordinates of my mouse clicks so they work on different resolutions?
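For reference, here is a minimal sketch of the scaling approach described above, assuming PyAutoGUI performs the clicks and that the coordinates were recorded at a 1920x1080 reference resolution (both are assumptions, not details from the question):

import pyautogui

REF_WIDTH, REF_HEIGHT = 1920, 1080        # resolution the coordinates were recorded on (assumed)
cur_width, cur_height = pyautogui.size()  # resolution of the machine running the script

def scaled_click(x, y):
    # Rescale the recorded point to the current resolution, then click it.
    pyautogui.click(round(x * cur_width / REF_WIDTH), round(y * cur_height / REF_HEIGHT))

Note that this only helps when the UI really scales linearly with the resolution, which is often not the case; hence the image-matching suggestion below.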
Quick question: are you trying to get it to click on certain buttons (i.e. buttons that look the same on every computer you plug it into)? And by portable, do you mean on a USB thumb drive?
You may be able to take an image of the button (i.e. by cropping a screenshot) and pass it to the OpenCV module, which has an image-within-image (template) searching ability. Pass that image along with a screenshot (using pyautogui.screenshot()) and it will return the (x, y) coordinates of the button; pass those to pyautogui.moveTo(x, y) and pyautogui.click(), and it might work. You might have to describe the action you are trying to get PyAutoGUI to do a little better.
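Here is a rough sketch of that idea, assuming opencv-python, NumPy and PyAutoGUI are installed, and that 'button.png' is a cropped screenshot of the target button (the filename is just an example):

import cv2
import numpy as np
import pyautogui

def click_button(template_path, threshold=0.8):
    # Grab the whole screen and convert it to an OpenCV (BGR) image.
    screen = cv2.cvtColor(np.array(pyautogui.screenshot()), cv2.COLOR_RGB2BGR)
    template = cv2.imread(template_path)

    # Search for the button image inside the screenshot.
    result = cv2.matchTemplate(screen, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < threshold:
        return False  # no sufficiently good match on screen

    # Click the centre of the best match.
    h, w = template.shape[:2]
    pyautogui.moveTo(max_loc[0] + w // 2, max_loc[1] + h // 2)
    pyautogui.click()
    return True

click_button('button.png')

PyAutoGUI also exposes pyautogui.locateCenterOnScreen('button.png'), which does essentially the same search for you.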
I just got started with Godot yesterday and I'm starting a game. I drew a few spritesheets for it. It seems much more efficient to pack all of the frames of an animation into a single image file, right?
Anyway, in Godot I have an AnimatedSprite, which of course has a SpriteFrames property (or whatever it's called). I want to split my spritesheet up into multiple images so that I can use each image as a separate frame in the animation, but as far as I can see Godot provides no such feature. Is this the case?
I've been searching for an answer on the web for a while now, and I can't find anything relevant.
I'd be very surprised if I can't do this in Godot, since I can do it in just about every other game engine I've seen.
Thanks!
(Just to clarify, I want to split a spritesheet into multiple textures, programmatically or otherwise, within Godot.)
Click New SpriteFrames in the Frames property menu of the AnimatedSprite node. Then click on the just-created SpriteFrames next to the name of the Frames property; the Animation Frames window should appear.
Click the Add Frames from a Sprite Sheet button, select your sprite sheet file, set the grid sizes, and finally select the individual frames from that sprite sheet.
(This works for me in Godot v3.2.2)
As the documentation says: "Sprite node that can use multiple textures for animation."
Source: http://docs.godotengine.org/en/3.0/classes/class_animatedsprite.html
What you are looking for is a normal Sprite (2D) with its region set for each frame by an AnimationPlayer.
Example: https://www.youtube.com/watch?v=IGHcscKpA7Y
If you want to do it all programmatically, you just have to use a Sprite (2D) and then:
func _ready():
    # Display only a sub-rectangle of the texture (one frame of the sheet).
    set_region(true)
    set_region_rect(Rect2(positionx, positiony, width, height))
But I guess using an AnimationPlayer is the better option.
This is untested because I should be sleeping right now, but it should work.
Can someone please tell me why I'm getting a green square on my Phaser game canvas as below?
Without seeing any code I can tell you that you'll see that when an image can't be loaded by the Phaser framework.
Open developer tools in your browser of choice and refresh after opening the Network tab. You should see a 404 for one of your images.
I believe if you look at the standard browser console you may also see messages about the name of the asset that it failed to load.
I had a slightly different case. Images were being loaded in the init function, which apparently doesn't work. I renamed that function to preload and suddenly the green squares were gone and the images showed up.
My case was also a little different; it appeared that all my image resources were loaded, but I think I was trying to create the sprites a little too quickly. In other words, I was trying to create and add sprites to my scene before the scene was properly loaded.
I'm going to try waiting until scene.scene.isActive(key) returns true... maybe that will solve my issue. Failing that, I might just put in some sort of sleep/await promise of 1 second or something (not ideal, but it might work).
ALSO NOTE: part of the reason I was able to create my sprites too quickly was that I was doing so in my own custom function, not the typical create() function. Actually, the best solution is probably to create my sprites in the create() function and not in a custom function.
Just started working with this awesome external but have a couple of questions.
When the control is invoked, is it always the top layer, or can I place a transparent image on top of it so I can frame the control nicely?
Also, my testing seems to read most barcodes, but when it comes down to reading the barcodes on hard drives, the control does not want to decode those... Is the barcode pattern too dense?
I am very impressed thus far with the ease of use of your externals. Makes me want to code more for mobile devices!
An overlaying transparent image is not possible, as far as I know.
But couldn't you use
command mergZXingControlSetRect pLeft,pTop,pRight,pBottom
to define the rect of that scanner after creation
or
command mergZXingControlCreate pLeft,pTop,pRight,pBottom
to create the scanner control in the specified rect.
Set the rect smaller than the width and the height of the screen.
You could then use an underlying image, displayed outside of the scanner rect, to show a frame around the scanner control. I did not test it myself, but I would assume that this should work.
Unfortunately the native controls in externals, and the ones the engine provides, are added as views on top of the LiveCode view. That means you can't intermingle LiveCode controls with them. One thing that some users have done is add a web view with a transparent background and load a PNG image into it. If you create the barcode view first and the web view second, then the web view will be on top.
I want to get a very basic interaction with an SVG loaded through Athens in Pharo using Morphic. This example shows what I'm looking for. I have used
(ASVGMorph fromFile: 'lion.svg') drawOn: Display getCanvas
but clicking the SVG makes the picture disappear. However, all the examples I have seen were using a web browser. Is this possible using Athens? Is there any other work in this area?
That's because you are drawing it on the display canvas, which is refreshed all the time... so it's natural that you lose it.
What you need to do is:
(ASVGMorph fromFile: 'lion.svg') openInWorld.
or better, you probably want to put it in a window:
(ASVGMorph fromFile: 'lion.svg') openInWindow.
In the end, you will probably want it inside some other morph that you create, but debugging either of the solutions above will show you how to proceed :)
Yes, as Esteban pointed out, to keep the morph on the desktop you should add it to the world, i.e. use #openInWorld or #openInWindow.
ASVGMorph is very basic, however, and not intended to serve all possible use cases.
For more advanced uses, it is preferable to use an ASVGRoot instance and draw it in your own morph, or compose it with other drawings.
I'd like to write a Linux screen magnifier that's customized to my liking. Ideally, the magnified window would be a square about 150 pixels wide that follows the mouse cursor wherever it goes.
Is it possible to do this in X11? Would it be easier to have an application window that follows the mouse around, or would it be better (or possible) to forget about the window altogether and just make the mouse pointer a 150x150 square that magnifies whatever's underneath?
Look at the source to xeyes?
This actually already exists; it's called Xmag (do a Google search for additional info). You might want to check out its source code if you want to know how it works.
EDIT: it looks like I misread your question a little bit... if you want a magnified square to follow the mouse pointer around, I suppose it should be possible, but I don't know the technical details of how you'd do it. Regardless, Xmag is probably the place to start.
I am unsure if this can run as its own app or would have to be integrated into your window manager. Either way, you would need libx11 (might have a different name from distro to distro). Also, I would suggest taking a look at swarp. I know this is not even close to what you are talking about, but the source code is only 35 lines and it shows what can be done with libx11.
I would personally make that a frameless window that always stays on top, with a 1px hole in the middle. The events that the user generates (mouse clicks, keypresses, whatever) are passed to the window below.
And when the user moves the cursor, your window should see it and you just move the window over a bit. As for the magnifying part, well, that is left as an exercise for the reader (because I do not know how to do that as of yet ;-)).
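For what it's worth, here is a rough sketch of that follow-the-cursor idea in Python, using tkinter, Pillow and PyAutoGUI instead of raw libx11 (all of these library choices are assumptions, not something from this thread); the window is offset from the pointer so it does not end up capturing itself:

import tkinter as tk
import pyautogui
from PIL import Image, ImageTk

SIZE = 150                 # side of the captured square, in pixels
ZOOM = 2                   # magnification factor
OFFSET = SIZE // 2 + 10    # keep the window outside the captured square

root = tk.Tk()
root.overrideredirect(True)         # frameless window
root.attributes('-topmost', True)   # always on top
label = tk.Label(root)
label.pack()

def refresh():
    x, y = pyautogui.position()
    # Grab the square under the cursor (clamped to the screen origin) and scale it up.
    left, top = max(0, x - SIZE // 2), max(0, y - SIZE // 2)
    shot = pyautogui.screenshot(region=(left, top, SIZE, SIZE))
    img = ImageTk.PhotoImage(shot.resize((SIZE * ZOOM, SIZE * ZOOM), Image.NEAREST))
    label.configure(image=img)
    label.image = img               # keep a reference so the image is not garbage collected
    root.geometry(f'+{x + OFFSET}+{y + OFFSET}')
    root.after(30, refresh)         # roughly 30 updates per second

refresh()
root.mainloop()

Note that on Linux pyautogui.screenshot() typically depends on an external screenshot tool such as scrot being installed, and near the screen edges the magnifier can still overlap its own capture area.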
TeXworks comes with such a feature for inspecting the PDF resulting from typesetting a LaTeX source. You can also choose between a square and a circular magnifier. See https://www.tug.org/texworks/ for access to the code, which can serve as a launchpad.