Godot pixelated image

For my Godot 2D project, I created a 640×640 PNG using GIMP.
I imported those PNGs into Godot as Texture (2D).
By setting the scale to 0.1, I resized those images to 64×64 in the Godot scene.
When I instance this image in my main scene, I get this disgusting, pixelated result.
Edit: Don't be confused by the rotated red wings; I rotated them at runtime. That's not part of the question.
My window size is 1270×780.
The stretch mode is viewport.
I tried changing the import settings, etc.
Is it not possible to have a crisp image at these sizes?

Disclaimer: I haven’t bothered to fire up Godot to reproduce your problem.
I suspect you are shooting yourself in the foot by that scale 0.1 bit. Remember, every time you resample (scale) an image there is loss.
There are three things to do:
1. Prepare your image(s) to have the same size (resolution) as your intended target display. In other words, if your image is going to display at 64×64 pixels, then your source image should be 64×64 pixels.
2. When importing images, make sure that Filter is set to ☑ On. If your image contains alpha, you may also wish to check the Fix Alpha Border flag.
3. Perform as few resampling operations as possible to display the image. In your particular case, you are resampling it to a tenth of its size before again resampling it up to the displayed size. Don't do that. (This is probably the main cause of your problem.) Just make your sprite have the natural size of the image, and scale the sprite only if necessary.
It is also possible that you are applying some other filter or that your renderer has a bad resampler attached to it, but your problem description does not lead me to think either of these are likely.

A warning ahead: I'm not into Godot at all, but I have some experience with image resizing.
Your problem looks entirely related to plain image resizing. If you scale an image down in one go by a factor smaller than 0.5 (that is, to less than half its original size), a simple filter skips over many of the source pixels, and you get exactly this kind of ugly, aliased small image.
To get a nice and smooth result, you have to resize it in multiple steps:
Always reduce the image size by half, until the next necessary step is larger than scaling by 0.5.
For your case (640x640 to 64x64) it would need these steps:
640 * 0.5 => 320
320 * 0.5 => 160
160 * 0.5 => 80
80 * 0.8 => 64
You can either start with a much smaller image (if you never need it at that large size in your code), add multiple pre-scaled resolutions to your resources, or precompute the halved steps before you start using them and then just pick the right image to scale down by the final factor.
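Outside of Godot, the same stepwise idea can be sketched with Pillow in Python; this is only an illustration of the halving scheme above, with hypothetical file names (a high-quality filter such as LANCZOS can also do the 640 → 64 reduction in a single step):
from PIL import Image

def downscale_stepwise(img, target):
    # Repeatedly halve the image until the next halving would overshoot the
    # target, then do one final resize to the exact target size.
    width, height = img.size
    target_w, target_h = target
    while width // 2 >= target_w and height // 2 >= target_h:
        width, height = width // 2, height // 2
        img = img.resize((width, height), Image.BILINEAR)
    return img.resize(target, Image.BILINEAR)

# 640x640 -> 320 -> 160 -> 80 -> 64, as listed above
source = Image.open("sprite_640.png")
downscale_stepwise(source, (64, 64)).save("sprite_64.png")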

I'm not very experienced with Godot and don't have it open right now, but I imagine you shouldn't scale the image by 0.1, because that causes a loss in image quality.

Related

Matplotlib: consistent image size for publications

I want to make publication-quality plots with Matplotlib. The biggest problem I am having right now is tuning the image and font sizes.
When I create a figure with several panels, I usually set a bigger figsize. For example, these three panels are created with figsize=(12, 6 / 1.618) (pasted from Jupyter Lab; I always save to PDF files).
The lines can be seen perfectly, there is a lot of space, and the figure looks nice. The problem is that in my publication this has to be a column-wide figure, so it has to be scaled down. A column is around 3.5 inches wide. When the image is resized, it still looks good, but the axis labels become tiny and unreadable. Of course, I could simply keep increasing the font sizes until I find a good size, but I would like a workflow that lets me work with the lengths and sizes I actually have to use.
When I set the figure size to figsize=(columnw, 0.5*columnw / 1.618) (so the aspect ratio is the same as before) and set the font size to around 10 (the font size of my publication), this is what I get:
Now the fonts are exactly the size I want them to be and the figure does not have to be rescaled, but the contents of the graph seem to be compressed into a very, very tiny space. It just looks... ugly.
My question is: why does using a big figsize with extremely large font sizes give a beautiful, readable figure when scaled down, while the a priori correct figsize without rescaling looks ugly? How could I work with real figure sizes from the very beginning and obtain something nice?
I have read some questions about image size with Matplotlib on this site, as well as a couple of blog posts, but I haven't found any information about this problem.
Thank you in advance.
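A minimal sketch of the setup the question describes, assuming a column width of about 3.5 inches and a 10 pt body font (both values taken from the question); the figure is created directly at its final printed size, so it never needs to be rescaled, and the output file name is hypothetical:
import numpy as np
import matplotlib.pyplot as plt

column_width = 3.5                 # assumed column width in inches
golden = 1.618

plt.rcParams["font.size"] = 10     # match the publication's body font size

fig, ax = plt.subplots(figsize=(column_width, 0.5 * column_width / golden))

x = np.linspace(0, 2 * np.pi, 200)
ax.plot(x, np.sin(x), label="signal")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.legend(frameon=False)

fig.tight_layout()                 # reclaim some of the scarce space
fig.savefig("column_figure.pdf")   # saved at final size, no rescaling later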

opencv2: Circle detection not detecting the obvious ones

Problem
I'm trying to use opencv2 to detect PlayStation Move Motion Controllers in still images. In an attempt to increase the contrast between the orbs and the backgrounds, I decided to modify the input image: for each channel, the brightness is automatically rescaled from the image's mean level to 96 above it, and when converting to grayscale I take the maximum channel value instead of the default transform, since some orbs are saturated but not "bright".
However, my best attempts at adjusting the parameters don't seem to work well; it detects circles that aren't there rather than the obvious ones.
What can I do to improve the accuracy of the detection? What other improvements or algorithms do you think I could use?
Samples
In order of best to worst:
2 Wands, 1 Wand detected (showing all 2 detected circles)
2 Wands, 1 Wand detected with many nonexistent circles (showing top 4 circles)
1 Wand (against a dark background), 6 total circles, the lowest-ranked of which is the correct one (showing all 6 circles)
1 Wand (against a dark background), 44 total circles detected, none of which are that Wand (showing all 44 circles)
I am using this function call:
cv2.HoughCircles(img_gray, cv2.HOUGH_GRADIENT,
                 dp=1, minDist=24, param1=90, param2=25,
                 minRadius=2, maxRadius=48)
All images are resized and cropped to 640x480 (the resolution of the PS3 Eye). No blur is performed.
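For reference, here is one way to read the preprocessing described above; the mean-to-mean+96 window per channel and the max-channel grayscale are my interpretation of the question, and the file name is hypothetical:
import cv2
import numpy as np

def preprocess(bgr):
    # Stretch each channel so that the range [mean, mean + 96] maps to [0, 255].
    channels = []
    for channel in cv2.split(bgr.astype(np.float32)):
        stretched = (channel - channel.mean()) * (255.0 / 96.0)
        channels.append(np.clip(stretched, 0, 255))
    # "Grayscale" as the per-pixel maximum over the stretched channels, so that
    # saturated-but-not-bright orbs still end up bright.
    return np.max(np.stack(channels, axis=-1), axis=-1).astype(np.uint8)

img_gray = preprocess(cv2.imread("wands.jpg"))   # then fed into the HoughCircles call above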
I think Hough circles are the wrong approach for you, as you are not really looking for circles; you are looking for circular areas with strong intensity. Use e.g. blob detection instead. I've linked a guide:
https://www.learnopencv.com/blob-detection-using-opencv-python-c/
For the blob detection, you need to set the parameters so that it picks up exactly this kind of high-intensity circular area.
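A minimal sketch of that suggestion with OpenCV's SimpleBlobDetector; the thresholds and size limits below are placeholders that would have to be tuned for the Move orbs:
import cv2

params = cv2.SimpleBlobDetector_Params()

# Look for bright blobs (by default the detector looks for dark ones).
params.filterByColor = True
params.blobColor = 255

# Keep only roughly circular, reasonably convex shapes.
params.filterByCircularity = True
params.minCircularity = 0.7
params.filterByConvexity = True
params.minConvexity = 0.85

# Placeholder size range in pixels^2; tune for 640x480 input.
params.filterByArea = True
params.minArea = 20
params.maxArea = 8000

detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(img_gray)   # img_gray as prepared in the question

annotated = cv2.drawKeypoints(img_gray, keypoints, None, (0, 0, 255),
                              cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)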
As the other user said, Hough circles aren't the best approach here, because Hough circles look for perfect circles only, whereas your target is "circular" but not a circle (due to motion blur, light bleed/reflection, noise, etc.).
I suggest converting the image to HSV and then filtering by hue/colour and intensity to get a binary threshold, instead of using grayscale directly (that will help remove background and noise and limit the search area).
Then, using findContours() (faster than blob detection), check for contours with high circularity, an expected size/area range, and maybe even solidity:
# Shape metrics for one contour returned by cv2.findContours():
area = cv2.contourArea(contour)
perimeter = cv2.arcLength(contour, True)
circularity = 4 * np.pi * area / (perimeter ** 2)   # 1.0 for a perfect circle
solidity = area / cv2.contourArea(cv2.convexHull(contour))
Your biggest problem will be the orb contour merging with the background due to low contrast, so maybe an adaptive threshold could help.
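A rough sketch of that pipeline, reusing the metrics above; the HSV range and the size/shape limits are placeholders that would need tuning for the actual orb colours (OpenCV 4.x findContours signature):
import cv2
import numpy as np

img = cv2.imread("wands.jpg")                       # hypothetical input file
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Placeholder range: bright, saturated pixels of any hue.
mask = cv2.inRange(hsv, (0, 80, 150), (179, 255, 255))

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

candidates = []
for contour in contours:
    area = cv2.contourArea(contour)
    perimeter = cv2.arcLength(contour, True)
    if perimeter == 0 or not (20 < area < 8000):    # placeholder area limits
        continue
    circularity = 4 * np.pi * area / (perimeter ** 2)
    solidity = area / cv2.contourArea(cv2.convexHull(contour))
    if circularity > 0.7 and solidity > 0.9:
        candidates.append(contour)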

How to make image fit screen in Godot

I am new to the Godot engine, and I am trying to make a mobile game (portrait mode only). I would like to make the background image fit the screen size. How do I do that? Do I have to import images at specific sizes and provide them all for various screens? If I import an image that is too big, it just cuts off the parts that don't fit the screen.
Also, while developing, which width and height values should I use for this purpose?
With Godot 3, I am able to set the size and position of a sprite and other UI elements using a script. I am not using the stretch mode for the display window.
Here is how you can easily make the sprite match the viewport size:
var viewportWidth = get_viewport().size.x
var viewportHeight = get_viewport().size.y
# Scale factor that makes the texture span the full viewport width
var scale = viewportWidth / $Sprite.texture.get_size().x
# Optional: center the sprite; only needed if the sprite's Offset > Centered checkbox is set
$Sprite.set_position(Vector2(viewportWidth / 2, viewportHeight / 2))
# Use the same scale value horizontally and vertically to maintain the aspect ratio.
# If you don't want to maintain the aspect ratio, simply set different
# scales along x and y.
$Sprite.set_scale(Vector2(scale, scale))
Also, for targeting mobile devices, I would suggest importing a PNG of size 1080×1920 (you said portrait).
Working with different screen sizes is always a bit complicated, especially for mobile games, because of the different screen sizes, resolutions, and aspect ratios.
The easiest way I can think of is scaling the viewport. Keep in mind that your root node is always a viewport. In Godot you can stretch the viewport in the project settings (you have to enable the stretch mode option). You can find a nice little tutorial here.
However, viewport stretching might result in image distortion or black bars at the edges.
Another elegant approach would be to create an image that is larger than your viewport and define an area that has to be shown on every device, no matter what the resolution is. Here is someone showing what I mean.
I can't really answer your second question about the optimal width and height, but I would look up the most typical mobile phone resolutions and aspect ratios and go with those. In the end you should probably start with the width and height ratio of the phone you want to use for testing and debugging.
Hope that helps.

Detecting center and area of shapes in an image

I am working with the GD library, and I'm looking for a way to detect the pixel nearest to the center of each shape, as well as the total area covered by each shape, in a monochrome black-and-white image.
I'm having difficulty coming up with an efficient algorithm to do this. If you have done something similar to this in the past, I'd be grateful for any solution that would help.
Check out the binary image library
Essentially, Otsu-threshold to separate the foreground from the background, then label connected components. That particular image looks very clean, but you might need morphological operations to clean it up a bit and get rid of small holes and other artifacts.
Then you have the area trivially (count the pixels in each component) or almost as trivially (use the weighted area function that penalises edge pixels). The centre is just the mean of the component's pixel coordinates.
http://malcolmmclean.github.io/binaryimagelibrary/
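The question is about the GD library, but as an illustration of the same idea (Otsu threshold, then connected components, then per-component area and centroid), here is a sketch using OpenCV in Python; the file name is hypothetical:
import cv2

img = cv2.imread("dots.png", cv2.IMREAD_GRAYSCALE)

# Otsu threshold: foreground becomes 255, background 0
# (use THRESH_BINARY_INV instead if the shapes are dark on a light background).
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# stats holds the per-component bounding box and pixel count,
# centroids holds the mean (x, y) of each component's pixels.
count, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)

for i in range(1, count):            # label 0 is the background
    area = stats[i, cv2.CC_STAT_AREA]
    cx, cy = centroids[i]
    print(f"shape {i}: area={area}, centre=({cx:.1f}, {cy:.1f})")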
@MalcolmMcLean is right, but there are remaining difficulties (if you are after maximum accuracy).
If you threshold with Otsu, there are a few pairs of "kissing" dots that will form a single blob under connected-component analysis.
In addition, Otsu thresholding will discard some of the partially filled edge pixels, so the weighted averages will be inaccurate. A cure would be to increase the threshold (up to 254 is possible), but that worsens the problem of the kissing dots.
A workaround is to keep a low threshold and dilate the blobs individually to obtain masks that cover all edge pixels. Even so, slight inaccuracies will result in the vicinity of the kissing points.
Blob splitting by the watershed transform is also possible, but more care is required to handle the shared pixels. I doubt that a perfect solution is possible.
An alternative is the use of subpixel edge detection and least-squares circle fitting (after blob detection with a very low threshold to separate the dots). By avoiding the edge pixels common to two circles, you can probably achieve excellent results.
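For the circle-fitting part, here is a minimal sketch of an algebraic (Kåsa-style) least-squares circle fit to a set of edge points; the subpixel edge detection itself is not shown, and the example data is synthetic:
import numpy as np

def fit_circle(x, y):
    # Solve x^2 + y^2 + a*x + b*y + c = 0 for (a, b, c) in the least-squares sense.
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x ** 2 + y ** 2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    radius = np.sqrt(cx ** 2 + cy ** 2 - c)
    return cx, cy, radius

# Noisy points on a circle of radius 20 centred at (50, 40)
t = np.linspace(0, 2 * np.pi, 100)
x = 50 + 20 * np.cos(t) + np.random.normal(0, 0.2, t.size)
y = 40 + 20 * np.sin(t) + np.random.normal(0, 0.2, t.size)
print(fit_circle(x, y))   # approximately (50, 40, 20)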

Scaling an image up in Corona SDK without it becoming fuzzy

I am working on a classic RPG that requires a pixelated style of graphics. I want to do this by making a small image and scaling it up. However, when I do this, it gets fuzzy. Is there any way to scale it while keeping a crisp edge for every pixel, or do I just need to make a bigger image?
You cannot scale an image up and expect it to keep a crisp look if it wasn't made at a big enough resolution in the first place. In your case you would have to make a bigger image and scale it down to produce the small one.
If you do not use the large image all the time, however, you should consider having two versions of the same image (one small, one large) for optimization's sake.
