My friend and I are making a game, and when I upload his 2D images into Unity, the colors are quite under-saturated. Is it possible to remedy this?
The filter is Point and the format is Compressed. I tried 16 bits and True Color, but neither changed anything.
I am attempting to make 4-bit, 16-color video with an adaptive color palette. I can convert each frame into a 16-color format, but I can't find an algorithm to automatically generate a 16-color palette for each frame.
I plan to change the color palette every frame to make the most of the 16 colors, similar to Photoshop's algorithm for generating the best 16 or 256 colors for a GIF. Can someone show me, or point me towards, an algorithm I can use to generate the adaptive palettes?
I am coding the project in Java, since importing images and libraries is easier there, but I can also code in C and C++. My goal is to play videos on a TI-84 CE, which will use a 4-bit, 16-color palette. The video linked below shows what the quality would look like.
Here is what my program can generate so far with a fixed color palette. The fixed palette does not match the source video well; most colors become grey: https://media.giphy.com/media/m20vIaWvnYoCaqqfLL/giphy.gif
I know dithering is an option, but I will implement it after I can get adaptive palettes to work.
I tried searching the internet for color-palette-generation algorithms, but all the resources lead to Photoshop tutorials instead of coding algorithms.
I also tried generating random colors for the palettes, but the results were not much better. I don't want to hand-pick the palettes, since that's too tedious.
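One classic option is median-cut quantization, which is the approach many GIF encoders use to build adaptive palettes. Below is a minimal sketch in Python (NumPy) to show the idea; the logic ports directly to Java. The input and output formats are my own assumptions: pixels is an (N, 3) array of RGB samples from one frame, and the result is a palette of up to 16 colors.

import numpy as np

def median_cut(pixels, n_colors=16):
    # pixels: (N, 3) array of RGB samples from one frame.
    boxes = [pixels.astype(np.int32)]
    while len(boxes) < n_colors:
        # Pick the splittable box with the widest range on any channel.
        idx = max((i for i, b in enumerate(boxes) if len(b) > 1),
                  key=lambda i: np.ptp(boxes[i], axis=0).max(),
                  default=None)
        if idx is None:
            break
        box = boxes.pop(idx)
        # Sort along the widest channel and cut at the median.
        channel = np.ptp(box, axis=0).argmax()
        box = box[box[:, channel].argsort()]
        mid = len(box) // 2
        boxes += [box[:mid], box[mid:]]
    # Each palette entry is the mean color of its box.
    return np.array([b.mean(axis=0) for b in boxes], dtype=np.uint8)

You would then remap each frame's pixels to their nearest palette entry; running this once per frame gives the adaptive palette you describe.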
Problem
I'm trying to use OpenCV (cv2) to detect PlayStation Move motion controllers in still images. To increase the contrast between the orbs and the backgrounds, I modify the input image so that, for each channel, the brightness range from the image's mean level to 96 above it is stretched to full scale; then, when converting to grayscale, I take the maximum channel value instead of the default transform, since some orbs are saturated but not "bright".
However, my best attempts at tuning the parameters do not work well: the detector finds circles that aren't there rather than the obvious ones.
What can I do to improve the accuracy of the detection? What other improvements or algorithms do you think I could use?
Samples
In order of best to worst:
2 Wands, 1 Wand detected (showing both detected circles)
2 Wands, 1 Wand detected with many nonexistent circles (showing top 4 circles)
1 Wand (against a dark background), 6 total circles, the lowest-ranked of which is the correct one (showing all 6 circles)
1 Wand (against a dark background), 44 total circles detected, none of which are that Wand (showing all 44 circles)
I am using this function call:
circles = cv2.HoughCircles(img_gray, cv2.HOUGH_GRADIENT,
                           dp=1, minDist=24, param1=90, param2=25,
                           minRadius=2, maxRadius=48)
All images are resized and cropped to 640x480 (the resolution of the PS3 Eye). No blur is performed.
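For reference, here is a hedged reconstruction of the preprocessing described above (the exact scaling used may differ): each channel is stretched so the range [mean, mean + 96] maps to [0, 255], and the "grayscale" image is the per-pixel channel maximum.

import cv2
import numpy as np

img = cv2.imread("frame.png").astype(np.float32)
for c in range(3):
    lo = img[..., c].mean()
    # Map [mean, mean + 96] to [0, 255] for this channel.
    img[..., c] = (img[..., c] - lo) * (255.0 / 96.0)
# Take the channel maximum instead of the usual weighted sum.
img_gray = np.clip(img.max(axis=2), 0, 255).astype(np.uint8)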
I think Hough circles are the wrong approach here, as you are not really looking for circles; you are looking for circular areas of strong intensity. Use blob detection instead. Here is a guide:
https://www.learnopencv.com/blob-detection-using-opencv-python-c/
In the blob detection, you need to set the parameters so that only high-intensity, circular areas pass the filters, roughly as in the sketch below.
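For example, something along these lines with OpenCV's SimpleBlobDetector; all the parameter values here are rough starting points you would need to tune:

import cv2

params = cv2.SimpleBlobDetector_Params()
params.filterByColor = True
params.blobColor = 255             # bright blobs (the detector defaults to dark ones)
params.filterByArea = True
params.minArea = 20                # tune to the expected orb sizes at 640x480
params.maxArea = 7000
params.filterByCircularity = True
params.minCircularity = 0.7
detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(img_gray)  # img_gray from your preprocessing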
As the other user said, Hough circles aren't the best approach here, because the Hough transform looks only for perfect circles, whereas your target is roughly circular but not a circle (due to motion blur, light bleed/reflection, noise, etc.).
I suggest converting the image to HSV and then filtering by hue and intensity to get a binary mask, instead of using grayscale directly; that helps remove the background and noise and limits the search area.
Then use findContours() (faster than blob detection) and check for contours with high circularity, an expected size/area range, and maybe even solidity:
import cv2
import numpy as np

area = cv2.contourArea(contour)
perimeter = cv2.arcLength(contour, True)
circularity = 4 * np.pi * area / (perimeter ** 2)  # 1.0 for a perfect circle
solidity = area / cv2.contourArea(cv2.convexHull(contour))  # 1.0 for a convex shape
Your biggest problem will be the orb contour merging with the background due to low contrast, so some adaptive thresholding could help.
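A minimal sketch of the whole pipeline (OpenCV 4.x; the HSV bounds and the circularity/area cut-offs are placeholders to tune against your own footage):

import cv2
import numpy as np

img = cv2.imread("frame.png")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# Keep bright, reasonably saturated pixels; narrow the hue range
# if you know the orb's color.
mask = cv2.inRange(hsv, (0, 80, 180), (179, 255, 255))

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
candidates = []
for c in contours:
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, True)
    if perimeter == 0 or not 20 < area < 7000:
        continue
    circularity = 4 * np.pi * area / perimeter ** 2
    if circularity > 0.7:
        candidates.append(c)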
From a large collection of jpeg images, I want to identify those more likely to be simple logos or text (as opposed to camera pictures). One identifying characteristic would be low color count. I expect most to have been created with a drawing program.
If a jpeg image has a palette, it's simple to get a color count. But I expect most files to be 24-bit color images. There's no restriction on image size.
I suppose I could create an array of 2^24 (16M) integers, iterate through every pixel, and increment the count for that 24-bit color. Yuck. Then I would count the non-zero entries. But if the JPEG compression messes with the original colors, I could end up counting a lot of unique pixels, which might be hard to distinguish from a photo. (Maybe I could convert each pixel to the YUV colorspace and keep fewer counts.)
Any better ideas? Library suggestions? Humorous condescensions?
Sample 10,000 random coordinates, build a histogram of the sampled colors, and then analyze the histogram.
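A rough sketch of that idea with Pillow and NumPy; the 10,000-sample count and the 4-bit channel quantization (there to absorb JPEG noise) are assumptions to tune:

import numpy as np
from PIL import Image

img = np.asarray(Image.open("input.jpg").convert("RGB"))
h, w, _ = img.shape
rng = np.random.default_rng()
samples = img[rng.integers(0, h, 10000), rng.integers(0, w, 10000)]

# Coarsen each channel to 4 bits so JPEG noise collapses into one bin.
coarse = samples >> 4
unique_colors = len(np.unique(coarse, axis=0))
print("distinct coarse colors in sample:", unique_colors)  # low => likely logo/text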
I am making a game for mobile phones and have little knowledge of creating graphics for games. I make the graphics using CorelDRAW and Photoshop.
I made flash.png using those two programs and could only squeeze the size down to 47 KB.
But I came across one game whose backgrounds (bg0.png and bg1.png) are just 2 KB each.
I want to know how to make such beautiful graphics without the file size blowing up.
I assume the game's artist must have hand-sketched and scanned the art, then used one of the programs above to fill in the colors, but I am not sure about it.
Please help.
There are several ways to reduce the size of a PNG:
Reduce the colour depth. Don't use RGB true/24-bit colour; use an indexed-colour image. You need to add a palette to the image, but each pixel is then one byte instead of three.
Once you have an indexed-colour image, reduce the number of colours in the palette. There is a limit to how far you can reduce it - the fewer colours, the lower the image quality. (A small sketch of both steps follows this list.)
Remove unnecessary PNG chunks. Art packages may add additional data to the PNG that isn't image data (creation date, author info, resolution, comments, etc.)
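As a concrete illustration of points 1 and 2, a small Pillow sketch (the file names are placeholders; lower the colour count until the quality becomes unacceptable):

from PIL import Image

img = Image.open("flash.png").convert("RGB")
# quantize() produces an indexed ("P" mode) image with an adaptive palette.
indexed = img.quantize(colors=16)
indexed.save("flash_small.png", optimize=True)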
Check http://pmt.sourceforge.net/pngcrush/ to get rid of unneeded PNG chunks and compress the IDAT chunk even further. This might help a lot or not at all, depending on the PNG that came out of the art package. If it doesn't help, consider indexed PNGs. And if you go for palettized PNGs, be sure to check out http://en.wikipedia.org/wiki/Color_cycling for cool effects you might be able to use.
Use a paletted PNG with few colors and then pass it through a PNG optimizer such as the free PngOptimizer executable.
If your PNG is still too big, reduce the number of colors used and re-optimize. Rinse and repeat ^^.
I have used this technique on quite a lot of mobile games where size was of the essence.
How can I see the color space of my image with OpenCV?
I would like to be sure it is RGB before converting it to another color space using the cvCvtColor() function.
Thanks.
Unfortunately, OpenCV doesn't provide any indication of the color space in the IplImage structure, so if you blindly pick up an IplImage from somewhere, there is just no way to know how it was encoded. Furthermore, no algorithm can definitively tell you whether an image should be interpreted as HSV or RGB - it's all just a bunch of bytes to the machine (should this be HSV or RGB?). I recommend you wrap your IplImages in another struct (or even a C++ class with templates!) to help you keep track of this information. If you're really desperate and you're dealing only with certain types of images (outdoor scenes, offices, faces, etc.), you could compute some statistics on your images (e.g., build histogram statistics for natural RGB images and some for natural HSV images) and then try to classify your unknown image by checking which color space it is closer to.
txandi makes an interesting point. OpenCV has a BGR colorspace which is used by default. This is similar to the RGB colorspace except that the B and R channels are physically switched in the image. If the physical channel ordering is important to you, you will need to convert your image with this function: cvCvtColor(defaultBGR, imageRGB, CV_BGR2RGB).
As rcv said, there is no method to programmatically detect the color space by inspecting the three color channels, unless you have a priori knowledge of the image content (e.g., there is a marker in the image whose color is known). If you will be accepting images from unknown sources, you must allow the user to specify the color space of their image. A good default would be to assume RGB.
If you modify any of the pixel colors before display, and you are using a non-OpenCV viewer, you should probably use cvCvtColor(src,dst,CV_BGR2RGB) after you have finished running all of your color filters. If you are using OpenCV for the viewer or will be saving the images out to file, you should make sure they are in BGR color space.
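To illustrate with the Python API (cv2.imread() returns BGR, while viewers such as matplotlib expect RGB):

import cv2
import matplotlib.pyplot as plt

bgr = cv2.imread("photo.jpg")               # OpenCV loads images as BGR
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)  # reorder channels for display

plt.imshow(rgb)                             # colors now appear correctly
plt.show()
cv2.imwrite("out.jpg", bgr)                 # OpenCV I/O still expects BGR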
The IplImage struct has a field named colorModel consisting of 4 chars. Unfortunately, OpenCV ignores this field. But you can use this field to keep track of different color models.
I basically split the channels and display each one to figure out the color space of the image I'm using. It may not be the best way, but it works for me.
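A minimal sketch of that channel-split check:

import cv2

img = cv2.imread("mystery.png")  # OpenCV reads files as BGR by convention
for name, channel in zip("BGR", cv2.split(img)):
    cv2.imshow("channel " + name, channel)
cv2.waitKey(0)
cv2.destroyAllWindows()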
For a detailed explanation, you can refer to the link below:
https://dryrungarage.wordpress.com/2018/03/11/image-processing-basics/