We are used to developing for iOS using Cocos2D, where we have plenty of choices when it comes to image formats.
I cannot find any reference on how we should save our image assets for use within Corona, especially in relation to performance.
What I have found out so far is only that the PVR format is not supported (presumably because of the SDK's cross-platform nature). I have also seen small hints that Corona uses a 32-bit pixel format for everything, so we cannot use different pixel formats as we do with Cocos2D.
Are there any Corona SDK people out there who can answer this and/or point me to documentation with more details?
Thanks!
I have managed to find confirmation in the Anscamobile forums: indeed, Corona SDK currently uses only a 32-bit pixel format for all textures, so there is no reason to save your images as 16-bit PNGs other than to reduce their size on disk.
Corona SDK now supports indexed PNGs, 16-bit PNGs, grayscale PNGs... in fact, any sort of PNG you want to use.
But the only performance gains are in loading speed (it GREATLY improves loading times) and storage space; once loaded, every texture is still expanded to 32 bits per pixel.
I am using ImageOptim and ImageAlpha to optimize my PNGs :)
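If you want to script that optimization step in an asset pipeline, here is a rough sketch of the same idea (palette quantization) in Python using Pillow; this is just one possible approach, not part of Corona itself, and the file names are placeholders. Remember it only shrinks the file on disk and speeds up loading; Corona still expands everything to 32 bits per pixel in memory.

# Rough sketch: quantize a 32-bit PNG down to an indexed (8-bit) PNG to
# save disk space. Requires Pillow (pip install Pillow); file names are
# placeholders. Alpha is dropped here for simplicity -- ImageAlpha
# handles transparency much more carefully.
import os
from PIL import Image

src = "icon.png"
dst = "icon-indexed.png"

img = Image.open(src).convert("RGB")
indexed = img.quantize(colors=256)   # 8-bit palette image
indexed.save(dst, optimize=True)

print(src, os.path.getsize(src), "bytes")
print(dst, os.path.getsize(dst), "bytes")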
Is there a unified and lightweight method for loading multiple common image file formats in Node.js that provides read access to individual pixels?
It should support gif, jpeg, and png.
Preferably it would either support other image formats too or provide a way to add more. (webp, etc.)
It does not need to be able to save the file again after modifying pixels, provide metadata access, or anything else.
It doesn't need to be able to load images from URLs.
So far, the libraries I have found that support multiple image formats are heavyweight, e.g. they provide full canvas support or a full image-processing suite.
Is there a lightweight way to do this that I'm not finding?
I don't know why I couldn't find this one before posting here:
get-pixels
Given a URL/path, grab all the pixels in an image and return the result as an ndarray. Written in 100% JavaScript, works both in browserify and in node.js and has no external native dependencies.
Currently the following file formats are supported:
PNG
JPEG
GIF
It hasn't had any updates for two years but seems to be the most lightweight option. I'm guessing most people use Jimp these days; it doesn't seem to have external dependencies and is actively developed, but it includes a lot of image-processing functionality I don't need.
I am using Tableau for some data visualizations, and the only good-quality image export Tableau allows is *.emf.
Unfortunately, the online tool I use to put the report together (Canva) does not support the EMF format.
When I convert the file to JPG or PNG, the quality is drastically reduced :(
How can I overcome this? I have tried many things, such as opening the EMF in Illustrator and saving it back with CMYK colors at 300 dpi, but nothing seems to keep the crisp quality of the original EMF file.
User-friendly solution:
Inkscape opens Enhanced Windows Metafiles and many other vector graphics formats.
It exports to PNG with a choice of output resolution.
It is open source and available for Linux, Windows, and Mac OS X.
It is true that Tableau's image export feature does not provide many options. In general, when I need high-quality images, I use one of the methods below, depending on the quality I need and the tools available to me at the time:
Screenshot method: If you have a large screen, taking a screenshot directly from Tableau yields better images than the exported ones. If my viz is published to the web, I sometimes enlarge the graphic in my web browser and then take the screenshot.
Converting from PDF: Since PDF can contain vector objects, Tableau's PDF exports are high quality most of the time. If you cannot use the PDF files directly, you can try converting them to PNG or JPG using online or desktop tools (a scripted example follows below); just be careful with confidential files when using online services :)
There are more ways to convert from PDF, but they are usually more complicated since they involve some Photoshop steps. I am not sure how practical these methods are for a large number of files, but you may still want to check one of them: https://community.tableau.com/thread/120134
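If you have many PDF exports to convert, a small script may be more comfortable than an online tool. Below is a minimal sketch assuming the Python pdf2image package (a wrapper around Poppler's pdftoppm) is installed; the file name and the 300 dpi value are just example values.

# Minimal sketch: rasterize a Tableau PDF export to high-resolution PNGs.
# Assumes "pip install pdf2image" and a Poppler install on the PATH;
# "dashboard.pdf" is a placeholder file name.
from pdf2image import convert_from_path

pages = convert_from_path("dashboard.pdf", dpi=300)
for i, page in enumerate(pages, start=1):
    page.save(f"dashboard-{i}.png", "PNG")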
Currently the PNG images used in the application do not show up crisp and clear; there is clearly a resolution issue. We tried changing the resolution of the images passed to page.setImage from 32x32 up to 128x128, but the higher the resolution, the worse the images actually look.
In native iOS, the resolution of the images used for Retina displays is defined by a naming convention, such as icon.png and icon@2x.png, where the latter has double the resolution.
We tried that as well, knowing there is no documented evidence that it should work.
Any words of wisdom?
Thanks!
Vincent
This is planned for a future release. The API (based on SWT) does not currently support setting multiple images with different resolutions on a single widget.
[Note: This is a rewrite of an earlier question that was considered inappropriate and closed.]
I need to do some pixel-level analysis of television (TV) video. The exact nature of this analysis is not pertinent, but it basically involves looking at every pixel of every frame of TV video, starting from an MPEG-2 transport stream. The host platform will be server-class, multiprocessor 64-bit Linux machines.
I need a library that can handle the decoding of the transport stream and present me with the image data in real time. OpenCV and ffmpeg are two libraries that I am considering for this work. OpenCV is appealing because I have heard it has easy-to-use APIs and rich image-analysis support, but I have no experience using it. I have used ffmpeg in the past for extracting video frame data from files for analysis, but it lacks image-analysis support (though Intel's IPP can supplement it).
In addition to general recommendations for approaches to this problem (excluding the actual image analysis), I have some more specific questions that would help me get started:
1. Are ffmpeg or OpenCV commonly used in industry as a foundation for real-time video analysis, or is there something else I should be looking at?
2. Can OpenCV decode video frames in real time and still leave enough CPU left over to do nontrivial image analysis, also in real time?
3. Is it sufficient to use ffmpeg for MPEG-2 transport stream decoding, or is it preferable to use an MPEG-2 decoding library directly (and if so, which one)?
4. Are there particular pixel formats for the output frames that ffmpeg or OpenCV is particularly efficient at producing (e.g., RGB, YUV, or YUV422)?
1.
I would definitely recommend OpenCV for "real-time" image analysis. I assume by real-time you are referring to the ability to keep up with TV frame rates (e.g., NTSC (29.97 fps) or PAL (25 fps)). Of course, as mentioned in the comments, it certainly depends on the hardware you have available, as well as the image size: SD (480p) vs. HD (720p or 1080p). FFmpeg certainly has its quirks, but you would be hard pressed to find a better free alternative. Its power and flexibility are quite impressive; I'm sure that is one of the reasons the OpenCV developers decided to use it as the back end for video decoding/encoding in OpenCV.
2.
I have not seen issues with high latency while using OpenCV for decoding. How much latency can your system tolerate? If you need to increase performance, consider using separate threads for capture/decoding and for image analysis. Since you mentioned having multi-processor systems, this should take greater advantage of your processing capabilities. I would definitely recommend using the latest Intel Core i7 (or possibly the Xeon equivalent) architecture, as this will give you the best performance available today.
I have used OpenCV on several embedded systems, so I'm quite familiar with your desire for peak performance. I have found many times that it was unnecessary to process a full-frame image (especially when trying to determine masks). I would highly recommend down-sampling the images if you are having difficulty processing your acquired video streams; this can sometimes instantly give you a 4-8x speedup (depending on your down-sample factor). Also on the performance front, I would definitely recommend using Intel's IPP. Since OpenCV was originally an Intel project, IPP and OpenCV blend very well together.
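To make both suggestions concrete (a separate capture/decoding thread plus down-sampling), here is a rough Python/OpenCV sketch; the file name, queue size, and 0.5 scale factor are arbitrary placeholders, and the actual analysis is left as a stub.

# Rough sketch: decode in one thread, down-sample and analyze in another.
# Requires OpenCV's Python bindings (cv2); "input.ts" is a placeholder.
import queue
import threading

import cv2

frames = queue.Queue(maxsize=32)       # bounded so decoding cannot run away

def capture(path):
    cap = cv2.VideoCapture(path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.put(frame)
    cap.release()
    frames.put(None)                   # sentinel: end of stream

def analyze():
    while True:
        frame = frames.get()
        if frame is None:
            break
        # Down-sample before any heavy per-pixel work (often a 4-8x win).
        small = cv2.resize(frame, None, fx=0.5, fy=0.5,
                           interpolation=cv2.INTER_AREA)
        gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
        # ... run the real per-pixel analysis on `gray` here ...

t = threading.Thread(target=capture, args=("input.ts",))
t.start()
analyze()
t.join()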
Finally, because image processing is one of those "embarrassingly parallel" problem fields, don't forget about the possibility of using GPUs as hardware accelerators if needed. OpenCV has been doing a lot of work in this area lately, so those tools should be available to you.
3.
I think FFmpeg would be a good starting point; most of the alternatives I can think of (Handbrake, mencoder, etc.) tend to use ffmpeg as a backend, but it looks like you could probably roll your own with IPP's Video Coding library if you wanted to.
4.
OpenCV's internal representation of colors is BGR unless you use something like cvtColor to convert it. If you would like to see a list of the pixel formats that are supported by FFmpeg, you can run
ffmpeg -pix_fmts
to see what it can input and output.
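As a tiny illustration of that (not the asker's setup; the path is a placeholder), frames come back from OpenCV in BGR order and cvtColor takes you to whatever space your analysis needs:

# Frames from OpenCV arrive in BGR order; convert explicitly if needed.
# "input.ts" is a placeholder path.
import cv2

cap = cv2.VideoCapture("input.ts")
ok, frame = cap.read()                 # frame is H x W x 3, BGR
if ok:
    yuv = cv2.cvtColor(frame, cv2.COLOR_BGR2YUV)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    print(frame.shape, yuv.shape, gray.shape)
cap.release()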
For the 4th question only:
Video streams are usually encoded in a chroma-subsampled YUV format (YUV 4:2:0 or 4:2:2, i.e. YCbCr). Converting them to BGR and back (for re-encoding) eats up a lot of time, so if you can write your algorithms to run on YUV you'll get an instant performance boost.
Note 1. While OpenCV natively works with BGR images, you can make it process YUV, with some care and knowledge about its internals.
For example, if you want to detect people in the video, just take the leading part of the decoded planar buffer (the Y plane, which is effectively a grayscale representation of the image) and process it.
Note 2. If you want to access the YUV image, you must use the ffmpeg API (or the ffmpeg command line, as sketched below) directly in your app; OpenCV forces the conversion from YUV to BGR in its VideoCapture API.
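Here is a rough sketch of that second route without touching the ffmpeg C API: pipe raw planar frames from the ffmpeg command line and slice out the Y plane with numpy. It assumes ffmpeg is on the PATH, yuv420p output (Y plane first, then quarter-size U and V planes), and a placeholder path and frame size.

# Sketch: read raw yuv420p frames from ffmpeg's stdout and use the Y
# (luma) plane directly as a grayscale image -- no BGR conversion.
# "input.ts" and the 720x576 frame size are placeholders.
import subprocess

import numpy as np

W, H = 720, 576                        # must match the decoded stream
frame_size = W * H * 3 // 2            # yuv420p: Y plane + U + V

proc = subprocess.Popen(
    ["ffmpeg", "-i", "input.ts",
     "-f", "rawvideo", "-pix_fmt", "yuv420p", "-"],
    stdout=subprocess.PIPE,
    stderr=subprocess.DEVNULL,
)

while True:
    raw = proc.stdout.read(frame_size)
    if len(raw) < frame_size:
        break
    y_plane = np.frombuffer(raw[:W * H], dtype=np.uint8).reshape(H, W)
    # y_plane is the grayscale image; run the per-pixel analysis here.

proc.wait()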
I'm starting to venture out from using JPEG and GIF files to PNG, and I was wondering if there are any standards for using PNG besides IE's lack of support for it. I also want to know if there are any current articles about the settings I should be using when optimizing for the web. Right now I'm using Photoshop to do this; should I be using Fireworks instead?
Which optimizations you use depends on the type of image. If your image contains only a few colors, you might use PNG-8; otherwise you may need PNG-24. The same goes for the use of transparency/alpha blending.
Photoshop's Save for Web feature does a fine job, but when your website has a lot of visitors, you may benefit from using PNGCrush to further compress your images. You can use the YSlow plugin for Firefox to test how much bandwidth you can save by crushing your images.
Also, you can make use of CSS sprites if your design allows it. This can result in fewer (but larger) images and therefore fewer requests and sometimes less bandwidth. But this doesn't depend on the type of images you use.
PNG is supported by IE, by the way. Only alpha transparency is not supported by IE 6, but there are CSS/JavaScript tricks to work around that, although they do not work for background images.
I wouldn't quit using JPG; it is very useful for photographs. PNG files are convenient for small images like buttons and graphical elements, and for images with large flat areas, like screenshots.