Using the Haxe programming language, is there any cross-platform way to read a PNG image, and get the pixel data from the image?
I have a file called stuff.png, and I want to obtain an array of RGB values from the image (as an integer array).
Here's an example usage of the Haxe format library to read a PNG file. You need -lib format in your compiler args / build.hxml:
function readPixels(file:String):{data:Bytes, width:Int, height:Int} {
var handle = sys.io.File.read(file, true);
var d = new format.png.Reader(handle).read();
var hdr = format.png.Tools.getHeader(d);
var ret = {
data:format.png.Tools.extract32(d),
width:hdr.width,
height:hdr.height
};
handle.close();
return ret;
}
Here's an example of how to get ARGB pixel data from the above:
public static function main() {
if (Sys.args().length == 0) {
trace('usage: PNGReader <filename>');
Sys.exit(1);
}
var filename = Sys.args()[0];
var pixels = readPixels(filename);
for (y in 0...pixels.height) {
for (x in 0...pixels.width) {
var p = pixels.data.getInt32(4*(x+y*pixels.width));
// ARGB, each 0-255
var a:Int = p>>>24;
var r:Int = (p>>>16)&0xff;
var g:Int = (p>>>8)&0xff;
var b:Int = (p)&0xff;
// Or, AARRGGBB in hex:
var hex:String = StringTools.hex(p,8);
trace('${ x },${ y }: ${ a },${ r },${ g },${ b } - ${ StringTools.hex(p,8) }');
}
}
}
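For reference, a minimal build.hxml for a sys target might look like this (the main class and output names are illustrative, matching the PNGReader example above):
-lib format
-main PNGReader
-neko pngreader.n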
You can always access the pixel data with BitmapData.getPixels/BitmapData.setPixels.
If you are using Haxe NME, you can use Assets.getBitmapData() to load an asset image file.
If you want to load images from the network, you can use the Loader class; it loads remote images asynchronously, but on the flash target mind the cross-domain issues.
For a more generic ByteArray -> BitmapData conversion, use the following code:
var ldr = new Loader();
ldr.contentLoaderInfo.addEventListener(flash.events.Event.COMPLETE, function(_) {
// loadBytes is asynchronous: content is only valid once COMPLETE fires,
// and it should then be a Bitmap
var dp:DisplayObject = ldr.content;
var bitmapData = new BitmapData(Std.int(dp.width), Std.int(dp.height), true, 0);
bitmapData.draw(dp);
});
ldr.loadBytes(byteArray); // byteArray: flash.utils.ByteArray with the raw image file data
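Once the draw has completed you can read pixels straight back from the BitmapData; for example (standard flash API, x and y being pixel coordinates):
var argb:UInt = bitmapData.getPixel32(x, y); // 0xAARRGGBB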
Related
In AS3 I could write the following:
fileReference = new FileReference();
var xmlStage:XML = new XML(<STAGE/>);
var xmlObjects:XML = new XML(<OBJECTS/>);
var j:uint;
var scene:SomeScene = ((origin_ as SecurityButton).origin as SomeScene);
var object:SomeObject;
for (j = 0; j < scene.objectArray.length; ++j) {
object = scene.objectArray[j];
if (1 == object.saveToXML){
var item:String = "obj";
var o:XML = new XML(<{item}/>);
o.@x = scene.objectArray[j].x;
o.@y = scene.objectArray[j].y;
o.@n = scene.objectArray[j].name;
o.@g = scene.objectArray[j].band;
o.@f = scene.objectArray[j].frame;
o.@w = scene.objectArray[j].width;
o.@h = scene.objectArray[j].height;
o.@s = scene.objectArray[j].sprite;
o.@b = scene.objectArray[j].bodyType;
xmlObjects.appendChild(o);
//System.disposeXML(o);
}
}
xmlStage.appendChild(xmlObjects);
fileReference.save(xmlStage, "XML.xml");
//System.disposeXML(xmlObjects);
//System.disposeXML(xmlStage);
//fileReference = null;
Is there an equivalent way to do this in Haxe? (Target of interest: HTML5)
If not, what are my options?
(The exported result of this AS3 code is shown at the link below.)
https://pastebin.com/raw/5twiJ01B
You can use the Xml class to create XML (see example: https://try.haxe.org/#68cfF )
class Test {
static function main() {
var root = Xml.createElement('root');
var child = Xml.createElement('my-element');
child.set('attribute1', 'value1'); //add your own object's values
child.set('attribute2', 'value2'); //may be add a few more children
root.addChild(child);
//this could be a file write, or POST'ed to http, or socket
trace(root.toString()); // <root><my-element attribute1="value1" attribute2="value2"/></root>
}
}
The root.toString() in that example could instead be written to a file, or sent to any other kind of output (like POSTing via http to somewhere).
You could use FileReference for the flash target, and sys.io.File for the sys targets:
var output = sys.io.File.write(path, true);
output.writeString(data);
output.flush();
output.close();
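For the HTML5 target there is no direct file-system access, so sys.io is unavailable; a common workaround is to offer the XML string as a browser download. A minimal sketch for the js target (saveAs is a hypothetical helper name; the calls are the standard Blob/anchor DOM APIs):
static function saveAs(content:String, fileName:String):Void {
    var blob = new js.html.Blob([content], {type: 'application/xml'});
    var url = js.html.URL.createObjectURL(blob);
    var a = js.Browser.document.createAnchorElement();
    a.href = url;
    a.download = fileName; // suggested file name for the download
    a.click();
    js.html.URL.revokeObjectURL(url);
}
Calling saveAs(root.toString(), 'XML.xml') would then prompt the browser to save the generated XML.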
Currently I am using WebGL-based code in the browser, on the client side, and it works perfectly. Now I want to run the same code on the server side: not browser based, but pure JavaScript using the headless-gl wrapper.
While doing this I am facing a problem.
new Image() is defined in the browser, but on the server side I get the error Image is not defined.
In node-webgl it can be obtained as
var Image = require("node-webgl").Image;
but in headless-gl I tried
require("gl").Image; and require('gl')(width, height, { preserveDrawingBuffer: true }).Image.
Neither worked. Can someone offer an explanation, or advice on where to find proper headless-gl documentation?
Headless-gl only provides WebGL. It does not provide image loading, which is not part of WebGL per se; images are part of HTML5.
You could try the images package.
To load an image and get at its pixel data, something like this should work:
var images = require('images');
var img = images('/path/to/img.jpg');
var raw = img.toBuffer(images.TYPE_RAW);
var pixels = new Uint8Array(raw.buffer, 12); // skip the raw header
You could now upload the image
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, img.width(), img.height(),
0, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
Or make a wrapper to emulate the WebGL API for images
gl.texImage2D = function(origFn) {
return function(bind, mip, internalFormat) {
var width;
var height;
var border;
var format;
var type;
var data;
if (arguments.length === 9) {
// sig1: bind, mip, internalFormat, width, height, border, format, type, data
width = arguments[3];
height = arguments[4];
border = arguments[5];
format = arguments[6];
type = arguments[7];
data = arguments[8];
} else if (arguments.length === 6) {
// sig2: bind, mip, internalFormat, format, type, image
format = arguments[3];
type = arguments[4];
var img = arguments[5];
var raw = img.toBuffer(images.TYPE_RAW);
data = new Uint8Array(raw.buffer, 12); // skip the raw header
width = img.width();
height = img.height();
border = 0;
} else {
throw "Bad args to texImage2D";
}
// call the original function, not gl.texImage2D, to avoid recursing into this wrapper
origFn.call(this, bind, mip, internalFormat, width, height,
border, format, type, data);
};
}(gl.texImage2D);
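With the wrapper installed, the browser-style six-argument call should then work unchanged. Hypothetical usage, with img being an images instance as above:
var img = images('/path/to/img.jpg');
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, img);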
I need to get the size of an image that I just added to the stage and set it to defaultX and defaultY before I call autoDetectRenderer, but I can't seem to grab the size.
How do I grab the size?
Also, from what I have seen so far, I think the best way to set the size is to pass it to autoDetectRenderer. But what would be the implications if I call resize() later?
Any help will be appreciated. Thank you.
my code:
this.stage = new PIXI.Container();
var texture = PIXI.Texture.fromImage('background.png'); //1610x640
var spr = new PIXI.Sprite(texture);
this.stage.addChild(spr);
defaultX = texture.width;
defaultY = texture.height;
this.renderer = PIXI.autoDetectRenderer(defaultX, defaultY); //640x690
this.renderer.resize(x,y); // I know this works with other values
// eg if I hardcode with 1610,640
// but texture.width doesn't work and
// I don't want to hardcode it
Your image may not be loaded yet at the moment you read texture.width.
I use the following construction:
this.stage = new PIXI.Container();
this.renderer = PIXI.autoDetectRenderer(640, 690); //640x690
var baseTexture = PIXI.BaseTexture.fromImage('background.png');
// it may already be available from the cache
if (baseTexture.hasLoaded) {
createSprite(baseTexture);
} else {
baseTexture.on('loaded', function() {
createSprite(baseTexture);
});
}
function createSprite(baseTexture) {
var texture = new PIXI.Texture(baseTexture);
var spr = new PIXI.Sprite(texture);
this.stage.addChild(spr);
defaultX = texture.width;
defaultY = texture.height;
this.renderer.resize(defaultX,defaultY);
}
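An alternative sketch, assuming PIXI v3/v4 where the bundled PIXI.loader instance is available: let the loader fire a callback once the image is ready, then size the renderer from the texture.
PIXI.loader
    .add('bg', 'background.png')
    .load(function(loader, resources) {
        var spr = new PIXI.Sprite(resources.bg.texture);
        this.stage.addChild(spr);
        this.renderer.resize(spr.texture.width, spr.texture.height);
    }.bind(this)); // bind keeps `this` pointing at the object holding stage/renderer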
I'm trying to get the OCR sample app to recognise some small text, and my approach is to resize the image. Once I have resized the image it is all pixelated.
I want to use the SmoothGaussian method to clean it up, but I get an error each time I execute the method.
Here is the code:
Image<Bgr, Byte> image = new Image<Bgr, byte>(openImageFileDialog.FileName);
using (Image<Gray, byte> gray = image.Convert<Gray, Byte>().Resize(800, 600, Emgu.CV.CvEnum.INTER.CV_INTER_LINEAR, true))
{
gray.Convert<Gray, Byte>()._SmoothGaussian(4);
_ocr.Recognize(gray);
Tesseract.Charactor[] charactors = _ocr.GetCharactors();
foreach (Tesseract.Charactor c in charactors)
{
image.Draw(c.Region, drawColor, 1);
}
imageBox1.Image = image;
//String text = String.Concat( Array.ConvertAll(charactors, delegate(Tesseract.Charactor t) { return t.Text; }) );
String text = _ocr.GetText();
ocrTextBox.Text = text;
}
_SmoothGaussian can only handle odd numbers as the kernel size, so try 3 or 5 as the argument instead.
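A minimal fix to the snippet above, using the same variables. Note that the original smooths the temporary copy returned by Convert and then discards it, so smoothing gray in place is what actually affects the OCR input:
gray._SmoothGaussian(5); // kernel size must be odd
_ocr.Recognize(gray);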
I'm trying to run an experiment with the HoughCircleTransformation class of AForge. I'm using this class to try to count the number of circles in an image, but I always get this error message: Unsupported pixel format of the source image.
Here is my code:
private void CountCircles(Bitmap sourceImage)
{
HoughCircleTransformation circleTransform = new HoughCircleTransformation(15);
circleTransform.ProcessImage(sourceImage);
Bitmap houghCircleImage = circleTransform.ToBitmap();
int numCircles = circleTransform.CirclesCount;
MessageBox.Show("Number of circles found : "+numCircles.ToString());
}
HoughCircleTransformation expects a binary bitmap, i.e. an 8 bpp grayscale image that has been thresholded; feeding it a color bitmap is what triggers the "Unsupported pixel format" error. Convert the image first:
private void CountCircles(Bitmap sourceImage)
{
var filter = new FiltersSequence(new IFilter[]
{
Grayscale.CommonAlgorithms.BT709,
new Threshold(0x40)
});
var binaryImage = filter.Apply(sourceImage);
HoughCircleTransformation circleTransform = new HoughCircleTransformation(15);
circleTransform.ProcessImage(binaryImage);
Bitmap houghCircleImage = circleTransform.ToBitmap();
int numCircles = circleTransform.CirclesCount;
MessageBox.Show("Number of circles found : "+numCircles.ToString());
}
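Note that the 15 passed to the HoughCircleTransformation constructor is the radius of the circles to detect, so it needs to match the approximate size of the circles in your image.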