I'm trying to get the most dominant color of an image (ideal case: getting the 5 dominant colors sorted by most used). Is there a way to do that in Processing? I already tried some code I found, but with it I only get the average color:
color extractColorFromImage(PImage img)
{
  img.loadPixels();
  int r = 0, g = 0, b = 0;
  for (int i = 0; i < img.pixels.length; i++)
  {
    color c = img.pixels[i];
    r += c >> 16 & 0xFF;
    g += c >> 8 & 0xFF;
    b += c & 0xFF;
  }
  r /= img.pixels.length; g /= img.pixels.length; b /= img.pixels.length;
  return color(r, g, b);
}
So that's not really what I need. I've already read that I could do it with HSV, k-means and so on... Is there any way to do it in Processing?
Example: here I want to get red as the dominant color, whereas with the code above I get a dark orange. Red-Blue Picture
What about this?
Load the image into a bitmap and analyze every pixel, adding up the number of times each colour occurs. The returned dictionary is sorted by count, so its first five entries are the 5 dominant colours you're after.
static Dictionary<Color, int> CalcImageColors(Bitmap image)
{
    var frequency = new Dictionary<Color, int>();
    for (var h = 0; h < image.Height; h++)
    {
        for (var w = 0; w < image.Width; w++)
        {
            var pixel = image.GetPixel(w, h);
            if (frequency.ContainsKey(pixel))
            {
                frequency[pixel]++;
            }
            else
            {
                frequency.Add(pixel, 1);
            }
        }
    }
    return frequency.OrderByDescending(x => x.Value).ToDictionary(x => x.Key, x => x.Value);
}
The RGB colour space may not always work as you'd expect in terms of mixing/averaging colours. You should convert to a perceptual colour space like CIE LAB. To do that you need to first convert from RGB to CIE XYZ, then from CIE XYZ to CIE LAB. For more info check out these pages on CIE XYZ and CIE LAB.
In terms of Processing, here's a prototype using RGB<>CIE XYZ<>CIE LAB colour conversion code from this answer, with small tweaks to compile in the Processing IDE (which is antsy about the static keyword):
void setup(){
  PImage src = loadImage("http://i.stack.imgur.com/0H1OM.png");
  size(src.width*4, src.height);
  noStroke();
  //display original image
  image(src, 0, 0);
  //display RGB average colour
  fill(extractColorFromImage(src));
  rect(src.width, 0, src.width, src.height);
  //display (perceptual) Lab average colour
  fill(extractAverageColorFromImage(src));
  rect(src.width*2, 0, src.width, src.height);
  //display the most dominant colour
  fill(extractDominantColorFromImage(src));
  rect(src.width*3, 0, src.width, src.height);
}
color extractDominantColorFromImage(PImage img){
  //create a hashmap - the key is the colour, the value associated is the number of pixels per colour
  HashMap<Integer,Integer> colorCounter = new HashMap<Integer,Integer>();
  int numPixels = img.pixels.length;
  for (int i = 0; i < numPixels; i++){
    int colorKey = img.pixels[i];
    //if the colour has already been added to the hashmap, increment the count
    if(colorCounter.containsKey(colorKey)){
      colorCounter.put(colorKey, colorCounter.get(colorKey)+1);
    }else{//otherwise count it as 1
      colorCounter.put(colorKey, 1);
    }
  }
  //find the most dominant colour - note you can implement this to return more than one if you need to
  int max = 0;//what's the highest density of pixels per one colour
  int dominantColor = 0;//which colour is it
  //for each key (colour) in the keyset
  for(int colorKey : colorCounter.keySet()){
    //get the pixel count per colour
    int count = colorCounter.get(colorKey);
    //if this one is the highest, update the max value and keep track of the colour
    if(count > max){
      max = count;
      dominantColor = colorKey;
    }
  }
  //return the winner (colour with most pixels associated)
  return dominantColor;
}
color extractColorFromImage(PImage img)
{
  img.loadPixels();
  int r = 0, g = 0, b = 0;
  for (int i = 0; i < img.pixels.length; i++)
  {
    color c = img.pixels[i];
    r += c >> 16 & 0xFF;
    g += c >> 8 & 0xFF;
    b += c & 0xFF;
  }
  r /= img.pixels.length; g /= img.pixels.length; b /= img.pixels.length;
  return color(r, g, b);
}
color extractAverageColorFromImage(PImage img){
  float[] average = new float[3];
  CIELab lab = new CIELab();
  int numPixels = img.pixels.length;
  for (int i = 0; i < numPixels; i++){
    color rgb = img.pixels[i];
    float[] labValues = lab.fromRGB(new float[]{red(rgb), green(rgb), blue(rgb)});
    average[0] += labValues[0];
    average[1] += labValues[1];
    average[2] += labValues[2];
  }
  average[0] /= numPixels;
  average[1] /= numPixels;
  average[2] /= numPixels;
  float[] rgb = lab.toRGB(average);
  return color(rgb[0] * 255, rgb[1] * 255, rgb[2] * 255);
}
//from https://stackoverflow.com/questions/4593469/java-how-to-convert-rgb-color-to-cie-lab
import java.awt.color.ColorSpace;

public class CIELab extends ColorSpace {

  @Override
  public float[] fromCIEXYZ(float[] colorvalue) {
    double l = f(colorvalue[1]);
    double L = 116.0 * l - 16.0;
    double a = 500.0 * (f(colorvalue[0]) - l);
    double b = 200.0 * (l - f(colorvalue[2]));
    return new float[] {(float) L, (float) a, (float) b};
  }

  @Override
  public float[] fromRGB(float[] rgbvalue) {
    float[] xyz = CIEXYZ.fromRGB(rgbvalue);
    return fromCIEXYZ(xyz);
  }

  @Override
  public float getMaxValue(int component) {
    return 128f;
  }

  @Override
  public float getMinValue(int component) {
    return (component == 0) ? 0f : -128f;
  }

  @Override
  public String getName(int idx) {
    return String.valueOf("Lab".charAt(idx));
  }

  @Override
  public float[] toCIEXYZ(float[] colorvalue) {
    double i = (colorvalue[0] + 16.0) * (1.0 / 116.0);
    double X = fInv(i + colorvalue[1] * (1.0 / 500.0));
    double Y = fInv(i);
    double Z = fInv(i - colorvalue[2] * (1.0 / 200.0));
    return new float[] {(float) X, (float) Y, (float) Z};
  }

  @Override
  public float[] toRGB(float[] colorvalue) {
    float[] xyz = toCIEXYZ(colorvalue);
    return CIEXYZ.toRGB(xyz);
  }

  CIELab() {
    super(ColorSpace.TYPE_Lab, 3);
  }

  private double f(double x) {
    if (x > 216.0 / 24389.0) {
      return Math.cbrt(x);
    } else {
      return (841.0 / 108.0) * x + N;
    }
  }

  private double fInv(double x) {
    if (x > 6.0 / 29.0) {
      return x*x*x;
    } else {
      return (108.0 / 841.0) * (x - N);
    }
  }

  //  private Object readResolve() {
  //    return getInstance();
  //  }
  //  private static class Holder {
  //    static final CIELab INSTANCE = new CIELab();
  //  }
  //  private static final long serialVersionUID = 5027741380892134289L;

  private final ColorSpace CIEXYZ =
    ColorSpace.getInstance(ColorSpace.CS_CIEXYZ);

  private final double N = 4.0 / 29.0;
}
You can see a preview below with the original image, then (in this order):
the RGB average
the LAB average
the most dominant colour
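Since the question's ideal case asks for the 5 dominant colours sorted by most used, here is a hedged sketch of how the counting approach above could be extended to return the top n colours instead of a single winner. The function name and the java.util sorting code are my own additions, not part of the original prototype; in Processing the import line has to sit at the top of the sketch:
import java.util.*;

// returns up to n colours, sorted by pixel count (most used first)
int[] extractTopColorsFromImage(PImage img, int n){
  img.loadPixels();
  // same counting idea as extractDominantColorFromImage(): colour -> pixel count
  HashMap<Integer,Integer> colorCounter = new HashMap<Integer,Integer>();
  for(int i = 0; i < img.pixels.length; i++){
    Integer count = colorCounter.get(img.pixels[i]);
    colorCounter.put(img.pixels[i], count == null ? 1 : count + 1);
  }
  // copy the entries into a list so they can be sorted by count, descending
  ArrayList<Map.Entry<Integer,Integer>> entries =
    new ArrayList<Map.Entry<Integer,Integer>>(colorCounter.entrySet());
  Collections.sort(entries, new Comparator<Map.Entry<Integer,Integer>>(){
    public int compare(Map.Entry<Integer,Integer> a, Map.Entry<Integer,Integer> b){
      return b.getValue() - a.getValue();
    }
  });
  // keep the first n (or fewer, if the image has fewer distinct colours)
  int m = min(n, entries.size());
  int[] top = new int[m];
  for(int i = 0; i < m; i++){
    top[i] = entries.get(i).getKey();
  }
  return top;
}
On the red/blue test image, extractTopColorsFromImage(src, 5) should return pure red first, followed by the most frequent blue shades.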
Break your problem down into smaller steps.
Step 1: Can you iterate over the pixels in the image? Check out the get() function to help with that. Or you can use the for loop in your code. But first, try just printing out the RGB value of each pixel.
Step 2: When you have that working, try keeping track of the count of each color you see. How you do this depends on exactly what you want: should (255, 0, 0) and (200, 0, 0) both count as red? One way is to use a HashMap<Integer, Integer> (a Processing color is just an int) that keeps track of the count of each color, as sketched below.
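For instance, here is a rough sketch of step 2 that picks one possible answer to that question: quantize each channel before counting, so near-identical colors share a bucket (the 0xF0 mask, 16 levels per channel, is my own arbitrary choice):
import java.util.HashMap; // at the top of the sketch

HashMap<Integer,Integer> countQuantizedColors(PImage img){
  img.loadPixels();
  HashMap<Integer,Integer> counts = new HashMap<Integer,Integer>();
  for(int i = 0; i < img.pixels.length; i++){
    color c = img.pixels[i];
    // keep only the top 4 bits of each channel, so e.g. (255, 0, 0)
    // and (250, 0, 0) both land in the (240, 0, 0) bucket;
    // a coarser mask like 0xC0 merges even more shades
    int key = color(int(red(c)) & 0xF0, int(green(c)) & 0xF0, int(blue(c)) & 0xF0);
    Integer n = counts.get(key);
    counts.put(key, n == null ? 1 : n + 1);
  }
  return counts;
}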
Step 3: Given the counts of each color, now you can output the dominant color. How you do this depends on the data structure you used in step 2.
If you get stuck on a specific step, post an MCVE and we'll go from there. Good luck!
You might want to look at this tutorial on finding dominant colors in an image - it's a more mathematical take on the problem. The idea is to use statistics on the image to figure out the main colours. Source code is available for OpenCV, so it should be possible to adapt it for Processing!
Related
I want to make my dot program turn around when it reaches the edge.
So basically I just calculate
x = width/2+cos(a)*20;
y = height/2+sin(a)*20;
which makes a circular movement. Now I want it to turn around by checking for the edge. I have also already made sure (using println) that y reaches the if condition:
class particles {
  float x, y, a, r, cosx, siny;

  particles() {
    x = width/2; y = height/2; a = 0; r = 20;
  }

  void display() {
    ellipse(x, y, 20, 20);
  }

  void explode() {
    a = a + 0.1;
    cosx = cos(a)*r;
    siny = sin(a)*r;
    x = x + cosx;
    y = y + siny;
  }

  void edge() {
    if (x>width||x<0) cosx*=-1;
    if (y>height||y<0) siny*=-1;
  }
}
//setup() and draw() functions
particles part;

void setup(){
  size(600, 400);
  part = new particles();
}

void draw(){
  background(40);
  part.display();
  part.explode();
  part.edge();
}
It just ignores the if condition.
There is no problem with your check; the problem is that, presumably, the very next time through draw() you undo your response to the check by resetting the values of cosx and siny.
I recommend creating two new variables, dx and dy ("d" for "direction") which will always be either +1 and -1 and change these variables in response to your edge check. Here is a minimal example:
float a, x, y, cosx, siny;
float dx, dy;

void setup(){
  size(400, 400);
  background(0);
  stroke(255);
  noFill();
  x = width/2;
  y = height/2;
  dx = 1;
  dy = 1;
  a = 0;
}

void draw(){
  ellipse(x, y, 10, 10);
  cosx = dx*20*cos(a);
  siny = dy*20*sin(a);
  a += 0.1;
  x += cosx;
  y += siny;
  if (x > width || x < 0)
    dx = -1*dx;
  if (y > height || y < 0)
    dy = -1*dy;
}
When you run this code you will observe the circles bouncing off the edges:
I am using the following code to convert a Bitmap to Complex and vice versa.
Even though these methods were copied directly from the Accord.NET framework, while testing them I have discovered that repeated use of these static methods causes data loss, so the end output/result becomes distorted.
public partial class ImageDataConverter
{
    #region private static Complex[,] FromBitmapData(BitmapData bmpData)
    private static Complex[,] ToComplex(BitmapData bmpData)
    {
        Complex[,] comp = null;
        if (bmpData.PixelFormat == PixelFormat.Format8bppIndexed)
        {
            int width = bmpData.Width;
            int height = bmpData.Height;
            int offset = bmpData.Stride - (width * 1);//1 === 1 byte per pixel.
            if ((!Tools.IsPowerOf2(width)) || (!Tools.IsPowerOf2(height)))
            {
                throw new Exception("Image width and height should be powers of 2.");
            }
            comp = new Complex[width, height];
            unsafe
            {
                byte* src = (byte*)bmpData.Scan0.ToPointer();
                for (int y = 0; y < height; y++)
                {
                    for (int x = 0; x < width; x++, src++)
                    {
                        comp[y, x] = new Complex((float)*src / 255,
                                                 comp[y, x].Imaginary);
                    }
                    src += offset;
                }
            }
        }
        else
        {
            throw new Exception("EightBppIndexedImageRequired");
        }
        return comp;
    }
    #endregion

    public static Complex[,] ToComplex(Bitmap bmp)
    {
        Complex[,] comp = null;
        if (bmp.PixelFormat == PixelFormat.Format8bppIndexed)
        {
            BitmapData bmpData = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height),
                                              ImageLockMode.ReadOnly,
                                              PixelFormat.Format8bppIndexed);
            try
            {
                comp = ToComplex(bmpData);
            }
            finally
            {
                bmp.UnlockBits(bmpData);
            }
        }
        else
        {
            throw new Exception("EightBppIndexedImageRequired");
        }
        return comp;
    }

    public static Bitmap ToBitmap(Complex[,] image, bool fourierTransformed)
    {
        int width = image.GetLength(0);
        int height = image.GetLength(1);
        Bitmap bmp = Imager.CreateGrayscaleImage(width, height);
        BitmapData bmpData = bmp.LockBits(
                                 new Rectangle(0, 0, width, height),
                                 ImageLockMode.ReadWrite,
                                 PixelFormat.Format8bppIndexed);
        int offset = bmpData.Stride - width;
        double scale = (fourierTransformed) ? Math.Sqrt(width * height) : 1;
        unsafe
        {
            byte* address = (byte*)bmpData.Scan0.ToPointer();
            for (int y = 0; y < height; y++)
            {
                for (int x = 0; x < width; x++, address++)
                {
                    double min = System.Math.Min(255, image[y, x].Magnitude * scale * 255);
                    *address = (byte)System.Math.Max(0, min);
                }
                address += offset;
            }
        }
        bmp.UnlockBits(bmpData);
        return bmp;
    }
}
(The DotNetFiddle link of the complete source code)
(ImageDataConverter)
Output:
As you can see, the FFT is working correctly, but the I-FFT isn't.
That is because the bitmap-to-complex conversion (and vice versa) isn't working as expected.
What could be done to correct the ToComplex() and ToBitmap() functions so that they don't lose data?
I do not code in C# so handle this answer with extreme prejudice!
Just from a quick look I spotted a few problems:
ToComplex()
It converts a BMP into a 2D complex matrix. When converting, you leave the imaginary part unchanged, but at the start of the same function you have:
Complex[,] complex2D = null;
complex2D = new Complex[width, height];
So the imaginary parts are either undefined or zero, depending on your complex class constructor. This means you are missing half of the data needed for reconstruction!!! You should restore the original complex matrix from 2 images: one for the real part and a second for the imaginary part of the result.
ToBitmap()
You are saving the magnitude, which is I think sqrt(Re*Re + Im*Im), so it is the power spectrum, not the original complex values, and you cannot reconstruct them back... You should store Re and Im in 2 separate images.
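To spell out why (a standard identity, not something specific to Accord.NET): every complex sample has both a magnitude and a phase,

$$z = \mathrm{Re} + i\,\mathrm{Im} = |z|e^{i\varphi},\qquad |z| = \sqrt{\mathrm{Re}^2 + \mathrm{Im}^2},\qquad \varphi = \operatorname{atan2}(\mathrm{Im}, \mathrm{Re})$$

and infinitely many (Re, Im) pairs share the same |z|. Once the phase φ is thrown away, the inverse FFT has no way to recover the original image.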
8 bits per pixel
That is not much and can cause significant round-off errors after FFT/IFFT, so the reconstruction can be really distorted.
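As a back-of-envelope estimate (my own numbers, not from the question): a value normalized to <0,1> and stored in 8 bits is quantized to steps of

$$\Delta = \frac{1}{255} \approx 0.0039, \qquad |\text{error}| \le \frac{\Delta}{2} \approx 0.002$$

per pixel for every store/load round trip, and the FFT spreads each pixel's error across all output cells, so a few FFT/IFFT cycles make it clearly visible.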
[Edit1] Remedy
There are several options to repair this, for example:
use a floating-point complex matrix for computations and bitmaps only for visualization.
This is the safest way because you avoid additional conversion round-offs, so this approach has the best precision. But you need to rewrite your DIP/CV algorithms to support complex-domain matrices instead of bitmaps, which requires no small amount of work.
rewrite your conversions to support real and imaginary part images.
Your conversion is really bad, as it does not store/restore the Real and Imaginary parts as it should, and it also does not account for negative values (at least I do not see it; instead they are cut down to zero, which is WRONG). I would rewrite the conversion to this:
// conversion scales
float Re_ofset = 256.0, Re_scale = 512.0 / 255.0;
float Im_ofset = 256.0, Im_scale = 512.0 / 255.0;

private static Complex[,] ToComplex(BitmapData bmpRe, BitmapData bmpIm)
{
    //...
    byte* srcRe = (byte*)bmpRe.Scan0.ToPointer();
    byte* srcIm = (byte*)bmpIm.Scan0.ToPointer();
    // for each line
    for (int y = 0; y < height; y++)
    {
        // for each pixel
        for (int x = 0; x < width; x++, srcRe++, srcIm++)
        {
            complex2D[y, x] = new Complex(((float)*srcRe) * Re_scale - Re_ofset,
                                          ((float)*srcIm) * Im_scale - Im_ofset);
        }
        srcRe += offset;
        srcIm += offset;
    }
    //...
}

public static Bitmap ToBitmapRe(Complex[,] complex2D)
{
    //...
    float Re = (complex2D[y, x].Real + Re_ofset) / Re_scale;
    Re = min(Re, 255.0);
    Re = max(Re, 0.0);
    *address = (byte)Re;
    //...
}

public static Bitmap ToBitmapIm(Complex[,] complex2D)
{
    //...
    float Im = (complex2D[y, x].Imaginary + Im_ofset) / Im_scale;
    Im = min(Im, 255.0);
    Im = max(Im, 0.0);
    *address = (byte)Im;
    //...
}
Where:
Re_ofset = min(complex2D[,].Real);
Im_ofset = min(complex2D[,].Imaginary);
Re_scale = (max(complex2D[,].Real )-min(complex2D[,].Real ))/255.0;
Im_scale = (max(complex2D[,].Imaginary)-min(complex2D[,].Imaginary))/255.0;
or cover a bigger interval than the complex matrix values.
You can also encode both the Real and Imaginary parts into a single image: for example, the first half of the image could be the Real part and the second the Imaginary part. In that case you do not need to change the function headers or names at all... but you would need to handle the images as 2 joined squares, each with a different meaning.
You can also use RGB images where R = Real, B = Imaginary, or any other encoding that suits you.
[Edit2] some examples to make my points clearer
example of approach #1
The image is in the form of a floating-point 2D complex matrix and the images are created only for visualization. There is little rounding error this way. The values are not normalized, so the range is <0.0,255.0> per pixel/cell at first, but after transforms and scaling it can change greatly.
As you can see, I added scaling so all pixels are multiplied by 315 to actually see anything, because the FFT output values are small except for a few cells. But that is only for visualization; the complex matrix is unchanged.
example of approach #2
Well, as I mentioned before, you do not handle negative values, you normalize values to the range <0,1> and back by scaling and rounding off, and you use only 8 bits per pixel to store the sub-results. I tried to simulate that with my code and here is what I got (using the complex domain instead of the wrongly used power spectrum as you did). Here is C++ source, only as a template example, as you do not have the functions and classes behind it:
transform t;
cplx_2D c;
rgb2i(bmp0);
c.ld(bmp0,bmp0);
null_im(c);
c.mul(1.0/255.0);
c.mul(255.0); c.st(bmp0,bmp1); c.ld(bmp0,bmp1); i2iii(bmp0); i2iii(bmp1); c.mul(1.0/255.0);
bmp0->SaveToFile("_out0_Re.bmp");
bmp1->SaveToFile("_out0_Im.bmp");
t.DFFT(c,c);
c.wrap();
c.mul(255.0); c.st(bmp0,bmp1); c.ld(bmp0,bmp1); i2iii(bmp0); i2iii(bmp1); c.mul(1.0/255.0);
bmp0->SaveToFile("_out1_Re.bmp");
bmp1->SaveToFile("_out1_Im.bmp");
c.wrap();
t.iDFFT(c,c);
c.mul(255.0); c.st(bmp0,bmp1); c.ld(bmp0,bmp1); i2iii(bmp0); i2iii(bmp1); c.mul(1.0/255.0);
bmp0->SaveToFile("_out2_Re.bmp");
bmp1->SaveToFile("_out2_Im.bmp");
And here are the sub-results:
As you can see, after the DFFT and wrap the image is really dark and most of the values are rounded off, so the result after unwrap and iDFFT is really poor.
Here are some explanations of the code:
c.st(bmpre,bmpim) is the same as your ToBitmap
c.ld(bmpre,bmpim) is the same as your ToComplex
c.mul(scale) multiplies the complex matrix c by scale
rgb2i converts RGB to grayscale intensity <0,255>
i2iii converts grayscale intensity to a grayscale RGB image
I'm not really good at these puzzles, but double-check this division:
comp[y, x] = new Complex((float)*src / 255, comp[y, x].Imaginary);
You can lose precision, as described in the Complex class definition here, in the Remarks section.
Maybe this is what happens in your case.
Hope this helps.
For context: I am going to analyze the breathing movement of parents during kangaroo mother care and I wish to respect their privacy by not recording them, but only the movement of stickers I placed on their chest and stomach.
So far, I'm able to track 2 colours based on webcam input through the code below. However, I would like to record only the tracked colours instead of the webcam feed, so as to preserve the privacy of the parent.
Does anybody know how to add a background colour, whilst still being able to track colour?
import processing.video.*;

Capture video;
final int TOLERANCE = 20;

float XRc = 0;// XY coordinate of the center of the first target
float YRc = 0;
float XRh = 0;// XY coordinate of the center of the second target
float YRh = 0;

int ii = 0; //Mouse click counter

color trackColor; //The first color is the center of the robot
color trackColor2; //The second color is the head of the robot

void setup() {
  size(640, 480);
  video = new Capture(this, 640, 480);
  video.start();
  trackColor = color(255, 0, 0);
  trackColor2 = color(255, 0, 0);
  smooth();
}

void draw() {
  background(0);
  if (video.available()) {
    video.read();
  }
  video.loadPixels();
  image(video, 0, 0);

  float r2 = red(trackColor);
  float g2 = green(trackColor);
  float b2 = blue(trackColor);
  float r3 = red(trackColor2);
  float g3 = green(trackColor2);
  float b3 = blue(trackColor2);

  int somme_x = 0, somme_y = 0;
  int compteur = 0;
  int somme_x2 = 0, somme_y2 = 0;
  int compteur2 = 0;

  for(int x = 0; x < video.width; x++) {
    for(int y = 0; y < video.height; y++) {
      int currentLoc = x + y*video.width;
      color currentColor = video.pixels[currentLoc];
      float r1 = red(currentColor);
      float g1 = green(currentColor);
      float b1 = blue(currentColor);
      if(dist(r1, g1, b1, r2, g2, b2) < TOLERANCE) {
        somme_x += x;
        somme_y += y;
        compteur++;
      }
      else if(compteur > 0) {
        XRc = somme_x / compteur;
        YRc = somme_y / compteur;
      }
      if(dist(r1, g1, b1, r3, g3, b3) < TOLERANCE) {
        somme_x2 += x;
        somme_y2 += y;
        compteur2++;
      }
      else if(compteur2 > 0) {
        XRh = somme_x2 / compteur2;
        YRh = somme_y2 / compteur2;
      }
    }
  }

  if(XRc != 0 || YRc != 0) { // Draw a circle at the first target
    fill(trackColor);
    strokeWeight(0.05);
    stroke(0);
    ellipse(XRc, YRc, 20, 20);
  }
  if(XRh != 0 || YRh != 0) {// Draw a circle at the second target
    fill(trackColor2);
    strokeWeight(0.05);
    stroke(0);
    ellipse(XRh, YRh, 20, 20);
  }
}

void mousePressed() {
  if (mousePressed && (mouseButton == RIGHT)) { // Save color where the mouse is clicked in trackColor variable
    if(ii == 0){
      if (mouseY > 480){ mouseY = 0; mouseX = 0; }
      int loc = mouseX + mouseY*video.width;
      trackColor = video.pixels[loc];
      ii = 1;
    }
    else if(ii == 1){
      if (mouseY > 480){ mouseY = 0; mouseX = 0; }
      int loc2 = mouseX + mouseY*video.width;
      trackColor2 = video.pixels[loc2];
      ii = 2;
    }
  }
}
Try adding background(0); right before you draw the first circle. It will cover the video, and you can then draw the circles on top of it.
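A minimal sketch of where that line would go in the question's draw() (everything except the added background(0) call is unchanged from the original):

  // ...end of the nested for loops that scan video.pixels...

  background(0); // cover the webcam frame so only the tracked markers remain visible

  if(XRc != 0 || YRc != 0) { // Draw a circle at the first target
    fill(trackColor);
    strokeWeight(0.05);
    stroke(0);
    ellipse(XRc, YRc, 20, 20);
  }
  // ...the second circle is drawn the same way...

The pixel scan still reads video.pixels directly, so the tracking keeps working even though the frame itself is never shown.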
So I'm writing a Processing sketch to test a randomized terrain generator for a Scorched Earth clone I'm working on. It seems to work as intended, but with one minor problem. In the code I generate 800 one-pixel-wide rectangles and set the fill to brown beforehand. The combination of the rectangles should be a solid mass with a brown, dirt-like color (77,0,0).
However, the combination shows up as black regardless of the RGB fill value set. I think it might have something to do with each rectangle's border being black? Does anyone know offhand what is happening here?
final int w = 800;
final int h = 480;

void setup() {
  size(w, h);
  fill(0, 128, 255);
  rect(0, 0, w, h);
  int t[] = terrain(w, h);
  fill(77, 0, 0);
  for(int i = 0; i < w; i++){
    rect(i, h, 1, -1*t[i]);
  }
}

void draw() {
}

int[] terrain(int w, int h){
  width = w;
  height = h;
  //min and max bracket the freq's of the sin/cos series
  //The higher the max the hillier the environment
  int min = 1, max = 6;
  //allocating horizon for screen width
  int[] horizon = new int[width];
  double[] skyline = new double[width];
  //ratio of amplitude of screen height to landscape variation
  double r = (int) 2.0/5.0;
  //number of terms to be used in sine/cosine series
  int n = 4;
  int[] f = new int[n*2];
  //calculating omegas for sine series
  for(int i = 0; i < n*2; i++){
    f[i] = (int) random(max - min + 1) + min;
  }
  //amp is the amplitude of the series
  int amp = (int) (r*height);
  for(int i = 0; i < width; i++){
    skyline[i] = 0;
    for(int j = 0; j < n; j++){
      skyline[i] += ( sin( (f[j]*PI*i/height) ) + cos(f[j+n]*PI*i/height) );
    }
    skyline[i] *= amp/(n*2);
    skyline[i] += (height/2);
    skyline[i] = (int)skyline[i];
    horizon[i] = (int)skyline[i];
  }
  return horizon;
}
I think it might have something to do with each rectangle's border being black?
I believe this is the case. In your setup() function, I added the noStroke() function before you draw the rectangles. This removes the black outline from the rectangles. Since each rectangle is only 1 pixel wide, that black stroke (which is on by default) makes each rectangle look black, no matter what fill color you choose beforehand.
Here is an updated setup() function - I now see a reddish brown terrain:
void setup() {
  size(w, h);
  fill(0, 128, 255);
  rect(0, 0, w, h);
  int t[] = terrain(w, h);
  fill(77, 0, 0);
  noStroke(); // here
  for (int i = 0; i < w; i++) {
    rect(i, h, 1, -1*t[i]);
  }
}
I am Persian and J2ME does not have good support for Persian fonts.
I want to create a native font library that takes a bitmap font and paints my Persian text on the display. But I have a problem.
In English, each letter is a pair of a shape and a Unicode code point, like (a, U+0061).
But in Persian a character may have several shapes. For example, the letter 'ب' in the Persian alphabet can be:
آب --when it is a separate letter in a word
به --when it is the starting letter of a word
...
How can I get the other forms of a letter from a font file?
I am a Persian developer and I had the same problem about 4 years ago. You have a couple of ways to solve this problem:
1- using custom fonts.
2- reshaping your text before displaying it.
A good article about the first is "MIDP Terminal Emulation, Part 3: Custom Fonts for MIDP". But for Arabic letters I think that is not simple.
As for the second way: you replace each character in your text with its correct presentation form. This means when you have:
String str = "به";
its characters will look like:
{1576,1607}, which renders like "ب ه" instead of "به". So you should replace the incorrect Unicode codes with the correct ones (in this case the correct characters are {65169, 65258}). You can use "Arabic Reshapers", even reshapers designed for Android! I saw 2 links for such reshapers: 1-github 2-Arabic Android (I'm a Persian developer, but I did not try them; instead I created classes based on the same idea).
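To illustrate the reshaping idea with a single letter (a hypothetical helper, not code from those libraries; a real reshaper covers the whole alphabet and also checks whether the neighbouring letters can join at all):

// toy example for ب (BEH, U+0628), using its Arabic Presentation Forms-B
char reshapeBeh(boolean joinsPrev, boolean joinsNext) {
  if (!joinsPrev && !joinsNext) return '\uFE8F'; // isolated form
  if (!joinsPrev &&  joinsNext) return '\uFE91'; // initial form
  if ( joinsPrev &&  joinsNext) return '\uFE92'; // medial form
  return '\uFE90';                               // final form (joins previous only)
}

For "به" a reshaper would pick the initial form of ب (U+FE91 = 65169) because it joins the following ه, which is exactly the 65169 value mentioned above.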
Even with a good reshaper you may still have a problem with characters being arranged left to right instead of right to left (some phones draw characters from left to right, others from right to left). I use the class below to detect whether the ordering is correct (right to left) or not:
import javax.microedition.lcdui.Font;
import javax.microedition.lcdui.Graphics;
import javax.microedition.lcdui.Image;

public class DetectOrdering {

  public static boolean hasTrueOrdering()
  {
    boolean b = false;
    try {
      char[] chArr = {65169, 65258};
      String str = new String(chArr);
      System.out.println(str);
      // f1, image1 and image2 were undeclared in the original snippet;
      // declared here so the class compiles
      Font f1 = Font.getDefaultFont();
      int width = f1.charWidth(chArr[1]) / 2;
      int height = f1.getHeight();
      Image image1 = Image.createImage(width, height);
      Image image2 = Image.createImage(width, height);
      Graphics g1 = image1.getGraphics();
      Graphics g2 = image2.getGraphics();
      g1.drawString(str, 0, 0, 0);
      g2.drawChar(chArr[1], 0, 0, 0);
      int[] im1 = new int[width * height];
      int[] im2 = new int[width * height];
      image1.getRGB(im1, 0, width, 0, 0, width, height);
      image2.getRGB(im2, 0, width, 0, 0, width, height);
      if (areEqualIntArrrays(im1, im2)) {
        b = true;
      } else {
        b = false;
      }
    } catch (Exception e) {
      e.printStackTrace();
    }
    return b;
  }

  private static boolean areEqualIntArrrays(int[] i1, int[] i2) {
    if (i1.length != i2.length) {
      return false;
    } else {
      for (int i = 0; i < i1.length; i++) {
        if (i1[i] != i2[i]) {
          return false;
        }
      }
    }
    return true;
  }
}
If DetectOrdering.hasTrueOrdering() returns true, you can be sure the phone draws Arabic characters from right to left, and you can display your String directly. If it returns false, the phone draws from left to right, and you should reverse the string after reshaping it (a minimal sketch follows) before displaying it.
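A minimal sketch of that reversal (note the naive loop is only safe for plain reshaped text; mixed digits or combining marks would need real bidi handling):

// reverse a reshaped string for phones that draw left to right
static String reverseForDisplay(String s) {
  StringBuffer sb = new StringBuffer(s.length());
  for (int i = s.length() - 1; i >= 0; i--) {
    sb.append(s.charAt(i));
  }
  return sb.toString();
}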
You can use one alphabet.png for the direct Unicode mappings (those where the Persian char does not change because of the neighbouring chars). If your characters are monospaced, you may start with the class below, as seen at http://smallandadaptive.blogspot.com.br/2008/12/custom-monospaced-font.html:
public class MonospacedFont {

  private Image image;
  private char firstChar;
  private int numChars;
  private int charWidth;

  public MonospacedFont(Image image, char firstChar, int numChars) {
    if (image == null) {
      throw new IllegalArgumentException("image == null");
    }
    // the first visible Unicode character is '!' (value 33)
    if (firstChar <= 33) {
      throw new IllegalArgumentException("firstChar <= 33");
    }
    // there must be at least one character on the image
    if (numChars <= 0) {
      throw new IllegalArgumentException("numChars <= 0");
    }
    this.image = image;
    this.firstChar = firstChar;
    this.numChars = numChars;
    this.charWidth = image.getWidth() / this.numChars;
  }

  public void drawString(Graphics g, String text, int x, int y) {
    // store current Graphics clip area to restore later
    int clipX = g.getClipX();
    int clipY = g.getClipY();
    int clipWidth = g.getClipWidth();
    int clipHeight = g.getClipHeight();

    char[] chars = text.toCharArray();
    for (int i = 0; i < chars.length; i++) {
      int charIndex = chars[i] - this.firstChar;
      // current char exists on the image
      if (charIndex >= 0 && charIndex <= this.numChars) {
        g.setClip(x, y, this.charWidth, this.image.getHeight());
        g.drawImage(image, x - (charIndex * this.charWidth), y,
                    Graphics.TOP | Graphics.LEFT);
        x += this.charWidth;
      }
    }
    // restore initial clip area
    g.setClip(clipX, clipY, clipWidth, clipHeight);
  }
}
And change it to use a different char_uxxxx.png file for each Persian char that changes because of the neighbouring chars.
When parsing your string, before painting, you must check which PNG file is appropriate to use. Hope this is a good place to start.
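For example, usage inside a Canvas could look something like this ("/digits.png" and the glyph range are placeholder assumptions; note the constructor above rejects any firstChar at or below '!' = 33):

// inside a Canvas subclass; "/digits.png" is a hypothetical strip
// holding the glyphs '0'..'9' side by side, all the same width
protected void paint(Graphics g) {
  try {
    Image strip = Image.createImage("/digits.png");
    MonospacedFont font = new MonospacedFont(strip, '0', 10);
    font.drawString(g, "0123456789", 10, 10);
  } catch (java.io.IOException e) {
    e.printStackTrace();
  }
}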