Display glyph from font file - c#-4.0

I am working on a Windows application (.NET 4.0) that accepts a font file name as input and returns an image for each glyph in the font. I have tried everything I could find through Google, but I am not getting images for all glyphs, and some of the images I do get are wrong. Please suggest how I can do this. Here is what I currently have:
private void generateBitmaps(string strFontpath)
{
    // Generate font object
    GlyphTypeface font = new GlyphTypeface(new Uri(strFontpath));
    int fontSize = 22;
    int intGlyphcount = 0;

    // Collect geometry of all glyphs
    var Glyphs = from glyphIndex in font.CharacterToGlyphMap.Values
                 select font.GetGlyphOutline(glyphIndex, fontSize, 1d);

    // now create the visual we'll draw them to
    DrawingVisual viz = new DrawingVisual();
    System.Drawing.Size cellSize = new System.Drawing.Size(fontSize, Convert.ToInt32(fontSize * font.Height));
    int bitWidth = (int)Math.Ceiling(Convert.ToDecimal(cellSize.Width * 10));
    int bitHeight = (int)Math.Ceiling(Convert.ToDecimal(cellSize.Height * 10));

    //using (DrawingContext dc = viz.RenderOpen())
    {
        foreach (var g in Glyphs)
        {
            if (intGlyphcount > font.GlyphCount)
                break;
            //if (g.IsEmpty())
            //    continue; // don't draw the blank ones

            DrawingContext dc = viz.RenderOpen();
            dc.PushTransform(new TranslateTransform());
            System.Windows.Media.Pen glyphPen = new System.Windows.Media.Pen(System.Windows.Media.Brushes.Black, 1);
            dc.DrawGeometry(System.Windows.Media.Brushes.Red, glyphPen, g);
            //GeometryDrawing glyphDrawing = new GeometryDrawing(System.Windows.Media.Brushes.White, glyphPen, g);
            //DrawingImage geometryImage = new DrawingImage(glyphDrawing);
            //dc.DrawImage(geometryImage, new Rect(0, 0, 150, 150));
            dc.Close();
            //geometryImage.Freeze();
            //dc.Pop(); // get rid of the transform

            RenderTargetBitmap bmp = new RenderTargetBitmap(
                200, 200,
                96, 96,
                PixelFormats.Pbgra32);
            bmp.Render(viz);

            PngBitmapEncoder encoder = new PngBitmapEncoder();
            encoder.Frames.Add(BitmapFrame.Create(bmp));
            using (FileStream file = new FileStream(@"GlyphList\Glyph" + intGlyphcount++ + ".png", FileMode.Create))
                encoder.Save(file);
            //dc.Pop();
        }
    }
    MessageBox.Show("Done");
}

If you want to extract the glyphs in a font as images, you can use an external DLL.
One such library is available at this URL:
http://www.gnostice.com/XtremeFontEngine_dot_NET.asp
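If an external library is not an option, a purely managed approach that stays with GlyphTypeface may also work: translate each outline down by the baseline before rendering, because GetGlyphOutline positions the geometry relative to the baseline, so most of each glyph sits above y = 0 and gets clipped when drawn at the top-left of the bitmap. Below is a minimal sketch along those lines (assumptions: the usual WPF namespaces System, System.IO, System.Windows.Media and System.Windows.Media.Imaging, and an existing GlyphList output folder):

private void GenerateGlyphBitmaps(string fontPath)
{
    GlyphTypeface font = new GlyphTypeface(new Uri(fontPath));
    const double emSize = 100;                           // render size in device-independent pixels
    int cell = (int)Math.Ceiling(emSize * font.Height);  // square cell big enough for any glyph
    int index = 0;

    foreach (ushort glyphIndex in font.CharacterToGlyphMap.Values)
    {
        Geometry outline = font.GetGlyphOutline(glyphIndex, emSize, 1d);

        DrawingVisual visual = new DrawingVisual();
        using (DrawingContext dc = visual.RenderOpen())
        {
            // The outline's origin is the baseline, so shift it down into view.
            dc.PushTransform(new TranslateTransform(0, emSize * font.Baseline));
            dc.DrawGeometry(Brushes.Black, null, outline);
            dc.Pop();
        }

        RenderTargetBitmap bmp = new RenderTargetBitmap(cell, cell, 96, 96, PixelFormats.Pbgra32);
        bmp.Render(visual);

        PngBitmapEncoder encoder = new PngBitmapEncoder();
        encoder.Frames.Add(BitmapFrame.Create(bmp));
        using (FileStream file = File.Create(@"GlyphList\Glyph" + index++ + ".png"))
            encoder.Save(file);
    }
}

Whether the translation amount should be emSize * font.Baseline or something derived from each glyph's own Bounds depends on the font, so treat this as a starting point rather than a drop-in replacement.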

Related

Cadlib draw model with images to bitmap

I am using Cadlib to read DWG, DXF and PDF architecture files. Some files have images embedded in them. I create a DxfModel, add those DxfImages to it, and then export it to JPG, but I always get a white image.
DxfModel dxfModel = new DxfModel();
foreach (DxfImage image in cadMimages)
{
    dxfModel.Images.Add(image.ImageDef);
    image.SetDefaultBoundaryVertices();
    dxfModel.Entities.Add(image);
}

GraphicsConfig graphicsConfig = new GraphicsConfig(false, ArgbColors.White)
{
    FixedForegroundColor = ArgbColors.Black,
    CorrectColorForBackgroundColor = false,
    ApplyLineType = true,
    DisplayLineWeight = true,
    DrawImages = true,
    DrawImageFrame = true
};

var boundsCalculator = new BoundsCalculator();
boundsCalculator.GetBounds(dxfModel);
var bounds_ = boundsCalculator.Bounds;

Size maxSize = new Size(4096, 4096);
var width = maxSize.Width;
var height = maxSize.Height;
var point1 = new WW.Math.Point3D(0, height - 1, 0);
var point2 = new WW.Math.Point3D(width - 1, 0, 0);
var to2DTransform = DxfUtil.GetScaleTransform(bounds_.Corner1, bounds_.Corner2, point1, point2);

var bitmap = ImageExporter.CreateAutoSizedBitmap(dxfModel, new GDIGraphics3D(graphicsConfig), SmoothingMode.AntiAlias, to2DTransform, maxSize);
using (Stream stream = File.Create("d:/test.png"))
{
    ImageExporter.EncodeImageToPng(bitmap, stream);
}

How can we get the Signature Properties details and put it as part of the visibleSignature in PDFBox 2.0.12

Below are the signature details used for the visible signature.
I am using Apache PDFBox 2.0.12 to sign a PDF document, and I also want to add a visible signature.
I am able to sign and create a visible signature. The problem is how to get the signature properties (such as SignedBy and SignedDate) from the certificate itself and include them in the text, as explained here:
public class VisibleSignature2 extends SignatureBase {
    private SignatureOptions signatureOptions;
    private VisibleSignatureOptions visibleSignatureOptions;

    public VisibleSignature2(PrivateKey keystore, List<Certificate> pin, VisibleSignatureOptions visibleSignatureOptions) {
        super(keystore, pin);
        this.visibleSignatureOptions = visibleSignatureOptions;
    }

    public void signPDF(InputStream inputFile, OutputStream signedFile, Rectangle2D humanRect) throws IOException {
        PDDocument doc = PDDocument.load(inputFile);
        int accessPermissions = SigUtils.getMDPPermission(doc);
        if (accessPermissions == 1) {
            throw new IllegalStateException("No changes to the document are permitted due to DocMDP transform parameters dictionary");
        }

        PDSignature signature = new PDSignature();
        PDAcroForm acroForm = doc.getDocumentCatalog().getAcroForm();
        PDRectangle rect = createSignatureRectangle(doc, humanRect);

        if (doc.getVersion() >= 1.5f && accessPermissions == 0) {
            SigUtils.setMDPPermission(doc, signature, 2);
        }

        if (acroForm != null && acroForm.getNeedAppearances()) {
            // PDFBOX-3738 NeedAppearances true results in visible signature becoming invisible
            // with Adobe Reader
            if (acroForm.getFields().isEmpty()) {
                // we can safely delete it if there are no fields
                acroForm.getCOSObject().removeItem(COSName.NEED_APPEARANCES);
                // note that if you've set MDP permissions, the removal of this item
                // may result in Adobe Reader claiming that the document has been changed
                // and/or that field content won't be displayed properly.
                // ==> decide what you prefer and adjust your code accordingly.
            }
        }

        // default filter
        signature.setFilter(PDSignature.FILTER_ADOBE_PPKLITE);
        signature.setSubFilter(PDSignature.SUBFILTER_ADBE_PKCS7_DETACHED);
        // the signing date, needed for a valid signature
        signature.setSignDate(Calendar.getInstance());

        signatureOptions = new SignatureOptions();
        System.out.println("Visible Signature Reason is " + visibleSignatureOptions.hasVisibleSignatureReason());
        if (visibleSignatureOptions.isShouldSignatureBeVisible()) {
            System.out.println("Inside VisualSignature");
            signature.setReason(visibleSignatureOptions.getVisibleSignatureReason());
            signatureOptions.setVisualSignature(createVisualSignatureTemplate(doc, 0, rect));
        }
        signatureOptions.setPage(0);
        doc.addSignature(signature, this, signatureOptions);

        // write incremental (only for signing purpose)
        doc.saveIncremental(signedFile);
        doc.close();
        IOUtils.closeQuietly(signatureOptions);
    }

    private PDRectangle createSignatureRectangle(PDDocument doc, Rectangle2D humanRect) {
        float x = (float) humanRect.getX();
        float y = (float) humanRect.getY();
        float width = (float) humanRect.getWidth();
        float height = (float) humanRect.getHeight();
        PDPage page = doc.getPage(0);
        PDRectangle pageRect = page.getCropBox();
        PDRectangle rect = new PDRectangle();
        // signing should be at the same position regardless of page rotation.
        switch (page.getRotation()) {
            case 90:
                rect.setLowerLeftY(x);
                rect.setUpperRightY(x + width);
                rect.setLowerLeftX(y);
                rect.setUpperRightX(y + height);
                break;
            case 180:
                rect.setUpperRightX(pageRect.getWidth() - x);
                rect.setLowerLeftX(pageRect.getWidth() - x - width);
                rect.setLowerLeftY(y);
                rect.setUpperRightY(y + height);
                break;
            case 270:
                rect.setLowerLeftY(pageRect.getHeight() - x - width);
                rect.setUpperRightY(pageRect.getHeight() - x);
                rect.setLowerLeftX(pageRect.getWidth() - y - height);
                rect.setUpperRightX(pageRect.getWidth() - y);
                break;
            case 0:
            default:
                rect.setLowerLeftX(x);
                rect.setUpperRightX(x + width);
                rect.setLowerLeftY(pageRect.getHeight() - y - height);
                rect.setUpperRightY(pageRect.getHeight() - y);
                break;
        }
        return rect;
    }

    private InputStream createVisualSignatureTemplate(PDDocument srcDoc, int pageNum, PDRectangle rect) throws IOException {
        PDDocument doc = new PDDocument();
        PDPage page = new PDPage(srcDoc.getPage(pageNum).getMediaBox());
        doc.addPage(page);
        PDAcroForm acroForm = new PDAcroForm(doc);
        doc.getDocumentCatalog().setAcroForm(acroForm);
        PDSignatureField signatureField = new PDSignatureField(acroForm);
        PDAnnotationWidget widget = signatureField.getWidgets().get(0);
        List<PDField> acroFormFields = acroForm.getFields();
        acroForm.setSignaturesExist(true);
        acroForm.setAppendOnly(true);
        acroForm.getCOSObject().setDirect(true);
        acroFormFields.add(signatureField);
        widget.setRectangle(rect);

        // from PDVisualSigBuilder.createHolderForm()
        PDStream stream = new PDStream(doc);
        PDFormXObject form = new PDFormXObject(stream);
        PDResources res = new PDResources();
        form.setResources(res);
        form.setFormType(1);
        PDRectangle bbox = new PDRectangle(rect.getWidth(), rect.getHeight());
        float height = bbox.getHeight();
        Matrix initialScale = null;
        switch (srcDoc.getPage(pageNum).getRotation()) {
            case 90:
                form.setMatrix(AffineTransform.getQuadrantRotateInstance(1));
                initialScale = Matrix.getScaleInstance(bbox.getWidth() / bbox.getHeight(), bbox.getHeight() / bbox.getWidth());
                height = bbox.getWidth();
                break;
            case 180:
                form.setMatrix(AffineTransform.getQuadrantRotateInstance(2));
                break;
            case 270:
                form.setMatrix(AffineTransform.getQuadrantRotateInstance(3));
                initialScale = Matrix.getScaleInstance(bbox.getWidth() / bbox.getHeight(), bbox.getHeight() / bbox.getWidth());
                height = bbox.getWidth();
                break;
            case 0:
            default:
                break;
        }
        form.setBBox(bbox);
        PDFont font = PDType1Font.HELVETICA_BOLD;

        // from PDVisualSigBuilder.createAppearanceDictionary()
        PDAppearanceDictionary appearance = new PDAppearanceDictionary();
        appearance.getCOSObject().setDirect(true);
        PDAppearanceStream appearanceStream = new PDAppearanceStream(form.getCOSObject());
        appearance.setNormalAppearance(appearanceStream);
        widget.setAppearance(appearance);

        PDPageContentStream cs = new PDPageContentStream(doc, appearanceStream);
        if (initialScale != null) {
            cs.transform(initialScale);
        }

        // show text
        float fontSize = 10;
        float leading = fontSize * 1.5f;
        cs.beginText();
        cs.setFont(font, fontSize);
        cs.setNonStrokingColor(Color.black);
        cs.newLineAtOffset(fontSize, height - leading);
        cs.setLeading(leading);
        cs.showText("(Signature very wide line 1)");
        cs.newLine();
        cs.showText("(Signature very wide line 2)");
        cs.newLine();
        cs.showText("(Signature very wide line 3)");
        cs.endText();
        cs.close();

        // no need to set annotations and /P entry
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        doc.save(baos);
        doc.close();
        return new ByteArrayInputStream(baos.toByteArray());
    }
}

Issue with Photometric Interpretation tag even after inverting the image data

Note: I am giving the background from my previous question once again, so that all the related information is in one place.
I'm capturing an image from an Android mobile device, and it's in JPEG format. The image is 72x72 DPI and 24 bit. When I try to convert this JPEG image to TIFF using LibTiff.Net and set the tag Photometric Interpretation = 0 (MinIsWhite), the image turns negative (white becomes black and black becomes white). The environment is Windows 8.1 64 bit, Visual Studio 2012. The tag must have value 0, where 0 = white is zero.
I absolutely must use Photometric.MINISWHITE in these images, so I tried inverting the image data before writing it to TIFF, as per the code below. But then the compression changes to LZW instead of CCITT4, Photometric changes from MINISWHITE to MINISBLACK, the FillOrder tag is removed, the PlanarConfig tag is removed, a new Predictor tag is added with value 1, and the image turns negative again.
public partial class Form1 : Form
{
    private const TiffTag TIFFTAG_ASCIITAG = (TiffTag)666;
    private const TiffTag TIFFTAG_LONGTAG = (TiffTag)667;
    private const TiffTag TIFFTAG_SHORTTAG = (TiffTag)668;
    private const TiffTag TIFFTAG_RATIONALTAG = (TiffTag)669;
    private const TiffTag TIFFTAG_FLOATTAG = (TiffTag)670;
    private const TiffTag TIFFTAG_DOUBLETAG = (TiffTag)671;
    private const TiffTag TIFFTAG_BYTETAG = (TiffTag)672;

    public Form1()
    {
        InitializeComponent();
    }

    private void button1_Click(object sender, EventArgs e)
    {
        using (Bitmap bmp = new Bitmap(@"D:\Projects\ITests\images\IMG_2.jpg"))
        {
            // convert jpg image to tiff
            byte[] tiffBytes = GetTiffImageBytes(bmp, false);
            File.WriteAllBytes(@"D:\Projects\ITests\images\output.tif", tiffBytes);

            //Invert the tiff image
            Bitmap bmpTiff = new Bitmap(@"D:\Projects\ITests\images\output.tif");
            Bitmap FBitmap = Transform(bmpTiff);
            FBitmap.Save(@"D:\Projects\ITests\images\invOutput1.tif");
        }
    }

    public static byte[] GetTiffImageBytes(Bitmap img, bool byScanlines)
    {
        try
        {
            byte[] raster = GetImageRasterBytes(img);
            using (MemoryStream ms = new MemoryStream())
            {
                using (Tiff tif = Tiff.ClientOpen("InMemory", "w", ms, new TiffStream()))
                {
                    if (tif == null)
                        return null;

                    tif.SetField(TiffTag.IMAGEWIDTH, img.Width);
                    tif.SetField(TiffTag.IMAGELENGTH, img.Height);
                    tif.SetField(TiffTag.COMPRESSION, Compression.CCITTFAX4);
                    tif.SetField(TiffTag.PHOTOMETRIC, Photometric.MINISWHITE);
                    tif.SetField(TiffTag.ROWSPERSTRIP, img.Height);
                    tif.SetField(TiffTag.XRESOLUTION, 200);
                    tif.SetField(TiffTag.YRESOLUTION, 200);
                    tif.SetField(TiffTag.SUBFILETYPE, 0);
                    tif.SetField(TiffTag.BITSPERSAMPLE, 1);
                    tif.SetField(TiffTag.FILLORDER, FillOrder.LSB2MSB);
                    tif.SetField(TiffTag.ORIENTATION, BitMiracle.LibTiff.Classic.Orientation.TOPLEFT);
                    tif.SetField(TiffTag.SAMPLESPERPIXEL, 1);
                    tif.SetField(TiffTag.RESOLUTIONUNIT, ResUnit.INCH);
                    tif.SetField(TiffTag.PLANARCONFIG, PlanarConfig.CONTIG);

                    int tiffStride = tif.ScanlineSize();
                    int stride = raster.Length / img.Height;

                    if (byScanlines)
                    {
                        // raster stride MAY be bigger than TIFF stride (due to padding in raster bits)
                        for (int i = 0, offset = 0; i < img.Height; i++)
                        {
                            bool res = tif.WriteScanline(raster, offset, i, 0);
                            if (!res)
                                return null;
                            offset += stride;
                        }
                    }
                    else
                    {
                        if (tiffStride < stride)
                        {
                            // raster stride is bigger than TIFF stride
                            // this is due to padding in raster bits
                            // we need to create correct TIFF strip and write it into TIFF
                            byte[] stripBits = new byte[tiffStride * img.Height];
                            for (int i = 0, rasterPos = 0, stripPos = 0; i < img.Height; i++)
                            {
                                System.Buffer.BlockCopy(raster, rasterPos, stripBits, stripPos, tiffStride);
                                rasterPos += stride;
                                stripPos += tiffStride;
                            }

                            // Write the information to the file
                            int n = tif.WriteEncodedStrip(0, stripBits, stripBits.Length);
                            if (n <= 0)
                                return null;
                        }
                        else
                        {
                            // Write the information to the file
                            int n = tif.WriteEncodedStrip(0, raster, raster.Length);
                            if (n <= 0)
                                return null;
                        }
                    }
                }
                return ms.GetBuffer();
            }
        }
        catch (Exception)
        {
            return null;
        }
    }

    public static byte[] GetImageRasterBytes(Bitmap img)
    {
        // Specify full image
        Rectangle rect = new Rectangle(0, 0, img.Width, img.Height);
        Bitmap bmp = img;
        byte[] bits = null;
        try
        {
            // Lock the managed memory
            if (img.PixelFormat != PixelFormat.Format1bppIndexed)
                bmp = convertToBitonal(img);
            BitmapData bmpdata = bmp.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format1bppIndexed);

            // Declare an array to hold the bytes of the bitmap.
            bits = new byte[bmpdata.Stride * bmpdata.Height];

            // Copy the sample values into the array.
            Marshal.Copy(bmpdata.Scan0, bits, 0, bits.Length);

            // Release managed memory
            bmp.UnlockBits(bmpdata);
        }
        finally
        {
            if (bmp != img)
                bmp.Dispose();
        }
        return bits;
    }

    private static Bitmap convertToBitonal(Bitmap original)
    {
        int sourceStride;
        byte[] sourceBuffer = extractBytes(original, out sourceStride);

        // Create destination bitmap
        Bitmap destination = new Bitmap(original.Width, original.Height,
            PixelFormat.Format1bppIndexed);
        destination.SetResolution(original.HorizontalResolution, original.VerticalResolution);

        // Lock destination bitmap in memory
        BitmapData destinationData = destination.LockBits(
            new Rectangle(0, 0, destination.Width, destination.Height),
            ImageLockMode.WriteOnly, PixelFormat.Format1bppIndexed);

        // Create buffer for destination bitmap bits
        int imageSize = destinationData.Stride * destinationData.Height;
        byte[] destinationBuffer = new byte[imageSize];

        int sourceIndex = 0;
        int destinationIndex = 0;
        int pixelTotal = 0;
        byte destinationValue = 0;
        int pixelValue = 128;
        int height = destination.Height;
        int width = destination.Width;
        int threshold = 500;

        for (int y = 0; y < height; y++)
        {
            sourceIndex = y * sourceStride;
            destinationIndex = y * destinationData.Stride;
            destinationValue = 0;
            pixelValue = 128;
            for (int x = 0; x < width; x++)
            {
                // Compute pixel brightness (i.e. total of Red, Green, and Blue values)
                pixelTotal = sourceBuffer[sourceIndex + 1] + sourceBuffer[sourceIndex + 2] +
                    sourceBuffer[sourceIndex + 3];
                if (pixelTotal > threshold)
                    destinationValue += (byte)pixelValue;
                if (pixelValue == 1)
                {
                    destinationBuffer[destinationIndex] = destinationValue;
                    destinationIndex++;
                    destinationValue = 0;
                    pixelValue = 128;
                }
                else
                {
                    pixelValue >>= 1;
                }
                sourceIndex += 4;
            }
            if (pixelValue != 128)
                destinationBuffer[destinationIndex] = destinationValue;
        }

        Marshal.Copy(destinationBuffer, 0, destinationData.Scan0, imageSize);
        destination.UnlockBits(destinationData);
        return destination;
    }

    private static byte[] extractBytes(Bitmap original, out int stride)
    {
        Bitmap source = null;
        try
        {
            // If original bitmap is not already in 32 BPP, ARGB format, then convert
            if (original.PixelFormat != PixelFormat.Format32bppArgb)
            {
                source = new Bitmap(original.Width, original.Height, PixelFormat.Format32bppArgb);
                source.SetResolution(original.HorizontalResolution, original.VerticalResolution);
                using (Graphics g = Graphics.FromImage(source))
                {
                    g.DrawImageUnscaled(original, 0, 0);
                }
            }
            else
            {
                source = original;
            }

            // Lock source bitmap in memory
            BitmapData sourceData = source.LockBits(
                new Rectangle(0, 0, source.Width, source.Height),
                ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);

            // Copy image data to binary array
            int imageSize = sourceData.Stride * sourceData.Height;
            byte[] sourceBuffer = new byte[imageSize];
            Marshal.Copy(sourceData.Scan0, sourceBuffer, 0, imageSize);

            // Unlock source bitmap
            source.UnlockBits(sourceData);

            stride = sourceData.Stride;
            return sourceBuffer;
        }
        finally
        {
            if (source != original)
                source.Dispose();
        }
    }

    public Bitmap Transform(Bitmap bitmapImage)
    {
        var bitmapRead = bitmapImage.LockBits(new Rectangle(0, 0, bitmapImage.Width, bitmapImage.Height), ImageLockMode.ReadOnly, PixelFormat.Format32bppPArgb);
        var bitmapLength = bitmapRead.Stride * bitmapRead.Height;
        var bitmapBGRA = new byte[bitmapLength];
        Marshal.Copy(bitmapRead.Scan0, bitmapBGRA, 0, bitmapLength);
        bitmapImage.UnlockBits(bitmapRead);

        for (int i = 0; i < bitmapLength; i += 4)
        {
            bitmapBGRA[i] = (byte)(255 - bitmapBGRA[i]);
            bitmapBGRA[i + 1] = (byte)(255 - bitmapBGRA[i + 1]);
            bitmapBGRA[i + 2] = (byte)(255 - bitmapBGRA[i + 2]);
            // [i + 3] = ALPHA.
        }

        var bitmapWrite = bitmapImage.LockBits(new Rectangle(0, 0, bitmapImage.Width, bitmapImage.Height), ImageLockMode.WriteOnly, PixelFormat.Format32bppPArgb);
        Marshal.Copy(bitmapBGRA, 0, bitmapWrite.Scan0, bitmapLength);
        bitmapImage.UnlockBits(bitmapWrite);
        return bitmapImage;
    }
}
You should invert the image bytes in the GetTiffImageBytes method, before writing them to the TIFF. Also, the Transform method converts the bi-level image to a 32bpp one, and that is why you end up with an LZW-compressed image.
So, add the following code

    for (int k = 0; k < raster.Length; k++)
        raster[k] = (byte)(~raster[k]);

after byte[] raster = GetImageRasterBytes(img); in the GetTiffImageBytes method. This will invert the image bytes. And don't use the following code:
    //Invert the tiff image
    Bitmap bmpTiff = new Bitmap(@"D:\Projects\ITests\images\output.tif");
    Bitmap FBitmap = Transform(bmpTiff);
    FBitmap.Save(@"D:\Projects\ITests\images\invOutput1.tif");
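Putting the answer's suggestion in one place, the change amounts to inverting the raster returned by GetImageRasterBytes and feeding that to the TIFF writer. A minimal sketch (the helper name InvertedRasterFor is hypothetical; GetImageRasterBytes is the method from the question):

public static byte[] InvertedRasterFor(Bitmap img)
{
    byte[] raster = GetImageRasterBytes(img);

    // Flip every bit so that 0 means white, matching Photometric.MINISWHITE.
    for (int k = 0; k < raster.Length; k++)
        raster[k] = (byte)(~raster[k]);

    return raster;
}

Call this in place of GetImageRasterBytes at the top of GetTiffImageBytes (or simply paste the loop there, as the answer says), and drop the Transform/Save step from button1_Click entirely.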

How to compress/resize an image in Windows Phone 8

I am creating a project in which I want to compress an image so that it can be uploaded easily to Windows Azure and later retrieved easily from Windows Azure into my application. Can you please help me with how to do that? I am using BitmapImage right now. The following is the code I am using to upload an image to Azure:
void photoChooserTask_Completed(object sender, PhotoResult e)
{
    if (e.TaskResult == TaskResult.OK)
    {
        BitmapImage bitmap = new BitmapImage();
        bitmap.SetSource(e.ChosenPhoto);
        WriteableBitmap wb = new WriteableBitmap(bitmap);
        using (MemoryStream stream = new MemoryStream())
        {
            wb.SaveJpeg(stream, wb.PixelWidth, wb.PixelHeight, 0, 0);
            AzureStorage storage = new AzureStorage();
            storage.Account = **azure account**;
            storage.BlobEndPoint = **azure end point**;
            storage.Key = **azure key**;
            string fileName = uid;
            bool error = false;
            if (!error)
            {
                storage.PutBlob("workerimages", fileName, imageBytes, error);
            }
            else
            {
                MessageBox.Show("Error uploading the new image.");
            }
        }
    }
}
Be careful using the WriteableBitmap, as you may run out of memory if you resize a lot of images. If you only have a few, then pass the size you want saved to the SaveJpeg method. Also make sure you use a value higher than 0 for the quality (the last parameter of SaveJpeg):
var width = wb.PixelWidth / 4;
var height = wb.PixelHeight / 4;
using (MemoryStream stream = new MemoryStream())
{
    wb.SaveJpeg(stream, width, height, 0, 100);
    ...
    ...
}
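One detail the question's code leaves open: PutBlob is given an imageBytes variable that is never assigned, and presumably it should hold the JPEG bytes just written by SaveJpeg. A small sketch of that step, under that assumption (the helper name ToResizedJpegBytes and the quality value 85 are only illustrative):

private static byte[] ToResizedJpegBytes(WriteableBitmap wb)
{
    using (MemoryStream stream = new MemoryStream())
    {
        // Re-encode at a quarter of the original size with a non-zero quality.
        wb.SaveJpeg(stream, wb.PixelWidth / 4, wb.PixelHeight / 4, 0, 85);
        return stream.ToArray();   // the bytes to pass as imageBytes to storage.PutBlob(...)
    }
}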
You can also use the JpegRenderer from the Nokia Imaging SDK to resize an image.
var width = wb.PixelWidth / 4;
var height = wb.PixelHeight / 4;
using (var imageProvider = new StreamImageSource(e.ChosenPhoto))
{
    IFilterEffect effect = new FilterEffect(imageProvider);
    // Get the resize dimensions
    Windows.Foundation.Size desiredSize = new Windows.Foundation.Size(width, height);
    using (var renderer = new JpegRenderer(effect))
    {
        renderer.OutputOption = OutputOption.PreserveAspectRatio;
        // set the new size of the image
        renderer.Size = desiredSize;
        IBuffer buffer = await renderer.RenderAsync();
        return buffer;
    }
}
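Since PutBlob in the question expects a byte array, the IBuffer returned by JpegRenderer still needs converting. The ToArray() extension from System.Runtime.InteropServices.WindowsRuntime should do this on Windows Phone 8, though that reference is an assumption worth verifying in your project (the BufferHelpers class below is just an illustrative wrapper):

using System.Runtime.InteropServices.WindowsRuntime;   // provides IBuffer.ToArray()
using Windows.Storage.Streams;

static class BufferHelpers
{
    // Turn the IBuffer from renderer.RenderAsync() into the byte[] the upload code expects.
    public static byte[] ToJpegBytes(IBuffer buffer)
    {
        return buffer.ToArray();
    }
}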

Continuously Updating Silverlight Image control from Byte[] in Backgroundworker (about every 100 ms)

I am new to posting in forums, as I can usually help myself through searches, but I am really stuck here.
I have written an ASP.NET C# web app that also needs a client-side serial interface to a device that does live scanning and streams the scanned images over the serial (USB) interface. These images need to be refreshed continuously, approximately every 100 ms, essentially creating an "animation" effect.
To run this on the client side, I have written a small Silverlight app running in-browser with the code below, using a BackgroundWorker to separate the serial comms from the UI and refreshing the Silverlight Image control from the byte arrays received.
I have very similar code working 100% in WinForms; my only issue in Silverlight is that no image ever gets displayed in my Image control.
Below is the relevant WinForms code, followed by the corresponding Silverlight code.
My current suspicion is that the image is never rendered because Silverlight only allows a PixelFormat of 32bppArgb, whereas I need PixelFormat.Format8bppIndexed, as in my WinForms CreateBitmap() method below.
If this is indeed the issue, I cannot find any way to create this format of bitmap in Silverlight.
/////////////// WINFORMS CODE (Timer1 Interval = 100ms) //////////////////
private void timer1_Tick(object sender, EventArgs e)
{
    BackgroundWorker bw = new BackgroundWorker();
    bw.WorkerReportsProgress = true;
    bw.DoWork += new DoWorkEventHandler(
        delegate(object o, DoWorkEventArgs args)
        {
            BackgroundWorker b = o as BackgroundWorker;
            int BytesToRead = COMport.Read(ReceiveBuffer);
            for (int i = 0; i < BytesToRead; i++)
            {
                // Code that copies ReceiveBuffer to byte[] LiveImgArr
                Bitmap liveBMP = CreateBitmap(LiveImgArr, imgWidth, imgHeight);
                bw.ReportProgress(i, liveBMP);
            }
        });
    bw.ProgressChanged += new ProgressChangedEventHandler(
        delegate(object o, ProgressChangedEventArgs args)
        {
            pictureBox1.Image = (Bitmap)args.UserState;
        });
    bw.RunWorkerAsync();
}

private Bitmap CreateBitmap(byte[] buffer, int width, int height)
{
    Bitmap bmp = new Bitmap(width, height, System.Drawing.Imaging.PixelFormat.Format8bppIndexed);
    BitmapData bmpData = bmp.LockBits(new Rectangle(0, 0, width, height), ImageLockMode.ReadWrite, PixelFormat.Format8bppIndexed);
    System.Runtime.InteropServices.Marshal.Copy(buffer, 0, bmpData.Scan0, width * height);
    bmp.UnlockBits(bmpData);
    ColorPalette pal = bmp.Palette;
    for (int i = 0; i < 256; i++)
    {
        pal.Entries[i] = Color.FromArgb(i, i, i);
    }
    bmp.Palette = pal;
    return bmp;
}
//////////////////////////////////////// END /////////////////////////////////////////
//////////////////////////////// SilverLight Code ///////////////////////////////////
public void StartTimer() //object o, RoutedEventArgs sender)
{
    System.Windows.Threading.DispatcherTimer CommsTimer = new System.Windows.Threading.DispatcherTimer();
    CommsTimer.Interval = new TimeSpan(0, 0, 0, 0, 100); // 100 Milliseconds
    CommsTimer.Tick += new EventHandler(CommsTimer_Tick);
    CommsTimer.Start();
}

public void CommsTimer_Tick(object o, EventArgs sender)
{
    BackgroundWorker bw = new BackgroundWorker();
    bw.WorkerReportsProgress = true;
    bw.DoWork += new DoWorkEventHandler(
        delegate(object b, DoWorkEventArgs args)
        {
            BackgroundWorker obw = b as BackgroundWorker;
            int BytesToRead = COMport.Read(ReceiveBuffer);
            for (int i = 0; i < BytesToRead; i++)
            {
                // Code that copies ReceiveBuffer to byte[] LiveImgArr
                bw.ReportProgress(i, LiveImgArr);
            }
        });
    bw.ProgressChanged += new ProgressChangedEventHandler(
        delegate(object j, ProgressChangedEventArgs args)
        {
            byte[] imgByte = (byte[])args.UserState;
            using (MemoryStream ms = new MemoryStream(imgByte, 0, imgByte.Length))
            {
                BitmapImage bmp = new BitmapImage();
                bmp.SetSource(ms);
                this.image1.Source = bmp;
            }
        });
//////////////////////////////////////// END //////////////////////////////////////////
///////////////////////////////// NEW CODE AS PER CLEMENTS/////////////////////////////
    bw.ProgressChanged += new ProgressChangedEventHandler(
        delegate(object j, ProgressChangedEventArgs args)
        {
            byte[] imgByte = (byte[])args.UserState;
            WriteableBitmap wbmp = new WriteableBitmap(208, 208);
            int[] wbmpArray = wbmp.Pixels;
            for (int pixelIndex = 0; pixelIndex < imgByte.Length; pixelIndex++)
            {
                byte alpha = 128;
                byte red = 255;
                byte green = 255;
                byte blue = 255;
                double scaleAlpha = alpha / 255.0;
                // we are not using scaleAlpha here
                //wbmp.Pixels[pixelIndex] =
                //    (alpha << 24)
                //    | (red << 16)
                //    | (green << 8)
                //    | blue;
                // notice the alpha value is NOT scaled
                // it's also very important to scale BEFORE
                // shifting the values
                wbmp.Pixels[pixelIndex] =
                    (alpha << 24)
                    | ((byte)(red * scaleAlpha) << 16)
                    | ((byte)(green * scaleAlpha) << 8)
                    | (byte)(blue * scaleAlpha);
            }
            wbmp.Invalidate();
            this.image1.Source = wbmp;
            //ImageBrush imgBrush = new ImageBrush();
            //imgBrush.ImageSource = wbmp;
            //imgRect.Fill = imgBrush;
        });
BitmapImage.SetSource only accepts streams that contain an encoded image buffer (PNG or JPEG).
In order to set pixel data directly, you would have to use WriteableBitmap instead of BitmapImage. From the Remarks section in the class documentation:
• Construct an initially empty but dimensioned WriteableBitmap using WriteableBitmap(Int32, Int32).
• Get the pixel array from Pixels.
• Loop through the array, setting the individual pixel values as integer values that are evaluated as premultiplied ARGB32.
• Call Invalidate.
• To display the image in UI, use the WriteableBitmap as the source for an imaging control such as Image, or as the source image for an ImageBrush.
The following code fills a WriteableBitmap with a blue color and an opacity gradient from 0 to 100% from left to right by using only integer arithmetics:
var bitmap = new WriteableBitmap(500, 500);
int red = 0;
int green = 0;
int blue = 255;

for (int i = 0; i < bitmap.PixelWidth * bitmap.PixelHeight; i++)
{
    int x = i % bitmap.PixelWidth;
    int alpha = x * 256 / bitmap.PixelWidth;
    bitmap.Pixels[i] =
        (alpha << 24) |
        ((red * alpha / 256) << 16) |
        ((green * alpha / 256) << 8) |
        (blue * alpha / 256);
}

bitmap.Invalidate();
image.Source = bitmap;
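Applied to the question's data, the same loop can expand the scanner's 8-bit grayscale bytes into opaque ARGB32 values; because the pixels are fully opaque, no premultiplication is needed. A minimal sketch, assuming a 208x208 frame and that image1 and LiveImgArr are the control and buffer from the question:

byte[] frame = LiveImgArr;                      // one byte per pixel, 8bpp grayscale
WriteableBitmap wbmp = new WriteableBitmap(208, 208);

for (int i = 0; i < frame.Length && i < wbmp.Pixels.Length; i++)
{
    int gray = frame[i];
    // Opaque pixel: A = 255, R = G = B = gray.
    wbmp.Pixels[i] = (255 << 24) | (gray << 16) | (gray << 8) | gray;
}

wbmp.Invalidate();
image1.Source = wbmp;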
