Replacing System.Drawing with ImageSharp for barcode generation in ASP.NET Core 6

As we have upgraded to .NET 6, we are rewriting some of our code base. We have a tag helper in ASP.NET Core which generates a barcode. It currently uses System.Drawing and ZXing.
Old TagHelper version using System.Drawing - working (top barcode):
public override void Process(TagHelperContext context, TagHelperOutput output)
{
    var margin = 0;
    var qrCodeWriter = new ZXing.BarcodeWriterPixelData
    {
        Format = ZXing.BarcodeFormat.PDF_417,
        Options = new ZXing.Common.EncodingOptions
        {
            Height = this.Height > 80 ? this.Height : 80,
            Width = this.Width > 400 ? this.Width : 400,
            Margin = margin
        }
    };
    var pixelData = qrCodeWriter.Write(QRCodeContent);
    // Creating a bitmap from the raw pixel data; if only black and white colors are used it makes no difference
    // that the pixel data is BGRA oriented and the bitmap is initialized with RGB.
    using (var bitmap = new Bitmap(pixelData.Width, pixelData.Height, System.Drawing.Imaging.PixelFormat.Format32bppRgb))
    using (var ms = new MemoryStream())
    {
        var bitmapData = bitmap.LockBits(new Rectangle(0, 0, pixelData.Width, pixelData.Height),
            System.Drawing.Imaging.ImageLockMode.WriteOnly, System.Drawing.Imaging.PixelFormat.Format32bppRgb);
        try
        {
            // We assume that the row stride of the bitmap is aligned to 4 bytes multiplied by the width of the image.
            System.Runtime.InteropServices.Marshal.Copy(pixelData.Pixels, 0, bitmapData.Scan0,
                pixelData.Pixels.Length);
        }
        finally
        {
            bitmap.UnlockBits(bitmapData);
        }
        // Save to stream as PNG.
        bitmap.Save(ms, System.Drawing.Imaging.ImageFormat.Png);
        output.TagName = "img";
        output.Attributes.Clear();
        output.Attributes.Add("width", Width);
        output.Attributes.Add("height", Height);
        output.Attributes.Add("alt", Alt);
        output.Attributes.Add("src",
            $"data:image/png;base64,{Convert.ToBase64String(ms.ToArray())}");
    }
}
New TagHelper version using ImageSharp - almost working, but not quite (bottom barcode):
public override void Process(TagHelperContext context, TagHelperOutput output)
{
    var margin = 0;
    var barcodeWriter = new ZXing.ImageSharp.BarcodeWriter<SixLabors.ImageSharp.PixelFormats.La32>
    {
        Format = ZXing.BarcodeFormat.PDF_417,
        Options = new ZXing.Common.EncodingOptions
        {
            Height = this.Height > 80 ? this.Height : 80,
            Width = this.Width > 400 ? this.Width : 400,
            Margin = margin
        }
    };
    // The ImageSharp image is IDisposable, so dispose it once the data URL has been built.
    using var image = barcodeWriter.Write(QRCodeContent);
    output.TagName = "img";
    output.Attributes.Clear();
    output.Attributes.Add("width", Width);
    output.Attributes.Add("height", Height);
    output.Attributes.Add("alt", Alt);
    output.Attributes.Add("src", $"{image.ToBase64String(PngFormat.Instance)}");
}
The issue, as mentioned, is that the second barcode is very slightly different: at the end it seems to extend the last bar.
What am I missing?

That is a bug in the renderer implementation of the ZXing.Net binding to ImageSharp:
https://github.com/micjahn/ZXing.Net/issues/422
It is fixed in the newest NuGet package of the bindings:
https://www.nuget.org/packages/ZXing.Net.Bindings.ImageSharp/
https://www.nuget.org/packages/ZXing.Net.Bindings.ImageSharp.V2/
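If upgrading the binding isn't an option right away, a possible stopgap (my sketch, not part of the official fix, and assuming the artifact is confined to trailing pixel columns) is to crop the rendered image back to the requested width before building the data URL:

using SixLabors.ImageSharp;
using SixLabors.ImageSharp.Formats.Png;
using SixLabors.ImageSharp.Processing;

// Hypothetical workaround: drop any excess columns the buggy renderer appended,
// so the output matches the width that was requested in EncodingOptions.
using var image = barcodeWriter.Write(QRCodeContent);
var requestedWidth = barcodeWriter.Options.Width;
if (image.Width > requestedWidth)
{
    image.Mutate(x => x.Crop(new Rectangle(0, 0, requestedWidth, image.Height)));
}
var src = image.ToBase64String(PngFormat.Instance);

Updating to the fixed package is still the proper solution; this only masks the symptom.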

Related

How to convert a Uint8Array to a CanvasImageSource?

In TypeScript (NodeJS), I am struggling to convert a Uint8Array with bitmap image data into a CanvasImageSource type.
More Context
I am working on a typescript library that will be used in the browser as well as NodeJS environments. The library does image operations with WebGL, so in NodeJS environments, I am attempting to leverage headless-gl. My library includes a function (getCanvasImageSource) that returns a CanvasImageSource for clients to use.
A bunch of code was removed for the purposes of asking the question. A WebGL shader will create the desired image on the gl context, which the client can retrieve through the CanvasImageSource. This works as intended in a browser client.
/**
 * Browser version of the library.
 */
export class MyLibrary {
  protected gl!: WebGLRenderingContext;
  protected width: number;
  protected height: number;

  public getCanvasImageSource(): CanvasImageSource {
    return this.gl.canvas;
  }

  /**
   * The GL context can only be created in a browser.
   */
  protected makeGL(): WebGLRenderingContext {
    const canvas = document.createElement('canvas');
    canvas.width = this.width;
    canvas.height = this.height;
    const glContext = canvas.getContext('webgl');
    if (!glContext) {
      throw new Error("Unable to initialize WebGL. This browser or device may not support it.");
    }
    return glContext;
  }
}
import gl from 'gl';

/**
 * A subclass of MyLibrary that overrides the browser-specific functionality.
 */
export class MyHeadlessLibrary extends MyLibrary {
  public getCanvasImageSource(): CanvasImageSource {
    // The canvas is `undefined` from headless-gl.
    // this.gl.canvas === undefined;
    // But, I can read the pixel data as a bitmap.
    const format = this.gl.RGBA;
    const type = this.gl.UNSIGNED_BYTE;
    const bitmapData = new Uint8Array(this.width * this.height * 4);
    this.gl.readPixels(0, 0, this.width, this.height, format, type, bitmapData);
    // This is where I am struggling...
    // Is there a way to convert my `bitmapData` into a `CanvasImageSource`?
  }

  /**
   * Overrides the browser's WebGL context with the headless-gl implementation.
   */
  protected makeGL(): WebGLRenderingContext {
    const glContext = gl(this.width, this.height);
    return glContext;
  }
}
However, I am struggling to find a way to successfully convert the Uint8Array data read from the headless-gl context into a CanvasImageSource object.
Here are some things that I have tried:
1. Return this.gl.canvas
This ends up being undefined in the case of headless-gl.
2. Use JSDOM
JSDOM's canvas does not support the WebGL render context.
3. Use JSDOM to create an HTMLImageElement and set the src to a base64 URL
I haven't quite figured out why, but the promise here does not ever resolve or reject. This results in my library timing out. So perhaps this strategy will work, but there is an issue with my implementation.
This strategy has been used in other areas of the library, but none that involve headless-gl or even WebGL. Just the 2D canvas context.
import gl from 'gl';
import { JSDOM } from 'jsdom';

export class MyHeadlessLibrary extends MyLibrary {
  /**
   * In this attempt, I changed the return type to Promise<CanvasImageSource> here, in MyLibrary, and in the client code.
   */
  public getCanvasImageSource(): Promise<CanvasImageSource> {
    // The canvas is `undefined` from headless-gl.
    // this.gl.canvas === undefined;
    // But, I can read the pixel data as a bitmap.
    const format = this.gl.RGBA;
    const type = this.gl.UNSIGNED_BYTE;
    const bitmapData = new Uint8Array(this.width * this.height * 4);
    this.gl.readPixels(0, 0, this.width, this.height, format, type, bitmapData);

    // Create a DOM and HTMLImageElement.
    const html: string = `<!DOCTYPE html><html><head><meta charset="utf-8" /><title>DOM</title></head><body></body></html>`;
    const dom = new JSDOM(html);
    const img = dom.window.document.createElement('img');

    // Create a base64 data URL.
    const buffer = Buffer.from(bitmapData);
    const dataurl = `data:image/bmp;base64,${buffer.toString('base64')}`;

    // Set the image source and wrap the result in a promise.
    return new Promise((resolve, reject) => {
      img.onerror = reject;
      img.onload = () => resolve(img);
      img.src = dataurl;
    });
  }
}
Please let me know if something in my code jumps out as a problem, or point me to a potential solution to this problem!
According to the spec, a CanvasImageSource is

typedef (HTMLOrSVGImageElement or
         HTMLVideoElement or
         HTMLCanvasElement or
         ImageBitmap or
         OffscreenCanvas) CanvasImageSource;
So it depends on what your needs are. If you don't need any alpha, then one of those options is HTMLCanvasElement, so given the pixels you could do:
function pixelsToCanvas(pixels, width, height) {
  const canvas = document.createElement('canvas');
  canvas.width = width;
  canvas.height = height;
  const ctx = canvas.getContext('2d');
  const imgData = ctx.createImageData(width, height);
  imgData.data.set(pixels);
  ctx.putImageData(imgData, 0, 0);
  // flip the image
  ctx.scale(1, -1);
  ctx.globalCompositeOperation = 'copy';
  ctx.drawImage(canvas, 0, -height, width, height);
  return canvas;
}
This should work as long as either you have no alpha or you don't care about lossy alpha. Note: the code assumes you're providing unpremultiplied alpha.
The issue is that pixels could contain a pixel like 255, 192, 128, 0. Because its alpha is zero, when you pass it through the function above you'll get 0, 0, 0, 0 in the canvas that comes out, because canvases always use premultiplied alpha. That may not be an issue, since for most use cases 255, 192, 128, 0 appears as 0, 0, 0, 0 anyway, but if you have a special use case then this solution won't work.
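To see that lossiness in isolation, here is a small sketch (browser-side, not part of the library) that round-trips a single fully transparent pixel through a 2D canvas:

// Round-trip one pixel through a canvas to show how premultiplied alpha
// discards color data when alpha is 0.
const canvas = document.createElement('canvas');
canvas.width = 1;
canvas.height = 1;
const ctx = canvas.getContext('2d')!;
const imgData = ctx.createImageData(1, 1);
imgData.data.set([255, 192, 128, 0]); // color information, but zero alpha
ctx.putImageData(imgData, 0, 0);
console.log(ctx.getImageData(0, 0, 1, 1).data); // typically [0, 0, 0, 0]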
Note: You'll need the canvas package
As for your Image-from-dataURL example, this code makes no sense:
// Create a base64 data URL
const buffer = Buffer.from(bitmapData);
const dataurl = `data:image/bmp;base64,${buffer.toString('base64')}`;
First off, whether image/bmp is supported is browser dependent, so it's possible JSDOM doesn't support image/bmp. Further, a BMP file has a header, which the code is not supplying. Without that header there is no way for any API to know what's in the data: if you gave it 256 bytes, is that an 8x8 4-byte-per-pixel image? A 16x4 4-byte-per-pixel image? A black-and-white 32x64 1-bit-per-pixel image? You need the header.
Maybe writing the header will make that code work?
function convertPixelsToBMP(pixels, width, height) {
  const BYTES_PER_PIXEL = 4;
  const FILE_HEADER_SIZE = 14;
  const INFO_HEADER_SIZE = 40;
  const dst = new Uint8Array(FILE_HEADER_SIZE + INFO_HEADER_SIZE + width * height * 4);
  {
    const data = new DataView(dst.buffer);
    const fileSize = FILE_HEADER_SIZE + INFO_HEADER_SIZE + (width * height * 4);
    data.setUint8 ( 0, 0x42); // 'B'
    data.setUint8 ( 1, 0x4D); // 'M'
    data.setUint32( 2, fileSize, true);
    data.setUint8 (10, FILE_HEADER_SIZE + INFO_HEADER_SIZE);
    data.setUint32(14, INFO_HEADER_SIZE, true);
    data.setUint32(18, width, true);
    data.setUint32(22, height, true);
    data.setUint16(26, 1, true);
    data.setUint16(28, BYTES_PER_PIXEL * 8, true);
  }
  // bmp expects colors in BGRA format
  const pdst = new Uint8Array(dst.buffer, FILE_HEADER_SIZE + INFO_HEADER_SIZE);
  for (let i = 0; i < pixels.length; i += 4) {
    pdst[i    ] = pixels[i + 2];
    pdst[i + 1] = pixels[i + 1];
    pdst[i + 2] = pixels[i + 0];
    pdst[i + 3] = pixels[i + 3];
  }
  return dst;
}
Note: this code also assumes you're providing unpremultiplied alpha.
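Wiring that into the original data-URL attempt would then look something like this (a sketch; pixels, width and height come from the earlier readPixels code):

// Build the data URL from the headered BMP bytes instead of raw pixel data.
const bmpBytes = convertPixelsToBMP(pixels, width, height);
const dataurl = `data:image/bmp;base64,${Buffer.from(bmpBytes).toString('base64')}`;
img.src = dataurl; // same promise-wrapped onload/onerror flow as before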
@gman, thank you for the help! It makes sense that I would need the header for the base64 URL, but I didn't need that anyway. Returning an HTMLCanvasElement is sufficient for my needs. There is some alpha in the image, but premultiplied alpha is not an issue.
The one other thing I ran into was that the resulting image was flipped vertically. I assume this is because of the difference between the WebGL and 2D canvas coordinate systems. I solved this by looping through the pixels and swapping rows.
The resulting solution looks like this:
export class MyHeadlessLibrary extends MyLibrary {
  public getCanvasImageSource(): CanvasImageSource {
    // Read the pixel data.
    const pixels = new Uint8Array(this.width * this.height * 4);
    this.gl.readPixels(0, 0, this.width, this.height, this.gl.RGBA, this.gl.UNSIGNED_BYTE, pixels);

    // Create a headless canvas & 2D context.
    const html: string = `<!DOCTYPE html><html><head><meta charset="utf-8" /><title>DOM</title></head><body></body></html>`;
    const dom = new JSDOM(html);
    const canvas = dom.window.document.createElement('canvas');
    canvas.width = this.width;
    canvas.height = this.height;
    const ctx = canvas.getContext('2d');
    if (!ctx) {
      throw Error("Unable to create a 2D render context");
    }

    // Flip the image by swapping rows top-to-bottom.
    const bytesPerRow = this.width * 4;
    const temp = new Uint8Array(bytesPerRow);
    for (let y = 0; y < this.height / 2; y += 1) {
      const topOffset = y * bytesPerRow;
      const bottomOffset = (this.height - y - 1) * bytesPerRow;
      temp.set(pixels.subarray(topOffset, topOffset + bytesPerRow));
      pixels.copyWithin(topOffset, bottomOffset, bottomOffset + bytesPerRow);
      pixels.set(temp, bottomOffset);
    }

    // Draw the pixels into the new canvas.
    const imgData = ctx.createImageData(this.width, this.height);
    imgData.data.set(pixels);
    ctx.putImageData(imgData, 0, 0);
    return canvas;
  }
}

How do I change the background color of a tabbed page in Xamarin iOS?

I need to change the background color of the currently selected tab in my UITabBarController. I've searched through every Stack Overflow post I could find, but nothing worked for me. I thought there would be something like UITabBar.Appearance.SelectedImageTintColor, just for the background color, but there doesn't seem to be one.
For example, I want to change the color of that part when I am on the right tab:
Does someone know how to do that?
You could invoke the following code in your UITabBarController:
public xxxTabBarController()
{
    //...set ViewControllers
    this.TabBar.BarTintColor = UIColor.Red;
}
Update
// 3.0 here assumes you have three child pages in the tab bar;
// set it to the actual tab count in your project.
var size = new CGSize(TabBar.Frame.Width / 3.0, IsFullScreen());
this.TabBar.SelectionIndicatorImage = ImageWithColor(size, UIColor.Green);

double IsFullScreen()
{
    double height = 64;
    if (UIDevice.CurrentDevice.CheckSystemVersion(11, 0))
    {
        if (UIApplication.SharedApplication.Delegate.GetWindow().SafeAreaInsets.Bottom > 0.0)
        {
            height = 84;
        }
    }
    return height;
}
UIImage ImageWithColor(CGSize size, UIColor color)
{
    var rect = new CGRect(0, 0, size.Width, size.Height);
    UIGraphics.BeginImageContextWithOptions(size, false, 0);
    CGContext context = UIGraphics.GetCurrentContext();
    context.SetFillColor(color.CGColor);
    context.FillRect(rect);
    UIImage image = UIGraphics.GetImageFromCurrentImageContext();
    UIGraphics.EndImageContext();
    return image;
}
The trick is to use the SelectionIndicatorImage property of the UITabBar and generate a completely filled image with your desired color using the following method:
private UIImage ImageWithColor(CGSize size)
{
    CGRect rect = new CGRect(0, 0, size.Width, size.Height);
    UIGraphics.BeginImageContext(size);
    using (CGContext context = UIGraphics.GetCurrentContext())
    {
        context.SetFillColor(UIColor.Green.CGColor); // change color if necessary
        context.FillRect(rect);
    }
    UIImage image = UIGraphics.GetImageFromCurrentImageContext();
    UIGraphics.EndImageContext();
    return image;
}
To initialize everything we override ViewWillLayoutSubviews() like this:
public override void ViewWillLayoutSubviews()
{
    base.ViewWillLayoutSubviews();
    // The tab bar height will always be 49 unless we force it to reevaluate its size at runtime...
    myTabBar.InvalidateIntrinsicContentSize();
    nfloat height = myTabBar.Frame.Height;
    CGSize size = new CGSize(myTabBar.Frame.Width / myTabBar.Items.Length, height);
    // Now get our all-green image...
    UIImage image = ImageWithColor(size);
    // And set it as the selection indicator.
    myTabBar.SelectionIndicatorImage = image;
}
As mentioned in this article (google-translating it step by step when necessary, lol), calling InvalidateIntrinsicContentSize() will force the UITabBar to reevaluate its size and will get you the actual runtime height of the tab bar (instead of the constant 49 height value from Xcode).

How can I restrict gestures to working only within the drawn line area itself in Xamarin.iOS?

What I have tried is:
CAShapeLayer shapeLayer = new CAShapeLayer();
shapeLayer.Path = new CGPath();
shapeLayer.Path.AddLines(new CGPoint[] { startingPoint, endingPoint });
shapeLayer.StrokeColor = UIColor.Blue.CGColor;
shapeLayer.LineWidth = (System.nfloat)3.0;
shapeLayer.FillColor = UIColor.Clear.CGColor;
var width = Math.Abs(latestPoint.X - initialPoint.X) + initialPoint.X;
var height = Math.Abs(latestPoint.Y - initialPoint.Y) + initialPoint.Y;
var padding = 10;
shapeLayer.Frame = new CGRect(5, 5, width + padding - 2, height + padding - 2);
var view = new UIView();
view.Layer.AddSublayer(shapeLayer);
CanvasView canvasDrawLine = new CanvasView(subView);
canvasDrawLine.Frame = new CGRect(5, 5, width + padding, height + padding);

public class CanvasView : UIView
{
    public CanvasView(UIView drawLine)
    {
        drawLine.ClipsToBounds = true;
        var width = Math.Abs(endingPoint.X - initialPoint.X) + initialPoint.X;
        var height = Math.Abs(endingPoint.Y - initialPoint.Y) + initialPoint.Y;
        var padding = 10;
        drawLine.Frame = new CGRect(5, 5, width + padding, height + padding);
        this.AddSubview(drawLine);
        setUpGestures();
    }
}

scrollView.AddSubview(canvasDrawLine);
When I set the width and height of the frame to the defaults, my gestures do not work properly; for example, if I apply a pan gesture to the drawn line, the pan gesture also fires outside of the line. How can I restrict all gestures to working only within the drawn line area?
You can calculate the bounds of the area covered by your line:
CoreGraphics.CGPoint initialPoint = new CoreGraphics.CGPoint(1, 2);
CoreGraphics.CGPoint latestPoint = new CoreGraphics.CGPoint(1, 2);
var width = Math.Abs(latestPoint.X - initialPoint.X);
var height = Math.Abs(latestPoint.Y - initialPoint.Y);
var padding = 10;
subView.Frame = new CGRect(0, 0, width + padding, height + padding);
I have created sample code with a simple view controller class and assigned a red colour to the view containing the line. It works as expected: the gesture recogniser fires only on the red part, where the line is.
public partial class ViewController : UIViewController
{
    protected ViewController(IntPtr handle) : base(handle)
    {
        // Note: this .ctor should not contain any initialization logic.
    }

    public override void ViewDidLoad()
    {
        base.ViewDidLoad();
        CGPoint initialPoint = new CGPoint(12, 12);
        CGPoint latestPoint = new CGPoint(80, 45);

        var shapeLayer = new CAShapeLayer();
        shapeLayer.Path = new CGPath();
        shapeLayer.Path.AddLines(new CGPoint[] { initialPoint, latestPoint });
        shapeLayer.StrokeColor = UIColor.Blue.CGColor;
        shapeLayer.LineWidth = (System.nfloat)3.0;
        shapeLayer.FillColor = UIColor.Clear.CGColor;

        var subView = new UIView();
        var width = Math.Abs(latestPoint.X - initialPoint.X) + initialPoint.X;
        var height = Math.Abs(latestPoint.Y - initialPoint.Y) + initialPoint.Y;
        var padding = 10;
        subView.Frame = new CGRect(5, 5, width + padding, height + padding);
        subView.Layer.AddSublayer(shapeLayer);
        shapeLayer.Frame = new CGRect(5, 5, width + padding - 2, height + padding - 2);
        subView.BackgroundColor = UIColor.Red;

        var mainView = new UIView();
        mainView.Frame = new CGRect(40, 40, width + padding + 200, height + padding + 200);
        mainView.AddSubview(subView);
        mainView.BackgroundColor = UIColor.Yellow;
        this.View.AddSubview(mainView);

        UITapGestureRecognizer tap = new UITapGestureRecognizer();
        tap.AddTarget((obj) => { new UIAlertView("tapped", "", null, "ok", null).Show(); });
        subView.AddGestureRecognizer(tap);
        //UITapGestureRecognizer tap = new UITapGestureRecognizer(() => Console.WriteLine("tap"));
        // Perform any additional setup after loading the view, typically from a nib.
    }

    public override void DidReceiveMemoryWarning()
    {
        base.DidReceiveMemoryWarning();
        // Release any cached data, images, etc that aren't in use.
    }
}
To check if the touch point is inside a given polygon:
CGPoint pt1 = new CGPoint(initialPoint.X - 50, initialPoint.Y - 50);
CGPoint pt2 = new CGPoint(initialPoint.X + 50, initialPoint.Y - 50);
CGPoint pt3 = new CGPoint(latestPoint.X - 50, latestPoint.Y + 50);
CGPoint pt4 = new CGPoint(latestPoint.X + 50, latestPoint.Y + 50);

UIBezierPath path = new UIBezierPath();
path.MoveTo(pt1);
path.AddLineTo(pt2);
path.AddLineTo(pt3);
path.AddLineTo(pt4);
path.AddLineTo(pt1);
if (!path.ContainsPoint(touchPoint))
    return;
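Another option (a sketch of mine, not from the sample above) is to make hit-testing itself follow the line by overriding PointInside on the custom view; every gesture recognizer attached to the view is then automatically restricted to the path's area:

// Hypothetical variant of CanvasView: only accept touches inside the path
// built around the drawn line (as constructed above).
public class LineHitTestView : UIView
{
    public UIBezierPath LinePath { get; set; }

    public override bool PointInside(CGPoint point, UIEvent uievent)
    {
        // Touches outside the path fall through to the views behind,
        // so gestures only fire on the line area itself.
        return LinePath != null && LinePath.ContainsPoint(point);
    }
}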

How to apply a gradient effect on an image with GDI

How can I apply a gradient effect on an image like this in C#? I have a transparent image with a black drawing and I want to apply a two-color gradient to the image. Is this possible in GDI?
Here is the effect I want to achieve:
http://postimg.org/image/ikz1ie7ip/
You create a gradient brush and then you draw your texts with that brush.
To create a bitmap filled with a gradient brush you could do something like:
public Bitmap GradientImage(int width, int height, Color color1, Color color2, float angle)
{
    var r = new Rectangle(0, 0, width, height);
    var bmp = new Bitmap(width, height);
    using (var brush = new LinearGradientBrush(r, color1, color2, angle, true))
    using (var g = Graphics.FromImage(bmp))
        g.FillRectangle(brush, r);
    return bmp;
}
So now that you have an image with the gradient in it, all you have to do is to bring over the alpha channel from your original image into the newly created image. We can take the transferOneARGBChannelFromOneBitmapToAnother function from a blog post I once wrote:
public enum ChannelARGB
{
    Blue = 0,
    Green = 1,
    Red = 2,
    Alpha = 3
}

public static void transferOneARGBChannelFromOneBitmapToAnother(
    Bitmap source,
    Bitmap dest,
    ChannelARGB sourceChannel,
    ChannelARGB destChannel)
{
    if (source.Size != dest.Size)
        throw new ArgumentException();
    Rectangle r = new Rectangle(Point.Empty, source.Size);
    BitmapData bdSrc = source.LockBits(r, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
    BitmapData bdDst = dest.LockBits(r, ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb);
    unsafe
    {
        byte* bpSrc = (byte*)bdSrc.Scan0.ToPointer();
        byte* bpDst = (byte*)bdDst.Scan0.ToPointer();
        bpSrc += (int)sourceChannel;
        bpDst += (int)destChannel;
        for (int i = r.Height * r.Width; i > 0; i--)
        {
            *bpDst = *bpSrc;
            bpSrc += 4;
            bpDst += 4;
        }
    }
    source.UnlockBits(bdSrc);
    dest.UnlockBits(bdDst);
}
Now you could do something like:
var newImage = GradientImage( original.Width, original.Height, Color.Yellow, Color.Blue, 45 );
transferOneARGBChannelFromOneBitmapToAnother( original, newImage, ChannelARGB.Alpha, ChannelARGB.Alpha );
And there you are. :-)

Thumbnail: cut, resize and upload to DB

I use this code to create thumbnails and then store the original and the thumbnail in a DB. It creates thumbnails that are always of a fixed size; if the original image is wider than it is high, it is cut and then resized to the fixed size.
The code is working; however, I would really appreciate some help modifying it to do the following (I have tried, but didn't succeed):
1. Make high-quality thumbnails
2. Cut the height if the image is way taller than it is wide (if the width is 200px and the height is 1000px, what will happen?)
3. Accept png and tiff
This is the code so far:
public void imgUpload()
{
    if (ImgUpload.PostedFile != null)
    {
        System.Drawing.Image image_file = System.Drawing.Image.FromStream(ImgUpload.PostedFile.InputStream);
        string fileName = Server.HtmlEncode(ImgUpload.FileName);
        string extension = System.IO.Path.GetExtension(fileName);
        bool sizeError = false;
        if (image_file.Width < 200)
            sizeError = true;
        if (image_file.Height < 250)
            sizeError = true;
        if ((extension.ToUpper() == ".JPG") && !sizeError)
        {
            //**** Resize image section ****
            int image_height = image_file.Height;
            int image_width = image_file.Width;
            int original_width = image_width;
            int original_height = image_height;
            int max_height = 250;
            int max_width = 200;
            Rectangle rect;
            if (image_width > image_height)
            {
                image_width = (image_width * max_height) / image_height;
                image_height = max_height;
                rect = new Rectangle(((image_width - max_width) / 2), 0, max_width, max_height);
            }
            else
            {
                image_height = (image_height * max_width) / image_width;
                image_width = max_width;
                rect = new Rectangle(0, ((image_height - max_height) / 2), max_width, max_height);
            }
            Bitmap bitmap_file = new Bitmap(image_file, image_width, image_height);
            Bitmap new_bitmap_file = bitmap_file.Clone(rect, bitmap_file.PixelFormat);
            System.IO.MemoryStream stream = new System.IO.MemoryStream();
            new_bitmap_file.Save(stream, System.Drawing.Imaging.ImageFormat.Jpeg);
            stream.Position = 0;
            byte[] imageThumbnail = new byte[stream.Length + 1];
            stream.Read(imageThumbnail, 0, imageThumbnail.Length);
            Bitmap Original_bitmap_file = new Bitmap(image_file, original_width, original_height);
            System.IO.MemoryStream Original_stream = new System.IO.MemoryStream();
            Original_bitmap_file.Save(Original_stream, System.Drawing.Imaging.ImageFormat.Jpeg);
            Original_stream.Position = 0;
            byte[] imageOriginal = new byte[Original_stream.Length + 1];
            Original_stream.Read(imageOriginal, 0, imageOriginal.Length);
            //**** End resize image section ****
            saveImage(imageThumbnail, imageOriginal, IDTextBox.Text);
            lblOutput.Visible = false;
        }
        else
        {
            lblOutput.Text = "Please only upload .jpg files and make sure the size is minimum 200x250";
            lblOutput.Visible = true;
        }
    }
    else
    {
        lblOutput.Text = "No file selected";
        lblOutput.Visible = true;
    }
}
Here is an example of how to scale and crop an image:
Bitmap b = new Bitmap(200, 1000);
using (var g = Graphics.FromImage(b))
{
    g.DrawLine(Pens.White, 0, 0, b.Width, b.Height);
}
b.Save("b.jpg", System.Drawing.Imaging.ImageFormat.Jpeg);

Bitmap thumb = new Bitmap(100, 100);
using (var g = Graphics.FromImage(thumb))
{
    g.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
    g.DrawImage(b, new Rectangle(0, 0, 100, 100), new Rectangle(0, 400, 200, 200), GraphicsUnit.Pixel);
}
thumb.Save("thumb.jpg", System.Drawing.Imaging.ImageFormat.Jpeg);
I think you can adapt the code to handle all the whats and ifs about when the height/width ratio is too high, etc.
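For example, here is a sketch (my own, assuming the question's fixed 200x250 target) of picking a centered source crop rectangle that works for both the too-wide and the too-tall case, such as a 200x1000 upload:

// Choose a centered source crop matching the target aspect ratio.
static Rectangle CenteredCrop(int srcWidth, int srcHeight, int targetWidth, int targetHeight)
{
    double targetAspect = targetWidth / (double)targetHeight;
    double srcAspect = srcWidth / (double)srcHeight;
    if (srcAspect > targetAspect)
    {
        // Source is too wide: trim the sides.
        int cropWidth = (int)Math.Round(srcHeight * targetAspect);
        return new Rectangle((srcWidth - cropWidth) / 2, 0, cropWidth, srcHeight);
    }
    else
    {
        // Source is too tall: trim the top and bottom.
        int cropHeight = (int)Math.Round(srcWidth / targetAspect);
        return new Rectangle(0, (srcHeight - cropHeight) / 2, srcWidth, cropHeight);
    }
}

The returned rectangle can then be passed as the source rectangle to DrawImage, as in the snippet above.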
I use the following code when creating thumbnails for http://www.inventerat.se and a few other sites and it works like a charm:
public static Bitmap WithinMaxSize(Image imgPhoto, int maxWidth, int maxHeight)
{
    int sourceWidth = imgPhoto.Width;
    int sourceHeight = imgPhoto.Height;
    int destWidth;
    int destHeight;
    float sourceAspect = sourceWidth / (sourceHeight * 1.0F);
    float maxAspect = maxWidth / (maxHeight * 1.0F);
    if (sourceAspect > maxAspect)
    {
        // Width is limiting.
        destWidth = maxWidth;
        destHeight = (int)Math.Round(destWidth / sourceAspect);
    }
    else
    {
        // Height is limiting.
        destHeight = maxHeight;
        destWidth = (int)Math.Round(destHeight * sourceAspect);
    }
    Bitmap bmPhoto = new Bitmap(destWidth, destHeight);
    Graphics grPhoto = Graphics.FromImage(bmPhoto);
    grPhoto.Clear(Color.White);
    grPhoto.CompositingMode = CompositingMode.SourceCopy;
    grPhoto.CompositingQuality = CompositingQuality.HighQuality;
    grPhoto.InterpolationMode = InterpolationMode.HighQualityBicubic;
    grPhoto.DrawImage(imgPhoto, 0, 0, destWidth, destHeight);
    grPhoto.Dispose();
    return bmPhoto;
}
Just don't forget to dispose the Bitmap returned from this method.
It does not work very well if users are uploading very large images (e.g. > 5000 * 5000 pixels) though. If that is a requirement, I would recommend using ImageMagick or a similar imaging package.
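If you do reach for ImageMagick, a minimal sketch using Magick.NET (the .NET binding; the exact API shown here is an assumption to verify against its docs) could be:

using ImageMagick;

// Downscale a large upload with Magick.NET instead of System.Drawing;
// ImageMagick generally copes better with very large inputs.
using (var image = new MagickImage(ImgUpload.PostedFile.InputStream))
{
    image.Resize(new MagickGeometry(200, 250)); // keeps aspect ratio by default
    byte[] thumbnailBytes = image.ToByteArray(MagickFormat.Jpeg);
    // thumbnailBytes can then be stored in the DB as before.
}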
To accept png and tiff, just adjust how you check the file extension so that you accept "png", "tif" and "tiff" as well. Support for tiff is a bit sketchy in .NET though, since tiff itself is only a container for MANY different encodings.
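A sketch of widening the question's ".JPG"-only test (the allowed list is an assumption; adjust it to your needs):

using System.Linq;

// Accept a few common extensions instead of only .jpg.
var allowedExtensions = new[] { ".jpg", ".jpeg", ".png", ".tif", ".tiff" };
bool extensionOk = allowedExtensions.Contains(extension.ToLowerInvariant());
if (extensionOk && !sizeError)
{
    // ... resize and save as before ...
}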
