Images fresh out of a digital camera are often 2-3 MB or larger, which makes them difficult to transfer by email and other means.
This calls for resizing the image, in terms of file size rather than height or width, quite similar to the image resizing functionality MS Paint offers.
I am not well educated on image file theory. I would appreciate it if someone could point me towards the following information sources:
Image theory: how the various image formats (JPEG, PNG, TIFF, etc.) work.
How does an image lose its sharpness when resized, and is there some way to minimize the loss?
Are there any free .NET (I am using 4.0) libraries available for doing this? If not, can I use any library via COM interoperability?
Many thanks,
Image resizing functionality is built right into the .NET Framework. There are a couple of different approaches:
GDI+
WIC
WPF
Here's a nice blog post covering the differences between them.
Here's an example with GDI+:
public void Resize(string imageFile, string outputFile, double scaleFactor)
{
    using (var srcImage = Image.FromFile(imageFile))
    {
        var newWidth = (int)(srcImage.Width * scaleFactor);
        var newHeight = (int)(srcImage.Height * scaleFactor);
        using (var newImage = new Bitmap(newWidth, newHeight))
        using (var graphics = Graphics.FromImage(newImage))
        {
            graphics.SmoothingMode = SmoothingMode.AntiAlias;
            graphics.InterpolationMode = InterpolationMode.HighQualityBicubic;
            graphics.PixelOffsetMode = PixelOffsetMode.HighQuality;
            graphics.DrawImage(srcImage, new Rectangle(0, 0, newWidth, newHeight));
            newImage.Save(outputFile);
        }
    }
}
I used the example provided by Darin Dimitrov, but the image was inflated and took up a lot of disk space (from about 1.5 MB to 17 MB).
This is due to a small mistake in the last line of the code.
The call below saves the image as a bitmap (hence the huge file size).
newImage.Save(outputFile)
The correct call should be:
newImage.Save(outputFile, ImageFormat.Jpeg);
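If file size is the main concern, it can also help to set the JPEG quality explicitly when saving. A small sketch building on the example above (the quality value of 75 is illustrative; it needs using directives for System.Linq and System.Drawing.Imaging):
// Pick the JPEG encoder and save with an explicit quality setting.
var jpegCodec = ImageCodecInfo.GetImageEncoders()
    .First(c => c.FormatID == ImageFormat.Jpeg.Guid);
var encoderParams = new EncoderParameters(1);
encoderParams.Param[0] = new EncoderParameter(System.Drawing.Imaging.Encoder.Quality, 75L);
newImage.Save(outputFile, jpegCodec, encoderParams);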
ImageResizer works well.
http://imageresizing.net/
I generate a PDF file from an HTML page via the jsPDF plugin addHTML.
It works, but the rendered text/font is really blurry, while the original HTML page is not. The rendered images are fine; only the text is a problem (see attached images).
original_image: http://111900.test-my-website.de/stackoverflow/orig.jpg
blurry_image: http://111900.test-my-website.de/stackoverflow/blurry.jpg
I have read all the Google results over the last three days - maybe I am the only person in the world with exactly this problem?! :/
I included the following scripts in my code:
jspdf.js
jspdf.plugin.from_html.js
jspdf.plugin.split_text_to_size.js
jspdf.plugin.standard_fonts_metrics.js
pdf generation code:
pdf.addHTML(document.getElementById("container"), 10, 15, function() {
    var string = pdf.save(filename);
});
Is there a quality option in jsPDF that I missed?
How can I get the text to render sharply?
Thanks for reply,
Thomas
I found that when creating a PDF with addHTML, the text was blurred because of the width of the web page. As a test, try it with the browser not maximised.
My solution was to add some styles to constrain the width before calling addHTML with a width parameter that matches the styles I added, and then to remove the additional styles in the callback that runs after addHTML.
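A rough sketch of that approach (the element id, class name, width value, and options object are illustrative, not taken from the answer):
// Constrain the page width before rendering, pass a matching width option
// to addHTML, and undo the style change in the callback.
var container = document.getElementById("container");
container.classList.add("pdf-width");              // e.g. .pdf-width { width: 800px; }
pdf.addHTML(container, 10, 15, { width: 800 }, function () {
    container.classList.remove("pdf-width");
    pdf.save(filename);
});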
I had the same problem and I resolved it.
The main issue here is to specify the DPI to avoid getting a blurred image. In addition, try to avoid any 'smoothing' features, because they may make it worse. I have taken a look around the API and other discussions about it and came back with the following solution:
1- Update your version of html2canvas: many blurring issues have been fixed after the 1.0.0-alpha release.
2- Use the following properties:
// 'canvas' is the canvas produced by html2canvas
const context = canvas.getContext('2d');
context.scale(2, 2);
context['dpi'] = 144;
// disable image smoothing across browsers
context['imageSmoothingEnabled'] = false;
context['mozImageSmoothingEnabled'] = false;
context['oImageSmoothingEnabled'] = false;
context['webkitImageSmoothingEnabled'] = false;
context['msImageSmoothingEnabled'] = false;
When creating an SVG image you have to set the width, height, and position, otherwise it will not be rendered.
How do I read them from the original image?
Using Dart, I first load the HTML image, and once it has loaded I get its size and then define the SVG image using that information. This is a bit cumbersome, and I wondered if there is another way.
The Dart code looks like this:
ImageElement img = new ImageElement(src: '2.jpg'); // 401x600
img.onLoad.listen((e) {
  svg.ImageElement image = new svg.ImageElement();
  image.setAttribute('x', '0');
  image.setAttribute('y', '0');
  image.setAttribute('width', img.width.toString());
  image.setAttribute('height', img.height.toString());
  image.getNamespacedAttributes('http://www.w3.org/1999/xlink')['href'] = '2.jpg';
});
There seems to be no more convenient method (also not in JavaScript, except when you use jQuery or another framework that includes methods for this).
Just create a method yourself and reuse it for each image you load, for example along the lines of the sketch below.
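A rough sketch of such a helper, based on the code above (the function name is illustrative):
import 'dart:async';
import 'dart:html';
import 'dart:svg' as svg;

// Illustrative helper: load an HTML image, then build an SVG image element
// with matching size once the dimensions are known.
Future<svg.ImageElement> createSvgImage(String url) {
  ImageElement img = new ImageElement(src: url);
  return img.onLoad.first.then((_) {
    svg.ImageElement image = new svg.ImageElement();
    image.setAttribute('x', '0');
    image.setAttribute('y', '0');
    image.setAttribute('width', img.width.toString());
    image.setAttribute('height', img.height.toString());
    image.getNamespacedAttributes('http://www.w3.org/1999/xlink')['href'] = url;
    return image;
  });
}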
I've managed to make a WritableImage using
WritableImage snapshot = obj.getScene().snapshot(null);
Now I would like to output this screenshot to a PDF file. I've already managed to output text to a PDF using the Apache PDFBox library with the following code:
PDDocument doc = null;
PDPage page = null;
try {
    doc = new PDDocument();
    page = new PDPage();
    doc.addPage(page);

    PDFont font = PDType1Font.HELVETICA_BOLD;
    PDPageContentStream content = new PDPageContentStream(doc, page);
    content.beginText();
    content.setFont(font, 12);
    content.moveTextPositionByAmount(100, 700);
    content.drawString("Hello World");
    content.endText();
    content.close();
    doc.save("PDFWithText.pdf");
    doc.close();
} catch (Exception e) {
    System.out.println(e);
}
How can I do this with a WritableImage rather than with basic String text?
Also, how can I take a screenshot of certain nodes within a scene?
Thanks
Taking a screenshot of a scene
You already have working code for this in your question.
WritableImage snapshot = stage.getScene().snapshot(null);
Taking a screenshot of a portion of a scene in JavaFX 2.2
Taking a snapshot of a Node is similar to taking a snapshot of a Scene; you just use the snapshot method on the Node rather than on the Scene. First place your Node in a Scene, then snapshot the Node.
WritableImage snapshot = node.snapshot(null, null);
The first parameter which may be passed to the node.snapshot call is a SnapshotParameters configuration (which you probably don't need, but you can investigate it to see whether it is required or useful for your case).
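For example, a minimal sketch of supplying SnapshotParameters (the transparent fill is just illustrative):
// Illustrative: snapshot a node with a transparent background fill.
SnapshotParameters params = new SnapshotParameters();
params.setFill(Color.TRANSPARENT);
WritableImage snapshot = node.snapshot(params, null);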
Now I would like to output this screenshot on a pdf file. How can I do this when using WritableImage rather that using basic String texts?
I have not used the PDFBox toolkit you reference in your question. Most likely the toolkit works with AWT-based images rather than JavaFX images, so you will need to convert your JavaFX snapshot image to an AWT BufferedImage using SwingFXUtils.fromFXImage.
To actually get the AWT image into a PDF file, consult the documentation for your PDFBox toolkit. Kasas's answer to Add BufferedImage to PDFBox document would seem to provide a code snippet for this operation. The relevant code (which I haven't tried) looks like:
PDPageContentStream content = new PDPageContentStream(doc, page);
PDXObjectImage ximage = new PDJpeg(doc, bufferedImage);
content.drawImage(ximage, x, y);
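Putting the two steps together, a rough, untested sketch that assumes the same PDFBox 1.x classes as the snippet above (exception handling omitted; the file name and coordinates are illustrative):
// Convert the JavaFX snapshot to an AWT BufferedImage, then embed it in a PDF.
WritableImage snapshot = node.snapshot(null, null);
BufferedImage bufferedImage = SwingFXUtils.fromFXImage(snapshot, null);

PDDocument doc = new PDDocument();
PDPage page = new PDPage();
doc.addPage(page);
PDPageContentStream content = new PDPageContentStream(doc, page);
PDXObjectImage ximage = new PDJpeg(doc, bufferedImage);
content.drawImage(ximage, 100, 500); // x, y position on the page
content.close();
doc.save("PDFWithImage.pdf");
doc.close();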
I have a legacy MFC application that displays some images (32-bit BMPs with alpha channel information) by pre-multiplying the images and using the CDC::AlphaBlend method.
I would like to introduce some new graphics using Direct2D, but I don't want to migrate all the images to PNG or other formats.
I have managed to draw a BMP image from a file, but I am facing two problems: getting the image from resources, and the fact that the displayed image does not use the alpha channel information.
So could anybody help me out with this?
This is my code to create the bitmap:
hr = pIWICFactory->CreateDecoderFromFilename(
    L"D:\\image.bmp",
    NULL,
    GENERIC_READ,
    WICDecodeMetadataCacheOnDemand,
    &pDecoder);

if (SUCCEEDED(hr))
{
    // Create the initial frame.
    hr = pDecoder->GetFrame(0, &pSource);
}

if (SUCCEEDED(hr))
{
    // Create a Direct2D bitmap from the WIC bitmap.
    hr = pRenderTarget->CreateBitmapFromWicBitmap(
        pSource,
        NULL,
        ppBitmap);
}
This is the code to draw the bitmap:
m_pRenderTarget->DrawBitmap(
m_pBitmap,
D2D1::RectF(0.0f, 0.0f, size.width, size.height)
);
You'll need to make an IStream from the resource to pass to IWICImagingFactory::CreateDecoderFromStream.
Since resources are available in memory (assuming the module that contains them is loaded), the easiest way to do that is to create an IWICStream object using IWICImagingFactory::CreateStream and initialize it using IWICStream::InitializeFromMemory.
To get the size of the resource and a pointer to the first byte, use the FindResource, LoadResource, LockResource, and SizeofResource functions.
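A rough sketch of those steps (the resource type "IMAGE", the IDR_MY_IMAGE identifier, and the hModule handle are illustrative; error handling is minimal):
// Locate the resource in the module and wrap its bytes in a WIC stream.
HRSRC hRes = FindResource(hModule, MAKEINTRESOURCE(IDR_MY_IMAGE), L"IMAGE");
HGLOBAL hData = hRes ? LoadResource(hModule, hRes) : NULL;
void* pData = hData ? LockResource(hData) : NULL;
DWORD cbData = hRes ? SizeofResource(hModule, hRes) : 0;

IWICStream* pStream = NULL;
IWICBitmapDecoder* pDecoder = NULL;
HRESULT hr = (pData != NULL) ? S_OK : E_FAIL;

if (SUCCEEDED(hr))
    hr = pIWICFactory->CreateStream(&pStream);
if (SUCCEEDED(hr))
    hr = pStream->InitializeFromMemory(static_cast<BYTE*>(pData), cbData);
if (SUCCEEDED(hr))
    hr = pIWICFactory->CreateDecoderFromStream(pStream, NULL, WICDecodeMetadataCacheOnDemand, &pDecoder);
// ...then GetFrame(0, &pSource) and CreateBitmapFromWicBitmap as in the question.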
If your bitmap's header uses BI_BITFIELDS to specify a format with alpha data, I believe WIC will respect that. I don't have any experience with Direct2D, so I can't say if you need to do anything further to make it use alpha data.
If you can't use BI_BITFIELDS (or if that doesn't work), you can write your own IWICBitmapSource implementation that wraps the frame's IWICBitmapSource. You should be able to pass most calls directly to the frame source, and supply your own GetPixelFormat method that returns the real format of your image data. Alternatively, you can create an IWICBitmap with the format you want, lock the bitmap, and copy in the pixel data from the frame source.
In an iOS app, I need to provide image filters based on their size (width/height), think of something similar to "Large, Medium, Small" in Google images search. Opening each image and reading its dimensions when creating the list would be very performance intensive. Is there a way to get this info without opening the image itself?
Damien DeVille answered the question below; based on his suggestion, I am now using the following code:
NSURL *imageURL = [NSURL fileURLWithPath:imagePath];
if (imageURL == nil)
return;
CGImageSourceRef imageSourceRef = CGImageSourceCreateWithURL((CFURLRef)imageURL, NULL);
if(imageSourceRef == NULL)
return;
CFDictionaryRef props = CGImageSourceCopyPropertiesAtIndex(imageSourceRef, 0, NULL);
CFRelease(imageSourceRef);
NSLog(#"%#", (NSDictionary *)props);
CFRelease(props);
You can use ImageIO to achieve that.
If you have your image URL (or from a file path create a URL with +fileURLWithPath on NSURL) you can then create an image source with CGImageSourceCreateWithURL (you will have to bridge cast the URL to CFURLRef).
Once you have the image source, you can get a CFDictionaryRef of properties of the image (that you can again bridge cast to NSDictionary) by calling CGImageSourceCopyPropertiesAtIndex. What you get is a dictionary with plenty of properties about the image including the pixelHeight and pixelWidth.
You can pass 0 as the index. The index exists because some images may have multiple embedded images (such as a thumbnail, or multiple frames, as in a GIF).
Note that by using an image source, the full image won't have to be loaded into memory but you will still be able to access its properties.
Make sure you import ImageIO and add the framework to your project.
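For example, to read just the dimensions out of the properties dictionary obtained in the code above (a small sketch using the standard kCGImagePropertyPixelWidth/Height keys):
NSDictionary *properties = (NSDictionary *)props;
NSNumber *width = [properties objectForKey:(NSString *)kCGImagePropertyPixelWidth];
NSNumber *height = [properties objectForKey:(NSString *)kCGImagePropertyPixelHeight];
NSLog(@"%@ x %@ pixels", width, height);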
Just one addition:
if you want to use a web URL like http://www.example.com/image.jpg, you should then use
[NSURL URLWithString:imagePath] instead of [NSURL fileURLWithPath:imagePath].