Jpeg Turbo works with Color but not greyscale - libjpeg-turbo

I wrote a function using libjpeg-turbo to perform compression, and it looks something like this:
bool JPEGCompress::compress(const uint8_t* data, size_t width, size_t height, ImageFrame::image_type type, std::string& out)
{
// Prealloc if not
out.reserve(1 << 24); // Preallocate 16 MB output buffer
// Allocate memory for compressed output
int subsamp = TJSAMP_420;
auto outsize_max = tjBufSize(width, height, subsamp);
if (size_t(-1) == outsize_max) {
fmt::print("Size out of bounds: w={} h={} ss={}\n", width, height, subsamp);
return false;
}
out.resize(outsize_max);
// Select Pixel Format
int pix_fmt = -1;
if(type == ImageFrame::image_type::ImageColor)
pix_fmt = TJPF_RGB;
else if(type == ImageFrame::image_type::ImageGray)
pix_fmt = TJPF_GRAY;
else{
fmt::print("Compression doesn't work for this format: {}", ImageFrame::convEnum(type));
return false;
}
// Compress
auto outptr = (uint8_t*) out.data();
auto outsize = outsize_max;
if (-1 == tjCompress2(
compressor_, data, width, 0, height, pix_fmt,
&outptr, &outsize, subsamp, quality_, TJFLAG_NOREALLOC)) {
fmt::print("Error encoding image: {}\n", tjGetErrorStr2(compressor_));
return false;
}
out.resize(outsize);
return true;
}
It works great with RGB images, but fails on greyscale images with
Error encoding image: Unsupported color conversion request
I don't know what I am doing wrong with the library.

I recently had the same issue (with similar code that writes both RGB and grayscale images), and it took me a while to find the culprit:
The sub-sampling that is applied must match the color space: for actual color images, using any of the TJSAMP_4xx constants (e.g. TJSAMP_420) is a valid choice, but for grayscale images the sub-sampling must be set to TJSAMP_GRAY.
With that change, libjpeg-turbo will happily store grayscale JPEG files using the tjCompress2() API.
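For reference, here is a minimal sketch of that fix (compressor_ and quality_ are the members from the question; everything else is illustrative): pick the subsampling that matches the pixel format before sizing the output buffer and compressing.

// Minimal sketch, assuming the same compress() context as in the question.
int pix_fmt = (type == ImageFrame::image_type::ImageGray) ? TJPF_GRAY : TJPF_RGB;
int subsamp = (pix_fmt == TJPF_GRAY) ? TJSAMP_GRAY : TJSAMP_420; // subsampling must match the pixel format

unsigned long outsize_max = tjBufSize(width, height, subsamp);
out.resize(outsize_max);

auto outptr = reinterpret_cast<uint8_t*>(&out[0]);
unsigned long outsize = outsize_max;
if (tjCompress2(compressor_, data, width, 0, height, pix_fmt,
                &outptr, &outsize, subsamp, quality_, TJFLAG_NOREALLOC) == -1) {
    fmt::print("Error encoding image: {}\n", tjGetErrorStr2(compressor_));
    return false;
}
out.resize(outsize);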

Related

Difficulty creating thumbnails of all monitors connected to laptop to populate a list control

I have some code to create thumbnails of the monitors connected to a PC. They are rendered into a list control.
This is the code that iterates the monitors to create the thumbnails:
void CCenterCursorOnScreenDlg::DrawThumbnails()
{
int monitorIndex = 0;
// Stop redrawing the CListCtrl
m_ListThumbnail.SetRedraw(FALSE);
// Loop monitor info
for (auto& strMonitor : m_monitors.strMonitorNames)
{
// Create the thumbnail image
CImage monitorThumbnail;
CreateMonitorThumbnail(monitorIndex, monitorThumbnail, true);
// Convert it to a CBitmap
CBitmap* pMonitorThumbnailBitmap = CBitmap::FromHandle(monitorThumbnail);
// Add the CBitmap to the CImageList
m_ImageListThumb.Add(pMonitorThumbnailBitmap, nullptr);
// Build the caption description
CString strMonitorDesc = m_monitors.strMonitorNames.at(monitorIndex);
strMonitorDesc.AppendFormat(L" (Screen %d)", monitorIndex + 1);
// Add the item to the CListCtrl
m_ListThumbnail.InsertItem(monitorIndex, strMonitorDesc, monitorIndex);
monitorIndex++;
}
// Start redrawing the CListCtrl again
m_ListThumbnail.SetRedraw(TRUE);
}
The m_monitors variable is an instance of:
struct MonitorRects
{
std::vector<RECT> rcMonitors;
std::vector<CString> strMonitorNames;
static BOOL CALLBACK MonitorEnum(HMONITOR hMon, HDC hdc, LPRECT lprcMonitor, LPARAM pData)
{
MonitorRects* pThis = reinterpret_cast<MonitorRects*>(pData);
pThis->rcMonitors.push_back(*lprcMonitor);
MONITORINFOEX sMI{};
sMI.cbSize = sizeof(MONITORINFOEX);
GetMonitorInfo(hMon, &sMI);
pThis->strMonitorNames.emplace_back(sMI.szDevice);
return TRUE;
}
MonitorRects()
{
EnumDisplayMonitors(nullptr, nullptr, MonitorEnum, (LPARAM)this);
}
};
The initial thumbnail size is determined in OnInitDialog:
// Use monitor 1 to work out the thumbnail sizes
CRect rcMonitor = m_monitors.rcMonitors.at(0);
m_iThumbnailWidth = rcList.Width();
double dHt = ((double)rcMonitor.Height() / (double)rcMonitor.Width()) * (double)m_iThumbnailWidth;
m_iThumbnailHeight = (int)dHt;
These values are used when creating the CImageList to show the images.
Finally, I have the function that is supposed to make the thumbnails:
bool CCenterCursorOnScreenDlg::CreateMonitorThumbnail(const int iMonitorIndex, CImage &rImage, bool bResizeAsThumbnail)
{
const CRect rcCapture = m_monitors.rcMonitors.at(iMonitorIndex);
// Destroy the currently contained bitmap to create a new one
rImage.Destroy();
// Massage the dimensions as we want a thumbnail
auto nWidth = rcCapture.Width();
auto nHeight = rcCapture.Height();
if (bResizeAsThumbnail)
{
nWidth = m_iThumbnailWidth;
double dHt = ((double)rcCapture.Height() / (double)rcCapture.Width()) * (double)m_iThumbnailWidth;
nHeight = (int)dHt;
if (nHeight > m_iThumbnailHeight)
{
// Aspect ratio of this monitor is not the same as the primary monitor
}
}
// Create bitmap and attach it to this object
if (!rImage.Create(nWidth, nHeight, 32, 0))
{
AfxMessageBox(L"Cannot create image!", MB_ICONERROR);
return false;
}
// Create virtual screen DC
CDC dcScreen;
dcScreen.CreateDC(L"DISPLAY", nullptr, nullptr, nullptr);
// Copy (or resize) the contents from the virtual screen DC
BOOL bRet = FALSE;
auto dcImage = rImage.GetDC();
if (bResizeAsThumbnail)
{
// Set the mode first!
SetStretchBltMode(dcImage, COLORONCOLOR);
int iTop = (m_iThumbnailHeight - nHeight) / 2;
// Copy (and resize)
bRet = ::StretchBlt(dcImage, 0, iTop,
nWidth,
nHeight,
dcScreen.m_hDC,
rcCapture.left,
rcCapture.top,
rcCapture.Width(),
rcCapture.Height(), SRCCOPY | CAPTUREBLT);
}
else
{
// Copy
bRet = ::BitBlt(dcImage, 0, 0,
rcCapture.Width(),
rcCapture.Height(),
dcScreen.m_hDC,
rcCapture.left,
rcCapture.top, SRCCOPY | CAPTUREBLT);
}
// Do cleanup and return
dcScreen.DeleteDC();
rImage.ReleaseDC();
return bRet;
}
On my PC at home, where I have two monitors of the same dimensions, it works fine. But when I tried it at another site, which has two large TVs and an additional monitor connected to a laptop, the thumbnails render wrong:
I would say that the thumbnail of screen two (the TVs) is about 2/3 of the size.
I was hoping to create a set of thumbnails for my list control of all monitors and did not anticipate this. What am I doing wrong?
I am wondering if the SetStretchBltMode / StretchBlt logic that does the transform is incorrect.
Update
I just realised:
GetMonitorInfo provides the screen data in virtual coordinates.
StretchBlt uses logical screen coordinates.
Is this the reason I ended up with an incorrect thumbnail when trying to capture another monitor's screen and scale it down?
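As an aside on the aspect-ratio issue noted in the code comment above (this is my illustration, not code from the question): one way to make each thumbnail independent of the primary monitor's shape is to compute both scale factors per monitor, use the smaller one, and centre the result in the thumbnail box. A sketch, assuming the same member names as in the question:

// Sketch only: fit a captured monitor rectangle into a fixed thumbnail box
// while preserving that monitor's own aspect ratio.
static CRect FitIntoThumbnail(const CRect& rcCapture, int thumbWidth, int thumbHeight)
{
    const double scaleX = static_cast<double>(thumbWidth) / rcCapture.Width();
    const double scaleY = static_cast<double>(thumbHeight) / rcCapture.Height();
    const double scale = (scaleX < scaleY) ? scaleX : scaleY;   // fit inside the box
    const int nWidth  = static_cast<int>(rcCapture.Width() * scale);
    const int nHeight = static_cast<int>(rcCapture.Height() * scale);
    const int iLeft = (thumbWidth - nWidth) / 2;    // centre horizontally
    const int iTop  = (thumbHeight - nHeight) / 2;  // centre vertically
    return CRect(iLeft, iTop, iLeft + nWidth, iTop + nHeight);
}

// Possible usage inside CreateMonitorThumbnail(), replacing the StretchBlt branch:
//   CRect rcDest = FitIntoThumbnail(rcCapture, m_iThumbnailWidth, m_iThumbnailHeight);
//   SetStretchBltMode(dcImage, COLORONCOLOR);
//   ::StretchBlt(dcImage, rcDest.left, rcDest.top, rcDest.Width(), rcDest.Height(),
//                dcScreen.m_hDC, rcCapture.left, rcCapture.top,
//                rcCapture.Width(), rcCapture.Height(), SRCCOPY | CAPTUREBLT);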

Multiple Render Targets (MRT) and OSG

Folks,
I have studied FBOs, RTT and MRT in order to include this feature in my application; however, I ran into some problems/doubts for which I did not find answers or tips during my search. A description of my scenario follows below. I'll be grateful if anyone can help me.
What I want to do?
Attach two render textures (for color and depth buffers) to the same camera;
Display only the color buffer in the post render camera;
Read images from depth and color buffer in a final draw callback;
Write collected float images in disk.
What have I got so far?
Allow rendering for color or depth buffers separately, but not both on the same camera;
Display the color buffer in the post render camera;
Read color or depth buffer in the final draw callback;
Write the collected image (color or depth) to disk - but only for images allocated as GL_UNSIGNED_BYTE. For GL_FLOAT images the following error is presented:
Error writing file ./Test-depth.png: Warning: Error in writing to "./Test-depth.png".
What are the doubts? (help!)
How can I properly render to both textures (color and depth buffer) with the same camera?
How can I properly read both the depth and color buffers in the final draw callback?
When writing the images to disk, why is the error presented only for GL_FLOAT images and not for GL_UNSIGNED_BYTE?
Is attaching the render texture to an osg::Geode mandatory or optional in this process? Do I need to create two osg::Geode objects (one for each buffer), or only one osg::Geode for both?
Please take a look at my current source code (what am I doing wrong here?):
// OSG includes
#include <osgDB/ReadFile>
#include <osgDB/WriteFile>
#include <osgViewer/Viewer>
#include <osg/Camera>
#include <osg/Geode>
#include <osg/Geometry>
#include <osg/Texture2D>
struct SnapImage : public osg::Camera::DrawCallback {
SnapImage(osg::GraphicsContext* gc) {
_image = new osg::Image;
_depth = new osg::Image;
if (gc->getTraits()) {
int width = gc->getTraits()->width;
int height = gc->getTraits()->height;
_image->allocateImage(width, height, 1, GL_RGBA, GL_FLOAT);
_depth->allocateImage(width, height, 1, GL_DEPTH_COMPONENT, GL_FLOAT);
}
}
virtual void operator () (osg::RenderInfo& renderInfo) const {
osg::Camera* camera = renderInfo.getCurrentCamera();
osg::GraphicsContext* gc = camera->getGraphicsContext();
if (gc->getTraits() && _image.valid()) {
int width = gc->getTraits()->width;
int height = gc->getTraits()->height;
_image->readPixels(0, 0, width, height, GL_RGBA, GL_FLOAT);
_depth->readPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT);
osgDB::writeImageFile(*_image, "./Test-color.png");
osgDB::writeImageFile(*_depth, "./Test-depth.png");
}
}
osg::ref_ptr<osg::Image> _image;
osg::ref_ptr<osg::Image> _depth;
};
osg::Camera* setupMRTCamera( osg::ref_ptr<osg::Camera> camera, std::vector<osg::Texture2D*>& attachedTextures, int w, int h ) {
camera->setClearColor( osg::Vec4() );
camera->setClearMask( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
camera->setRenderTargetImplementation( osg::Camera::FRAME_BUFFER_OBJECT );
camera->setRenderOrder( osg::Camera::PRE_RENDER );
camera->setViewport( 0, 0, w, h );
osg::Texture2D* tex = new osg::Texture2D;
tex->setTextureSize( w, h );
tex->setSourceType( GL_FLOAT );
tex->setSourceFormat( GL_RGBA );
tex->setInternalFormat( GL_RGBA32F_ARB );
tex->setResizeNonPowerOfTwoHint( false );
tex->setFilter( osg::Texture2D::MIN_FILTER, osg::Texture2D::LINEAR );
tex->setFilter( osg::Texture2D::MAG_FILTER, osg::Texture2D::LINEAR );
attachedTextures.push_back( tex );
camera->attach( osg::Camera::COLOR_BUFFER, tex );
tex = new osg::Texture2D;
tex->setTextureSize( w, h );
tex->setSourceType( GL_FLOAT );
tex->setSourceFormat( GL_DEPTH_COMPONENT );
tex->setInternalFormat( GL_DEPTH_COMPONENT32 );
tex->setResizeNonPowerOfTwoHint( false );
attachedTextures.push_back( tex );
camera->attach( osg::Camera::DEPTH_BUFFER, tex );
return camera.release();
}
int main() {
osg::ref_ptr< osg::Group > root( new osg::Group );
root->addChild( osgDB::readNodeFile( "cow.osg" ) );
unsigned int winW = 800;
unsigned int winH = 600;
osgViewer::Viewer viewer;
viewer.setUpViewInWindow( 0, 0, winW, winH );
viewer.setSceneData( root.get() );
viewer.realize();
// setup MRT camera
std::vector<osg::Texture2D*> attachedTextures;
osg::Camera* mrtCamera ( viewer.getCamera() );
setupMRTCamera( mrtCamera, attachedTextures, winW, winH );
// set RTT textures to quad
osg::Geode* geode( new osg::Geode );
geode->addDrawable( osg::createTexturedQuadGeometry(
osg::Vec3(-1,-1,0), osg::Vec3(2.0,0.0,0.0), osg::Vec3(0.0,2.0,0.0)) );
geode->getOrCreateStateSet()->setTextureAttributeAndModes( 0, attachedTextures[0] );
geode->getOrCreateStateSet()->setMode( GL_LIGHTING, osg::StateAttribute::OFF );
geode->getOrCreateStateSet()->setMode( GL_DEPTH_TEST, osg::StateAttribute::OFF );
// configure postRenderCamera to draw fullscreen textured quad
osg::Camera* postRenderCamera( new osg::Camera );
postRenderCamera->setClearMask( 0 );
postRenderCamera->setRenderTargetImplementation( osg::Camera::FRAME_BUFFER, osg::Camera::FRAME_BUFFER );
postRenderCamera->setReferenceFrame( osg::Camera::ABSOLUTE_RF );
postRenderCamera->setRenderOrder( osg::Camera::POST_RENDER );
postRenderCamera->setViewMatrix( osg::Matrixd::identity() );
postRenderCamera->setProjectionMatrix( osg::Matrixd::identity() );
postRenderCamera->addChild( geode );
root->addChild(postRenderCamera);
// setup the callback
SnapImage* finalDrawCallback = new SnapImage(viewer.getCamera()->getGraphicsContext());
mrtCamera->setFinalDrawCallback(finalDrawCallback);
return (viewer.run());
}
Thanks in advance,
Rômulo Cerqueira
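One note on the third doubt listed above (this is an assumption on my part, not part of the original post): PNG can only hold 8- or 16-bit integer samples, so a GL_FLOAT osg::Image has to be converted to an integer format (or written to a float-capable format) before osgDB::writeImageFile() can succeed. A rough, illustrative sketch of such a conversion for the depth image:

// Sketch (illustrative only): map a GL_FLOAT depth image into an 8-bit
// luminance image so it can be written as a PNG.
osg::ref_ptr<osg::Image> depthToLuminance(const osg::Image& depth)
{
    osg::ref_ptr<osg::Image> out = new osg::Image;
    out->allocateImage(depth.s(), depth.t(), 1, GL_LUMINANCE, GL_UNSIGNED_BYTE);
    const float* src = reinterpret_cast<const float*>(depth.data());
    unsigned char* dst = out->data();
    const int numPixels = depth.s() * depth.t();
    for (int i = 0; i < numPixels; ++i)
        dst[i] = static_cast<unsigned char>(src[i] * 255.0f); // depth values are in [0,1]
    return out;
}

// Possible usage in the callback, instead of writing _depth directly:
//   osgDB::writeImageFile(*depthToLuminance(*_depth), "./Test-depth.png");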

How to update Texture2D in pixel shader every frame (in D3D10)?

Using D3D10, I am drawing a 2d rectangle and want to fill it with a texture (bitmap) that should change a few times every second (like displaying video).
I am using a shader effect, with a Texture2D variable, and trying to update a ID3D10EffectShaderResourceVariable and redraw the mesh.
My actual usage will be copying bitmaps from memory and using UpdateSubresource.
But it did not work, so I reduced it to test switching between two DDS images.
The result is that it draws the first image as expected, but keeps drawing it instead of switching between the two images.
I am new to D3D. Can you explain whether this method can work at all, or suggest the right way to do it?
The shader effect:
Texture2D txDiffuse;
SamplerState samLinear
{
Filter = MIN_MAG_MIP_LINEAR;
AddressU = Wrap;
AddressV = Wrap;
};
struct VS_INPUT
{
float4 Pos : POSITION;
float2 Tex : TEXCOORD;
};
struct PS_INPUT
{
float4 Pos : SV_POSITION;
float2 Tex : TEXCOORD0;
};
PS_INPUT VS( VS_INPUT input )
{
PS_INPUT output = (PS_INPUT)0;
output.Pos = input.Pos;
output.Tex = input.Tex;
return output;
}
float4 PS( PS_INPUT input) : SV_Target
{
return txDiffuse.Sample( samLinear, input.Tex );
}
technique10 Render
{
pass P0
{
SetVertexShader( CompileShader( vs_4_0, VS() ) );
SetGeometryShader( NULL );
SetPixelShader( CompileShader( ps_4_0, PS() ) );
}
}
Code (skipped many parts):
ID3D10ShaderResourceView* g_pTextureRV = NULL;
ID3D10EffectShaderResourceVariable* g_pDiffuseVariable = NULL;
D3DX10CreateEffectFromResource(gInstance, MAKEINTRESOURCE(IDR_RCDATA1), NULL, NULL, NULL, "fx_4_0", dwShaderFlags, 0, device, NULL, NULL, &g_pEffect, NULL, NULL);
g_pTechnique = g_pEffect->GetTechniqueByName( "Render" );
g_pDiffuseVariable = g_pEffect->GetVariableByName( "txDiffuse" )->AsShaderResource();
// this part is called on Frame render:
device->CreateRenderTargetView( backBuffer, NULL, &rtView);
device->ClearRenderTargetView( rtView, ClearColor );
if(g_pTextureRV != NULL) {
g_pTextureRV->Release();
g_pTextureRV = NULL;
}
D3DX10CreateShaderResourceViewFromFile(device, pCurrentDDSFilePath, NULL, NULL, &g_pTextureRV, NULL );
g_pDiffuseVariable->SetResource( g_pTextureRV );
D3D10_TECHNIQUE_DESC techDesc;
g_pTechnique->GetDesc( &techDesc );
for( UINT p = 0; p < techDesc.Passes; ++p )
{
g_pTechnique->GetPassByIndex( p )->Apply( 0 );
direct2dDrawingContext->dev->Draw( 6, 0 );
}
// ... present the current back buffer
One solution, not necessarily the best, but one that doesn't use custom shaders, follows (I wrote it in C# / Managed DirectX but it should be easy to transcode.)
Bitmap bmp; //the bitmap that you will use to update the texture
Texture tex; //the texture that DirectX will render
void Render()
{
//render some stuff
bmp = GetNextTextureFrame(); //whatever you do to update your bitmap
Surface s = tex.GetSurfaceLevel(0);
Graphics g = s.GetGraphics();
//IntPtr hdc = g.GetHdc();
//BitBlt(hdc, 0, 0, bmp.Width, bmp.Height, bmpHdc, 0, 0, 0xcc0020);
g.DrawImageUnscaled(bmp, 0, 0);
//g.ReleaseHdc(hdc);
s.ReleaseGraphics();
device.SetTexture(0, tex);
//now render your primitives
//render some more stuff
//present
}
The commented out lines are the way I actually did it, using an hBitmap and DC with BitBlt, because it's faster than GDI+. A lot of people will probably tell you that the above is a bad way to do it, because of all the memory locking that has to occur, and they're probably right. But I was able to achieve 30fps with multiple 1920x1080 textures, so regardless of whether it's proper, it works.
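As a further note for the native D3D10 side (my assumption, not something shown in the question or the answer above): for the "copy bitmaps from memory" case, the texture is usually created once with D3D10_USAGE_DEFAULT and then refreshed each frame with UpdateSubresource, rather than recreating a shader resource view from a file every frame. A rough sketch, with all names illustrative:

// Sketch: update an existing default-usage texture from a system-memory RGBA buffer.
void UpdateTextureFromMemory(ID3D10Device* device,
                             ID3D10Texture2D* texture,   // created once with D3D10_USAGE_DEFAULT
                             const void* pixels,         // tightly packed 32-bit RGBA pixels
                             UINT width)
{
    const UINT rowPitch = width * 4;                     // bytes per row in the source data
    device->UpdateSubresource(texture, 0, nullptr, pixels, rowPitch, 0);
}

// Per frame (g_pTexture / g_pTextureRV / g_pDiffuseVariable assumed, SRV created once):
//   UpdateTextureFromMemory(device, g_pTexture, frameBits, textureWidth);
//   g_pDiffuseVariable->SetResource(g_pTextureRV);
//   ... then Apply() the pass and Draw() as in the question.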

Qimage to cv::Mat convertion strange behaviour

I am trying to create an application that integrates OpenCV and Qt.
I successfully managed to convert a cv::Mat to a QImage using the code below:
void MainWindow::loadFile(const QString &fileName)
{
cv::Mat tmpImage = cv::imread(fileName.toAscii().data());
cv::Mat image;
if(!tmpImage.data || tmpImage.empty())
{
QMessageBox::warning(this, tr("Error Occured"), tr("Problem loading file"), QMessageBox::Ok);
return;
}
/* Mat to Qimage */
cv::cvtColor(tmpImage, image, CV_BGR2RGB);
img = QImage((const unsigned char*)(image.data), image.cols, image.rows, QImage::Format_RGB888);
imgLabel->setPixmap(QPixmap::fromImage(img));
imgLabel->resize(imgLabel->pixmap()->size());
saveAsAct->setEnabled(true);
}
However, when I am trying to convert the QImage to cv::Mat by using the following code:
bool MainWindow::saveAs()
{
if(fileName.isEmpty())
{
QMessageBox::warning(this, tr("Error Occured"), tr("Problem loading file"), QMessageBox::Close);
return EXIT_FAILURE;
}else{
outputFileName = QFileDialog::getSaveFileName(this, tr("Save As"), fileName.toAscii().data(), tr("Image Files (*.png *.jpg *.jpeg *.bmp)\n *.png\n *.jpg\n *.jpeg\n *.bmp"));
/* Qimage to Mat */
cv::Mat mat = cv::Mat(img.height(), img.width(), CV_8UC4, (uchar*)img.bits(), img.bytesPerLine());
cv::Mat mat2 = cv::Mat(mat.rows, mat.cols, CV_8UC3 );
int from_to[] = {0,0, 1,1, 2,2};
cv::mixChannels(&mat, 1, &mat2, 1, from_to, 3);
cv::imwrite(outputFileName.toAscii().data(), mat);
}
saveAct->setEnabled(true);
return EXIT_SUCCESS;
}
I have no success and the result is a totally disordered image. On the net I saw people using this approach without mentioning any specific problems. Does someone have any idea what could be causing the problem? Thanks in advance.
Theoodore
P.S. I am using OpenCV 2.4 and Qt 4.8 under an Arch Linux system with GNOME 3.4.
I just found out the "correct" solution for copying (not referencing) the QImage to a cv::Mat.
Martin Beckett's answer is almost correct:
for (int i = 0; i < image.height(); i++) {
    memcpy(mat.ptr(i), image.scanLine(i), image.bytesPerLine());
}
I don't see the full code, but I guess you may want to use it like this:
cv::Mat mat(image.height(), image.width(), CV_8UC3);
for (int i = 0; i < image.height(); i++) {
    memcpy(mat.ptr(i), image.scanLine(i), image.bytesPerLine());
}
But this code has a problem: the memory allocated by cv::Mat may not have the same bytes-per-line as the QImage.
The solution I found is to take a reference of the QImage first, then clone it:
return cv::Mat(img.height(), img.width(), format, img.bits(), img.bytesPerLine()).clone();
The solution Martin Beckett suggested generates a correct result most of the time; I didn't notice there was a bug until I hit it. Anyway, I hope this is a "correct" solution. If you find any bug(s), please let everyone know so we have a chance to improve the code.
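For completeness, a sketch of how that wrap-then-clone one-liner might sit in a full conversion function (the format mapping is my assumption, mirroring the switch shown further down in this thread):

// Sketch: wrap the QImage buffer without copying, then clone() so the resulting
// cv::Mat owns its own, contiguous data independent of QImage's bytesPerLine().
cv::Mat QImageToMatCopy(const QImage& img)
{
    int format;
    switch (img.format()) {
    case QImage::Format_RGB888:   format = CV_8UC3; break;
    case QImage::Format_Indexed8: format = CV_8U;   break;
    case QImage::Format_ARGB32:   format = CV_8UC4; break;
    default: return cv::Mat();    // unhandled format
    }
    // The data is only read before clone(), so the const_cast is safe here.
    return cv::Mat(img.height(), img.width(), format,
                   const_cast<uchar*>(img.bits()), img.bytesPerLine()).clone();
}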
I think you might find this useful: http://www.jdxyw.com/?p=1480
It uses IplImage to get the data, but you can use cv::Mat and Mat.data to get a pointer to the original matrix.
OpenCV images are stepped so that each row begins on a multiple of 32 bits; this makes memory access faster. If you are using a 3-byte/pixel format then, unless the row width in bytes (width * 3) is a multiple of 4, you will have 'spare' memory at the end of each line.
The safe way is to copy the image data a row at a time. The OpenCV mat.ptr(row) returns a pointer to the start of each row, and the QImage scanLine(row) member does the same.
See How to output this 24 bit image in Qt
edit: Something like
for (int i = 0; i < image.height(); i++) {
    memcpy(mat.ptr(i), image.scanLine(i), image.bytesPerLine());
}
From this source code
QImage MatToQImage(const Mat& mat)
{
// 8-bits unsigned, NO. OF CHANNELS=1
if(mat.type()==CV_8UC1)
{
// Set the color table (used to translate colour indexes to qRgb values)
QVector<QRgb> colorTable;
for (int i=0; i<256; i++)
colorTable.push_back(qRgb(i,i,i));
// Copy input Mat
const uchar *qImageBuffer = (const uchar*)mat.data;
// Create QImage with same dimensions as input Mat
QImage img(qImageBuffer, mat.cols, mat.rows, mat.step, QImage::Format_Indexed8);
img.setColorTable(colorTable);
return img;
}
// 8-bits unsigned, NO. OF CHANNELS=3
if(mat.type()==CV_8UC3)
{
// Copy input Mat
const uchar *qImageBuffer = (const uchar*)mat.data;
// Create QImage with same dimensions as input Mat
QImage img(qImageBuffer, mat.cols, mat.rows, mat.step, QImage::Format_RGB888);
return img.rgbSwapped();
}
else
{
qDebug() << "ERROR: Mat could not be converted to QImage.";
return QImage();
}
} // MatToQImage()
This is what I got from here, thanks to jose. It helped me get past this.
To visualize OpenCV image (cv::Mat) in Qt (QImage), you must follow these steps:
Invert color sequence: cv::cvtColor(imageBGR, imageRGB, CV_BGR2RGB);
Change format OpenCV to Qt: QImage qImage((uchar*) imageRGB.data, imageRGB.cols, imageRGB.rows, imageRGB.step, QImage::Format_RGB888);
Use QPainter to render the image.
Please note the use of QImage::Format. Read the Qt documentation on this issue.
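A minimal sketch of step 3 (rendering with QPainter), with the widget class and member names assumed:

// Sketch: paint the converted QImage in a custom widget's paint event.
void ImageWidget::paintEvent(QPaintEvent* /*event*/)
{
    QPainter painter(this);
    painter.drawImage(rect(), m_qImage);  // m_qImage was built in steps 1 and 2
}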
If you're still looking for the solution, here's one:
cv::Mat to QImage:
QImage Mat2QImage(cv::Mat &image )
{
QImage qtImg;
if( !image.empty() && image.depth() == CV_8U ){
if(image.channels() == 1){
qtImg = QImage( (const unsigned char *)(image.data),
image.cols,
image.rows,
QImage::Format_Indexed8 );
}
else{
cvtColor( image, image, CV_BGR2RGB );
qtImg = QImage( (const unsigned char *)(image.data),
image.cols,
image.rows,
QImage::Format_RGB888 );
}
}
return qtImg;
}
For QImage to cv::Mat:
cv::Mat QImage2Mat(QImage &image) {
cv::Mat cvImage;
switch (image.format()){
case QImage::Format_RGB888:{
cvImage = cv::Mat(image.height(),
image.width(),
CV_8UC3,
image.bits(),
image.bytesPerLine());
cv::cvtColor(cvImage, cvImage, CV_RGB2BGR);
return cvImage;
}
case QImage::Format_Indexed8:{
cvImage = cv::Mat(image.height(),
image.width(),
CV_8U,
image.bits(),
image.bytesPerLine());
return cvImage;
}
default:
break;
}
return cvImage;
}

Blackberry - how to resize image?

I wanted to know if we can resize an image. Suppose we want to draw an image whose actual size is 200x200 at a size of 100x100 on our BlackBerry screen.
Thanks
You can do this pretty simply using the EncodedImage.scaleImage32() method. You'll need to provide it with the factors by which you want to scale the width and height (as a Fixed32).
Here's some sample code which determines the scale factor for the width and height by dividing the original image size by the desired size, using RIM's Fixed32 class.
public static EncodedImage resizeImage(EncodedImage image, int newWidth, int newHeight) {
int scaleFactorX = Fixed32.div(Fixed32.toFP(image.getWidth()), Fixed32.toFP(newWidth));
int scaleFactorY = Fixed32.div(Fixed32.toFP(image.getHeight()), Fixed32.toFP(newHeight));
return image.scaleImage32(scaleFactorX, scaleFactorY);
}
If you're lucky enough to be a developer for OS 5.0, Marc posted a link to the new APIs, which are a lot clearer and more versatile than the one I described above. For example:
public static Bitmap resizeImage(Bitmap originalImage, int newWidth, int newHeight) {
Bitmap newImage = new Bitmap(newWidth, newHeight);
originalImage.scaleInto(newImage, Bitmap.FILTER_BILINEAR, Bitmap.SCALE_TO_FILL);
return newImage;
}
(Naturally you can substitute the filter/scaling options based on your needs.)
Just an alternative:
BlackBerry - draw image on the screen
BlackBerry - image 3D transform
I'm not a Blackberry programmer, but I believe some of these links will help you out:
Image Resizing Article
Resizing a Bitmap on the Blackberry
Blackberry Image Scaling Question
Keep in mind that the default image scaling done by BlackBerry is quite primitive and generally doesn't look very good. If you are building for 5.0 there is a new API to do much better image scaling using filters such as bilinear or Lanczos.
For BlackBerry JDE 5.0 or later, you can use the scaleInto API.
In this there are two bitmaps; temp holds the old bitmap. In this method you just pass a bitmap, a width and a height, and it returns a new bitmap at the size of your choice.
Bitmap ImgResizer(Bitmap bitmap , int width , int height){
Bitmap temp=new Bitmap(width,height);
Bitmap resized_Bitmap = bitmap;
temp.createAlpha(Bitmap.HOURGLASS);
resized_Bitmap.scaleInto(temp , Bitmap.FILTER_LANCZOS);
return temp;
}
Here is the function (or method, if you prefer) to resize an image; use it as you want:
int olddWidth;
int olddHeight;
int dispplayWidth;
int dispplayHeight;
EncodedImage ei2 = EncodedImage.getEncodedImageResource("add2.png");
olddWidth = ei2.getWidth();
olddHeight = ei2.getHeight();
dispplayWidth = 40; // here pass the width you want in pixels
dispplayHeight = 80; // here pass the height you want in pixels
int numeerator = net.rim.device.api.math.Fixed32.toFP(olddWidth);
int denoominator = net.rim.device.api.math.Fixed32.toFP(dispplayWidth);
int widtthScale = net.rim.device.api.math.Fixed32.div(numeerator, denoominator);
numeerator = net.rim.device.api.math.Fixed32.toFP(olddHeight);
denoominator = net.rim.device.api.math.Fixed32.toFP(dispplayHeight);
int heighhtScale = net.rim.device.api.math.Fixed32.div(numeerator, denoominator);
EncodedImage newEi2 = ei2.scaleImage32(widtthScale, heighhtScale);
Bitmap _add =newEi2.getBitmap();
I am posting this answer for newbies in BlackBerry application development. The code below processes Bitmap images from a URL and resizes them without loss of aspect ratio:
public static Bitmap imageFromServer(String url)
{
Bitmap bitmp = null;
try{
HttpConnection fcon = (HttpConnection)Connector.open(url);
int rc = fcon.getResponseCode();
if(rc!=HttpConnection.HTTP_OK)
{
throw new IOException("Http Response Code : " + rc);
}
InputStream httpInput = fcon.openDataInputStream();
InputStream inp = httpInput;
byte[] b = IOUtilities.streamToBytes(inp);
EncodedImage img = EncodedImage.createEncodedImage(b, 0, b.length);
bitmp = resizeImage(img.getBitmap(), 100, 100);
}
catch(Exception e)
{
Dialog.alert("Exception : " + e.getMessage());
}
return bitmp;
}
public static Bitmap resizeImage(Bitmap originalImg, int newWidth, int newHeight)
{
Bitmap scaledImage = new Bitmap(newWidth, newHeight);
originalImg.scaleInto(scaledImage, Bitmap.FILTER_BILINEAR, Bitmap.SCALE_TO_FIT);
return scaledImage;
}
The method resizeImage() is called inside the method imageFromServer(String url):
1) The image from the server is processed into an EncodedImage img.
2) Bitmap bitmp = resizeImage(img.getBitmap(), 100, 100); - the parameters are passed to resizeImage() and its return value is assigned to the Bitmap bitmp.
