Trying to create a DVBBufferedImage but its width and height are 0 - java-me

I'm trying to create a fade-in animation using DVBBufferedImage for my BD-J application by changing the alpha value of the images:
doubleBuffer = new DVBBufferedImage(1920, 2180, DVBBufferedImage.TYPE_ADVANCED);
But after the buffer is created, its width and height are 0, and when I try to get its graphics context:
DVBGraphics bufferGraphics = doubleBuffer.createGraphics();
it returns null.
After that, when I try to draw images onto the buffer, I get a NullPointerException.
Do you have any suggestions?
I think it is related to my libraries, because when I replaced DVBBufferedImage with BufferedImage using this code:
protected BufferedImage bufImage = new BufferedImage(1920, 2180, BufferedImage.TYPE_INT_ARGB);
it says:
The constructor BufferedImage(int, int, int) is undefined
I should mention that I'm using a customized Eclipse for developing BD-J applications, and my Java version is jre1.8.0_77.
The libraries used for this application are listed below:
basis.jar
btclasses.zip
j2me_xml_cdc.jar
javatv.jar
jsse-cdc.jar
pbp_1_0.jar
SonicBDJ.jar
Your help with this problem would be appreciated. Thanks in advance!

This could be related to a memory issue.
Blu-ray players are only required to have 4 MB of memory according to the specification, and that includes the space for the currently loaded JAR file. So if you're using a high-quality image of 1920x2180 pixels, your JAR is probably already taking up 1-2 MB. Allocating that image in memory may then cause an OutOfMemoryError, which means the image won't be created, which is why you get the NullPointerException.
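For scale: a 1920x2180 buffer at 4 bytes per pixel (ARGB) needs 1920 × 2180 × 4 ≈ 16.7 MB, several times the guaranteed heap on its own. (The 2180 may also be a typo for 1080, but even 1920 × 1080 × 4 ≈ 8.3 MB would not fit.)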
Blu-ray Disc Java is Java ME. We're dealing with a limited platform. ;-)

Related

Saving the stream using Intel RealSense

I'm new to Intel RealSense. I want to learn how to save the color and depth streams to bitmap. I'm using C++ as my language. I have learned that there is a function ToBitmap(), but it can only be used from C#.
So I wanted to know whether there is any method or function that will help me save the streams.
Thanks in advance.
I'm also working my way through this. It seems that the only option is to do it manually: we need to get ImageData from PXCImage. The actual data is stored in ImageData.planes, but I still don't understand how it's organized.
Here you can find an example of getting depth data: https://software.intel.com/en-us/articles/dipping-into-the-intel-realsense-raw-data-stream?language=en
But I still have no idea what the pitches are or how the data inside planes is organized.
Here a kind of reverse process is described: https://software.intel.com/en-us/forums/intel-perceptual-computing-sdk/topic/332718
I would be glad if you're able to get some insight from this information, and I'd obviously be glad if you've discovered some insight you can share :).
UPD: Here is something that looks like what we need. I haven't worked with it yet, but it sheds some light on the internal organization of planes[0]: https://software.intel.com/en-us/forums/intel-perceptual-computing-sdk/topic/514663
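In the meantime, here is a minimal, untested sketch of the usual layout, assuming (as that link suggests) that planes[0] points at the top-left pixel, pitches[0] is the byte stride of one row (which may include padding beyond width * bytes-per-pixel), and image is a PXCImage* as in the snippet below:

// Sketch only: read the 16-bit depth sample at (x, y) from an acquired frame.
auto data = PXCImage::ImageData();
if (image->AcquireAccess(PXCImage::ACCESS_READ, PXCImage::PIXEL_FORMAT_DEPTH, &data) >= PXC_STATUS_NO_ERROR) {
    int x = 100, y = 100;                                // example coordinates
    pxcBYTE* row = data.planes[0] + y * data.pitches[0]; // start of row y, in bytes
    pxcU16 depth = ((pxcU16*)row)[x];                    // one 16-bit depth sample
    // ... use depth ...
    image->ReleaseAccess(&data);
}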
UPD2: To add some completeness to the answer: you can then create a GDI+ image from the data in ImageData:
auto colorData = PXCImage::ImageData();
if (image->AcquireAccess(PXCImage::ACCESS_READ, PXCImage::PIXEL_FORMAT_RGB24, &colorData) >= PXC_STATUS_NO_ERROR) {
    auto colorInfo = image->QueryInfo();
    auto colorPitch = colorData.pitches[0] / sizeof(pxcBYTE);
    auto baseColorAddress = colorData.planes[0]; // start of the RGB24 pixel data
    Gdiplus::Bitmap tBitMap(colorInfo.width, colorInfo.height, colorPitch, PixelFormat24bppRGB, baseColorAddress);
    // tBitMap references the SDK buffer directly, so use (e.g. save) it
    // before releasing access.
    image->ReleaseAccess(&colorData);
}
And Bitmap is a subclass of Image (https://msdn.microsoft.com/en-us/library/windows/desktop/ms534462(v=vs.85).aspx), so you can save the Image to a file in different formats.
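For example, a sketch of saving the bitmap above as a PNG. These lines belong inside the if block, before ReleaseAccess; they assume GDI+ was started with GdiplusStartup, and the GUID is the stock GDI+ PNG encoder CLSID:

// Save the bitmap to disk as PNG while the SDK buffer is still accessible.
CLSID pngClsid;
CLSIDFromString(L"{557CF406-1A04-11D3-9A73-0000F81EF32E}", &pngClsid); // GDI+ PNG encoder
tBitMap.Save(L"frame.png", &pngClsid, NULL);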

XNA: I only have 1 supported display mode (800x600)

I'm trying to get my game to automatically set the window size as the correct resolution for the monitor.
For example, my desktop PC is at 1920x1080 resolution, so I want my game to run at 1920x1080 on here, however my laptop is at 1366x768 so I want my game to run at 1366x768 on there, etc.
I've tried so many different things, such as GraphicsDevice.Adapter.CurrentDisplayMode.Width/Height, and even printed out the list of GraphicsDevice.Adapter.SupportedDisplayModes, and they all tell me that the only supported display mode is 800x600. This is surely not the case, because I'm running Windows 7 at 1920x1080.
So what on earth am I doing wrong? I tried putting this code in the Game1 constructor and in the initialiser, but I can't figure out why it isn't working properly!
Okay, I fixed it. I just realised I was being a little bit stupid in that I forgot to mention this is a MonoGame application, not a straightforward XNA project... (I didn't think it would make a difference, but oh, I was wrong.)
As it turns out, MonoGame has a massive bug to do with graphics devices, and there is supposedly a way to solve it (build from the latest source or something?), but what I did was install the XNA 4.0 Refresh for Visual Studio 2013 and copy all my source code across to a new XNA project as opposed to a MonoGame project.
And hey presto, GraphicsDevice.DisplayMode.Width and Height are now correctly registering as 1920 and 1080 pixels. So now I can carry on with my game FINALLY.
Thanks to all the people that tried to help me solve this issue!
You can set the resolution of your game in the constructor by adjusting the graphics' PreferredBackBufferWidth and PreferredBackBufferHeight:
For example, this will produce a game window that's 480x320:
public Game1()
{
    graphics = new GraphicsDeviceManager(this);
    Content.RootDirectory = "Content";
    graphics.PreferredBackBufferHeight = 320;
    graphics.PreferredBackBufferWidth = 480;
}
Keep in mind that when in windowed mode your game will (by default) have a title bar which prevents the game window from being as big as your full screen.
Here is my method for getting your maximum supported resolution (and setting it, as an example to clarify):
// in the Initialize method
graphics.PreferredBackBufferWidth = GraphicsDevice.DisplayMode.Width;
graphics.PreferredBackBufferHeight = GraphicsDevice.DisplayMode.Height;
graphics.IsFullScreen = false;
graphics.ApplyChanges(); // <-- not needed in the Game constructor
However, I don't know what you're doing wrong.

"Pixels out of bounds" error in IRAF

I am trying to do PSF fitting on a FITS image using SNOOPY (a point spread function fitter) and IRAF. I can open this image fine using imexam, but when I select a point (a star or whatever) I get an error:
Warning: Pixels out of bounds
It seems that what I am seeing and what IRAF is seeing (behind the scenes) are not the same, as if there is some kind of coordinate offset or something.
How would one go about fixing this?
[Scientific Linux 6, 16-bit, IRAF v2.16]
If you're using DS9 v7.2 then the problem is a bug in the DS9 cursor read causing the offset. The only workaround is to use an earlier version of DS9 or a more recent beta.
I am also getting the "Warning: Pixels out of bounds" error in imexamine, for parts of images in the 8th frame only of an 8-frame FITS image, using DS9 7.3.2 with PyRAF. The 7.3 beta is no longer available.
A workaround is to load the subimage into ds9 using iraf, rather than loading it directly in ds9.

Bad Performance of QPainter::drawText on Linux

I have noticed that QPainter::drawText is horribly slow on Linux when using it with a scaled window mapping. Is there anything I can do about this? I already checked whether disabling anti-aliasing or enabling the raster renderer makes a difference, but it doesn't.
Example: when using a viewport size of (450px, 200px), a window size scaled by a factor of 100 (45000, 20000), and thus font sizes scaled up by a factor of 100 as well (1400pt), rendering the text "hello" 30 times takes about 4(!) seconds on Linux - both on openSUSE and Ubuntu.
The same sample renders in a snap on Windows and Mac.
Just for clarification: although the font size is scaled up, the text appears in "normal" size on screen due to the described window<->viewport mapping.
Here is the simple sample code I am using:
void Widget::paintEvent(QPaintEvent *event)
{
    const int scaleFactor = 100;
    QPainter painter(this);

    // Set up the font
    QFont font;
    font.setPointSize(14 * scaleFactor);
    painter.setFont(font);

    // Set up the mapping
    painter.setWindow(0, 0, width() * scaleFactor, height() * scaleFactor);

    // Render the text
    for (int i = 0; i < 30; i++)
        painter.drawText(qrand() % (width() * scaleFactor), qrand() % (height() * scaleFactor), "Hello");
}
Any help would be awesome.
Note: I am using Qt 4.8.5
This question is quite old, but as the Qt bug still seems to be unresolved, here we go...
Not sure if this is an option for you, but in two projects I worked on we implemented labels which internally rendered into a pixmap/image first, which was then drawn.
So caching your text in an image with a transparent background should solve the problem.
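A minimal sketch of that caching approach, assuming the text and font rarely change (m_cache here is a hypothetical QPixmap member of the widget):

void Widget::paintEvent(QPaintEvent *event)
{
    if (m_cache.isNull()) {
        // Render the text once into a transparent pixmap.
        QFont font;
        font.setPointSize(14);
        QFontMetrics fm(font);
        m_cache = QPixmap(fm.width("Hello"), fm.height());
        m_cache.fill(Qt::transparent);
        QPainter p(&m_cache);
        p.setFont(font);
        p.drawText(m_cache.rect(), Qt::AlignLeft | Qt::AlignTop, "Hello");
    }
    // Blitting the cached pixmap avoids the per-frame glyph rendering.
    QPainter painter(this);
    painter.drawPixmap(10, 10, m_cache);
}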
I do not think it makes a difference here, but you might also check if QStaticText has a beneficial influence on performance in your case.
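For completeness, the QStaticText variant (available since Qt 4.7) is tiny; it lets Qt cache the glyph layout between paint calls:

// Reuse one QStaticText instance so Qt can cache the text layout.
static QStaticText staticHello("Hello");
painter.drawStaticText(10, 10, staticHello);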
Problem found!
The FontConfig developer libraries were not installed on my Linux system. This caused Qt to be built against XLFD, which obviously doesn't work well with scaled mappings (see report above).
After installing the FontConfig dev libs and rebuilding Qt the text now gets rendered nice and fast. I did additionally specify the "-fontconfig" parameter when rebuilding Qt, just to be sure, but according to the Qt guys this shouldn't be necessary.

Transparency issue loading texture from PNG in Monogame

I'm trying to accomplish something that I figure should be quite simple in MonoGame GL on Windows: loading a texture from a PNG file and rendering it to the screen as a sprite. So far I'm having a lot of trouble with this. I'm loading the PNG into a Texture2D with the following code (F#):
use file = System.IO.File.OpenRead("testTexture.png")
this.texture <- Texture2D.FromStream(this.GraphicsDevice, file)
and then rendering with:
this.spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.NonPremultiplied)
this.spriteBatch.Draw(this.texture, Vector2.Zero, Color.White)
this.spriteBatch.End()
The problem is some kind of weird effect with the alpha channels, as if the channels are not being alpha-premultiplied or something, but I can't say for sure exactly what's happening. What's notable, though, is that the exact same code renders perfectly using the official XNA libraries. The issue is only in MonoGame, which I tested with versions 3.0 and 3.2, both of which have the same issue.
Here's a test PNG rendered in MonoGame to illustrate the problem:
The background in each image is cornflower blue, then pure red, green and blue respectively. Notice in the image with the red background, you can see a dark outline around the red lines in the texture. This outline shouldn't be there as the lines and the background are both pure red. Note the same thing occurs around the blue lines in the image with the blue background, but not in the image with the green background. In that image the green lines blend in with the green background as they should.
Below is how the exact same file renders using the official XNA library.
Notice how the blue, green and red lines blend in to the background when the background is the same colour. This is the correct behaviour.
Given that the same code behaves differently in XNA and MonoGame, I believe there must be a bug in the framework. Would anyone have a good guess as to where the bug might be and what its nature is? If it's an easy fix, the best solution might be to just fix the bug myself.
Besides that, though, I really just want to learn a way to simply load a PNG and render it correctly to the screen in MonoGame. I'm sure I can't be the first one who's wanted to do this. I would like to avoid the content pipeline if at all possible, for the sake of simplicity.
Update
I've done some more digging trying to figure out the nature of this problem. I've changed the testTexture.png file to be more revealing about alpha blending problems. After using this new texture file, I took screenshots of the incorrect MonoGame rendering and viewed its separate colour channels. What's happening is pretty perplexing to me. I initially thought it might be a simple case of BlendState.NonPremultiplied being ignored, but what I'm seeing looks more complicated than that. Among other things, the green colour channel appears to blend differently from the blue and red channels. Here's a rather large PNG image that's a compilation of screenshots and explanations of what I'm talking about:
i.imgur.com/CEUQqm0.png
Clearly there's some kind of bug in MonoGame for Windows GL, and possibly in other editions I haven't tried this with (though I'd be interested to see that verified). If anyone thinks they know what might be happening here, please let me know.
Lastly, here are the project files to reproduce the problem I'm having:
mega.co.nz/#!llFxBbrb!OrpPIn4Tu2UaHHuoSPL03nqOtAJFK59cfxI5TDzLyYI
This issue has been resolved. The bug in the framework was found in MonoGame.Framework/Graphics/ImageEx.cs, in the following method:
internal static void RGBToBGR(this Image bmp)
{
    System.Drawing.Imaging.ImageAttributes ia = new System.Drawing.Imaging.ImageAttributes();
    System.Drawing.Imaging.ColorMatrix cm = new System.Drawing.Imaging.ColorMatrix(rgbtobgr);
    ia.SetColorMatrix(cm);

    using (System.Drawing.Graphics g = System.Drawing.Graphics.FromImage(bmp))
    {
        g.DrawImage(bmp, new System.Drawing.Rectangle(0, 0, bmp.Width, bmp.Height), 0, 0, bmp.Width, bmp.Height, System.Drawing.GraphicsUnit.Pixel, ia);
    }
}
Changing it to this fixes the issue:
internal static void RGBToBGR(this Image bmp)
{
    using (Bitmap bmpCopy = (Bitmap)bmp.Clone())
    {
        System.Drawing.Imaging.ImageAttributes ia = new System.Drawing.Imaging.ImageAttributes();
        System.Drawing.Imaging.ColorMatrix cm = new System.Drawing.Imaging.ColorMatrix(rgbtobgr);
        ia.SetColorMatrix(cm);

        using (System.Drawing.Graphics g = System.Drawing.Graphics.FromImage(bmp))
        {
            g.Clear(Color.Transparent);
            g.DrawImage(bmpCopy, new System.Drawing.Rectangle(0, 0, bmp.Width, bmp.Height), 0, 0, bmp.Width, bmp.Height, System.Drawing.GraphicsUnit.Pixel, ia);
        }
    }
}
What's happening is that when the RGBToBGR conversion is performed, it draws the converted version of the image on top of the very image object it's converting from. This is what causes the strange effects. As far as I can tell, the only fix is to clear the image object before drawing over it again, which of course means the bitmap being converted from must first be copied, so you can read from the copy while writing over the original that was just cleared.
Thanks to dellis1972 for pointing me in the right direction on this: https://github.com/mono/MonoGame/issues/1946
