I'm trying to accomplish something that I figured would be quite simple in MonoGame (Windows GL): loading a texture from a PNG file and rendering it to the screen as a sprite. So far I'm having a lot of trouble with this. I'm loading the PNG into a Texture2D with the following code (F#):
use file = System.IO.File.OpenRead("testTexture.png")
this.texture <- Texture2D.FromStream(this.GraphicsDevice, file)
and then rendering with:
this.spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.NonPremultiplied)
this.spriteBatch.Draw(this.texture, Vector2.Zero, Color.White)
this.spriteBatch.End()
The problem is some kind of weird effect in the alpha channel, as if the channels aren't being alpha-premultiplied correctly, but I can't pin down exactly what's happening. What's notable is that the exact same code renders perfectly using the official XNA libraries. The issue only occurs in MonoGame, which I tested with versions 3.0 and 3.2; both have the same problem.
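For reference, here's a tiny Python sketch (with hypothetical single-channel values, not taken from the actual texture) of why wrongly premultiplied data produces exactly this kind of dark fringe: a half-transparent pure-red edge pixel over a pure-red background should blend to pure red and be invisible, but if the colour value has already been multiplied by alpha and is then run through the non-premultiplied blend equation, alpha gets applied twice and the edge darkens:

```python
# Straight (non-premultiplied) source-over blend for one colour channel.
def blend_straight(src, dst, alpha):
    return src * alpha + dst * (1 - alpha)

red_line, red_background, alpha = 255, 255, 0.5  # antialiased edge pixel

# Correct: straight-alpha source blended with the straight equation.
correct = blend_straight(red_line, red_background, alpha)  # 255.0: edge invisible

# Buggy: the source colour was already premultiplied (255 * 0.5)
# but is still blended with the straight equation, so alpha is
# effectively applied twice and the edge comes out darker.
premultiplied_src = red_line * alpha
buggy = blend_straight(premultiplied_src, red_background, alpha)  # 191.25: dark outline
```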
Here's how a test PNG renders in MonoGame, to illustrate the problem:
The background in each image is cornflower blue, then pure red, green and blue respectively. Notice in the image with the red background, you can see a dark outline around the red lines in the texture. This outline shouldn't be there as the lines and the background are both pure red. Note the same thing occurs around the blue lines in the image with the blue background, but not in the image with the green background. In that image the green lines blend in with the green background as they should.
Below is how the exact same file renders using the official XNA library.
Notice how the blue, green and red lines blend in to the background when the background is the same colour. This is the correct behaviour.
Given that the same code behaves differently in XNA and MonoGame, I believe there must be a bug in the framework. Would anyone have a good guess as to where the bug might be and what its nature is? If it's an easy fix, the best solution might be to just fix it myself.
Besides that, though, I really just want to learn a way to simply load a PNG and render it correctly to the screen in MonoGame. I'm sure I can't be the first one who's wanted to do this. I would like to avoid the content pipeline if at all possible, simply for the sake of simplicity.
Update
I've done some more digging trying to figure out the nature of this problem. I've changed the testTexture.png file to be more revealing of alpha blending problems. Using this new texture file, I took screenshots of the incorrect MonoGame rendering and viewed them in their separate colour channels. What's happening is pretty perplexing to me. I initially thought it might be a simple case of BlendState.NonPremultiplied being ignored, but what I'm seeing looks more complicated than that. Among other things, the green colour channel appears to blend differently from the blue and red channels. Here's a rather large PNG image that's a compilation of screenshots and explanations of what I'm talking about:
i.imgur.com/CEUQqm0.png
Clearly there's some kind of bug in MonoGame for Windows GL, and possibly other editions I haven't tried this with (though I'd be interested to see that verified). If anyone thinks they know what might be happening here, please let me know.
Lastly, here are the project files to reproduce the problem I'm having:
mega.co.nz/#!llFxBbrb!OrpPIn4Tu2UaHHuoSPL03nqOtAJFK59cfxI5TDzLyYI
This issue has been resolved. The bug in the framework was found in MonoGame.Framework/Graphics/ImageEx.cs, in the following method:
internal static void RGBToBGR(this Image bmp)
{
    System.Drawing.Imaging.ImageAttributes ia = new System.Drawing.Imaging.ImageAttributes();
    System.Drawing.Imaging.ColorMatrix cm = new System.Drawing.Imaging.ColorMatrix(rgbtobgr);
    ia.SetColorMatrix(cm);

    using (System.Drawing.Graphics g = System.Drawing.Graphics.FromImage(bmp))
    {
        g.DrawImage(bmp, new System.Drawing.Rectangle(0, 0, bmp.Width, bmp.Height), 0, 0, bmp.Width, bmp.Height, System.Drawing.GraphicsUnit.Pixel, ia);
    }
}
Changing it to this fixes the issue:
internal static void RGBToBGR(this Image bmp)
{
    using (Bitmap bmpCopy = (Bitmap)bmp.Clone())
    {
        System.Drawing.Imaging.ImageAttributes ia = new System.Drawing.Imaging.ImageAttributes();
        System.Drawing.Imaging.ColorMatrix cm = new System.Drawing.Imaging.ColorMatrix(rgbtobgr);
        ia.SetColorMatrix(cm);

        using (System.Drawing.Graphics g = System.Drawing.Graphics.FromImage(bmp))
        {
            g.Clear(Color.Transparent);
            g.DrawImage(bmpCopy, new System.Drawing.Rectangle(0, 0, bmp.Width, bmp.Height), 0, 0, bmp.Width, bmp.Height, System.Drawing.GraphicsUnit.Pixel, ia);
        }
    }
}
What's happening is that when the RGBToBGR conversion is performed, it draws the converted version of the image on top of the very image object it's converting from. This is what causes the strange effects. As far as I can tell, the only fix is to clear the image object before drawing over it again, which of course means the bitmap being converted from must be copied first, so you can read from the copy while writing over the original that was just cleared.
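A small numeric model of the bug may help (hypothetical pixel values, standard source-over compositing in floats): compositing the channel-swapped pixel over the stale original, instead of over a cleared transparent surface, changes both the resulting colour and the resulting alpha of every partially transparent pixel:

```python
# Source-over compositing of one RGBA pixel (all values in 0..1).
def source_over(src_rgb, src_a, dst_rgb, dst_a):
    out_a = src_a + dst_a * (1 - src_a)
    out_rgb = tuple((s * src_a + d * dst_a * (1 - src_a)) / out_a
                    for s, d in zip(src_rgb, dst_rgb))
    return out_rgb, out_a

original = (0.8, 0.2, 0.4)  # RGB of the pixel being converted
alpha = 0.5
swapped = original[::-1]    # the RGB<->BGR converted colour

# Fixed path: draw onto a cleared (fully transparent) surface,
# so the result is simply the swapped pixel.
fixed_rgb, fixed_a = swapped, alpha

# Buggy path: draw the swapped pixel over the stale original
# still sitting in the bitmap; colour and alpha both shift.
buggy_rgb, buggy_a = source_over(swapped, alpha, original, alpha)
```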
Thanks to dellis1972 for pointing me in the right direction on this: https://github.com/mono/MonoGame/issues/1946
Related
I'm using Skia m62 with the OpenGL backend and getting a glitch while rendering a PNG file.
To create the SkBitmap I'm using the following code:
const auto codec = SkCodec::MakeFromStream(SkStream::MakeFromFile("test.png"));
const SkImageInfo imageInfo = codec->getInfo().makeColorType(kN32_SkColorType);
SkBitmap bm;
bm.allocPixels(imageInfo);
codec->getPixels(imageInfo, bm.getPixels(), bm.rowBytes());
The rest of the code is a slightly modified version (I cannot find the gl/GrGLUtil.h header) of the example found in the Skia sources: https://github.com/google/skia/blob/master/example/SkiaSDLExample.cpp
The library is built with arguments: skia_use_freetype=true skia_use_system_freetype2=false skia_use_libpng=true skia_use_system_libpng=false skia_use_expat=false skia_use_icu=false skia_use_libjpeg_turbo=false skia_use_libwebp=false skia_use_piex=false skia_use_sfntly=false skia_use_zlib=true skia_use_system_zlib=false is_official_build=true target_os="mac" target_cpu="x86_64"
Here is the FULL EXAMPLE on GitHub illustrating the issue. It contains the PNG under observation and the full setup to run on macOS x86_64.
UPD: Filed a bug in Skia tracker: https://bugs.chromium.org/p/skia/issues/detail?id=7361
I'll quote the answer from Skia's bugtracker:
Skia's GPU backend doesn't support drawing unpremultiplied images, but that is the natural state of most encoded images, including all .pngs. What you're seeing is an unpremultiplied bitmap being drawn as if it were a premultiplied bitmap. Pretty isn't it?
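In channel terms, the mismatch the answer describes can be sketched in a few lines of Python (made-up values, single channel): premultiplied data stores colour already multiplied by alpha, and the premultiplied source-over blend adds the source colour to the destination directly, so feeding it straight-alpha data makes every partially transparent pixel too bright:

```python
# Premultiplied source-over blend for one colour channel: src is
# assumed to already be multiplied by alpha.
def blend_premul(src, dst, alpha):
    return src + dst * (1 - alpha)

alpha = 0.5
straight_red = 255                  # straight-alpha red channel, as decoded from a PNG
premul_red = straight_red * alpha   # what the premultiplied blend actually expects

dst = 0  # black background

correct = blend_premul(premul_red, dst, alpha)     # 127.5: half-transparent red
glitched = blend_premul(straight_red, dst, alpha)  # 255.0: far too bright
```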
There are a couple easy ways to fix this. The simplest way to adapt the code you have there is to make sure to ask SkCodec for premultiplied output, e.g.
const SkImageInfo imageInfo = codec->getInfo().makeColorType(kN32_SkColorType)
.makeAlphaType(kPremul_SkAlphaType);
Or, you can use this simpler way to decode and draw:
sk_sp<SkImage> img = SkImage::MakeFromEncoded(SkData::MakeFromFileName("test.png"));
...
canvas->drawImage(img, ...);
That way lets SkImage::MakeFromEncoded() make all the choices about format for you.
This solves the issue.
I'm trying to create a fade-in animation using DVBBufferedImage for my BD-J application by changing the alpha value of the images:
doubleBuffer = new DVBBufferedImage(1920, 2180, DVBBufferedImage.TYPE_ADVANCED);
but after the buffer is created, its width and height are 0, and when I try to get its graphics:
DVBGraphics bufferGraphics = doubleBuffer.createGraphics();
It returns null.
After that, I want to draw images onto the buffer, and I get a NullPointerException.
Do you have any suggestions?
I think it is related to my libraries, because when I replaced DVBBufferedImage with BufferedImage using this code:
protected BufferedImage bufImage = new BufferedImage(1920, 2180, BufferedImage.TYPE_INT_ARGB );
it says:
The constructor BufferedImage(int, int, int) is undefined
I should mention that I'm using a customized Eclipse for developing BD-J applications, and my Java version is jre1.8.0_77.
The libraries used for this application are listed below:
basis.jar
btclasses.zip
j2me_xml_cdc.jar
javatv.jar
jsse-cdc.jar
pbp_1_0.jar
SonicBDJ.jar
Your help on this problem will be appreciated. Thanks in advance!
This could be related to a memory issue.
Blu-ray players are only required to have 4 MB of memory according to the specification, and that includes the space for the currently loaded JAR file itself. So if you're using a 1920x2180 image in high quality, your JAR is probably already taking up 1-2 MB. Loading that image into memory can then cause an OutOfMemoryError, which means the image won't be loaded, which is why you get the NullPointerException.
Blu-ray Disc Java is Java ME. We're dealing with a limited platform. ;-)
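A quick back-of-the-envelope check (assuming 4 bytes per ARGB pixel, which is what TYPE_INT_ARGB implies) shows that a single uncompressed 1920x2180 buffer already needs roughly four times the 4 MB minimum:

```python
# Uncompressed size of one 1920x2180 ARGB buffer (4 bytes per pixel).
width, height, bytes_per_pixel = 1920, 2180, 4
buffer_bytes = width * height * bytes_per_pixel  # 16,742,400 bytes
buffer_mb = buffer_bytes / (1024 * 1024)         # roughly 16 MB

spec_minimum_mb = 4  # minimum memory a player must provide per the BD-J spec
```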
I'm trying to get my game to automatically set the window size as the correct resolution for the monitor.
For example, my desktop PC is at 1920x1080 resolution, so I want my game to run at 1920x1080 on here, however my laptop is at 1366x768 so I want my game to run at 1366x768 on there, etc.
I've tried so many different things, such as GraphicsDevice.Adapter.CurrentDisplayMode.Width/Height, and even printed out the list of GraphicsDevice.Adapter.SupportedDisplayModes; they all tell me that the only supported display mode is 800x600. This is surely not the case, because I'm running Windows 7 at 1920x1080.
So what on earth am I doing wrong? I've tried putting this code in the Game1 constructor and in the initialiser, and I can't figure out why it isn't working properly!
Okay, I fixed it. I just realised I was being a little bit stupid in that I forgot to mention this is a MonoGame application, not a straightforward XNA project... (I didn't think it would make a difference, but oh, I was wrong.)
As it turns out, MonoGame has a serious bug to do with graphics devices. There is supposedly a way to solve it (build from the latest source or something?), but what I did was install the XNA 4.0 Refresh for Visual Studio 2013 and copy all my source code across to a new XNA project instead of a MonoGame project.
And hey presto, GraphicsDevice.DisplayMode.Width and Height are now correctly registering as 1920 and 1080 pixels. So now I can carry on with my game FINALLY.
Thanks to all the people that tried to help me solve this issue!
You can set the resolution of your game in the constructor by adjusting the graphics' PreferredBackBufferWidth and PreferredBackBufferHeight:
For example this will produce a game window that's 480x320:
public Game1()
{
    graphics = new GraphicsDeviceManager(this);
    Content.RootDirectory = "Content";

    graphics.PreferredBackBufferHeight = 320;
    graphics.PreferredBackBufferWidth = 480;
}
Keep in mind that when in windowed mode your game will (by default) have a title bar which prevents the game window from being as big as your full screen.
This is my method for getting your maximum supported resolution (and setting it, as an example to clarify):
// in the Initialize method
graphics.PreferredBackBufferWidth = GraphicsDevice.DisplayMode.Width;
graphics.PreferredBackBufferHeight = GraphicsDevice.DisplayMode.Height;
graphics.IsFullScreen = false;
graphics.ApplyChanges(); // <-- not needed in the Game constructor
However, I don't know what you're doing wrong.
I have noticed that QPainter::drawText is horribly slow on Linux when used with a scaled window mapping. Is there anything I can do about this? I already checked whether disabling anti-aliasing or enabling the raster renderer makes a difference, but it doesn't.
Example: with a viewport size of (450px, 200px), a window size scaled by a factor of 100 (45000, 20000), and thus font sizes scaled up by a factor of 100 as well (1400pt), rendering the text "hello" 30 times takes about 4(!) seconds on Linux - both on openSUSE and Ubuntu.
The same sample renders in a snap on Windows and Mac.
Just for clarification: although the font size is scaled up, the text appears in "normal" size on screen due to the described window<->viewport mapping.
Here is the simple sample code I am using:
void Widget::paintEvent(QPaintEvent *event)
{
    const int scaleFactor = 100;
    QPainter painter(this);

    // Set up the font
    QFont font;
    font.setPointSize(14 * scaleFactor);
    painter.setFont(font);

    // Set up the mapping
    painter.setWindow(0, 0, width() * scaleFactor, height() * scaleFactor);

    // Render the text
    for (int i = 0; i < 30; i++)
        painter.drawText(qrand() % (width() * scaleFactor), qrand() % (height() * scaleFactor), "Hello");
}
Any help would be awesome.
Note: I am using Qt 4.8.5
This question is quite old, but as the Qt bug still seems to be unresolved, here we go...
Not sure if this is an option, but in two projects I worked on we implemented labels that internally rendered into a pixmap/image first, which was then drawn.
So caching your text in an image with a transparent background should solve the problem.
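The caching idea is independent of Qt; a minimal Python sketch (with a hypothetical render_text_to_image function standing in for the expensive QPainter::drawText rasterisation) shows the pattern:

```python
# Cache rendered text so the expensive rasterisation runs once per string.
render_calls = 0

def render_text_to_image(text):
    # Hypothetical stand-in for rendering text into a pixmap/image.
    global render_calls
    render_calls += 1
    return f"<image of {text!r}>"

_cache = {}

def cached_text_image(text):
    # Render on first use, then reuse the cached image on every redraw.
    if text not in _cache:
        _cache[text] = render_text_to_image(text)
    return _cache[text]

# Drawing "Hello" 30 times rasterises it only once; the other
# 29 draws just blit the cached image.
images = [cached_text_image("Hello") for _ in range(30)]
```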
I do not think it makes a difference here, but you might also check whether QStaticText has a beneficial influence on performance in your case.
Problem found!
The FontConfig developer libraries were not installed on my Linux system. This caused Qt to be built against XLFD, which obviously doesn't work well with scaled mappings (see the report above).
After installing the FontConfig dev libs and rebuilding Qt, the text now renders nice and fast. I additionally specified the "-fontconfig" parameter when rebuilding Qt, just to be sure, though according to the Qt guys this shouldn't be necessary.
I have downloaded the TeeChart for .Net 2012 Evaluation version.
I have spent two days trying to get transparency to work on the pie chart, circular gauge, and line chart. Nothing I try changes the black background color.
Here is some of the sample code I have used in my attempt to make it transparent.
pieChart.Series.Clear();
pieChart.Header.Visible = true;
pieChart.Header.Text = "Pie Chart"; //At top on the chart
pieChart.Aspect.View3D = false;
pieChart.Walls.Back.Transparent = true;
pieChart.Walls.Back.Gradient.Visible = false;
pieChart.Panel.Transparent = true;
pieChart.Panel.Gradient.Visible = true;
pieChart.Panel.Brush.Transparency = 50;
pie.Add(10,Color.Red);
pie.Add(15, Color.Green);
pie.Add(10, Color.PowderBlue);
pie.Add(15, Color.DarkGoldenrod);
pie.Add(10, Color.Bisque);
pie.Marks.Style = Steema.TeeChart.Styles.MarksStyles.Percent;
pieChart.Series.Add(pie);
The colors in the slices work as expected. I have tried changing the colors of palettes, the background color, the canvas, and the wall, and nothing seems to work. I also cannot find any documentation that explains what these objects are, to know whether I am using each of them correctly. Please note that in each attempt only one setting was used at a time.
I am exporting the image to a PNG, if that makes a difference - both to a stream and to a file. Is transparency a feature of the full version only, or can it be done in the trial version? Please confirm, as transparent charts are a requirement for our decision to buy this version.
Thanks
PS - It would be nice to also have access to the .NET documentation (the Java version, via its documentation, does not seem to be a direct comparison: http://www.steema.com/files/public/teechart/java/v1/docs/JavaDoc/overview-summary.html)
I get a transparent chart using the code you posted in a WinForms application. The key is the Panel.Transparent property. You can check it by painting the form in a very visible color, e.g.:
this.BackColor = Color.Red;
Exporting this chart to PNG also creates a transparent image for me. At run-time you need to set BufferStyle to BufferStyle.None before exporting, for example:
Steema.TeeChart.Drawing.BufferStyle buffer = chart1.Graphics3D.BufferStyle;
chart1.Graphics3D.BufferStyle = Steema.TeeChart.Drawing.BufferStyle.None;
chart1.Export.Image.PNG.Save(@"C:\temp\TransparentWebChart.png");
chart1.Graphics3D.BufferStyle = buffer;
Does this work for you? What kind of application are you developing (WinForms, WebForms, WPF, Silverlight, WP7, etc.)?
Regarding the documentation, both the evaluation and registered installers include help file and tutorial shortcuts in the TeeChart program group and also in the Docs folder created by the installer.