Webplayer GUI rendering issue: pixelated graphics

I made a game in Unity3D. Its graphics look perfect in the Unity editor, but when I build it and play it in the web player the graphics become pixelated and blurry.
How can I make the game pixel-perfect in the web player?

This also happened to me once, but I found the answer after some searching on Unity.
This is what you need to do:
Select the texture that becomes pixelated.
From the import settings:
Texture Type = Texture
Filter Mode = Trilinear
Select Web as the platform and tick Override for Web
Max Size = the maximum available
Format = Truecolor
Click Apply; this should definitely help.

Try changing your image size in Photoshop; I think you saved it at a small size.
Also, always create your graphics as vectors so they stay resolution-independent and you can always export a new image from the vector source.

Related

Scaling a Kha app for retina on iPad

I have a Kha app that runs perfectly on an iPad 2 (1024x768 px).
When I run the same project on a later iPad mini with 2048x1536, my coordinates are all half the size, which kind of makes sense.
So if I double the sizes of all my objects and graphics it will work on the iPad mini, but they will be too big for the iPad 2.
I looked into a backbuffer and a renderTarget as explained here:
https://www.youtube.com/watch?v=OV1PTo5XSCA
There is also the windowSize option in khafile, which does not seem to have any effect.
Surface x and y coordinates always seem to arrive in the real screen coordinates of the device.
What is the best way to write a resolution-independent app?
Ideally there would be a way that works on both retina and non-retina devices, where the code stays the same.
According to https://github.com/Kode/Kha/wiki/Screen-Size-and-Scaling there's automated scaling for some targets. If you need other targets you have to manually scale everything to fit the screen.
The page mentions using this class for the task: https://github.com/Kode/Kha/blob/master/Sources/kha/Scaler.hx
Also you could take a look at how Wyngine does it:
https://github.com/laxa88/wyngine/search?utf8=%E2%9C%93&q=scale & https://github.com/laxa88/wyngine/blob/master/Wyngine.hx
You replied (to my comment) that scaling wasn't enough. So far it has been enough for all of my games with the right display settings, but if you really need retina-sized graphics you always have the option of using multiple graphics sets, e.g.:
a set for retina resolution (e.g. iPad 3)
a default-resolution set (e.g. iPad 2) at half the retina size
a low-res set for cheap Android devices
At startup of your app you check the screen size. You use that to choose the internal game size and the graphics set that best fits the actual screen resolution. The internal game size, as well as all X/Y positions for the selected graphics set, can be calculated by applying the graphics set's scale factor to the raw base values.
Finally you use Scaler.scale() to scale your game from the internal game size to fit devices like the 12" iPad Pro and the wide variety of Android devices.
That approach is common across game engines; Google should find you links like https://v-play.net/doc/vplay-different-screen-sizes/ that also explain screen ratios and how they can be handled.
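To make the selection step concrete, here is a minimal sketch of the asset-set logic, written in C++ purely for illustration (in a Kha project you would express the same idea in Haxe and let kha.Scaler handle the final fit). The folder names, base widths and the 1024 reference size are made-up examples, not anything Kha prescribes.

    #include <cstdlib>
    #include <string>

    // A graphics set: a folder of assets authored for one internal game width.
    struct GraphicsSet {
        std::string folder;     // hypothetical asset folder name
        int         baseWidth;  // internal game width the assets were drawn for
    };

    // Pick the set whose base width is closest to the real screen width.
    GraphicsSet chooseSet(int screenWidth) {
        static const GraphicsSet sets[] = {
            {"gfx_low",     512},   // cheap, low-res devices
            {"gfx_default", 1024},  // e.g. iPad 2
            {"gfx_retina",  2048},  // e.g. iPad 3 / retina iPad mini
        };
        const GraphicsSet* best = &sets[0];
        for (const GraphicsSet& s : sets)
            if (std::abs(s.baseWidth - screenWidth) < std::abs(best->baseWidth - screenWidth))
                best = &s;
        return *best;
    }

    // All raw X/Y positions are authored for the default (1024-wide) set and
    // get multiplied by this factor for whichever set was chosen at startup.
    float positionScale(const GraphicsSet& set) {
        return set.baseWidth / 1024.0f;
    }

At startup you run the selection once with the reported screen size, load assets from the chosen set, multiply every authored coordinate by the scale factor, and then let the scaler stretch that internal size to the actual window.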

Graphics get aliased when going fullscreen in HaxeFlixel?

The graphics in Flash and in non-fullscreen mode are anti-aliased and really smooth, but when I go fullscreen or run on mobile, the graphics become aliased, even when I use an SVG image.
Yes, Flash has some great anti-aliasing with its software, CPU-based rendering. In the mobile targets of HaxeFlixel the drawing method is very different, mostly for performance reasons.
In HaxeFlixel the mobile and cpp targets use the GPU, which is more like WebGL or Flash's Stage3D. This means there will be differences in the way things like the edges of images and text look.
Flixel and OpenFL do a very good job of making these two methods as similar as possible, and some recent work on text for cpp in OpenFL has been very impressive. I am not aware of any solution that makes the two pixel-perfect in a complex game engine for every use case. You will find similar aliasing differences with Flash game engines like Starling, which also use the GPU.
Some things you can try:
For OpenFL/HaxeFlixel I have set GPU antialiasing before; this should be the default:
<window antialiasing="4" />
If you want to test, you will lose performance, but I believe you can still set software rendering in cpp with:
FlxG.camera.antialiasing = true;
You mention SVG; I think you are assuming that, since it is a vector format, it should render perfectly. The GPU rendering first rasterises the image to a bitmap, so if you are expecting it to scale like it does in the browser, it won't. In this case you could use a higher-resolution image and scale it down first.

Difference between offline rendering and real time rendering

I have a model that I am trying to use in a web game using three.js.
When I render an image of the scene in blender, the quality of the image is very good. Specifically, the quality of the textures is very high and they are very crisp and matte.
When I set up the scene in my game, they look very dull and almost plain.
I've looked up ray tracing, ambient occlusion, and lightmaps, but all of these affect the lighting; they should not affect the quality of the textures. What am I missing here?
What all does blender's offline renderer do that real time renderers (like threejs's webgl render) usually don't do?
Thanks a lot in advance.
Merry Christmas!
I guess the best way is to use baking... That means you bake the high-quality lighting information into an image texture. (This should solve your problem with plain-looking textures.)
I recommend checking out this tutorial by Andrew Price (blenderguru.com):
https://www.youtube.com/watch?v=sB09T--_ZvU
Also make sure your realtime client uses proper texture filtering, supports normal maps, etc., and that the web client does not downscale your images for some reason.

Advanced Text Rendering with Direct3D

Let me describe the "battlefield" of my task:
Multi-room audio/video chat with more than 1M users;
Custom Direct3D renderer;
What I need to implement is a TextOverVideo feature. The text itself goes over the network and is to be rendered on the recipient side with the Direct3D renderer. AFAIK, it is common in game development to create your own texture with letters/numbers and draw those items. Because our application must support many languages, we ought to use a standard approach. That's why I've been working with the ID3DXFont interface, but I've run into some unsatisfying limitations.
What I've faced is a lack of scalability. E.g., if the user resizes the video window, I have to re-create the D3DXFont with a new D3DXFONT_DESC while they're doing it. I think that is unacceptable.
That is why the ONLY solution I see (given my skills) is to somehow render the text to a texture and then draw a sprite with scaling, translation, etc.
So I'm not sure whether I'm going in the right direction. Please help with advice, experience, literature, sources...
Your question is a bit unclear. As I understand it, you want an easily scalable font.
I think it is unacceptable
As far as I know, this is standard behavior for fonts - even for system fonts. They aren't supposed to be easily scalable.
Possible solutions:
Use ID3DXRenderTarget for rendering text onto a texture. The font will be filtered when you scale it up too much; some people will think that it looks ugly.
Write a custom library that supports vector fonts, i.e. one that can extract the outline from a font and build text geometry from it. It will be MUCH slower than ID3DXFont (which is already slower than traditional "texture" fonts). The text will be easily scalable, but you are very likely to get visible artifacts ("noise") for small text, so I wouldn't use this approach unless you want huge letters (40+ pixels). The FreeType library may have functions for processing font outlines.
Or you could try using D3DXCreateText. This will create a 3D text mesh for ONE string. It won't be fast at all.
I'd forget about it. As long as the user is happy with overall performance, improving the font rendering routines (so their behavior looks nicer to you) is not worth the effort.
--EDIT--
About ID3DXRenderTarget.
Even if you use ID3DXRenderTarget, you'll still need ID3DXFont. I.e., you use ID3DXFont to render the text onto a texture, and then use that texture to blit the text onto the screen.
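A rough sketch of that pipeline (Direct3D 9 / D3DX; error handling omitted, the texture size and scale factor are placeholders, and this is just one way to wire it up, not necessarily how your renderer is structured):

    #include <d3d9.h>
    #include <d3dx9.h>

    // Render a string once into an offscreen texture with ID3DXFont; the texture
    // can then be reused (and freely scaled) every frame via ID3DXSprite.
    // Call this outside your main BeginScene/EndScene pass.
    IDirect3DTexture9* RenderTextToTexture(IDirect3DDevice9* device,
                                           ID3DXFont* font,
                                           const wchar_t* text,
                                           int width, int height)
    {
        IDirect3DTexture9* texture = nullptr;
        device->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                              D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &texture, nullptr);

        IDirect3DSurface9* target = nullptr;
        IDirect3DSurface9* previous = nullptr;
        texture->GetSurfaceLevel(0, &target);
        device->GetRenderTarget(0, &previous);

        device->SetRenderTarget(0, target);
        device->Clear(0, nullptr, D3DCLEAR_TARGET, D3DCOLOR_ARGB(0, 0, 0, 0), 1.0f, 0);
        device->BeginScene();
        RECT rc = { 0, 0, width, height };
        font->DrawTextW(nullptr, text, -1, &rc, DT_LEFT | DT_TOP,
                        D3DCOLOR_XRGB(255, 255, 255));
        device->EndScene();

        device->SetRenderTarget(0, previous);   // restore the backbuffer
        previous->Release();
        target->Release();
        return texture;
    }

    // Per frame (inside your main BeginScene/EndScene): draw the cached texture
    // with an arbitrary scale instead of re-creating the font on every resize.
    void DrawScaledText(ID3DXSprite* sprite, IDirect3DTexture9* texture,
                        float x, float y, float scale)
    {
        D3DXVECTOR2 scaling(scale, scale);
        D3DXVECTOR2 position(x, y);
        D3DXMATRIX  transform;
        D3DXMatrixTransformation2D(&transform, nullptr, 0.0f, &scaling,
                                   nullptr, 0.0f, &position);

        sprite->Begin(D3DXSPRITE_ALPHABLEND);
        sprite->SetTransform(&transform);
        sprite->Draw(texture, nullptr, nullptr, nullptr,
                     D3DCOLOR_ARGB(255, 255, 255, 255));
        sprite->End();
    }

The sprite scaling is where the filtering mentioned above happens; the texture (and the ID3DXFont behind it) only has to be re-created when the text itself changes or when you decide the user has finished resizing.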
Because you said that performance is critical, you can delay creating the new ID3DXFont until the user stops resizing the video. I.e., while the user is resizing the video you keep using the old font, but upscale it via the texture. There will be filtering, of course. Once the user stops resizing, you create the new font when you have time; you can probably do that in a separate thread, but I'm not sure about that. Or you could simply always render the text at the same resolution as the video. That way you won't have to worry about resizing it (it will still be filtered, along with the video). Some video players work this way.
A few more things about ID3DXFont. There is one problem with ID3DXFont: it is slow in situations where you need a lot of text (but you still need it, because it supports Unicode, and writing a texture font with Unicode support is a pain). The last time I worked with it I optimized things by caching commonly used strings in textures. I.e., any string that was drawn for more than 3 frames in a row was rendered onto a D3DFMT_A8R8G8B8 texture/render target, and I then copied that string from the texture instead of using ID3DXFont. Strings that weren't rendered for a while were removed from the texture. That gave a serious boost. This solution, however, is tricky: monitoring empty space in the texture, removing unused strings, and defragmenting the texture isn't exactly trivial (there is nothing exceptionally complicated, but it is easy to make a mistake). You won't need such a complicated system unless your screen is literally covered with text.
ID3DXFont fonts are flat, always parallel to the screen. D3DXCreateText produces meshes that can be scaled and rotated.
Texture fonts are fuzzy and don't look very clear. Not good for an app that uses lots of small text.
I am writing an app that can create 500 text meshes, each mesh averaging 3,000-5,000 vertices. The text meshes are created once, then are static. I get 700 fps on a GeForce 8800.

Capturing high-quality (300 dpi) screenshots of a Qt-based app in Linux

I need to make a screenshot of my form created in Qt Designer. There are numerous ways to take screenshots (GIMP, import, etc.), but all of them produce the same DPI as my monitor (about 100 dpi). That is quite enough for publishing on a web site, but 300 dpi images are required for paper publications. Is there any way to create 300 dpi screenshots?
I don't think that the 300 dpi requirement for publication applies to things like screenshots, where the data is inherently pixelated. It's meant for things like graphs, which can and should be generated in a vector format.
Just get the best results you can, and only use screenshots for things that absolutely require them, not, for example, for command-line I/O or results graphs.
If the final images are being shown smoothed and blurry, either find settings in your PDF creator to prevent this, or manually blow up the image to a multiple of its original size to preserve the original sharp pixelation.
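For the "blow it up" option, a small sketch assuming a Qt4-era setup (QPixmap::grabWindow and QDesktopWidget; newer Qt versions use QScreen::grabWindow instead). The 3x factor and file name are arbitrary; Qt::FastTransformation keeps nearest-neighbour sampling so the pixel edges stay sharp:

    #include <QApplication>
    #include <QDesktopWidget>
    #include <QPixmap>

    int main(int argc, char* argv[])
    {
        QApplication app(argc, argv);

        // Grab the whole screen, then enlarge it 3x without smoothing.
        QPixmap shot = QPixmap::grabWindow(QApplication::desktop()->winId());
        QPixmap big  = shot.scaled(shot.size() * 3,
                                   Qt::KeepAspectRatio,
                                   Qt::FastTransformation);
        big.save("screenshot_3x.png");
        return 0;
    }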
Painting can be done on any QPaintDevice, which includes QPrinter. If you wanted to, you could set up painting redirection to a given device, then have the widget repaint itself. This might give you the higher precision you desire. For more information, look on Qt's website for the Paint System overview, and also maybe look at the QPixmap::grabWidget functions.
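A minimal sketch of that idea: repaint the widget at a larger scale onto a bigger pixmap via QWidget::render() instead of grabbing screen pixels. The 3x factor and file name are arbitrary; you could equally open the QPainter on a QPrinter set to 300 dpi.

    #include <QPainter>
    #include <QPixmap>
    #include <QWidget>

    // Repaint 'widget' at 'scale' times its on-screen size. Because the widget
    // paints itself again (instead of being grabbed from the screen), text and
    // other vector-drawn content comes out at the higher resolution.
    QPixmap renderWidgetScaled(QWidget* widget, qreal scale)
    {
        QPixmap pixmap(widget->size() * scale);
        pixmap.fill(Qt::transparent);

        QPainter painter(&pixmap);
        painter.scale(scale, scale);   // redirect painting at 3x, 4x, ...
        widget->render(&painter);      // the widget repaints itself
        painter.end();
        return pixmap;
    }

    // Usage, e.g. after the form is shown:
    //   renderWidgetScaled(myForm, 3.0).save("form_3x.png");

Note that any raster images the widget draws will still be scaled up, but fonts and lines are re-rendered sharply at the larger size.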
You cannot grab a screenshot at a better resolution than that of your monitor. DPI has no real meaning for a computer display; some software converts pixels per point (ppp) to dots per inch (dpi) for paper publication.
Once you have made your screenshots, you can convert them to 300 dpi using software like Photoshop or an equivalent.
You can't have more pixels on your screenshot than your widget displays.
For a given widget size (say 900x900 px) you can have your image printed at 300 dpi, but it will only make a 3-inch square on your paper.
You can force your screen to behave as a 4K display with the command:
xrandr --output eDP1 --rate 40.01 --mode 1366x768 --fb 4096x3072 --panning 4096x3072
Remember to adjust the rate and mode fields to match your default xrandr configuration; you can check them by running xrandr.
and then acquire the screenshot with
import -window root imagefile.png
