Note: Yes, I know there are other ways of doing buttons in Android, but this is just an example to demonstrate my issue (the actual buttons are far, far more complex). So please don't reply offering other solutions for buttons in Android; I am looking for a solution with PaintCode...
I have been using PaintCode to draw custom buttons on iOS for years, and it works brilliantly. I want to do the same for Android and have the following issue:
In PaintCode I draw a button which is basically a rounded rectangle with a radius of 20 points.
I draw a frame around it and then set the correct resizing behaviour using the springs (see screenshot).
The result is that whatever size the button ends up being (= the frame), the corners will always be nicely rounded with a 20-point radius. Basically a nicely resizable button.
This works very well on iOS, but on Android the radius is 20 pixels, not points, resulting in a far too small radius (especially on today's high-resolution devices).
More generally, all drawings I make in PaintCode are too small when drawn using the draw method generated by PaintCode.
It seems that the generated drawing code does not take into account the scale of the device (as it does on iOS).
Looking at https://www.paintcodeapp.com/documentation/android, section "Scale", PaintCode suggests playing with the density metric in Android to perform the scaling.
This does work, but it makes the generated drawing fuzzy; I guess this is because we are drawing at a lower resolution due to the scaling. So it's not a viable solution.
import android.content.Context
import android.graphics.Canvas
import android.graphics.RectF
import android.util.AttributeSet
import android.widget.Button

class Button1 @JvmOverloads constructor(
    context: Context, attrs: AttributeSet? = null, defStyleAttr: Int = 0
) : Button(context, attrs, defStyleAttr) {

    private val frame = RectF()

    override fun onDraw(canvas: Canvas?) {
        super.onDraw(canvas)
        if (canvas == null) return
        // Scale the canvas by the display density, then draw in density-independent
        // units so the 20-point corner radius keeps its intended physical size.
        val displayDensity = resources.displayMetrics.density
        canvas.scale(displayDensity, displayDensity)
        frame.set(0f, 0f, width.toFloat() / displayDensity, height.toFloat() / displayDensity)
        StyleKitName.drawButton1(canvas, frame)
    }
}
Any suggestions to solve this? Is this a bug in PaintCode?
I'm the developer. Sorry about the long answer.
TL;DR: handle the scaling yourself, for example the way you do. Switch the layerType of your View to software to avoid the blurry results of scaling.
First, I totally understand the confusion: on iOS it just works, while on Android you have to fiddle around with scales. It would make much more sense if it just worked the same way; I would love that, and so would other PaintCode users. Yet it's not a bug. The problem is the difference between UIKit and android.graphics.
In UIKit, distances are measured in points. That means if you draw a circle with a diameter of 40 points, it should be more or less the same size on various iOS devices. PaintCode adopted this convention, and all the numbers you see in PaintCode's user interface - positions of shapes, stroke widths, radii - are in points. The drawing code generated by PaintCode is not only resolution independent (i.e. you can resize/scale it and it keeps its sharpness), but also display-density independent (it renders at about the same size on a retina display, a regular display and a retina HD display). And there isn't anything special about the code. It looks like this:
UIBezierPath* rectanglePath = [UIBezierPath bezierPathWithRect: CGRectMake(0, 0, 100, 50)];
[UIColor.grayColor setFill];
[rectanglePath fill];
So the display scaling is handled by UIKit. Also the implicit scale depends on the context. If you call the drawing code within drawRect: of some UIView subclass, it takes the display-density, but if you are drawing inside a custom UIImage, it takes the density of that image. Magic.
Then we added support for Android. All the measures in android.graphics are in pixels. Android doesn't do any of UIKit's "display density" magic. There also isn't a good way to find out the density within the scope of the drawing code; you need access to the Resources for that. So we could add that as a parameter to all the drawing methods. But what if you are not going to draw to the display, but are instead creating an image (that you are going to send to a friend or whatever)? Then you don't want the display density, but the image density.
OK, so if we add a parameter, it shouldn't be the Resources but the density itself as a float, and we would generate the scaling inside every drawing method. Now what if you don't really care about the density? What if all you care about is that your drawing fills some rectangle with the best resolution possible? I think that is actually the usual case. With so many different display resolutions and densities, the "one element of one physical size fits all" approach is fairly marginal in my opinion. So in most cases the density parameter would be extraneous. We decided to leave the decision of how the scale should be handled to the user.
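To make that concrete, here is a minimal sketch (reusing the Button1 subclass and the PaintCode-generated StyleKitName.drawButton1 from the question) of the "just fill the target rectangle" case: no density handling at all, so the drawing stays crisp at any size, but fixed measures such as the 20-unit corner radius are now interpreted as pixels rather than points - which is exactly the behaviour the question wants to avoid.
override fun onDraw(canvas: Canvas?) {
    super.onDraw(canvas)
    if (canvas == null) return
    // The frame is simply the view's bounds in raw pixels; PaintCode's springs
    // resize the drawing to fill it at full resolution.
    frame.set(0f, 0f, width.toFloat(), height.toFloat())
    StyleKitName.drawButton1(canvas, frame)
}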
Now for the fuzziness of the scaled drawing. That’s another difference between UIKit and android.graphics. All developers should understand that CoreGraphics isn’t very fast when it comes to rendering large scenes with multiple objects. If you are programming performance sensitive apps, you should probably consider using SpriteKit or Metal. The benefit of this is that you are not restricted in what you can do in CoreGraphics and you will almost always get very accurate results. Scaling is one such example. You can apply enormous scale and the result is still crisp. If you want more HW acceleration, use a different API and handle the restrictions yourself (like how large textures you can fit in your GPU).
Android took another path. The android.graphics API can work in two modes - without HW acceleration (they call it software) or with HW acceleration (hardware). It's still the same API, but the hardware mode has some significant restrictions. These include scale, blur (hence shadows), some blend modes and more.
https://developer.android.com/guide/topics/graphics/hardware-accel.html#unsupported
And they decided that every view will use the hardware mode by default if target API level >= 14. You can of course turn it off and magically your scaled button will be nice and sharp.
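For example, switching just this one view to a software layer could look like the following sketch (Button1 is the subclass from the question; the same switch is available in XML via android:layerType="software"):
import android.view.View

class Button1 @JvmOverloads constructor(
    context: Context, attrs: AttributeSet? = null, defStyleAttr: Int = 0
) : Button(context, attrs, defStyleAttr) {

    init {
        // Render this view's layer in software so the scaled PaintCode drawing
        // is rasterized at full resolution and stays sharp.
        setLayerType(View.LAYER_TYPE_SOFTWARE, null)
    }

    // ... onDraw() with the density scaling from the question ...
}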
We mention that you need to turn off hardware acceleration in our documentation page section “Type of Layer” https://www.paintcodeapp.com/documentation/android
And it’s also in Android documentation https://developer.android.com/guide/topics/graphics/hardware-accel.html#controlling
I remember having a similar issue and, after talking to PaintCode support guys, we came up with a function to convert DP to PX. So, say, in Android those 20 points will be converted to "whatever" pixels -- considering the different display densities.
Long story short, this is the summary of PaintCode support answer:
The fact that Android doesn't support device-independent pixels in its drawing API is the reason for the different behaviour of iOS and Android. When using UIKit, the coordinates entered are in points, so it takes care of display density itself. When generating a bitmap in Android, the display density needs to be taken into account.
This is the function I created:
private static PointF convertDPtoPX(float widthDP, float heightDP) {
    // getAppContext() is an app-specific helper that returns the application Context.
    Context context = getAppContext();
    DisplayMetrics metrics = context.getResources().getDisplayMetrics();
    float widthPX = TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_DIP, widthDP, metrics);
    float heightPX = TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_DIP, heightDP, metrics);
    return new PointF(widthPX, heightPX);
}
Here is an example on how to call it:
Bitmap myButton = StyleKit.imageOfMyButton(convertDPtoPX(widthDP, heightDP));
Of course in this example I'm using a bitmap of myButton (StyleKit.imageOfMyButton) due to the specifics of the project. You'd need to adjust the example to fit your needs.
You could, for instance, make that radius an external parameter in PaintCode, making each platform responsible for providing its value. Like:
// In Android, convert 20DP to 20PX based on the above function.
// In iOS, just pass 20.
StyleKitName.drawButton1(canvas, frame, radius)
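Sticking with the Kotlin Button1 view from the question, that could look roughly like the sketch below. The third radius parameter of drawButton1 is an assumption - PaintCode only generates it once you expose the radius as a parameter of the drawing - and android.util.TypedValue is assumed to be imported.
override fun onDraw(canvas: Canvas?) {
    super.onDraw(canvas)
    if (canvas == null) return
    // Convert the 20dp design radius to pixels for this screen's density.
    val radiusPx = TypedValue.applyDimension(
        TypedValue.COMPLEX_UNIT_DIP, 20f, resources.displayMetrics
    )
    frame.set(0f, 0f, width.toFloat(), height.toFloat())
    // Hypothetical generated signature with the radius exposed as a parameter.
    StyleKitName.drawButton1(canvas, frame, radiusPx)
}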
I hope this helps.
Related
We're trying to visualize a massive graph that transforms over time but we're unsure which platform would be powerful enough to do this.
We tried using Unity, but importing the 50,000 vertices was already a big problem. With static batching we could get up to 100 fps, but we want to change the vertices' colors depending on parameters, so static batching was not an option. We tried other batching systems in combination with different rendering pipelines, but then we could get 20 fps at most. We concluded that Unity probably isn't the best platform for our purposes, since there is so much stuff happening in the background.
I think your problem is all about performance; as I see it, 50K vertices should be fine. Try some optimizations like these:
Render only what the camera sees by using occlusion culling.
Mark objects that don't move as static by selecting them and ticking Static in the top right corner of the Inspector.
Bake lighting instead of using realtime lighting & reflections.
Use a texture atlas so multiple objects can share the same material, and in the material's Inspector enable GPU Instancing.
Good luck!
I have a Kha app that runs perfectly on an iPad 2 (1024x768 px).
When I run the same project on a later iPad Mini with 2048x1536, my coordinates are all half the size, which kinda makes sense.
So when I double all the sizes of my objects and GFX it will work on the iPad mini, but will be too big for iPad2.
I looked into a backbuffer and a renderTarget as explained here:
https://www.youtube.com/watch?v=OV1PTo5XSCA
There is also the windowSize option in khafile, which seems to do nothing.
Surface x and y coordinates always seem to arrive in the real screen coordinates of the device.
What is the best way to write a resolution-independent app?
Ideally the app would render at retina or non-retina resolution depending on the device, while the code stays the same.
According to https://github.com/Kode/Kha/wiki/Screen-Size-and-Scaling there's automated scaling for some targets. If you need other targets you have to manually scale everything to fit the screen.
The page mentions using this class for the task: https://github.com/Kode/Kha/blob/master/Sources/kha/Scaler.hx
Also you could take a look at how Wyngine does it:
https://github.com/laxa88/wyngine/search?utf8=%E2%9C%93&q=scale & https://github.com/laxa88/wyngine/blob/master/Wyngine.hx
You replied (to my comment) that scaling wasn't enough. So far it has been enough for all of my games with the right display settings, but if you really need retina-sized graphics you always have the option of using multiple graphics sets. E.g.:
a set for retina resolution (e.g. iPad 3)
a default-resolution set (e.g. iPad 2) at half the retina size
a low-res set for cheap Android devices?
At startup of your app you check the screen size. You use that to choose the internal game size and the graphics set that best fits the actual screen resolution. The internal game size, as well as all X/Y positions for the selected graphics set, can be calculated by applying the graphics set's scale factor to the raw base values.
Finally you use Scaler.scale() to scale your game from the internal game size to fit devices like the iPad Pro 12" and the wide variety of Android devices.
That approach is common with a lot of game engines; Google should find you links like https://v-play.net/doc/vplay-different-screen-sizes/ that also explain screen ratios and how those can be handled.
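Purely as an illustration of that startup logic (not Kha-specific code), here is a small Kotlin sketch; the asset sets, base widths and scale factors are made up, and in Kha the chosen scale would feed into Scaler and your own positioning code:
// Illustrative only: pick the graphics set whose base width best matches the screen.
data class GraphicsSet(val name: String, val baseWidth: Int, val scale: Float)

val sets = listOf(
    GraphicsSet("low", 512, 0.5f),
    GraphicsSet("default", 1024, 1.0f),
    GraphicsSet("retina", 2048, 2.0f)
)

fun chooseSet(screenWidth: Int): GraphicsSet =
    sets.minByOrNull { kotlin.math.abs(it.baseWidth - screenWidth) } ?: sets.last()

fun main() {
    val screenWidth = 2048                               // e.g. iPad Mini retina
    val set = chooseSet(screenWidth)
    val internalGameWidth = (1024 * set.scale).toInt()   // base design width, scaled
    val x = 100 * set.scale                              // any base-design position, scaled
    println("Use '${set.name}' assets, internal width $internalGameWidth, x=$x")
    // Finally, scale from the internal game size to the actual screen resolution.
}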
I'm working on a game using OpenGL to display sprites, i.e. 2D quad-mapped graphics with no projection, that will be shown on several screens of different resolutions (iPhone retina/non-retina, iPad.. in my next project the problem will expand to desktop resolutions, which are far more numerous).
I'm OK with handling different aspect ratios; that can be handled by OpenGL and my placement of the sprites. I'm also OK with slightly different resolutions - use the same art and either border the screen or display a little bit more info. But when things start to grow or shrink by 50% or more, it's a major issue.
What is the standard procedure for generating the art assets in this situation? Generate for the largest resolution and just let OpenGL worry about resizing during rasterization, or do people generate art sets for each main resolution?
Rasterized sprite art tends to get ugly when it's stretched (interpolated), so I'm concerned. But generating different sizes really means, for practical purposes, that I have to go with vector drawings and export several resolutions, which limits the artist and complicates loading and managing the assets.
(Yes, I can "just try it" to an extent, but I already have an idea of the results. I'm looking for solutions people use and angles I maybe wouldn't have thought of. This question does have an answer(s); it's not subjective or lazy.)
You are correct that scaling bitmaps tends to make sprites look bad. There are a couple of ways of dealing with that:
Draw them (pixelart) at all required resolutions. That is a lot of work but gives you full control.
Draw them (vectors) and render them at all required resolutions. Less work but scaling up or down beyond 50% or 200% might give bad results.
Draw them (3D application) and render them at all required resolutions. Quite some work, but a very consistent set of sprites.
For each of these options you are free to post-process the bitmaps to clean them up or add details but if you do this for options 2 and 3, you are breaking the chain and will have to apply the changes again when rendering the same set again.
Another option is to limit the variation of resolutions.
As far as I know it is very common in the (game) industry to make all (or the most used/visible) sprites as pixel perfect as possible. This is what they pay the artists for...
My application produces an "animation" in a per-pixel manner, so I need to draw it efficiently. I've tried different strategies/libraries with unsatisfactory results, especially at higher resolutions.
Here's what I've tried:
SDL: ok, but slow;
OpenGL: inefficient pixel operations;
xlib: better, but still too slow;
svgalib, directfb, (other framebuffer implementations): they seem perfect but are definitely too tricky to set up for the end user.
(NOTE: I may be wrong about these assertions; if so, please correct me.)
What I need is the following:
fast pixel drawing with performance comparable to OpenGL rendering;
it should work on Linux (cross-platform as a bonus feature);
it should support double buffering and vertical synchronization;
it should be portable as far as the hardware is concerned;
it should be open source.
Can you please give me some enlightenment/ideas/suggestions?
Are your pixels sparse or dense (e.g. a bitmap)? If you are creating dense bitmaps out of pixels, then another option is to convert the bitmap into an OpenGL texture and use OpenGL APIs to render at some framerate.
The basic problem is that graphics hardware will be very different on different hardware platforms. Either you pick an abstraction layer, which slows things down, or code more closely to the type of graphics hardware present, which isn't portable.
I'm not totally sure what you're doing wrong, but it could be that you are writing pixels one at a time to the display surface.
Don't do that.
Instead, create a rendering surface in main memory in the same format as the display surface to render to, and then copy the whole rendered image to the display in a single operation. Modern GPUs are very slow per transaction, but can move lots of data very quickly in a single operation.
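Purely to illustrate the pattern (not a recommendation for any particular stack), here is a minimal Kotlin/JVM sketch using Swing: all per-pixel work happens against an in-memory int buffer, and the display only ever receives the finished frame in a single drawImage call.
import java.awt.Graphics
import java.awt.image.BufferedImage
import java.awt.image.DataBufferInt
import javax.swing.JFrame
import javax.swing.JPanel
import javax.swing.Timer

// Per-pixel animation rendered into an in-memory buffer, blitted once per frame.
class PixelPanel(private val w: Int, private val h: Int) : JPanel() {
    private val image = BufferedImage(w, h, BufferedImage.TYPE_INT_RGB)
    // Direct access to the backing int array: one write per pixel, no per-pixel API calls.
    private val pixels = (image.raster.dataBuffer as DataBufferInt).data
    private var tick = 0

    fun renderFrame() {
        for (y in 0 until h) {
            for (x in 0 until w) {
                pixels[y * w + x] = ((x + y + tick) and 0xFF) * 0x010101  // simple moving gradient
            }
        }
        tick++
        repaint()  // schedules a single blit of the finished image
    }

    override fun paintComponent(g: Graphics) {
        super.paintComponent(g)
        g.drawImage(image, 0, 0, null)  // one copy to the display surface
    }
}

fun main() {
    val panel = PixelPanel(640, 480)
    JFrame("per-pixel demo").apply {
        defaultCloseOperation = JFrame.EXIT_ON_CLOSE
        contentPane.add(panel)
        setSize(640, 480)
        isVisible = true
    }
    Timer(16) { panel.renderFrame() }.start()  // roughly 60 fps
}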
It looks like you are confusing the windowing layer (SDL and xlib) with the rendering library (OpenGL).
Just pick a windowing layer (SDL, GLUT, or xlib if you like a challenge), activate double-buffered mode, and make sure that you get direct rendering.
What kind of graphics card do you have? Most likely it will process the pixels on the GPU. Look up how to create pixel shaders in OpenGL; pixel shaders do their processing per pixel.
Let me describe the "battlefield" of my task:
Multi-room audio/video chat with more than 1M users;
Custom Direct3D renderer;
What I need to implement is a TextOverVideo feature. The text itself goes over the network and is to be rendered on the recipient side with the Direct3D renderer. AFAIK, it is common in game development to create your own texture with letters/numbers and draw these items. Because our application must support many languages, we ought to use a standard; that's why I've been working with the ID3DXFont interface, but I've run into some limitations I'm not satisfied with.
What I've faced is a lack of scalability. E.g. if the user resizes the video window, I have to re-create the D3DXFont with a new D3DXFONT_DESC while they're doing that. I think it is unacceptable.
That is why the ONLY solution I see (given my skills) is to somehow render the text to a texture and then draw a sprite with scaling, translation, etc.
So, I'm not sure if I go into the correct direction. Please help with advice, experience, literature, sources...
Your question is a bit unclear. As I understand it, you want an easily scalable font.
I think it is unacceptable
As far as I know, this is standard behavior for fonts - even for system fonts. They aren't supposed to be easily scalable.
Possible solutions:
Use ID3DXRenderTarget for rendering text onto a texture. The font will be filtered when you scale it up too much; some people will think it looks ugly.
Write a custom library that supports vector fonts. I.e. it should be able to extract the outline from a font and build text from it. It will be MUCH slower than ID3DXFont (which is already slower than traditional "texture" fonts). The text will be easily scalable. Using this approach, you are very likely to get visible artifacts ("noise") for small text. I wouldn't use it unless you want huge letters (40+ pixels). The FreeType library may have functions for processing font outlines.
Or you could try using D3DXCreateText. This will create 3D text for ONE string. Won't be fast at all.
I'd forget about it. As long as the user is happy with the overall performance, improving the font rendering routines (so their behavior looks nicer to you) is not worth the effort.
--EDIT--
About ID3DXRenderTarget.
Even if you use ID3DXRenderTarget, you'll need ID3DXFont. I.e. you use ID3DXFont to render text onto a texture, and then use the texture to blit the text onto the screen.
Because you said that performance is critical, you can delay the creation of the new ID3DXFont until the user stops resizing the video. I.e. while the user is resizing the video, you use the old font but upscale it using the texture. There will be filtering, of course. Once the user stops resizing, you create the new font when you have time; you can probably do that in a separate thread, but I'm not sure about it. OR you could simply always render the text at the same resolution as the video. This way you won't have to worry about resizing it (it will still be filtered - along with the video). Some video players work this way.
A few more things about ID3DXFont. There is one problem with ID3DXFont - it is slow in situations where you need a lot of text (but you still need it, because it supports Unicode, and writing a texture font with Unicode support is a pain). The last time I worked with it I optimized things by caching commonly used strings in textures. I.e. any string that was drawn for more than 3 frames in a row was rendered onto a D3DFMT_A8R8G8B8 texture/render target, and then I copied that string from the texture instead of using ID3DXFont. Strings that weren't rendered for a while were removed from the texture. That gave a serious boost. This solution, however, is tricky - monitoring empty space in the texture, removing unused strings, and defragmenting the texture isn't exactly trivial (there is nothing exceptionally complicated, but it is easy to make a mistake). You won't need such a complicated system unless your screen is literally covered with text.
ID3DXFont fonts are flat, always parallel to the screen. D3DXCreateText creates meshes that can be scaled and rotated.
Texture fonts are fuzzy and don't look very clear. Not good for an app that uses lots of small text.
I am writing an app that can create 500 text meshes, each mesh averaging 3,000-5,000 vertices. The text meshes are created once, then are static. I get 700 fps on a GeForce 8800.