I recently created my own DirectShow movie player as a control for use in WPF. This all works fine, but my customer would like a feature to boost colors depending on his preference.
Based on an old application he uses, he defines this as three separate sliders, one for each color channel (RGB).
When R is set to 255 and the others are lowered to zero, the red in the playing video should be boosted, or at least look "noticeably more red".
I have already experimented with hue, saturation, contrast, brightness, and all kinds of RGB to HSL/HSV conversions as found on Wikipedia and elsewhere, but none of those really do what I expect.
Could anyone point me in the right direction? Perhaps there is an exposed interface in DirectShow that I missed, because I'm kind of clueless at the moment. Is it even possible in DirectShow by default?
Are you using the old-school DirectShow graph filter interfaces?
IID_IAMVideoProcAmp:
http://msdn.microsoft.com/en-us/library/windows/desktop/dd376033%28v=vs.85%29.aspx
This interface has setters and getters for brightness, contrast, hue, saturation, sharpness, gamma, etc.
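As a rough sketch (assuming your graph contains a filter that actually implements IAMVideoProcAmp, e.g. a capture source; not every playback filter exposes it, and BoostSaturation/pFilter are hypothetical names), querying and setting a property could look like this:

#include <dshow.h>   // IAMVideoProcAmp comes from strmif.h, pulled in by dshow.h

// pFilter is a filter that is already part of your graph.
void BoostSaturation(IBaseFilter* pFilter)
{
    IAMVideoProcAmp* pProcAmp = nullptr;
    if (SUCCEEDED(pFilter->QueryInterface(IID_IAMVideoProcAmp, (void**)&pProcAmp)))
    {
        long lMin = 0, lMax = 0, lStep = 0, lDefault = 0, lCaps = 0;
        if (SUCCEEDED(pProcAmp->GetRange(VideoProcAmp_Saturation,
                                         &lMin, &lMax, &lStep, &lDefault, &lCaps)))
        {
            // Push saturation towards its maximum; the flag says we set it manually.
            pProcAmp->Set(VideoProcAmp_Saturation, lMax, VideoProcAmp_Flags_Manual);
        }
        pProcAmp->Release();
    }
}

Note that IAMVideoProcAmp only covers the standard proc-amp properties (brightness, hue, saturation, and so on), not independent R/G/B gain, so for a true per-channel boost the transform filter route below is probably the better fit.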
Transform Filter
http://msdn.microsoft.com/en-us/library/windows/desktop/dd391015%28v=vs.85%29.aspx
This might not be a click-click-it-works task if you create everything from scratch. A transform filter lets you insert an intermediate filter into the graph that changes the video buffer's pixels at runtime.
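As a minimal sketch of that idea (modeled loosely on the EZRGB24 sample linked below; CTransInPlaceFilter comes from the DirectShow base classes, and CRgbBoostFilter with its m_redBoost/m_greenBoost/m_blueBoost members are hypothetical names driven by your sliders), the per-frame work for an RGB24 connection might look like this:

// Assumes the pins were negotiated to MEDIASUBTYPE_RGB24 (3 bytes per pixel, stored B, G, R).
HRESULT CRgbBoostFilter::Transform(IMediaSample *pSample)
{
    BYTE *pData = nullptr;
    HRESULT hr = pSample->GetPointer(&pData);
    if (FAILED(hr))
        return hr;

    const long cbData = pSample->GetActualDataLength();
    for (long i = 0; i + 2 < cbData; i += 3)
    {
        int b = pData[i]     + m_blueBoost;
        int g = pData[i + 1] + m_greenBoost;
        int r = pData[i + 2] + m_redBoost;
        pData[i]     = (BYTE)(b > 255 ? 255 : b);
        pData[i + 1] = (BYTE)(g > 255 ? 255 : g);
        pData[i + 2] = (BYTE)(r > 255 ? 255 : r);
    }
    return S_OK;
}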
Do you use C#, C++, or some other language?
You did say you are using Windows Presentation Foundation (WPF), but the media player itself is a DirectShow application. My understanding is that DirectShow is more or less dead tech.
Study the WPF Effects interface; it's modern shader tech with which it should be easier to transform pixel colors or create whatever other effects you need.
Pixel Shader Effect Examples
Edit: A few DirectShow filter related links for reference:
http://msdn.microsoft.com/en-us/library/windows/desktop/dd375468%28v=vs.85%29.aspx (list of MS directshow samples)
http://en.verysource.com/code/4151972_1/ezrgb24.cpp.html (ezrgb24.cpp filter source)
http://www.gdcl.co.uk/downloads.htm
http://mathinfo.univ-reims.fr/image/mmVideo/cours/DirectShow3Annexe.pdf
I have a D3D11 device created on Windows 10 (latest edition) and an ID3D11Texture2D* created in GPU memory. I want to get the contents of this Texture2D stretched and drawn onto a region of an HWND. I don't want to use vertex buffers; I want to use "something else". I don't want to copy the bits down to the CPU and then bring them back up to the GPU again. StretchDIBits or StretchBlt would be way too slow.
Let's pretend I want to use D2D1... I need a way to get my D3D11 Texture2D copied or shared over to D2D1. Then I want to use a D2D1 render target to stretch-blit it to the HWND.
I've read the MS documents, and they don't make a lot of sense.
ideas?
If you already have an ID3D11Texture2D, why aren't you just using Direct3D to render it as a texture? That's what the hardware is designed to do very fast with high quality.
The DirectX Tool Kit SpriteBatch class is a good place to start for general sprite rendering, and it does indeed make use of VBs, shaders, etc. internally.
Direct2D is really best suited to scenarios where you are drawing classic vector/presentation graphics, like circles, ellipses, arcs, etc. It's also useful as a way to use DirectWrite for high-quality, highly scalable fonts. For blitting rectangles, just use Direct3D which is what Direct2D has to use under the covers anyhow.
Note that if you require Direct3D Hardware Feature Level 10.0 or better, you can use a common trick which relies on SV_VertexID in the vertex shader, so you can self-generate the geometry without any need for a VB or IB. See this code.
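A rough sketch of that SV_VertexID trick (hypothetical names; the HLSL is shown as a string you would compile with D3DCompile, and the C++ side binds no vertex or index buffer at all):

#include <d3d11.h>

// HLSL: generates a full-screen triangle purely from SV_VertexID, then samples the texture.
const char* kShaderSource = R"(
Texture2D    tex : register(t0);
SamplerState smp : register(s0);

struct VSOut { float4 pos : SV_Position; float2 uv : TEXCOORD0; };

VSOut VSMain(uint id : SV_VertexID)
{
    VSOut o;
    float2 uv = float2((id << 1) & 2, id & 2);           // (0,0), (2,0), (0,2)
    o.pos = float4(uv * float2(2, -2) + float2(-1, 1), 0, 1);
    o.uv  = uv;
    return o;
}

float4 PSMain(VSOut i) : SV_Target { return tex.Sample(smp, i.uv); }
)";

// C++ side, once per frame; ctx, vs, ps, srv and sampler are assumed to exist already.
// The viewport restricts the stretch to the target region of the HWND's swap chain.
void BlitTexture(ID3D11DeviceContext* ctx, ID3D11VertexShader* vs, ID3D11PixelShader* ps,
                 ID3D11ShaderResourceView* srv, ID3D11SamplerState* sampler,
                 float x, float y, float w, float h)
{
    D3D11_VIEWPORT vp = { x, y, w, h, 0.0f, 1.0f };
    ctx->RSSetViewports(1, &vp);

    ctx->IASetInputLayout(nullptr);                      // no vertex data is needed
    ctx->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    ctx->VSSetShader(vs, nullptr, 0);
    ctx->PSSetShader(ps, nullptr, 0);
    ctx->PSSetShaderResources(0, 1, &srv);
    ctx->PSSetSamplers(0, 1, &sampler);

    ctx->Draw(3, 0);                                     // three vertices, no VB/IB
}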
Note: Yes, I know there are other ways of doing buttons in Android, but this is just an example to demonstrate my issue (the actual buttons are far, far more complex). So please don't reply offering other solutions for buttons in Android; I am looking for a solution with PaintCode...
I have been using PaintCode for drawing custom buttons for years in iOS, it works brilliantly. I want to do the same for android and have the following issue:
In PaintCode I draw a button which is basically a rounded rectangle with a radius of 20 points.
I draw a frame around it and then set the correct resizing behaviour using the springs (see screenshot).
The result is that whatever size the button ends up being (= the frame), the corners will always be nicely rounded with 20 points. Basically a nicely resizable button.
This works very well on iOS, but on Android the radius is 20 pixels, not points, resulting in a far too small radius (now with the high-res devices).
Or in general: all drawings that I make in PaintCode are too small when drawn using the draw method generated by PaintCode.
It seems that the generated drawing code does not take the scale of the device into account (as it does on iOS).
Looking at https://www.paintcodeapp.com/documentation/android, section "scale", PaintCode suggests playing with the density metric in Android to perform the scaling.
This does work, but it makes the generated drawing fuzzy; I guess this is because we are drawing at a lower resolution due to the scaling. So it's not a viable solution.
class Button1 @JvmOverloads constructor(
    context: Context, attrs: AttributeSet? = null, defStyleAttr: Int = 0
) : Button(context, attrs, defStyleAttr) {

    var frame = RectF()

    override fun onDraw(canvas: Canvas?) {
        super.onDraw(canvas)
        val displayDensity = resources.displayMetrics.density
        canvas?.scale(displayDensity, displayDensity)
        frame.set(0f, 0f, width.toFloat() / displayDensity, height.toFloat() / displayDensity)
        StyleKitName.drawButton1(canvas, frame)
    }
}
Any suggestions to solve this? Is this a bug in PaintCode?
I'm the developer. Sorry about the long answer.
TL;DR: handle the scaling yourself, for example the way you do. Switch the layerType of your View to software to avoid the blurry results of the scale.
First, I totally understand the confusion: on iOS it just works, and on Android you have to fiddle around with some scales. It would make much more sense if it just worked the same way; I would love that, and so would other PaintCode users. Yet it's not a bug. The problem is the difference between UIKit and android.graphics.
In UIKit, distances are measured in points. That means if you draw a circle with a diameter of 40 points, it should be more or less the same size on various iOS devices. PaintCode adopted this convention, and all the numbers you see in PaintCode's user interface - positions of shapes, stroke widths, radii - are in points. The drawing code generated by PaintCode is not only resolution independent (i.e. you can resize/scale it and it keeps its sharpness), but also display-density independent (it renders at about the same size on a regular display, a Retina display and a Retina HD display). And there isn't anything special about the code. It looks like this:
NSBezierPath* rectanglePath = [NSBezierPath bezierPathWithRect: NSMakeRect(0, 0, 100, 50)];
[NSColor.grayColor setFill];
[rectanglePath fill];
So the display scaling is handled by UIKit. Also the implicit scale depends on the context. If you call the drawing code within drawRect: of some UIView subclass, it takes the display-density, but if you are drawing inside a custom UIImage, it takes the density of that image. Magic.
Then we added support for Android. All the measures in android.graphics are represented in pixels. Android doesn't do any of UIKit's "display density" magic. Also, there isn't a good way to find out what the density is from within the scope of the drawing code; you need access to resources for that. So we could add that as a parameter to all the drawing methods. But what if you are not going to draw to the display but are instead creating an image (that you are going to send to your friend or whatever)? Then you don't want the display density, but the image density.
OK, so if we add a parameter, we shouldn't pass resources but the density itself as a float, and generate the scaling inside every drawing method. Now what if you don't really care about the density? What if all you care about is that your drawing fills some rectangle and has the best resolution possible? Actually, I think that is usually the case. Having so many different display resolutions and densities makes the "element of one physical size fits all" approach a pretty minor use case in my opinion. So in most cases the density parameter would be extraneous. We decided to leave the decision of how the scale should be handled to the user.
Now for the fuzziness of the scaled drawing. That's another difference between UIKit and android.graphics. All developers should understand that CoreGraphics isn't very fast when it comes to rendering large scenes with many objects. If you are writing performance-sensitive apps, you should probably consider using SpriteKit or Metal. The benefit of this is that you are not restricted in what you can do in CoreGraphics, and you will almost always get very accurate results. Scaling is one such example: you can apply an enormous scale and the result is still crisp. If you want more HW acceleration, use a different API and handle the restrictions yourself (like how large a texture you can fit in your GPU).
Android took another path. Its android.graphics API can work in two modes - without HW acceleration (they call it software) or with HW acceleration (they call it hardware). It's still the same API, but the hardware mode has some significant restrictions. These include scale, blur (hence shadows), some blend modes and more.
https://developer.android.com/guide/topics/graphics/hardware-accel.html#unsupported
And they decided that every view will use the hardware mode by default if the target API level is >= 14. You can of course turn it off, and magically your scaled button will be nice and sharp.
We mention that you need to turn off hardware acceleration on our documentation page, in the section "Type of Layer": https://www.paintcodeapp.com/documentation/android
And it’s also in Android documentation https://developer.android.com/guide/topics/graphics/hardware-accel.html#controlling
I remember having a similar issue and, after talking to the PaintCode support guys, we came up with a function to convert DP to PX. So, say, in Android those 20 points will be converted to "whatever" pixels, taking the different display densities into account.
Long story short, this is the summary of PaintCode support answer:
The fact that Android doesn't support device independent pixels in its drawing API is the reason for the different behaviour of iOS and Android. When using UIKit, the coordinates entered are in Points, so it takes care of display density itself. When generating bitmap in Android the display density needs to be taken into account.
This is the function I created:
private static PointF convertDPtoPX(float widthDP, float heightDP) {
    Context context = getAppContext();
    DisplayMetrics metrics = context.getResources().getDisplayMetrics();
    float widthPX = TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_DIP, widthDP, metrics);
    float heightPX = TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_DIP, heightDP, metrics);
    return new PointF(widthPX, heightPX);
}
Here is an example on how to call it:
Bitmap myButton = StyleKit.imageOfMyButton(convertDPtoPX(widthDP, heightDP));
Of course in this example I'm using a bitmap of myButton (StyleKit.imageOfMyButton) due to the specifics of the project. You'd need to adjust the example to fit your needs.
You could, for instance, make that radius an external parameter in PaintCode, making each platform responsible for providing its value. Like:
// In Android, convert 20DP to 20PX based on the above function.
// In iOS, just pass 20.
StyleKitName.drawButton1(canvas, frame, radius)
I hope this helps.
I've been studying 3D graphics on my own for a while now and I want to get a greater understanding of just how everything works. What I would like to do is to create a simple game without using DirectX or OpenGL. I understand most of the math I believe, but the problem I am running up against is I do not know how to get control of the pixels being displayed in a window.
How do I specify what color I want each pixel in my window to be?
I understand I will probably run into issues with buffers and image shearing and probably terrible efficiency problems, but I want to create my own program so that I can see, from the very lowest level of a high-level language, how the rendering process works. I really have no idea where to start, though. I've figured out how to output BMPs, but I would like to have a running program spitting out 20+ frames per second. How do I accomplish this?
You could pick an environment that allows you to fill an array with pixel values and display it as a bitmap. This way you come closest to poking RGB values into video memory. WPF, Silverlight, and HTML5/JavaScript can do this. If you do not make it full screen, these technologies should suffice for now.
In WPF and Silverlight, use the WriteableBitmap.
In HTML5, use the canvas.
Then it is up to you to implement the logic to draw lines, circles, Bézier curves, 3D projections and so on; a simple line rasterizer is sketched after this answer.
This is a lot of fun and you will learn a lot.
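For instance, here is a minimal sketch (in C++, though the idea carries over to any of those environments) of a classic Bresenham line drawn straight into such a pixel array; the Framebuffer layout and putPixel helper are assumptions:

#include <cstdint>
#include <cstdlib>
#include <vector>

// A simple 32-bit ARGB framebuffer: width * height pixels, row-major.
struct Framebuffer {
    int width, height;
    std::vector<uint32_t> pixels;
    Framebuffer(int w, int h) : width(w), height(h), pixels(w * h, 0xFF000000) {}

    void putPixel(int x, int y, uint32_t color) {
        if (x >= 0 && x < width && y >= 0 && y < height)
            pixels[y * width + x] = color;
    }
};

// Classic integer-only Bresenham line from (x0, y0) to (x1, y1).
void drawLine(Framebuffer& fb, int x0, int y0, int x1, int y1, uint32_t color) {
    int dx = std::abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -std::abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;

    while (true) {
        fb.putPixel(x0, y0, color);
        if (x0 == x1 && y0 == y1) break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }
        if (e2 <= dx) { err += dx; y0 += sy; }
    }
}

Once the array is filled, handing it to a WriteableBitmap (WritePixels) or to canvas ImageData is the per-platform part.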
I'm reading between the lines that you're more interested in having full control over the rendering process from a low level, rather than having a specific interest in how to achieve that on one specific platform.
If that's the case then you will probably get a good bang for your buck looking at a library like SDL, which provides you with a frame buffer that you can render to directly but abstracts away a lot of the platform-specific issues. It has been around for quite a while, and there are some good tutorials to give you an idea of whether it's the kind of thing you're looking for - see this tutorial and the subsequent one in the same series, which should be enough to get you up and running.
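A minimal SDL2 sketch of that idea (a streaming texture filled from a plain pixel array every frame; the window size and the placeholder gradient are arbitrary choices):

#include <SDL.h>
#include <cstdint>
#include <vector>

int main(int, char**)
{
    const int W = 640, H = 480;
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window*   win = SDL_CreateWindow("Software renderer", SDL_WINDOWPOS_CENTERED,
                                         SDL_WINDOWPOS_CENTERED, W, H, 0);
    SDL_Renderer* ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_PRESENTVSYNC);
    SDL_Texture*  tex = SDL_CreateTexture(ren, SDL_PIXELFORMAT_ARGB8888,
                                          SDL_TEXTUREACCESS_STREAMING, W, H);
    std::vector<uint32_t> pixels(W * H);

    bool running = true;
    while (running)
    {
        SDL_Event e;
        while (SDL_PollEvent(&e))
            if (e.type == SDL_QUIT) running = false;

        // Your software rasterizer writes into 'pixels' here;
        // as a placeholder, fill the buffer with a moving gradient.
        const uint32_t t = SDL_GetTicks() / 16;
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x)
                pixels[y * W + x] = 0xFF000000 | (((x + t) & 0xFF) << 16) | ((y & 0xFF) << 8);

        SDL_UpdateTexture(tex, nullptr, pixels.data(), W * 4);  // pitch in bytes
        SDL_RenderClear(ren);
        SDL_RenderCopy(ren, tex, nullptr, nullptr);
        SDL_RenderPresent(ren);
    }

    SDL_DestroyTexture(tex);
    SDL_DestroyRenderer(ren);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}

With vsync enabled the loop runs at the display's refresh rate, which comfortably covers the 20+ FPS target.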
You say you want to create some kind of rendering engine, meaning designing your own pipeline and matrix classes, which you then use to transform 3D coordinates into 2D points.
Once you have the 2D points you've been looking for, you can - for instance on Windows - select a brush and draw your triangles, coloring them at the same time.
I do not know why you would need bitmaps, but if you want to practice texturing, for example, you can also do that yourself, although of course on a weak computer this might cut your frame rate significantly.
If your aim is to understand how rendering works at the lowest level, this is without doubt good practice.
Jt Schwinschwiga
Given two separate computers, how could one ensure that colours are being projected roughly the same on each screen?
I.e., one screen might have 50% more brightness than another, so colours appear duller on one screen. One artist on one computer might be seeing the pictures differently from another; it's important that they are seeing the same levels.
Is there some sort of calibration technique you can do via software? Any techniques? Or is a hardware solution the only way?
If you are talking about lab-critical calibration (that is, the colours on one monitor need to exactly match the colours on another, and both need to match an external reference as closely as possible) then a hardware colorimeter (with its own appropriate software and test targets) is the only solution. Software solutions can only get you so far.
The technique you described is a common software-only solution, but it's only for setting the gamma curves on a single device. There is no control over the absolute brightness and contrast; you are merely ensuring that solid colours match their dithered equivalents. That's usually done after setting the brightness and contrast so that black is as black as it can be and white is as white as it can be, but you can still distinguish not-quite-black from black and not-quite-white from white. Each monitor, then, will be optimized for its own maximum colour gamut, but it will not necessarily match any other monitor in the shop (even monitors of the same make and model will show some variation due to manufacturing tolerances and age/use). A hardware colorimeter will (usually) generate a custom colour profile for the device under test as it is at the time of testing, and there is generally an end-to-end solution built into the product (so your scanner, printer, and monitor are all as closely matched as they can be).
You will never get to an absolute end-to-end match in a complete system, but hardware will get you as close as you can get. Software alone can only get you to a local maximum for the device it's calibrating, independent of any other device.
What you need to investigate are color profiles.
Wikipedia has some good articles on this:
https://en.wikipedia.org/wiki/Color_management
https://en.wikipedia.org/wiki/ICC_profile
The basic thing you need is the color profile of the display on which the color was seen. Then, with the color profile of display #2, you can take the original color and convert it into a color that will look as close as possible (depends on what colors the display device can actually represent).
Color profiles are platform independent and many modern frameworks support them directly.
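As one hedged illustration of that conversion step (using the Little CMS library as an example; the profile file names and the sample colour are placeholders), mapping a colour from one display's profile into another's could look roughly like this:

#include <lcms2.h>
#include <cstdint>
#include <cstdio>

int main()
{
    // ICC profiles of the two displays (placeholder file names).
    cmsHPROFILE srcProfile = cmsOpenProfileFromFile("display_A.icc", "r");
    cmsHPROFILE dstProfile = cmsOpenProfileFromFile("display_B.icc", "r");

    // Build a transform that maps 8-bit RGB from display A's space to display B's.
    cmsHTRANSFORM xform = cmsCreateTransform(srcProfile, TYPE_RGB_8,
                                             dstProfile, TYPE_RGB_8,
                                             INTENT_PERCEPTUAL, 0);

    uint8_t in[3]  = { 200, 30, 30 };   // colour as intended on display A
    uint8_t out[3] = { 0, 0, 0 };
    cmsDoTransform(xform, in, out, 1);  // 1 pixel

    printf("A (%u,%u,%u) -> B (%u,%u,%u)\n", in[0], in[1], in[2], out[0], out[1], out[2]);

    cmsDeleteTransform(xform);
    cmsCloseProfile(srcProfile);
    cmsCloseProfile(dstProfile);
    return 0;
}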
You may be interested in reading about how Apple has dealt with this issue:
Color Programming Topics
https://developer.apple.com/library/archive/documentation/Cocoa/Conceptual/DrawColor/DrawColor.html
You'd have to allow or ask the individual users to calibrate their monitors. But there's enough variation across monitors - particularly between models and brands - that trying to implement a "silver bullet" solution is basically impossible.
As @Matt Ball observes, calibrating your monitors is what you are trying to do. Here's one way to do it without specialised hardware or software. For 'roughly the same', visual calibration against a reference image is likely to be adequate.
Getting multiple monitors of varying quality/brand/capabilities to render a given image the same way is simply not possible.
IF you have complete control over the monitor, video card, calibration hardware/software, and lighting used then you have a shot. But that's only if you are in complete control of the desktop and the environment.
Assuming you are just accounting for LCDs, they are built with different types of panels that have a host of different capabilities. Brightness is just one factor (albeit a big one). Another is simply the number of colors they are capable of rendering.
Beyond that, there is the environment that the monitor is in. Even assuming the same brand monitor and calibration points, a person will perceive a different color if an overhead fluorescent is used versus an incandescent placed next to the monitor itself. At one place I was at we had to shut off all the overheads and provide exact lamp placement for the graphic artists. Picky picky. ;)
I assume that you have no control over the hardware used, each user has a different brand and model monitor.
You have also no control over operating system color profiles.
An extravagant solution would be to display a test picture or pattern and ask your users to take a picture of it using their mobile phone or webcam.
Download the picture to the computer and check whether its levels are valid or too far out of range.
This will also ensure the ambient light at the office is appropriate.
Let me describe the "battlefield" of my task:
Multi-room audio/video chat with more than 1M users;
Custom Direct3D renderer;
What I need to implement is a TextOverVideo feature. The text itself goes over the network and is to be rendered on the recipient side with the Direct3D renderer. AFAIK, it is common in game development to create your own texture with letters/numbers and draw those items. Because our application must support many languages, we ought to use a standard. That's why I've been working with the ID3DXFont interface, but I've found some unsatisfying limitations.
What I've faced is a lack of scalability. E.g., if the user resizes the video window, I have to re-create the D3DXFont with a new D3DXFONT_DESC while he's doing it. I think that is unacceptable.
That is why the ONLY solution I see (given my skills) is to somehow render the text to a texture and then draw a sprite with scaling, translation, etc.
So, I'm not sure if I go into the correct direction. Please help with advice, experience, literature, sources...
Your question is a bit unclear. As I understand it, you want an easily scalable font.
I think it is unacceptable
As far as I know, this is standard behavior for fonts - even for system fonts. They aren't supposed to be easily scalable.
Possible solutions:
Use ID3DXRenderTarget for rendering text onto a texture. The font will be filtered when you scale it up too much; some people will think it looks ugly.
Write a custom library that supports vector fonts, i.e. one that can extract the outline from a font and build text from it. It will be MUCH slower than ID3DXFont (which is already slower than traditional "texture" fonts). The text will be easily scalable. With this approach, you are very likely to get visible artifacts ("noise") for small text. I wouldn't use it unless you want huge letters (40+ pixels). The FreeType library may have functions for processing font outlines.
Or you could try using D3DXCreateText. This creates a 3D text mesh for ONE string. It won't be fast at all.
I'd forget about it. As long as the user is happy with overall performance, improving the font rendering routines (so their behavior looks nicer to you) is not worth the effort.
--EDIT--
About ID3DXRenderTarget.
Even if you use ID3DXRenderTarget, you'll need ID3DXFont. I.e., you use ID3DXFont to render the text onto a texture, and then use the texture to blit the text onto the screen.
Because you said that performance is critical, you can delay the creation of the new ID3DXFont until the user stops resizing the video. I.e., while the user is resizing, you keep using the old font but upscale it via the texture. There will be filtering, of course. Once the user stops resizing, you create a new font when you have time. You can probably do that in a separate thread, but I'm not sure about it. Or you could simply always render the text at the same resolution as the video. That way you won't have to worry about resizing it (it will still be filtered - along with the video). Some video players work this way.
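A rough sketch of that render-to-texture-then-blit approach (D3D9/D3DX, requiring d3d9.h/d3dx9.h; device, font, sprite, texW/texH and windowW/windowH are hypothetical variables, and the drawing calls are assumed to run inside a BeginScene/EndScene pair):

// 1) Create an offscreen render target once.
IDirect3DTexture9* textTex = nullptr;
device->CreateTexture(texW, texH, 1, D3DUSAGE_RENDERTARGET,
                      D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &textTex, nullptr);

IDirect3DSurface9* textSurf = nullptr;
textTex->GetSurfaceLevel(0, &textSurf);

IDirect3DSurface9* backBuffer = nullptr;
device->GetRenderTarget(0, &backBuffer);

// 2) Render the text into the texture (once, or whenever the text changes).
device->SetRenderTarget(0, textSurf);
device->Clear(0, nullptr, D3DCLEAR_TARGET, D3DCOLOR_ARGB(0, 0, 0, 0), 1.0f, 0);
RECT rc = { 0, 0, (LONG)texW, (LONG)texH };
font->DrawTextW(nullptr, L"Hello over video", -1, &rc,
                DT_LEFT | DT_TOP, D3DCOLOR_ARGB(255, 255, 255, 255));
device->SetRenderTarget(0, backBuffer);

// 3) Every frame: draw the texture as a scaled sprite on top of the video.
D3DXMATRIX scale;
D3DXMatrixScaling(&scale, windowW / (float)texW, windowH / (float)texH, 1.0f);
sprite->Begin(D3DXSPRITE_ALPHABLEND);
sprite->SetTransform(&scale);
sprite->Draw(textTex, nullptr, nullptr, nullptr, D3DCOLOR_ARGB(255, 255, 255, 255));
sprite->End();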
A few more things about ID3DXFont. There is one problem with it - it is slow in situations where you need a lot of text (but you still need it, because it supports Unicode, and writing a texture font with Unicode support is a pain). Last time I worked with it, I optimized things by caching commonly used strings in textures. I.e., any string that was drawn for more than 3 frames in a row was rendered onto a D3DFMT_A8R8G8B8 texture/render target, and then I copied that string from the texture instead of using ID3DXFont. Strings that weren't rendered for a while were removed from the texture. That gave a serious boost. This solution, however, is tricky - monitoring empty space in the texture, removing unused strings, and defragmenting the texture isn't exactly trivial (there is nothing exceptionally complicated, but it is easy to make a mistake). You won't need such a complicated system unless your screen is literally covered with text.
ID3DXFont fonts are flat, always parallel to the screen. D3DXCreateText creates meshes that can be scaled and rotated.
Texture fonts are fuzzy and don't look very clear. Not good for an app that uses lots of small text.
I am writing an app that can create 500 text meshes, each mesh averaging 3,000-5,000 vertices. The text meshes are created once, then are static. I get 700 fps on a GeForce 8800.