I'm trying to understand how Office 2013 works - specifically the part where it draws content to the screen. My findings so far:
Office creates a WIC bitmap and a render target over it, using CreateWicBitmapRenderTarget.
Using this render target, Office calls various drawing methods, wrapping them in BeginDraw/EndDraw as expected.
Office uses the DrawGlyphRun method to draw the document's text onto the WIC bitmap (i.e. the this pointer of DrawGlyphRun is the render target returned from CreateWicBitmapRenderTarget).
At this point I'm confused about how Office continues. So my question is:
What are the various possible ways to copy a WIC-based render target to the screen?
I'm pretty sure that at the end Office is using a swap chain (probably created with CreateSwapChain on Windows 7 and CreateSwapChainForHwnd on Windows 8). I'm stepping through various "suspicious" functions such as CreateBitmapFromWicBitmap and CreateBitmapFromDxgiSurface - but I don't understand the complete chain of API calls.
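For reference, here is how I currently imagine those calls could chain together on Windows 8 - purely my own sketch assembled from the documentation, not something I have confirmed by stepping through Office (the function PresentViaSwapChain and all variable names are mine):

    #include <d3d11.h>
    #include <dxgi1_2.h>
    #include <d2d1_1.h>
    #include <wincodec.h>
    #pragma comment(lib, "d3d11.lib")
    #pragma comment(lib, "d2d1.lib")

    // Guessed chain: D3D11 device -> DXGI swap chain for the HWND ->
    // D2D device context targeting the back buffer -> draw the WIC bitmap.
    // Sketch only; COM Release() calls omitted for brevity.
    HRESULT PresentViaSwapChain(HWND hwnd, ID2D1Factory1* d2dFactory,
                                IWICBitmap* wicBitmap)
    {
        // D3D11 device with BGRA support (required for D2D interop).
        ID3D11Device* d3dDevice = nullptr;
        HRESULT hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
                                       D3D11_CREATE_DEVICE_BGRA_SUPPORT, nullptr, 0,
                                       D3D11_SDK_VERSION, &d3dDevice, nullptr, nullptr);
        if (FAILED(hr)) return hr;

        // DXGI swap chain bound to the window (the CreateSwapChainForHwnd step).
        IDXGIDevice* dxgiDevice = nullptr;
        d3dDevice->QueryInterface(&dxgiDevice);
        IDXGIAdapter* adapter = nullptr;
        dxgiDevice->GetAdapter(&adapter);
        IDXGIFactory2* dxgiFactory = nullptr;
        adapter->GetParent(IID_PPV_ARGS(&dxgiFactory));

        DXGI_SWAP_CHAIN_DESC1 desc = {};            // width/height 0 = size to hwnd
        desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
        desc.SampleDesc.Count = 1;
        desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
        desc.BufferCount = 2;
        desc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL;
        IDXGISwapChain1* swapChain = nullptr;
        hr = dxgiFactory->CreateSwapChainForHwnd(d3dDevice, hwnd, &desc,
                                                 nullptr, nullptr, &swapChain);
        if (FAILED(hr)) return hr;

        // D2D device context whose target wraps the swap chain's back buffer
        // (the CreateBitmapFromDxgiSurface step).
        ID2D1Device* d2dDevice = nullptr;
        d2dFactory->CreateDevice(dxgiDevice, &d2dDevice);
        ID2D1DeviceContext* dc = nullptr;
        d2dDevice->CreateateDeviceContext == nullptr; // (placeholder removed)
        d2dDevice->CreateDeviceContext(D2D1_DEVICE_CONTEXT_OPTIONS_NONE, &dc);

        IDXGISurface* backBuffer = nullptr;
        swapChain->GetBuffer(0, IID_PPV_ARGS(&backBuffer));
        D2D1_BITMAP_PROPERTIES1 props = D2D1::BitmapProperties1(
            D2D1_BITMAP_OPTIONS_TARGET | D2D1_BITMAP_OPTIONS_CANNOT_DRAW,
            D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_IGNORE));
        ID2D1Bitmap1* target = nullptr;
        dc->CreateBitmapFromDxgiSurface(backBuffer, &props, &target);
        dc->SetTarget(target);

        // Import the WIC pixels (the CreateBitmapFromWicBitmap step) and draw.
        // Assumes the WIC bitmap is 32bpp PBGRA; otherwise convert it first.
        ID2D1Bitmap* source = nullptr;
        hr = dc->CreateBitmapFromWicBitmap(wicBitmap, nullptr, &source);
        if (FAILED(hr)) return hr;
        dc->BeginDraw();
        dc->DrawBitmap(source);
        hr = dc->EndDraw();

        swapChain->Present(1, 0);  // flip the back buffer to the screen
        return hr;
    }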
EDIT (answer to MooseBoys):
This is NOT a general question about putting pixels on the screen. The pixels are already on one render target (a render target that was created with CreateWicBitmapRenderTarget and then drawn to with BeginDraw/DrawGlyphRun/EndDraw), and I now want to move those pixels to another render target (a render target on the actual screen).
In the GDI world, I would look for something like BitBlt to move pixels from one HDC to another. I wonder what the equivalent protocol is in the D2D world.
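The closest D2D analogue to BitBlt that I know of would be something like this - a minimal sketch assuming an HWND render target as the destination (which may not be what Office actually uses; the function name BlitWicToScreen is mine):

    #include <d2d1.h>
    #include <wincodec.h>
    #pragma comment(lib, "d2d1.lib")

    // Copy the pixels rendered into a WIC bitmap onto an HWND render target.
    // Assumes the WIC bitmap is in a D2D-compatible format (e.g. 32bpp PBGRA).
    HRESULT BlitWicToScreen(ID2D1HwndRenderTarget* hwndRT, IWICBitmap* wicBitmap)
    {
        // Import the WIC pixels into a D2D bitmap owned by the screen target.
        ID2D1Bitmap* bitmap = nullptr;
        HRESULT hr = hwndRT->CreateBitmapFromWicBitmap(wicBitmap, nullptr, &bitmap);
        if (FAILED(hr)) return hr;

        hwndRT->BeginDraw();
        D2D1_SIZE_F size = hwndRT->GetSize();
        hwndRT->DrawBitmap(bitmap, D2D1::RectF(0, 0, size.width, size.height));
        hr = hwndRT->EndDraw();

        bitmap->Release();
        return hr;
    }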
I am not sure if Revit is part of the Stack Overflow community, but it seems that the tag already existed, so I decided to give it a try.
I have created a section of my 3D model. In order to export it as a PDF, I created a new sheet and dragged a 2D view of the section onto the sheet.
On top of the 2D section I needed to add some elements as filled areas with different hatches. When I check the areas in Revit, they are shown with a non-transparent background (the section lines of the 2D drawing are not visible behind them); nevertheless, when exporting to PDF the areas are transparent and I can see the section lines through them.
When I check the transparency setting for the filled areas, the value is set to 0. Therefore, I would have expected no difference between the Revit view and the PDF version (meaning a non-transparent background).
Am I missing something?
The StackOverflow tag is actually revit-api, standing for the Revit API, the built-in .NET programming interface. The API wraps the UI for automation purposes. Do you know how to achieve what you need manually through the end user interface? If so, that knowledge will be very helpful in determining how to achieve the same programmatically. If not, it probably cannot be achieved programmatically either. The best place to ask how to achieve the desired result manually in the user interface is the generic Revit Architecture discussion forum.
I use "extended tracking" mode in my Vuforia Unity project, and I have found a problem: when my ARCamera loses track of the ImageTarget, the objects still display, but the virtual buttons no longer work. So my question is: do these virtual buttons only work while the ARCamera can recognize the ImageTarget?
The virtual buttons are not supposed to work when the target is lost, even when using the Extended Tracking feature. This is because virtual buttons work by covering specific features of the target, so if the target is not visible, they cannot work. Extended Tracking lets Vuforia estimate and keep reporting the position of the target by other means, and by definition it does not take virtual buttons into account.
You can find an article about Virtual Buttons on the Vuforia Library site here.
There is a paragraph saying:
The rectangle that you define for the area of a Virtual button should be equal to, or greater than, 10% of the overall target area. Button events are triggered when a significant proportion of the features underlying the area of the button are concealed from the camera. This can occur when the user covers the button or otherwise blocks it in the camera view. For this reason, the button should be sized appropriately for the source of the action it is intended to respond to. For example, a button that should be triggered by a user's finger needs to be smaller than one that will be triggered by their entire hand.
If the image target is losing tracking:
Assets => Editor => Vuforia => Image Target => 'your License Manager' => 'your photo' => change the texture type to Sprite
Just started working with this awesome external but have a couple of questions.
When the control is invoked, is it always the top layer, or can I have an image with a transparent background on top of it so I can frame the control nicely?
Also, my testing seems to read most barcodes, but when it comes to reading the barcodes on hard drives, the control does not want to decode those... Is the barcode pattern too dense?
I am very impressed thus far with the ease of use of your externals. Makes me want to code more for mobile devices!
An overlaying transparent image is not possible, as far as I know.
But couldn't you use
command mergZXingControlSetRect pLeft,pTop,pRight,pBottom
to define the rect of the scanner after creation, or
command mergZXingControlCreate pLeft,pTop,pRight,pBottom
to create the scanner control in the specified rect?
Set the rect smaller than the width and the height of the screen. You could then use an underlying image, displayed outside of the scanner rect, to show a frame around the scanner control. I did not test it myself, but I would assume that this should work.
Unfortunately the native controls in externals, and the ones the engine provides, are added as views on top of the LiveCode view. That means you can't intermingle LiveCode controls with them. One thing that some users have done is add a web view with a transparent background and load a PNG image. If you create the barcode view first and the web view second, then the web view will be on top.
I want to make a simple paint program in Visual C++ which allows the user to draw a path made up of a series of straight lines that follow on from each other. Once the user is done, they should double-click to stop drawing. It is important that I record the coordinates of the beginning and end points of each line of the path, because I want to use this information to find the magnitude and direction of each line using simple math. Can someone please give me somewhere to start and any other guidance?
You should start with a tutorial on MFC.
Learn the basics: the Document/View architecture and how painting is done (GDI and device contexts).
Basically, you should:
1. Create an MFC application (SDI - single document interface).
2. Handle OnLButtonDown (WM_LBUTTONDOWN), OnMouseMove (WM_MOUSEMOVE) and OnLButtonUp (WM_LBUTTONUP).
3. Maintain a dynamic array/list (e.g. CTypedPtrList) of the points.
4. Handle the double-click event (WM_LBUTTONDBLCLK) to detect completion.
You should call the Invalidate() function after each click in order to see the changes on the screen.
That's just a little bit of information to get you started.
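A rough sketch of those steps (my own illustration, not wizard-generated code - the class name CPaintView is hypothetical):

    #include <afxwin.h>
    #include <afxtempl.h>   // CArray

    // Sketch only: a CView subclass that collects clicked points and
    // redraws the path. A real SDI project would generate most of this.
    class CPaintView : public CView
    {
    public:
        CArray<CPoint, CPoint> m_points;   // vertices of the path
        bool m_drawing = false;            // are we currently drawing?

        afx_msg void OnLButtonDown(UINT nFlags, CPoint point)
        {
            m_points.Add(point);           // anchor the next line segment
            m_drawing = true;
            Invalidate();                  // repaint to show the new segment
        }

        afx_msg void OnLButtonDblClk(UINT nFlags, CPoint point)
        {
            m_points.Add(point);           // final vertex: the path is complete
            m_drawing = false;
            Invalidate();
        }

        virtual void OnDraw(CDC* pDC)
        {
            // Connect consecutive points with straight lines.
            for (INT_PTR i = 1; i < m_points.GetSize(); ++i)
            {
                pDC->MoveTo(m_points[i - 1]);
                pDC->LineTo(m_points[i]);
            }
        }

        DECLARE_MESSAGE_MAP()
    };

    BEGIN_MESSAGE_MAP(CPaintView, CView)
        ON_WM_LBUTTONDOWN()
        ON_WM_LBUTTONDBLCLK()
    END_MESSAGE_MAP()

In a real SDI project the application wizard generates the view class and message map for you; you would only add the handlers and the point container.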
You'll want:
a class or struct to represent a point (if you make it a class, it could have computation methods that would, for example, calculate the distance and direction to another point)
a member variable: an instance of a container class (list, array, etc) to hold your points
a member variable: a boolean flag to represent whether you are drawing or not (starting with not)
and you'll need to handle:
the mouse click event to instantiate a point and add it to your container
the mouse move event to draw a line from the last point to the current mouse position if the drawing flag is true
the mouse double-click event to add the double-click location to your container of points and turn off the drawing flag
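For example, the point type could look something like this (a sketch - the name PathPoint and both methods are my own illustration):

    #include <cmath>

    struct PathPoint
    {
        double x = 0.0;
        double y = 0.0;

        // Magnitude: straight-line distance to another point.
        double DistanceTo(const PathPoint& other) const
        {
            return std::hypot(other.x - x, other.y - y);
        }

        // Direction: angle to another point, in radians from the positive x-axis.
        double DirectionTo(const PathPoint& other) const
        {
            return std::atan2(other.y - y, other.x - x);
        }
    };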
Yaron's strategy doesn't draw lines until 2 points are clicked. Mine uses "rubberbanding" to anchor the first end of the line and then let the second end follow your cursor until you click to anchor it down. Use whichever one you like better.
If I were you, I would use Qt.
Qt widgets are great for user interfaces; you should check the Qt examples.
If you want to do image processing behind the scenes, you can use the ImageMagick library; it is great for any kind of image manipulation.
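To give you an idea, a minimal Qt widget for this kind of click-to-draw path could look roughly like the following (a sketch using Qt 5-style APIs; the class name PathWidget is mine):

    #include <QApplication>
    #include <QWidget>
    #include <QPainter>
    #include <QMouseEvent>
    #include <QVector>

    // Sketch of a widget that records clicked points and paints the polyline.
    class PathWidget : public QWidget
    {
    protected:
        void mousePressEvent(QMouseEvent* event) override
        {
            if (!m_done)
            {
                m_points.append(event->pos());  // anchor a new vertex
                update();                       // schedule a repaint
            }
        }

        void mouseDoubleClickEvent(QMouseEvent*) override
        {
            // Qt delivers a press event before the double-click, so the final
            // vertex has already been appended; just stop drawing.
            m_done = true;
        }

        void paintEvent(QPaintEvent*) override
        {
            QPainter painter(this);
            for (int i = 1; i < m_points.size(); ++i)
                painter.drawLine(m_points[i - 1], m_points[i]);
        }

    private:
        QVector<QPoint> m_points;
        bool m_done = false;
    };

    int main(int argc, char** argv)
    {
        QApplication app(argc, argv);
        PathWidget widget;
        widget.resize(640, 480);
        widget.show();
        return app.exec();
    }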
I have several nested X Windows - let's say a scrollable window within a scrollable window (see the example below). In this case the main window contains (at least) the major scroll bars and the (major) drawing area they control. This drawing area in turn contains (at least) a scrollable window batch - a (minor) main window containing a scroll bar and a minor drawing area.
During live scrolling of an inner drawing area the redraw procedure messes up, because I use XCopyArea to speed up the process: I move the contents that are still valid and invoke the actual redraw routine only for the newly exposed content. This works fine when the inner drawing batch stands by itself, but when it is nested within another one a problem occurs: while the inner scrolling batch is only partially visible (i.e. the major drawing area is scrolled), the redrawing of newly exposed contents is clipped away by the major drawing area and never actually performed, yet it is considered done. When the next scroll's XCopyArea picks up this supposedly-redrawn area, it is actually empty. Finally, this empty area shows up on the partially visible inner scrolling batch. Only on the next general redraw message does it get fixed.
If I could obtain the clipping mask for what is actually visible of my inner drawing area, I could adjust the XCopyArea() call and the redraw call and overcome the problem without resorting to plan "B", which is redrawing the entire contents on every scroll bar movement.
Example: I am developing a plugin for Mozilla Firefox and need to determine the region that describes the visible area of "my" window, i.e. the one that is passed to the plugin from the Mozilla system as the plugin viewport.
If it's really an X Window you get, and not a widget from some specific toolkit (like GTK+, maybe?), then you can use the XGetWindowAttributes function call.
This fills out a provided XWindowAttributes structure, which includes integers for the x and y position of the window as well as its width and height and other useful facts.
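For example (a minimal sketch, error handling omitted):

    #include <X11/Xlib.h>
    #include <cstdio>

    // Print a window's position (relative to its parent) and size.
    void PrintWindowGeometry(Display* display, Window window)
    {
        XWindowAttributes attrs;
        if (XGetWindowAttributes(display, window, &attrs))
        {
            std::printf("pos (%d, %d), size %d x %d\n",
                        attrs.x, attrs.y, attrs.width, attrs.height);
        }
    }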
But in reality I think you are probably using the Mozilla plugin API inherited from Netscape, a.k.a. NPAPI, and in that case what you get is a call to your function NPP_SetWindow() at least once (and again whenever something changes) with a structure that contains the information you're looking for. Try looking at http://www.mozilla.org/projects/plugins/ for more information about the APIs you should use.