Coded UI test on RDP

I need to record an RDP connection from my local machine through the Coded UI testing framework (a Visual Studio Coded UI project).
FYI: I have a Coded UI test project on my local machine, and as soon as I start recording I'm going to click on Remote Desktop Connection, and that needs to be recorded.

I've played with such a thing once. Coded UI does not support RDP; as far as I know, there is no way to record actions inside a remote desktop session.
If you really need to do something with Remote Desktop, you may try using the OpenCV library to visually identify the screen coordinates of your controls. I've done it once. The algorithm is:
take a screenshot of the UI control you want to click on;
save it inside your Coded UI project;
pass the image to the OpenCV library when the control is present on screen;
OpenCV returns the bounding rectangle of the control;
perform Mouse.Click(); inside the rectangle.
If you are ready to go with such a solution and need more information, please let me know. A rough sketch of the matching step follows below.
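For illustration, here is a minimal sketch of the matching and clicking steps in C#, assuming the OpenCvSharp wrapper and a screen capture already saved to disk (both of those choices, and the ClickControl name, are mine; the answer above only commits to "the OpenCV library"):

```csharp
using Microsoft.VisualStudio.TestTools.UITesting;
using OpenCvSharp;

public static class ScreenMatcher
{
    // Looks for the saved control image inside a screenshot of the desktop
    // and clicks the center of the best match via the Coded UI API.
    // Returns false if the best match is weaker than the threshold.
    public static bool ClickControl(string screenshotPath, string templatePath,
                                    double threshold = 0.9)
    {
        using var screen = Cv2.ImRead(screenshotPath, ImreadModes.Color);
        using var template = Cv2.ImRead(templatePath, ImreadModes.Color);
        using var result = new Mat();

        // Slide the template over the screenshot and score every position.
        Cv2.MatchTemplate(screen, template, result, TemplateMatchModes.CCoeffNormed);
        Cv2.MinMaxLoc(result, out _, out double maxVal, out _, out Point maxLoc);

        if (maxVal < threshold)
            return false; // control is not (confidently) on screen

        // maxLoc is the top-left corner of the matched rectangle.
        var center = new System.Drawing.Point(maxLoc.X + template.Width / 2,
                                              maxLoc.Y + template.Height / 2);
        Mouse.Click(center);
        return true;
    }
}
```

One caveat inherent to this approach: template matching is sensitive to resolution and theme, so the reference screenshots must be taken at the same scaling and color scheme as the RDP session you replay against.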

Related

Can we read text from the selected objects in perception simulation

I am automating a HoloLens application using perception simulation. In one of the scenarios, I need to perform a click action on a specific object based on its name. So, is it possible to read the text of the selected objects? (Note: I select the objects with a right-hand/left-hand move, and the selected object is highlighted with a distinguishing color.)
It seems that you want to build test automation for your app or File Explorer based on the HoloLens 2 emulator, and your requirement is to make it automatically tap an object with a matching name in the emulator.
If so, the emulator does not support recognizing text or returning data directly from application memory. However, you can provide more information about your business request and submit a feature request via the Feedback Hub, so it can be considered for future releases of the HoloLens 2 emulator.
For how to post a feedback request, you can follow this doc: Send feedback to Microsoft with the Feedback Hub app.
Outside the field of HoloLens app development, you can write your own desktop program that captures the view in the emulator window, then use OCR technology to recognize the characters on screen, and finally drive your input to the emulator according to the result. However, this is not a simple approach.
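To make that capture-then-OCR idea concrete, here is a rough C# sketch using the open-source Tesseract .NET wrapper; the library choice, the tessdata path, and the FindWord helper are all assumptions of mine, not something the answer prescribes:

```csharp
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using Tesseract;

public static class EmulatorOcr
{
    // Captures the given screen region (e.g. the emulator window) and returns
    // the screen coordinates of the center of the first recognized word that
    // matches 'target', or null if the word was not found.
    public static Point? FindWord(Rectangle emulatorRegion, string target)
    {
        string tempPath = Path.Combine(Path.GetTempPath(), "emulator.png");
        using (var bmp = new Bitmap(emulatorRegion.Width, emulatorRegion.Height))
        using (var g = Graphics.FromImage(bmp))
        {
            // Grab the emulator window's pixels from the screen.
            g.CopyFromScreen(emulatorRegion.Location, Point.Empty, emulatorRegion.Size);
            bmp.Save(tempPath, ImageFormat.Png);
        }

        using var engine = new TesseractEngine(@"./tessdata", "eng", EngineMode.Default);
        using var img = Pix.LoadFromFile(tempPath);
        using var page = engine.Process(img);
        using var iter = page.GetIterator();

        iter.Begin();
        do
        {
            string word = iter.GetText(PageIteratorLevel.Word)?.Trim();
            if (word == target && iter.TryGetBoundingBox(PageIteratorLevel.Word, out Rect box))
            {
                // Translate the word's box back into screen coordinates.
                return new Point(emulatorRegion.X + (box.X1 + box.X2) / 2,
                                 emulatorRegion.Y + (box.Y1 + box.Y2) / 2);
            }
        } while (iter.Next(PageIteratorLevel.Word));

        return null;
    }
}
```

The returned point could then be fed to whatever mechanism drives input into the emulator; as the answer says, wiring all of this up reliably is the hard part.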

Capture an invisible (i.e. locked) virtual desktop

For test automation I'd like to capture a virtual desktop which is not visible. It is not even accessible, as a secure desktop is shown.
I know it is possible to hook into the composite manager ("dwm") to capture each and every window on that desktop. And I know it is possible to send events to windows on that desktop. (I know that because otherwise the test tools wouldn't work.)
Before I start to re-implement the composite manager: is it possible to get the DesktopWindow from dwm, and if so, how do I force dwm to do its job even when a secure desktop is shown?
If I have to bite the bullet and need to implement compositing myself, what is the fastest way to order all windows bottom to top and render them to some image?
Does the Win10 capture API work for invisible desktops?
To answer the last question: no, the new Win10 capture API doesn't help. For example, the program
https://github.com/robmikh/SimpleRecorder/tree/master/SimpleRecorder
cannot capture a locked desktop nor can it capture sub windows.
The above is the elaborate version of:
GDI32Util.getScreenshot(handle)
with handle being the desktop window (doesn't work when locked) or some other window handle (works when locked, but misses the subwindows).
So the only option is to traverse all windows in z order from bottom to top.
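For what it's worth, a skeleton of that traversal in C# could look like the following; the Win32 calls (EnumWindows, GetWindowRect, PrintWindow) are real, but whether PrintWindow delivers content for every window on a locked desktop is exactly the open question here, so treat this as an untested sketch:

```csharp
using System;
using System.Collections.Generic;
using System.Drawing;
using System.Runtime.InteropServices;

public static class DesktopComposer
{
    delegate bool EnumWindowsProc(IntPtr hWnd, IntPtr lParam);

    [DllImport("user32.dll")] static extern bool EnumWindows(EnumWindowsProc cb, IntPtr lParam);
    [DllImport("user32.dll")] static extern bool IsWindowVisible(IntPtr hWnd);
    [DllImport("user32.dll")] static extern bool GetWindowRect(IntPtr hWnd, out RECT rect);
    [DllImport("user32.dll")] static extern bool PrintWindow(IntPtr hWnd, IntPtr hdc, uint flags);

    [StructLayout(LayoutKind.Sequential)]
    struct RECT { public int Left, Top, Right, Bottom; }

    const uint PW_RENDERFULLCONTENT = 2; // Win 8.1+: ask DWM for the full window content

    public static Bitmap Compose(int width, int height)
    {
        // EnumWindows reports top-level windows in Z order, topmost first,
        // so reverse the list to paint from bottom to top.
        var windows = new List<IntPtr>();
        EnumWindows((h, _) => { windows.Add(h); return true; }, IntPtr.Zero);
        windows.Reverse();

        var composite = new Bitmap(width, height);
        using var g = Graphics.FromImage(composite);
        foreach (var hWnd in windows)
        {
            if (!IsWindowVisible(hWnd) || !GetWindowRect(hWnd, out RECT r))
                continue;
            int w = r.Right - r.Left, ht = r.Bottom - r.Top;
            if (w <= 0 || ht <= 0)
                continue;

            using var windowBmp = new Bitmap(w, ht);
            using (var wg = Graphics.FromImage(windowBmp))
            {
                IntPtr hdc = wg.GetHdc();
                PrintWindow(hWnd, hdc, PW_RENDERFULLCONTENT); // render the window off-screen
                wg.ReleaseHdc(hdc);
            }
            g.DrawImage(windowBmp, r.Left, r.Top); // paint it at its screen position
        }
        return composite;
    }
}
```

Note that EnumWindows only sees the calling thread's desktop, so the capturing process would have to run on (or switch to) the target desktop, and child windows would need a similar recursive pass.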

Demystifying the Virtual Keyboard and Touchpad in Windows 10

I'm new to Windows development, and am looking for assistance on where to get started for a particular project.
In short, I want to create a windowed application that allows a user to send keyboard and mouse inputs to another application, by interacting with various UI controls via touch. Essentially a custom on-screen keyboard/touchpad that can be used for sending keyboard-shortcuts to other applications.
There are two applications in Windows 10 that behave exactly the way I would want my new app to - the On-Screen Keyboard and the Touchpad:
https://support.microsoft.com/en-us/help/4337906/windows-10-open-the-on-screen-touchpad
https://support.microsoft.com/en-us/help/10762/windows-use-on-screen-keyboard
At the most basic level, I want to define my own interface (or allow the end user to define their own), and use the same code that the onscreen keyboard/touchpad are using for handling touch events and injecting inputs into the system.
I'm uncertain at what level I would need to start to get the functionality I need - UWP? WPF? C++?
If anyone has any insight into how the on-screen utilities were built, I think that would give me an excellent head start.
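For a rough sense of the moving parts: utilities like these ultimately inject input through the Win32 SendInput API, and a "no-activate" extended window style keeps the on-screen tool from stealing focus from the target app. Below is a minimal WinForms sketch in C#; the class and helper names are mine, and a real implementation would need more care (scan codes, extended keys, UIPI restrictions):

```csharp
using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;

public class ShortcutPad : Form
{
    const int WS_EX_NOACTIVATE = 0x08000000;
    const uint INPUT_KEYBOARD = 1;
    const uint KEYEVENTF_KEYUP = 0x0002;

    [DllImport("user32.dll", SetLastError = true)]
    static extern uint SendInput(uint nInputs, INPUT[] pInputs, int cbSize);

    [StructLayout(LayoutKind.Sequential)]
    struct INPUT { public uint type; public InputUnion u; }

    [StructLayout(LayoutKind.Explicit)]
    struct InputUnion
    {
        [FieldOffset(0)] public MOUSEINPUT mi; // included so the union has the native size
        [FieldOffset(0)] public KEYBDINPUT ki;
    }

    [StructLayout(LayoutKind.Sequential)]
    struct MOUSEINPUT { public int dx, dy; public uint mouseData, dwFlags, time; public IntPtr dwExtraInfo; }

    [StructLayout(LayoutKind.Sequential)]
    struct KEYBDINPUT { public ushort wVk, wScan; public uint dwFlags, time; public IntPtr dwExtraInfo; }

    // Keep the pad from ever taking keyboard focus, so the injected
    // input lands in whichever window is currently active.
    protected override CreateParams CreateParams
    {
        get { var cp = base.CreateParams; cp.ExStyle |= WS_EX_NOACTIVATE; return cp; }
    }

    public ShortcutPad()
    {
        var button = new Button { Text = "Ctrl+S", Dock = DockStyle.Fill };
        button.Click += (s, e) => SendChord(0x11 /* VK_CONTROL */, 0x53 /* 'S' */);
        Controls.Add(button);
    }

    // Press modifier, press key, release key, release modifier.
    static void SendChord(ushort modifier, ushort key)
    {
        var inputs = new[]
        {
            KeyEvent(modifier, up: false), KeyEvent(key, up: false),
            KeyEvent(key, up: true), KeyEvent(modifier, up: true)
        };
        SendInput((uint)inputs.Length, inputs, Marshal.SizeOf<INPUT>());
    }

    static INPUT KeyEvent(ushort vk, bool up)
    {
        var input = new INPUT { type = INPUT_KEYBOARD };
        input.u.ki = new KEYBDINPUT { wVk = vk, dwFlags = up ? KEYEVENTF_KEYUP : 0u };
        return input;
    }
}
```

Running Application.Run(new ShortcutPad()) and tapping the button sends Ctrl+S to the focused application. Whether UWP or WPF is the better shell around this is largely a UI question; the injection itself is the same Win32 call either way.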

Linux Window Manager Forces Window Size/Location

We're using Red Hat Linux 6.4, and our application is built using Qt. The application has multiple windows and we support a layout system where our users can save the application layout and restore it later. The application is cross-platform, and on Windows, everything is fine. On Linux, we're having problems restoring windows when a window spans multiple monitors. Our configuration uses a single virtual X display spanning all monitors, and the users can manually position and size windows across the monitors as desired.
What we've found is that the window manager enforces a policy on windows that are positioned programmatically, forcing them not to span the divide between two monitors. When we attempt to restore a saved layout containing a window that spanned monitors, the window manager reduces its size and repositions it as it sees fit. Basically, as long as the user makes the change by dragging and resizing the window, the window manager respects it, but an application that sets it programmatically gets overridden. I'm sure someone somewhere thought this was a reasonable restriction, but our customers disagree.
A developer here has spent days searching and experimenting, trying to find a way to work around this behavior programmatically, or better yet, to tell the window manager to stop doing that. We're using the GNOME desktop and Qt 4.8.x.
Any ideas?
Thank you,
Doug McGrath

Hidden controls appearing in web instance [LabVIEW]

I have created an application in which the VI has some controls that are useful only during development and that can be unlocked in the application on special occasions. I basically use the app.kind property node to determine which environment the VI is running in and hide/unhide the controls accordingly.
I have the application published on the web using the NI Web Publishing Tool. On the computer that hosts the app everything works fine (and these controls remain invisible), but the controls can be seen on the web page. The VI is in "Embedded" mode. As a workaround I have pushed these controls some distance away and thus kept the user from discovering them, but this introduces the problem that I cannot view these controls when I unlock them.
Any help would be greatly appreciated.
You have built a stand-alone application and enabled the web server, correct?
Are you sure the web panel is connecting to the stand-alone application (app.kind=2) and is not reaching the development LabVIEW (app.kind=1) still listening on that web server port?
I would add an indicator to display the value of app.kind at all times.
What happens if you toggle the hidden fields on and off? I would add a button to do this on the vi.
Do they disappear/reappear reliably in the window where you have control?
Also, you said this was in Embedded mode - but are you also transferring control to the web page?
Those are some approaches I'd try to help pin this down.
