I am automating the HoloLens application using perception simulation. In one of the scenarios, I need to perform a click action on specific objects based on their name. So, is it possible to read the text of the selected objects? (Note: I select the objects using a right-hand/left-hand move, and the selected object is highlighted with a distinguishing color.)
It seems that you want to build test automation for your app or File Explorer based on the HoloLens 2 emulator, and that your requirement is to make it automatically tap an object with a matching name in the emulator.
If so, the emulator does not support recognizing text or directly returning data from the application's memory. However, you can provide more information about your business scenario and submit a feature request via the Feedback Hub so it can be considered for future releases of the HoloLens 2 emulator.
For how to post a feedback request, you can follow this doc: Send feedback to Microsoft with the Feedback Hub app.
Outside the scope of HoloLens app development, you can write your own desktop program to capture the view in the emulator window and then use OCR to recognize the text on the screen. Finally, send your input to the simulator based on the result. However, this is not a simple approach.
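For the capture-and-OCR route, a minimal C# sketch might look like the following. It assumes the Tesseract NuGet wrapper for the Tesseract OCR engine and System.Drawing for the screen grab; the emulator window bounds and the tessdata path are placeholders you would need to adjust:

using System.Drawing;
using System.Drawing.Imaging;
using Tesseract;

class EmulatorOcrSketch
{
    static void Main()
    {
        // Placeholder bounds of the emulator window on the desktop.
        var bounds = new Rectangle(0, 0, 1280, 720);

        using (var bitmap = new Bitmap(bounds.Width, bounds.Height))
        {
            using (var g = Graphics.FromImage(bitmap))
            {
                // Copy the emulator's on-screen pixels into the bitmap.
                g.CopyFromScreen(bounds.Location, Point.Empty, bounds.Size);
            }
            bitmap.Save("emulator.png", ImageFormat.Png);
        }

        // Run OCR over the captured image ("./tessdata" must contain eng.traineddata).
        using (var engine = new TesseractEngine("./tessdata", "eng", EngineMode.Default))
        using (var img = Pix.LoadFromFile("emulator.png"))
        using (var page = engine.Process(img))
        {
            System.Console.WriteLine(page.GetText());
            // From here, locate the object name in the recognized text and drive
            // your input to the simulator accordingly.
        }
    }
}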
In the new Microsoft Flight Simulator you can pop different cockpit displays out into their own external windows, like this:
However, none of the buttons needed to interact with the displays get "popped out" as well.
I'd like to build a web app that embeds the continuously updating image of one of these windows and surrounds it with buttons, etc., for interaction, so that it could run on, say, a tablet next to you.
My question is, is it possible with Node to embed the continuously updating image of a native Windows window within a webpage?
Stumbled upon the Screen Capture API. This is what I was looking for.
https://developer.mozilla.org/en-US/docs/Web/API/Screen_Capture_API
I'm new to Windows development, and am looking for assistance on where to get started for a particular project.
In short, I want to create a windowed application that allows a user to send keyboard and mouse inputs to another application by interacting with various UI controls via touch. Essentially, a custom on-screen keyboard/touchpad that can be used for sending keyboard shortcuts to other applications.
There are two applications in Windows 10 that perform exactly the way I would want my new app to - the On-Screen Keyboard and Touchpad:
https://support.microsoft.com/en-us/help/4337906/windows-10-open-the-on-screen-touchpad
https://support.microsoft.com/en-us/help/10762/windows-use-on-screen-keyboard
At the most basic level, I want to define my own interface (or allow the end user to define their own), and use the same code that the onscreen keyboard/touchpad are using for handling touch events and injecting inputs into the system.
I'm uncertain at what level I would need to start to get the functionality I need - UWP? WPF? C++?
If anyone has any insight into how the on-screen utilities were built, I think that would give me an excellent head start.
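For reference, one common low-level building block for this kind of system-wide input injection on desktop Windows (usable from WPF or C++ alike) is the Win32 SendInput function. A rough, illustrative C# P/Invoke sketch that injects a Ctrl+S shortcut into the foreground application (the shortcut itself is just a placeholder) could look like this:

using System;
using System.Runtime.InteropServices;

static class InputInjectionSketch
{
    const uint INPUT_KEYBOARD = 1;
    const uint KEYEVENTF_KEYUP = 0x0002;
    const ushort VK_CONTROL = 0x11;
    const ushort VK_S = 0x53;

    [StructLayout(LayoutKind.Sequential)]
    struct MOUSEINPUT { public int dx, dy; public uint mouseData, dwFlags, time; public IntPtr dwExtraInfo; }

    [StructLayout(LayoutKind.Sequential)]
    struct KEYBDINPUT { public ushort wVk, wScan; public uint dwFlags, time; public IntPtr dwExtraInfo; }

    // INPUT contains a union; MOUSEINPUT is its largest member, so both fields
    // are overlaid at the same offset to keep the native struct size correct.
    [StructLayout(LayoutKind.Explicit)]
    struct InputUnion
    {
        [FieldOffset(0)] public MOUSEINPUT mi;
        [FieldOffset(0)] public KEYBDINPUT ki;
    }

    [StructLayout(LayoutKind.Sequential)]
    struct INPUT { public uint type; public InputUnion u; }

    [DllImport("user32.dll", SetLastError = true)]
    static extern uint SendInput(uint nInputs, INPUT[] pInputs, int cbSize);

    static INPUT Key(ushort vk, bool keyUp) => new INPUT
    {
        type = INPUT_KEYBOARD,
        u = new InputUnion { ki = new KEYBDINPUT { wVk = vk, dwFlags = keyUp ? KEYEVENTF_KEYUP : 0u } }
    };

    // Press and release Ctrl+S in whatever application currently has focus.
    public static void SendCtrlS()
    {
        var inputs = new[]
        {
            Key(VK_CONTROL, keyUp: false),
            Key(VK_S, keyUp: false),
            Key(VK_S, keyUp: true),
            Key(VK_CONTROL, keyUp: true)
        };
        SendInput((uint)inputs.Length, inputs, Marshal.SizeOf<INPUT>());
    }
}

On the UWP side, the Windows.UI.Input.Preview.Injection.InputInjector class offers a comparable (though more restricted) injection mechanism.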
I have built an Action with the Actions on Google (2.5.0) and dialogflow-fulfillment (0.6.1) Node.js libraries. I cannot test my app in the Dialogflow test console because I return a conv object, which is not supported there. Now, I cannot test it in the Google Actions simulator, either. This is the error I get:
Invocation Error
You cannot use standard Google Assistant features in the Simulator. If you want to try them, use Google Assistant on your phone or other compatible devices.
I'd like to use the simulator, so I can debug better.
It is as the error message says: the simulator lacks many features that normal Assistant surfaces (speakers, the Assistant app) have, and it can sometimes even give you completely wrong error messages. There is really no way around testing your app on real devices.
You can, however, view the same logs that you see in the simulator in Google Stackdriver Logging. To activate this, go to the settings of your Dialogflow agent, select the "General" tab, and activate the "Log interactions to Google Cloud" option. Then click the link below that button to get to the logs. The default view will probably show you only the Actions on Google logs, i.e. the requests between your users and AoG. To see the requests between Dialogflow and your webhook, click the dropdown arrow in the filter box, select "Convert to advanced filter", and set the filter to resource.type="global".
If you have multiple Actions projects that use the same display name, the simulator chooses one at random. For consistent testing results, use unique names or release channels for each Action.
Reference Link: https://support.google.com/actions-console/answer/9613473?hl=en
Now, how do you set or change the display name?
Go to the Develop tab and set or change the display name as follows.
You should definitely be able to test your Action in the Actions simulator. Note that the interaction models between the Dialogflow and Actions simulators are different: in Dialogflow, you can send commands directly to your agent, whereas in the Actions simulator, you first need to invoke your Action.
At the bottom of the screen, you'll see a suggested input like "talk to my test app".
You'll need to send this, or a similar command, first. That will invoke your Action, and you'll then be able to send commands to it. A banner at the top of the simulator will show that it has been invoked.
I'm using node-notifier (link) in Node.js to show a toast notification in Windows 8. I have it working, and I'm able to adjust the title, text, and main image in the notification just fine. However, in a Windows 8 toast notification, there is a secondary (smaller) image. See below:
So, node-notifier uses toaster, which in turn uses ToastNotificationManager. But I cannot find any reference anywhere to this secondary image. I've looked here and here on Microsoft's site.
This secondary image also shows in other notifications I receive from applications like Outlook, Slack, etc.
Where is this secondary image coming from? Is the documentation just out of date? Can Toaster be modified to access this secondary image?
The secondary image is the icon for the shortcut in the Start Menu folder for the program registered to raise a toast. To change it, you'll need to modify the icon on the shortcut.
For a desktop application to use the ToastNotificationManager class, it is required to have a shortcut in the Start Menu and an AppUserModelId associated with that shortcut. At ToastNotificationManager creation time, the caller passes in the same AppUserModelId, which ties back to the icon associated with the shortcut. More about registering desktop applications to raise toasts can be found on this MSDN documentation page.
Looking at the toaster code here, it is installing the shortcut to a file called toast.lnk in the Start Menu:
String shortcutPath =
Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData) +
"\\Microsoft\\Windows\\Start Menu\\Programs\\toast.lnk";
And, it is creating the shortcut targeting the initial calling process:
String exePath = Process.GetCurrentProcess().MainModule.FileName;
Manually updating the icon on the shortcut should confirm that you can change what is shown locally, but an update to toaster to set the icon location is likely required (to support multiple callers with different shortcuts, or to have it call IShellLink::SetIconLocation).
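As a quick local experiment, one way to repoint the icon of the toast.lnk shortcut that toaster creates is through the Windows Script Host shortcut object, which exposes an IconLocation property; the icon path below is a placeholder:

using System;

class ShortcutIconSketch
{
    static void Main()
    {
        string shortcutPath =
            Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData) +
            "\\Microsoft\\Windows\\Start Menu\\Programs\\toast.lnk";

        // Late-bound COM access to the Windows Script Host shell object.
        Type shellType = Type.GetTypeFromProgID("WScript.Shell");
        dynamic shell = Activator.CreateInstance(shellType);

        // CreateShortcut loads the existing .lnk so its properties can be edited.
        dynamic shortcut = shell.CreateShortcut(shortcutPath);
        shortcut.IconLocation = "C:\\path\\to\\your.ico,0"; // placeholder icon path
        shortcut.Save();
    }
}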
I need to record an RDP connection from my local machine through the Coded UI testing framework (a Visual Studio Coded UI project).
FYI, I have a Coded UI test project on my local machine; as soon as I start recording, I am going to click on Remote Desktop Connection, and those actions need to be recorded.
I've played with such a thing once. Coded UI does not support RDP; I've not heard of any way to simply record actions inside a Remote Desktop session.
If you really need to do something with Remote Desktop, you may try using the OpenCV library to visually identify the screen coordinates of your controls. I've done it once. The algorithm is as follows (a rough code sketch is shown after the steps):
Make a screenshot of the UI control you want to click on;
Save it inside your Coded UI project;
Pass the image to the OpenCV library when the control is present on screen;
OpenCV returns the screen rectangle of the control;
Perform Mouse.Click() inside the rectangle.
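A rough sketch of those steps, assuming the OpenCvSharp wrapper for OpenCV and the Coded UI Mouse class (the image file names and the match threshold are placeholders):

using Microsoft.VisualStudio.TestTools.UITesting;
using OpenCvSharp;

class RemoteDesktopClickSketch
{
    public static void ClickControl()
    {
        // "screen.png" is a capture of the RDP window, "button.png" is the control
        // image stored in the Coded UI project (both names are placeholders).
        using (var screen = Cv2.ImRead("screen.png", ImreadModes.Grayscale))
        using (var template = Cv2.ImRead("button.png", ImreadModes.Grayscale))
        using (var result = new Mat())
        {
            // Slide the template image over the screenshot and score every position.
            Cv2.MatchTemplate(screen, template, result, TemplateMatchModes.CCoeffNormed);
            Cv2.MinMaxLoc(result, out double _, out double maxVal,
                out OpenCvSharp.Point _, out OpenCvSharp.Point maxLoc);

            if (maxVal > 0.9) // confidence threshold, tune for your screenshots
            {
                // Click the center of the matched rectangle with the Coded UI Mouse class.
                int x = maxLoc.X + template.Width / 2;
                int y = maxLoc.Y + template.Height / 2;
                Mouse.Click(new System.Drawing.Point(x, y));
            }
        }
    }
}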
If you are ready to go with such a solution and need more information, please let me know...