I'm struggling to find a working solution and could use some help.
I'm working on a small IoT project where I want to make use of NFC tags.
I've succeeded in reading/writing while the app is open, but I want to read tags while the app is closed.
More or less, I just want to send a small UDP message when the appropriate NFC tag is read, which turns out to be a bit more difficult to do from a background task.
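The UDP send itself is the easy part. Here's a minimal sketch in plain Java, just to illustrate the kind of fire-and-forget message I mean (the host, port, and payload are made up; in the actual UWP app this would go through Windows.Networking.Sockets instead):

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class TagPing {
    public static void main(String[] args) throws Exception {
        // Hypothetical receiver; replace with the real host and port.
        InetAddress host = InetAddress.getByName("192.168.1.50");
        int port = 9000;
        byte[] payload = "TAG_SCANNED".getBytes("UTF-8");

        try (DatagramSocket socket = new DatagramSocket()) {
            // Fire-and-forget: no delivery guarantee, which is fine
            // for a simple "tag was scanned" notification.
            socket.send(new DatagramPacket(payload, payload.length, host, port));
        }
    }
}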
The main headache is that I can't find a task trigger that fires on NFC activity. I've tried SmartCardTrigger and ProximitySensorTrigger from the following sources:
https://msdn.microsoft.com/en-us/windows/uwp/devices-sensors/host-card-emulation
https://github.com/Microsoft/Windows-universal-samples/tree/master/Samples/ProximitySensor
The ProximitySensorTrigger seems to fire almost at random, and if anything it fires less often when I hold the NFC tag against the phone. Maybe I'm doing something wrong.
The SmartCardTrigger doesn't fire at all. I assume the EmulatorNearFieldEntry trigger type is what I want, but for some reason it appears to be unsupported (?).
Anyhow, I'm using a Lumia 920 running Windows 10 Mobile. To my knowledge it does not support smart cards, but I hoped it could use the same trigger for NFC tags.
Reading the responses to a similar question, Akash Chowdary suggested that it may be possible to write a custom trigger. If you have any tips that might point me in the right direction, please share them. I'm capable of doing the research, but it's a big sea, and it would really help to know where to start ^^.
I'm quite the noob when it comes to background tasks, and I'm very confused about why, after registering a SmartCardTrigger task, I have no tasks running.
If I register, for example, a TimeZoneChange trigger or a ProximitySensor trigger, the task shows up as it should. Maybe it's because my Lumia doesn't support the SmartCardTrigger? I would have expected it to throw an error if that were the case, but what do I know.
TL;DR: I want to read NFC tags from a background task. How do I do that on a Lumia 920 in a basic UWP project?
I successfully wrote a basic JS application that reads the Stereo/Left/Right value of a Sony Bluetooth speaker. I did this by going to chrome://bluetooth-internals/, finding and connecting to the device, and then clicking Read on each characteristic while changing the stereo setting on the speaker. Eventually I found the characteristic that changes when I change the setting.
From this I was able to get the service and characteristic IDs and write a basic prototype JS application that reads the data live once notifications have started. I used code from various samples on the following page to build it: https://googlechrome.github.io/samples/web-bluetooth/index.html
My second experiment is with a Goodmans fitness watch, something very close to this (the exact model does not seem to be online).
It provides information like heart rate, number of steps completed, etc.
I successfully paired it with the recommended app and can confirm that I can receive data using this Orunning app.
The next step I was hoping to achieve was to repeat the same process I used with the speaker. However, the values do not seem to update in chrome://bluetooth-internals/.
I connect to the device successfully, check all the characteristics that have read values, and then try reading them again, but no updates occur even though the data on the watch has changed.
My questions are:
Is there a better way to figure out which service you are looking for? I have been reading this to get a better understanding: https://learn.adafruit.com/introduction-to-bluetooth-low-energy/further-information
Am I missing something about how it reads data? I even tried the characteristics with notify permissions, but literally no values have changed.
I even disconnected everything, refreshed the browser, and reconnected, and when I click Read on all the characteristics, it's still the same values.
I have searched online to try to understand this process, so if there is documentation you can point me to that covers this specifically, I would appreciate it.
I have a problem with Azure Media Services. It's configured to take a stream from an RTMP source and encode it to multiple resolutions (pretty standard, I think). The problem is that when the source stream ends (for example, the power goes down or the internet disconnects) and I resume streaming, it doesn't come back, so to speak.
The only thing anyone using the player can see is the slate that I've set up.
It happens with every piece of software I could use, that is, OBS, FLE, and vMix.
The stream is published the whole time, and I'm using the DefaultProgram, but this happens anyway; it doesn't matter whether the program is the default one or created manually.
If anyone has an idea what's going on, it would be greatly appreciated.
Unfortunately, if you disconnect the ingest stream, the current solution is to restart the channel.
To give a brief backstory and bring things up to my current position / the reason for my question:
I originally wanted to use SendKeys to send keyboard presses to a Citrix XenApp remote terminal application (a VT320 emulator).
This does not work.
After some investigation it became apparent that this is a reasonably common issue.
I eventually found a workaround that involves opening the Windows 'On-Screen Keyboard' (OSK) application and sending mouse clicks to the OSK app itself using VBA. The keystrokes are then successfully received in the remote terminal application.
This is a rather awkward and impractical solution, as it relies on many factors, e.g. screen resolution and the coordinates / current position of the OSK.
With the above in mind, I am looking for a more foolproof method, and here are my thoughts:
Rather than using simulated mouse clicks, I would ideally like to either 'embed' the OSK app into the Excel instance and reference each key,
or hide the app and find a way to make it receive the keys sent from VBA.
I'm aware that SendKeys has its limitations, so I have also tried SendInput and keybd_event, and these didn't work either.
To any half-experienced expert it's clear that I'm a beginner, so perhaps I'm suffering from a lack of knowledge here.
If anyone can point me in the right direction for solving this issue, I'd really appreciate it!
Many thanks.
EDIT
I've looked into this a little more and found this post:
Finding the class name of the On-Screen Keyboard?
This would suggest that if I know the window class of the On-Screen Keyboard, I could use its commands from within Excel VBA?
I did try to use the code within the question but couldn't get it to work.
So hopefully my question is a little easier to answer?
Can I use the class name of the On-Screen Keyboard app and declare an API function that will allow me to send simulated key presses as if the OSK app were being clicked by the mouse?
Hopefully someone can help!!
Trying to automate apps locally can be quite fiddly. Doing it through a Citrix HDX connection is just painful.
Do you have any say over the Citrix environment? If so, I'd try writing an automation app that actually runs on the Citrix server in the same session as the published app you're trying to automate. This has the advantage that you're effectively automating a local app, which makes life easier.
Depending on how your automation works you may need to communicate between your automation app running in the Citrix session and your client. You could use WCF to bridge the two together.
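To make the session-side idea concrete, here's a minimal sketch of an automation helper using java.awt.Robot, which synthesizes real keyboard input into whatever window has focus. This is just one possible toolkit (your existing setup is VBA/WinAPI), and the keystrokes here are arbitrary examples:

import java.awt.Robot;
import java.awt.event.KeyEvent;

public class SessionAutomation {
    public static void main(String[] args) throws Exception {
        Robot robot = new Robot();
        robot.setAutoDelay(50); // small pause between synthesized events

        // Assumes the terminal emulator window currently has focus.
        // Types "AB" followed by Enter as real input events.
        for (int keyCode : new int[] { KeyEvent.VK_A, KeyEvent.VK_B, KeyEvent.VK_ENTER }) {
            robot.keyPress(keyCode);
            robot.keyRelease(keyCode);
        }
    }
}

Because this runs inside the Citrix session itself, the published app sees ordinary local input rather than keystrokes squeezed through the HDX channel.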
So that's how I would try to do it. As regards your specific question, I've provided some thoughts below...
OSK automation thoughts
I've done some limited automation of the OSK. There are actually two OSKs if you're using Win8. Osk.exe is the old one, which has been around a while. TabTip.exe is the new Win8-specific OSK.
One problem to keep in mind is that both of these processes run as high-integrity processes, which means normal (medium) integrity processes have very limited ability to automate them. So while I could automate some things, many messages would just get ignored. This may be why you are finding the OSK is not responding as you expect.
You can work around this by running your automation app as a high-integrity process, but this generally means you need local admin (or local system) privilege to start it. I never looked into the specifics of how you create high-integrity processes. I know there's a command-line tool you can use to force a process to run at a certain integrity level (icacls.exe), e.g.
https://msdn.microsoft.com/en-us/library/bb625960.aspx
I imagine there would be APIs to do this as well.
I'll be the first to admit that my programming experience and skill in web services are practically non-existent. I usually program things that run completely isolated or locally, in either C or assembly. I'm proficient enough to get a website going, with some basic authentication and directory read access on the system. That's about it.
I'm trying to do a project that's well outside my comfort zone and get some experience in controlling things remotely / via the web. On a Raspberry Pi running Debian, I'm running a C program that takes in information such as video and UART data, does some crunching, triggers some outputs, and writes events to a file/folder. This component is fairly straightforward to get running automatically. Getting a web server up so a remote user can look at the files and pictures the driver program creates is also extremely easy.
The problem for me comes in trying to make a GUI on a web page that can be used to manually control these outputs. I'm clearly going to need some scripting to handle the button presses on the web page, but is there a scripting language in particular that stands out for using kernel objects/system calls so I can actually talk to that process? I figure the best way is to use message queues, but I don't know whether Python or PHP (or another scripting language) is capable of this, or whether any of them is better at it than the others. What is the preferred way of doing this?
I know it's possible, since we've all seen those kitten-cams with the Flash container where you can move the camera or trigger things. I just have no idea where to start.
Thanks for any help
Java can call native code via JNI (http://en.wikipedia.org/wiki/Java_Native_Interface) from within a JVM. So if you already have C code that can handle the controls, it's just a matter of getting Java code to call it.
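As a rough sketch of what that bridge looks like (all names here are hypothetical; the native library would be your existing C code plus a thin JNI wrapper):

// Controls.java: Java side of a hypothetical JNI bridge.
public class Controls {
    static {
        // Loads libcontrols.so, built from your C code plus the JNI glue.
        System.loadLibrary("controls");
    }

    // Implemented in C. The matching C symbol would be:
    // JNIEXPORT void JNICALL Java_Controls_setOutput(JNIEnv *, jclass, jint, jboolean)
    public static native void setOutput(int pin, boolean on);

    public static void main(String[] args) {
        setOutput(4, true); // e.g. switch output 4 on
    }
}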
As for the scripts to handle button presses, there are several options. One way is to do it asynchronously via AJAX (which requires some JavaScript knowledge); the other is the traditional page refresh on each press. Sorry to be a bit vague, but a full answer requires a lengthy explanation of how the whole JSP (JavaServer Pages)/servlets ecosystem works.
Here's a good place to start:
http://www.apl.jhu.edu/~hall/java/Servlet-Tutorial/
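For the page-refresh flavour, the server side could be as simple as a servlet like this (a sketch only; the URL mapping, parameter names, and the Controls class from the JNI example above are all assumptions):

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Mapped to e.g. /control in web.xml; the web page posts to it
// from a form submit or an AJAX call.
public class ControlServlet extends HttpServlet {
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        int pin = Integer.parseInt(req.getParameter("pin"));
        boolean on = "on".equals(req.getParameter("state"));
        Controls.setOutput(pin, on); // native call from the JNI sketch above
        resp.setContentType("text/plain");
        resp.getWriter().print("OK");
    }
}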
We are using the native BlackBerry camera in our app, using the Invoke class to start the camera. We listen for an image being written to the filesystem, and when the user is finished with the camera, we call
Application.getApplication().requestForeground();
inside fileJournalChanged() to get back to our app.
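For context, the listener is roughly this shape (trimmed down; the .jpg filter is an assumption, and error handling is omitted):

import net.rim.device.api.io.file.FileSystemJournal;
import net.rim.device.api.io.file.FileSystemJournalEntry;
import net.rim.device.api.io.file.FileSystemJournalListener;
import net.rim.device.api.system.Application;

public class CameraWatcher implements FileSystemJournalListener {
    private long lastUSN = FileSystemJournal.getNextUSN();

    public void fileJournalChanged() {
        long nextUSN = FileSystemJournal.getNextUSN();
        for (long usn = nextUSN - 1; usn >= lastUSN; usn--) {
            FileSystemJournalEntry entry = FileSystemJournal.getEntry(usn);
            if (entry != null
                    && entry.getEvent() == FileSystemJournalEntry.FILE_ADDED
                    && entry.getPath().endsWith(".jpg")) {
                // A new photo was written: bring our app back to the front.
                Application.getApplication().requestForeground();
                break;
            }
        }
        lastUSN = nextUSN;
    }
}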
This causes a problem with the camera lingering on the image-taking screen on some devices, some of the time. If you want the gory details, you can see my post on the BB forums from a while back:
http://supportforums.blackberry.com/t5/Java-Development/restore-invoked-camera-after-deleting-an-image-from-the/m-p/511332
Suffice it to say, I am still trying to fix this. Using EventInjector to inject an ESC key press works; however, in this question
Getting Event Injector Permission
it is described as a security threat. However, this is widely suggested as the way to close the camera and work around other issues. Has anyone had problems using this method to close the camera, or to do anything else? Is there a better "best practices" method for closing the camera, as there apparently is in Android (I don't actually know; a senior developer here mentioned it)?
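For reference, the injection I'm talking about is along these lines (a sketch; it needs the input simulation permission discussed in the answer below):

import net.rim.device.api.system.Characters;
import net.rim.device.api.system.EventInjector;

public class CameraCloser {
    // Injects an ESC press/release pair to dismiss the native camera.
    public static void pressEscape() {
        EventInjector.invokeEvent(new EventInjector.KeyEvent(
                EventInjector.KeyEvent.KEY_DOWN, Characters.ESCAPE, 0));
        EventInjector.invokeEvent(new EventInjector.KeyEvent(
                EventInjector.KeyEvent.KEY_UP, Characters.ESCAPE, 0));
    }
}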
By "problems" I guess I really mean business rules types of problems... app getting blacklisted by an organization, slammed in the app store, etc?
Thanks in advance, this has been troubling me for a while.
I think the biggest problem you'll face is that using event injection requires a special application permission, ApplicationPermissions.PERMISSION_INPUT_SIMULATION to be exact. Since granting an application this permission basically allows it to simulate input events in ANY application at any time, it is considered quite dangerous: a badly written or intentionally malicious application could do a lot of damage. Therefore many end users and businesses do not allow applications that require this permission.
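If you do go down that road anyway, the permission has to be requested explicitly, along these lines (a sketch):

import net.rim.device.api.applicationcontrol.ApplicationPermissions;
import net.rim.device.api.applicationcontrol.ApplicationPermissionsManager;

public class PermissionHelper {
    // Prompts for input simulation; on many devices or under IT policy
    // this will simply be denied.
    public static boolean requestInputSimulation() {
        ApplicationPermissions request = new ApplicationPermissions();
        request.addPermission(ApplicationPermissions.PERMISSION_INPUT_SIMULATION);
        return ApplicationPermissionsManager.getInstance()
                .invokePermissionsRequest(request);
    }
}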