Is it possible to share my position to another device (secured)? - android-studio

I am trying to share my position with another device (securely), such as my second phone. I use Mapbox and Android Studio, and I am able to see my position, but only on my own phone. I have tried to find a solution to this problem, so I want to know whether it is possible to do this.
Sincerely, Tony

I see that you are new here. I would recommend that you not ask questions about problems you have not yet properly researched on your own.
It would help if you could share at least your thoughts on how you would approach this task. Also, make sure to ask specific questions; otherwise you will get unspecific answers, like the one below.
Now to your problem:
This is definitely possible, and there are many ways to accomplish it. The right approach depends on the means at your disposal. Do you have a webserver that you can utilize? Or would you like to transmit the position directly from one device to another?
Via Webserver:
Create a server-side script that listens for HTTP POST requests and writes the POST parameter (your position) to a database/file.
Create a second script that answers requests for this position (a sketch of both scripts follows these steps).
Call script 1 from Device 1, which wants to share its position.
Call script 2 from Device 2, which wants to read the position.
Display the position on a map in the application on Device 2.
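For illustration, here is a minimal sketch of scripts 1 and 2 combined into a single Flask app. Flask, the endpoint names, and the in-memory store are my own assumptions; in practice you would persist to a real database, and serve over HTTPS with some form of authentication to keep it "secured".

    # Minimal sketch: script 1 (store a position) and script 2 (read it
    # back), combined into one Flask app. Everything here is illustrative.
    from flask import Flask, request, jsonify

    app = Flask(__name__)
    positions = {}  # device_id -> {"lat": ..., "lng": ...}; use a DB in practice

    @app.route("/position/<device_id>", methods=["POST"])
    def share_position(device_id):
        # Script 1: Device 1 POSTs its coordinates here.
        positions[device_id] = {
            "lat": float(request.form["lat"]),
            "lng": float(request.form["lng"]),
        }
        return "", 204

    @app.route("/position/<device_id>", methods=["GET"])
    def read_position(device_id):
        # Script 2: Device 2 GETs the last known coordinates here.
        if device_id not in positions:
            return "", 404
        return jsonify(positions[device_id])

    if __name__ == "__main__":
        app.run()

Device 1 would then POST its lat/lng form fields on a timer, and Device 2 would poll the GET endpoint and hand the coordinates to its Mapbox map.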
Direct, one device to another:
You could even send your position via text message and have your mobile application read the message, then display the position on a map.
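The Android SMS APIs are more than a short sketch can cover, but the message body itself can be as simple as a geo URI (RFC 5870). A minimal encode/decode of that payload in Python, with the format choice being my assumption:

    # Sketch of a possible SMS payload: a geo URI (RFC 5870). The sending
    # app writes the URI into the message body; the receiving app parses
    # it and hands the coordinates to its map view.
    def encode_position(lat, lng):
        return "geo:%.6f,%.6f" % (lat, lng)

    def decode_position(body):
        if not body.startswith("geo:"):
            raise ValueError("not a geo URI")
        lat, lng = body[len("geo:"):].split(",", 1)
        return float(lat), float(lng)

    # encode_position(48.8584, 2.2945) -> "geo:48.858400,2.294500"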

Related

Localize HoloLens in mapped space between application runs (without anchors)

Is it possible to localize the HoloLens in a specific map between application runs without using anchors? I would like to use the coordinate system provided by SpatialStationaryFrameOfReference, as it makes more sense for my use case than anchors. However, this would obviously result in a different origin for each application run. Therefore, I am looking for a way to get the transform from SpatialStationaryFrameOfReference to an absolute position in the current map.
For example, the 3D view in the Device Portal always shows the HoloLens in the correct relation to the current map, even when no anchors are placed in it. Therefore, I thought it should be possible to do this from an application as well. So far, the only approach I have come up with is placing some arbitrary anchor that would then be used to get a transform from the anchor's coordinate system to that of the stationary reference frame (using TryGetTransformTo). I guess this should technically work, but there might be a better way, right? Thanks.
Another way to set the application's origin is to detect a QR code in the environment and establish a coordinate system from it. For more information, please see: QR code tracking

Having several GoogleResponses in a row without user input or interaction

I am working on a cooking recipe app for Google Home, and I need a way to string several GoogleResponses (SimpleResponse, etc.) together without requiring user interaction between them.
I have searched for other answers pertaining to this, and while I have found a few similar questions to mine, the replies tend to be along the lines of "the system was designed for dialogues so what would be the point?".
I fully understand this point of view, however because of the nature and behaviour requirements of the app that I am developing I find myself in need of this particular possibility.
The recipes are divided into steps (revolutionary, I know...) and there is roughly a one-to-one correspondence between steps and GoogleResponses.
To give an example of how a typical recipe unfolds it is usually like this (this is a simplification of course):
main content -> question -> main content -> question -> etc..
With each instance of "main content" being a step of the recipe and each "question" requiring user input.
If it were just like this all the time, there would be no problem: I could just bundle each "main content -> question" section into one GoogleResponse and be done.
However there are often times where the recipe flows more like:
main content -> main content -> main content -> question
With each "main content" being a step in the recipe, it does not make sense in this context to bundle them together into the same response (there is a system for the user to move back and forth between steps).
I was originally using MediaResponses for the "main content" sections, since those do not require user input to move on to the next step. But for various reasons I won't go into here, as this is already getting quite long, the project manager has decided that MediaResponses should not be used in this project.
The short answer is the one you already encountered - trying to make conversational actions not-so-conversational doesn't work very well. However, there are a few things you can look into.
Recipe Structured Data
Since you're working on a recipe action, specifically, it may be worthwhile to use the standard recipe support that comes with the Assistant.
On the upside - people will be familiar with it, and you don't need to do much code, just provide markup on a webpage.
On the downside - if you have other requirements for how you want the interaction to go, it isn't that flexible. (For example, if you're asking questions at some of the recipe points, or if you want to offer measurement adjustments based on number of people to serve.)
Misuse the "No Input" event
You can configure dynamic reprompts so that you get an event if the user doesn't say anything for a few seconds. If they want to speed things up, they can ask for the next step explicitly, or you can catch the actions_intent_NO_INPUT event in Dialogflow and advance yourself (see the sketch after the caveats below).
There are a few downsides here:
Not all devices support no-input events. In particular, mobile devices won't generate them.
This may only be valid for two no-input events in a row. On the third event, the Assistant may automatically close the conversation. (The documentation is unclear on this, and the exact behavior has changed over time.)
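For what it's worth, here is a sketch of the no-input trick as a bare Dialogflow v2 webhook (Flask is my assumption; the intent name, session store, and get_step() are hypothetical, while the request/response JSON fields are Dialogflow's own):

    # Sketch: advance the recipe when the intent mapped to
    # actions_intent_NO_INPUT fires. Illustrative only.
    from flask import Flask, request, jsonify

    app = Flask(__name__)
    current_step = {}  # session -> step index; use real storage in practice

    def get_step(i):
        # Hypothetical recipe lookup.
        return "Step %d of the recipe..." % (i + 1)

    @app.route("/webhook", methods=["POST"])
    def webhook():
        body = request.get_json()
        session = body["session"]
        intent = body["queryResult"]["intent"]["displayName"]
        step = current_step.get(session, 0)
        if intent == "No Input":  # the intent you attach the event to
            step += 1             # user said nothing: move to the next step
            current_step[session] = step
        return jsonify({"fulfillmentText": get_step(step)})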
Media Response
You don't say why Media Response "shouldn't be used", but it is one of the only ways to trigger an event when the spoken response has completed.
There are several downsides, however:
There are a number of bugs with Media Response around quitting.
On devices with screens, there is a media player. Since the media itself is incidental to what you're doing, having the player doesn't make sense.
It isn't supported on all surfaces.
Interactive Canvas
A similar approach, however, would be to use the Interactive Canvas. This gives you an HTML page with JavaScript that you control, including the ability to send responses to the server as if the user had spoken them (or as if they had touched a suggestion chip). You can also listen for events that tell you when the generated speech has finished.
There are, however, a number of downsides which probably prevent you from using this right now:
The biggest is that the Interactive Canvas can only be used for games right now. (But this seems to be a policy decision, rather than a technical one. So perhaps it will be lifted in the future.)
It does not work on smart speakers - only some devices with screens.
Combining the above approaches
One way to get around the device limitations of the Interactive Canvas and the poor visuals that accompany Media Response might be to mix the two. For devices that support IC, use that. If not, try using Media Response. (You may even wish to consider the no-input reprompt for some platforms.)
But this still won't work on all devices, and still has the limitation that Interactive Canvas is only for games right now.
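Here is a sketch of that dispatch in Python, assuming the Dialogflow webhook JSON that Actions on Google sends (the capability names are real; the strategy names are just illustrative):

    # Pick a presentation strategy per surface. 'body' is the parsed
    # Dialogflow v2 webhook request.
    def pick_strategy(body):
        payload = body.get("originalDetectIntentRequest", {}).get("payload", {})
        caps = {c["name"]
                for c in payload.get("surface", {}).get("capabilities", [])}
        if "actions.capability.INTERACTIVE_CANVAS" in caps:
            return "interactive_canvas"
        if "actions.capability.MEDIA_RESPONSE_AUDIO" in caps:
            return "media_response"
        return "no_input_reprompt"  # last-resort fallback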
Summary
There is no single, clear way to handle this, and it isn't a feature they are likely to add, given the conversational nature of the platform. However, there are some workarounds that might work for your scenario.

What is the best method to extract live data from an application? (details inside)

Since I have little experience programming, I first tried posting this "job" on a freelance website. Then, four programmers who seemed to know what they were doing failed to solve it (maybe they didn't know what they were doing). After this, I decided to attempt it myself, and that is why I came to Stack Overflow, which I believe will point me in the right direction.
The problem appears quite simple: the program in question gives me rows and columns of data, just like a spreadsheet. As time goes by, new rows are added on top.
I just need to replicate this data inside an Excel spreadsheet, so that I can perform analysis.
I will keep it short, as I don't know what further detail I could give. Perhaps looking at the program files could help in establishing what sort of program it is. Download link: http://xpproupdate.xpi.com.br/xppro.zip
Thanks!
Some loose ideas:
Method 1 (assuming this is an app connected to the Internet):
Try packet sniffing. Instead of extracting the data from the app, download a packet-sniffing tool and look at the data flow. See which port the app is exchanging data on. If the data is not encrypted, the task should be fairly easy.
As a reference, see this packet sniffer in C#:
http://www.codeproject.com/Articles/17031/A-Network-Sniffer-in-C
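If C# isn't a requirement, a rough equivalent in Python with scapy (my choice of tool, not the linked article's) looks like this; it needs root/administrator rights:

    # Print a peek at every TCP payload seen on the wire. Narrow the BPF
    # filter (e.g. "tcp port 8080") once you know which port the app uses.
    from scapy.all import sniff, TCP, Raw

    def show(pkt):
        if pkt.haslayer(Raw):
            print(pkt[TCP].sport, "->", pkt[TCP].dport, pkt[Raw].load[:80])

    sniff(filter="tcp", prn=show, store=False)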
Method 2 (assuming no connection to the Internet, or if there is encryption involved):
If the data is encrypted, or the app simply does not interact with the Internet, then try to access the app's Win32 window handle and traverse its internal controls.
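A sketch of that traversal with pywin32 (an assumption; any Win32 binding would do). Note this only pays off if the grid is built from real Win32 controls; custom-drawn grids expose nothing this way:

    # Dump the window tree of the target app: handle, class name, text.
    import win32gui

    def dump_children(parent):
        def callback(hwnd, _):
            cls = win32gui.GetClassName(hwnd)
            # GetWindowText may come back empty for controls owned by
            # another process; SendMessage with WM_GETTEXT is the usual
            # workaround in that case.
            text = win32gui.GetWindowText(hwnd)
            print(hex(hwnd), cls, repr(text))
            return True  # keep enumerating
        win32gui.EnumChildWindows(parent, callback, None)

    target = win32gui.FindWindow(None, "Window Title Here")  # hypothetical title
    dump_children(target)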
Method 3 (last resort):
Take frequent screenshots of the window and scrape the data from the image using simple OCR.
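A sketch of that last resort, gluing Pillow, pytesseract, and openpyxl together (all three library choices are assumptions) so the scraped rows end up in the Excel file you're after:

    # Screenshot a fixed screen region, OCR it, and append the lines as
    # rows to a workbook. bbox must be adjusted to the app's grid on screen.
    import time
    import pytesseract
    from PIL import ImageGrab
    from openpyxl import Workbook

    wb = Workbook()
    ws = wb.active

    for _ in range(10):                  # ten samples, one per second
        img = ImageGrab.grab(bbox=(0, 0, 800, 600))
        text = pytesseract.image_to_string(img)
        for line in text.splitlines():
            if line.strip():
                ws.append(line.split())  # naive whitespace column split
        time.sleep(1)

    wb.save("scraped.xlsx")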

Similar paradigms to the readers-writers and producer-consumer problems

For my Operating Systems Networking class, we must come up with a common paradigm faced when developing operating systems and propose a solution to it. Some possible topics suggested in class were the producer/consumer problem, the readers/writers problem, and a third one whose name I don't know. It was something like this:
Protection of Private Data – When a user has personal or private data he must share with a network in order to get information, but doesn’t want his information to be released.
- If a user must request information from a server and must provide some private data, the user's OS requests the information by sending a series or an array of data items to the server: some are actual data and some are fake. The server, handling each set of data as a separate request, returns the results, and the OS picks the right one.
Example:
1) Person A uses his phone, which uses his GPS coordinates to locate the closest bank. The OS on the phone knows that if the GPS coordinates were to get out, this could be bad. Instead, the phone sends a request to AT&T asking for the closest bank to each of the following locations: the actual location Person A is at, as well as three other fake locations. AT&T has no way of telling which location is the true one and which are fake, and is therefore forced to treat each one as a separate request. The results are sent back to the phone, and the phone uses only the result for the correct location.
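As a toy sketch of the scheme in Python (find_closest_bank stands in for the carrier's service and is hypothetical):

    # Send k decoy locations plus the real one; keep only the real answer.
    import random

    def query_with_decoys(real_location, find_closest_bank, k=3):
        # Realistic decoys should be plausible nearby points, not random
        # spots on the globe, or the server could filter them out.
        decoys = [(random.uniform(-90, 90), random.uniform(-180, 180))
                  for _ in range(k)]
        queries = decoys + [real_location]
        random.shuffle(queries)          # hide which request is real
        results = {loc: find_closest_bank(loc) for loc in queries}
        return results[real_location]    # the OS keeps only this one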
Another problem one of my friends did last semester was DDoS. I was wondering if you knew of any other problems, issues, or paradigms that are still lurking about.
Thank you for your suggestions.

Unreal Development Kit Portals/Teleporters

I'm trying to use UTTeleporterCustomMeshes in UDK in a way that is similar to the behavior in the game Portal (the actual portal itself, not the portal gun).
That is:
You can shoot through them as well as walk through them (DONE)
It appears as if you can see through the portal as if it were a window (DONE)
You can stand halfway through a portal and be on both sides simultaneously
Shadows and light will also carry through the teleporter
My problem is that I can't seem to find any detailed documentation for UTTeleporters.
Do the teleporters have some built in functionality for this kind of stuff or do I need to implement it all myself?
I.e., do I create a custom camera to capture the scene to be rendered onto my teleporter, and write my own custom teleporter script?
I'm just after SOME sort of direction or ideas as to how I could achieve this.
Cheers
EDIT:
I've since managed to get steps 1 and 2 working (UTPortal did the trick), though 3 and 4 still remain a challenge.
My first mistake was trying to use UTTeleporterCustomMeshes.
No matter where I went through, it would always spit me out at the middle of the target teleporter. Plus, I'd have to do something like @Sebastian said with the TextureRenderTarget.
I still have a long way to go before it starts to look seamless!!
Then, onto stage 2!
Unreal, as far as I know, does not support that type of behavior.
You can however build it yourself.
I would look at the TextureRenderTarget classes for rendering the material on the portal.
Standing in the portal with one foot on each "side" will be... very tricky to do, but shooting and walking through them should be simple enough by extending the Touch event in Actor.
