Why does PhotoCamera need a VideoBrush?

When using the PhotoCamera, one must create an instance of the PhotoCamera as well as a VideoBrush, and then assign that PhotoCamera instance as the source of the VideoBrush instance before the camera can be initialized. For example:
PhotoCamera camera;
VideoBrush brush;

// create the camera and hook up its Initialized event
camera = new PhotoCamera();
camera.Initialized += CameraInitialized;

// the camera will not initialize until it is set as the source of a VideoBrush
brush = new VideoBrush();
brush.SetSource(camera);
The VideoBrush is clearly useful in scenarios where the developer wishes to create a viewfinder for the camera video stream by associating the VideoBrush instance with the brush of a visual object, such as a Canvas.Background or Rectangle.Fill. However, when that is not the case, requiring the developer to still go through the motions of creating a VideoBrush seems somewhat arbitrary at first glance.
So, two questions: Why does the PhotoCamera always need to be associated with a VideoBrush?
And what is the performance impact of attaching the PhotoCamera to a VideoBrush? Specifically, how are calls to GetPreviewBuffer(Argb|Y|YCbCr) affected by the associated VideoBrush?
Thanks!
P.S. Hopefully this doesn't come off as pointed in any way; I'd just like to have a better understanding of why this requirement exists and how it impacts performance.
P.P.S. The improvements in the WP7 Mango SDK are amazing. I'm looking forward to seeing what people come up with now that access to the sensors has been opened up.

In Mango you simply have two options: either do as you suggested above and use a frame in your app (a video frame) to take pictures, essentially grabbing a single frame from the video brush.
Or you can use the old NoDo method of using the PhotoChooser Task, which will launch the framework camera app separately and return an image.
Obviously there are pros and cons to both methods, so just choose the one that suits you.

Having several GoogleResponses in a row without user input or interaction

I am working on a cooking recipe app for google home and I need a way to string several GoogleResponses (SimpleResponse etc..) together without requiring user interaction between them.
I have searched for other answers pertaining to this, and while I have found a few similar questions to mine, the replies tend to be along the lines of "the system was designed for dialogues so what would be the point?".
I fully understand this point of view, however because of the nature and behaviour requirements of the app that I am developing I find myself in need of this particular possibility.
The recipes are divided into steps (revolutionary, I know..) and there is roughly a 1 to 1 correspondence between steps and GoogleResponses.
To give an example of how a typical recipe unfolds it is usually like this (this is a simplification of course):
main content -> question -> main content -> question -> etc..
With each instance of "main content" being a step of the recipe and each "question" requiring user input.
If it was just like this all the time then there would not be a problem; I could just bundle each "main content -> question" section into one GoogleResponse and be done.
However there are often times where the recipe flows more like:
main content -> main content -> main content -> question
With each "main content" being a step in the recipe, it does not make sense in this context to bundle them together into the same response (there is a system for the user to move back and forth between steps).
I was originally using MediaResponses for the "main content" sections, as those do not require user input to move on to the next step, but for various reasons that I won't go into here (this is already getting quite long), the project manager has decided that MediaResponses should not be used in this project.
The short answer is the one you already encountered - trying to make conversational actions not-so-conversational doesn't work very well. However, there are a few things you can look into.
Recipe Structured Data
Since you're working on a recipe action, specifically, it may be worthwhile to use the standard recipe support that comes with the Assistant.
On the upside, people will be familiar with it, and you don't need to write much code; you just provide markup on a webpage.
On the downside - if you have other requirements for how you want the interaction to go, it isn't that flexible. (For example, if you're asking questions at some of the recipe points, or if you want to offer measurement adjustments based on number of people to serve.)
Misuse the "No Input" event
You can configure dynamic reprompts so you get an event if the user doesn't say anything after a few seconds. If they want to move ahead sooner, they can ask for the next step explicitly, or you can catch the actions_intent_NO_INPUT event in Dialogflow and advance yourself (a sketch follows the list of downsides below).
There are a few downsides here:
Not all devices support no-input. In particular, mobile devices won't generate this event.
This may only be valid for two no-input events in a row. On the third event, the Assistant may automatically close the conversation. (The documentation is unclear on this, and the exact behavior has changed over time.)
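For illustration, here is a minimal sketch of that no-input approach using the actions-on-google Node.js client library with Dialogflow fulfillment, written as TypeScript. The intent name 'Recipe No Input' and the STEPS array are assumptions for the sketch; you would create a Dialogflow intent triggered by the actions_intent_NO_INPUT event and wire it to your own recipe data.
import { dialogflow } from 'actions-on-google';

// Hypothetical recipe steps; in a real action these would come from your own data.
const STEPS = [
  'Chop the onions.',
  'Fry the onions until golden.',
  'Add the tomatoes and simmer for ten minutes.',
];

// conv.data is used as simple per-conversation storage for the current step.
const app = dialogflow<{ stepIndex: number }, {}>();

// 'Recipe No Input' is a hypothetical Dialogflow intent configured to be
// triggered by the actions_intent_NO_INPUT event.
app.intent('Recipe No Input', (conv) => {
  if (conv.arguments.get('IS_FINAL_REPROMPT')) {
    // The Assistant will close the conversation after the final reprompt.
    conv.close('Okay, happy cooking!');
    return;
  }
  // Treat silence as "continue": advance to the next step ourselves.
  conv.data.stepIndex = (conv.data.stepIndex || 0) + 1;
  conv.ask(STEPS[Math.min(conv.data.stepIndex, STEPS.length - 1)]);
});

// Expose the app, e.g. as the handler for your webhook endpoint.
export { app };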
Media Response
You're not clear on why the Media Response "shouldn't be used", but this is one of the only ways to trigger an event when speaking is completed.
There are several downsides, however:
There are a number of bugs with Media Response around quitting
On devices with screens, there is a media player. Since the media itself is incidental to what you're doing, having the player doesn't make sense
It isn't supported on all surfaces
Interactive Canvas
A similar approach, however, would be to use the Interactive Canvas. This gives you an HTML page with JavaScript that you control, including the ability to send a query back to the Assistant as if the user had spoken it (or tapped a suggestion chip). You can also listen for an event when the generated speech has finished.
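As a rough sketch of what the web side of that could look like (TypeScript, assuming the Interactive Canvas API script is loaded on the page; the 'next step' query and the 'END' mark handling are illustrative, not a complete integration):
// The interactiveCanvas object is provided by the Interactive Canvas API
// script included on the page; declared here so the sketch is self-contained.
declare const interactiveCanvas: {
  ready(callbacks: object): void;
  sendTextQuery(query: string): Promise<string>;
};

interactiveCanvas.ready({
  onUpdate(data: unknown) {
    // State your fulfillment pushes to the page along with each response.
    console.log('state from fulfillment:', data);
  },
  onTtsMark(markName: string) {
    // An 'END' mark arrives when the spoken prompt has finished playing.
    if (markName === 'END') {
      // Hypothetical phrase your action would handle to advance the recipe.
      interactiveCanvas.sendTextQuery('next step');
    }
  },
});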
There are, however, a number of downsides which probably prevent you from using this right now:
The biggest is that the Interactive Canvas can only be used for games right now. (But this seems to be a policy decision, rather than a technical one. So perhaps it will be lifted in the future.)
It does not work on smart speakers - only some devices with screens.
Combining the above approaches
One way to get around the device limitations of the Interactive Canvas and the poor visuals that accompany Media Response might be to mix the two. For devices that support IC, use that. If not, try using Media Response. (You may even wish to consider the no-input reprompt for some platforms.)
But this still won't work on all devices, and still has the limitation that Interactive Canvas is only for games right now.
Summary
There is no one clear way to handle this... and this isn't a feature they are likely to add, given the conversational nature of the platform. However, there may be some workarounds which might work for your scenario.

Detecting Handedness from Device Use

Is there any body of evidence that we could reference to help determine whether a person is using a device (smartphone/tablet) with their left hand or right hand?
My hunch is that you may be able to use accelerometer data to detect a slight tilt, perhaps only while the user is manipulating some sort of on-screen input.
The answer I'm looking for would state something like, "research shows that 90% of right-handed users that utilize an input mechanism tilt their phone an average of 5° while inputting data, while 90% of left-handed users utilizing an input mechanism have their phone tilted an average of -5°".
Having this data, one would be able to read accelerometer data and make informed decisions regarding placement of on-screen items that might otherwise be in the way for left-handed or right-handed users.
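To make the idea concrete, here is a rough sketch of the kind of measurement I have in mind, using the browser's DeviceOrientationEvent (TypeScript; the ±5° cutoff is exactly the sort of number I'm hoping published research could justify, not an established value):
// Rough sketch only: sample the left/right tilt (gamma, in degrees) while a
// text input is focused, then guess handedness from the average sign.
const samples: number[] = [];

function onOrientation(e: DeviceOrientationEvent): void {
  if (e.gamma !== null) {
    samples.push(e.gamma); // gamma > 0 means the right edge is tilted down
  }
}

const input = document.querySelector('input, textarea');
input?.addEventListener('focus', () => {
  samples.length = 0;
  window.addEventListener('deviceorientation', onOrientation);
});
input?.addEventListener('blur', () => {
  window.removeEventListener('deviceorientation', onOrientation);
  const avg = samples.reduce((a, b) => a + b, 0) / (samples.length || 1);
  // The mapping of tilt direction to handedness is the open question above;
  // this follows the hypothetical figures from the question.
  if (avg > 5) console.log('guess: right-handed');
  else if (avg < -5) console.log('guess: left-handed');
  else console.log('no confident guess');
});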
You can definitely do this, but if it were me I'd try a less complicated approach. First, you need to recognize that no specific approach will yield 100% accurate results; they will be guesses, but hopefully highly probable ones. With that said, I'd explore the simple-to-capture data points of basic touch events. You can leverage these data points and pull the x/y coordinates on touch start/end:
touchStart: Triggers when the user makes contact with the touch surface and creates a touch point inside the element the event is bound to.
touchEnd: Triggers when the user removes a touch point from the surface.
Here's one way to do it. It could be reasoned that if a user is left-handed, they will use their left thumb to scroll up/down on the page. Based on the way the thumb rotates, swiping up will naturally cause the arch of the swipe to move outwards. In terms of touch events, if the touchStart X is greater than the touchEnd X, you could deduce they are left-handed. The opposite would be true for a right-handed person: for a swipe up, if the touchStart X is less than the touchEnd X, you could deduce they are right-handed.
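A bare-bones sketch of that heuristic using standard touchstart/touchend DOM events (TypeScript; a real implementation would accumulate many swipes before making a guess):
// For an upward swipe: startX > endX suggests a left thumb,
// startX < endX suggests a right thumb.
let startX = 0;
let startY = 0;

document.addEventListener('touchstart', (e: TouchEvent) => {
  startX = e.changedTouches[0].clientX;
  startY = e.changedTouches[0].clientY;
});

document.addEventListener('touchend', (e: TouchEvent) => {
  const endX = e.changedTouches[0].clientX;
  const endY = e.changedTouches[0].clientY;
  const isSwipeUp = startY - endY > 50; // ignore taps and tiny movements
  if (!isSwipeUp) return;

  if (startX > endX) {
    console.log('guess: left-handed');
  } else if (startX < endX) {
    console.log('guess: right-handed');
  }
});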
Here's one reference on getting started with touch events. Good luck!
http://www.javascriptkit.com/javatutors/touchevents.shtml
There are multiple approaches and papers discussing this topic. However, most of them were written between 2012 and 2016. After doing some research myself, I came across a fairly new article that makes use of deep learning.
What sparked my interest is the fact that they do not rely on a swipe direction, speed or position but rather on the capacitive image each finger creates during a touch.
Highly recommend reading the full paper: http://huyle.de/wp-content/papercite-data/pdf/le2019investigating.pdf
What's even better, the data set, together with Python 3.6 scripts to preprocess the data as well as train and test the model described in the paper, is released under the MIT license. They also provide the trained models and the software to run the models on Android.
Git repo: https://github.com/interactionlab/CapFingerId

Unreal Development Kit Portals/Teleporters

I'm trying to use UTTeleporterCustomMeshes in UDK in a way that is similar to the behavior in the game Portal (the actual portal itself, not the portal gun).
That is:
You can shoot through them as well as walk through them (DONE)
It appears as if you can see through the portal as if it were a window (DONE)
You can stand half way through a portal and be at both sides simultaneously
Shadows and light will also carry through the teleporter
My problem is that I can't seem to find any detailed documentation for UTTeleporters.
Do the teleporters have some built in functionality for this kind of stuff or do I need to implement it all myself?
Ie, create a custom camera to capture the scene to be rendered onto my teleporter and create my own custom teleporter script?
I'm just after SOME sort of direction or ideas as to how I could achieve this.
Cheers
EDIT:
I've since managed to get steps 1 and 2 working (UTPortal did the trick), though 3 and 4 still remain a challenge.
My first mistake was trying to use UTTeleporterCustomMeshes.
No matter where I went through, it would always spit me out at the middle of the target teleporter. Plus, I'd have to do something like @Sebastian said with the TextureRenderTarget.
I still have a long way to go before it starts to look seamless!!
Then, onto stage 2!
Unreal, as far as I know, does not support that type of behavior.
You can, however, build it yourself.
I would look at the TextureRenderTarget classes for rendering the material on the portal.
Standing in the portal with one foot on each "side" will be... very tricky to do, but shooting and walking through them should be simple enough by extending the Touch event in Actor.

Player does not properly get odometry data for Create in multithreaded application

I am using Player (Player/Stage) on the iRobot Create. The interface for getting odometry data from the robot is fairly simple: call playerc_client_read, and then, if you've properly subscribed a playerc_position2d proxy, you should be able to access the proxy's members px, py, and pa for distance traveled in x and y (in meters) and rotation (in radians).
I have no issue with doing this in a single threaded application -- all the odometry data is perfectly where I need it to be.
However, when I try to move the robot controller to its own thread (with pthreads), I run into some issues. The issue is that only px seems to be updated. py and pa always remain 0.
Here's the gist of the robot thread:
// declare everything (including the playerc_client_t* object and playerc_position2d_t* object)
// connect to the server (in pull mode or push mode, it doesn't seem to matter)
// subscribe to the position2d proxy
while (!should_quit) {
    playerc_client_read(client);
    double xPosition = position2d->px;
    double yPosition = position2d->py;
    double radians   = position2d->pa;
    // do some stuff
    usleep(10000);   // sleep for 10 milliseconds
}
// clean up and unsubscribe
and sure enough, only xPosition is ever set while yPosition and radians remain 0 no matter how the robot moves.
I couldn't find anything else online, is this a known bug? Has anybody else had this issue? Can someone provide insight as to why this may be happening? Thank you.
Full disclosure: I'm a graduate student and this is for a class project.
The issue here is not necessarily with threading.
What we found is that the Create's internal odometry is very inconsistent, especially when a netbook is sitting atop it.
To get any semblance of an accurate reading, one has to set the angular velocity high enough (higher than 0.11 rad/s in our case).
This site helped explain a few things -- namely that the Creates use motor power to determine odometry, and not wheel counters or any kind of analog.
To get accurate odometry for dead reckoning tasks, one either needs to build their own accurate estimator, or use some external sensors that give better information about positional changes.
Our specific problem was caused by thresholding in the multithreaded case that set the angular velocity too low to register a change, whereas the sequential code did not have such thresholding.

How does Nike's website do this Flash effect when the user selects a choice

I was wondering how the Nike website makes the change you can see when selecting a color or a sole. At first I thought they were only using images, and when the user picked a color they just replaced that part, but when I selected a different sole I noticed it didn't change like an image; it looked a bit more as if it was being rendered. Does anybody happen to know how this is made? Or where can I get further info about creating this effect :)?
It's hard to know for sure, but my guess would be that they're using a rendering service similar to that provided by Adobe's Scene7.
It's a product that is used to colorize/customize a base product image based on user choices.
If you're interested in using the service, I'd suggest signing up for their weekly webinar. I attended one a while back and was very impressed with their offering. They showed the Converse site (which had functionality almost identical to the Nike site) as a demo.
A lot of these tools are built out in Flash using a variety of techniques:
1) You can use Flash's BitmapData object to directly shift the hues of the pixels in your item (see the sketch after this list). This is probably the simplest technique but often limits you to simple color transformations.
2) You can pre-render transparent PNGs (or photos, I guess) containing the various textures you would want to show on your object (for instance, patterns or textures) and have them dynamically added to your stage at runtime. This, I think, offers the highest fidelity but means you need all of your items rendered upfront.
3) You can create 3D collada files and load them via a library like Papervision3D, then dynamically change the texture at runtime. This is the most memory-intensive technique and tends to result in far worse fidelity, but for that you get a full 3D object that you can view in space.
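Just to make technique #1 concrete: the per-pixel work that Flash's ColorTransform performs on BitmapData boils down to a multiply-and-offset per channel. Here is a language-agnostic sketch of that idea, written in TypeScript over a raw RGBA byte array purely for illustration:
// Per-channel color transform (multiplier + offset) over raw RGBA data,
// 4 bytes per pixel. This mirrors what a ColorTransform applies in Flash.
function tintPixels(
  pixels: Uint8ClampedArray,
  rMul: number, gMul: number, bMul: number,
  rOff = 0, gOff = 0, bOff = 0,
): void {
  for (let i = 0; i < pixels.length; i += 4) {
    pixels[i]     = pixels[i]     * rMul + rOff; // red (clamped to 0-255)
    pixels[i + 1] = pixels[i + 1] * gMul + gOff; // green
    pixels[i + 2] = pixels[i + 2] * bMul + bOff; // blue
    // pixels[i + 3] is alpha; left untouched
  }
}

// Example: push the selected region of the product image toward red.
// tintPixels(rgbaBytes, 1.2, 0.8, 0.8);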
I'm sure there are other techniques but those are the top 3 I can think of. I hope that helps!
