I have been having trouble understanding how to use the Spatial Awareness system described in the user guide for the latest MRTK release to get access to the spatial mapping meshes for use in a multi-user app. I cannot find a way to serialize the meshes so they can be sent to a remote device, which was possible in the older toolkit. I tried adding the meshes to a list and running them through the old SimpleMeshSerializer, but that did not seem to work at all. Any help would be greatly appreciated in understanding the current capabilities of MRTK and how the same functionality can be replicated.
I have been facing problems with MRTK v2 Spatial Awareness too. Have you tried using surface types? I haven't been able to make it work yet.
Do you mean that you want to transfer the mesh between multiple devices, but when you use SimpleMeshSerializer to serialize the mesh in MRTK v2 and transfer it, the remote device becomes unresponsive?
Based on previous cases of transferring meshes between multiple clients, we suggest following the steps below to troubleshoot:
Is the data received correctly when transferring it using WebRTC?
Is the data still reliable after deserialization? You can try to save it locally and verify it.
After receiving and deserializing the data, how did you handle the deserialized meshes? Can you provide a small sample that reproduces the problem?
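To make the serialization side concrete, here is a minimal sketch (Unity/C#) of how the meshes can be collected from the mesh observer in MRTK v2. It assumes a running mesh observer and that your MRTK version ships SimpleMeshSerializer under Microsoft.MixedReality.Toolkit.Utilities; the transport to the remote device is left out.

    using System.Collections.Generic;
    using System.Linq;
    using Microsoft.MixedReality.Toolkit;
    using Microsoft.MixedReality.Toolkit.SpatialAwareness;
    using Microsoft.MixedReality.Toolkit.Utilities;
    using UnityEngine;

    public class SpatialMeshSender : MonoBehaviour
    {
        // Collects the current spatial meshes and serializes them to bytes
        // that can be sent over your own networking layer.
        public byte[] SerializeSpatialMeshes()
        {
            // In MRTK v2 the spatial awareness system exposes its observers
            // through IMixedRealityDataProviderAccess.
            var access = CoreServices.SpatialAwarenessSystem as IMixedRealityDataProviderAccess;
            var observer = access?.GetDataProvider<IMixedRealitySpatialAwarenessMeshObserver>();
            if (observer == null)
            {
                return null;
            }

            // Pull the Unity Mesh out of each SpatialAwarenessMeshObject.
            List<Mesh> meshes = observer.Meshes.Values
                .Select(meshObject => meshObject.Filter.sharedMesh)
                .ToList();

            return SimpleMeshSerializer.Serialize(meshes);
        }
    }

On the receiving side, SimpleMeshSerializer.Deserialize should give back meshes to assign to MeshFilters; if the remote device locks up after receiving, first try deserializing a locally saved copy of the byte array (the save-and-verify step above) to rule out transport corruption.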
I am new to BACnet automation and to BACnet4J. We have a BACnet server that broadcasts BACnet points, and I can see them in Yabe by adding the device manually; my laptop and the server are connected to the same network. How can I automate the same thing? I need to read and write the values. Where should I start? Could anyone help?
I found the initial learning curve for BACnet4J somewhat steep; however, I was able to get started far more quickly with BAC0 (which is in Python). You may find that easier.
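If you try BAC0, a first read/write session can be as small as the sketch below (a recent BAC0 version assumed). The IP address and the analogInput/analogValue points are placeholders; substitute the ones your server actually broadcasts, which you can find in Yabe or with the whois call.

    import BAC0

    # Start a lightweight BACnet/IP client on this machine's interface.
    bacnet = BAC0.lite()

    # Discover the devices broadcasting on the segment (what Yabe shows you).
    bacnet.whois()

    # Read a point: '<device ip> <object type> <instance> <property>'.
    value = bacnet.read('192.168.1.10 analogInput 1 presentValue')
    print(value)

    # Write a value back with the same addressing scheme.
    bacnet.write('192.168.1.10 analogValue 1 presentValue 72.5')

    bacnet.disconnect()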
I've been looking through the libfreenect2 repo to find out whether it is possible to capture just one point cloud frame from my Kinect V2 on Ubuntu 16.04 LTS, but I cannot find anything relevant.
How would that be possible?
libfreenect and libfreenect2 are mostly just drivers for Kinect devices. Post-processing is best applied in a middleware layer such as PCL (pointclouds.org) or AForge.NET; it depends on the goals of your application.
If you really want to get your hands dirty, check out this C++ point cloud example. It's written for the Kinect v1, but it might give you some ideas. If you have trouble getting the hardware to work, please also visit the repositories linked above for documentation and bug reports.
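For the Kinect v2 specifically, the shape of a single-frame grab follows from libfreenect2's own Protonect example: open the device, wait for one frame set, run it through Registration to get XYZ points, then stop. A condensed sketch (file output omitted; invalid depth pixels come back as NaN):

    #include <libfreenect2/libfreenect2.hpp>
    #include <libfreenect2/frame_listener_impl.h>
    #include <libfreenect2/registration.h>
    #include <cstdio>

    int main()
    {
        libfreenect2::Freenect2 freenect2;
        libfreenect2::Freenect2Device *dev = freenect2.openDefaultDevice();
        if (!dev) { std::fprintf(stderr, "no device\n"); return 1; }

        libfreenect2::SyncMultiFrameListener listener(
            libfreenect2::Frame::Color | libfreenect2::Frame::Depth);
        dev->setColorFrameListener(&listener);
        dev->setIrAndDepthFrameListener(&listener);
        dev->start();

        // Block until exactly one frame set arrives -- this is the
        // "single point cloud frame" part.
        libfreenect2::FrameMap frames;
        listener.waitForNewFrame(frames);
        libfreenect2::Frame *rgb   = frames[libfreenect2::Frame::Color];
        libfreenect2::Frame *depth = frames[libfreenect2::Frame::Depth];

        // Registration maps depth pixels into 3D camera space.
        libfreenect2::Registration registration(dev->getIrCameraParams(),
                                                dev->getColorCameraParams());
        libfreenect2::Frame undistorted(512, 424, 4), registered(512, 424, 4);
        registration.apply(rgb, depth, &undistorted, &registered);

        for (int r = 0; r < 424; ++r)
            for (int c = 0; c < 512; ++c) {
                float x, y, z;
                registration.getPointXYZ(&undistorted, r, c, x, y, z);
                // Write (x, y, z) to your point cloud here, e.g. a PLY file.
            }

        listener.release(frames);
        dev->stop();
        dev->close();
        return 0;
    }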
I have a Core Data app and I would love to share the same data across multiple devices, possibly with iCloud/CloudKit, but I am not sure where to start or how to go about it. The only approach I can think of, though I'm still not sure how to do it, would be to sync the SQLite files via iCloud, and I don't know whether that's a good idea. I recently converted my app to Swift 3 and the iOS 10 Core Data code. The only way I can share data between devices at the moment is through iTunes file sharing.
For whatever reason, up-to-date information on this topic is hard to find.
Core Data doesn't have support for this, except for the built-in iCloud sync, and that's deprecated as of iOS 10.
You could use CloudKit to sync data, but you'll have to write your own code to convert between Core Data's persistent store and CloudKit's online store. It's not impossible but it's certainly not automatic.
Syncing the SQLite file is not a good idea unless you really want to corrupt the data.
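To give a feel for the manual conversion, here is a minimal Swift sketch that pushes one managed object up to CloudKit. The "Note" record type and its attributes are placeholder names; a real sync layer also needs change tracking and a download/merge path, which is where most of the work lives.

    import CoreData
    import CloudKit

    // Upload one managed object as a CKRecord. "Note", "title", and "body"
    // are placeholder names for your own entity and attributes.
    func upload(note: NSManagedObject, completion: @escaping (Error?) -> Void) {
        let record = CKRecord(recordType: "Note")
        record["title"] = note.value(forKey: "title") as? NSString
        record["body"] = note.value(forKey: "body") as? NSString

        // Private database: data only the signed-in iCloud user can see.
        let database = CKContainer.default().privateCloudDatabase
        database.save(record) { _, error in
            completion(error)
        }
    }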
I want to do some things with the Kinect, and my research took me to two libraries: libfreenect and OpenNI. The first one apparently just extracts video data, am I right? The second one was acquired by Apple and dissolved, but some of the binaries and documentation were recovered by structure.io, and that library does give the complete Kinect data. My idea is to use a socket.io server to process the Kinect input data and send it to the browser, then use JavaScript to process it on the client. My question is: has anyone here achieved such a thing? If so, could you give me some guidance on how to achieve this, or where to start?
For Kinect for Windows V2 =>
https://www.npmjs.com/package/kinect2 [I've used it]
For Kinect V1 =>
https://github.com/nguyer/node-kinect
http://metaduck.com/09-kinect-browser-node.html
http://blog.whichlight.com/post/53241512333/streaming-kinect-data-into-the-browser-with-nodejs
http://depthjs.media.mit.edu/
This library achieves something similar to what you were looking to do. It uses Kinect2 (mentioned in another response) to get the Kinect data, but also lets you stream it to another browser.
https://github.com/kinectron/kinectron
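For orientation, the server side of that pipeline (kinect2 plus socket.io, so Kinect for Windows V2 on Node.js) can be sketched roughly as below. The port and event name are arbitrary placeholders; the browser end would connect with the socket.io client and draw each frame into a canvas.

    const Kinect2 = require('kinect2');
    const http = require('http');
    const server = http.createServer().listen(8000);
    const io = require('socket.io')(server);

    const kinect = new Kinect2();

    if (kinect.open()) {
      // kinect2 emits one event per processed frame.
      kinect.on('depthFrame', (frame) => {
        // frame holds the depth values; raw frames are large, so in
        // practice you may want to downsample or skip frames.
        io.sockets.emit('depthFrame', frame);
      });
      kinect.openDepthReader();
    }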
I am developing a social app on iOS that has many-to-many relations, local persistence, and user interaction. I tried using the native Parse API on iOS and found it too cumbersome to handle all the client-server logic, so my focus shifted to finding a syncing solution.
After some research I found AFIncrementalStore, which is quite easy to use and tightly integrated with Core Data. I have just started working with it and have two questions:
1) How do I handle the authentication process? Is it in AFRESTClient?
2) How do I set up AFRESTClient to match Parse's REST API? (An example would be great!)
P.S. I also found FTASync, which seems to be another solution. Any thoughts on this framework?
Any general suggestion on client-server syncing solutions will be highly appreciated!
Thanks,
Lei Zhang
Back in iOS 5, Apple silently rolled out NSIncrementalStore to manage the connection between APIs and persistent stores. Because I couldn't word it better myself:
NSIncrementalStore is an abstract subclass of NSPersistentStore designed to "create persistent stores which load and save data incrementally, allowing for the management of large and/or shared datasets". And while that may not sound like much, consider that nearly all of the database adapters we rely on load incrementally from large, shared data stores. What we have here is a goddamned miracle.
Source: http://nshipster.com/nsincrementalstore/
That being said, I've been working on my own NSIncrementalStore (built specifically for Parse and utilizing the Parse iOS/OS X SDK) and you're welcome to check out/use/contribute to the project at https://github.com/sbonami/PFIncrementalStore.
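If it helps to see the shape of the class, these are the overrides a concrete NSIncrementalStore (PFIncrementalStore included) has to fill in; the class name is a placeholder and the bodies below are only stubs:

    @interface MyParseStore : NSIncrementalStore
    @end

    @implementation MyParseStore

    + (void)initialize {
        // Make the store type known to Core Data.
        [NSPersistentStoreCoordinator registerStoreClass:self
                                            forStoreType:@"MyParseStore"];
    }

    - (BOOL)loadMetadata:(NSError *__autoreleasing *)error {
        self.metadata = @{ NSStoreTypeKey : @"MyParseStore",
                           NSStoreUUIDKey : [[NSProcessInfo processInfo] globallyUniqueString] };
        return YES;
    }

    // Every fetch and save funnels through this one method; this is where
    // requests get translated into backend (here, Parse) API calls.
    - (id)executeRequest:(NSPersistentStoreRequest *)request
             withContext:(NSManagedObjectContext *)context
                   error:(NSError *__autoreleasing *)error {
        return @[];
    }

    // Called when a fault fires for an object's attribute values.
    - (NSIncrementalStoreNode *)newValuesForObjectWithID:(NSManagedObjectID *)objectID
                                             withContext:(NSManagedObjectContext *)context
                                                   error:(NSError *__autoreleasing *)error {
        return nil;
    }

    @end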
Take a look at this StackOverflow question and at Chris Wagner's article on raywenderlich.com.
The linked SO question has examples for how to include the authentication token with each request to Parse. So you'll just need to have the user log in first, and store their token to include it with each subsequent request.
Chris Wagner's tutorial has a sample AFHTTPClient named SDAFParseApiClient to communicate with the Parse REST API. You'd have to adapt it to be an AFRESTClient subclass, but it should give you a start.
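Concretely, the authentication part mostly boils down to default headers on the client (AFRESTClient descends from AFHTTPClient, so the same calls apply to a subclass of either). The X-Parse-* names are Parse's documented REST headers; the class name and key values are placeholders:

    #import "AFHTTPClient.h"

    @interface ParseClient : AFHTTPClient
    @end

    @implementation ParseClient

    - (id)initWithBaseURL:(NSURL *)url {
        self = [super initWithBaseURL:url];
        if (self) {
            [self setDefaultHeader:@"X-Parse-Application-Id" value:@"YOUR_APP_ID"];
            [self setDefaultHeader:@"X-Parse-REST-API-Key" value:@"YOUR_REST_KEY"];
        }
        return self;
    }

    // Call this with the sessionToken returned by GET /1/login, so every
    // subsequent request is authenticated as that user.
    - (void)setSessionToken:(NSString *)token {
        [self setDefaultHeader:@"X-Parse-Session-Token" value:token];
    }

    @end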
Some other thoughts between the two solutions you're considering:
AFIncrementalStore does not allow the user to make any changes without a network connection, while FTASync keeps a full Core Data SQLite store locally and syncs changes to the server when you tell it to.
FTASync requires you to make all your synced managed objects subclasses of FTASyncParent, with extra properties for sync metadata. AFIncrementalStore keeps its metadata behind the scenes, not in your model.
FTASync appears not to be widely used and hasn't been updated in over a year; if you use it, you will likely end up maintaining it yourself.