I am trying to implement geofencing in Xamarin.Forms. I started my work on iOS, where I found the CoreLocation library; I am using its CLCircularRegion class to create the virtual boundary.
CLCircularRegion region = new CLCircularRegion(new CLLocationCoordinate2D(28.5003615, 77.0718658), 1.0, "MDS");
But I am not sure about the center point (latitude, longitude) that is passed to CLLocationCoordinate2D. How can I get the exact latitude and longitude of my place, and how can I implement geofencing correctly on iOS through Xamarin.Forms?
How can I get the exact latitude and longitude of my place?
Once you set up geofencing, the device starts tracking your location to see whether you have entered the defined fence. You can also use MKMapView to identify and follow the user.
I put together a small sample project a couple of weeks ago to demonstrate geofencing in iOS.
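For example, you can read the device's current location with CLLocationManager and use that coordinate as the fence's center. Here is a minimal Xamarin.iOS sketch of that idea (the GeofenceManager name and the 100 m radius are my own illustrative choices, not code from the sample project):

using System;
using CoreLocation;

public class GeofenceManager
{
    readonly CLLocationManager manager = new CLLocationManager();

    public void Start()
    {
        // Region monitoring needs "Always" authorization and a matching
        // NSLocationAlwaysUsageDescription entry in Info.plist.
        manager.RequestAlwaysAuthorization();

        manager.LocationsUpdated += (sender, e) =>
        {
            // Use the most recent fix as the center of the fence.
            var center = e.Locations[e.Locations.Length - 1].Coordinate;
            var region = new CLCircularRegion(center, 100.0, "MDS"); // radius is in meters
            manager.StartMonitoring(region);
            manager.StopUpdatingLocation();
        };

        manager.RegionEntered += (sender, e) => Console.WriteLine("Entered " + e.Region.Identifier);
        manager.RegionLeft += (sender, e) => Console.WriteLine("Left " + e.Region.Identifier);

        manager.StartUpdatingLocation();
    }
}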
How can I implement geofencing correctly on iOS through Xamarin.Forms?
Geofencing is platform-specific code that you cannot really abstract into a shared project. Instead, you can use the DependencyService to expose the iOS project's methods/events to the shared project.
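As a rough sketch of that pattern (the IGeofenceService and GeofenceService names are illustrative, not a fixed API):

// Shared (Xamarin.Forms) project:
public interface IGeofenceService
{
    void StartMonitoring(double latitude, double longitude, double radiusMeters, string id);
}

// iOS project - registers the implementation with the DependencyService:
[assembly: Xamarin.Forms.Dependency(typeof(GeofenceService))]
public class GeofenceService : IGeofenceService
{
    // Kept as a field so the manager isn't garbage-collected while monitoring.
    readonly CoreLocation.CLLocationManager manager = new CoreLocation.CLLocationManager();

    public void StartMonitoring(double latitude, double longitude, double radiusMeters, string id)
    {
        manager.RequestAlwaysAuthorization();
        var center = new CoreLocation.CLLocationCoordinate2D(latitude, longitude);
        manager.StartMonitoring(new CoreLocation.CLCircularRegion(center, radiusMeters, id));
    }
}

// Shared code can then call:
// Xamarin.Forms.DependencyService.Get<IGeofenceService>().StartMonitoring(28.5003615, 77.0718658, 100, "MDS");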
For more context, I'm developing an Augmented Images Android app. Because of a series of unfortunate events, I ended up trying to develop this with absolutely zero Android experience, but here I am. The thing is, I can't find good tutorials on this topic (ARCore in Android Studio), so I am taking Google's example apps and trying to understand how they work.
They go into a lot of detail about OpenGL, but I don't have the time to learn it properly. I found this thing called Scene Viewer, which seemed like just what I need: an easy way to load and display a model/scene at my ARCore anchors. But it seems discontinued; or, from what I have found, it is no longer compatible with Android Studio.
Is there anything out there that could serve this purpose? Or can Scene Viewer still do this job?
I am attempting to develop a Wear OS app that depends on a paired Android phone to perform some higher-complexity computations. To this end I have implemented, on the wearable side, the infrastructure to pass a PutDataMapRequest message to the phone app. Where I am having trouble is extending the WearableListenerService class on the phone side. When I press Alt+Enter to see the suggested-actions menu, the option to add the requisite library is there; however, when I select that option nothing happens, and the option is still there afterwards (the error is not rectified).

I will caveat this by saying I have only been developing for Android for about two weeks, so Android Studio and its quirks are still a little new to me. Prior to this point I had attempted various incarnations of building this app. In the first, I built the apps separately, and on that attempt this same extension caused problems (the IDE didn't even offer any suggestions at that point). I also tried loading the data layer API sample to find an example of the wearable listener service, but unfortunately it is only present on the wear side of the app.

The original source of this approach was this tutorial, which I know is a little old at this point (at least one of the calls on the wear side is deprecated, which I have already worked around). At about 2:00 in, the presenter is able to extend WearableListenerService without any issue within his phone-side app, and I have no idea what I am missing to be able to do that. I also looked into adding the support library manually, but to no avail.
OK, so for anyone who runs into a similar issue down the road: the solution appears to be that when you create a wearable app through the new-project wizard and then add a phone application module to the project, you need to manually add the following lines to the phone app's Gradle file under dependencies.
implementation 'com.google.android.gms:play-services-wearable:17.1.0'
implementation 'androidx.wear:wear:1.1.0'
This allowed the IDE to recognize the requisite classes and import them accordingly into the companion phone-side application.
I believe these statements are true:
1) All Universal Apps Work As Holograms
2) Universal Apps can be built using HTML/JS
Does this mean I can build a holographic universal app using web technologies? For example a holographic visualizations dashboard in D3.js?
It's still too early to say definitively, but here is some info I could find.
UPDATE: There is now a library called HoloJS which allows devs to write apps in HTML.
First, your assumptions 1 and 2 are correct. There are ways to build UWP (Universal Windows Platform) apps in JavaScript/HTML. This means you could write a UWP JS app that runs WebGL in a 2D window placed somewhere in your environment. You could also run your app in Microsoft Edge.
So if all you want to do is display a 2D dashboard in a 3D room, yes it looks very possible. If you want the application to render 3D objects all around the user, there are some problems you will need to work around.
Quoted from https://forums.hololens.com/discussion/80/is-it-possible-to-use-webgl-with-hololens-repost#latest:
"Holographic apps are powered by the same graphics stack as the rest of the Windows 10 ecosystem. That means that just like the Xbox and Win32 games, apps for HoloLens are built on top of DirectX."
So you're kind of stuck with either Unity or DirectX if you want 3D visualizations that surround the user. BUT there could be a way...
A user at the bottom of this page (http://forums.hololens.com/discussion/80/is-it-possible-to-use-webgl-with-hololens-repost) said:
"That is interesting idea. If I understand correctly, you are trying to hook your Edge browser with your HoloLens and project 3D graphics with WebGL on your Edge browser based on the REST APIs available from HoloLens"
So you could perhaps full-screen your app, or find some way to ensure it is in front of your user's face, and then use a server to direct API calls from the HoloLens to your web app in order to transform your geometry around the user.
It might be worth looking into integrating D3 visualizations inside a three.js app if you want holographic visualizations: https://www.youtube.com/watch?v=bWjn1N4SJsk
If you just want a 2D screen in the environment, then develop as normal and use Edge inside the HoloLens.
The team I'm on built a cross-platform application, using Xamarin's tools and MonoCross, that runs on iOS, Android, and Windows Mobile. We're looking at MvvmCross as a possible MonoCross replacement but don't want to rewrite the application from scratch.
Does anyone have experience with, or thoughts on, migrating a Xamarin/MonoCross cross-platform application to Xamarin/MvvmCross? Is it possible for the two frameworks to coexist in the same app? (The ideal solution would have us migrate the app one screen at a time.)
Thanks in advance.
Following Stuart's advice below, I confirmed that it is possible to integrate MvvmCross into an existing MonoCross application.
In the original code, a selection on View 1 initiates a call to Controller 2 using MonoCross URI navigation. Controller 2 displays View 2, passing it the data from Model 2.
Following the example in this video, I created an MvvmCross View and ViewModel. A selection on View 1 still navigates to Controller 2, but Controller 2 now displays the new MvvmCross View 2. View 2 is data-bound to ViewModel 2, which takes over Controller 2's job of getting the Model data.
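A rough sketch of the shape ViewModel 2 might take (the DetailViewModel, IModelService, and Title names are illustrative placeholders, and the namespace shown is the MvvmCross v3-era one):

using Cirrious.MvvmCross.ViewModels; // v3-era namespace

// Hypothetical service abstracting the Model 2 data access.
public interface IModelService
{
    string GetTitle(string itemId);
}

public class DetailViewModel : MvxViewModel
{
    readonly IModelService _modelService;

    public DetailViewModel(IModelService modelService)
    {
        _modelService = modelService;
    }

    string _title;
    public string Title
    {
        get { return _title; }
        set { _title = value; RaisePropertyChanged(() => Title); }
    }

    // Called by MvvmCross with navigation parameters after construction.
    public void Init(string itemId)
    {
        // The data lookup that Controller 2 used to do before showing View 2.
        Title = _modelService.GetTitle(itemId);
    }
}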
I don't know of anyone who's done this recently, but I originally ported several of the MonoCross samples over when I first created MvvmCross. The overall idea of one page to one ViewModel stays the same, although MVVM binding offers more continuous View-ViewModel interaction than the more discrete Controller-Action model.
At a practical level:
MvvmCross itself is very modular and can be used in "CrossLight" mode, where it simply provides data-binding and plugins - see CrossLight on http://mvvmcross.wordpress.com/. You might be able to use this for migrating pages one by one.
MonoCross isn't really very interface/IoC-based, so you may find that your resulting MvvmCross migration isn't interface-based either.
MonoCross apps tend to use file-linking and #defines rather than PCLs, so you may find it easier not to use PCLs in MvvmCross.
I suspect the best option for this migration is to let your team experiment - they already have lots of knowledge about your app and about what they do and don't need and what benefits they do and don't get from a framework.
I have been looking into using MvvmCross as our solution for cross-platform development; our previous development targeted iOS exclusively. I have come to really like how storyboards bring all the views together, along with the flow between them.
I know MonoTouch supports their use through storyboard projects, which I have been able to work with; however, I have not been able to find any reference or example of storyboards being used with MvvmCross.
Is this currently supported? Or can someone provide me some tips on how I can get this set up? The initialization seems to be the issue, as in storyboard projects the FinishedLaunching method in the AppDelegate is usually empty.
Is this currently supported?
I don't believe it is.
I've never used Storyboards to build anything other than a demo app - so I'm not an expert.
However, from what I know, I think there are 3 problems that you would need to overcome.
1. Storyboards don't have code in FinishedLaunching
This is easy to solve, I think - you can just add an override of FinishedLaunching which calls an MvvmCross Setup class in order to initialise IoC, plugins, your App, etc.
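Something like this rough sketch (assuming your AppDelegate derives from MvxApplicationDelegate and that Setup is the usual MvxTouchSetup subclass the MvvmCross templates generate):

public override bool FinishedLaunching(UIApplication app, NSDictionary options)
{
    // The storyboard has already created the window and root view controller,
    // so this override only bootstraps MvvmCross: IoC, plugins, the shared App.
    var setup = new Setup(this, Window);
    setup.Initialize();
    return true;
}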
2. MvvmCross vNext requires you to override constructors of the form:

public DetailViewController(MvxShowViewModelRequest request)
{
}

while storyboards require constructors of the form:

public DetailViewController(IntPtr handle) : base(handle)
{
}
Overcoming this is harder... but the good news is that it should be a lot easier in v3 - one of the stated aims of v3 is to somehow support storyboards - see http://slodge.blogspot.co.uk/2013/02/mvvmcross-v3.html
3. Clash of concepts
If you are using Storyboards, then the navigation logic is tied to the Storyboard and to the UIViewController.
If you are using MvvmCross, then the navigation logic is tied to the ViewModels.
Overcoming this would be relatively straightforward - you can easily mix and match concepts - but you might find your ViewModels and Views feel 'a bit odd' as a result.
Summary
Doing this today is possible but would require some hours of hacking.
A beta of v3 is due very soon (within weeks - it just depends on my spare time). Once that is available, I think you'd be able to get started much more quickly.