Xamarin and cross-platform webservice access + json parsing - xamarin.ios

I'm searching for a way to retrieve data (formatted as JSON) from an API and parse it.
I want to use the same code for both Android and iOS. I've already seen examples, but they didn't work on both platforms.
If you can provide examples for connecting, retrieving the data, and parsing the JSON, that would be best, because I haven't found good docs on a (fairly simple) cross-platform implementation.
Comments welcome!
Thanks in advance!

I've used Newtonsoft's JSON library (Json.NET) in a MonoTouch solution.
Find the source code here.
As for retrieving the data, that depends on your API; I suspect it's a web API with HTTP calls? If that's the case, you can elaborate on something like this (exception handling and threading are obviously up to you):
HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create (url);
...
request.BeginGetResponse ((r) =>
{
    // Complete the asynchronous call to obtain the response
    var response = (HttpWebResponse)request.EndGetResponse (r);
    string res = null;
    using (StreamReader srd = new StreamReader (response.GetResponseStream ())) {
        res = srd.ReadToEnd ();
    }
    // T is whatever model type matches the JSON you expect back
    T jres = Newtonsoft.Json.JsonConvert.DeserializeObject<T> (res);
}, null);
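In the snippet above, T is just a plain class whose properties match the JSON you expect back. A minimal sketch, assuming a hypothetical payload like {"Id":1,"Name":"Alice"} (the Person type is made up for illustration); the same class and call compile unchanged in a MonoTouch (iOS) and a Mono for Android project:
// Hypothetical model for a JSON payload such as {"Id":1,"Name":"Alice"}
public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Inside the BeginGetResponse callback, after reading the body into 'res':
Person jres = Newtonsoft.Json.JsonConvert.DeserializeObject<Person> (res);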

Instead of using NuGet to download Newtonsoft's Json.NET, you need to download it from here: http://components.xamarin.com/view/json.net/

Related

Can't load Features.Diagnostics

I'm creating a web client for joining Teams meetings with the ACS Calling SDK.
I'm having trouble loading the diagnostics API. Microsoft provides this page:
https://learn.microsoft.com/en-us/azure/communication-services/concepts/voice-video-calling/call-diagnostics
You are supposed to get the diagnostics this way:
const callDiagnostics = call.api(Features.Diagnostics);
This does not work.
I am loading the Features like this:
import { Features } from '@azure/communication-calling'
A statement console.log(Features) shows only these four features:
DominantSpeakers: (...)
Recording: (...)
Transcription: (...)
Transfer: (...)
Where are the Diagnostics??
User Facing Diagnostics
For anyone, like me, looking now...
At the time of writing, using the latest version of the @azure/communication-calling SDK, the documented solution still doesn't work:
const callDiagnostics = call.api(Features.Diagnostics);
call.api is undefined.
TL;DR
However, once the call is instantiated, this allows you to subscribe to changes:
const call = callAgent.join(/** your settings **/);
const userFacingDiagnostics = call.feature(Features.UserFacingDiagnostics);
userFacingDiagnostics.media.on("diagnosticChanged", (diagnosticInfo) => {
  console.log(diagnosticInfo);
});
userFacingDiagnostics.network.on("diagnosticChanged", (diagnosticInfo) => {
  console.log(diagnosticInfo);
});
This isn't documented in the latest version, but is under this alpha version.
Whether this will continue to work is anyone's guess ¯\_(ツ)_/¯
Accessing Pre-Call APIs
Confusingly, this doesn't currently work using the specified version, despite the docs saying it will...
Features.PreCallDiagnostics is undefined.
This is actually what I was looking for, but I can get what I want by setting up a test call and asking for the latest values, like this:
const call = callAgent.join(/** your settings **/);
const userFacingDiagnostics = call.feature(Features.UserFacingDiagnostics);
console.log(userFacingDiagnostics.media.getLatest())
console.log(userFacingDiagnostics.network.getLatest())
Hope this helps :)
The User Facing Diagnostics API is currently only available in the public preview and npm beta packages. I confirmed this with a quick test comparing the 1.1.0 and beta packages.
Check the following link:
https://github.com/Azure-Samples/communication-services-web-calling-tutorial/
Features are imported from the @azure/communication-calling package,
for example:
const {
  Features
} = require('@azure/communication-calling');

How to get attached file by URL from local CouchBase-Lite?

We are trying to get an image file that has been attached to a document in a local Couchbase Lite database.
I'd like to be able to get these files using the same URL syntax used for a remote CouchDB server, just pointing at the local server instead of the remote one (http://wiki.apache.org/couchdb/HTTP_Document_API#Attachments).
I can't seem to find how to do this.
Anyone know how? Thanks
Assuming that you are actually asking about Couchbase Lite (aka TouchDB, i.e. the embedded iOS/Android SDK), you should look here: http://couchbase.github.io/couchbase-lite-ios/docs/html/interfaceCBLAttachment.html (or the Android equivalent if that's what you're working with).
I'm not sure why you're also asking about URL syntax if that's what you're doing, but hey - there's your answer. You can clarify if you were talking about something else.
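If you are on Xamarin and use the Couchbase Lite .NET component rather than raw HTTP, a rough sketch of pulling an attachment through the native API could look like the following. This assumes the 1.x .NET API (Manager/Database/Document/Attachment); the database name, document id and attachment name are made-up examples:
// Rough sketch against the Couchbase Lite .NET 1.x API; names below are illustrative only
var manager = Couchbase.Lite.Manager.SharedInstance;
var db = manager.GetDatabase ("mydb");                              // hypothetical database name
var doc = db.GetExistingDocument ("user::123");                     // hypothetical document id
var attachment = doc.CurrentRevision.GetAttachment ("avatar.png");  // hypothetical attachment name
if (attachment != null)
{
    using (var stream = attachment.ContentStream)
    {
        // read the image bytes here, e.g. copy into a MemoryStream or load into a UIImage
    }
}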
This is not really an answer, because I'm also looking for a solution to this. I am using CBL iOS and I want to get the local database URL to be able to get info from it.
This is what I have so far:
string[] paths = NSSearchPath.GetDirectories (NSSearchPathDirectory.DocumentDirectory, NSSearchPathDomain.User);
var request = new NSMutableUrlRequest ();
request.Url = new NSUrl (paths[0], false);
request.HttpMethod = "GET";

// The out parameters need to be declared before the call
NSUrlResponse response;
NSError error;
var data = NSUrlConnection.SendSynchronousRequest (request, out response, out error);
if (error != null)
{
    Console.WriteLine ("Error in SendRequest=" + error.LocalizedDescription);
    throw new HttpRequestException (error.LocalizedDescription);
}
And this prints out in the console: Error in SendRequest=The requested URL was not found on this server.
The URL is something like this:
file:///Users/ME/Library/Application%20Support/iPhone%20Simulator/7.0/Applications/ABFDF-ADBAFB-SFGNAFAF/Documents/users/user

Monotouch/iPhone - Call to HttpWebRequest.GetRequestStream() connects to server when HTTP Method is DELETE

My scenario:
I am using MonoTouch for iOS to create an iPhone app. I am calling ASP.NET MVC 4 Web API based HTTP services to log in / log off. For login, I use the POST web method and all is well. For logoff, I am calling the DELETE web method. I want to pass JSON data (a serialized complex type) to the DELETE call. If I pass simple data, like a single string parameter as part of the URL itself, all is well, i.e. DELETE does work! In order to pass the complex JSON data, here's my call (I have simplified the code by showing just one parameter, UserName, being sent via JSON):
HttpWebRequest req = (HttpWebRequest)HttpWebRequest.Create("http://localhost/module/api/session/");
req.ContentType = "application/json";
req.CookieContainer = jar;
req.Method = "Delete";
using (var streamWrite = new StreamWriter(req.GetRequestStream()))
{
    string jSON = "{\"UserName\":\"" + "someone" + "\"}";
    streamWrite.Write(jSON);
    streamWrite.Close();
}
HttpWebResponse res = (HttpWebResponse)req.GetResponse();
On the server, the Delete method has this definition:
public void Delete(Credentials user)
where Credentials is a complex type.
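For reference, a minimal sketch of the server side (the Credentials shape here is assumed from the single UserName property sent above; the controller name is hypothetical but matches the /module/api/session/ route):
// Requires System.Web.Http (ASP.NET Web API)
public class Credentials
{
    public string UserName { get; set; }
}

public class SessionController : ApiController
{
    public void Delete(Credentials user)
    {
        // 'user' should arrive populated from the JSON body,
        // but ends up null when called from MonoTouch as described below
    }
}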
Now, here's the issue!
The above code gets into the Delete method on the server as soon as it hits:
req.GetRequestStream()
Hence the parameter sent to the Delete method ends up being null.
And here's the weird part:
If I use the exact same code in a test VS 2010 Windows application, it works, i.e. it does not call Delete until req.GetResponse() is called! And in that scenario, the parameter to the Delete method is a valid object!
QUESTION
Any ideas? Is this a bug in MonoTouch, and if so, is there a workaround?
NOTE:
If I change the Delete definition to public void Delete(string userName)
and, instead of JSON, pass the parameter as part of the URL itself, all is well. But like I said, this is just a simplified example to illustrate my issue. Any help is appreciated!
This seems to be ill-defined. See this question for more details: Is an entity body allowed for an HTTP DELETE request?
In general, MonoTouch (based on Mono) will try to be feature/bug compatible with the Microsoft .NET framework to ease code portability between platforms.
In other words, if MS .NET ignores the body of a DELETE request then so will MonoTouch. If the behaviour differs, a bug report should be filed at http://bugzilla.xamarin.com
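As a possible workaround (not from the original answer): if your MonoTouch version ships System.Net.Http, HttpClient lets you attach a body to a DELETE request explicitly, which avoids relying on when HttpWebRequest opens the request stream. A rough sketch, reusing the JSON and URL from the question:
// Rough sketch: DELETE with a JSON body via HttpClient (System.Net.Http), inside an async method
var request = new HttpRequestMessage (HttpMethod.Delete, "http://localhost/module/api/session/");
request.Content = new StringContent ("{\"UserName\":\"someone\"}", Encoding.UTF8, "application/json");

using (var client = new HttpClient ())
{
    var response = await client.SendAsync (request);
    Console.WriteLine (response.StatusCode);
}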

Information about IronJS

Can anyone point out where I can find some tutorials about IronJS, and how to call a method written in IronJS from C# 4.0?
Thanks
C#4.0, IronJS
There is now some good information from the author on the GitHub project wiki:
https://github.com/fholm/IronJS/wiki
There is a 'first steps' blog post here:
http://blog.dotsmart.net/2011/04/20/first-steps-with-ironjs-0-2/
And I have written several blog posts on IronJS including one that stej has linked. The post stej linked is actually current, but it only covers some basic aspects of embedding. IronJS has undergone a radical rewrite since my first posts so I have put notices on those posts directing to newer updates.
This post specifically covers the original poster's question about how to call JS code from C#:
http://newcome.wordpress.com/2011/03/13/embedding-ironjs-part-ii/
Here is a quick summary:
IronJS.Hosting.Context ctx = IronJS.Hosting.Context.Create();
ctx.Execute("hello = function() { return 'hello from IronJS' }");
IronJS.Box obj = ctx.GetGlobal("hello");
Func<IronJS.Function,IronJS.Object,IronJS.Box> fun =
    obj.Func.Compiler.compileAs<Func<IronJS.Function,IronJS.Object,IronJS.Box>>(obj.Func);
IronJS.Box res = fun.Invoke(obj.Func, obj.Func.Env.Globals);
Console.WriteLine(res.String);
Check out https://github.com/fholm/IronJS/wiki for guides on using IronJS.
If you have a Context, you can call Context.CompileSource() and pass its results to Context.InvokeCompiled(), or just call Context.Execute() and pass it the source code. Roughly, this:
IronJS.Hosting.Context ijsCtx;
ijsCtx = IronJS.Hosting.Context.Create();
ijsCtx.Execute("(function(){return 42;})()");
You might have a look at Embedding IronJS. But it looks outdated, as does the answer by @Gabe.
Currently it should be called like this:
var o = new IronJS.Hosting.CSharp.Context();
o.Execute("var a = 10; a");

Is Adobe Media Encoder scriptable with ExtendScript?

Is Adobe Media Encoder (AME) Scriptable? I've heard people mention it was "officially scriptable" but I can't find any reference to its scriptable object set.
Has anyone had any experience scripting AME?
Adobe Media Encoder is 'officially' not scriptable, but we can use the ExtendScript API to script AME.
The functions below are available through ExtendScript:
1. Adding a file to the batch
Encode progress:
eHost = app.getEncoderHost();
enc = eHost.createEncoderForFormat("QuickTime");
flag = enc.loadPreset("HD 1080i 29.97, H.264, AAC 48 kHz");
if (flag) {
    enc.onEncodeProgress = function (progress) {
        $.writeln(progress);
    };
    enc.encode("/Users/test/Desktop/00000.MTS", "/Users/test/Desktop/0.mov");
} else {
    alert("The preset could not be loaded");
}
Encode end:
eHost = app.getEncoderHost();
enc = eHost.createEncoderForFormat("QuickTime");
flag = enc.loadPreset("HD 1080i 29.97, H.264, AAC 48 kHz");
if (flag) {
    enc.onEncodeFinished = function (success) {
        if (success) {
            alert("Encoding finished successfully");
        } else {
            alert("Encoding failed");
        }
    };
    eHost.runBatch();
} else {
    alert("The preset could not be loaded");
}
2. Start batch
eHost = app.getEncoderHost();
eHost.runBatch();
3. Stop batch
eHost = app.getEncoderHost();
eHost.stopBatch();
4. Pause batch
eHost = app.getEncoderHost();
eHost.pauseBatch();
5. Getting preset formats
eHost = app.getEncoderHost();
list = eHost.getFormatList();
6. Getting presets
eHost = app.getEncoderHost();
enc = eHost.createEncoderForFormat("QuickTime");
list = enc.getPresetList();
and many more...
The closest bits of info I've found are:
http://www.openspc2.org/book/MediaEncoderCC/
That resource is actually good; if you can read Japanese, or at least use Chrome's built-in translate function, you can see it has pages such as this one:
http://www.openspc2.org/book/MediaEncoderCC/easy/encodeHost/009/index.html
We can perform almost all basic functionality through script.
I had a similar question about Soundbooth. I haven't tried scripting Adobe Media Encoder, though; it doesn't show up in the list of applications I could potentially connect to and script with the ExtendScript Toolkit.
I did find this article that might come in handy if you're on Windows. I guess something similar written in AppleScript could do the job on OS X. I haven't tried it, but this Sikuli thing looks nice; maybe it could help with the job.
Adobe Media Encoder doesn't seem to be scriptable. I was wondering: for batch converting, could you use ffmpeg? There seem to be a few scripts out there for that, if you google for ffmpeg batch flv.
HTH,
George
Year 2021
Yes, AME is scriptable in ExtendScript. AME API doc can be found at
https://ame-scripting.docsforadobe.dev/index.html.
The API methods can be invoked locally inside AME or remotely through BridgeTalk.
addCompToBatch and other alternatives in the API doc seem to be safe to use. This is working:
app.getFrontend().addCompToBatch(project, preset, destination);
The method requires the project to be structured so that one and only one comp is at the root of the project.
encoder.encode (references to which can be found on the web, and which is supposed to support encode progress callbacks) is not available in AME 2020 and 2021. As a result, this is not working:
var encoder = app.getEncoderHost().createEncoderForFormat(encoderFormat);
var res = encoder.loadPreset(encoderPreset);
if (res) {
    encoder.encode(project, destination); // error: encode is not a function
}
The method seems to have been removed in AME 2017.1, according to this post reporting the issue: https://community.adobe.com/t5/adobe-media-encoder-discussions/media-encoder-automation-system-with-using-extendscript/td-p/9344018
The official stance at the moment is "no", but if you open the Adobe ExtendScript Toolkit and set the target app to Media Encoder, you will see in the Data Browser that a few objects and methods are already exposed on the app object, like app.getFrontend(), app.getEncoderHost(), etc. There is no official documentation though, and no support, so you are free to experiment with them at your own risk.
You can use the ExtendScript reflection interface like this:
a = app.getFrontend()
a.reflect.properties
a.reflect.methods
a.reflect.find("addItemToBatch").description
But as far as I can see, no meaningful information can be found this way beyond a list of methods and properties.
More about the ExtendScript reflect interface can be found in the JavaScript Tools Guide CC document.
Doesn't seem to be. There are some references to it being somewhat scriptable via FCP XML, yet it's not "scriptable" in the accepted sense.
Edit: it looks like they finally got their finger out and made AME scriptable: https://stackoverflow.com/a/69203537/432987
I got here after this page came second in the DuckDuckGo results for "extendscript adobe media encoder". First was a post on the Adobe forums where an Adobe staffer wrote:
Scripting in Adobe Media Encoder is not a supported feature.
and, just to give the finger to anyone seeking to develop solutions for Adobe users on Adobe's own platform:
Also, this is a user-to-user forum, not an official channel for support from Adobe personnel.
I think the answer is "Adobe says no"
