We are using Xamarin.Forms to create an Azure-backed mobile app. We are using offline data sync to allow offline data storage and processing. This is all hooked up as per the many walkthroughs on the web, and all seems to be working fine.
One of our tables contains ~5000 rows, which is causing us a few issues. When you do the initial sync with the app using the code:
this.listTable = client.GetSyncTable<Entities.List>();
await this.listTable.PullAsync("allLists", listTable.CreateQuery());
the Azure .NET service seems to get hit thousands of times. I have debugged the service's GetAllList() method on Azure, which contains the code:
public IQueryable<List> GetAllList()
{
    return Query();
}
and the breakpoint gets hit a lot. I have checked out what Query() is returning, and it looks to be returning the whole dataset (~5000 rows), as I would expect.
Does anybody know what I could be doing wrong to have it called so many times? It looks as though it's returning the whole dataset maybe once for each row in the dataset, causing the sync to be very slow.
I stepped through the controller process when debugging, and it doesn't just call the GetAllList() method loads of times: it initialises the controller and then calls the GetAllList() method, over and over again.
I'm sure I've set something up incorrectly and it's a simple mistake, as I can't believe this is by design, but I can't for the life of me spot what I have done wrong :(
Any help would be very welcome!
Thanks,
Al.
Turns out the page size for the requests is 50 by default. This results in loads of calls to the service, in proportion to the number of rows I was pulling, which took loads of time.
I can up the MaxPageSize on the client and the PageSize on the service, and it is significantly quicker.
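For reference, here is roughly what the two changes look like - a sketch, assuming the Azure Mobile Apps .NET client and server SDKs (if your SDK version exposes paging differently, the idea is the same); 500 is just an example value to tune for your payloads:
// Client: pull in bigger pages (PullOptions lives in Microsoft.WindowsAzure.MobileServices.Sync).
await this.listTable.PullAsync(
    "allLists",
    listTable.CreateQuery(),
    pushOtherTables: false,
    cancellationToken: System.Threading.CancellationToken.None,
    pullOptions: new PullOptions { MaxPageSize = 500 });
// Server: allow bigger pages on the table controller's query action.
[EnableQuery(PageSize = 500)]
public IQueryable<List> GetAllList()
{
    return Query();
}
With a page size of 500, the ~5000-row table needs around 10 round trips instead of ~100.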
Came across the suggestions here for reference - https://social.msdn.microsoft.com/Forums/sqlserver/en-US/ff5b6ba7-76c7-42fe-847d-9898256e3249/problem-syncing-large-table?forum=azuremobile
There are, of course, things that need to be taken into account when upping this value - but at least it explains what was happening.
I have created an Azure Function in PowerShell which is triggered by an HTTP hit. It writes a JSON file to its root folder after processing. But if multiple hits occur at the same time, it throws a file-in-use error. I know Azure Functions doesn't handle multi-threading well, and variables can be modified while one process is running and a second one starts. I don't want to use queue storage, so are there any good suggestions for how to do this?
DO NOT WRITE ANYTHING TO THE AZURE FUNCTIONS FILESYSTEM THAT YOU DO NOT WANT TO LOSE.
Use Cosmos DB or some other external data store to store your data.
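If you go that route, a minimal sketch with the .NET Cosmos DB SDK (Microsoft.Azure.Cosmos) could look like this - the database/container names and the ProcessedResult shape are placeholders, not anything from your function:
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;
using Newtonsoft.Json;

public class ProcessedResult
{
    [JsonProperty("id")] // Cosmos DB items need a lowercase "id" property.
    public string Id { get; set; }
    public string Payload { get; set; }
}

public static class ResultStore
{
    // Upserts are safe under concurrent invocations - no file locks to fight over.
    public static Task SaveAsync(Container container, ProcessedResult result) =>
        container.UpsertItemAsync(result, new PartitionKey(result.Id));
}
Here Container comes from new CosmosClient(connectionString).GetContainer(databaseId, containerId).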
Without a code sample, it is hard to say what you might be doing wrong. You should be able to do this fine from a code point of view, but you need to check if the file is in use and handle errors when it is (i.e. wait, or return a 429 error, etc.).
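If you do stick with the filesystem despite the warning above, the usual shape is a short retry with back-off. A hedged C# sketch of the idea (the question uses PowerShell, but the pattern is the same; the helper name and timings are made up):
using System;
using System.IO;
using System.Threading.Tasks;

public static class SharedFileWriter
{
    // Tries the write a few times; returns false so the caller can answer 429.
    public static async Task<bool> TryWriteAsync(string path, string json, int maxAttempts = 3)
    {
        for (int attempt = 1; attempt <= maxAttempts; attempt++)
        {
            try
            {
                // FileShare.None fails fast if another invocation holds the file.
                using (var stream = new FileStream(path, FileMode.Create, FileAccess.Write, FileShare.None))
                using (var writer = new StreamWriter(stream))
                {
                    await writer.WriteAsync(json);
                    return true;
                }
            }
            catch (IOException) when (attempt < maxAttempts)
            {
                // File in use by another invocation: wait briefly, then retry.
                await Task.Delay(TimeSpan.FromMilliseconds(200 * attempt));
            }
        }
        return false;
    }
}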
I've been working on a Xamarin.Forms application in Visual Studio using Azure for the backend for a while now, and I've come across a really strange issue.
Please note that I am following the methods mentioned in this blog.
For some strange reason the PullAsync() method seems to have some bizarre problems. Any data that I create and sync will only be pulled by PullAsync() from that solution. What I mean by that is that if I create another solution that accesses the exact same backend, it can perform its own create/sync of data, but will not bring over the data generated by the other solution, even though they both seem to have the exact same access. This appears to be some kind of a security feature/issue, but I can't quite make sense of it.
Has anyone else encountered this at all? Was there a work-around at all? This could potentially cause problems down the road if I were to ever want to create another solution that accesses the same system/data for whatever reason.
For some strange reason the PullAsync() method seems to have some bizarre problems. Any data that I create and sync will only be pulled by PullAsync() from that solution.
According to your provided tutorial, I found that the related PullAsync is using Incremental Sync.
await coffeeTable.PullAsync("allCoffees", coffeeTable.CreateQuery());
Incremental Sync:
The first parameter to the pull operation is a query name that is used only on the client. If you use a non-null query name, the Azure Mobile SDK performs an incremental sync. Each time a pull operation returns a set of results, the latest updatedAt timestamp from that result set is stored in the SDK local system tables. Subsequent pull operations retrieve only records after that timestamp.
Here is my test; you could refer to it for a better understanding of incremental sync:
Client: await todoTable.PullAsync("todoItems-02", todoTable.CreateQuery());
The client SDK first checks whether there is a record whose id equals deltaToken|{table-name}|{query-id} in the __config table of your SQLite local store.
If there is no record, the SDK sends a request like the following to pull your records:
https://{your-mobileapp-name}.azurewebsites.net/tables/TodoItem?$filter=(updatedAt%20ge%20datetimeoffset'1970-01-01T00%3A00%3A00.0000000%2B00%3A00')&$orderby=updatedAt&$skip=0&$top=50&__includeDeleted=true
Note: the $filter would be set as (updatedAt ge datetimeoffset'1970-01-01T00:00:00.0000000+00:00')
If there is such a record, the SDK picks up its value as the latest updatedAt timestamp and sends the request as follows:
https://{your-mobileapp-name}.azurewebsites.net/tables/TodoItem?$filter=(updatedAt%20ge%20datetimeoffset'2017-06-26T02%3A44%3A25.3940000%2B00%3A00')&$orderby=updatedAt&$skip=0&$top=50&__includeDeleted=true
Per my understanding, if you run the same logical query with the same (non-null) query id in different mobile clients, you need to make sure the local database is newly created by each client. Also, if you want to opt out of incremental sync, pass null as the query ID. In this case, all records are retrieved on every call to PullAsync, which is potentially inefficient. For more details, you could refer to How offline synchronization works.
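Concretely, opting out just means passing a null query ID - every pull then retrieves all records, which is fine for small tables but wasteful for large ones:
// Null query ID: no delta token is stored, so each pull re-fetches everything.
await todoTable.PullAsync(null, todoTable.CreateQuery());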
Additionally, you could leverage Fiddler to capture the network traces when you invoke PullAsync, in order to troubleshoot your issue.
I am building a sports data visualization application with server-side rendering in React (ES6)/Redux/React-Router-Redux. At the top, there is a class-based App component, and there are two different class-based component routes. (everything under those is a stateless functional component), structured as follows:
App
|__ Index (/)
|__ Match (/match/:id)
When a request is made for a given route, one API call is dispatched, containing all information for the given route. This is hosted on a different server, where we're using Restify and Sequelize ORM. The JSON object returned is roughly 12,000 to 30,000 lines long and takes anywhere from 500ms to 8500ms to return.
Our application, therefore, takes a long time to load, and I'm thinking that this is the main bottleneck. I have a couple of options in mind:
1) Separate this huge API call into many smaller API calls. Although, since JS is single-threaded, I'd have to measure the speed of the render to find out if this is viable.
2) Attempt lazy loading by dispatching a new API call when a new tab is clicked (each match has several games, all in new tabs).
Am I on the right track? Or is there a better option? Thanks in advance, and please let me know if you need any more examples!
This depends on many things including who your target client is. Would mobile devices ever use this or strictly desktop?
From what you have said so far, I would opt for "lazy loading".
Either way, you generally never want any app to force a user to wait at all, especially not for over 8 seconds.
You want your page to send and show up with something that works as quickly as possible. This means you don't want to wait until all data resolves before your UI can be hydrated. (This is what will have to happen if you are truly server-side rendering, because in many situations your client application would be built and delivered at least a few seconds before the data is resolved and sent over the line.)
If you have mobile devices with spotty network connections, they will likely never see this page due to timeouts.
It looks like paginating and lazy loading based on accessing other pages might be a good solution here.
In this situation you may also want to look into persisting the data and caching. This is a pretty big undertaking and might be more complicated than you would want. I know some colleagues who might use libraries to handle most of this stuff for them.
I am working on a small hobby project, where I would really like some input and advice.
This is my first "real" node project, and I hope it will teach me a lot about node.js development. I am a .net developer by day, and have been for about 15 years professionally. I have had periods of doing Java as well. I have created small node.js projects to be used as micro services.
But this project can no longer be classified as a micro service ;-)
The purpose of the project is to sample some sensor data and do some reporting - an idea I got from playing around with a PLC at university. I do that by sampling from a PLC and emitting the data using ZeroMQ. My node.js server then listens for this sensor data and stores it in MongoDB.
I expose that data in a REST API. The REST API also exposes resources like batches and other stuff like authentication, etc. On top of that I have an AngularJS app that defines the UI.
The one thing that really annoys me is that I want to globally assign which batch is running. I have a collection of batches, and one of them is the running one. There are two ways I see to do this, and both illustrate my novice status in the node.js world. All users should be able to see what batch is running, and I want to be able to easily tell from anywhere in the code as well.
1) Set a flag on the object in Mongo. This has a number of problems, the obvious one being performance. I receive sensor data 10 times a second, and I don't want to ask the database every time what batch to save it under.
2) Save the info on the global object. I really don't like this either. I don't like global state in my code.
What is a good pattern for doing something like this? Does my question make any sense?
Thanks in advance
You can make a simple REST call to set the active batch and call it when the batch is started up and ready to accept requests. For example:
app.put('/active-batch', function (req, res, next) {
    // Make sure req.body is defined before storing it
    app.set('active-batch', req.body);
    res.end();
});
Then everywhere in the code you can use:
app.get('active-batch');
app.set lets you save data globally accessible in your app, and app.get lets you read previously stored data.
Hi, I am new to the whole programming thing. I have been given a task to multithread 4 stored procedures, where each thread runs asynchronously so that the user gets output quickly, and I have to do it using WCF. Can anyone help me out with this? Initially what I am trying to do is take each procedure and measure how much time it takes to execute using ParameterizedThreadStart, but I am not sure how to go about it.
Considering you are new to the whole programming thing, you can follow these very basic steps to get things done.
1) Create a new WCF service.
2) Add 4 methods, each calling one stored procedure.
3) Add parameters to the methods as required by the stored procedures. For example, if your stored procedure is MySP(varchar name), then your WCF method will be MySP(string name).
4) Deploy your service to IIS, a Windows service, a console app, or wherever you want.
5) Create a client application; again, it could be anything, a console app or WinForms etc.
6) Add a reference to your service.
7) Instantiate the service class and call the Async versions of the methods. By Async I mean you'll see all four methods with Async attached. For example, you will find your MySP(string name) method as MySPAsync(string name). There will also be a MySPCompleted event; subscribe to it.
Now all of your methods run asynchronously; whenever they finish executing, they'll invoke your subscribed handlers (see the sketch below).
I hope this helps you get started :)
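To make that concrete, here is a rough sketch of the pieces (IStoredProcService and StoredProcServiceClient are hypothetical names; the client proxy class is what "Add Service Reference" generates for you):
using System;
using System.ServiceModel;

[ServiceContract]
public interface IStoredProcService
{
    [OperationContract]
    string MySP(string name);
    // ...three more operations, one per stored procedure.
}

// Client side, using the generated event-based async methods:
var client = new StoredProcServiceClient();
client.MySPCompleted += (sender, e) =>
{
    // Fires when the service call finishes; e.Result holds the return value.
    Console.WriteLine(e.Result);
};
client.MySPAsync("someName"); // Returns immediately and runs in the background.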
There are a couple of different ways to do this. At the highest level, you can place each service request in its own service endpoint. This could mean defining an endpoint for each method or, if you are hosting in IIS, placing each service in its own website. At a lower level, you could define callbacks for each method so that WCF does not block while the method calls are taking place.