Switching production Azure tables powering a cloud service

I would like to know the best way to handle the following scenario.
I have an Azure cloud service that uses an Azure Storage table to look up data for incoming requests. The data in the table is generated offline periodically (once a week).
When new data is generated offline, I currently upload it into a separate table, make a config change to the service (change the table name, which is stored as a constant in my code) so it picks up data from the new table, and re-deploy the service. (Every time the data changes, I change the table name and re-deploy.)
The other way would be to keep a configuration parameter for my Azure web role which specifies the name of the table that holds the current production data. Then, within the service, I read the config variable for every request, get a reference to the table, and fetch the data from there.
Is the second approach OK, or would it have a performance hit because I read the config and create a table client on every request that comes to the service? (The SLA for my service is under 2 seconds.)

To answer your question, the 2nd approach is definitely better than the 1st one. I don't think you will take a performance hit, because the config settings are cached on first read (I read that in one of the threads here), and creating a table client does not add network overhead: unless you execute some methods on the table client, the object just sits in memory. One possibility would be to read the value from config once and put it in a static variable. When you change the config setting, capture the role environment changed event and update the static variable with the new value from the config file.
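To make that idea concrete, here is a minimal sketch, assuming a classic cloud service (web role) built on the Microsoft.WindowsAzure.ServiceRuntime and WindowsAzure.Storage packages; the setting names "LookupTableName" and "StorageConnectionString" are placeholders, not anything your project necessarily defines.

```csharp
// Minimal sketch, not a drop-in implementation. Setting names are placeholders.
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

public static class LookupTableProvider
{
    // Cached once; refreshed only when the service configuration changes.
    private static volatile string _tableName =
        RoleEnvironment.GetConfigurationSettingValue("LookupTableName");

    static LookupTableProvider()
    {
        // Pick up the new table name when the .cscfg is updated,
        // so the data source can be switched without a redeploy.
        RoleEnvironment.Changed += (s, e) =>
            _tableName = RoleEnvironment.GetConfigurationSettingValue("LookupTableName");
    }

    public static CloudTable GetTable()
    {
        // Creating the client and table reference is cheap; no network call
        // happens until an operation is actually executed against the table.
        var account = CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue("StorageConnectionString"));
        return account.CreateCloudTableClient().GetTableReference(_tableName);
    }
}
```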
A 3rd alternative could be to soft-code the table name in another table and have your application read the table name from there. You could update the table name as part of your upload process: first upload the data, then update this table with the name of the new table into which the data has been uploaded.
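A rough sketch of that pointer-table idea, with hypothetical table, key, and property names:

```csharp
// Sketch only: "TableSettings", the "config"/"lookupData" keys and the
// "CurrentTableName" property are hypothetical names. The offline upload job
// would update this single row only after the new data table is fully populated.
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage.Table;

public static class CurrentTableLookup
{
    public static async Task<string> GetCurrentTableNameAsync(CloudTableClient client)
    {
        CloudTable settings = client.GetTableReference("TableSettings");
        TableResult result = await settings.ExecuteAsync(
            TableOperation.Retrieve<DynamicTableEntity>("config", "lookupData"));
        var row = (DynamicTableEntity)result.Result;
        return row.Properties["CurrentTableName"].StringValue;
    }
}
```

This costs one extra table read per request, so in practice you would probably cache the returned name for a short interval rather than hitting the settings table every time.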

Related

Creating a Dashboard with a Livestream option

As the title says, I am creating a Dashboard.
The Dashboard should include an option to view data inserted into a database, live or at least "live" with minimal delay.
I was thinking about 2 approaches:
1. When the option is used, the back end creates a trigger in the database (it's only certain data, so I would have to change the trigger according to the data). Said trigger should then send the new data via HTTP to the back end. What I see as a problem is that the delay of sending the data and possible errors could block the whole database.
1.1. Same as 1., but the trigger puts the new data in a separate table which I can then query and delete from.
2. Just query for the newest data every 1-5 seconds or so. This just seems extremely bad and avoidable.
Which of those is the best way to do this? Am I missing something? How is this usually done?
The database is PostgreSQL; the back end and front end are in NodeJS.

Archiving Azure Search Service

I need suggestions on archiving unused data from a Search service and reloading it when needed (the reload will be done later).
The initial design draft looks like this:
Find the keys in the Search service that need to be archived, based on some conditions (such as whether the data is inactive and how old it is).
Run an archiver job (need a suggestion here; it could be a WebJob or a Function App).
Fetch the data, insert it into blob storage, and delete it from the Search service.
Ideally, the job would run in a pool and be asynchronous.
There's no right/wrong answer for this question. What you need to do is perform batch queries (up to 1000 docs at a time) and schedule the job to archive past data (e.g. run an Azure Function on a schedule that searches for docs whose createdDate is older than your cutoff).
Then persist that data somewhere (it can be Cosmos DB, or blobs in a storage account). Once you need to upload it again, I would treat it as a new insert, so it should follow your current insert process.
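If it helps, here is a rough sketch of that flow using the Azure.Search.Documents and Azure.Storage.Blobs SDKs; the index field names ("id", "createdDate"), the 30-day cutoff, and the target container are assumptions you would adapt to your index.

```csharp
// Rough sketch, not a drop-in implementation: field names, cutoff and
// container are assumptions about your index and storage account.
using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;
using Azure.Search.Documents;
using Azure.Search.Documents.Models;
using Azure.Storage.Blobs;

public static class SearchArchiver
{
    public static async Task ArchiveOldDocsAsync(SearchClient search, BlobContainerClient archive)
    {
        // One batch of up to 1000 docs older than the cutoff.
        string cutoff = DateTime.UtcNow.AddDays(-30).ToString("o");
        var options = new SearchOptions { Filter = $"createdDate lt {cutoff}", Size = 1000 };

        SearchResults<SearchDocument> results =
            await search.SearchAsync<SearchDocument>("*", options);

        var archivedKeys = new List<string>();
        await foreach (SearchResult<SearchDocument> hit in results.GetResultsAsync())
        {
            string key = hit.Document["id"].ToString();

            // Persist the raw document to blob storage before touching the index.
            string json = JsonSerializer.Serialize(hit.Document);
            using var stream = new MemoryStream(Encoding.UTF8.GetBytes(json));
            await archive.UploadBlobAsync($"{key}.json", stream);

            archivedKeys.Add(key);
        }

        // Only delete from the index once the blobs have been written.
        if (archivedKeys.Count > 0)
            await search.DeleteDocumentsAsync("id", archivedKeys);
    }
}
```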
You can also take a look at this tool, which helps copy data from your index pretty quickly:
https://github.com/liamca/azure-search-backup-restore

Liferay Service Builder clean and rebuild

I have a service builder module whose table definitions need to be changed. In my case I've modified the portlet-model-hints.xml file in the service's /src/main/resources directory to increase the length of a String field from 75 to a higher number. When I run blade gw cleanServiceBuilder, the old tables are dropped. When I then run blade gw buildService and then deploy the module with blade deploy, the new sql scripts are not executed (or something similar -- I can't find the new tables in my database). Has anyone else had this problem?
It can be fixed by manually deleting some rows in the servicecomponent and release_ tables. In particular, after cleaning the service builder, the servicecomponent table will still have a row with the service's buildNamespace and buildNumber. In the release_ table there will be a row with the servletContextName and schemaVersion of the module in question. These two rows can be deleted by hand, and the next deploy will create the new tables.

Azure App Service dependent calls

I have a class structure as follows:
UserDepartments(1)->(n)Categories(1)->(n)Templates(1)->(n)reports
I am using Azure offline data sync with incremental sync. There are 2 major issues we are facing with this.
The code is here
Issues:
Is there any better way of downloading all this related content than doing a foreach under a foreach?
Intermittently, we see that not all the content that has been changed on the server by another web app downloads and syncs fine when incremental sync is on. Is there a way we can flush the cache list created by the key (the first parameter in PullAsync) used in incremental sync? Or do you see something we need to change in order to make sure that we download the correct data on each sync?
Is there any better way of downloading all this related content than doing a foreach under a foreach?
Pull is performed on a per-table basis; we can't download all the related content at once.
Is there a way we can flush the cache list created by the key (the first parameter in PullAsync) used in Incremental Sync?
Incremental sync is supported by default by the PullAsync method if you pass a non-null value for the queryId parameter. But there are two points we need to pay attention to:
The queryId must be unique for each different pull.
Any field used in the filter of the latter (query) parameter must support sorting.
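For illustration, a trimmed sketch of one pull per table, each with its own queryId. The entity types and the denormalized UserDepartmentId used for filtering are assumptions made for the example, not your actual model.

```csharp
// Sketch only: entity shapes and the UserDepartmentId filter are assumptions.
using System.Threading.Tasks;
using Microsoft.WindowsAzure.MobileServices;
using Microsoft.WindowsAzure.MobileServices.Sync;

public class Category { public string Id { get; set; } public string UserDepartmentId { get; set; } }
public class Template { public string Id { get; set; } public string UserDepartmentId { get; set; } }
public class Report   { public string Id { get; set; } public string UserDepartmentId { get; set; } }

public class SyncService
{
    private readonly IMobileServiceSyncTable<Category> _categories;
    private readonly IMobileServiceSyncTable<Template> _templates;
    private readonly IMobileServiceSyncTable<Report> _reports;

    public SyncService(MobileServiceClient client)
    {
        _categories = client.GetSyncTable<Category>();
        _templates = client.GetSyncTable<Template>();
        _reports = client.GetSyncTable<Report>();
    }

    public async Task PullAllAsync(string departmentId)
    {
        // One pull per table (instead of nested foreach loops), each with its
        // own queryId so incremental-sync state is tracked separately.
        await _categories.PullAsync($"categories_{departmentId}",
            _categories.CreateQuery().Where(c => c.UserDepartmentId == departmentId));

        await _templates.PullAsync($"templates_{departmentId}",
            _templates.CreateQuery().Where(t => t.UserDepartmentId == departmentId));

        await _reports.PullAsync($"reports_{departmentId}",
            _reports.CreateQuery().Where(r => r.UserDepartmentId == departmentId));
    }
}
```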

Retrieving to-be-pushed entries in IMobileServiceSyncTable while offline

Our mobile client app uses IMobileServiceSyncTable for data storage and handling syncing between the client and the server.
A behavior we've seen is that, by default, you can't retrieve entries added to the table while the client is offline. Only when the client table is synced with the server (we do an explicit PushAsync then a PullAsync) can those entries be retrieved.
Anyone knows of a way to change this behavior so that the mobile client can retrieve the entries added while offline?
Our current solution:
Check if the new entry was pushed to the server
If not, save the entry to a separate local table
When showing the list for the table, we pull from both tables: sync table and regular local table.
Compare the entries from the regular local table to the entries from the sync table for duplicates.
Remove duplicates
Join the lists, order, and show to the user.
Thanks!
This should definitely not be happening (and it isn't in my simple tests). I suspect there is a problem with the Id field - perhaps you are generating it and there are conflicts?
If you can open a GitHub Issue on https://github.com/azure/azure-mobile-apps-net-client/issues and share some of your code (via a test repository), we can perhaps debug further.
One idea - rather than letting the server generate an Id, generate one on the client using Guid.NewGuid().ToString(). The server will then accept this as a new Id.
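A small sketch of that suggestion, assuming a MobileServiceClient whose sync context has already been initialized with a local store; "TodoItem" and its properties are placeholders.

```csharp
// Sketch only: assumes the sync context is already initialized with a local store.
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.MobileServices;
using Microsoft.WindowsAzure.MobileServices.Sync;

public class TodoItem
{
    public string Id { get; set; }
    public string Text { get; set; }
}

public class TodoRepository
{
    private readonly IMobileServiceSyncTable<TodoItem> _table;

    public TodoRepository(MobileServiceClient client)
    {
        _table = client.GetSyncTable<TodoItem>();
    }

    public async Task<TodoItem> AddAsync(string text)
    {
        // Generate the Id on the client so the record is complete locally;
        // the server accepts a client-supplied Id on the eventual push.
        var item = new TodoItem { Id = Guid.NewGuid().ToString(), Text = text };
        await _table.InsertAsync(item);   // queued in the operations table until PushAsync
        return item;
    }

    public Task<List<TodoItem>> GetAllLocalAsync()
    {
        // Reads come from the local store, so items inserted while offline
        // show up immediately, without a push/pull round-trip.
        return _table.ToListAsync();
    }
}
```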
