I have an existing Orchard CMS site.
I simply want to create a copy of the site, and I have carried out the following steps:
Copied all items in the filesystem to a new directory.
Changed the DataPrefix value in App_Data\Sites\Default\Settings.txt to reflect the new table prefix.
Created duplicate DB tables with a different prefix (ecc to hcc).
Copied all the data from the initial DB tables to the new DB tables, including identity columns.
Changed the BaseUrl in the DB table hcc_Settings_SiteSettings2PartRecord.
Set things up in IIS using the same application pool.
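The duplicate-and-copy step can be sketched in T-SQL. This is just one table (the ecc/hcc prefixes and the Settings_SiteSettings2PartRecord name come from the question); you would repeat it for every prefixed Orchard table:

```sql
-- Sketch for a single table; repeat for each ecc_* table.
-- SELECT ... INTO creates the new table with the same columns (including the
-- identity property) and copies all rows, preserving identity values.
SELECT *
INTO   hcc_Settings_SiteSettings2PartRecord
FROM   ecc_Settings_SiteSettings2PartRecord;
```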
I simply get a request timeout error when browsing to the second website:
HttpException (0x80004005): Request timed out
Delete cache.dat and Sites/Default/mappings.bin under App_Data.
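For reference, the cleanup can be scripted. A POSIX-shell sketch (the SITE_ROOT path is an assumption; point it at the copied site's root, or do the equivalent deletes in Explorer/PowerShell on Windows):

```shell
# Root folder of the copied Orchard site (hypothetical default; override as needed)
SITE_ROOT="${SITE_ROOT:-/var/www/hcc}"

# Orchard caches tenant settings and the NHibernate mappings in these files.
# Copies carried over from the original site are stale and can hang the new
# site, so remove them and let Orchard regenerate both on the next request.
rm -f "$SITE_ROOT/App_Data/cache.dat"
rm -f "$SITE_ROOT/App_Data/Sites/Default/mappings.bin"
```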
I have a service builder module whose table definitions need to be changed. In my case I've modified the portlet-model-hints.xml file in the service's /src/main/resources directory to increase the length of a String field from 75 to a higher number. When I run blade gw cleanServiceBuilder, the old tables are dropped. When I then run blade gw buildService and then deploy the module with blade deploy, the new sql scripts are not executed (or something similar -- I can't find the new tables in my database). Has anyone else had this problem?
It can be fixed by manually deleting some rows in the servicecomponent and release_ tables. In particular, after cleaning the service builder, the servicecomponent table will still have a row with the service's buildNamespace and buildNumber. In the release_ table there will be a row with the servletContextName and schemaVersion of the module in question. These two rows can be deleted by hand, and the next deploy will create the new tables.
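As a sketch, the two deletes look like this in SQL. The names my.service.namespace and my.custom.service are placeholders; use the actual buildNamespace and servletContextName of your module (note that these are usually two different values):

```sql
-- Remove the stale Service Builder bookkeeping rows so the next deploy
-- re-runs the table-creation scripts for this module.
DELETE FROM servicecomponent WHERE buildnamespace = 'my.service.namespace';
DELETE FROM release_ WHERE servletcontextname = 'my.custom.service';
```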
How do I delete a single record from the local store on multiple phones? The initiating phone correctly deletes the record from its local store (sqlite) and Azure (SQL Server).
However, I incorrectly assumed that the other phones would delete the record from their local stores after performing a pull; they don't. Instead, the should-be-deleted record becomes orphaned until its entire table is purged and then re-pulled. That seems like overkill for deleting a single record. How do I easily delete local-store records across multiple devices?
Use 'soft-delete' on the server.
In a node-based server, set table.softDelete = true; in the table definition.
In an ASP.NET based server, set enableSoftDelete: true in the constructor of the EntityDomainManager.
This adds a Deleted column to the model. When the client pulls, any records that are marked deleted will be pulled down as well, and the client will delete the records from the SQLite store. When a record is deleted on the client, it is marked deleted instead.
On the server, you will need to clean up the marked-deleted records on a regular basis.
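That periodic cleanup can be as simple as a scheduled SQL job. A sketch assuming a hypothetical TodoItems table and a 30-day retention window (pick a window longer than the longest interval between client syncs):

```sql
-- Purge tombstones older than 30 days; by then, every client that syncs
-- regularly has already pulled the Deleted flag and removed its local copy.
DELETE FROM TodoItems
WHERE  Deleted = 1
AND    UpdatedAt < DATEADD(DAY, -30, SYSUTCDATETIME());
```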
Our mobile client app uses IMobileServiceSyncTable for data storage and handling syncing between the client and the server.
A behavior we've seen is that, by default, you can't retrieve an entry added to the table while the client is offline. Only after the client table is synced with the server (we do an explicit PushAsync and then a PullAsync) can those entries be retrieved.
Does anyone know of a way to change this behavior so that the mobile client can retrieve entries added while offline?
Our current solution:
1. Check whether the new entry was pushed to the server.
2. If not, save the entry to a separate local table.
3. When showing the list for the table, pull from both tables: the sync table and the regular local table.
4. Compare the entries from the regular local table with the entries from the sync table for duplicates.
5. Remove the duplicates.
6. Join the lists, order them, and show the result to the user.
Thanks!
This should definitely not be happening (and it isn't in my simple tests). I suspect there is a problem with the Id field - perhaps you are generating it and there are conflicts?
If you can open a GitHub Issue on https://github.com/azure/azure-mobile-apps-net-client/issues and share some of your code (via a test repository), we can perhaps debug further.
One idea - rather than let the server generate an Id, generate an Id using Guid.NewGuid().ToString(). The server will then accept this as a new Id.
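A minimal sketch of that idea, assuming a hypothetical TodoItem model and an IMobileServiceSyncTable&lt;TodoItem&gt; named todoTable (inside an async method):

```csharp
// Generate the Id on the client so the record is fully usable offline;
// the server will accept a client-supplied Guid string as the Id.
var item = new TodoItem
{
    Id = Guid.NewGuid().ToString(),
    Text = "Created while offline"
};
await todoTable.InsertAsync(item); // queued locally until the next PushAsync
```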
Would like to know what would be the best way to handle the following scenario.
I have an Azure cloud service that uses an Azure Storage table to look up data for incoming requests. The data in the table is generated offline periodically (once a week).
When new data is generated offline, I would need to upload it into a separate table, make a config change (the table name) so the service picks up data from the new table, and re-deploy the service. (Every time the data changes, I change the table name, which is stored as a constant in my code, and re-deploy.)
The other way would be to keep a configuration parameter for my Azure web role that specifies the name of the table holding the current production data. Then, within the service, I read the config variable for every request, get a reference to the table, and fetch the data from there.
Is the second approach OK, or would it have a performance hit because I read the config and create a table client on every request that comes to the service? (The SLA for my service is under 2 seconds.)
To answer your question, the 2nd approach is definitely better than the 1st. I don't think you will take a performance hit: the config settings are cached on first read (I read this in one of the threads here), and creating a table client does not add network overhead, because until you execute an operation against it the object just sits in memory. One possibility would be to read the value from config into a static variable; when you change the config setting, handle the role-environment-changed event and update the static variable with the new value from the config file.
A 3rd alternative could be to soft-code the table name in another table and have your application read it from there. You could update it as part of your upload process: first upload the data, then update this table with the name of the table the data was uploaded to.
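A sketch of the 2nd approach with the caching suggestion, assuming a classic cloud-service web role and a hypothetical setting name "CurrentDataTable" (RoleEnvironment comes from Microsoft.WindowsAzure.ServiceRuntime, the table types from the WindowsAzure.Storage client library):

```csharp
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

public static class CurrentTableConfig
{
    // Cached table name; each request pays only for a field read.
    private static string _tableName =
        RoleEnvironment.GetConfigurationSettingValue("CurrentDataTable");

    // Call once at role start (e.g. in OnStart) so the cache stays fresh
    // after an in-place configuration change.
    public static void HookConfigurationChanges()
    {
        RoleEnvironment.Changed += (sender, e) =>
            _tableName = RoleEnvironment.GetConfigurationSettingValue("CurrentDataTable");
    }

    // Creating the client and table reference is cheap: no network call is
    // made until an operation is actually executed against the table.
    public static CloudTable GetCurrentTable(CloudStorageAccount account)
    {
        return account.CreateCloudTableClient().GetTableReference(_tableName);
    }
}
```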
I have backed up the Kentico database and website from one of our live servers, restored them on our dev server, and configured the website in IIS.
When I navigate to the website, it currently asks for a new installation, whereas it should just show me the existing website.
How do I get it to show the website?
EDIT
The following error occurs when going ahead and creating a new instance with the restored database:
Restore the backed-up database and add the database connection string to the application's web.config. Simple as that :)
Who or what is set as the DB object owner/schema in the database? Does this match the setting in Site Manager (or the CMS_SettingsKey table)? I would make sure these two match.
Another option is that the connection string was not initialized by .NET - I would make a dummy change to the web.config file to force an app restart, and/or clear the .NET cache.