Liferay Service Builder clean and rebuild

I have a Service Builder module whose table definitions need to be changed. In my case I've modified the portlet-model-hints.xml file in the service's /src/main/resources directory to increase the length of a String field from 75 to a higher number. When I run blade gw cleanServiceBuilder, the old tables are dropped. When I then run blade gw buildService and deploy the module with blade deploy, the new SQL scripts are not executed (or something similar -- I can't find the new tables in my database). Has anyone else had this problem?

It can be fixed by manually deleting some rows in the servicecomponent and release_ tables. In particular, after cleaning the service builder, the servicecomponent table will still have a row with the service's buildNamespace and buildNumber. In the release_ table there will be a row with the servletContextName and schemaVersion of the module in question. These two rows can be deleted by hand, and the next deploy will create the new tables.
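In SQL terms the cleanup looks something like this (a sketch only: the WHERE values are placeholders for your own module's buildNamespace and servletContextName, so check the rows with a SELECT before deleting):

-- Find the leftover rows for the module in question (placeholder values).
SELECT * FROM servicecomponent WHERE buildNamespace = 'MyModule';
SELECT * FROM release_ WHERE servletContextName = 'com.example.my.module.service';

-- Then remove them so the next deploy recreates the tables.
DELETE FROM servicecomponent WHERE buildNamespace = 'MyModule';
DELETE FROM release_ WHERE servletContextName = 'com.example.my.module.service';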

Related

Is there a way to remove blob storage credential from azure database to allow bacpac local restore?

I am trying to export a bacpac from Azure and restore it locally on SQLEXPRESS 2016. When I try to restore it though I get the following errors from the Import Data-tier Application wizard in SSMS:
Could not import package.
Warning SQL72012: The object [TestBacPacDB_Data] exists in the target, but it will not be dropped even though you selected the 'Generate drop statements for objects that are in the target database but that are not in the source' check box.
Warning SQL72012: The object [TestBacPacDB_Log] exists in the target, but it will not be dropped even though you selected the 'Generate drop statements for objects that are in the target database but that are not in the source' check box.
Error SQL72014: .Net SqlClient Data Provider: Msg 33161, Level 15, State 1, Line 1 Database master keys without password are not supported in this version of SQL Server.
Error SQL72045: Script execution error. The executed script:
CREATE MASTER KEY;
After some digging I found that a credential and master key have been added to the database. The credential name references a blob storage container, so I'm thinking maybe auditing was set up at some point with the container as an external resource or something similar.
I would like to delete this credential so I can restore the database locally, but the database throws an error stating that it is in use. I've tried disabling the logging in Azure, but the credential still can't be deleted.
I know sometimes it takes time for Azure to shut down resources, so maybe that's the cause, but I was wondering if anyone else has had a similar problem.
I'm trying to avoid having to set a password for the master key, since I don't care about the credential locally as in this question: SSMS 2016 Error Importing Azure SQL v12 bacpac: master keys without password not supported
Ultimately, we ended up creating a master key. In order to restore our databases locally in this way, we create the database by hand first in SSMS, then add a master key to it. This allows the data import to work correctly.
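Concretely, the manual part boils down to something like this (a sketch: the database name matches the one in the import warnings above, and the password is a placeholder you choose, since it only needs to satisfy the local server):

-- Create the target database by hand, then give it a password-protected
-- master key before running the import.
CREATE DATABASE TestBacPacDB;
GO
USE TestBacPacDB;
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<choose a strong password>';
GO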
I had exactly the same problem and tried a myriad of potential fixes found all over the place. Most related to rekeying the system, making a copy first, etc., and absolutely nothing worked.
As insane as this is, the only way I could finally get around it was manually editing the internal structure:
Take the bacpac from the original source or a copy, anywhere
Rename to .zip, and uncompress the folder structure
Edit "model.xml", search for anything to do with "master key" and / or "shared access signature" and delete the corresponding nodes
Calculate the SHA-256 checksum of the now-modified model.xml (see the sketch after this list)
Replace the checksum at the bottom of "Origin.xml"
Rezip all the files and rename back to xxx.bacpac
Import onto a local system as you normally would
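If it helps, the checksum step can be done with a few lines of C# (any SHA-256 tool works); the file path below is illustrative, and the printed hex string replaces the checksum recorded for model.xml in Origin.xml:

using System;
using System.IO;
using System.Security.Cryptography;

class ModelChecksum
{
    static void Main()
    {
        // Hash the edited model.xml and print it as hex; match the formatting
        // of the existing checksum entry in Origin.xml when pasting it in.
        using (var sha = SHA256.Create())
        using (var stream = File.OpenRead(@"C:\temp\bacpac\model.xml"))
        {
            var hash = sha.ComputeHash(stream);
            Console.WriteLine(BitConverter.ToString(hash).Replace("-", ""));
        }
    }
}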

Retrieving to-be-pushed entries in IMobileServiceSyncTable while offline

Our mobile client app uses IMobileServiceSyncTable for data storage and handling syncing between the client and the server.
A behavior we've seen is that, by default, you can't retrieve entries added to the table while the client is offline. Only once the client table is synced with the server (we do an explicit PushAsync and then a PullAsync) can those entries be retrieved.
Anyone knows of a way to change this behavior so that the mobile client can retrieve the entries added while offline?
Our current solution:
Check if the new entry was pushed to the server
If not, save the entry to a separate local table
When showing the list for the table, we pull from both tables: sync table and regular local table.
Compare the entries from the regular local table to the entries from the sync table for duplicates.
Remove duplicates
Join the lists, order them, and show the result to the user (a rough sketch of this merge follows).
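Roughly, the merge looks like this (a sketch only; the TodoItem model and property names are illustrative, not our real entity):

using System.Collections.Generic;
using System.Linq;

// Illustrative model; our actual entity differs.
public class TodoItem
{
    public string Id { get; set; }
    public string Text { get; set; }
}

public static class OfflineListMerger
{
    // Merge entries read from the sync table with entries read from our separate
    // local table: sync-table entries win, local-only (not-yet-pushed) ones fill the gaps.
    public static List<TodoItem> Merge(IEnumerable<TodoItem> syncItems,
                                       IEnumerable<TodoItem> localItems)
    {
        var syncList = syncItems.ToList();
        var syncIds = new HashSet<string>(syncList.Select(i => i.Id));

        return syncList
            .Concat(localItems.Where(i => !syncIds.Contains(i.Id)))   // drop duplicates by Id
            .OrderBy(i => i.Text)                                      // order however the UI needs
            .ToList();
    }
}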
Thanks!
This should definitely not be happening (and it isn't in my simple tests). I suspect there is a problem with the Id field - perhaps you are generating it and there are conflicts?
If you can open a GitHub Issue on https://github.com/azure/azure-mobile-apps-net-client/issues and share some of your code (via a test repository), we can perhaps debug further.
One idea: rather than letting the server generate an Id, generate one on the client using Guid.NewGuid().ToString(). The server will then accept this as a new Id.
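A minimal sketch of that idea, reusing the illustrative TodoItem model from the question's sketch above (the client parameter is an already-configured MobileServiceClient; none of these names are prescribed by the SDK):

using System;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.MobileServices;
using Microsoft.WindowsAzure.MobileServices.Sync;

public static class OfflineInsertExample
{
    public static async Task InsertWhileOfflineAsync(MobileServiceClient client)
    {
        IMobileServiceSyncTable<TodoItem> table = client.GetSyncTable<TodoItem>();

        var item = new TodoItem
        {
            Id = Guid.NewGuid().ToString(),   // client-generated Id; the server treats it as a new record
            Text = "Created while offline"
        };

        // The item lands in the local store immediately and is visible to local
        // queries, even before the next PushAsync/PullAsync.
        await table.InsertAsync(item);
    }
}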

Azure Mobile Services and Code First Migrations update

I have created an Azure Mobile Service project. From the beginning of the project I created my entities and enabled Code First Migrations. During the development process I never had any problem creating new entities, modifying existing ones and updating the database through data migrations. All sweet and nice.
I published my solution to Azure Mobile Services. My database schema was created automatically and everything was playing nice.
After a few days I needed to update a field in a table, so I updated the entity locally and ran the service locally. My local copy of the database was updated with the new addition. I uploaded the service to Azure, expecting the online database to be updated as well, but I got this error:
The model backing the 'xxxxx' context has changed since the database was created. Consider using Code First Migrations to update the database.
That is strange, since Code First Migrations are already enabled; my database was initially created using them. After many days of trying almost everything, I deleted the schema of the online database. I ran the service online again and it recreated the database schema, including my latest change. So it seems the Azure Mobile Service has no problem creating the schema from scratch, but it cannot figure out how to apply schema updates.
I do not recommend this as an answer (so please don't accept it as such), but I ended up so frustrated with the code-first migrations (which, like you, I just could not get to work) that I did this as a work-around while I await enlightenment.
1) Update the data model
For me this was simply adding this line to my Item class:
public bool IsPublic { get; set; }
2) Manually update the SQL Server database
You'll find the connection details in the publish profile you can download from the mobile service's dashboard in the Azure Portal. My T-SQL command was simply:
ALTER TABLE vcollectapi.Items
ADD IsPublic BIT NOT NULL DEFAULT(0)
3) Stop the service from checking whether the model backing the context has changed since the last successful migration
There are several answers on how to do this; I followed this one and added the following static constructor to my data context class, VCollectAPIContext:
static VCollectAPIContext()
{
    // Turn off the initializer so EF no longer checks whether the model
    // matches the database (the schema is now managed by hand).
    Database.SetInitializer<VCollectAPIContext>(null);
}
Now my service is back up-and-running and my data remained intact.

Switching production azure tables powering cloud service

I would like to know the best way to handle the following scenario.
I have an Azure cloud service that uses an Azure Storage table to look up data for incoming requests. The data in the table is generated offline periodically (once a week).
When new data is generated offline, I would need to upload it into a separate table, make config changes to the service so it picks up data from the new table, and re-deploy the service. (Currently, every time the data changes, I change the table name, which is stored as a constant in my code, and re-deploy.)
The other way would be to keep a configuration parameter for my Azure web role that specifies the name of the table holding the current production data. Then, within the service, I would read the config variable on every request, get a reference to the table, and fetch data from there.
Is the second approach OK, or would it take a performance hit because I read the config and create a table client on every request that comes to the service? (The SLA for my service is under 2 seconds.)
To answer your question, the 2nd approach is definitely better than the 1st one. I don't think you will take a performance hit, because the config settings are cached on first read (I read that in one of the threads here), and creating a table client adds no network overhead: unless you execute some methods on it, the object just sits in memory. One possibility would be to read the value from the config file and store it in a static variable. When you change the config setting, capture the role environment changing event and update the static variable with the new value from the config file.
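A sketch of that idea, assuming made-up setting names LookupTableName and StorageConnectionString (I use the Changed event so the new value is read after the configuration update has been applied):

using System;
using Microsoft.Azure;                                // CloudConfigurationManager
using Microsoft.WindowsAzure.ServiceRuntime;          // RoleEnvironment
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

public static class LookupTableProvider
{
    // Cached once; refreshed only when the cloud service configuration changes.
    private static string _tableName = CloudConfigurationManager.GetSetting("LookupTableName");

    static LookupTableProvider()
    {
        RoleEnvironment.Changed += (s, e) =>
            _tableName = CloudConfigurationManager.GetSetting("LookupTableName");
    }

    public static CloudTable GetTable()
    {
        var account = CloudStorageAccount.Parse(
            CloudConfigurationManager.GetSetting("StorageConnectionString"));

        // Creating the client and table reference is cheap: no network call is made
        // until an operation is actually executed against the table.
        return account.CreateCloudTableClient().GetTableReference(_tableName);
    }
}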
A 3rd alternative could be to soft-code the table name in another table and have your application read it from there. You could update it as part of your upload process: first upload the data, then update this table with the name of the table the data was uploaded to.

Orchard CMS Create a copy of existing site on the same server

I have an existing Orchard CMS site.
I would simply like to create a copy of the site, and I have carried out the following:
Copied all items in the filesystem to a new directory
Changed the data table prefix setting within App_Data\Sites\Default\Settings.txt to reflect the new table prefix
Created duplicate DB tables with a different prefix (ecc to hcc)
Copied all the data from initial DB tables to new DB tables including identity columns.
Changed the baseUrl in the DB table hcc_Settings_SiteSettings2PartRecord (see the SQL sketch after this list)
Set things up in IIS to use the same application pool
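For reference, the baseUrl change above was just a one-row update along these lines (a sketch: column name and URL are illustrative, so verify against your own schema first):

-- Point the copied site's settings at the new address.
-- The settings table usually holds a single row; adjust if yours differs.
UPDATE hcc_Settings_SiteSettings2PartRecord
SET BaseUrl = 'http://copy.example.com';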
I simply get a request timeout error when browsing to the second website:
HttpException (0x80004005): Request timed out
Delete cache.dat and sites/default/mappings.bin under app_data.
