Currently I use Liferay 6; my data is stored in Dynamic Data Lists (DDL) and the stored format is XML, while the data format in Liferay 7 is JSON. How can I do this upgrade?
As long as your data is persisted in your database, Liferay DXP's (7.0) upgrade client (the DXP upgrade tool) should transform your data, or otherwise take care that the persisted data remains usable with DXP.
You can find more information about the portal upgrade on the Liferay Developer Network: UPGRADING TO LIFERAY PORTAL. Please also note the upgrade preparation steps.
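If it helps, on 7.0 the database upgrade is usually run with the upgrade client shipped in the bundle's tools folder; a rough sketch (the folder and script names below are from a typical 7.0 bundle and may differ in your distribution, so verify against yours):

    # from the Liferay 7 / DXP bundle root (path is an example)
    cd tools/portal-tools-db-upgrade-client
    ./db_upgrade.sh    # prompts for database and portal settings, then runs the upgrade processes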
As per the release notes, there is native support for Hive View in HDInsight 4.0. I don't get this; it's there in HDInsight 3.6 as well, so why is it called out explicitly in the release notes for HDI 4.0? In HDI 3.6 it also appears natively: as soon as we spin up a cluster and start Ambari, it's already there and accessible. So how is it different in HDI 4.0?
Update: Previously, Hive View was not built in for 4.0 clusters, and the product team has now added it back. That is why it is explicitly called out: "Hive View is supported natively for HDInsight 4.0 clusters starting from this release. It doesn't apply to existing clusters. You need to drop and recreate the cluster to get the built-in Hive View."
Thanks for bringing this to our attention. This question seems like feedback on the documentation rather than a specific question about the product.
To get your feedback addressed, I would request you to submit feedback on the document.
How do I submit feedback on an Azure document?
Step 1: Open the document you are referring to.
Step 2: Scroll down to the bottom of the page and click on "This page".
Step 3: Enter your feedback title and describe the issue.
I am working in an environment that contains many Liferay newbies along with experts. I want to put some configuration in place to prevent anyone from modifying the structure of the Liferay database; for example, one developer started a Liferay 6.2 server while connecting to a Liferay 6.1 database, causing the database to get corrupted. I know that I can't make the Liferay users read-only, because any change in the portal is reflected in the database, but I want to put some limitations in place to prevent a scenario like the above. Is any related configuration available?
The regular permissions required for accessing Liferay's database are SELECT, INSERT, UPDATE and DELETE. Only when you're developing new plugins do you need CREATE TABLE, ALTER TABLE, CREATE INDEX and similar DDL permissions on the database you're developing against. Just don't grant the full permissions.
Upgrade routines (which run when you have 6.2 code running on 6.1 structures) require the DDL permissions and will fail without them. And, of course, you can also remove / unconfigure the upgrade routines.
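As a rough illustration of that setup, assuming MySQL and a schema named lportal (user name, host and password are placeholders), the application/developer account would only get the DML privileges:

    -- placeholder user and schema names; adapt to your environment
    CREATE USER 'liferay_app'@'%' IDENTIFIED BY 'changeme';
    GRANT SELECT, INSERT, UPDATE, DELETE ON lportal.* TO 'liferay_app'@'%';
    -- no CREATE, ALTER, DROP or INDEX privileges, so schema changes
    -- (e.g. 6.2 upgrade routines against a 6.1 database) fail instead of corrupting it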
I am new to IBM Cognos. I have created a project using DB2 as a data source. When I added some extra rows to my table, they are not reflected in my project. Is there any way to automatically update the database data in Cognos Report Studio?
Please tell me the solution.
If it's rows you are adding to your query subject, they will automatically show up through your Framework Manager metadata. You can check this by using the Test tab.
Let's say I'm adding data to different portlets inside of the Liferay Portal. Where is all this data saved?
Most of the out-of-the-box portlets that Liferay ships with, like Blogs, Forums, Wiki and Web Content, store their data in the Liferay database. You can see the tables and the actual data properly if you have a production-ready database such as MySQL or Oracle configured with Liferay.
Liferay by default ships with the Hypersonic (HSQL) database for quick demo purposes. It saves all the information in files; all the Hypersonic-related files are stored in the "data/hsql" folder inside wherever you have extracted Liferay. You can view the tables using DbVisualizer if you want to see the Hypersonic data.
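For reference, a DbVisualizer (or any JDBC client) connection to that embedded database typically uses a file-based HSQLDB URL along these lines; the lportal database name matches Liferay's default portal.properties, but double-check yours:

    jdbc:hsqldb:file:/path/to/liferay-home/data/hsql/lportal

Note that the portal should be shut down first, since the file-based HSQL database normally allows only one process at a time.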
You can also create your own custom portlets. Where their data is stored is completely up to you: you can save it in Liferay's database if you want, but that is not mandatory. It's up to you where you want to persist data for your custom portlets.
I have a Liferay portal that was configured to use filesystem persistence for Jackrabbit.
It seems that this persistence mode creates a lot of files on the filesystem (so far something like 113,000), and I'm slowly reaching the file count quota of the server.
I would therefore like to switch to database persistence. I know how to configure it, but I don't know how to migrate the existing content.
Exporting and importing the various libraries (documents, images, etc.) sounds like a lot of work and very error-prone, especially because it's a multi-homed deployment. Plus, I don't know whether it will recreate the exact same URLs for the documents, which is important to me.
Short update:
I managed to upgrade to Liferay 6. There is, however, no way to migrate the Jackrabbit data from the filesystem to the database from within Liferay; what the Data Migration panel offers is migration from the JCR hook to another persistence hook.
My initial issue was not to have the data in a database but to reduce the number of files on the filesystem (quota limit), so I then switched to the FileSystemHook.
Here are the file counts (find . | wc -l):
JCRHook: 107566
FileSystemHook: 2810.
I don't know why Jackrabbit creates so many files...
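In case it helps others, the hook switch itself is done in portal-ext.properties; a minimal sketch assuming Liferay 6.0-style property names (6.1 and later renamed these to dl.store.impl and Store classes, so check the portal.properties of your exact version):

    # select the plain filesystem hook instead of the Jackrabbit JCR hook
    dl.hook.impl=com.liferay.documentlibrary.util.FileSystemHook
    # root folder for the stored files (assumed default; adjust to taste)
    dl.hook.file.system.root.dir=${liferay.home}/data/document_library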
In Liferay 6, there is a new dedicated page in the portal administration that is intended to facilitate migrations like that. You have to log in as an administrator (omniadmin if you have multiple portal instances in your server) and go to the Control Panel.
In the Server Administration panel, click on the Data Migration menu and you will be offered the option to migrate from the file system to the database.
It appears that you are not yet on Liferay 6 (GlassFish Web Space Server is based on Liferay 5.2), so there are several options:
upgrade the portal itself from 5.x to 6.0.5, as explained in the Liferay Wiki, and then use the migration page;
stay with your version, and create dedicated classes inspired by the ones provided by Liferay in version 6;
export the community pages (Liferay ARchive), create a new portal with DB persistence and import the pages and their content.
The migration would be my pick, either with the whole portal (but chances are that it's not something on your roadmap) or with ad hoc migration classes.
Arnaud
There are several ways to migrate; most of them are documented in the Jackrabbit Wiki:
Export to XML may not work for large repositories because it uses too much memory (you will need to try). I have never used the other migration tools, so I can't comment on them.
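For completeness, the XML route is just the standard JCR system-view export; a minimal sketch using plain Jackrabbit (the repository.xml location, repository home directory and credentials are placeholders, and as said above the export may not fit in memory for a large repository):

    import java.io.FileOutputStream;
    import javax.jcr.Session;
    import javax.jcr.SimpleCredentials;
    import org.apache.jackrabbit.core.TransientRepository;

    public class JcrExport {
        public static void main(String[] args) throws Exception {
            // Placeholder locations: point these at the existing repository config and home directory.
            TransientRepository repository =
                new TransientRepository("repository.xml", "/path/to/jackrabbit-home");
            Session session =
                repository.login(new SimpleCredentials("admin", "admin".toCharArray()));
            try (FileOutputStream out = new FileOutputStream("repository-export.xml")) {
                // System view export of the whole workspace:
                // skipBinary=false keeps file contents, noRecurse=false exports the entire subtree.
                session.exportSystemView("/", out, false, false);
            } finally {
                session.logout();
            }
        }
    }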