Hive View native support for HDInsight 4.0 - azure

As per the release notes, there is native support for Hive View in HDInsight 4.0. I don't get this -- it's there in HDInsight 3.6 as well, so why is it called out explicitly in the release notes for HDI 4.0? In HDI 3.6 it also appears natively: as soon as we spin up a cluster and start Ambari, it's already there and very much accessible. So how is it different in HDI 4.0?

Update: Previously, Hive View was not built in for 4.0 clusters, and the product team has now added it back. That is why it is explicitly called out: "Hive View is supported natively for HDInsight 4.0 clusters starting from this release. It doesn't apply to existing clusters. You need drop and recreate the cluster to get the built-in Hive View."
Thanks for bringing this to our attention. This question seems like feedback on the document rather than a specific question on the product.
To get your feedback addressed, I would request that you submit feedback on the document.
How to submit Azure document feedback:
Step 1: Open the document you are referring to.
Step 2: Scroll down to the bottom of the page and click on "This page".
Step 3: Enter your feedback title and describe the issue.

Related

Databricks Feature Store and Unity Catalog

I created a Databricks feature table, but saw that by default it went under hive_metastore. I was expecting to see it under the Unity Catalog I have created. Is the Feature Store not integrated with Unity Catalog yet?
Right now (November 2022), the Feature Store doesn't support Unity Catalog. This integration is expected early next year, but it's hard to be more precise right now. You can reach out to your Databricks solution architect or customer success engineer for more information about expected dates.

How to migrate data from Liferay 6 to Liferay 7

Currently I use Liferay 6; my data is stored in DDL and the data store format is XML. Meanwhile, the data format in Liferay 7 is JSON. How can I do this upgrade?
When your data is persisted in your DB, the Liferay DXP (7.0) upgrade client program (DXP Upgrade) should transform your data, or take care that the persisted data is still usable with DXP.
You can find more information about the portal upgrade on the Developer Network: UPGRADING TO LIFERAY PORTAL. Please also note the upgrade preparation steps.

Can I extend an Oracle ADF (or any Java EE) war/ear without having access to the source?

There is a packaged application created in Oracle ADF (let's generalise and say any Java EE framework) that I would like to customise/extend. I want to make changes such as adding a new JSF page, or modifying an existing JSF page and changing the data that appears on it.
I do not have the source code, just the war/ear file. Can I import this into JDeveloper for ADF (or Eclipse/NetBeans/IntelliJ in the case of other EE frameworks) and create new objects extending the jar files in there, without having the source code?
You can run JDeveloper in the Customization Developer role, open your EAR file, go to your JSF/JSFF pages, and put customisations on top of them. You will need to deploy these as a MAR file. A similar concept also applies to Oracle WebCenter and its task flow customisations. Note that for this feature, seeded customisation must be enabled in your deployment profile.
Read more here: Customization
Also, note that how customisable the ADF app is depends on how the app was written in the first place. Use of MDS enables a higher degree of customisability:
https://docs.oracle.com/middleware/1212/adf/ADFFD/customize.htm#ADFFD2085

What is the meaning of Morpheus at the portal level and skin level?

Can anyone explain the use of the Morpheus portal and skin in Sakai? What I mean to ask is: how do I enable them, and how do they differ from the neo skin and neo portal code in Sakai?
Thanks in advance.
Morpheus (Mobile Optimized Responsive Portal for Higher Education Using Sass) is the new responsive-design portal (the primary UI), which will be available in Sakai 11 (and is in a preview state for Sakai 10). The neo portal is the portal that was developed and released for Sakai 2.9. Before that, the portal was known as the Charon portal.
If you want to test it, you should really be using Sakai trunk (instead of 10.x). Either way, the following settings need to be set in your Sakai config file (typically sakai.properties). Make sure they are not duplicated in the file.
portal.templates=morpheus
portal.neoprefix=
skin.default=morpheus-default
Also, if that doesn't work, check that the morpheus directories exist under reference/library/src/webapp/skin. If they don't, you need to update to a newer version of the code.
Here are some slides:
http://www.slideshare.net/alienresident/sakai-meet-morpheus-slides
And a presentation on this topic from Apereo 2014
https://www.youtube.com/watch?v=BQyGgwUPeqU

How to switch jackrabbit persistence from filesystem to database?

I have a Liferay portal that was configured to use filesystem persistence for Jackrabbit.
This persistence mode creates a lot of files on the filesystem (so far around 113,000), and I'm slowly reaching the file count quota of the server.
I would then like to switch to database persistence. I know how to configure it, but I don't know how to migrate the existing content.
Exporting and importing the various libraries (documents, images, etc.) sounds like a lot of work and very error-prone, especially because it's a multi-homed deployment. Plus, I don't know whether it would recreate the exact same URLs for the documents, which is important to me.
Short update:
I managed to upgrade to Liferay 6. There is, however, no way to migrate the Jackrabbit data from the file system to the database from within Liferay; what the Data Migration panel offers is migration from the JCR hook to another persistence hook.
My initial issue was not to get the data into a database but to reduce the number of files on the filesystem (quota limit), so I switched to the FileSystemHook.
Here are the file counts (find . | wc -l):
JCRHook: 107,566
FileSystemHook: 2,810
I don't know why Jackrabbit creates so many files...
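For readers wanting to make the same switch: the Document Library hook is selected in portal-ext.properties. A minimal sketch for Liferay 6.0 follows; the property names match that version's portal.properties, but the root directory value is a placeholder, not taken from this thread:

```properties
# Sketch (Liferay 6.0): store Document Library files directly on disk
# via FileSystemHook instead of going through Jackrabbit (JCRHook).
dl.hook.impl=com.liferay.documentlibrary.util.FileSystemHook
# Placeholder path; point this at a writable directory.
dl.hook.file.system.root.dir=${liferay.home}/data/document_library
```

Existing content still has to be moved over with the Data Migration panel described below; changing the property alone only affects where new files are stored.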
In Liferay 6, there is a new dedicated page in the portal administration that is intended to facilitate migrations like that. You have to log in as an administrator (omniadmin if you have multiple portal instances on your server) and go to the Control Panel.
In the Server Administration panel, click on the Data Migration menu and you will be offered the option to migrate from the file system to the database.
It appears that you are not yet on Liferay 6 (GlassFish WebSpace Server is based on Liferay 5.2), so there are several options:
Upgrade the portal itself from 5.x to 6.0.5, as explained in the Liferay Wiki, and then use the migration page.
Stay with your version, and create dedicated classes inspired by the ones provided by Liferay in version 6.
Export the community pages (Liferay ARchive), create a new portal with DB persistence, and import the pages and their content.
The migration would be my pick, either with the whole portal (though chances are that's not on your roadmap) or with ad hoc migration classes.
Arnaud
There are several ways to migrate; most of them are documented in the Jackrabbit Wiki:
Export to XML may not work for large repositories because it uses too much memory (you need to try). I have never used the other migration tools, so I can't comment on them.
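For context, the database side of the switch lives in Jackrabbit's repository.xml: you replace the file-based PersistenceManager in each Workspace section (and in the Versioning section) with a database-backed one. A minimal sketch for MySQL follows; the class name matches Jackrabbit's bundled persistence managers, but the JDBC URL, credentials, and table prefix are placeholders, not values from this thread:

```xml
<!-- Sketch: database-backed persistence manager inside a Workspace
     element of repository.xml. URL, user, password, and prefix are
     placeholders to adapt to your environment. -->
<PersistenceManager class="org.apache.jackrabbit.core.persistence.bundle.MySqlPersistenceManager">
  <param name="driver" value="com.mysql.jdbc.Driver"/>
  <param name="url" value="jdbc:mysql://localhost:3306/jackrabbit"/>
  <param name="user" value="jcr_user"/>
  <param name="password" value="jcr_pass"/>
  <param name="schema" value="mysql"/>
  <param name="schemaObjectPrefix" value="J_"/>
</PersistenceManager>
```

This only changes where Jackrabbit stores content going forward; moving the existing content across is exactly the migration problem the question is about, which is why the wiki's migration tools (or an XML export/import, where the repository is small enough) are still needed.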
