I have a self-hosted gitlab-ce instance in a Docker environment.
As upgrading GitLab is a pain, I did not do it for a while, and now it looks far too complex.
Also, there is no good material on how to do this step by step.
Has anyone done this before? What process are you following?
The main advice from the official documentation is to avoid skipping a major version.
In your case, upgrade first to 13.11.
Then to 14.1.0.
That way, you can detect any specific settings to update from one version to the next.
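With the Docker installation, each hop boils down to re-running the container from a newer image tag and letting the migrations finish before the next hop. A minimal sketch, assuming the container is named gitlab, its config/logs/data live under /srv/gitlab, and the exact patch tags are checked against the registry first (the tags shown here are illustrative):

    docker stop gitlab && docker rm gitlab
    docker pull gitlab/gitlab-ce:13.11.7-ce.0
    docker run --detach --name gitlab \
      --hostname gitlab.example.com \
      --publish 443:443 --publish 80:80 --publish 22:22 \
      --volume /srv/gitlab/config:/etc/gitlab \
      --volume /srv/gitlab/logs:/var/log/gitlab \
      --volume /srv/gitlab/data:/var/opt/gitlab \
      gitlab/gitlab-ce:13.11.7-ce.0
    # wait for background migrations to finish, then sanity-check the instance
    docker exec -t gitlab gitlab-rake gitlab:check
    # repeat the same stop/pull/run cycle with the 14.x tag

Because the configuration and data live in the mounted volumes, only the image tag changes between steps.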
Related
I need to upgrade my Solr search from version 4.7 to 5.3.1.
I am working on a Linux platform.
Can you please provide the steps that I need to follow?
Thank you!
I do not think that there is a definitive step-by-step guide for the upgrade you are looking for.
I have a 4.3.x Solr running in a production environment and I am contemplating the leap to 5.x. However, it's clear that a lot has changed and that my upgrade is not going to be straightforward.
Also, other priorities in my project have kept me from doing the upgrade.
So the rest of this discussion is more a thought process than actual upgrade experience.
The last time I researched this, I found the links below useful:
https://support.lucidworks.com/hc/en-us/articles/203776523-How-to-upgrade-between-major-Solr-Versions
https://cwiki.apache.org/confluence/display/solr/Major+Changes+from+Solr+4+to+Solr+5
From the "Major Changes" link you will notice that a lot has changed.
Most notably, there are changes to the index format, deprecated SolrJ APIs have been removed, and Solr is now deployed as a standalone server instead of a WAR file.
So I would suggest that you ask yourself the following questions:
Is it possible to recreate the index from scratch? How much time does it take to create your complete index? If your index can be recreated quickly, then I would suggest that you do that with the 5.x engine on a separate machine, while your production environment is served by your existing server. Then plan a complete cut-over from 4.x to 5.x by simply pointing your production instance to the new Solr engine. This approach will give you a clean slate to start with and a brand new index (but with your existing data).
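As a rough sketch of that rebuild-on-a-separate-machine route, assuming Solr 5.3.1 is unpacked in /opt/solr on the new box and your documents can be re-fed from the original source (the collection name, configset path, and export files below are made up):

    cd /opt/solr
    bin/solr start
    bin/solr create -c mycollection -d /path/to/5x-compatible/configset
    # re-feed the documents from your source system, e.g. an exported JSON dump
    bin/post -c mycollection /path/to/export/*.json
    # sanity-check document counts and a few queries before switching production over
    curl "http://newhost:8983/solr/mycollection/select?q=*:*&rows=0"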
If you have a very large index (e.g. it takes several days to recreate it from scratch), then you may want to perform an upgrade of the live index. In that case I suggest that you consider the following.
The Solr upgrade guide mentions 4.10 as a 4.x version (so I assume it is easy to upgrade from any 4.x to 4.10) that has some features built in to help with the move to 5.x. So first upgrade to 4.10 and ensure that your index continues to work properly. Then use the guides mentioned above to upgrade to 5.x.
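A rough sketch of that intermediate hop, assuming a service-based Linux install; the service name, jar version, and index path are illustrative, and Lucene's IndexUpgrader rewrites the segments to the newer on-disk format:

    # stop whatever container/service is serving Solr 4.x
    sudo service solr stop
    # after installing 4.10.x, rewrite the index to the 4.10 segment format
    java -cp lucene-core-4.10.4.jar org.apache.lucene.index.IndexUpgrader \
        -verbose /var/solr/data/collection1/data/index
    # restart, verify queries against 4.10, then continue with the 5.x move per the guides above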
I've been trying to work through an issue that developed while attempting to upgrade our testing environment from Ubuntu 12.04 to 14.04 on AWS. Prior to this, the package-repository version of Solr was 1.4.1, which matched the 1.4.1 SolrJ client integrated with our application.
Changing the base AMI to the latest 14.04 image and running our default deploy caused Solr 3.6.2 to be installed. It appeared to accept our configs without issue; however, when our client tried to connect we received different errors:
The first was an unknown custom field, which we traced back to our deployment scripts not moving our schema.xml and solrconfig.xml to /etc/solr/conf/ but keeping them in the base directory.
We corrected this issue, and then ran into the following:
'exception: Invalid version or the data in not in 'javabin' format'
This was generated by a wrapper on top of SolrJ, but I'll be honest and say I know nothing about Solr, so this may be on our end. I've asked our dev team to look at two options:
1) enabling: 'server.setParser(new XMLResponseParser());'
which is the recommendation for backwards compatibility with an older client.
2) updating our client in the application to 3.6.2
- I know less about the requirements for this.
My fallback is to revert to 1.4.1, but it appears it hasn't been touched since 2011, which makes me hesitant.
Any thoughts / suggestions would be appreciated!
Thanks!
I think the best option is to keep the Solr and SolrJ versions the same.
I used Solr 1.4.1 for a long time and, while, as you said, most of it works with newer versions without any problem, a lot has actually changed since 1.4.*.
I did the same porting last year (from 1.4.1 to 3.6.1) and I can confirm that the second option is the right one: all the changes you must make in your client code are just "formal" and very quick.
Any workaround you apply so that different versions of SolrJ and Solr can communicate is just, as the word says, a "workaround", and it could lead to unexpected (hidden) side effects later.
I want to know how the changes made for the Liferay issues below can be applied to our portal.
For example, my issues are resolved in the following links:
https://issues.liferay.com/browse/LPS-14417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
https://issues.liferay.com/browse/LPS-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
I want to apply the changes from the above issues to my portal.
Please, can someone help me achieve this?
Thanks in Advance.
Regards,
Dinesh M K
You have to look at the resolution and the fix version stated.
The second issue says:
Fix Version/s: --Sprint 12/11, 6.1.0 CE RC1
and that means the issue was solved in all portal versions after 6.1.0 CE.
The first issue is no longer reproducible and is a duplicate of https://issues.liferay.com/browse/LPS-14220, which again was solved in 6.1.0 CE RC1.
In other words, if your portal is older than this version, you can't do anything. You'll have to upgrade to version 6.1.0 RC1 or later.
It seems that the fixed-in version is set a bit oddly in those issues - e.g. LPS-14220 is a subtask of a story in LPS-14414, which is stated as completed for 6.2.0. Careful: I did not fully read through all of the issues' descriptions and links, and I'm mixing what I read with the answer (and comment) that #yannicuLar gave.
Basically, this seems to be a new feature. The way to backport it to your installation is to get the source (e.g. clone the repository from https://github.com/liferay/liferay-portal or https://github.com/liferay/liferay-plugins), identify the relevant commits (they all contain the LPS number), and "backport" them, i.e. see if they can just be applied to your codebase or if they need manual adaptation because the surrounding code has changed.
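A rough sketch of that commit hunt with git, assuming you have a checkout of your own portal source to apply the patches to (the issue key matches the links above; everything else is illustrative):

    git clone https://github.com/liferay/liferay-portal.git
    cd liferay-portal
    # list every commit that references the issue
    git log --all --oneline --grep=LPS-14220
    # export a relevant commit as a patch and dry-run it against your codebase
    git format-patch -1 <commit-sha> --stdout > LPS-14220.patch
    cd /path/to/your/portal/source
    git apply --check LPS-14220.patch   # if this fails, manual adaptation is needed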
Some features are easier to backport than others; I can't tell how complex this one is.
In order to separate your changes from Liferay's core changes, you should try to implement this in plugins (or patched plugins) rather than changing the original code and recompiling. Most likely only the kaleo-web plugin is affected, but if there are core changes, you'd be better off having them isolated in plugins.
The simplest (and most future-proof) option is to wait a bit for 6.2 (RCs are already out) and upgrade your portal to that version. If you want to stay on a version that gets updates, you should do this soon anyway.
I'm going to upgrade my company's Subversion server from version 1.6 to 1.7. The server runs on Linux (Ubuntu, AFAIK).
I've read all of these:
Subversion 1.7 release notes
I've also read these posts:
subversion-client-version-confusion
how-to-upgrade-svn-server-from-1-6-to-1-7
Here and now, I know how to perform the upgrade itself. It's not a big deal. What concerns me most is the current hooks infrastructure. There are several scripts in Bash and Perl.
So far I've found no information about changes to the hooks infrastructure, but maybe there are some known issues I missed? Is there anything against the upgrade that I should know about?
PS: The "try and see what comes" method is absolutely unavailable. I'd like the upgrade to be as smooth as possible. Repository users shouldn't even notice any changes. I can't allow myself any failure in that matter.
The Subversion compatibility guarantees promise that your hook scripts are called in exactly the same way in 1.7 as in 1.6. In 1.7 (and future versions) more arguments can be passed to scripts, but the old arguments still match the old behavior. So if you created your scripts like the templates, to ignore "extra" arguments, you shouldn't see a difference.
Subversion 1.7 didn't change the repository format since 1.6, so you can even (accidentally) use svnlook from 1.6 to access the repository after upgrading.
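To illustrate the point about extra arguments, here is a minimal pre-commit hook sketch in the style of the shipped templates (the empty-log-message policy is just an example): it reads only the two arguments it knows about and inspects the transaction with svnlook, so any additional arguments passed by newer releases are simply ignored.

    #!/bin/sh
    # pre-commit hook: use only the first two arguments (REPOS, TXN);
    # extra arguments added by newer Subversion releases are ignored.
    REPOS="$1"
    TXN="$2"

    # example policy: reject commits with an empty log message
    LOG=$(svnlook log -t "$TXN" "$REPOS")
    if [ -z "$LOG" ]; then
        echo "Commit blocked: empty log message." >&2
        exit 1
    fi
    exit 0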
"Try and see what comes method is absolutely unavailable..."
Yes, the try and see what comes method is available. You build a copy of your Subversion 1.6 environment, make the Subversion 1.7 changes, and test until everything is correct.
I don't see how you can accomplish your goal of a quiet upgrade unless you copy and test.
I guess it depends on what you do with your hooks...
If your hooks are using svnlook, you should have no issues. If you're using an API (like the Python API), you're probably also okay as long as you're doing svnlook-style things.
Where you might start heading into problems is if you poked and prodded where you weren't supposed to poke and prod. For example, instead of using svnlook, you call svn. There are a couple of places where the parameters have changed. Also, if you did an svn checkout (an absolute no-no in a hook) and then looked in the .svn directories, you'll get a surprise. Follow the rules, color inside the lines, and your hooks won't have any issues.
I don't know of any issues from version 1.1 to version 1.7 that should affect well-behaved hooks, and I suspect that you will not have any issues as long as we are still in Subversion 1.x. When Subversion 2.x comes out, all bets are off.
Yes, there have been some changes in how hooks work. The start-commit hook has an extra field that wasn't there in versions 1.4 and earlier (the capabilities field), but nothing that would affect current hooks. And, in either Subversion 1.5 or 1.6, users can now set revision properties when doing a commit. These don't affect current hooks, but they might be features you want to incorporate into your current hooks.
The upgrade has been performed and succeeded. The Subversion server was updated without issues. The hooks were designed without any hacks or slashes, respecting the rules and common sense. It was risky but promising, and it came out profitable (checkouts are light-speed now).
Just for the sake of completeness: there was a subsequent centrally managed client upgrade. There were issues, although non-critical and predictable ones. After the transition from svn client 1.6 to 1.7.7 the working copy format changed. Every existing working copy had to be manually upgraded (or wiped out and checked out clean again).
The server upgrade is safe, though.
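The per-working-copy step mentioned above is a single command; the paths and repository URL below are illustrative:

    # convert an existing 1.6-format working copy to the 1.7 format
    svn upgrade /path/to/working-copy
    # or wipe it and check it out fresh instead
    svn checkout https://svn.example.com/repo/trunk /path/to/working-copy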
We have a Liferay portal running at a hosting company, and we want to bring it into our own infrastructure. So I've downloaded the excellent Bitnami stack and loaded it into our VMware server.
I have no experience with Liferay whatsoever; all I know is that it uses MySQL as its database. Are there any docs on how to do this?
Thanks!
Use Liferay's wiki:
5.0 to 5.1: http://www.liferay.com/community/wiki/-/wiki/Main/Upgrade+Instructions+from+5.0+to+5.1
5.1 to 5.2: http://www.liferay.com/community/wiki/-/wiki/Main/Upgrade+Instructions+from+5.1+to+5.2
I recommend doing a two-step upgrade, since a direct upgrade from 5.0 to 5.2 is more troublesome.
There have been reports that it takes some work to upgrade older versions to the latest and greatest, so you should be prepared for some effort.
That said, the way you should go is to back up the previous installation (e.g. all directories, database contents, etc.) and deploy that on your own server. This installation is then brought to the latest version by installing the latest release and pointing it at the data from the previous installation. During the first startup, Liferay will (given sufficient privileges on MySQL) update the database structure and everything else it needs. Keep your backup ready and test thoroughly that everything is upgraded the way you intended.
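A minimal sketch of that hand-over, assuming a MySQL database named lportal and a bundle installed under /opt/liferay; every path, user name, and password here is made up:

    # on the old host: dump the database and archive the data directory (document library etc.)
    mysqldump -u lportal_user -p lportal > lportal-backup.sql
    tar czf liferay-data.tar.gz /opt/liferay/data

    # on the new server: restore both
    mysql -u lportal_user -p lportal < lportal-backup.sql
    tar xzf liferay-data.tar.gz -C /

    # point the new (latest) bundle at the restored database via portal-ext.properties
    printf '%s\n' \
      'jdbc.default.driverClassName=com.mysql.jdbc.Driver' \
      'jdbc.default.url=jdbc:mysql://localhost/lportal?useUnicode=true&characterEncoding=UTF-8' \
      'jdbc.default.username=lportal_user' \
      'jdbc.default.password=secret' > /opt/liferay/portal-ext.properties
    # on the first startup Liferay then runs its upgrade processes against this schema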
Also, you need to keep an eye on your customized stuff - if you have portlets or other components that use the Liferay API, you might need to upgrade those manually to take changed APIs into account.
Theoretically, that should be it. I've heard of people having had some problems with this - but it all depends on your level of customization and how many Liferay features you use.
The Liferay folks intend to address this in the future with their EE offering, where you get better-defined upgrade paths and long-term support with minor upgrades to your environment, keeping APIs and database requirements stable. I'd hope that even upgrades between major versions will benefit from this, but I have not yet tried it.