I have installed a version (0.12.24) of Terraform which is later than the required version (0.12.17) specified in our configuration. How can I downgrade to that earlier version? My system is Linux Ubuntu 18.04.
Since you are on Linux, do the following in the terminal.
Remove the currently installed binary (which terraform shows where it lives):
sudo rm $(which terraform)
Install the previous version:
wget https://releases.hashicorp.com/terraform/1.3.4/terraform_1.3.4_linux_amd64.zip
unzip terraform_1.3.4_linux_amd64.zip
sudo mv terraform /usr/local/bin/terraform
terraform --version
That's it, my friend.
EDIT: I've assumed most readers are now on v1.3.5, so the previous version used in the example above is v1.3.4. Substitute whichever version you actually need.
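For the version the question actually asks for (0.12.17), the same steps would look like this (the URL just follows HashiCorp's standard releases.hashicorp.com naming pattern):
wget https://releases.hashicorp.com/terraform/0.12.17/terraform_0.12.17_linux_amd64.zip
unzip terraform_0.12.17_linux_amd64.zip
sudo mv terraform /usr/local/bin/terraform
terraform --version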
You could also check out Terraform Switcher (tfswitch) - this will allow you to switch between different versions easily.
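As a rough sketch (assuming the tfswitch binary is already installed and on your PATH), pinning the version from the question would be:
tfswitch 0.12.17
terraform --version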
First, download the latest package information using:
sudo apt-get update
The simplest way to downgrade is to use apt-get to install the required version - this will automatically perform a downgrade:
Show a list of available versions - sudo apt list -a terraform
terraform/xenial 0.13.5 amd64
terraform/xenial 0.13.4-2 amd64
... etc
or use sudo apt policy terraform to list available versions
Install the desired version:
sudo apt-get install terraform=0.14.5
Or, for a 'clean' approach, remove the existing version before installing the desired version:
sudo apt remove terraform
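Putting it together for the version asked about in the question (this assumes a 0.12.17 package is actually published in the apt repository you have configured; if it isn't, fall back to downloading the binary as in the other answers):
sudo apt-get update
apt list -a terraform
sudo apt-get install terraform=0.12.17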
There are other valid answers here. This may be useful if you have a situation, like I do, where you need multiple Terraform versions during a migration from an old version to a new version.
I use tfenv for that:
https://github.com/tfutils/tfenv
It provides a modified terraform script that does a lookup of the correct terraform executable based on a default or based on the closest .terraform-version file in the directory or parent directories. This allows us to use a version of Terraform 0.12 for our migrated stuff and keep Terraform 0.11 for our legacy stuff.
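As a minimal sketch of that workflow (tfenv install, tfenv use and the .terraform-version file are the standard tfenv mechanisms; the version is just the one from the question):
tfenv install 0.12.17
tfenv use 0.12.17
echo "0.12.17" > .terraform-version
terraform --version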
You shouldn't be installing Terraform on Ubuntu directly any more. Generally speaking, the industry has moved on to Docker now. You can install Docker like this:
sudo apt install -y curl
curl -LSs get.docker.com | sh
sudo groupadd docker
sudo usermod -aG docker $USER
Once installed you can run terraform like this:
docker run -v $PWD:/work -w /work -v ~/.aws:/root/.aws hashicorp/terraform:0.12.17 init
This assumes that your ~/.aws directory contains your AWS credentials. If not, you can leave that bind mount (-v ~/.aws:/root/.aws) out of the command and it will work with whatever credential scheme you choose to use. You can change the version of Terraform you are using with ease, without installing anything.
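The rest of the workflow follows the same pattern; for example, plan and apply against the same working directory would look like this (the ~/.aws mount is again optional, depending on how you supply credentials):
docker run -v $PWD:/work -w /work -v ~/.aws:/root/.aws hashicorp/terraform:0.12.17 plan
docker run -v $PWD:/work -w /work -v ~/.aws:/root/.aws hashicorp/terraform:0.12.17 apply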
There are significant benefits to this approach over the accepted answer. The first is ease of versioning. If you have installed Terraform using a package manager, you can either uninstall it and install the version you need, or play around with Linux alternatives (assuming your distro supports them and you are actually on Linux with a package manager at all -- you could be on Windows, having downloaded and run an installer). Of course, this might be a one-off, in which case you do it once and you're fine forever, but in my experience that is rarely the case: most teams are required to update versions because of security controls, and teams that aren't required to update regularly probably should be.
If this isn't a one-off, or you don't want to fiddle with versioning too much, you could just download the binary, as one comment on this post points out. It's easy enough to come up with a directory scheme for each version, or simply delete the one you're using and replace it completely. That may suit your use case well. Go to the appropriate website (I've forgotten which one -- HashiCorp's releases page or the GitHub repo's releases page; you can always search for it, though that takes time too, which is my point), find the right version and download it.
Or, you can just type docker run hashicorp/terraform:0.12.17 and the right version will be automagically pulled for you from a preconfigured online trusted repo.
So installing new versions is easier, and of course Docker verifies the image checksum for you, and the image will also have been scanned for vulnerabilities with the results reported back to the developers. You can do all of this yourself, because, as the comment on this answer states, it's just a statically compiled binary, so no hassle, just install it and go.
Only it still isn't that easy. Another benefit is the ease with which you can incorporate the containerised version into docker-compose configurations, or run it in K8s. Again, you may not need this capability, but given that the industry is moving that way, you can learn to do it with the standardised tools now and apply that knowledge everywhere, or you can learn a different technique for installing every single tool you use (fetch some from GitHub releases and copy the binary, install others with the package manager, download, unzip and install others, run a vendor installer for still others, and so on). Or you can just learn how to do it with Docker and apply the same trick to everything. The vast majority of modern tools and software are now packaged in this 'standard' manner. That's the point of containers really: standardisation. A single approach more-or-less fits everything.
So, you get a standardised approach that fits most modern software, extra security, and easier versioning, and it all works almost exactly the same way no matter which operating system you're running on (almost -- it does cover Linux, Windows, macOS, Raspbian, etc.).
There are other security benefits beyond those mentioned here that apply in an enterprise environment; I don't have time to go into much detail, but if you're interested you could look at things like Aqua and Prisma Cloud Compute. And of course you also have the option of extending the base hashicorp/terraform image and baking in your favourite defaults.
Personally, I have no choice at work but to run Windows (without WSL), but I am allowed to run Docker, so I have a 'swiss army knife' container with aliases that run other containers through the shared Docker socket. This gets me as close to a real Linux environment as possible while running Windows. I dispose of my work container regularly and wouldn't want to rebuild it whenever I change the version of a tool I'm using, so I alias the latest version of those tools and new versions are pulled into my workspace automatically. If that breaks what I'm doing, I can pin a version in the alias and keep working until I'm ready to upgrade. If I need to downgrade a tool while working on somebody else's code, I just change the alias again and everything works with the old version; a sketch of such an alias is below. This workflow seems to me the easiest I've ever used, and I've been doing this for 35 years.
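For illustration, a version-pinned alias along those lines might look like the following (the mount paths and version are just examples, not anything specific to my setup):
alias terraform='docker run --rm -it -v $PWD:/work -w /work -v ~/.aws:/root/.aws hashicorp/terraform:0.12.17'
terraform init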
I think that Docker and this approach to engineering are simpler, cleaner, and more secure than anything that has come before. I strongly recommend that everyone try it.
I spent all day trying to build Host sFlow 2.0.6-1 from source (https://github.com/sflow/host-sflow/releases) for XenServer 7.0 using the XenServer DDK from this site: http://xenserver.org/overview-xenserver-open-source-virtualization/download.html
First I had to make 2 changes to the file hsflowd-xen.spec:
Changed line 3 to: "Version: 2.0.6" (it was still 2.0.1)
Changed line 20 to: "%setup -n hsflowd-2.0.6-1" (added the name because the default one was wrong).
Now my problem is that I don't have the xenstore.h file. After long searches I found that it's in the package libxen-dev (or libxen-devel), but I couldn't find it with its dependencies anywhere.
The four most probable solutions, I think, are:
1. (The lazy one) Get the iso file for Host sFlow already built for XenServer 7.0 (the official site stopped building at 6.5)
2. Set up a proper yum repository that will contain libxen-dev and its dependencies. I can't even connect to the official CentOS repositories because the files in /etc/yum.repos.d/ have a bad URL.
This is the content of /etc/centos-release: "XenServer DDK release 7.0.0-125770c (xenenterprise)"
3. Somehow manage to use 'xenstore.a' instead of 'xenstore.h'. I changed the code in src/Linux/mod_xen.c to include 'xenstore.a' instead of 'xenstore.h', but when I build, it creates a new file with the old code and ignores my changes. I probably changed the wrong files because there are several copies of the whole code. I'm not even sure it would work even if I did manage to include 'xenstore.a'.
4. Make xenstore from source. I didn't try it because I only found old sources and figured I'd be missing the dependencies too.
PS: I'm a n00b at CentOS and Makefiles in general, so the solution might be obvious and I just don't know it.
With gratitude to lagange, I updated the host-sflow project with a XenServer 7 build. I also added a Docker recipe so you can replace all these steps with just "./docker_build_on xenserver". Please raise issues on https://github.com/sflow/host-sflow.
I finally succeeded in building it. That's what I had to do step by step:
Import the XenServer DDK 7.0.0 into XenCenter.
Extend xvda1 following these steps: https://support.citrix.com/article/CTX125405
Make these changes to hsflowd-xen.spec:
3rd line: Version: 2.0.6
20th line: %setup -n hsflowd-2.0.6-1
Add these two lines before %description:
%define debug_package %{nil}
%define _unpackaged_files_terminate_build 0
Change the file /etc/yum.repos.d/CentOS-Base:
Change all occurrences of "$releasever" to "7".
Change all occurrences of "$basearch" to "x86_64".
Change "enabled=0" to "enabled=1" for each repository.
Uncomment baseurl lines for each repository.
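If you prefer to script the first two substitutions, something along these lines should work (this assumes the stock CentOS-Base.repo filename; enabling the repos and uncommenting the baseurl lines is easier to do by hand):
sed -i 's/\$releasever/7/g; s/\$basearch/x86_64/g' /etc/yum.repos.d/CentOS-Base.repo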
Mount the Development packages (binpkg.iso available on the xenserver.org download page) and add a file for it in /etc/yum.repos.d/
Mine looks like this:
[binpkg]
name=CitrixXenServer7
enabled=1
baseurl=file:///mnt/binpkg/
gpgcheck=0
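For reference, a sketch of getting the ISO mounted where that baseurl expects it (the ISO path is wherever you saved the downloaded binpkg.iso):
mkdir -p /mnt/binpkg
mount -o loop binpkg.iso /mnt/binpkg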
Install the following two packages with yum (dependencies should install correctly now; see the command after the list):
xen-libs-devel.x86_64
xen-dom0-libs-devel.x86_64
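In other words, something like:
yum install xen-libs-devel.x86_64 xen-dom0-libs-devel.x86_64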
Build and install it using this tutorial: https://raw.githubusercontent.com/sflow/host-sflow/v2.0.4/INSTALL.XenServer
I'm looking to do some minimal GitLab CE customization by using my own image assets:
brand_logo.png, favicon.ico, logo-black.png, logo-white.png
I ran into:
https://kovah.me/customize-gitlab-installation/
http://axilleas.me/en/blog/2014/custom-gitlab-login-page/
I want to avoid the approach in the former as I'd prefer not to mess with any files other than the image files. I tried the approach in the latter, but couldn't get it to work with my omnibus install (Ubuntu 12.04). I get a flurry of errors when trying to recompile assets.
Any tips?
Currently, gitlab-ce lets you modify the text and logos on the sign-in page; see branded login for more info.
At the time of this edit, there was also a discussion about allowing the favicon to be changed.
If you need to make more aggressive modifications, see below.
Old answer:
If you don't have any plans to upgrade GitLab (or you don't mind repeating this process every time you upgrade), try the following:
Modify the desired assets; the path is:
/opt/gitlab/embedded/service/gitlab-rails/app/assets/images/
After that, clean the assets cache:
gitlab-rake assets:clean RAILS_ENV=production
and regenerate them (I had some permission errors with this one, nothing that a chmod 777 couldn't fix; just try to revert the permissions back to their original state afterwards):
gitlab-rake assets:precompile RAILS_ENV=production
and finally restart:
sudo gitlab-ctl restart
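Putting the whole procedure together with a hypothetical custom logo (my-logo.png is your own file, and the target filename is whichever stock image you want to replace, e.g. logo-white.png from the question):
sudo cp my-logo.png /opt/gitlab/embedded/service/gitlab-rails/app/assets/images/logo-white.png
gitlab-rake assets:clean RAILS_ENV=production
gitlab-rake assets:precompile RAILS_ENV=production
sudo gitlab-ctl restart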
the second link is mine :)
I was going to write a post about customising the login page in omnibus, but if you really want to do this the right way, you'd have to build your own omnibus package. Basically, it boils down to this:
Follow my post to make any custom changes and push to a branch in github or your gitlab repo.
Clone omnibus-gitlab, edit config/software/gitlab-rails.rb to reflect your custom login commit and git repo from step 1 and follow the instructions to build the package.
You can see what I changed here.
I have a test VM with Debian Wheezy and no ruby installed. Gitlab 6.9.2 has been installed using the provided installer which brings an embedded ruby. Now, I want to import some old repos into Gitlab, but I cannot find the correct procedure. I think it should be this way:
su - git
export PATH=$PATH:/opt/gitlab/embedded/bin
cd ~
bundle exec rake gitlab:import:repos RAILS_ENV=production
However, I only get the error "Could not locate Gemfile". I have tried several other ways, including installing Debian's ruby, and searched through multiple Google and StackOverflow results, but I couldn't get it to work.
You should first place the bare repos in the repo dir. The default path for omnibus is /var/opt/gitlab/git-data/repositories/<namespace>. Then you just run the rake task:
sudo -u git -H cp -r my-project/.git /var/opt/gitlab/git-data/repositories/<namespace>/my-project.git
sudo gitlab-rake gitlab:import:repos
See invoking rake tasks and the import mechanism.
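For example, with a hypothetical source directory of already-bare repos and an existing group (both are just placeholders for your own paths and namespace):
sudo -u git -H cp -r /srv/old-repos/*.git /var/opt/gitlab/git-data/repositories/legacy/
sudo gitlab-rake gitlab:import:repos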
Edit: Sent an MR upstream to include this info in the readme.
I ran into the same issue with "Could not locate Gemfile", so I searched for the Gemfile and tried several folders until it worked.
This solution applies to GitLab installed from source (or, in my case, running inside the official Docker container).
Place your bare .git repository (or several of them) in:
/var/opt/gitlab/git-data/repositories/<namespace>/my-project.git
Switch to the user "git":
su git
Check whether rake is on your PATH by simply running "rake". If it is not available, extend your PATH:
export PATH=$PATH:/opt/gitlab/embedded/bin
After that, switch to the directory where the rake command to import your bare projects will work, and do the import:
cd /opt/gitlab/embedded/service/gitlab-rails/
bundle exec rake gitlab:import:repos RAILS_ENV=production
Output will be similar to this:
Processing raspberry/apollo-web.git
* Created apollo-web (raspberry/apollo-web.git)
Processing raspberry/apollo-web.wiki.git
* Skipping wiki repo
Processing dhbw/dhbw-prototyping-node-rest-course.git
...
EDIT:
OK, I was happy a bit too early. Although the output says it was imported, no new projects show up in the web GUI.
I will investigate further...
Environment:
OS: Centos 6.5
GitLab: gitlab-6.6.5_omnibus-1.el6.x86_64.rpm (installed as root user)
Hello all,
I am attempting to figure out how to launch the rails console for the GitLab application so I can take a look at the data as needed.
Based on what I read in the documentation, it should be stored in /home/git/gitlab, but when I change to my git user there isn't a gitlab directory. I did see something familiar in /opt/gitlab/embedded/service/gitlab-rails, but since I am logged in as root I don't seem to have anything in my PATH to execute.
Should I not have installed this as root since all of the documentation for installation is using sudo? If I do have to use something other than root for the install, is simply uninstalling the RPM good enough or do I need to re-install the entire OS?
If my system is ok being installed as root, can anyone tell me where I can find the documentation related to administering gitlab or at the very least the documentation on how to view the data? The documentation that I have found is as of version 5 and it doesn't look like it applies to 6. Again I could be wrong if I installed this incorrectly.
Thanks in advance.
I'm a little disappointed that there doesn't appear to be any documentation on how to install GitLab from the RPM; however, I think I've managed to figure it out.
Download and install the .rpm for Centos from https://www.gitlab.com/downloads/
Run: /opt/gitlab/bin/gitlab-ctl reconfigure
Browse to http://example.com:80 and log in with the following creds:
username: admin@local.host
password: 5iveL!fe
I would be cautious of using the above steps for deploying a production site (especially one exposed to the internet) because many of the services seem to be running as root. I'll be doing a bit more reading to see if I can restrict this further.
Also, configuration looks like it's located under: /var/opt/gitlab/gitlab-rails/etc.
Good luck!
Bowen
Reviving an old question, but the answer is:
sudo gitlab-rails console