Add additionalScrapeConfigs to existing kube-prometheus-stack without data loss

I deployed a kube-prometheus-stack instance (Prometheus, AlertManager, Grafana) and it has been running for quite some time with data, some custom dashboards, data sources and users.
Now I want to add additionalScrapeConfigs for a newly deployed blackbox-exporter to the existing kube-prometheus-stack to extend the monitoring.
Is there any way to update the existing kube-prometheus-stack without applying a new values file via the Helm chart? I'm afraid that running helm upgrade with new_values.yaml would wipe out all existing data, dashboards, data sources and users.
I installed kube-prometheus-stack with these commands:
helm show values prometheus-community/kube-prometheus-stack > custome-values.yaml
helm install -f ./custome-values.yaml prometheus prometheus-community/kube-prometheus-stack -n monitoring
And deployed blackbox-exporter with this:
helm install prometheus-blackbox-exporter prometheus-community/prometheus-blackbox-exporter -n monitoring

A helm upgrade of the Prometheus release with additionalScrapeConfigs added does not wipe out your existing scrape config.
To be more sure of what exactly is being changed (added) with your upgrade, I'd suggest you use the helm-diff plugin.
https://github.com/databus23/helm-diff
Once you have installed it, run the command below to see a diff of what is being added or removed before you run the upgrade.
helm diff upgrade -f ./custome-values.yaml prometheus prometheus-community/kube-prometheus-stack -n monitoring
The above command does not run the actual upgrade; it only shows the diff of the changes.
Once you've reviewed the changes, run the actual upgrade.
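For reference, a minimal sketch of what the addition to custome-values.yaml could look like; the job name, probe target and exporter address below are assumptions rather than values from your setup (prometheus-blackbox-exporter:9115 is the chart's usual default service and port):
prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      - job_name: blackbox-http
        metrics_path: /probe
        params:
          module: [http_2xx]
        static_configs:
          - targets:
              - https://example.com            # endpoint to probe (placeholder)
        relabel_configs:
          - source_labels: [__address__]
            target_label: __param_target
          - source_labels: [__param_target]
            target_label: instance
          - target_label: __address__
            replacement: prometheus-blackbox-exporter:9115   # exporter service (assumption)
After reviewing the diff, the actual upgrade is simply helm upgrade -f ./custome-values.yaml prometheus prometheus-community/kube-prometheus-stack -n monitoring.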

Related

How can I deploy arbitrary files from an Azure git repo to a Databricks workspace?

Databricks recently added support for "files in repos", which is a neat feature. It gives a lot more flexibility to the projects, since we can now add .json config files and even write custom Python modules that exist solely in our closed environment.
However, I just noticed that the standard way of deploying from an Azure git repo to a workspace does not support arbitrary files. First off, all .py files are converted to notebooks, breaking the custom modules that we wrote for our project. Secondly, it skips every file that does not end in one of the following: .scala, .py, .sql, .SQL, .r, .R, .ipynb, .html, .dbc, which means our .json config files are missing when the deployment is finished.
Is there any way to get around these issues or will we have to revert everything to use notebooks like we used to?
You need to stop doing deployment the old way, as it depends on the Workspace REST API, which doesn't support arbitrary files. Instead you need to have a Git checkout in your destination workspace and update that checkout to a given branch/tag when doing a release. This can be done via the Repos API or the databricks CLI. Here is an example of how to do that with the CLI from a DevOps pipeline.
- script: |
    echo "Checking out the releases branch"
    databricks repos update --path $(STAGING_DIRECTORY) --branch "$(Build.SourceBranchName)"
  env:
    DATABRICKS_HOST: $(DATABRICKS_HOST)
    DATABRICKS_TOKEN: $(DATABRICKS_TOKEN)
  displayName: 'Update Staging repository'
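The same update can also be done by calling the Repos API directly instead of the CLI; a hedged sketch, where the repo ID and branch name are placeholders:
# Point the workspace checkout at the release branch via the Repos API
curl -X PATCH "$DATABRICKS_HOST/api/2.0/repos/<REPO_ID>" \
  -H "Authorization: Bearer $DATABRICKS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"branch": "release"}'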

Installing Tomcat as a different user using yum

I'm installing Tomcat using the yum package manager, but I want to run the service under a different user (not tomcat).
Is there an easy way to do that at installation time, or am I always forced to change the owner of all directories, the service, etc.?
If tomcat is managed by systemd, you can add a custom file /etc/systemd/system/tomcat.service.d/custom-user.conf containing just the following lines
[Service]
User=myUser
For older OSes you should be able to do it by setting the TOMCAT_USER variable in /etc/sysconfig/tomcat.
This is an extra configuration, of course. It's not possible to modify an rpm-provided configuration file without rebuilding the rpm. If you have an internal yum repository you can build an rpm package providing this file, but I think the easier way is to use a configuration management tool like Ansible or SaltStack.
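For the systemd route, a minimal sketch of the commands (the unit name tomcat.service and the user myUser are just the examples from above):
# Create the drop-in directory and the override file for the tomcat service
sudo mkdir -p /etc/systemd/system/tomcat.service.d
sudo tee /etc/systemd/system/tomcat.service.d/custom-user.conf > /dev/null <<'EOF'
[Service]
User=myUser
EOF
# Reload systemd so it picks up the drop-in, then restart the service
sudo systemctl daemon-reload
sudo systemctl restart tomcat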

How to downgrade Terraform to a previous version?

I have installed a version (0.12.24) of Terraform which is later than the required version (0.12.17) specified in our configuration. How can I downgrade to that earlier version? My system is Linux Ubuntu 18.04.
As long as you are on Linux, do the following in the terminal to remove the current binary:
rm -r $(which terraform)
Install the previous version:
wget https://releases.hashicorp.com/terraform/1.3.4/terraform_1.3.4_linux_amd64.zip
unzip terraform_1.3.4_linux_amd64.zip
mv terraform /usr/local/bin/terraform
terraform --version
That's it, my friend.
EDIT: I've assumed people now use v1.3.5 so the previous version is v1.3.4.
You could also check out Terraform Switcher - this will allow you to switch between different versions easily.
First, download the latest package information using:
sudo apt-get update
The simplest way to downgrade is to use apt-get to install the required version - this will automatically perform a downgrade:
Show a list of available versions - sudo apt list -a terraform
terraform/xenial 0.13.5 amd64
terraform/xenial 0.13.4-2 amd64
... etc
or use sudo apt policy terraform to list available versions
Install the desired version:
sudo apt-get install terraform=0.14.5
Or, for a 'clean' approach, remove the existing version before installing the desired version:
sudo apt remove terraform
There are other valid answers here. This may be useful if you have a situation, like I do, where you need multiple Terraform versions during a migration from an old version to a new version.
I use tfenv for that:
https://github.com/tfutils/tfenv
It provides a modified terraform script that does a lookup of the correct terraform executable based on a default or based on the closest .terraform-version file in the directory or parent directories. This allows us to use a version of Terraform 0.12 for our migrated stuff and keep Terraform 0.11 for our legacy stuff.
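As a rough sketch of the tfenv workflow (the version numbers are just examples):
# Install and select the version the project expects
tfenv install 0.12.17
tfenv use 0.12.17
terraform --version
# A .terraform-version file in the repository pins the version for everyone
echo "0.12.17" > .terraform-version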
You shouldn't be installing Terraform on Ubuntu any more. Generally speaking, the industry has moved on to Docker now. You can install Docker like this:
sudo apt install -y curl
curl -LSs get.docker.com | sh
sudo groupadd docker
sudo usermod -aG docker $USER
Once installed you can run terraform like this:
docker run -v $PWD:/work -w /work -v ~/.aws:/root/.aws hashicorp/terraform:0.12.17 init
Assuming that your .aws directory contains your aws credentials. If not, you can leave that mount binding (-v ~/.aws:/root/.aws) out of the command and it'll work with whatever scheme you choose to use. You can change the version of terraform you are using with ease, without installing anything.
There are significant benefits to this approach over the accepted answer. The first is ease of versioning. If you installed Terraform using a package manager, you can either uninstall it and install the version you need, or you can play around with Linux alternatives (if your distro supports them and you are on Linux at all -- you could be using Windows and have downloaded and run an installer). Of course, this might be a one-off thing, in which case you do it once and you're fine forever, but in my experience that isn't often the case: most teams are required to update versions due to security controls, and those teams that aren't required to regularly update software probably should be.
If this isn't a one-off thing, or you'd rather not play around too much with versioning, you could just download the binary, as one comment on this post points out. It's pretty easy to come up with a scheme of directories for each version, or just delete the one you're using and replace it completely. This may suit your use case pretty well. Go to the appropriate website (I've forgotten which one -- HashiCorp or the GitHub repo's releases page; you can always search for it, though that takes time too -- which is my point), find the right version and download it.
Or, you can just type docker run hashicorp/terraform:0.12.17 and the right version will be automagically pulled for you from a preconfigured online trusted repo.
So, installing new versions is easier, and of course Docker will run the checksum for you and will also have scanned the image for vulnerabilities and reported the results back to the developers. Of course, you can do all of this yourself, because as the comment on this answer states, it's just a statically compiled binary, so there's no hassle: just install it and go.
Only it still isn't that easy. Another benefit is the ease with which you could incorporate the containerised version into docker-compose configurations, or run it in K8S. Again, you may not need this capability, but given that the industry is moving that way, you can learn to do it using the standardised tools now and apply that knowledge everywhere, or you can learn a different technique to install every single tool you use (get some from GitHub releases and copy the binary, use the package manager for others, download, unzip and install others still, install yet others from the vendor website using an installer, etc.). Or you can just learn how to do it with Docker and apply the same trick to everything. The vast majority of modern tools and software are now packaged in this 'standard' manner. That's the point of containers really -- standardisation. A single approach more-or-less fits everything.
So, you get a standardised approach that fits most modern software, extra security, and easier versioning, and this all works almost exactly the same way no matter which operating system you're running on (almost -- it does cover Linux, windows, osx, raspbian, etc.).
There are other benefits around security other than those specifically mentioned here, that apply in an enterprise environment, but I don't have time to go into a lot of detail here, but if you were interested you could look at things like Aqua and Prisma Cloud Compute. And of course you also have the possibility of extending the base hashicorp/terraform container and adding in your favourite defaults.
Personally, I have no choice at work but to run Windows (without WSL), but I am allowed to run Docker, so I have a 'swiss army knife' container with aliases to run other containers through the shared Docker socket. This means that I get as close to a real Linux environment as possible while running Windows. I dispose of my work container regularly and wouldn't want to rebuild it whenever I change the version of a tool that I'm using, so I use an alias against the latest version of those tools, and new versions are automatically pulled into my workspace. If that breaks what I'm doing, I can pin a version in the alias and continue working until I'm ready to upgrade. If I need to downgrade a tool when I'm working on somebody else's code, I just change the alias again and everything works with the old version. It seems to me that this workflow is the easiest I've ever used, and I've been doing this for 35 years.
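For what it's worth, a hedged sketch of such an alias (the mounts and the pinned tag are assumptions, matching the example earlier in this answer):
# Make `terraform` resolve to the containerised binary; change the tag to switch versions
alias terraform='docker run --rm -it -v "$PWD":/work -w /work -v ~/.aws:/root/.aws hashicorp/terraform:0.12.17'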
I think that docker and this approach to engineering is simpler, cleaner, and more secure than any that has come before it. I strongly recommend that everyone try it.

How to make a backup of GitLab without it running?

I was using the GitLab omnibus installation, but my PC broke and won't boot any more.
So I can't run GitLab and have to make the backup in this state.
The GitLab documentation describes how to make a backup while GitLab is running, but there is no description of how to make a backup while it is not running.
(https://docs.gitlab.com/ee/raketasks/backup_restore.html)
The repositories are already backed up; what I really want to back up are the GitLab support functions (e.g. issues, merge requests, etc.).
How can I do this?
If possible, you would need to back up the data mounted by your GitLab omnibus image and copy that data to a working PC, in order to run GitLab there.
Once you have a running GitLab on a new workstation, you can make a backup there.
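A minimal sketch of that copy step, assuming the broken disk is mounted read-only on the working PC and the default omnibus paths were used (the mount point and destination directories below are assumptions):
# Copy config, data and logs from the old disk to the new host
rsync -a /mnt/broken-disk/etc/gitlab/     /srv/gitlab/config/
rsync -a /mnt/broken-disk/var/opt/gitlab/ /srv/gitlab/data/
rsync -a /mnt/broken-disk/var/log/gitlab/ /srv/gitlab/logs/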
This is my self-answer.
There was no way to back up without running GitLab, because all of the database data lives in PostgreSQL.
So I installed another GitLab in Docker on my PC and attached everything to it (config, repositories, database data).
Below is what I did:
Install GitLab in Docker (you MUST install the specific version that matches the original version).
https://docs.gitlab.com/omnibus/docker/
Modify the docker run script to connect your original data to GitLab in Docker.
e.g.)
sudo docker run --detach \
  --hostname gitlab.example.com \
  --publish 443:443 --publish 80:80 --publish 22:22 \
  --name gitlab \
  --restart always \
  --volume [USER_DIR]/gitlab/config:/etc/gitlab \
  --volume [USER_DIR]/gitlab/logs:/var/log/gitlab \
  --volume [USER_DIR]/gitlab/data:/var/opt/gitlab \
  gitlab/gitlab-ce:[ORIGINAL_VERSION]
Run GitLab in Docker.
Run the backup inside Docker using the backup method of the installed omnibus package.
https://docs.gitlab.com/ee/raketasks/backup_restore.html#restore-for-omnibus-installations
e.g.)
docker exec -t gitlab gitlab-rake gitlab:backup:create
After the backup is done, find your backup file at the location specified in your configuration file, e.g.:
[USER_DIR]/etc/gitlab/gitlab.rb
I don't agree with all of your conclusions, even if it holds as a solution. It all depends on your setup, and if you have all data on the same machine, it is a setup with room for improvement.
My own setup provides external PostgreSQL 9.x and Redis 5.x servers. The benefit of external servers plus Docker is that backup / restore is possible using only the external servers and root access to a Docker volume on a Docker host. This solution involves fewer steps since the DBs are external.
I have done this a number of times and it works, but it should only be used if you know what you're doing. Some parts are the same as you discovered, like reinstalling the same version, etc.
I just wanted to point out that more than one solution exists for this problem. However, one thing that would be more beneficial is if the GitLab team focused on PostgreSQL 11.x compatibility as opposed to only 10.x compatibility. I have already tested 11.x successfully in a build from source, but I am waiting for a release by the GitLab team.
I am happy you made it work!

GitLab upgrade but gitlab.rb unchanged

I am using GitLab CE. I upgraded GitLab CE from 7.14.3 to 8.9.6 through apt-get upgrade. After the upgrade succeeded, I found that the GitLab configuration file located at /etc/gitlab/gitlab.rb stays the same.
But why? I thought that a GitLab upgrade would automatically add the new settings to gitlab.rb. For now, I have to copy the newest configuration file, find the differences and then merge them into my current gitlab.rb.
Is there any way to automatically upgrade the configuration file to the newest version while keeping the changes I have made?
You can easily check the diff with gitlab-ctl diff-config.
Then merge in what's new so the diff stays minimal, or, if you have made no changes to gitlab.rb, you can just copy /opt/gitlab/etc/gitlab.rb.template over it.
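A short sketch of that workflow (paths are the standard omnibus ones mentioned above):
# Show what differs between your gitlab.rb and the packaged template
sudo gitlab-ctl diff-config
# If you have no local customisations, back up and replace with the template,
# then reconfigure so the new settings take effect
sudo cp /etc/gitlab/gitlab.rb /etc/gitlab/gitlab.rb.bak
sudo cp /opt/gitlab/etc/gitlab.rb.template /etc/gitlab/gitlab.rb
sudo gitlab-ctl reconfigure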
