I'm using the Azure Resource Manager template to generate an Azure TeamCity server with an agent on the same Linux CoreOS Azure VM. All the tools are there for building .NET Core projects, but for Xamarin projects I need the Visual Studio build tools enabled on the box.
Following the instructions at http://www.mono-project.com/download/vs/#download-lin, I tried logging onto the agent by connecting to the VM via SSH and then running:
docker exec -it [container id] bash
apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
apt install apt-transport-https
echo "deb https://download.mono-project.com/repo/ubuntu vs-xenial main" | tee /etc/apt/sources.list.d/mono-official-vs.list
apt update
apt install mono-devel # alternatively: apt install mono-complete
However, even after rebooting the TeamCity VM, there is still an unmet requirement, listed as "Mono4.5_x86 exists".
Installing Mono on the agent works; however, every time I restart the agent, the installation is removed.
How can I install Mono on the build agent so that it persists when I restart the agent?
This took ages to fix, but I managed to find the solution eventually.
By default, the Docker image used by the Azure Resource Manager template for TeamCity does not contain Mono. You can install Mono inside the container, but as soon as you restart the agent, the changes are lost.
You have to customise it following the instructions in https://hub.docker.com/r/jetbrains/teamcity-agent/.
Start off by creating a new image:
docker run -it -e SERVER_URL="http://<my-teamcity-server>.westeurope.cloudapp.azure.com" -v /mnt/data/teamcity-mono-agent:/data/teamcity_agent/conf --name="teamcity-mono-agent" jetbrains/teamcity-agent
Then start the agent using
docker start teamcity-mono-agent
Start a bash session in the agent with
docker exec -it teamcity-mono-agent bash
and in the bash terminal, install Mono using the sequence of instructions in the original question. (You may need to check the version of Linux that is running in the container, and modify these steps accordingly. There are detailed instructions on the Mono website.)
Once you have installed Mono, check the installation by typing mono, and then exit the bash session.
Commit the image using
docker commit teamcity-mono-agent mono-agent
And then restart the agent:
docker restart teamcity-mono-agent
In your TeamCity project, go to the build step and choose "MSBuild" from the dropdown. Choose "Mono xBuild 4.5" for the MSBuild version, "4.0" for the MSBuild Tools version, and x64 for the Platform.
Your new agent should now be able to pick up builds that require Mono.
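An alternative to docker commit is to bake Mono into a custom agent image with a Dockerfile, so the customisation is reproducible and survives re-pulls of the base image. This is only a sketch: it assumes the jetbrains/teamcity-agent base image is Ubuntu-based (check /etc/os-release inside the container first) and reuses the repository setup from the question; the build command is printed rather than executed.

```shell
# Generate a Dockerfile that layers Mono on top of the stock agent image.
build_dir=$(mktemp -d)
cat > "$build_dir/Dockerfile" <<'EOF'
FROM jetbrains/teamcity-agent
USER root
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF \
 && apt-get update && apt-get install -y apt-transport-https \
 && echo "deb https://download.mono-project.com/repo/ubuntu vs-xenial main" > /etc/apt/sources.list.d/mono-official-vs.list \
 && apt-get update && apt-get install -y mono-devel
EOF
# Build and tag it; then reference mono-agent instead of
# jetbrains/teamcity-agent when starting the agent container.
echo "docker build -t mono-agent $build_dir"
```

The advantage over docker commit is that the customisation is captured as text you can version-control and rebuild at any time.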
I need to install Python 3 on my virtual machine (it has Python 2.7), but I don't have internet access from the VM. Is there any way to do that without using the internet? I do have access to a private GitLab repository and a private Docker Hub registry.
Using GitLab
Ultimately, you can put whatever resources you need to install Python3 directly in GitLab.
For example, you could use the generic package registry to upload the files you need and then download them from GitLab on your VM. You could redistribute the installers from python.org/downloads this way.
If you're using a debian-based Linux distribution like Ubuntu, you could even provide the necessary packages in the GitLab debian registry (disabled by default, but can be enabled by an admin) and just use your package manager like apt install python3-dev after configuring your apt lists to point to the gitlab debian repo.
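For illustration, downloading such a file from the generic package registry goes through the GitLab packages API. Everything here (host, project ID, package coordinates) is a placeholder to adapt; GITLAB_TOKEN needs at least read_api scope, and the curl line is left commented out since it needs network access:

```shell
# Build the download URL for a file in a GitLab generic package registry.
GITLAB_HOST="gitlab.example.com"   # placeholder: your GitLab host
PROJECT_ID="1234"                  # placeholder: your project's numeric ID
PKG_NAME="python"
PKG_VERSION="3.10.0"
FILE_NAME="Python-3.10.0.tgz"
url="https://${GITLAB_HOST}/api/v4/projects/${PROJECT_ID}/packages/generic/${PKG_NAME}/${PKG_VERSION}/${FILE_NAME}"
echo "$url"
# curl --fail --header "PRIVATE-TOKEN: $GITLAB_TOKEN" -o "$FILE_NAME" "$url"
```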
Using docker
If you have access to dockerhub, technically you can access files from docker images as well. Here I'll assume you're using ubuntu or some debian-based distribution, but the same principle applies for any OS.
Suppose you build an image:
FROM ubuntu:<a tag that matches your VM version>
# downloads (but does not install) all the `.deb` files needed for python3
RUN apt-get update && apt-get install -y --download-only python3-dev
You can push this image to your docker registry.
Then on your VM, you can pull this image, extract the necessary install files from /var/cache/apt/archives/*.deb in the image, and install them using dpkg.
Extract files from the image (in this case, to a temp directory)
image=myprivateregistry.example.com/myrepo/myimage
source_path=/var/cache/apt/archives
destination_path=$(mktemp -d)
docker pull "$image"
container_id=$(docker create "$image")
docker cp "$container_id:$source_path" "$destination_path"
docker rm "$container_id"
Install python3 using dpkg:
# keep the glob outside the quotes so the shell expands it
dpkg --force-all -i "${destination_path}"/archives/*.deb
I have unzipped the New Relic files onto Amazon Linux and have both installer.sh and config_defaults.sh.
I have the license key in the parameter store which I am able to call
I install newrelic with the following command
sudo ./installer.sh GENERATE_CONFIG=true LICENSE_KEY=$APIKEY
sudo systemctl start newrelic-infra # to start the service
where $APIKEY comes from the parameter store.
However when I do
sudo systemctl start newrelic-infra
I get error message
level=error msg="can't load configuration file" component="New Relic Infrastructure Agent" error="no license key, please add it to agent's config file or NRIA_LICENSE_KEY environment variable"
How can I make the agent recognize the license key?
It seems you are installing the infrastructure agent on an Amazon Linux 2 host by following the manual or assisted tarball installation flow. Note that not all features and integrations are available with that setup.
New Relic provides linux packages and a step by step installation in the web interface (see "Add more data" in the New Relic One web).
The standard installation steps for Amazon Linux 2 would be:
echo "license_key: YOUR_LICENSE_KEY" | sudo tee -a /etc/newrelic-infra.yml && \
sudo curl -o /etc/yum.repos.d/newrelic-infra.repo https://download.newrelic.com/infrastructure_agent/linux/yum/el/7/x86_64/newrelic-infra.repo && \
sudo yum -q makecache -y --disablerepo='*' --enablerepo='newrelic-infra' && \
sudo yum install newrelic-infra -y
Here's the documentation explaining the different install scenarios.
You can get additional support from the community on the Explorers Hub.
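Alternatively, the error message itself points at a second option: the agent also reads the key from the NRIA_LICENSE_KEY environment variable. A sketch using a systemd drop-in follows; the drop-in directory is parameterised here so the snippet can be dry-run, but on a real host it would be /etc/systemd/system/newrelic-infra.service.d (written with sudo):

```shell
# Write a systemd drop-in that passes the license key to the agent.
DROPIN_DIR="${DROPIN_DIR:-$(mktemp -d)}"   # real host: /etc/systemd/system/newrelic-infra.service.d
mkdir -p "$DROPIN_DIR"
cat > "$DROPIN_DIR/license.conf" <<EOF
[Service]
Environment=NRIA_LICENSE_KEY=${APIKEY:-REPLACE_WITH_KEY}
EOF
# Then reload systemd and restart the agent:
#   sudo systemctl daemon-reload && sudo systemctl restart newrelic-infra
```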
I have to install Argo Tunnel on my server, a VM on Compute Engine (image: Debian GNU/Linux 10 (buster), amd64, built on 20200902, supports Shielded VM features), but I cannot get past the cloudflared installation step.
I followed the instructions on the developer portal: https://developers.cloudflare.com/argo-tunnel/downloads
And downloaded amd64 / x86-64 package for Linux,
I also used this code and installed cloudflared on my VM
git clone https://github.com/cloudflare/cloudflared.git
cd cloudflared/
go clean
go get github.com/cloudflare/cloudflared/cmd/cloudflared
make cloudflared
I see the directory, but I cannot check the version to verify that I installed everything properly (documentation).
changerz_critical@cloudshell:~/cloudflared (global-road-289110)$ cloudflared --version
-bash: cloudflared: command not found
I honestly read through all available docs and could not find anything that could help to solve this issue.
Would be very thankful for any help.
To install cloudflared on your VM instance, please follow the steps below:
Create VM instance:
$ gcloud beta compute instances create instance-1 --zone=europe-west3-a --machine-type=e2-medium --image=debian-10-buster-v20200910 --image-project=debian-cloud
Connect to VM instance via SSH:
$ gcloud compute ssh instance-1
Download and install cloudflared by using .deb package:
instance-1:~$ wget https://bin.equinox.io/c/VdrWdbjqyF/cloudflared-stable-linux-amd64.deb
instance-1:~$ sudo dpkg --install cloudflared-stable-linux-amd64.deb
Check the version:
instance-1:~$ cloudflared --version
cloudflared version 2020.9.0 (built 2020-09-14-2204 UTC)
Follow the instructions:
Please open the following URL and log in with your Cloudflare account:
https://dash.cloudflare.com/argotunnel?callback=https%3A%2F%2Flogin.argotunnel.com%2Fkob9m8T0PaRAFrkYjXjAI4vH1X4sqQ6IRtd8-D_THmYMaAM%3D
Leave cloudflared running to download the cert automatically.
Unfortunately, I don't have a domain to check the full setup. For further instructions I'd recommend posting a new question on the Cloudflare community.
Solved with:
git clone https://github.com/cloudflare/cloudflared.git
cd cloudflared/
go clean
go get github.com/cloudflare/cloudflared/cmd/cloudflared
make cloudflared
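For anyone hitting the same "command not found" after building from source: make cloudflared leaves the binary in the repository directory, which is not on $PATH. A sketch of the fix follows, using a stand-in script and a temp directory so it can be dry-run; on the real VM you would sudo install the built ./cloudflared into /usr/local/bin instead:

```shell
# Install the built binary into a directory that is on PATH.
BIN_DIR="${BIN_DIR:-$(mktemp -d)}"                  # real host: /usr/local/bin
demo_src=$(mktemp)                                  # stand-in for ./cloudflared
printf '#!/bin/sh\necho cloudflared version DEMO\n' > "$demo_src"
install -m 0755 "$demo_src" "$BIN_DIR/cloudflared"
PATH="$BIN_DIR:$PATH"
cloudflared --version
```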
I am struggling to install Azure CLI on an Ubuntu machine without root access.
The instructions here assume that we have root access (or reasonable sudo access).
I am trying to run this on an Ubuntu machine (provided by the IBM DevOps toolchain; root access will never be granted), where executing a sudo command such as:
AZ_REPO=$(lsb_release -cs)
echo "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ $AZ_REPO main" | \
sudo tee /etc/apt/sources.list.d/azure-cli.list
This results in sudo: no tty present and no askpass program specified
Why do I need a tty to run sudo? has some answers however I am not able to use them as I don't have control over the login to the shell via ssh.
I am using the IBM Cloud's DevOps toolchain to deploy applications to both IBM Cloud and Azure.
The DevOps toolchain provides me a shell for me to execute commands.
Are there other alternatives?
Assuming that you have the required prerequisites installed, you can either use the script or plain old pip to install the Azure CLI (pip install azure-cli). Personally, I often use the docker container as well.
Have you tried running the script as documented,
curl -L https://aka.ms/InstallAzureCli | bash
I do not see why sudo would be required to install az. The install script downloads a Python script (this one) and runs it. This script basically:
downloads virtualenv
creates a virtual env at ~/lib/azure-cli
calls pip install azure-cli in that virtual env
writes a shell script named az to ~/bin/az that runs python -m azure.cli, and makes it executable
adds tab completion for az
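Since the launcher ends up in ~/bin/az, the usual remaining step is making sure ~/bin is on PATH for the shell the toolchain gives you. A sketch, simulated with a temp directory standing in for $HOME so it can be dry-run:

```shell
# Put ~/bin on PATH so the az launcher written by the installer is found.
HOME_DIR="${HOME_DIR:-$(mktemp -d)}"               # stand-in for $HOME
mkdir -p "$HOME_DIR/bin"
printf '#!/bin/sh\nexec python -m azure.cli "$@"\n' > "$HOME_DIR/bin/az"
chmod +x "$HOME_DIR/bin/az"
case ":$PATH:" in
  *":$HOME_DIR/bin:"*) ;;                          # already on PATH
  *) PATH="$HOME_DIR/bin:$PATH" ;;                 # prepend for this session
esac
command -v az
```

To make this permanent, the same PATH line would go into the shell profile the toolchain actually sources.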
Based on my knowledge, this is possible, but root permission is required. You could check this answer to solve your issue, though I still think you need root permission to do it that way.
If possible, I suggest you could use Azure Cloud Shell to run cli command on Azure.
I am running Jenkins in Docker, from the official Docker Hub image.
I created a job which runs my own shell script; however, some binaries are missing in the container, e.g. the file command.
The Docker Hub page mentions that one can install additional binaries via Ubuntu's aptitude, but I don't know which package to install to get e.g. the file command working.
Unless Ubuntu did something different than the base Debian environment, file is included in the file package.
apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y file
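One way to make such gaps visible early is a preflight check at the top of the job's shell script, so a missing binary fails fast with a clear message instead of deep inside the build. A sketch follows; the tool list below uses sh and ls as stand-ins for whatever your script actually needs (e.g. file):

```shell
# Collect any required tools that are missing from the image.
missing=""
for tool in sh ls; do                  # replace with your script's real dependencies
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
if [ -n "$missing" ]; then
  echo "missing tools:$missing -- install the matching apt packages in the image" >&2
fi
```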