Azure Batch job start tasks failed

I'm using the Azure Batch Python API. When I create a new job, I see exit code 128 (image attached). How can I find out the reason for that?
I'm creating the new job with this code:
def wrap_commands_in_shell(commands):
    return "/bin/bash -c 'set -e; set -o pipefail; {}; wait'".format(';'.join(commands))

job_tasks = ['cd /mnt/batch/tasks/shared/ && git clone https://github.com/cryptobiu/OSPSI.git',
             'cd /mnt/batch/tasks/shared/OSPSI && git checkout cloud',
             'cd /mnt/batch/tasks/shared/OSPSI && cmake CMake',
             'cd /mnt/batch/tasks/shared/OSPSI && mkdir -p assets']

job_creation_information = batch.models.JobAddParameter(
    job_id,
    batch.models.PoolInformation(pool_id=pool_id),
    job_preparation_task=batch.models.JobPreparationTask(
        command_line=wrap_commands_in_shell(job_tasks),
        run_elevated=True,
        wait_for_success=True
    )
)

To diagnose the failure, look at the stderr.txt and stdout.txt of the failed Job Preparation task in the Azure Portal, in Azure Batch Explorer, or through an SDK. Find which node ran the job preparation task, navigate to that node, then to the job directory. Under the job directory you should see a jobpreparation directory; that directory contains the stderr.txt and stdout.txt files.
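If you prefer the SDK route, here is a minimal sketch (assuming an authenticated azure-batch BatchServiceClient named batch_client and the job_id from the question; the model field names follow the azure-batch package and are worth double-checking against your SDK version) that prints the job preparation task result for each compute node:
# Sketch only: report the job preparation task outcome per compute node.
def print_job_prep_status(batch_client, job_id):
    statuses = batch_client.job.list_preparation_and_release_task_status(job_id)
    for status in statuses:
        info = status.job_preparation_task_execution_info
        print('node:', status.node_id)
        print('  state:', info.state)
        print('  exit code:', info.exit_code)
        if info.failure_info is not None:
            # failure_info carries the error category and message for the failed task
            print('  failure:', info.failure_info.message)

print_job_prep_status(batch_client, job_id)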
With regard to the exit code, there are a few potential problems that could cause this:
Did you install git, cmake and any other dependencies as part of a start task?
I get a 404 when I try to navigate to: https://github.com/cryptobiu/OSPSI. Does this repo exist? If it's a private repository, are you providing the correct credentials?
A few notes about your job_tasks array:
You should not hardcode the paths /mnt/batch/tasks/shared. This path to the "shared" directory may not be the same between Linux distributions. You should use the environment variable $AZ_BATCH_NODE_SHARED_DIR instead. You can view a full list of Azure Batch pre-filled environment variables here.
You do not need to cd into the directory for each command; you only need to do it once. You can rewrite job_tasks as:
['cd $AZ_BATCH_NODE_SHARED_DIR',
 'TODO: INSERT YOUR COMMANDS TO SETUP AUTH WITH GITHUB FOR PRIVATE REPO',
 'git clone https://github.com/cryptobiu/OSPSI.git',
 'cd OSPSI',
 'cmake CMake',
 'mkdir -p assets']
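On the start-task point above, a rough sketch of installing the build dependencies once per node might look like the following (assuming an apt-based pool image and the same wrap_commands_in_shell helper; the package list is illustrative, not taken from the question):
# Hypothetical sketch: install git/cmake on every node before any tasks run.
# Attach this as start_task on the pool (batch.models.PoolAddParameter).
start_task = batch.models.StartTask(
    command_line=wrap_commands_in_shell([
        'apt-get update',
        'apt-get install -y git cmake build-essential'
    ]),
    run_elevated=True,      # matches the elevated style used in the question
    wait_for_success=True   # block task scheduling until the tools are present
)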

Related

Databricks init scripts not working sometimes

OK, this is very strange. I have some init scripts that I would like to run when a cluster starts.
The cluster has the init script, which is in a file in DBFS, basically this:
dbfs:/databricks/init-scripts/custom-cert.sh
Now, when I create the init script like this, it works (no SSL errors for my endpoints). Also, the event log for the cluster shows the duration as 1 second for the init script:
dbutils.fs.put("/databricks/init-scripts/custom-cert.sh", """#!/bin/bash
cp /dbfs/orgcertificates/orgcerts.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
echo "export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt" >> /databricks/spark/conf/spark-env.sh
""")
However, if I just put the init script in a bash script and upload it to DBFS through a pipeline, the init script does not do anything. It executes, according to the event log, but the execution duration is 0 seconds.
I have the sh script in a file named
custom-cert.sh
with the same contents as above, i.e.
#!/bin/bash
cp /dbfs/orgcertificates/orgcerts.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
echo "export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt"
but when I check /usr/local/share/ca-certificates/, it does not contain the orgcerts.crt copied from /dbfs/orgcertificates/, even though the cluster init script has run.
Also, I have compared the contents of the init script in both cases and, at least to the naked eye, I can't figure out any difference
i.e.
%sh
cat /dbfs/databricks/init-scripts/custom-cert.sh
shows the same contents in both scenarios. What is the problem in the second case?
EDIT: I read a bit more about init scripts and found that the logs of init scripts are written here
%sh
ls /databricks/init_scripts/
Looking at the err file in that location, it seems there is an error:
sudo: update-ca-certificates
: command not found
Why is update-ca-certificates found in the first case but not when I put the same script in a sh script and upload it to DBFS (instead of executing dbutils.fs.put within a notebook)?
EDIT 2: In response to the first answer. After running the command
dbutils.fs.put("/databricks/init-scripts/custom-cert.sh", """#!/bin/bash
cp /dbfs/orgcertificates/orgcerts.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
echo "export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt" >> /databricks/spark/conf/spark-env.sh
""")
the output is the file custom-cert.sh, and then I restart the cluster with the init script location set to dbfs:/databricks/init-scripts/custom-cert.sh, and then it works. So it is essentially the same content that the init script is reading (the generated sh script). Why can't it read it if I do not use dbutils.fs.put but just put the contents in a bash file and upload it during the CI/CD process?
As we know, an init script is a shell script that runs during startup of each cluster node, before the Apache Spark driver or worker JVM starts. In case 2, when you run a bash command with the %sh magic command, you are executing it only on the local driver node, so the worker nodes are not able to access the result. In case 1, by using the %fs route (dbutils.fs.put) you run the copy from the DBFS root, so along with the driver node the worker nodes can also access the path.
Ref : https://docs.databricks.com/data/databricks-file-system.html#summary-table-and-diagram
It seems that the observations I made in the comments section of my question are the way to go.
I now create the init script using a Databricks job that I run during the CI/CD pipeline from Azure DevOps.
The notebook has the commands
dbutils.fs.rm("/databricks/init-scripts/custom-cert.sh")
dbutils.fs.put("/databricks/init-scripts/custom-cert.sh", """#!/bin/bash
cp /dbfs/internal-certificates/certs.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
echo "export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt" >> /databricks/spark/conf/spark-env.sh
""")
I then create a Databricks job pointing to this notebook; the cluster is a job cluster, which is just temporary. Of course, in my case even this job creation is automated using a PowerShell script.
I then call this Databricks job in the release pipeline, again using a PowerShell script.
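For illustration only, a minimal sketch of triggering such a job from a pipeline step (written here in Python rather than PowerShell; the workspace URL, token, and job ID are placeholders, not values from this setup) against the Databricks Jobs run-now API:
# Hypothetical sketch: kick off the notebook job that (re)creates the init script.
# DATABRICKS_HOST, DATABRICKS_TOKEN and JOB_ID are placeholders.
import requests

DATABRICKS_HOST = "https://adb-0000000000000000.0.azuredatabricks.net"  # placeholder
DATABRICKS_TOKEN = "dapi-..."                                           # placeholder
JOB_ID = 123                                                            # placeholder

response = requests.post(
    f"{DATABRICKS_HOST}/api/2.1/jobs/run-now",
    headers={"Authorization": f"Bearer {DATABRICKS_TOKEN}"},
    json={"job_id": JOB_ID},
)
response.raise_for_status()
print("Triggered run:", response.json()["run_id"])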
This creates the file
/databricks/init-scripts/custom-cert.sh
I then use this file in any other cluster that accesses my org's endpoints (without certificate errors).
I still do not understand why the same script file can't simply be part of a repo and be uploaded during the release process (instead of having this Databricks job call a notebook). I would love to know the reason. The other answer on this question does not hold true, as you can see: the init script is created by a job cluster and then accessed from another cluster as part of its init script.
It simply boils down to how the init script gets created.
But it gets my job done, and maybe it helps someone else get their job done too.
I have raised a support case though to understand the reason.

Docker - unable to run script

What I'm doing
I am using AWS Batch to run a Docker container for a large compute job. I have configured ECR/ECS successfully, to the best of my knowledge, but I am having issues running the required commands for reasons that are beyond my level of understanding of Docker (newbie).
What I need to do is pass the below commands into my application and start my application to perform some heavy computing tasks; all commands listed below must be present.
The Issue(s)
The issue arises when I submit the job to AWS Batch; the service pulls the image from ECR (the Amazon container registry) and spins up a compute environment. The problem comes when I try to run the command I pass in; below I will go through it.
"command": [
"mkdir -p logging",
"chmod 777 logging/",
"docker run -t -i -e my-application", # container name
"-e APIKEY",
"-e BASEURI",
"-e APIUSER",
"-v WORKSPACE /logging:/src/log",
"DOCKERIMAGE",
"python my_app.py",
"-t APP_USER",
"-e APP_ENVIRONMENT",
"-u APP_USERNAME",
"-p APP_PASSWORD",
"-i IN_PATH",
"-o OUT_PATH",
"-b tmp/"
]
The command above generates the following error(s)
container_linux.go:370: starting container process caused: exec: "mkdir -p log": executable file not found in $PATH
I tried to pass in a command to echo the env var $PATH but was unsuccessful in getting a response; it resulted in a similar error.
I have successfully run "ls" and was able to see the directory contents of my application inside.
I am not, however, able to run any of the commands that I have included in the command [] section. I have tried just running python and such in hopes of getting a more detailed error, but was unsuccessful.
Logic in plain English
Create a path called logging if it doesn't exist
set the permissions for logging
run the docker container and pass in the environment variables while doing so
Tell docker to run the python file my_app.py and pass in the expected runtime args
Execute and perform the required logic delegated to the python3 application
Questions
Why can I not create a directory called "logging" here? Where am I?
Am I running these commands properly, as defined by AWS Batch or Docker?
What am I missing or where am I going wrong?
AWS Batch high level doc
AWS Batch link specific to what i'm doing
Assuming that you're following the syntax described in the Container Properties section of the AWS docs, you have several problems with the syntax of your command directive.
First
The command directive can only run a single command. You can't mash together a bunch of commands as you're trying to do in your example. If you need to run multiple commands you would need to embed them as an argument to a shell. For example, something like:
command: ["/bin/sh", "-c", "mkdir -p logging; chmod 777 logging; ..."]
Second
You must properly tokenize your command lines -- that is, when you type mkdir -p logging at the command prompt, the shell splits this into three parts (or "tokens"): ['mkdir', '-p', 'logging']. You need to do the same thing when building up the list of arguments to command.
This is invalid:
command: ["mkdir -p logging"]
That would look for a command named mkdir -p logging, and of course no such command exists. It would properly be written as:
command: ["mkdir", "-p", "logging"]
Third
I'm not very familiar with the AWS Batch environment, but it's unlikely you can run a docker command inside a Docker container as you're trying to do. It's unclear why you're doing this, though: why not just configure your AWS Batch job with the appropriate image, environment variables, etc.?
Take a look at some of these example job definitions.
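To illustrate that last suggestion, here is a hedged sketch (in Python with boto3; the job name, queue, definition, paths, and variable values are placeholders, not taken from the question) of submitting the job so that the environment variables and the tokenized command live on the Batch job itself instead of a nested docker run:
# Hypothetical sketch: submit an AWS Batch job with a tokenized command
# and environment variables set through containerOverrides.
import boto3

batch = boto3.client("batch")

batch.submit_job(
    jobName="my-application-run",       # placeholder
    jobQueue="my-job-queue",            # placeholder
    jobDefinition="my-job-definition",  # placeholder; its image is the one in ECR
    containerOverrides={
        "command": ["python", "my_app.py", "-i", "/src/in", "-o", "/src/out"],
        "environment": [
            {"name": "APIKEY", "value": "..."},   # placeholder values
            {"name": "BASEURI", "value": "..."},
            {"name": "APIUSER", "value": "..."},
        ],
    },
)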

How to use Custom Hooks with GitLab CE on Ubuntu 20.04 - VPS?

SUMMARY
When I push from my local workspace to the target repo in GitLab on my remote VPS, I want GitLab to run a script that asks a beta repo on the same VPS to git checkout and pull, in order to check my changes.
Current configurations
Gitlab configurations
Suppose you already have a project in your admin area; you just need to get its relative path.
Admin area > Projects
Select your project
Get the path in the project profile (hashed in this case)
Create a new directory in this location called custom_hooks.
sudo su
cd /var/opt/gitlab/git-data/repositories/hashed/path/of/project
mkdir custom_hooks
Inside the new custom_hooks directory, create a file with a name matching the hook type. For a post-receive hook the file name should be post-receive with no extension.
cd custom_hooks
nano post-receive
Write the code to make the server hook function as expected. Hooks can be in any language. Ensure the 'shebang' at the top properly reflects the language type. In this case the script used is:
#!/bin/sh
unset GIT_INDEX_FILE
cd /var/repo/beta.git && git checkout master && git up # --[see the note 2 below]
Make the hook file executable and make sure it’s owned by Git.
chmod +x post-receive
Note 1 :
You can find more information about git hooks here: GitLab Documentation : Server hooks
VPS configurations
Create a new alias following @RichardHansen's recommendations
git config --global alias.up '!git remote update -p; git merge --ff-only @{u}'
Note 2 :
During my research, I found an interesting answer about git pull on the forum.
I decided to follow that advice and made an alias named git up.
What is important is that :
it works with all (non-ancient) versions of Git,
it fetches all upstream branches (not just the branch you're currently working on), and
it cleans out old origin/* branches that no longer exist upstream.
You can find more informations about "In what cases could git pull be harmful ?" here :
Link to answer
Create a directory for git repos only and access it to process Hooks configurations
# Create a repo for the project in apache area
mkdir /var/www/beta
# Create the git repo only folder
cd /var
mkdir repo && cd repo
# Create git repo and init
mkdir beta.git && cd beta.git
git init --bare # --bare means that our folder will have no source files, just the version control.
# Add gitlab remote
git remote add gitlab
# Accessing hooks folder to create script
cd hooks
cat > post-receive
# On the blank line, write this script then 'ctrl + d' to save and press enter to exit
#!/bin/sh
unset GIT_INDEX_FILE
git --work-tree=/var/www/beta --git-dir=/var/repo/beta.git checkout -f
# Make the file executable
chmod +x post-receive
Note 3 :
'git-dir' is the path to the git repository. With 'work-tree', we can define a different path to where the files will actually be transferred.
The 'post-receive' hook is run every time a push is completed; in this case it sets the path where the files will be transferred to /var/www/beta.
Local Workspace configurations
# Create in your workspace a folder to hold the project
cd /path/to/workspace
mkdir project && cd project
# Initialize git and add gitlab remote
git init
git remote add gitlab ssh://user@mydomain.com/gitlab/path/of/project
# Create an index.html file and send the initial commit
nano index.html
# copy this into the file then 'ctrl + x' then 'y' then 'enter' to save
<html>
<head>
<title>Welcome to Beta domain!</title>
</head>
<body>
<h1>Success! The beta virtual host is working!</h1>
</body>
</html>
# prepare the changes and then send the commit
git status
git add index.html
git commit -m "chore: add index.html :tada: :rocket:"
git push gitlab master
EXPECTED RESULTS
The expected result of this process is that when git push gitlab master is done, the hook inside the GitLab hashed directory of the project runs a script that does something like this:
# Access the beta.git directory
cd /var/repo/beta.git
# Run command for updating repo
git checkout master && git up
# If we access the beta folder in apache area we should see index.html
cd /var/www/beta
ls
--index.html
ACTUAL RESULTS
No result.
ERROR MESSAGES
No error messages.
REQUEST
How can I set up a process like this one?
Is there something in my process I did not take into consideration?

Use pre-installed Terraform plugins instead of downloading them with terraform init

While running terraform init with Terraform 0.11.3, we are getting the following error:
Initializing provider plugins...
- Checking for available provider plugins on https://releases.hashicorp.com...
Error installing provider "template": Get
https://releases.hashicorp.com/terraform-provider-template/: read tcp
172.25.77.25:53742->151.101.13.183:443: read: connection reset by peer.
Terraform analyses the configuration and state and automatically
downloads plugins for the providers used. However, when attempting to
download this plugin an unexpected error occured.
This may be caused if for some reason Terraform is unable to reach the
plugin repository. The repository may be unreachable if access is
blocked by a firewall.
If automatic installation is not possible or desirable in your
environment, you may alternatively manually install plugins by
downloading a suitable distribution package and placing the plugin's
executable file in the following directory:
terraform.d/plugins/linux_amd64
I realized it's because of connectivity issues with the https://releases.hashicorp.com domain. For some obvious reasons, we will have to live with this connectivity issue, as there are SSL and firewall issues between the control server and HashiCorp's servers.
Is there any way we could bypass this by downloading the plugins from HashiCorp's servers and copying them onto the control server? Or any other alternative to avoid trying to download things from HashiCorp's servers?
You can use pre-installed plugins by either putting the plugins in the same directory as the terraform binary or by setting the -plugin-dir flag.
It's also possible to build a bundle of every provider you need automatically using the terraform-bundle tool.
I run Terraform in our CI pipeline in a Docker container so have a Dockerfile that looks something like this:
FROM golang:alpine AS terraform-bundler-build

RUN apk --no-cache add git unzip && \
    go get -d -v github.com/hashicorp/terraform && \
    go install ./src/github.com/hashicorp/terraform/tools/terraform-bundle

COPY terraform-bundle.hcl .

RUN terraform-bundle package terraform-bundle.hcl && \
    mkdir -p terraform-bundle && \
    unzip -d terraform-bundle terraform_*.zip

####################

FROM python:alpine

RUN apk add --no-cache git make && \
    pip install awscli

COPY --from=terraform-bundler-build /go/terraform-bundle/* /usr/local/bin/
Note that the finished container image also adds git, make and the AWS CLI, as I also require those tools in the CI jobs that use this container.
The terraform-bundle.hcl then looks something like this (taken from the terraform-bundle README):
terraform {
  # Version of Terraform to include in the bundle. An exact version number
  # is required.
  version = "0.10.0"
}

# Define which provider plugins are to be included
providers {
  # Include the newest "aws" provider version in the 1.0 series.
  aws = ["~> 1.0"]

  # Include both the newest 1.0 and 2.0 versions of the "google" provider.
  # Each item in these lists allows a distinct version to be added. If the
  # two expressions match different versions then _both_ are included in
  # the bundle archive.
  google = ["~> 1.0", "~> 2.0"]

  # Include a custom plugin to the bundle. Will search for the plugin in the
  # plugins directory, and package it with the bundle archive. Plugin must have
  # a name of the form: terraform-provider-*, and must be built with the operating
  # system and architecture that terraform enterprise is running, e.g. linux and amd64
  customplugin = ["0.1"]
}
Configure plugin_cache_dir in .terraformrc:
plugin_cache_dir = "$HOME/.terraform.d/plugin-cache"
Then move the pre-installed providers into the plugin_cache_dir; Terraform will not download the providers any more.
By the way, using the ~/.terraform.d/plugin directory doesn't work:
/.terraform.d/plugin/linux_amd64$ terraform -v
Terraform v0.12.15
The proper way to handle this since Terraform 0.14, as also discussed on the terraform-bundle page mentioned in the currently accepted answer, is to use terraform providers mirror as described at https://www.terraform.io/cli/commands/providers/mirror. This command creates all the necessary index files etc. so the folder can be used for plugins. E.g.:
$ cd your-tf-root-module
$ terraform providers mirror path/to/tf-plugins
...
$ terraform init --plugin-dir path/to/tf-plugins
...
You can cd into each of your root modules (i.e. those that have Terraform state) and run the mirror command; multiple versions of a plugin may be installed there, and that's OK. When you run the terraform init command, it will fetch the proper one, same as without the --plugin-dir arg.
So the only difference is that the internet is not used to acquire the plugins; terraform init gets them from the plugin folder.
This is also very useful for creating a cache that can then be used by Terraform in CI/CD. E.g. in CircleCI you would have a manual job that calls mirror and does a save-cache; your automated terraform init job would restore-cache and use the --plugin-dir arg; then the automated terraform apply job would behave as usual.
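If you have several root modules, the mirror step can be scripted. A rough sketch (assuming the terraform binary is on PATH; the module paths and target directory are placeholders):
# Hypothetical helper: run `terraform providers mirror` in every root module,
# collecting all plugins into one shared directory for later use with
# `terraform init --plugin-dir`.
import subprocess
from pathlib import Path

ROOT_MODULES = [Path("envs/dev"), Path("envs/prod")]  # placeholder list
PLUGIN_DIR = Path("tf-plugins").resolve()             # shared mirror target

for module in ROOT_MODULES:
    subprocess.run(
        ["terraform", "providers", "mirror", str(PLUGIN_DIR)],
        cwd=module,
        check=True,
    )

# Later, in CI:
#   terraform init --plugin-dir /path/to/tf-plugins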
Starting with the 0.13.2 release of Terraform, one can download plugins from a local web server / HTTP server via the network mirror protocol.
For more details, check this link.
It expects a .terraformrc file in the $HOME path, pointing to the provider path of the plugins as below. If the file is in a different directory, you can provide its path with the TERRAFORM_CONFIG env var.
provider_installation {
  network_mirror {
    url = "https://terraform-plugins.example.net/providers/"
  }
}
Then you define providers in a custom .tf file like the one below.
providers.tf:
terraform {
  required_providers {
    azurerm = {
      source = "registry.terraform.io/example/azurerm"
    }
    openstack = {
      source = "registry.terraform.io/example/openstack"
    }
    null = {
      source = "registry.terraform.io/example/null"
    }
    random = {
      source = "registry.terraform.io/example/random"
    }
    local = {
      source = "registry.terraform.io/example/local"
    }
  }
}
However, you have to upload the plugin file in .zip format, along with index.json and the <version>.json files, for Terraform to discover the version of the plugin to download.
Example index.json containing the version of the plugin:
{
  "versions": {
    "2.3.0": {}
  }
}
Again, 2.3.0.json contains the hashes of the plugin file. In this case it is the <version>.json file:
{
  "archives": {
    "linux_amd64": {
      "hashes": [
        "h1:nFL6uiwsQFLiP8QCr35sPfWe9LpXI3/c7gP9tYnih+k="
      ],
      "url": "terraform-provider-random_2.3.0_linux_amd64.zip"
    }
  }
}
How do you get details of index.json and <version>.json files?
By running terraform providers on the directory containing the .tf files. Note that the machine running this command needs to connect to the public Terraform registry; Terraform will download the information for these files. If you have many different Terraform configurations, it makes sense to automate these steps; otherwise, you can do them manually :)
Upon terraform init, Terraform downloads the plugins from the above web server rather than from the Terraform registry. Make sure you don't use the -plugin-dir argument with terraform init, as it will override all the changes you made.
Updated Dockerfile for @ydaetskcoR's solution, because currently terraform-bundle doesn't work with 0.12.x (the problem was fixed in 0.12.2, but appeared again in 0.12.18):
FROM hashicorp/terraform:0.12.18 as terraform-provider
COPY provider.tf .
RUN terraform init && \
    mv .terraform/plugins/linux_amd64/terraform-provider* /bin/

FROM hashicorp/terraform:0.12.18
# Install terraform pre-installed plugins
COPY --from=terraform-provider /bin/terraform-provider* /bin/
And here is the content of provider.tf
provider "template" { version = "~>2.1.2" }
provider "aws" { version = "~>2.15.0" }
...
This took me a while; I had the same problem. I ended up having to build from source and use the image that this spits out. It's nasty, but it does what I need it to do to work with the Google provider.
FROM golang:alpine AS terraform-bundler-build
ENV TERRAFORM_VERSION=0.12.20
ENV GOOGLE_PROVIDER=3.5.0
RUN apk add --update --no-cache git make tree bash curl
ENV GOPATH=/go
RUN mkdir -p $GOPATH/src/github.com/terraform-providers
RUN cd $GOPATH/src/github.com/terraform-providers && curl -sLO https://github.com/terraform-providers/terraform-provider-google-beta/archive/v$GOOGLE_PROVIDER.tar.gz
RUN cd $GOPATH/src/github.com/terraform-providers && tar xvzf v$GOOGLE_PROVIDER.tar.gz && mv terraform-provider-google-beta-$GOOGLE_PROVIDER terraform-provider-google-beta
RUN cd $GOPATH/src/github.com/terraform-providers/terraform-provider-google-beta && pwd && make build
RUN cd $GOPATH/src/github.com/terraform-providers && curl -sLO https://github.com/terraform-providers/terraform-provider-google/archive/v$GOOGLE_PROVIDER.tar.gz
RUN cd $GOPATH/src/github.com/terraform-providers && tar xvzf v$GOOGLE_PROVIDER.tar.gz && mv terraform-provider-google-$GOOGLE_PROVIDER terraform-provider-google
RUN cd $GOPATH/src/github.com/terraform-providers/terraform-provider-google && pwd && make build
RUN mkdir -p $GOPATH/src/github.com/hashicorp
RUN cd $GOPATH/src/github.com/hashicorp && curl -sLO https://github.com/hashicorp/terraform/archive/v$TERRAFORM_VERSION.tar.gz
RUN cd $GOPATH/src/github.com/hashicorp && tar xvzf v$TERRAFORM_VERSION.tar.gz && mv terraform-$TERRAFORM_VERSION terraform
RUN cd $GOPATH/src/github.com/hashicorp/terraform && go install ./tools/terraform-bundle
ENV TF_DEV=false
ENV TF_RELEASE=true
COPY my-build.sh $GOPATH/src/github.com/hashicorp/terraform/scripts/
RUN cd $GOPATH/src/github.com/hashicorp/terraform && /bin/bash scripts/my-build.sh
ENV HOME=/root
COPY terraformrc $HOME/.terraformrc
RUN mkdir -p $HOME/.terraform.d/plugin-cache
########################################
FROM alpine:3
ENV HOME=/root
RUN ["/bin/sh", "-c", "apk add --update --no-cache bash ca-certificates curl git jq openssh"]
RUN ["bin/sh", "-c", "mkdir -p /src"]
COPY --from=terraform-bundler-build /go/bin/terraform* /bin/
RUN mkdir -p /root/.terraform.d/plugins/linux_amd64
COPY --from=terraform-bundler-build /root/.terraform.d/ $HOME/.terraform.d/
RUN cp /bin/terraform-provider-google $HOME/.terraform.d/plugin-cache/linux_amd64
RUN cp /bin/terraform-provider-google-beta $HOME/.terraform.d/plugin-cache/linux_amd64
COPY terraformrc $HOME/.terraformrc
COPY provider.tf $HOME/
COPY backend.tf $HOME/
# For Testing (This should be echoed or taken care of in the CI pipeline)
#COPY google.json $HOME/.google.json
WORKDIR $HOME
ENTRYPOINT ["/bin/bash"]
.terraformrc:
plugin_cache_dir = "$HOME/.terraform.d/plugins/linux_amd64"
disable_checkpoint = true
provider.tf
# Define which provider plugins are to be included
provider "google" {
credentials = ".google.json"
}
provider "google-beta" {
credentials = ".google.json"
}
my-build.sh
#!/usr/bin/env bash
#
# This script builds the application from source for multiple platforms.
# Get the parent directory of where this script is.
SOURCE="${BASH_SOURCE[0]}"
while [ -h "$SOURCE" ] ; do SOURCE="$(readlink "$SOURCE")"; done
DIR="$( cd -P "$( dirname "$SOURCE" )/.." && pwd )"
# Change into that directory
cd "$DIR"
echo "DIR=$DIR"
# Get the git commit
GIT_COMMIT=$(git rev-parse HEAD)
GIT_DIRTY=$(test -n "`git status --porcelain`" && echo "+CHANGES" || true)
# Determine the arch/os combos we're building for
XC_ARCH=${XC_ARCH:-"amd64 arm"}
XC_OS=${XC_OS:-linux}
XC_EXCLUDE_OSARCH="!darwin/arm !darwin/386"
mkdir -p bin/
# If its dev mode, only build for ourself
if [[ -n "${TF_DEV}" ]]; then
XC_OS=$(go env GOOS)
XC_ARCH=$(go env GOARCH)
# Allow LD_FLAGS to be appended during development compilations
LD_FLAGS="-X main.GitCommit=${GIT_COMMIT}${GIT_DIRTY} $LD_FLAGS"
fi
if ! which gox > /dev/null; then
echo "==> Installing gox..."
go get -u github.com/mitchellh/gox
fi
# Instruct gox to build statically linked binaries
export CGO_ENABLED=0
# In release mode we don't want debug information in the binary
if [[ -n "${TF_RELEASE}" ]]; then
LD_FLAGS="-s -w"
fi
# Ensure all remote modules are downloaded and cached before build so that
# the concurrent builds launched by gox won't race to redundantly download them.
go mod download
# Build!
echo "==> Building..."
gox \
-os="${XC_OS}" \
-arch="${XC_ARCH}" \
-osarch="${XC_EXCLUDE_OSARCH}" \
-ldflags "${LD_FLAGS}" \
-output "pkg/{{.OS}}_{{.Arch}}/${PWD##*/}" \
.
## Move all the compiled things to the $GOPATH/bin
GOPATH=${GOPATH:-$(go env GOPATH)}
case $(uname) in
CYGWIN*)
GOPATH="$(cygpath $GOPATH)"
;;
esac
OLDIFS=$IFS
IFS=: MAIN_GOPATH=($GOPATH)
IFS=$OLDIFS
#
# Create GOPATH/bin if it doesn't exist
if [ ! -d $MAIN_GOPATH/bin ]; then
echo "==> Creating GOPATH/bin directory..."
mkdir -p $MAIN_GOPATH/bin
fi
# Copy our OS/Arch to the bin/ directory
DEV_PLATFORM="./pkg/$(go env GOOS)_$(go env GOARCH)"
if [[ -d "${DEV_PLATFORM}" ]]; then
for F in $(find ${DEV_PLATFORM} -mindepth 1 -maxdepth 1 -type f); do
cp ${F} bin/
cp ${F} ${MAIN_GOPATH}/bin/
ls -alrt ${MAIN_GOPATH}/bin/
echo "MAIN_GOPATH=${MAIN_GOPATH}"
done
fi
bucket.tf
terraform {
  backend "gcs" {
    bucket      = "my-terraform-bucket"
    prefix      = "terraform/state"
    credentials = ".google.json"
  }
  required_version = "v0.12.20"
}
You can use pre-installed plugins either by putting the plugin binaries in the same directory as the Terraform binary or by setting the -plugin-dir flag.
By default, all plugins are downloaded into the .terraform folder. For example, the null provider plugin will be available at the location below:
.terraform\providers\registry.terraform.io\hashicorp\null\3.0.0\windows_amd64
Create a new folder such as "terraform-plugins" inside the Terraform directory and copy all the content, including the registry.terraform.io folder mentioned in the example above, into the created folder.
Now run the terraform init command with the -plugin-dir flag:
terraform init -plugin-dir="/terraform-plugins"
Specify the complete directory path with the -plugin-dir flag.

Subversion post-commit hook to update 'staging' version not working

We have a staging version of our web application (it is basically a subversion working copy that no-one works on) that lives in '/apps/software'. Each developer has their own working copy in '~/apps/software'. I would like to utilise a simple post-commit hook script to update the staging copy every time a developer commits a change to the repository.
Sounds simple, right? Well, I've been banging my head against a brick wall on this for longer than I should have. The hook script (called 'post-commit', located in /svn/software/hooks, permissions=777, user:group=apache:dev) is as follows (ignore the commented-out bits for now):
#!/bin/sh
/usr/bin/svn update /apps/software >> /var/log/svn/software.log
# REPOS="$1"
# REV="$2"
# AUTHOR=`/usr/bin/svnlook author -r "$REV" "$REPOS"`
# LOG=`/usr/bin/svnlook log -r "$REV" "$REPOS"`
# EMAIL="test@example.com"
# echo "Commit log message as follows:-
#
# \"${LOG}\"
#
# The staging version has automatically been updated.
#
# See http://trac/projects/software/changeset/${REV} for more details." | /bin/mail -s "SVN : software : revision ${REV} committed by ${AUTHOR}" ${EMAIL}
That's it. The log file has the same permissions and user:group as the post-commit script, and I have even given the staging copy the same user:group and permissions. Apache itself (we're using the Apache subversion extension) is running under apache:dev as well. I know the hook is being executed, because the commented-out stuff above that sends an email works fine - it's just the update command that isn't.
I can also execute the post-commit hook script without environment variables using:
$ env - /svn/software/hooks/post-commit /svn/software <changeset>
and it runs fine, performing the 'svn update' with no problems. I have even tried removing the '>>' redirection to the log file, but it doesn't make a difference.
Any help on this would be most appreciated...
You're only sending standard output to the log here, not error output:
/usr/bin/svn update /apps/software >> /var/log/svn/software.log
Do this instead to see what is going wrong:
/usr/bin/svn update /apps/software >> /var/log/svn/software.log 2>&1
