I'm running Jenkins in a local trusted environment where I'm trying to run this pipeline. This Jenkinsfile is checked into git.
#!groovy
node('master') {
    def ver = pomVersion()
    echo "Building version $ver"
}

def pomVersion() {
    def pomtext = readFile('pom.xml')
    def pomx = new XmlParser().parseText(pomtext)
    pomx.version.text()
}
The first few times I ran the build, I needed to manually approve changes (Jenkins -> Manage Jenkins -> In-process Script Approval). Now I get this exception and there is nothing left to approve. All I want to do is parse an XML file. Can these security checks be bypassed completely for pipeline builds?
org.jenkinsci.plugins.scriptsecurity.sandbox.RejectedAccessException: unclassified field groovy.util.Node version
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.unclassifiedField(SandboxInterceptor.java:367)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onGetProperty(SandboxInterceptor.java:363)
at org.kohsuke.groovy.sandbox.impl.Checker$4.call(Checker.java:241)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedGetProperty(Checker.java:238)
at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.getProperty(SandboxInvoker.java:23)
at com.cloudbees.groovy.cps.impl.PropertyAccessBlock.rawGet(PropertyAccessBlock.java:17)
at WorkflowScript.pomVersion(WorkflowScript:10)
at WorkflowScript.run(WorkflowScript:3)
at ___cps.transform___(Native Method)
at com.cloudbees.groovy.cps.impl.PropertyishBlock$ContinuationImpl.get(PropertyishBlock.java:62)
at com.cloudbees.groovy.cps.LValueBlock$GetAdapter.receive(LValueBlock.java:30)
at com.cloudbees.groovy.cps.impl.PropertyishBlock$ContinuationImpl.fixName(PropertyishBlock.java:54)
at sun.reflect.GeneratedMethodAccessor479.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
at com.cloudbees.groovy.cps.impl.ConstantBlock.eval(ConstantBlock.java:21)
at com.cloudbees.groovy.cps.Next.step(Next.java:58)
at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:154)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:18)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:32)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:29)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.GroovySandbox.runInSandbox(GroovySandbox.java:108)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:29)
at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:164)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:276)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$000(CpsThreadGroup.java:78)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:185)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:183)
at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:47)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:112)
at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Finished: FAILURE
Currently it is not possible. There is an open ticket for this problem: https://issues.jenkins-ci.org/browse/JENKINS-28178
You can work around the problem with the following steps:
Install the Permissive Script Security plugin (version 0.3 or newer).
If you are using a pipeline script, make sure Use Groovy Sandbox is checked. This can be done in the configuration of the job.
Add the permissive-script-security.enabled command line parameter to the Jenkins master with one of these values:
true if you want to disable the need to approve scripts, but have potentially dangerous signatures logged:
-Dpermissive-script-security.enabled=true
no_security if you want to disable the need to approve scripts and also disable logging of the potentially dangerous signatures:
-Dpermissive-script-security.enabled=no_security
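For reference, here is one way to pass that system property when starting Jenkins (a sketch; file locations and variable names depend on how Jenkins is installed):
# Standalone war: system properties must come before -jar
java -Dpermissive-script-security.enabled=true -jar jenkins.war

# Packaged install: append the flag to the Java options in /etc/default/jenkins
# (Debian/Ubuntu) or /etc/sysconfig/jenkins (RHEL/CentOS), for example:
JAVA_ARGS="$JAVA_ARGS -Dpermissive-script-security.enabled=true"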
Try the following Jenkins plugin: https://wiki.jenkins-ci.org/display/JENKINS/Permissive+Script+Security+Plugin
It disables the sandbox. Works for me.
As answered above, Script Security has been tightened in newer Jenkins versions. However, for the specific use case of reading a version from Maven's pom.xml, one could use readMavenPom from the Pipeline Utility Steps Plugin:
pom = readMavenPom file: 'pom.xml'
pom.version
There are some other solutions in this Stack Overflow question as well.
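If you would rather avoid plugins altogether, another option is to shell out to Maven itself and capture the output of help:evaluate (a sketch; it assumes Maven is on the agent's PATH and a maven-help-plugin recent enough to support -DforceStdout):
# Prints only the project version to stdout
mvn -q -DforceStdout help:evaluate -Dexpression=project.version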
I'd like to offer up a hack that I ended up implementing after scouring the interwebs for a solution and trying some of the solutions proposed here.
A little background on my setup:
Jenkins master (no slaves)
Dockerized Jenkins instance with a persistent volume for the jenkins_home directory
Jenkins jobs are delivered via the Jenkins Job DSL plugin with jobs written in .groovy
My scenario:
Anytime someone modified an existing Jenkins pipeline (via groovy) and introduced new functionality that used some custom groovy, Jenkins would fail the job and flag the code snippet for approval. Approval was manual and tedious.
I have tried the solutions posted above and they did not work for me. So my hack was to create a Jenkins job that runs a shell step that takes the list of signatures that need to be approved and then adds them to the /var/jenkins_home/scriptApproval.xml file.
Some gotchas:
The offending job still has to fail once for me to find/copy the offending code/signature
To get the change to take effect, a "reload from disk" is not enough for the file to get picked up. You have to restart the Jenkins process (in our case, delete the container and bring it back up). This was not a big pain for me since Jenkins is restarted every morning.
In our world, we trust the devs who modify our Jenkins jobs so they are free to add signatures that need approval as needed. Plus the job is in source control so we can see who added what.
My Jenkins container also has xmlstarlet baked in, so my shell job uses that to update the file.
Example of my Jenkins job's shell command:
#!/bin/bash
echo ""
#default location of the Jenkins approval file
APPROVE_FILE=/var/jenkins_home/scriptApproval.xml
#creating an array of the signatures that need approved
SIGS=(
'method hudson.model.ItemGroup getItem java.lang.String'
'staticMethod jenkins.model.Jenkins getInstance'
)
#stepping through the array
for i in "${SIGS[#]}"; do
echo "Adding :"
echo "$i"
echo "to $APPROVE_FILE"
echo ""
#checking the xml file to see if the signature has already been added, then deleting it. this is a trick to keep xmlstarlet from creating duplicates
xmlstarlet -q ed --inplace -d "/scriptApproval/approvedSignatures/string[text()=\"$i\"]" $APPROVE_FILE
#adding the entry
xmlstarlet -q ed --inplace -s /scriptApproval/approvedSignatures -t elem -n string -v "$i" $APPROVE_FILE
echo ""
done
echo "##### Completed updating "$APPROVE_FILE", displaying file: #####"
cat "$APPROVE_FILE"
Is it possible to get GitLab to report what it's actually running in the output log? For example, with the following .gitlab-ci.yml:
variables:
  MAVEN_CLI_OPTS: >-
    -s $CI_PROJECT_DIR/.m2/settings.xml
    --batch-mode
    --errors
    --fail-at-end
    --show-version
    -DinstallAtEnd=true
    -DdeployAtEnd=true

compile-test-package:
  stage: package
  script:
    - mvn ${MAVEN_CLI_OPTS} package
The run log then shows this
...
mvn ${MAVEN_CLI_OPTS} package
...
But I really would like it to give the specific details like
...
mvn -s path/to/my/project/.m2/settings.xml --batch-mode --errors --fail-at-end --show-version -DinstallAtEnd=true -DdeployAtEnd=true package
...
The best I've learned on this is that GitLab doesn't have a way to report the full details of what it's executing. As Michael Delgado mentioned, adding an echo to output the command will give you a way to do this. However, use this with caution because protected values could be exposed.
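For example, echoing the command before running it makes the expanded value show up in the job log (a sketch based on the job from the question; be careful with masked or protected variables):
compile-test-package:
  stage: package
  script:
    # The shell expands the variable, so the log shows the full command line
    - echo mvn ${MAVEN_CLI_OPTS} package
    - mvn ${MAVEN_CLI_OPTS} package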
I'm still struggling with variable replacement, but at least now I can see what GitLab is replacing and what it isn't.
I am running a Node.js app on Google App Engine, using the following command to deploy my code:
gcloud app deploy --stop-previous-version
My desired behavior is for all instances running previous versions to be terminated, but they always seem to stick around. Is there something I'm missing?
I realize they are not receiving traffic, but I am still paying for them and they cause some background telemetry noise. Is there a better way of running this command?
The output of gcloud app instances list shows that I have two different versions running.
We accidentally blew through our free Google App Engine credit in less than 30 days because of an errant flexible instance that wasn't cleared by subsequent deployments. When we pinpointed it as the cause it had scaled up to four simultaneous instances that were basically idling away.
tl;dr: Use the --version flag when deploying to specify a version name. An existing instance with the same version will be replaced the next time you deploy.
That led me down the rabbit hole that is --stop-previous-version. Here's what I've found out so far:
--stop-previous-version doesn't seem to be supported anymore. It's mentioned under Flags on the gcloud app deploy reference page, but if you look at the top of the page where all the flags are listed, it's nowhere to be found.
I tried deploying with that flag set to see what would happen but it seemingly had no effect. A new version was still created, and I still had to go in and manually delete the old instance.
There's an open Github issue on the gcloud-maven-plugin repo that specifically calls this out as an issue with that plugin but the issue has been seemingly ignored.
At this point our best bet is to add --version=staging (or whatever) to gcloud app deploy. The reference docs for that flag seem to indicate that it'll replace an existing instance that shares that "version":
--version=VERSION, -v VERSION
The version of the app that will be created or replaced by this deployment. If you do not specify a version, one will be generated for you.
(emphasis mine)
Additionally, Google's own reference documentation on app.yaml (the link's for the Python docs but it's still relevant) specifically calls out the --version flag as the "preferred" way to specify a version when deploying:
The recommended approach is to remove the version element from your app.yaml file and instead, use a command-line flag to specify your version ID
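In practice that would look something like this (the version name is only an example):
gcloud app deploy --version=staging --promote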
As far as I can tell, for Standard Environment with automatic scaling at least, it is normal for old versions to remain "serving", though they should hopefully have zero instances (even if your scaling configuration specifies a nonzero minimum). At least that's what I've seen. I think (I hope) that those old "serving" instances won't result in any charges, since billing is per instance.
I know most of the above answers are for Flexible Environment, but I thought I'd include this here for people who are wondering.
(And it would be great if someone from Google could confirm.)
I had the same problem as the OP. Using the flex environment (some of this also applies to the standard environment) with Docker (runtime: custom in app.yaml), I've finally solved this! I tried a lot of things and I'm not sure which one fixed it (or whether it was a combination), so I'll list the things I did here, with the most likely solutions listed first.
SOLUTION 1) Ensure that cloud storage deletes old versions
What does cloud storage have to do with anything? (I hear you ask)
Well there's a little tooltip (Google Cloud Platform Web UI (GCP) > App Engine > Versions > Size) that when you hover over it says:
(Google App Engine) Flexible environment code is stored and billed from Google Cloud Storage ... yada yada yada
So based on this info and this answer I visited GCP > Cloud Storage > Browser and found my storage bucket AND a load of other storage buckets I didn't know existed. It turns out that some of the buckets store cached cloud functions code, some store cached docker images and some store other cached code/stuff (you can tell which is which by browsing the buckets).
So I added a deletion policy to all the buckets (except the cloud functions bucket) as follows:
Go to GCP > Cloud Storage > Browser and click the link (for the relevant bucket) in the Lifecycle Rules column > Click ADD A RULE > THEN:
For SELECT ACTION choose "Delete Object" and click continue
For SELECT OBJECT choose "Number of newer versions" and enter 1 in the input
Click CREATE
This will return you to the table view and you should now see the rule in the lifecycle rules column.
REPEAT this process for all relevant buckets (the relevant buckets were described earlier).
THEN delete the contents of the relevant buckets. WARNING: Some buckets warn you NOT to delete the bucket itself, only the contents!
Now re-deploy and your latest version should now get deployed and hopefully you will never have this problem again!
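If you prefer the command line, the same lifecycle rule can be applied with gsutil (a sketch; the bucket name is a placeholder, and the numNewerVersions condition only applies to buckets with object versioning enabled):
# lifecycle.json: delete an object as soon as one newer version of it exists
cat > lifecycle.json <<'EOF'
{
  "rule": [
    { "action": { "type": "Delete" }, "condition": { "numNewerVersions": 1 } }
  ]
}
EOF
gsutil lifecycle set lifecycle.json gs://my-flex-staging-bucket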
SOLUTION 2) Use deploy flags
I added these flags
gcloud app deploy --quiet --promote --stop-previous-version
This probably doesn't help since these flags seem to be the default but worth adding just in case.
Note that for the standard environment only (I heard on the grapevine) you can also use the --no-cache flag, which might help; with flex, this flag caused the deployment to fail when I tried it.
SOLUTION 3)
This probably does not help at all, but I added:
COPY app.yaml .
to the Dockerfile
TIP 1)
This is probably more of a helpful / useful debug approach than a fix.
Visit GCP > App Engine > Versions
This shows all versions of your app (1 per deployment) and it also shows which version each instance is running (instances are configured in app.yaml).
Make sure all instances are running the latest version. This should happen by default. Probably worth deleting old versions.
You can determine your version from the gcloud app deploy logs (at the start of the logs) but it seems that the versions are listed by order of deployment anyway (most recent at top).
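Listing and cleaning up versions can also be done from the command line (the version IDs below are just examples):
# Show every version of every service and how many instances each is running
gcloud app versions list

# Delete versions you no longer need
gcloud app versions delete 20190131t123456 20190130t101010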
TIP 2)
Visit GCP > App Engine > Instances
SSH into an instance. This is just a matter of clicking a few buttons in the GCP console. Once you have SSH'd in, run:
docker exec -it gaeapp /bin/bash
Which will get you into the docker container running your code. Now you can browse around to make sure it has your latest code.
Well I think my answer is long enough now. If this helps, don't thank me, J-ES-US is the one you should thank ;) I belong to Him ^^
Google may have updated the documentation cited in @IAmKale's answer:
Note that if the version is running on an instance of an auto-scaled service, using --stop-previous-version will not work and the previous version will continue to run because auto-scaled service instances are always running.
Seems like that flag only works with manually scaled services.
This is a supplementary and optional answer in addition to my other main answer.
I am now, in addition to my other answer, auto incrementing version manually on deploy using a script.
My script contents are below.
Basically, the script auto increments version every time you deploy. I am using node.js so the script uses npm version to bump the version but this line could easily be tweaked to whatever language you use.
The script requires a clean git working directory for deployment.
The script assumes that when the version is bumped, this will result in file changes (e.g. changes to package.json version) that need pushing.
The script essentially tries to find your SSH key and if it finds it then it starts an SSH agent and uses your SSH key to git commit and git push the file changes. Else it just does a git commit without a push.
It then does a deploy using the --version flag ... --version="${deployVer}"
Thought this might help someone, especially since the top answer talks a lot about using the --version flag on a deploy.
#!/usr/bin/env bash
projectName="vehicle-damage-inspector-app-engine"

# Find SSH key
sshFile1=~/.ssh/id_ed25519
sshFile2=~/Desktop/.ssh/id_ed25519
sshFile3=~/.ssh/id_rsa
sshFile4=~/Desktop/.ssh/id_rsa
if [ -f "${sshFile1}" ]; then
  sshFile="${sshFile1}"
elif [ -f "${sshFile2}" ]; then
  sshFile="${sshFile2}"
elif [ -f "${sshFile3}" ]; then
  sshFile="${sshFile3}"
elif [ -f "${sshFile4}" ]; then
  sshFile="${sshFile4}"
fi

# If SSH key found then fire up SSH agent
if [ -n "${sshFile}" ]; then
  pub=$(cat "${sshFile}.pub")
  for i in ${pub}; do email="${i}"; done
  name="Auto Deploy ${projectName}"
  git config --global user.email "${email}"
  git config --global user.name "${name}"
  echo "Git SSH key = ${sshFile}"
  echo "Git email = ${email}"
  echo "Git name = ${name}"
  eval "$(ssh-agent -s)"
  ssh-add "${sshFile}" &>/dev/null
  sshKeyAdded=true
fi

# Bump version and git commit (and git push if SSH key added) and deploy
if [ -z "$(git status --porcelain)" ]; then
  echo "Working directory clean"
  echo "Bumping patch version"
  ver=$(npm version patch --no-git-tag-version)
  git add -A
  git commit -m "${projectName} version ${ver}"
  if [ -n "${sshKeyAdded}" ]; then
    echo ">>>>> Bumped patch version to ${ver} with git commit and git push"
    git push
  else
    echo ">>>>> Bumped patch version to ${ver} with git commit only, please git push manually"
  fi
  deployVer="${ver//"."/"-"}"
  gcloud app deploy --quiet --promote --stop-previous-version --version="${deployVer}"
else
  echo "Working directory unclean, please commit changes"
fi
For Node.js users: if you call the script deploy.sh, you should add
"deploy": "sh deploy.sh"
to your package.json scripts and deploy with npm run deploy.
We are working on integrating GitLab (Enterprise Edition) into our tooling, but one thing still on our wishlist is to create a merge request in GitLab via the command line (or a batch file or similar, for that matter). We would like to integrate this into our tooling. Searching here and on the web has led me to believe that this is not possible with native GitLab, but that we need additional tooling for that.
Am I correct? And what kind of tooling would I want to use for this?
As of GitLab 11.10, if you're using git 2.10 or newer, you can automatically create a merge request from the command line like this:
git push -o merge_request.create
More information can be found in the docs.
It's not natively supported, but it's not hard to throw together. The GitLab API has support for opening MRs: https://github.com/gitlabhq/gitlabhq/blob/master/doc/api/merge_requests.md#create-mr
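For example, a minimal sketch with curl (host, project ID, token, and branch names are placeholders):
curl --request POST \
  --header "PRIVATE-TOKEN: <your_access_token>" \
  --data "source_branch=my-feature&target_branch=master&title=My merge request" \
  "https://gitlab.example.com/api/v4/projects/123/merge_requests"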
You can use following utility.
Disclosure : I developed it.
https://github.com/vishwanatharondekar/gitlab-cli
You can create merge request using this.
Some of the features it has are.
The base branch is optional. If it is not provided, the current branch is used as the base branch.
The target branch is optional. If it is not provided, the default branch of the repo in GitLab will be used.
The created merge request page is opened automatically after successful creation.
If a title is not supplied with the -m option, an in-place editor is opened; the first line is taken as the title.
In the opened editor, the third line onwards is taken as the description.
A comma-separated list of labels can be provided with its option.
Supports CI.
Repository-specific configs can be given.
A squash option is available.
A remove-source-branch option is available.
If you push your branch before running this command (git push -o merge_request.create), it will not work. Git will respond with Everything up-to-date and the merge request will not be created (GitLab 12.3).
When I removed my branch from the server first (do not remove your local branch!), it then worked for me in this form:
git push --set-upstream origin your-branch-name -o merge_request.create
In addition to @AhmadSherif's answer, you can use merge_request.target=<branch_name> to declare the target branch.
sample usage:
git push -o merge_request.create -o merge_request.target=develop origin feature
To keep things simple: according to the GitLab documentation, you can define an alias for this command for simpler usage.
git config --global alias.mwps "push -o merge_request.create -o merge_request.target=master -o merge_request.merge_when_pipeline_succeeds"
I made a shell function which opens up the GitLab MR web page with desired parameters.
Based on the directory with the git repo you are currently in, it:
Finds the correct URL to your repo.
Sets the source branch to the branch you're currently on.
As an optional first argument you can provide the target branch. Otherwise, GitLab defaults to your default branch, which is typically master.
gmr() {
  # A quick way to open a GitLab merge request URL for the current git branch
  # you're on. The optional first argument is the target branch.
  repo_path=$(git remote get-url origin --push | sed 's/^.*://g' | sed 's/.git$//g')
  current_branch=$(git rev-parse --abbrev-ref HEAD)
  if [[ -n $1 ]]; then
    target_branch="&merge_request[target_branch]=$1"
  else
    target_branch=""
  fi
  xdg-open "https://gitlab.com/$repo_path/merge_requests/new?merge_request[source_branch]=$current_branch$target_branch"
}
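Usage is then simply (the target branch argument is optional):
gmr            # target branch defaults to the repo's default branch
gmr develop    # target branch set to develop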
You can set more default values in the URL, like removing the source branch after merge:
&merge_request[force_remove_source_branch]=true
Or assignee to someone:
&merge_request[assignee_ids][]=12345
Or add a reviewer:
&merge_request[reviewer_ids][]=54321
You can easily find the possible query string parameters by searching the source of the GitLab MR webpage for merge_request[.
As of now, GitLab sadly does not support this, however I recently saw it on their issue tracker. It appears one can expect a 'native tool' in the upcoming months.
GitLab tweeted out about numa08/git-gitlab some time ago, so I guess this would be worth a try.
In our build script we just pop up the browser with the correct URL, let the developer write their comments in the form, and hit save to create the merge request. You get this URL with the correct parameters by creating a merge request manually and copying the URL of the form.
#!/bin/bash
set -e
set -o pipefail
BRANCH=${2}
....
git push -f origin-gitlab $BRANCH
open "https://gitlab.com/**username**/**project-name**/merge_requests/new?merge_request%5Bsource_branch%5D=$BRANCH&merge_request%5Bsource_project_id%5D=99999&merge_request%5Btarget_branch%5D=master&merge_request%5Btarget_project_id%5D=99999"
You can write a local git alias to open a Gitlab Merge Request creation page in the default browser for the currently checked-out branch.
[alias]
lab = "!start https://gitlab.com/path/to/repo/-/merge_requests/new?merge_request%5Bsource_branch%5D=\"$(git rev-parse --abbrev-ref HEAD)\""
(this is a very simple alias for windows; I guess there are equivalent replacements for "start" on linux and fancier aliases that work with github and bitbucket too)
As well as being able to immediately see and modify the details of the MR, the advantage of this over using the merge_request.create push option is that you don't need your local branch to be ahead of the remote for it to work.
You might additionally want to store the alias in the repo itself.
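For example, on Linux an equivalent alias could use xdg-open instead of start (a sketch; the repo path is a placeholder):
[alias]
    lab = "!xdg-open \"https://gitlab.com/path/to/repo/-/merge_requests/new?merge_request%5Bsource_branch%5D=$(git rev-parse --abbrev-ref HEAD)\""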
I use https://github.com/mdsb100/cli-gitlab
I am creating the MR from inside a GitLab CI Docker container based on Alpine Linux, so I include the install command in before_script (it could also be baked into your image). All commands in the following .gitlab-ci.yml file are also relevant for normal command line usage (as long as you have the cli-gitlab npm package installed).
variables:
  TARGET_BRANCH: 'live'
  GITLAB_URL: 'https://your.gitlab.net'
  GITLAB_TOKEN: $PRIVATE_TOKEN # created in user profile & added in project settings

before_script:
  - apk update && apk add nodejs && npm install cli-gitlab -g

script:
  - gitlab url $GITLAB_URL && gitlab token $GITLAB_TOKEN
  - 'echo "gitlab addMergeRequest $CI_PROJECT_ID $CI_COMMIT_REF_NAME \"$TARGET_BRANCH\" 13 `date +%Y%m%d%H%M%S`"'
  - 'gitlab addMergeRequest $CI_PROJECT_ID $CI_COMMIT_REF_NAME "$TARGET_BRANCH" 13 `date +%Y%m%d%H%M%S` 2> ./mr.json'
  - cat ./mr.json
This will echo true if the merge request already exists, and echo the json result of the new MR if it succeeds to create one (also saving to a mr.json file).
Since GitLab 15.7 (Dec. 2022), the GitLab CLI glab is officially integrated to GitLab.
Introducing the GitLab CLI
The command line is one of the most important tools in a software engineer’s toolkit and the majority of their process and work revolve around tools available there. They customize their CLI with styles and extend it through applications to ensure maximum efficiency while performing tasks. The CLI is the backbone of scripts and workflows developers depend on to complete their work.
To support more developers where they’re already working, we’ve adopted the open source project glab, which will form the foundation of GitLab’s native CLI experience.
The GitLab CLI brings GitLab together with Git and your code, with no application or tab switching required.
You can read about our adoption of glab, our partnership with 1Password, and how to contribute to the project in our blog post.
A special thank you to Clement Sam for creating glab and trusting us with its future.
That means you can create an MR with glab mr create:
glab mr create -a username -t "fix annoying bug"
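If you haven't used glab before, you first need to point it at your GitLab instance and authenticate, e.g.:
glab auth login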
Description
We are currently in a project based on MVC4/Umbraco, using Azure Websites to host it.
We are using SCM_BUILD_ARGS to change between different build setups depending on which site in Azure we deploy to (Test and Prod).
This is done by defining an app setting in the UI:
SCM_BUILD_ARGS = /p:Environment=Test
Earlier we used Bitbucket Integration to deploy and here this setting worked like a champ.
We have now switched to using Git Deployment, pushing the changes from our build server when tests have passed.
But when we do this, we get a lovely error.
"MSB1008: Only one project can be specified."
Trying to redeploy the same failed deployment from the UI on Azure works though.
After some trial and error I ended up going into the deploy.cmd and outputting the %SCM_BUILD_ARGS% value in the script.
It looks like the / gets dropped from SCM_BUILD_ARGS but only when using Git deploy, not Bitbucket Integration or redeploy from UI.
Workaround
As a workaround I have for now added a / to the deploy.cmd script in front of %SCM_BUILD_ARGS%, but this of course breaks redeploy, since we then end up with //p:Environment=Test in the MSBuild command once the value of %SCM_BUILD_ARGS% has been inserted.
:: 2. Build to the temporary path
IF /I "%IN_PLACE_DEPLOYMENT%" NEQ "1" (
:: Added / to SCM_BUILD_ARGS
%MSBUILD_PATH% "%DEPLOYMENT_SOURCE%\www\www.csproj" [....] /%SCM_BUILD_ARGS%
) ELSE (
%MSBUILD_PATH% "%DEPLOYMENT_SOURCE%\www\www.csproj" [....] /%SCM_BUILD_ARGS%
)
Question
Anyone know of a better solution for this problem or is it possibly a bug in Kudu?
We would love to have both deploy from Git and Redeploy working.
Could you try changing from "/" to "-"? For instance, change the app setting from /p:Environment=Test to -p:Environment=Test and see if it helps.
-p:Environment=Test did not work for me; the setting that worked for me at the time of this writing (September 2015) was
-p:Configuration=Test
There is clearly a Kudu bug in there, and you should open an issue on https://github.com/projectkudu/kudu. But for now, I can give you a workaround.
Instead of using an App Setting, include a .deployment file at the root of your repo, containing:
[config]
SCM_BUILD_ARGS = /p:Environment=Test
I think this will work in all cases. I suspect the bug has to do with bash messing up the environment in post-receive hook scenarios, which only apply to a direct git push, not to the Bitbucket and Redeploy scenarios.
UPDATE: In fact, it's easy to see such weird bash behavior. Try this:
Open cmd.exe
Run: set foo=/abc to set a variable
Run bash
From bash, run cmd to launch a new cmd on top of bash (so cmd -> bash -> cmd)
Run set foo to get the value of foo
Result:
FOO=C:/Program Files (x86)/git/abc
So the value gets completely messed up. The key also gets upper-cased, though that's mostly harmless. Strange stuff...
We have a staging version of our web application (it is basically a subversion working copy that no-one works on) that lives in '/apps/software'. Each developer has their own working copy in '~/apps/software'. I would like to utilise a simple post-commit hook script to update the staging copy every time a developer commits a change to the repository.
Sounds simple right? Well I've been banging my head against a brick wall on this for longer than I should. The hook script (called 'post-commit', located in /svn/software/hooks, permissions=777, user:group=apache:dev) is as follows (ignore the commented out bits for now):
#!/bin/sh
/usr/bin/svn update /apps/software >> /var/log/svn/software.log
# REPOS="$1"
# REV="$2"
# AUTHOR=`/usr/bin/svnlook author -r "$REV" "$REPOS"`
# LOG=`/usr/bin/svnlook log -r "$REV" "$REPOS"`
# EMAIL="test#example.com"
# echo "Commit log message as follows:-
#
# \"${LOG}\"
#
# The staging version has automatically been updated.
#
# See http://trac/projects/software/changeset/${REV} for more details." | /bin/mail -s "SVN : software : revision ${REV} committed by ${AUTHOR}" ${EMAIL}
That's it. The log file has the same permissions and user:group as the post-commit script, and I have even given the staging copy the same user:group and permissions. Apache itself (we're using the Apache subversion extension) is running under apache:dev as well. I know the hook is being executed, because the commented-out email-sending stuff above works fine when enabled; it's just the update command that isn't working.
I can also execute the post-commit hook script without environment variables using:
$ env - /svn/software/hooks/post-commit /svn/software <changeset>
and it runs fine, performing the svn update with no problems. I have even tried removing the '>>' to the log file, but it doesn't make a difference.
Any help on this would be most appreciated...
You're only sending standard output to the log here, not error output:
/usr/bin/svn update /apps/software >> /var/log/svn/software.log
Do this instead to see what is going wrong:
/usr/bin/svn update /apps/software >> /var/log/svn/software.log 2>&1