Azure Websites Git Deployment dropping "/" in SCM_BUILD_ARGS

Description
We are working on a project based on MVC4/Umbraco, hosted on Azure Websites.
We are using SCM_BUILD_ARGS to change between different build setups depending on which site in Azure we deploy to (Test and Prod).
This is done by defining an app setting in the UI:
SCM_BUILD_ARGS = /p:Environment=Test
Earlier we used the Bitbucket integration to deploy, and there this setting worked like a champ.
We have now switched to using Git Deployment, pushing the changes from our build server when tests have passed.
But when we do this, we get a lovely error.
"MSB1008: Only one project can be specified."
Trying to redeploy the same failed deployment from the UI on Azure works though.
After some trial and error I ended up going into deploy.cmd and outputting the %SCM_BUILD_ARGS% value in the script.
It looks like the / gets dropped from SCM_BUILD_ARGS but only when using Git deploy, not Bitbucket Integration or redeploy from UI.
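For reference, the diagnostic described above amounted to a single extra line in deploy.cmd just before the MSBuild call (a minimal sketch, not the full script):
:: Temporary diagnostic: print the value Kudu passes in
echo SCM_BUILD_ARGS = %SCM_BUILD_ARGS%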
Workaround
As a workaround I have, for now, added a / in the deploy.cmd script in front of %SCM_BUILD_ARGS%, but this of course breaks redeploy, since the MSBuild command then contains //p:Environment=Test once the value of %SCM_BUILD_ARGS% has been inserted.
:: 2. Build to the temporary path
IF /I "%IN_PLACE_DEPLOYMENT%" NEQ "1" (
:: Added / to SCM_BUILD_ARGS
%MSBUILD_PATH% "%DEPLOYMENT_SOURCE%\www\www.csproj" [....] /%SCM_BUILD_ARGS%
) ELSE (
%MSBUILD_PATH% "%DEPLOYMENT_SOURCE%\www\www.csproj" [....] /%SCM_BUILD_ARGS%
)
Question
Does anyone know of a better solution for this problem, or is it possibly a bug in Kudu?
We would love to have both deploy from Git and Redeploy working.

Could you try changing "/" to "-"? For instance, change the app setting from /p:Environment=Test to -p:Environment=Test and see if it helps.

-p:Environment=Test did not work for me; the setting that worked for me at the time of writing (September 2015) was
-p:Configuration=Test

There is clearly a Kudu bug in there, and you should open an issue on https://github.com/projectkudu/kudu. But for now, I can give you a workaround.
Instead of using an App Setting, include a .deployment file at the root of your repo, containing:
[config]
SCM_BUILD_ARGS = /p:Environment=Test
I think this will work in all cases. I suspect the bug has to do with bash messing up the environment in post-receive hook scenarios, which only apply to direct git push but not to Bitbucket and Redeploy scenarios.
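If you are using the App Setting, you can check the value the Kudu environment actually carries from the Kudu debug console (https://<yoursite>.scm.azurewebsites.net/DebugConsole, site name is a placeholder), which runs cmd.exe:
set SCM_BUILD_ARGS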
UPDATE: In fact, it's easy to see such weird bash behavior. Try this:
Open cmd.exe
Run: set foo=/abc to set a variable
Run bash
From bash, run cmd to launch a new cmd on top of bash (so cmd -> bash -> cmd)
Run set foo to get the value of foo
Result:
FOO=C:/Program Files (x86)/git/abc
So the value gets completely messed up. The key also gets upper-cased, though that's mostly harmless. Strange stuff...

Related

GitLab CI variables return an empty string?

Two days ago, one of my projects' builds started failing on GitLab CI. The main error was E_MISSING_APP_KEY, and when I checked other variables just by echoing $HOST and $PORT from my .gitlab-ci.yml config, like this
tests:
  script:
    - echo "${HOST} ${PORT}"
    - node -e "console.log(process.env.HOST, process.env.PORT)"
    - node_modules/.bin/nyc node ace test -t 0
I got nothing.
The build failed because it couldn't read the environment variables I set in its CI settings.
Is anyone experiencing the same issue, and how can it be solved?
Update:
I tried creating a new project containing only a .gitlab-ci.yml file, and it seems to work just fine.
But why on earth is it still failing on my main project?
For anyone else having a similar problem:
check your variable; if it is protected, your branch has to be protected as well, or remove the protected option on the variable.
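A quick way to see which of these variables are actually visible inside a job is a throwaway debug job in .gitlab-ci.yml; the job name is just an example, and only variable names (not values) are printed:
debug_variables:
  script:
    # list only the variable names that are present, without printing their values
    - env | cut -d= -f1 | sort | grep -E '^(HOST|PORT|APP_KEY)$' || echo "HOST, PORT and APP_KEY are not visible in this job"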
The issue was solved by deleting all of the variables I had and setting them back in the CI settings. The build pipeline now runs without any errors (except the actual tests still fail, lol).
Honestly, I'm still wondering why this happened, and hopefully no one else will experience the same kind of issue.

Azure webapp deployment fails after removing Composer SiteExtension

Until now I had the Composer site extension installed on an Azure PHP web app.
I need a custom deployment that can also run grunt tasks, so I created .deployment and deploy.sh files in the project root. But deploy.sh is not being picked up.
.deployment file contents:
[config]
command = bash deploy.sh
Looking at the deployment logs, I find this
2017-05-04T06:21:03.9301086Z,Updating submodules.,8bc3029f-d77b-4c1e-860f-a3d439d7a354,0
2017-05-04T06:21:03.9926050Z,Preparing deployment for commit id 'e2b45fb52b'.,61c286b1-5c00-4c11-ae14-54e0711d6857,0
2017-05-04T06:21:04.2632947Z,Running custom deployment command...,e71c397e-bc63-4357-abc4-acd49bc2041d,0
2017-05-04T06:21:04.3101663Z,Running deployment command...,24db1c4f-8a51-463b-8c4a-ee040bc5dfd8,0
2017-05-04T06:21:04.3101663Z,Command: D:\home\SiteExtensions\ComposerExtension\Hooks\deploy.cmd,,0
2017-05-04T06:21:04.4039215Z,The system cannot find the path specified.,,1
2017-05-04T06:21:04.4195462Z,The system cannot find the path specified.\r\nD:\Program Files (x86)\SiteExtensions\Kudu\62.60430.2807\bin\Scripts\starter.cmd D:\home\SiteExtensions\ComposerExtension\Hooks\deploy.cmd,,2
It seems that somewhere a trigger for the Composer site extension still remains and is being invoked during deployment.
How can I completely remove the Composer site extension and use my custom deployment script deploy.sh? Thanks in advance.
Found the problem. After uninstalling the Composer site extension, this environment variable was still present: APPSETTING_COMMAND = D:\home\SiteExtensions\ComposerExtension\Hooks\deploy.cmd. I deleted the environment variable using the Kudu console and then the deployment succeeded.
After removing the Composer extension, APPSETTING_COMMAND remains as an environment variable.
Use the Kudu PowerShell command Remove-Item Env:\APPSETTING_COMMAND to remove the variable online.
Alternatively, restarting the App Service via the overview tab will refresh the environment variables, though this could be a little invasive.
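For completeness, a quick way to see what is left behind before removing anything, from the Kudu PowerShell console (a sketch):
# List any APPSETTING_* variables the deployment environment still sees
Get-ChildItem Env: | Where-Object { $_.Name -like 'APPSETTING_*' }
# Remove the stale one left behind by the Composer extension
Remove-Item Env:\APPSETTING_COMMAND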

gcloud app deploy does not remove previous versions

I am running a Node.js app on Google App Engine, using the following command to deploy my code:
gcloud app deploy --stop-previous-version
My desired behavior is for all instances running previous versions to be terminated, but they always seem to stick around. Is there something I'm missing?
I realize they are not receiving traffic, but I am still paying for them and they cause some background telemetry noise. Is there a better way of running this command?
Example output of gcloud app instances list (screenshot): as you can see, I have two different versions running.
We accidentally blew through our free Google App Engine credit in less than 30 days because of an errant flexible instance that wasn't cleared by subsequent deployments. When we pinpointed it as the cause it had scaled up to four simultaneous instances that were basically idling away.
tl;dr: Use the --version flag when deploying to specify a version name. An existing instance with the same version will be replaced the next time you deploy.
That led me down the rabbit hole that is --stop-previous-version. Here's what I've found out so far:
--stop-previous-version doesn't seem to be supported anymore. It's mentioned under Flags on the gcloud app deploy reference page, but if you look at the top of the page where all the flags are listed, it's nowhere to be found.
I tried deploying with that flag set to see what would happen but it seemingly had no effect. A new version was still created, and I still had to go in and manually delete the old instance.
There's an open Github issue on the gcloud-maven-plugin repo that specifically calls this out as an issue with that plugin but the issue has been seemingly ignored.
At this point, our best bet is to add --version=staging (or whatever) to gcloud app deploy. The reference docs for that flag seem to indicate that it will replace an existing instance that shares that "version":
--version=VERSION, -v VERSION
The version of the app that will be created or replaced by this deployment. If you do not specify a version, one will be generated for you.
(emphasis mine)
Additionally, Google's own reference documentation on app.yaml (the link's for the Python docs but it's still relevant) specifically calls out the --version flag as the "preferred" way to specify a version when deploying:
The recommended approach is to remove the version element from your app.yaml file and instead, use a command-line flag to specify your version ID
As far as I can tell, for Standard Environment with automatic scaling at least, it is normal for old versions to remain "serving", though they should hopefully have zero instances (even if your scaling configuration specifies a nonzero minimum). At least that's what I've seen. I think (I hope) that those old "serving" instances won't result in any charges, since billing is per instance.
I know most of the above answers are for Flexible Environment, but I thought I'd include this here for people who are wondering.
(And it would be great if someone from Google could confirm.)
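If the goal is simply to get rid of old versions entirely (and whatever instances they still hold), they can also be deleted explicitly. A sketch using the standard gcloud commands; the filter/format field names are worth double-checking against your gcloud version:
# List all versions of the default service
gcloud app versions list --service=default
# Delete every version that is no longer receiving traffic
gcloud app versions list --service=default \
    --filter="traffic_split=0" --format="value(version.id)" \
  | xargs -r gcloud app versions delete --service=default --quiet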
I had the same problem as the OP. Using the flex environment (some of this also applies to the standard environment) with Docker (runtime: custom in app.yaml), I've finally solved this! I tried a lot of things and I'm not sure which one fixed it (or whether it was a combination), so I'll list the things I did here, with the most likely solutions listed first.
SOLUTION 1) Ensure that cloud storage deletes old versions
What does cloud storage have to do with anything? (I hear you ask)
Well there's a little tooltip (Google Cloud Platform Web UI (GCP) > App Engine > Versions > Size) that when you hover over it says:
(Google App Engine) Flexible environment code is stored and billed from Google Cloud Storage ... yada yada yada
So based on this info and this answer I visited GCP > Cloud Storage > Browser and found my storage bucket AND a load of other storage buckets I didn't know existed. It turns out that some of the buckets store cached cloud functions code, some store cached docker images and some store other cached code/stuff (you can tell which is which by browsing the buckets).
So I added a deletion policy to all the buckets (except the cloud functions bucket) as follows:
Go to GCP > Cloud Storage > Browser and click the link (for the relevant bucket) in the Lifecycle Rules column > Click ADD A RULE > THEN:
For SELECT ACTION choose "Delete Object" and click continue
For SELECT OBJECT choose "Number of newer versions" and enter 1 in the input
Click CREATE
This will return you to the table view and you should now see the rule in the lifecycle rules column.
REPEAT this process for all relevant buckets (the relevant buckets were described earlier).
THEN delete the contents of the relevant buckets. WARNING: Some buckets warn you NOT to delete the bucket itself, only the contents!
Now re-deploy and your latest version should now get deployed and hopefully you will never have this problem again!
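The same rule can also be applied from the command line instead of clicking through the UI; a sketch using gsutil (the bucket name is a placeholder):
# lifecycle.json: delete an object as soon as one newer version of it exists
cat > lifecycle.json <<'EOF'
{
  "rule": [
    { "action": { "type": "Delete" }, "condition": { "numNewerVersions": 1 } }
  ]
}
EOF
gsutil lifecycle set lifecycle.json gs://YOUR_STAGING_BUCKET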
SOLUTION 2) Use deploy flags
I added these flags
gcloud app deploy --quiet --promote --stop-previous-version
This probably doesn't help since these flags seem to be the default but worth adding just in case.
Note that for the standard environment only (I heard on the grapevine) you can also use the --no-cache flag which might help but with flex, this flag caused the deployment to fail (when I tried).
SOLUTION 3)
This probably does not help at all, but I added:
COPY app.yaml .
to the Dockerfile
TIP 1)
This is probably more of a helpful / useful debug approach than a fix.
Visit GCP > App Engine > Versions
This shows all versions of your app (1 per deployment) and it also shows which version each instance is running (instances are configured in app.yaml).
Make sure all instances are running the latest version. This should happen by default. Probably worth deleting old versions.
You can determine your version from the gcloud app deploy logs (at the start of the logs) but it seems that the versions are listed by order of deployment anyway (most recent at top).
TIP 2)
Visit GCP > App Engine > Instances
SSH into an instance. This is just a matter of clicking a few buttons in the console. Once you have SSH'd in, run:
docker exec -it gaeapp /bin/bash
Which will get you into the docker container running your code. Now you can browse around to make sure it has your latest code.
Well I think my answer is long enough now. If this helps, don't thank me, J-ES-US is the one you should thank ;) I belong to Him ^^
Google may have updated the documentation cited in @IAmKale's answer:
Note that if the version is running on an instance of an auto-scaled service, using --stop-previous-version will not work and the previous version will continue to run because auto-scaled service instances are always running.
Seems like that flag only works with manually scaled services.
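For reference, a service counts as manually scaled when its app.yaml contains a manual_scaling block; a minimal example:
# app.yaml
manual_scaling:
  instances: 1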
This is a supplementary, optional answer to go with my other (main) answer.
In addition to what I describe there, I am now auto-incrementing the version on every deploy using a script.
My script contents are below.
Basically, the script auto increments version every time you deploy. I am using node.js so the script uses npm version to bump the version but this line could easily be tweaked to whatever language you use.
The script requires a clean git working directory for deployment.
The script assumes that when the version is bumped, this will result in file changes (e.g. changes to package.json version) that need pushing.
The script essentially tries to find your SSH key; if it finds one, it starts an SSH agent and uses the key to git commit and git push the file changes. Otherwise it just does a git commit without a push.
It then does a deploy using the --version flag ... --version="${deployVer}"
Thought this might help someone, especially since the top answer talks a lot about using the --version flag on a deploy.
#!/usr/bin/env bash
projectName="vehicle-damage-inspector-app-engine"

# Find SSH key
sshFile1=~/.ssh/id_ed25519
sshFile2=~/Desktop/.ssh/id_ed25519
sshFile3=~/.ssh/id_rsa
sshFile4=~/Desktop/.ssh/id_rsa
if [ -f "${sshFile1}" ]; then
  sshFile="${sshFile1}"
elif [ -f "${sshFile2}" ]; then
  sshFile="${sshFile2}"
elif [ -f "${sshFile3}" ]; then
  sshFile="${sshFile3}"
elif [ -f "${sshFile4}" ]; then
  sshFile="${sshFile4}"
fi

# If SSH key found then fire up SSH agent
if [ -n "${sshFile}" ]; then
  pub=$(cat "${sshFile}.pub")
  for i in ${pub}; do email="${i}"; done
  name="Auto Deploy ${projectName}"
  git config --global user.email "${email}"
  git config --global user.name "${name}"
  echo "Git SSH key = ${sshFile}"
  echo "Git email = ${email}"
  echo "Git name = ${name}"
  eval "$(ssh-agent -s)"
  ssh-add "${sshFile}" &>/dev/null
  sshKeyAdded=true
fi

# Bump version and git commit (and git push if SSH key added) and deploy
if [ -z "$(git status --porcelain)" ]; then
  echo "Working directory clean"
  echo "Bumping patch version"
  ver=$(npm version patch --no-git-tag-version)
  git add -A
  git commit -m "${projectName} version ${ver}"
  if [ -n "${sshKeyAdded}" ]; then
    echo ">>>>> Bumped patch version to ${ver} with git commit and git push"
    git push
  else
    echo ">>>>> Bumped patch version to ${ver} with git commit only, please git push manually"
  fi
  deployVer="${ver//"."/"-"}"
  gcloud app deploy --quiet --promote --stop-previous-version --version="${deployVer}"
else
  echo "Working directory unclean, please commit changes"
fi
For Node.js users: if you call the script deploy.sh, add
"deploy": "sh deploy.sh"
to your package.json scripts and deploy with npm run deploy.

Compile less files in node.js project on Windows Azure

I have a node.js project that compiles less files to css when I start the app. I do this by modifying the start script in package.json like so (other fields omitted for brevity):
{
  "scripts": {
    "start": "lessc public/stylesheets/styles.less > public/stylesheets/styles.css; node app.js"
  }
}
This works nicely locally, but not at all on my Windows Azure instance. Either because less needs to be installed globally on the machine for this to work, or because Azure doesn't run npm start. Or both. Either way, I need another solution!
I thought custom deployment was the answer (I'm using git remote deployment), and I tried modifying deploy.cmd to include
call "lessc public/stylesheets/styles.less > public/stylesheets/styles.css;"
No joy. I even tried
call "%SITE_ROOT%/node_modules/less/bin/lessc %SITE_ROOT%/public/stylesheets/styles.less > %SITE_ROOT%/public/stylesheets/styles.css;
Am I coming at this the wrong way? How can I keep the compiled css files out of my source control and compile them on the server after deployment to Azure?
Thanks!
OK, I finally have this going, I think.
For some reason, even though the physical files are on disk (I can see them with my FTP client), Azure is not letting me run lessc from the \node_modules\less\bin folder, but it does let me run the version in the \node_modules\.bin folder.
In the end, I added the following lines to my deploy.cmd file, and it worked!
IF NOT DEFINED LESS_COMPILER (
SET LESS_COMPILER=%DEPLOYMENT_TARGET%\node_modules\.bin\lessc
)
call %LESS_COMPILER% %DEPLOYMENT_TARGET%\public\stylesheets\styles.less > %DEPLOYMENT_TARGET%\public\stylesheets\styles.css
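Another option that avoids touching deploy.cmd at all is to compile in a postinstall script in package.json. This is a sketch, assuming less is listed under dependencies rather than devDependencies, since Kudu typically runs npm install with --production during deployment:
{
  "scripts": {
    "postinstall": "lessc public/stylesheets/styles.less public/stylesheets/styles.css",
    "start": "node app.js"
  }
}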

TeamCity ".Net Process Runner" hangs

We have started migrating one of our several projects to TeamCity as part of CI. Below is how we have set up the TeamCity build. We are trying to deploy a website.
1) Build Step 1 (Package installation)
Using the "Command Line" runner type, install the required packages.
2) Build Step 2 (Build)
Using the "Visual Studio (sln)" runner type (Visual Studio 2010), build the website.
3) Build Step 3 (Deploy Web Site)
Using the ".NET Process Runner", deploy the site with deployer.exe (x86, built with .NET Framework 4).
Deployer.exe reads a config file. The config file contains the "BuildId", "Environment" and "Servers" to which we want the build to be pushed.
<buildType id="bt52">
  <env name="Debug">
    <server path="SERVER1" />
  </env>
  <env name="QA">
    <server path="SERVER2" />
    <server path="SERVER3" />
  </env>
  <env name="UAT">
    <server path="SERVER4" />
    <server path="SERVER5" />
  </env>
</buildType>
Deployer.exe is called with the required parameters as below; it reads the config and deploys the site to SERVER2 and SERVER3.
Deployer.exe "bt52" "QA" "siteQA" "E:\BuildAgent\work\2483052e33e5e1e8\src\diy\" msdeploy.exe
The problem area is step #3.
When we run deployer.exe using the .NET Process Runner as part of TeamCity, we see it hanging and not responding, sometimes for as long as 45 minutes. When we execute the same deployer.exe from the build server on the command line, the script executes within a couple of seconds.
E:\TeamCity_custom_applications\deployer>Deployer.exe farm1-1 QA siteQA E:\BuildAgent\work\2483052e33e5e1e8\src\diy\ msdeploy.exe
Info : Processing batch run ...
Info : Processing command ...msdeploy.exe -verb:sync -source:contentPath="E:\BuildAgent\work\2483052e33e5e1e8\src\diy\" -dest:contentPath="siteQA",wmsvc="SERVER2",userName="*****",password="******",authType="Basic" -skip:objectName=filePath,absolutePath=web.config -skip:objectName=dirPath,absolutePath="bin" -enableRule:DoNotDeleteRule -allowUntrusted
Info : output >> Total changes: 0 (0 added, 0 deleted, 0 updated, 0 parameters changed, 0 bytes copied)
Info : error >> (none)
Info : ExitCode >> 0
Info : Processing command ...msdeploy.exe -verb:sync -source:contentPath="E:\BuildAgent\work\2483052e33e5e1e8\src\diy\" -dest:contentPath="siteQA",wmsvc="SERVER3",userName="******",password="******",authType="Basic" -skip:objectName=filePath,absolutePath=web.config -skip:objectName=dirPath,absolutePath="bin" -enableRule:DoNotDeleteRule -allowUntrusted
Info : output >> Total changes: 0 (0 added, 0 deleted, 0 updated, 0 parameters changed, 0 bytes copied)
Info : error >> (none)
Info : ExitCode >> 0
Info : Deploy Script Complete.
One more thing we observed: when running deployer.exe through TeamCity, the site content gets copied, but only for one server, and the TeamCity build status stays in "Running" mode. I am wondering if someone can please offer a little insight into this issue.
Update 1:
Thanks for your time looking into it! What we ended up doing: instead of running msdeploy.exe from cmd.exe, we added the msdeploy.exe location as an environment variable and executed msdeploy.exe in a loop over the servers. This resolved the hanging issue. Now I am just curious why it behaves this way: executing msdeploy.exe via cmd.exe hangs, while running msdeploy.exe directly executes successfully. Any insight would be greatly appreciated.
Update 2:
I have added an image which explains the behavior using Process Explorer: if we kill msdeploy.exe from Process Explorer, then all subsequent deployments to that server no longer hang.
To be honest, it sounds like you're running into issues with redirecting input/output streams. TeamCity is running your application in a totally headless environment, and then you, in turn, are attempting to redirect and parse the output of msdeploy.exe.
If that's the case, I'd recommend looking into using the MSDeploy API instead of msdeploy.exe. The latter is just a command line wrapper for the former, so all the functionality is available to you. There's a sample deployment application available on the IIS blog if you need help getting started.
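A minimal sketch of what that could look like with the Microsoft.Web.Deployment API (the assembly ships with Web Deploy); the server, site and credential values are placeholders mirroring the msdeploy.exe arguments above, and the skip rules are omitted for brevity:
using Microsoft.Web.Deployment;

class SiteDeployer
{
    static void Main()
    {
        var sourceOptions = new DeploymentBaseOptions();
        var destOptions = new DeploymentBaseOptions
        {
            // equivalent of wmsvc="SERVER2",userName=...,password=...,authType="Basic"
            ComputerName = "https://SERVER2:8172/msdeploy.axd?site=siteQA",
            UserName = "deployUser",
            Password = "deployPassword",
            AuthenticationType = "Basic"
            // -allowUntrusted would need a ServicePointManager.ServerCertificateValidationCallback
        };
        var syncOptions = new DeploymentSyncOptions
        {
            DoNotDelete = true   // equivalent of -enableRule:DoNotDeleteRule
        };

        using (var content = DeploymentManager.CreateObject(
            DeploymentWellKnownProvider.ContentPath,
            @"E:\BuildAgent\work\2483052e33e5e1e8\src\diy\",
            sourceOptions))
        {
            content.SyncTo(DeploymentWellKnownProvider.ContentPath, "siteQA", destOptions, syncOptions);
        }
    }
}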
It seems you have an NUnit build step configured in TeamCity and invoke cmd.exe from your test. This looks like an issue with the test code, then. Most probably it will reproduce without TeamCity if you run the test in question with NUnit directly.
As Richard noted, the root cause of the issue is most probably related to stdin/stdout processing.
If you want to fix it in your code, you can experiment by explicitly closing stdin or, the other way around, by writing something into it, etc.
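A sketch of what that could look like in the deployer code, assuming it launches msdeploy.exe via System.Diagnostics.Process (the classic cause of this kind of hang is a redirected output buffer that never gets drained, plus a child process waiting on stdin):
using System;
using System.Diagnostics;

class MsDeployRunner
{
    static int Run(string msdeployPath, string arguments)
    {
        var psi = new ProcessStartInfo(msdeployPath, arguments)
        {
            UseShellExecute = false,
            RedirectStandardInput = true,
            RedirectStandardOutput = true,
            RedirectStandardError = true,
            CreateNoWindow = true
        };

        using (var process = Process.Start(psi))
        {
            // drain stdout/stderr asynchronously so a full pipe buffer can never block msdeploy
            process.OutputDataReceived += (s, e) => { if (e.Data != null) Console.WriteLine(e.Data); };
            process.ErrorDataReceived += (s, e) => { if (e.Data != null) Console.Error.WriteLine(e.Data); };
            process.BeginOutputReadLine();
            process.BeginErrorReadLine();

            // close stdin so the child never sits waiting for interactive input
            process.StandardInput.Close();

            process.WaitForExit();
            return process.ExitCode;
        }
    }
}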
The workaround we used: we observed that msdeploy doesn't take more than 3-5 seconds to execute and deploy (even for our biggest project, which is an almost 300 MB website), so we set a timeout of 20 seconds. So far, over the last week, we have not seen any issue with it, and hopefully it will not cause more trouble, but we are still not sure why it behaves this way.
