I am currently facing an issue with my Azure WebJobs. I have created custom build tasks in TFS that build my solution, copy the files of 5 different console projects to app_data/jobs/triggered/MyProjectName, zip them into one file, and deploy it to the staging slot of my Azure Web App. When I navigate to the slot I can see 4 out of 5 jobs, but when I log in to the FTP location I can see all 5. What could the issue be? Is there a limit on the number of jobs? I do not think it is the WebJob name, as I have one with a longer name that shows up fine; the name format is xxxx.webjobs.projectname with no numbers or special characters. Renaming the WebJob did not help.
It was a silly mistake on my part: when publishing to Azure I was publishing both the Release and Debug folders. The Release folder was a leftover from a previous test and contained no files. Once I removed the Release folder, everything started working. Perhaps, if Azure defaults to the Release folder and finds nothing there, it should fall back to the Debug folder and note in the logs that Debug is being used? I hope that helps!
We have a new project in which we are trying to make use of the built-in continuous integration (CI) feature in Kentico for tracking changes to templates, page types, transformations, etc.
We have managed to get this working locally between two instances of a Kentico database: making changes in one, syncing the changes through CI, and then restoring them to the second database using the ContinuousIntegration application that sits in the bin folder of the Kentico site.
The issue we are having is when it comes to deploying our changes to our dev and live environments.
Our sites are hosted as Azure App Services and we deploy to them using VSTS (Azure DevOps) build and release workflows. However, as these tasks run on an agent, any PowerShell script we try to run to trigger the CI application fails because it is not running in the site/server context.
My question is: has anyone managed to successfully run Kentico CI in the context of an Azure App Service? Alternatively, how can I trigger a PowerShell script on the site following a deployment?
Yes, I've got this running in Azure DevOps within the release pipeline itself. It's something that we're currently rolling out as a business where I work.
The key steps to getting this working for me were as follows:
I don't want to deploy the ContinuousIntegration.exe or the repository folders, so I need to create a second artefact set from source control (this is only possible at the moment with Azure Repos and GitHub to my knowledge).
Unzip your deployment package and copy the CMS folder to a working directory; this is where you're going to run CI. I did this because I need the built assemblies available.
From the repo artefact in step 1, copy the ContinuousIntegration.exe and CI repository folders into the correct place in your unzipped working folder.
Ensure that the connection string in your unzipped folder actually works for the target DB; if necessary, you may want to change your VS build options with regard to how the web.config is handled.
From here, you should be able to run CI in the new working folder against your target database, as in the sketch below.
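As a rough illustration, an inline PowerShell step in the release pipeline along these lines did the job for me. All paths, artefact names and the working-folder layout below are assumptions you will need to adapt to your own pipeline:

# Minimal sketch of a release-pipeline PowerShell step (paths and artefact names are placeholders)
$workDir = "$env:SYSTEM_DEFAULTWORKINGDIRECTORY\ci-work"

# 1. Unzip the deployment package so the built assemblies are available
Expand-Archive -Path "$env:SYSTEM_DEFAULTWORKINGDIRECTORY\drop\MySite.zip" -DestinationPath $workDir -Force

# 2. Copy ContinuousIntegration.exe and the CI repository folder from the source-control artefact
Copy-Item "$env:SYSTEM_DEFAULTWORKINGDIRECTORY\repo\CMS\bin\ContinuousIntegration.exe" "$workDir\CMS\bin" -Force
Copy-Item "$env:SYSTEM_DEFAULTWORKINGDIRECTORY\repo\CMS\App_Data\CIRepository" "$workDir\CMS\App_Data\CIRepository" -Recurse -Force

# 3. Run a CI restore against the database referenced by the connection string in web.config
Set-Location "$workDir\CMS\bin"
.\ContinuousIntegration.exe -r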
In my instance, as I don't have CI running on the target site it means that everything is restored every time.
I'm in the process of writing this up in more detail, so I'll share it here when I've done that.
Edit:
- I finally wrote this up in more detail: https://www.ridgeway.com/blog/web-development/using-kentico-12-mvc-with-azure-devops
We do, but without CI: VSTS + Git. We store virtual objects in the file system and use Git for version control. We have our own custom library that does import/export of the Kentico objects (the ones not controlled by Git). Essentially we have a JSON "publishing manifest" file where we specify which objects need to be exported (i.e. moved between environments).
There is a 'PowerShell on Target Machines' task from Microsoft; I guess you can look into that.
P.S. Also take a look at Three Ways to Manage Data in Kentico Using PowerShell.
Deploy your CI files to the Azure App Service, and then use an Azure WebJob to run ContinuousIntegration.exe.
If you place a file called KenticoCI.bat in the directory \App_Data\jobs\triggered\ContinuousIntegration, this will automatically create a WebJob that you can trigger:
KenticoCI.bat
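rem # Renames 'App_Offline.bak' to 'App_Offline.htm' to take the site offline while CI runs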
cd D:\home\site\wwwroot
ren App_Offline.bak App_Offline.htm
rem # Run the Kentico CI integration
cd D:\home\site\wwwroot\bin
ContinuousIntegration.exe -r
rem # Removes the 'App_Offline.htm' file to bring the site back online
cd D:\home\site\wwwroot
ren App_Offline.htm App_Offline.bak
I'm deploying an Azure Function App from a package (using this guide: https://learn.microsoft.com/en-us/azure/azure-functions/run-functions-from-deployment-package) and it deploys correctly. Yet I can't seem to get an update deployed: even after I upload a new package, the changes are not picked up by the Function App. I tried stopping/starting the app, to no avail.
How can I force it to pick up changes?
I ran into this issue with a newly deployed function on Functions v3, so this still seems to be a problem.
Short answer:
Remove the .zip in D:\home\data\SitePackages and then redeploy and your changes will get picked up.
Long answer:
My setup is using ZIP deployment and WEBSITE_RUN_FROM_PACKAGE = 1.
It helps to know what happens when you use ZIP deployment:
The zip file is not written to wwwroot of your site, but instead to D:\home\data\SitePackages (source: https://learn.microsoft.com/en-us/azure/azure-functions/run-functions-from-deployment-package)
Then the contents of the zip file are mounted to D:\home\site\wwwroot and run from there.
For some reason, the .zip in D:\home\data\SitePackages was not being replaced; it was still the old version. To fix that, I used the App Service Console to delete the file before redeploying.
Navigate to your Function App and open the Console (under the Development Tools section).
Run cd D:\home\data\SitePackages and then ls to see the files in the folder.
Run a rm command to remove both the zip and text file.
Redeploy your function, and you will see that changes are picked up.
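Taken together, the cleanup boils down to something like this (a sketch assuming the default SitePackages location and the Kudu PowerShell console; the actual file names will differ per app):

cd D:\home\data\SitePackages
ls                  # lists the uploaded .zip packages plus packagename.txt
rm *.zip            # remove the stale package(s)
rm packagename.txt  # remove the pointer file; the next deployment recreates it

Then redeploy as usual.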
We had a very weird problem using Azure Function App Deploy from Azure DevOps.
This worked perfectly until we one day made some code changes to the Azure Function that worked locally but not on our dev server. We started looking at the .zip file and the release pipeline but everything looked good here. We could also see that nothing had changed in our azure-pipelines.yml or release pipeline:
Git command: git log -p azure-pipelines.yml
For releases we used the Azure Functions task by Microsoft Corporation.
Looking at the release logs, everything seemed good as well.
We then logged in to Kudu (Advanced Tools) and used PowerShell to look at the deployed files.
https://<your-function>.scm.azurewebsites.net/DebugConsole/?shell=powershell
Running dir D:\home\site\wwwroot we could see that the files had not been updated, and when we looked at dir D:\home\data\SitePackages we could not see a new .zip file either.
We confirmed the wrong .zip was active by running Get-Content D:\home\data\SitePackages\packagename.txt to see which .zip is being used.
We then went back to Azure DevOps and tried to create a new release, but the files still did not update. I then cloned the Azure Function App Deploy step that had previously worked and disabled the original one. I tried a new release and now everything worked.
I think this must be a Microsoft bug since we did not really change any values at all. Hope this can help someone else and that Microsoft fixes this.
If you replace the old package with a new one using the same package name (to leverage the same SAS URL), make sure the old one is actually overwritten. You also have to click the refresh button next to the Function App to sync triggers along with the changes.
Update
I recommend using the publish command (func azure functionapp publish <functionAppName>) provided by the Azure Functions Core Tools (CLI). The v2 CLI benefits from Run From Package as well and automates the whole process for us (zip the folder, upload it, create the app setting, sync triggers).
The command gets the publishing info (deployment username and password) first, then:
Archives the function project.
Uploads the zip file (named in the format UTCTime-GUID.zip) to the function-releases container in the Storage account specified by the AzureWebJobsStorage app setting.
Creates an app setting WEBSITE_RUN_FROM_ZIP (the original name of WEBSITE_RUN_FROM_PACKAGE; both work) with the SAS URL.
Syncs triggers to pick up the changes.
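As a minimal sketch (MyFunctionApp and MyResourceGroup are placeholder names, and the Azure CLI check is just an optional extra to confirm the setting the Core Tools created):

# Run from the function project folder (Core Tools v2 or later)
func azure functionapp publish MyFunctionApp

# Optionally verify the package setting afterwards (requires the Azure CLI and a signed-in account)
az functionapp config appsettings list --name MyFunctionApp --resource-group MyResourceGroup --query "[?name=='WEBSITE_RUN_FROM_PACKAGE' || name=='WEBSITE_RUN_FROM_ZIP']"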
After all my efforts, the only thing I really determined was that you need to restart and then wait. About 15-20 minutes after the restart, the changes were automagically there.
If you deploy a new JAR/ZIP to the Function App, the changes should be reflected within 1-2 minutes.
* Make sure that you are using the correct value for WEBSITE_RUN_FROM_PACKAGE.
* Download the content from the console and check whether it is the same or not.
I had the same problem after changing the deployment to use WEBSITE_RUN_FROM_PACKAGE. I think the problem is that it uses a cached deployment tool, so it keeps using the previous deployment script rather than a new one when you change how you want it to deploy.
If you look at the deploy log in Azure DevOps there is a link to the deploy log in the Kudu interface:
I found this answer which explains how to fix it.
We have 2 Azure WebJobs connected to our ASP.NET Web API application. Neither of them uses any dependencies from the other. And yet, after publishing, one of them contains executables from the other. It's worth mentioning that this only happens on publish; everything is normal on a VS build.
This is how the file hierarchy looks on Azure FTP.
The first one, Deployment, is published as expected. These are the executables in its folder:
The second one, EmailSender, has executables from Deployment:
Curiously, there is also an app.publish folder in both of them, containing one and the same file, WebJob.Deployment.exe:
The Deployment job works fine. Unfortunately Azure doesn't recognize the EmailSender job; instead it executes Deployment. The only solution that works right now is to manually delete Deployment's executables directly from FTP on every publish.
So far we have tried a couple of things from SO and blogs, but with no success:
Microsoft.Web.WebJobs.Publish producing duplicate assemblies in deploy package
Azure Webjobs not getting updated after new publish
GitHub solution
Azure Web Job Run Command Incorrectly Set
Edit:
I did accomplish something. It did not resolve the problem, but we don't have the app.publish folder anymore. Here is the link to the solution on SO. I don't know why we had the 'ClickOnce security' option checked for the WebJob.Deployment application.
Update:
I ran a few tests with MSBuild and found something curious. As I said before, Visual Studio Publish works just fine - no additional executables are deployed. But when I run MSBuild from the command line (with the same publish profile and project configuration) I get an additional Deployment.exe inside the EmailSender folder. This is the command I run:
MSBuild RestAPI.Host.csproj /t:Build /p:Configuration="Develop" /p:Platform="AnyCPU" /p:DocumentationFile="RestAPI.Host.XML" /p:DeployOnBuild="true" /p:PublishProfile="fakebuild_develop.pubxml" /p:OutputPath="backend\build\app\\" /p:SolutionDir="backend\\"
Can someone tell me what the difference is between MSBuild and VS Publish? I cannot find anything useful on the internet.
I'm confused about deploying WebJobs to Azure.
I'm using .NET Core, so I manually publish my WebJobs in my deploy.cmd file, for example like this:
call :ExecuteCmd dotnet publish "%DEPLOYMENT_SOURCE%\My.WebJobs\Mt.WebJobs.csproj" --output "%DEPLOYMENT_TEMP%\App_Data\Jobs\Continuous\MyWebJobs" --configuration Release
That deploys the WebJob to the DEPLOYMENT_TEMP folder. After that, KuduSync kicks in and syncs to DEPLOYMENT_TARGET, which is d:\home\site\wwwroot. If I look in there, I can see that there is an App_Data\Jobs\Continuous\MyWebJobs folder and that ALL my files have been correctly deployed and synced to that folder.
However, when I run the WebJob, it reports that it's running from a totally different location (D:\local\Temp\jobs\continuous\MyCustomerIO\bp003r3f.h2g). And when I look in that folder, I see most of my deployed WebJob files, but some JSON configuration files that are in App_Data... are missing there.
So - why is my webjob running from here? And why are some of my deployed files missing?
why is my webjob running from here?
There are two options:
The WebJob is copied to a temporary directory under %TEMP%\jobs\{job type}\{job name}\{random name} and runs from there. This option prevents the original WebJob binaries from being locked, which might cause issues when redeploying the WebJob.
The WebJob is run directly from the WebJob binaries directory. We call this option "in place". This option can have the locking issue and should only be used when there is no risk of locking the files.
By default the first option is used (in place = false). You can explicitly configure this setting in your settings.job file, e.g.:
{ "is_in_place": true }
And why are some of my deployed files missing?
I tested and reproduced this situation too, but I did not find a solution for it. I suggest you use Environment.GetEnvironmentVariable("WEBJOBS_ROOT_PATH"); to get the root directory of the WebJob.
WEBJOBS_ROOT_PATH is the location of the WebJob files; you can specify an absolute path, or otherwise the value will be combined with the default root path:
D:/home/site/wwwroot/ + WEBJOBS_ROOT_PATH (relative), from which you can get webjob_name, webjob_run_id and so on.
For more detail, you could read this article.
I have a solution containing a .net mvc website, and a webjob.
I deploy using Git, so on git push to Azure my website is updated. I'm now adding a console application that is going to run on a schedule. I'm trying to work out how to deploy this with the website when I git push, but I'm not sure how to do it.
I know I could create a folder website\app_data\jobs\triggered\webjob and copy the files into there (say from a post-build event on the webjob), but that would mean I would need to commit all those files to the git repo for the deploy to pick them up - which would also mean that every time I build, Git would be prompting me to commit them again - ugh.
Is there a nicer way to do this - where I can just push my repo to azure, and it will deploy my website correctly AND my webjob?
Thanks
Yes, you can do this without having to put the actual EXEs and project output into the folder explicitly. This blog post from the Azure Blog documents the workaround to enable Git or command-line deployment of a web application together with its WebJobs.
http://azure.microsoft.com/blog/2014/08/18/enabling-command-line-or-continuous-delivery-of-azure-webjobs/
If this doesn't unblock you, please post an update and I'll help diagnose any other issues you run into. You may also want to update the WebJob publishing NuGet package to the latest one here: https://www.nuget.org/packages/Microsoft.Web.WebJobs.Publish/1.0.2
As of 9/15/2015, this appears to be as simple as some context menus inside Visual Studio.
If you want your WebJob to automatically deploy whenever your Website is deployed, in Visual Studio you can right-click on the Website and select "Add->Existing Project as Azure WebJob".
More details here, in particular the "Enable automatic WebJobs deployment with a web project" section.
I was struggling with this, but I've got it working now.
It appears that WebJobs.Publish 1.0.2 must be used. 1.0.1 was not working for me. Worked as soon as I updated.
I had also tried adding webjobs.props files as indicated here by David Ebbo, but that didn't work for 1.0.1. I've now removed those files and it's working under 1.0.2 without them.
Using WebJobs.Publish creates a webjob-publish-settings.json (in the webjob project) and a webjobs-list.json (in the MVC app) and that would seem to be all that is needed.
The only thing that does not work is creating the schedule for a scheduled job; continuous and triggered jobs deploy just fine. There's a thread here where David Ebbo mentions that this is a current limitation.
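For reference, the two JSON files mentioned above look roughly like this (the WebJob name and project path are placeholders, and the exact properties depend on your WebJobs.Publish version):

webjob-publish-settings.json (in the WebJob project's Properties folder):
{
  "$schema": "http://schemastore.org/schemas/json/webjob-publish-settings.json",
  "webJobName": "MyWebJob",
  "runMode": "OnDemand"
}

webjobs-list.json (in the MVC app's Properties folder):
{
  "$schema": "http://schemastore.org/schemas/json/webjobs-list.json",
  "WebJobs": [
    { "filePath": "../MyWebJob/MyWebJob.csproj" }
  ]
}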