Build an encrypted Docker container via Azure DevOps

I have a pipeline in Azure DevOps that builds a container and pushes it to my Azure Container Registry.
# Docker
# Build and push an image to Azure Container Registry
# https://learn.microsoft.com/azure/devops/pipelines/languages/docker
trigger:
- main

resources:
- repo: self

variables:
  # Container registry service connection established during pipeline creation
  dockerRegistryServiceConnection: 'myguyd'
  imageRepository: 'p2005'
  containerRegistry: 'myacr.azurecr.io'
  dockerfilePath: '$(Build.SourcesDirectory)/api/DOCKERFILE'
  tag: '$(Build.BuildId)'
  # Agent VM image name
  vmImageName: 'ubuntu-latest'

stages:
- stage: Build
  displayName: Build and push stage
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: Docker@2
      displayName: Build and push an image to container registry
      inputs:
        command: buildAndPush
        repository: $(imageRepository)
        dockerfile: $(dockerfilePath)
        containerRegistry: $(dockerRegistryServiceConnection)
        tags: latest
The container also includes some raw data files that the application needs. From a security point of view, it is not great to have the raw data in the container.
So I was wondering whether there is a way to encrypt the Docker container, and if so, how to do it in the Azure pipeline.

I don't know of a way to encrypt the container via the Azure pipeline. If you're using AKS by chance, you can take a look at Confidential Computing.
A hardware-based Trusted Execution Environment (TEE) provides strong assurances. A TEE provides hardware and software measurements from trusted computing base (TCB) components. Confidential containers offerings on Azure allow verification of these measurements and validate whether the container applications run in a verifiable execution environment.
Confidential containers support custom applications developed in any programming language. You can also run Docker containers off the shelf.
You can achieve this with Intel SGX and AKS.
To run an existing Docker container, applications on confidential computing nodes require an abstraction layer or Intel Software Guard Extensions (SGX) software to use the special CPU instruction set. Configure SGX to protect your sensitive application code. SGX creates a direct execution path to the CPU to remove the guest operating system (OS), host OS, or hypervisor from the trust boundary. This step reduces the overall attack surface and vulnerabilities.
Azure Kubernetes Service (AKS) fully supports confidential containers. You can run existing containers confidentially on AKS.
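For context, provisioning SGX-capable confidential computing nodes for AKS can itself be scripted from a pipeline. Below is only a minimal sketch using the Azure CLI task; the service connection, cluster, and resource group names are placeholders, while the confcom add-on and DCsv3-series VM size come from the Azure confidential computing documentation:

- task: AzureCLI@2
  displayName: 'Add a confidential computing node pool to AKS (sketch)'
  inputs:
    azureSubscription: 'my-azure-service-connection'   # placeholder ARM service connection
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      # Enable the confidential computing (SGX device plugin) add-on on an existing cluster
      az aks enable-addons --addons confcom --name myAKSCluster --resource-group myResourceGroup
      # Add a node pool backed by SGX-capable DCsv3-series VMs
      az aks nodepool add --cluster-name myAKSCluster --resource-group myResourceGroup \
        --name confpool --node-count 1 --node-vm-size Standard_DC4s_v3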

Related

Can't Deploy Image to Azure App Service via GitHub Actions

I've tried so many different things here, so I'm super excited to see what the issue is :) (but I'm assuming it's something silly on my end)
Here's the pertinent bit of my GitHub actions YML:
deploy:
  name: Update Azure App Service
  needs: [tests]
  runs-on: ubuntu-latest
  steps:
    - name: Deploy to Azure
      uses: azure/webapps-deploy@v2
      with:
        app-name: $APP_SERVICE_NAME
        images: $IMAGE_NAME
        publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
GitHub actions logs (debug mode):
##[debug]Evaluating condition for step: 'Deploy to Azure'
##[debug]Evaluating: success()
##[debug]Evaluating success:
##[debug]=> true
##[debug]Result: true
##[debug]Starting: Deploy to Azure
##[debug]Loading inputs
##[debug]Evaluating: secrets.AZURE_WEBAPP_PUBLISH_PROFILE
##[debug]Evaluating Index:
##[debug]..Evaluating secrets:
##[debug]..=> Object
##[debug]..Evaluating String:
##[debug]..=> 'AZURE_WEBAPP_PUBLISH_PROFILE'
##[debug]=> '***'
##[debug]Result: '***'
##[debug]Loading env
Run azure/webapps-deploy@v2
::add-mask::***
::add-mask::***
##[debug][GET] https://SNIP.scm.azurewebsites.net:443/diagnostics/runtime
##[debug]loaded affinity cookie ["ARRAffinity=SNIP;Path=/;HttpOnly;Secure;Domain=SNIP-qa-ause.scm.azurewebsites.net","ARRAffinitySameSite=SNIP;Path=/;HttpOnly;SameSite=None;Secure;Domain=SNIP.scm.azurewebsites.net"]
##[debug]getAppRuntime. Data: {"statusCode":200,"statusMessage":"OK","headers":{"connection":"close","content-type":"application/json; charset=utf-8","date":"Fri, 22 Jul 2022 06:27:05 GMT","server":"Kestrel","set-cookie":["ARRAffinity=SNIP;Path=/;HttpOnly;Secure;Domain=app-be-SNIP.scm.azurewebsites.net","ARRAffinitySameSite=SNIP;Path=/;HttpOnly;SameSite=None;Secure;Domain=app-be-SNIP.azurewebsites.net"],"transfer-encoding":"chunked"},"body":{"nodejs":[],"system":{"os_name":"Unix 4.15.0.169","os_build_lab_ex":"","cores":2}}}
##[debug]App Runtime OS: Unix 4.15.0.169
Error: Deployment Failed with Error: Error: Publish profile is invalid for app-name and slot-name provided. Provide correct publish profile credentials for app.
##[debug]Deployment failed
##[debug]Node Action run completed with exit code 1
##[debug]AZURE_HTTP_USER_AGENT='GITHUBACTIONS_DeployWebAppToAzure_0d7e9cbfcd2b52a8a34111a0798ab22d77203fc1f56b2732d2431bfaeb336577'
##[debug]AZURE_HTTP_USER_AGENT=''
##[debug]Finishing: Deploy to Azure
The obvious solution here is what the error says: the secret is wrong. It's not. I've added the secret to the GitHub repository, with the value of the publish profile downloaded from the Azure App Service. I've done this twice, to make sure I downloaded it correctly and copied it as-is.
Other things I've tried:
Setting WEBSITE_WEBDEPLOY_USE_SCM to true in the App Service configuration.
Setting slot-name to Production in the GitHub Actions step (even though there are no slots here).
Other things that might help solve my problem:
I'm building a .NET app as a Docker image and pushing it to Azure Container Registry. I'm then trying to 'update' this in Azure App Service.
The Azure App Service is running Linux, not Windows.
"Deployment Center" in App Service still has "Source: Container Registry", pinned to a static image. This is because it was required when I set up the App Service. But I assume that my GitHub Actions builds will override this?
Any help would be great. :) Thanks in advance!
EDIT
I know I can:
Push to a constant tag, e.g. latest
Set the image name to latest in Azure App Service
Set continuous deployment to "on" in Azure App Service
And this will automatically update the App Service, because of the webhook.
But this will not help production scenarios, as I need to create different image tags for the different versions I can roll back to. Since the version differs, I can't "pin" to a constant tag. Therefore, I need some kind of task to update Azure App Service with a specific tag. I'd prefer to have the same CI for dev/prod.
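For what it's worth, one way this is commonly handled (a sketch only, not a fix for the invalid-publish-profile error above; the app, registry, and image names are placeholders) is to tag each build uniquely and pass the fully qualified image to the deploy step, so every run pins the App Service to a specific version:

    - name: Deploy to Azure
      uses: azure/webapps-deploy@v2
      with:
        app-name: my-app-service                               # placeholder
        images: myregistry.azurecr.io/my-api:${{ github.sha }} # unique tag per commit
        publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}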

Azure DevOps service connection and central pipeline

I have a requirement to give multiple teams access to a shared resource in Azure. I therefore want to limit how people can publish changes to the shared resource.
The idea is to limit the use of a service connection to a specific pipeline, as per this documentation. However, if the pipeline is stored in the teams' own repos, the developers could change it, which would not give me enough control. I therefore found that it is possible to use a template from a central repo. Would using a shared repo then allow me to have a service connection solely for the template?
So how I imagine doing the above: I grant project X a service connection for my BuildTemplates repo. But this is basically just access to the repo, to be able to use the shared templates. Then, in the BuildTemplates repo, I can have a service connection for my template A.
Now the developer in project X creates the deployments and configurations for her pipeline with her own service connection, scoped to her resources. Then she extends a template from the BuildTemplates repo and passes the relevant parameters to template A.
She cannot alter pipeline template A, and only pipeline template A can publish to the shared resource, because of the scoped service connection. I can therefore put the relevant guards on the shared Azure resource in pipeline template A, so I restrict how developer X can publish to my shared Azure resource.
Does this make sense, and is it viable?
Can the pipeline part in A be edited by a developer in X?
Will the service connection in A propagate out so a developer in X could use it in an inappropriate way?
Update
The above solution does not seem to be viable since the pipeline template is executed in the source branch scope.
Proposed Solution
The benefits I saw in the above suggestion do not seem achievable because of these issues. However, one can use pipeline completion triggers as a viable alternative. This, however, introduces a new issue: when a pipeline is triggered by developer Y in Y's repository and it succeeds, a trigger fires in the MAIN repository, and the pipeline in MAIN fails, e.g. because the artifacts from Y introduced an issue. How does developer Y get notified about the issues in the MAIN pipeline?
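For reference, a pipeline completion trigger in the MAIN repository's pipeline might look like the sketch below; the alias, pipeline name, and project are placeholders, not taken from the question:

# MAIN pipeline: run after developer Y's pipeline completes successfully
resources:
  pipelines:
  - pipeline: upstreamY              # alias used to reference the triggering run
    source: 'Y-Repo-CI'              # hypothetical name of Y's pipeline
    project: UserProject             # hypothetical project containing Y's pipeline
    trigger:
      branches:
        include:
        - master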
Here is my solution: within the same Azure DevOps organization, we can create a project and then create a repo in it to hold a common pipeline template.
All the repos in the organization's other projects can access this pipeline template.
UserProject/UserRepo/azure-pipelines.yml
trigger:
  branches:
    include:
    - master
  paths:
    exclude:
    - nuget.config
    - README.md
    - azure-pipelines.yml
    - .gitignore

resources:
  repositories:
  - repository: devops-tools
    type: git
    name: PipelineTemplateProject/CommonPipeline
    ref: 'refs/heads/master'

jobs:
- template: template-pipeline.yml@devops-tools
PipelineTemplateProject/CommonPipeline/template-pipeline.yml
Since the inline script of a pipeline task has a 5,000-character limit, you can put your scripts (not only PowerShell, but also other languages) in PipelineTemplateProject/CommonPipeline/scripts/test.ps1.
# Common Pipeline Template
jobs:
- job: Test_Job
  pool:
    name: AgentPoolName
  steps:
  - script: |
      echo "$(Build.RequestedForEmail)"
      echo "$(Build.RequestedFor)"
      git config user.email "$(Build.RequestedForEmail)"
      git config user.name "$(Build.RequestedFor)"
      git config --global http.sslbackend schannel
      echo '------------------------------------'
      git clone -c http.extraheader="AUTHORIZATION: bearer $(System.AccessToken)" -b $(ToolsRepoBranch) --single-branch --depth=1 "https://PipelineTemplateProject/_git/CommonPipeline" DevOps_Tools
      echo '------------------------------------'
    displayName: 'Clone DevOps_Tools'
  - task: PowerShell@2
    displayName: 'Pipeline Debug'
    inputs:
      targetType: 'inline'
      script: 'Get-ChildItem -Path Env:\ | Format-List'
    condition: always()
  - task: PowerShell@2
    displayName: 'Run Powershell Scripts'
    inputs:
      targetType: filePath
      filePath: 'DevOps_Tools/scripts/test.ps1'
      arguments: "$(System.AccessToken)"
Notes:
In Organization Settings > Pipelines > Settings, disable "Limit job authorization scope to current project for release pipelines" and "Limit job authorization scope to current project for non-release pipelines".
Check the corresponding options in the project settings as well.
So normal users can only access their own repos and cannot access the DevOps project, and only the DevOps owner can edit the template pipeline.
For the notification issue, I use the email extension task rvo.SendEmailTask.send-email-build-task.SendEmail@1.
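If teams need to pass values into the shared template (the "relevant parameters" mentioned in the question), template parameters can be used. A minimal sketch, with a hypothetical toolsRepoBranch parameter that is not part of the original answer:

# Consumer side: UserProject/UserRepo/azure-pipelines.yml
jobs:
- template: template-pipeline.yml@devops-tools
  parameters:
    toolsRepoBranch: 'refs/heads/release'

# Template side: PipelineTemplateProject/CommonPipeline/template-pipeline.yml
parameters:
- name: toolsRepoBranch
  type: string
  default: 'refs/heads/master'

jobs:
- job: Test_Job
  steps:
  - script: echo "Using tools branch ${{ parameters.toolsRepoBranch }}"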

Security hole in Azure Pipelines?

I've been researching Azure DevOps and I've come across what looks like a pretty obvious security hole in Azure pipelines.
So, I'm creating my pipeline as YAML and defining 2 stages: a build stage, and a deployment stage. The deployment stage looks like this:
- stage: deployApiProdStage
  displayName: 'Deploy API to PROD'
  dependsOn: buildTestApiStage
  jobs:
  - deployment: deployApiProdJob
    displayName: 'Deploy API to PROD'
    timeoutInMinutes: 10
    condition: and(succeeded(), eq(variables.isRelease, true))
    environment: PROD
    strategy:
      runOnce:
        deploy:
          steps:
          - task: AzureWebApp@1
            displayName: 'Deploy Azure web app'
            inputs:
              azureSubscription: '(service connection to production web app)'
              appType: 'webAppLinux'
              appName: 'my-web-app'
              package: '$(Pipeline.Workspace)/$(artifactName)/**/*.zip'
              runtimeStack: 'DOTNETCORE|3.1'
              startUpCommand: 'dotnet My.Api.dll'
The Microsoft documentation talks about securing this by adding approvals and checks to an environment; in the above case, the PROD environment. This would be fine if the protected resource here that allows publishing to my PROD web app - the service connection in azureSubscription - were pulled from the PROD environment. Unfortunately, as far as I can tell, it's not. It's associated instead with the pipeline itself.
This means that when the pipeline is first run, the Azure DevOps UI prompts me to permit the pipeline access to the service connection, which is needed for any deployment to happen. Once access is permitted, that pipeline has access to that service connection for evermore. This means that from then on, that service connection can be used no matter which environment is specified for the job. Worse still, any environment name specified that is not recognized does not cause an error, but causes a blank environment to be created by default!
So even if I setup a manual approval for the PROD environment, if someone in the organization manages to slip a change through our code review (which is possible, with regular large code reviews) that changes the environment name to 'NewPROD' in the azure-pipelines.yml file, the CI/CD will create that new environment, and go ahead and deploy immediately to PROD because the new environment has no checks or approvals!
Surely it would make sense for the service connection to be associated with the environment instead. It would also make sense to have an option to ban the auto-creation of new environments - I don't really see how that's particularly useful anyway. Right now, as far as I can tell, this is a huge security hole that could allow deployments to critical environments by anyone who has commit access to the repo or manages to slip a change to the azure-pipelines.yml file through the approval process, introducing a major single point of failure/weakness. What happened to the much-acclaimed incremental approach to securing your pipelines? Am I missing something here, or is this security hole as bad as I think it is?
In your example, it seems you created/used an empty environment; there is no deployment target. Currently, only the Kubernetes resource and virtual machine resource types are supported in an environment.
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/environments?view=azure-devops
The resource in your example is a service connection, so you need to go to the service connection and define checks for it.
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/approvals?view=azure-devops&tabs=check-pass

Deploy ARM Template and Docker image

Well, everyone keeps saying to try to learn something new during this pandemic, so that's what I'm trying to do. I'm brand new to YAML and Docker, and fairly new to ARM templates.
I've been able to use YAML to build and publish .NET Core apps using the dotnet YAML commands. I've been able to use YAML to build a Docker image of a .NET Core 3.1 Web API app.
I'm publishing the docker image to an Azure Resource Container.
Where I'm stuck:
Ideally, I want to use ARM templates in the .NET Core project to be able to deploy my Azure Portal resources via code vs creating manually in the portal. This should include the Azure Resource Container.
It seems like what I need to do to accomplish this is build the docker image during the Build phase and then publish that to the $(System.ArtifactsDirectory) so that I can hold it there until the Azure Resources have been deployed via ARM templates.
I can't find anything about publishing the docker image as a pipeline artifact though. Only things about pushing to DockerHub, GitHub, and Azure Resource Containers, which isn't what I want to do right now.
Anyone have any ideas how I can pull this off?
I actually found a way to pull this off; the article I read about it is over on dev.to.
There are a few things I needed to do.
First, don't set the containerRegistry on the build task:
- task: Docker@2
  displayName: Build Docker Image
  inputs:
    repository: $(imageName)
    command: build
    dockerfile: '**/Dockerfile'
    buildContext: 'HopefulMommaDesignsAPI'
    tags: $(Build.BuildId)
The reason is that if you set the container registry then, with ACR at least, it prefixes the ACR URL to the Docker image name. However, that URL is masked when doing this in Azure DevOps. There's nothing wrong with that, but it causes confusion when you list images, because they show up as ***/ImageName.
Second, save the image as a TAR file in the artifact staging directory.
- task: Docker@2
  displayName: 'Save image to TAR'
  inputs:
    repository: '$(imageName)'
    command: save
    arguments: '--output $(build.artifactstagingdirectory)/$(imageName).image.tar $(imageName):$(Build.BuildId)'
    addPipelineData: false
That will include the .tar file when you publish your build.
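The publish step itself isn't shown in the original write-up; a minimal sketch of it, assuming the artifact name matches the $(buildArtifactName) variable used later, might be:

- task: PublishBuildArtifacts@1
  displayName: 'Publish build artifacts (including the image TAR)'
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: '$(buildArtifactName)'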
The next steps are during the deploy jobs.
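Downloading the artifact and deploying the ARM templates aren't shown in the original answer either; here is a rough sketch of those two steps, with the service connection, variables, and template path as assumptions:

- task: DownloadBuildArtifacts@0
  displayName: 'Download build artifacts'
  inputs:
    buildType: 'current'
    downloadType: 'single'
    artifactName: '$(buildArtifactName)'
    downloadPath: '$(build.artifactstagingdirectory)'

- task: AzureResourceManagerTemplateDeployment@3
  displayName: 'Deploy ARM template'
  inputs:
    deploymentScope: 'Resource Group'
    azureResourceManagerConnection: 'My ARM Service Connection'   # assumed service connection
    subscriptionId: '$(subscriptionId)'
    action: 'Create Or Update Resource Group'
    resourceGroupName: '$(resourceGroupName)'
    location: '$(location)'
    templateLocation: 'Linked artifact'
    csmFile: '$(build.artifactstagingdirectory)/$(buildArtifactName)/azuredeploy.json'   # assumed path
    deploymentMode: 'Incremental'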
After you have downloaded the build artifact and deployed the ARM templates, you'll need to load the docker image from the .tar file.
- task: Docker@2
  displayName: 'Load Image from Tar'
  inputs:
    command: load
    arguments: '--input $(build.artifactstagingdirectory)/$(buildArtifactName)/$(imageName).image.tar'
Now that the docker image has been loaded you'll want to ReTag the image and publish it.
- task: Docker@2
  displayName: 'ReTag Image with ACR URL - BuildId'
  inputs:
    containerRegistry: 'Hopeful Momma Designs ACR'
    repository: $(imageName)
    command: tag
    arguments: '$(imageName):$(Build.BuildId) $(containerRegistryUrl)/$(imageName):$(Build.BuildId)'

- task: Docker@2
  displayName: 'ReTag Image with ACR URL - latest'
  inputs:
    containerRegistry: 'Hopeful Momma Designs ACR'
    repository: $(imageName)
    command: tag
    arguments: '$(imageName):$(Build.BuildId) $(containerRegistryUrl)/$(imageName):latest'

- task: Docker@2
  displayName: push
  inputs:
    containerRegistry: 'Hopeful Momma Designs ACR'
    repository: $(imageName)
    command: push
    dockerfile: '**/Dockerfile'
    buildContext: 'HopefulMommaDesignsAPI'
    tags: |
      $(Build.BuildId)
      latest
NOTE: If your Azure Container Registry is part of your ARM template, the Docker commands where you include the containerRegistry will fail because you have to manually set up the service connection from Azure DevOps to the Azure Container Registry before you can reference it in your YAML files as the containerRegistry.
This isn't a huge deal for me because I will normally work on pipelines in sections anyway. I'll make sure the build works, then I'll make sure the ARM templates work, then I'll make sure the docker images pushes to the ACR. If you're rebuilding an environment from scratch, just comment out the deployment steps after the ARM template step to deploy the infrastructure and then uncomment the remaining steps. It isn't the cleanest solution, but I like that better than having a separate pipeline just to deploy infrastructure.
You publish your container to an Azure Container Registry. Containers aren't build artifacts in the traditional sense.
I suspect that you have a chicken/egg problem. You want to create the registry during your deployment process, but you don't have a registry during the build process. So how can you deploy your container to a registry that won't exist until the next phase, right? In fact, you'll discover that you need to create a service connection for the ACR before you can publish resources to it. The ARM template isn't going to be able to do that for you.
This is a case where it's common to separate the application's requirements from larger infrastructural requirements. The ACR is a supporting piece of infrastructure. It's common in cases like this to have a separate pipeline that creates required supporting infrastructure to solve the chicken-and-egg problem.
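As a sketch of what that separate infrastructure pipeline might contain (the service connection and resource names below are placeholders, not from the original answer), creating the ACR up front can be as simple as:

- task: AzureCLI@2
  displayName: 'Create the container registry (infrastructure pipeline, sketch)'
  inputs:
    azureSubscription: 'My ARM Service Connection'   # placeholder ARM service connection
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      # Create the resource group and the registry that later app pipelines will push to
      az group create --name my-infra-rg --location eastus
      az acr create --resource-group my-infra-rg --name myacrname --sku Basic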
Also, please be aware of correct terminology. There's nothing called an "Azure Resource Container". There is something called an "Azure Container Registry", which is what I assume you're talking about.

Bitbucket pipeline dynamic IP address and Azure database restore

I have a CI deployment with .NET Core. I simply need to update the database before publishing to the server, but the Azure firewall blocks the Bitbucket build engine's dynamically generated IP address.
In the YAML config I have this:
image: microsoft/dotnet:sdk

pipelines:
  default:
    - step:
        caches:
          - dotnetcore
        script: # Modify the commands below to build your repository.
          - export ASPNETCORE_ENVIRONMENT=Production
          - export PROJECT_NAME=XXX
          - export TEST_NAME=XXXTests
          - dotnet restore $PROJECT_NAME
          - dotnet build
          - dotnet ef database update -p $XXX --configuration Release
          - dotnet test $XXXTests
          #...
          #...
After this, the pipeline finishes with the error:
Client with IP address 'DYNAMIC_GENERATED_IP_ADDRESS' is not allowed to access the server. To enable access, use the Windows Azure Management Portal or run sp_set_firewall_rule on the master database to create a firewall rule for this IP address or address range. It may take up to five minutes for this change to take effect.
Is there a way to solve this?
AFAIK, although the client IPs are dynamically generated, they should belong to one of the ranges mentioned here (see the section "Valid IP addresses for Bitbucket Pipelines build environments"). Do note that these are prone to change, as mentioned there, and that in addition to IP whitelisting you should use a secure means of authentication for any services exposed to Bitbucket Pipelines.
You can then whitelist them as mentioned here, or here.
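For illustration, whitelisting one of those published ranges could also be scripted as an extra pipeline step with the Azure CLI. This is only a sketch: the service-principal variables, resource group, server name, and start/end addresses (which come from the Atlassian page linked above) are all placeholders.

- step:
    name: Whitelist a Bitbucket Pipelines IP range on the Azure SQL server (sketch)
    image: mcr.microsoft.com/azure-cli
    script:
      # Service-principal credentials stored as secured repository variables (placeholders)
      - az login --service-principal -u "$AZ_SP_APP_ID" -p "$AZ_SP_SECRET" --tenant "$AZ_TENANT_ID"
      # Start/end addresses taken from the published Bitbucket Pipelines ranges
      - az sql server firewall-rule create --resource-group "$RESOURCE_GROUP" --server "$SQL_SERVER_NAME" --name bitbucket-range-1 --start-ip-address "$RANGE_START" --end-ip-address "$RANGE_END"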
Hope this helps!
