Bitbucket Pipelines dynamic IP address and Azure database restore

I have a CI deployment with .NET Core. I simply need to restore the database before publishing to the server, but the Azure firewall is blocking the Bitbucket build agent's dynamically generated IP address.
In my YAML config I have this:
image: microsoft/dotnet:sdk
pipelines:
  default:
    - step:
        caches:
          - dotnetcore
        script: # Modify the commands below to build your repository.
          - export ASPNETCORE_ENVIRONMENT=Production
          - export PROJECT_NAME=XXX
          - export TEST_NAME=XXXTests
          - dotnet restore $PROJECT_NAME
          - dotnet build
          - dotnet ef database update -p $XXX --configuration Release
          - dotnet test $XXXTests
#...
#...
The pipeline then finishes with this error:
Client with IP address 'DYNAMIC_GENERATED_IP_ADDRESS' is not allowed to access the
server. To enable access, use the Windows Azure Management Portal or
run sp_set_firewall_rule on the master database to create a firewall
rule for this IP address or address range. It may take up to five
minutes for this change to take effect.
Is there a way to solve this?

AFAIK, although the client IPs are dynamically generated, they should belong to one of the ranges mentioned here (see the section "Valid IP addresses for Bitbucket Pipelines build environments"). Do note that these ranges are prone to change, as mentioned there, and that in addition to IP whitelisting you should use a secure means of authentication for any services exposed to Bitbucket Pipelines.
You can then whitelist them as described here, or here.
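If you prefer to script the whitelisting instead of clicking through the portal, here is a minimal sketch of a bitbucket-pipelines.yml step that creates such a firewall rule with the Azure CLI. The image, the AZURE_*/RESOURCE_GROUP/SQL_SERVER_NAME variables, and the IP range are placeholders and assumptions, not values from your setup; take the real ranges from the Atlassian documentation linked above.
- step:
    name: Whitelist a Bitbucket Pipelines range on the Azure SQL server
    # Assumption: an image that ships the Azure CLI (microsoft/dotnet:sdk does not).
    image: mcr.microsoft.com/azure-cli
    script:
      # AZURE_SP_ID, AZURE_SP_SECRET and AZURE_TENANT_ID are hypothetical secured repository variables.
      - az login --service-principal -u $AZURE_SP_ID -p $AZURE_SP_SECRET --tenant $AZURE_TENANT_ID
      # Replace 1.2.3.0 / 1.2.3.255 with a range published by Atlassian.
      - az sql server firewall-rule create --resource-group $RESOURCE_GROUP --server $SQL_SERVER_NAME --name bitbucket-range-1 --start-ip-address 1.2.3.0 --end-ip-address 1.2.3.255
A rule like this only needs to be re-created when Atlassian changes the published ranges.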
Hope this helps!

Related

Azure DevOps service connection and central pipeline

I have a requirement to give multiple teams access to a shared resource in Azure. I therefore want to limit how people can publish changes to that shared resource.
The idea is to limit the use of a service connection to a specific pipeline, as per this documentation. However, if the pipeline is stored in the team's own repo, the developers could change it, which would not give me enough control. I therefore found that it is possible to use a template from a central repo. Would using a shared repo then allow me to have a service connection solely for the template?
The way I imagine doing the above is that I grant project X a service connection for my BuildTemplates repo. This is basically just access to the repo and the ability to use the shared templates. Then, in the BuildTemplates repo, I can have a service connection for my template A.
Now the developer in project X creates her deployments and configurations for her pipeline with her own service connection, scoped to her resources. Then she inherits a template from the BuildTemplates repo and passes the relevant parameters to template A.
She cannot alter template pipeline A, and only template pipeline A can publish to the shared resource, because of the scoped service connection. I can therefore create the relevant guards for the shared Azure resource in template pipeline A, so I restrict how developer X can publish to my shared Azure resource.
Does this make sense, and is it viable?
Is it true that the pipeline part in A cannot be edited by a developer in X?
And that the service connection in A will not propagate out so that a developer in X could use it in an inappropriate way?
Update
The above solution does not seem to be viable since the pipeline template is executed in the source branch scope.
Proposed Solution
The benefits I see in the above suggestion do not seem achievable because of these issues. However, one can utilise pipeline triggers as a viable solution. This, however, introduces a new issue: when a pipeline is triggered by developer Y in Y's repository and it succeeds, a trigger fires in the MAIN repository, and the pipeline in MAIN fails, e.g. because the artifacts from Y introduced an issue. How does developer Y get notified about the issues in the MAIN pipeline?
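(For context, the pipeline triggers referred to here are Azure Pipelines pipeline resources. A minimal sketch of how the MAIN pipeline might subscribe to Y's pipeline, with hypothetical pipeline and branch names:)
# azure-pipelines.yml in the MAIN repository
resources:
  pipelines:
    - pipeline: teamYBuild        # alias used to reference the triggering run
      source: 'Y-Repository-CI'   # hypothetical name of developer Y's pipeline
      trigger:
        branches:
          include:
            - master

steps:
  # Runs whenever 'Y-Repository-CI' completes successfully on master.
  - script: echo "Triggered by run $(resources.pipeline.teamYBuild.runID)"
    displayName: 'Show the triggering run'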
Here is my solution: in the same Azure DevOps organization, we can create a project, then create a repo in it to hold the common pipeline template.
All the repos in the other projects can then access this pipeline template.
UserProject/UserRepo/azure-pipelines.yml
trigger:
  branches:
    include:
      - master
  paths:
    exclude:
      - nuget.config
      - README.md
      - azure-pipelines.yml
      - .gitignore
resources:
  repositories:
    - repository: devops-tools
      type: git
      name: PipelineTemplateProject/CommonPipeline
      ref: 'refs/heads/master'
jobs:
  - template: template-pipeline.yml@devops-tools
PipelineTemplateProject/CommonPipeline/template-pipeline.yml
Since a pipeline's inline script has a 5000-character limit,
you can put your scripts (not only PowerShell, but other languages too) in PipelineTemplateProject/CommonPipeline/scripts/test.ps1.
# Common Pipeline Template
jobs:
  - job: Test_Job
    pool:
      name: AgentPoolName
    steps:
      - script: |
          echo "$(Build.RequestedForEmail)"
          echo "$(Build.RequestedFor)"
          git config user.email "$(Build.RequestedForEmail)"
          git config user.name "$(Build.RequestedFor)"
          git config --global http.sslbackend schannel
          echo '------------------------------------'
          git clone -c http.extraheader="AUTHORIZATION: bearer $(System.AccessToken)" -b $(ToolsRepoBranch) --single-branch --depth=1 "https://PipelineTemplateProject/_git/CommonPipeline" DevOps_Tools
          echo '------------------------------------'
        displayName: 'Clone DevOps_Tools'
      - task: PowerShell@2
        displayName: 'Pipeline Debug'
        inputs:
          targetType: 'inline'
          script: 'Get-ChildItem -Path Env:\ | Format-List'
        condition: always()
      - task: PowerShell@2
        displayName: 'Run Powershell Scripts'
        inputs:
          targetType: filePath
          filePath: 'DevOps_Tools/scripts/test.ps1'
          arguments: "$(System.AccessToken)"
Notes:
In Organization Settings > Settings, disable "Limit job authorization scope to current project for release pipelines".
In Organization Settings > Settings, disable "Limit job authorization scope to current project for non-release pipelines".
Check the corresponding options in the project settings as well.
This way, normal users can only access their own repos and cannot access the DevOps project, and only the DevOps project owner can edit the pipeline template.
For the notification issue, I use an email extension: "rvo.SendEmailTask.send-email-build-task.SendEmail@1".

Azure DevOps – Pipelines to intranet TFS 2018: SSL_ERROR_SYSCALL

I am setting up Azure Pipelines. I have a few that get their sources from GitHub, and I have been trying for some time to set up pipelines that reach a TFS server on our intranet.
I started anew and created a new PAT token in TFS with full access.
I created a new Service Connection of type: “Azure Repos/Team Foundation Server” using this URL: https://tfs.myCie.com/defaultcollection/MyProject and the PAT token from TFS above.
I created a new pipeline, selecting Other Git and using the new service connection. I didn't know what to use as the default branch, as I don't see any branches on TFS, so I used master.
I used the empty job option and entered the service connection name and the master branch again. When I tried to select the YAML template, it wasn't able to save it to a repo; should I have a different repo for YAML files on TFS?
I made sure that my .proxy file was filled in and that the .proxybypass file contained the TFS server URL.
When I run the pipeline, the UI reports a timeout early on, but after 6-7 minutes I get logs that say:
Syncing repository: repository (ExternalGit)
##[debug]repository url=https://tfs.myCie.com/defaultcollection/myProject/
##[debug]targetPath=D:\Agent_work\19\s
git version 2.26.2.windows.1
git remote add origin https://tfs.myCie.com/defaultcollection/myProject/
##[debug]Finished process 6888 with exit code 0.
git config --get-all http.https://tfs.myCie.com/defaultcollection/myProject/.extraheader
If I test this .ExtraHeader URL, it either doesn't exist or I don't have access!
git config --get-all http.proxy
git remote set-url origin https://emptyusername:***@tfs.myCie.com/defaultcollection/myProject/
fatal: unable to access 'https://tfs.myCie.com/defaultcollection/myProject/': OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to tfs.oecd.org:443
Could it be a proxy error or a user account access issue?
In the latter case, what access rights would I need?
What settings should I use when creating the PAT token in TFS?
Thanks.
I am afraid you are using the wrong service connection type.
An Azure Repos/Team Foundation Server service connection is used in the repositories resources section of a YAML pipeline, where you can refer to repositories in other organizations using a service connection. See this document for more information.
resources:
  repositories:
    - repository: otherrepo
      name: ProjectName/RepoName
      endpoint: newTFSServiceConnection

steps:
  - checkout: self
  - checkout: otherrepo
If you want to set up a pipeline for a repo on another TFS server, you need to create a new service connection of type Other Git
and enter the TFS repo URL and username/password on its edit page.
You can then select this Other Git service connection when creating a pipeline.
If you cannot connect to your TFS repo via PAT, it might be because IIS Basic Authentication is enabled on your Windows machine, which prevents you from using personal access tokens (PATs) as an authentication mechanism. See here. You can try using the basic authentication method (username and password) instead.
Please note that you need to run this pipeline on a self-hosted agent, since you are trying to connect to a repo on your intranet TFS server. A cloud Microsoft-hosted agent cannot connect to your intranet TFS server unless it can be accessed from the public network.

Security hole in Azure Pipelines?

I've been researching Azure DevOps and I've come across what looks like a pretty obvious security hole in Azure pipelines.
So, I'm creating my pipeline as YAML and defining 2 stages: a build stage, and a deployment stage. The deployment stage looks like this:
- stage: deployApiProdStage
  displayName: 'Deploy API to PROD'
  dependsOn: buildTestApiStage
  jobs:
    - deployment: deployApiProdJob
      displayName: 'Deploy API to PROD'
      timeoutInMinutes: 10
      condition: and(succeeded(), eq(variables.isRelease, true))
      environment: PROD
      strategy:
        runOnce:
          deploy:
            steps:
              - task: AzureWebApp@1
                displayName: 'Deploy Azure web app'
                inputs:
                  azureSubscription: '(service connection to production web app)'
                  appType: 'webAppLinux'
                  appName: 'my-web-app'
                  package: '$(Pipeline.Workspace)/$(artifactName)/**/*.zip'
                  runtimeStack: 'DOTNETCORE|3.1'
                  startUpCommand: 'dotnet My.Api.dll'
The Microsoft documentation talks about securing this by adding approvals and checks to an environment; in the above case, the PROD environment. This would be fine if the protected resource here that allows publishing to my PROD web app - the service connection in azureSubscription - were pulled from the PROD environment. Unfortunately, as far as I can tell, it's not. It's associated instead with the pipeline itself.
This means that when the pipeline is first run, the Azure DevOps UI prompts me to permit the pipeline access to the service connection, which is needed for any deployment to happen. Once access is permitted, that pipeline has access to that service connection for evermore. This means that from then on, that service connection can be used no matter which environment is specified for the job. Worse still, any environment name specified that is not recognized does not cause an error, but causes a blank environment to be created by default!
So even if I setup a manual approval for the PROD environment, if someone in the organization manages to slip a change through our code review (which is possible, with regular large code reviews) that changes the environment name to 'NewPROD' in the azure-pipelines.yml file, the CI/CD will create that new environment, and go ahead and deploy immediately to PROD because the new environment has no checks or approvals!
Surely it would make sense for the service connection to be associated with the environment instead. It would also make sense to have an option to ban the auto-creation of new environments - I don't really see how that's particularly useful anyway. Right now, as far as I can tell, this is a huge security hole that could allow deployments to critical environments by anyone who has commit access to the repo or manages to slip a change to the azure-pipelines.yml file through the approval process, introducing a major single point of failure/weakness. What happened to the much-acclaimed incremental approach to securing your pipelines? Am I missing something here, or is this security hole as bad as I think it is?
In your example, it seems you created/used an empty environment with no deployment target. Currently, only the Kubernetes and virtual machine resource types are supported in an environment.
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/environments?view=azure-devops
The resource in your example is a service connection, so you need to go to the service connection and define checks on it.
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/approvals?view=azure-devops&tabs=check-pass

Create VM and deploy application onto Azure VM via Ansible

I am new to the Ansible Azure modules. What I need is the ability to create VMs (n of them) and deploy an application to all of them. From what I read online, azure_rm_virtualmachine can be used to create a VM (assuming the vnet, subnet, and other networking jazz are in place). I am trying to figure out how to deploy my application (copy the bits and run my app installer) onto the newly created VMs. Can my app deployment be part of the VM creation process? If so, what are the options? I looked into other modules but couldn't find any relevant ones, and didn't find anything in the Azure documentation either. Thanks.
Use the azure_rm_deployment module to create the VMs.
Because you know what you are deploying, use azure_rm_networkinterface_facts to get the IP address of your VM.
Then use add_host to create an ad-hoc inventory. Do all of this in a play like:
---
- name: Create VM
  hosts: localhost
  gather_facts: true
Once you have your inventory, use:
- name: Run updates
  hosts: myazureinventory
  gather_facts: true
From here, you can install your software. Hope it helps.
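A rough sketch of how those pieces could fit together; the VM creation tasks are elided, and vm_ip_address and the login user are placeholders you would fill in from the registered facts:
---
- name: Create VM and register it in an ad-hoc inventory
  hosts: localhost
  gather_facts: true
  tasks:
    # ... azure_rm_deployment / azure_rm_networkinterface_facts tasks go here ...
    - name: Add the new VM to the group used by the next play
      add_host:
        name: "{{ vm_ip_address }}"   # placeholder: IP taken from the registered network interface facts
        groups: myazureinventory
        ansible_user: azureuser       # placeholder login user

- name: Run updates and deploy the application
  hosts: myazureinventory
  gather_facts: true
  become: true
  tasks:
    - name: Copy the application bits to the VM
      copy:
        src: ./dist/
        dest: /opt/myapp/
    - name: Run the installer
      command: /opt/myapp/install.sh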
The ansible role to deploy a VM is now here: azure-iaas. I couldn't find one called azure_rm_deployment in the ansible repo.

How to Integrate GitLab-Ci w/ Azure Kubernetes + Kubectl + ACR for Deployments?

Our previous GitLab-based CI/CD utilized an authenticated curl request to a specific REST API endpoint to trigger the redeployment of an updated container to our service. If you use something similar for your Kubernetes-based deployment, this question is for you.
More Background
We run a production site / app (Ghost blog based) on an Azure AKS Cluster. Right now we manually push our updated containers to a private ACR (Azure Container Registry) and then update from the command line with Kubectl.
That being said, we previously used Docker Cloud for our orchestration and had fully integrated re-deployment of our production/staging services using GitLab-Ci.
That GitLab-Ci integration is the goal, and the 'Why' behind this question.
My Question
Since we previously used Docker Cloud (doh, should have gone with K8s from the start), how should we handle the fact that GitLab-Ci was able to make use of secrets created by the Docker Cloud CLI and then authenticate with the Docker Cloud API to trigger actions on our nodes (i.e. re-deploy with new containers, etc.)?
While I believe we can build a container (to be used by our GitLab-Ci runner) that contains Kubectl, and the Azure CLI, I know that Kubernetes also has a similar (to docker cloud) Rest API that can be found here (https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster) — specifically the section that talks about connecting WITHOUT Kubectl appears to be relevant (as does the piece about the HTTP REST API).
My question to anyone who is connecting to Azure (or potentially another managed Kubernetes service):
How does your Ci/CD server authenticate with your Kubernetes service provider's Management Server, and then how do you currently trigger an update / redeployment of an updated container / service?
If you have used the Kubernetes HTTP REST API to re-deploy a service, your thoughts are particularly valuable!
Kubernetes Resources I am Reviewing
How should I manage deployments with kubernetes
Kubernetes Deployments
Will update as I work through the process.
Creating the integration
I had the same problem of how to integrate GitLab CI/CD with my Azure AKS Kubernetes cluster. I created this question because I was getting errors when I tried to add my Kubernetes cluster info into GitLab.
How to integrate them:
Inside GitLab, go to the "Operations" > "Kubernetes" menu.
Click the "Add Kubernetes cluster" button at the top of the page.
You will have to fill in some form fields. To get the content you have to put into these fields, connect to your Azure account from the CLI (you need the Azure CLI installed on your PC) using the az login command, and then execute this other command to get the Kubernetes cluster credentials: az aks get-credentials --resource-group <resource-group-name> --name <kubernetes-cluster-name>
The previous command will create a ~/.kube/config file. Open this file; the content for the fields that you have to fill in on the GitLab "Add Kubernetes cluster" form is all inside this .kube/config file.
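For orientation, a trimmed-down ~/.kube/config has roughly this shape (all values here are redacted and illustrative; the exact contents depend on your cluster), and it is where the server, certificate-authority-data, and token fields mentioned below live:
apiVersion: v1
kind: Config
clusters:
  - name: my-aks-cluster
    cluster:
      server: https://my-aks-cluster-dns-12345678.hcp.eastus.azmk8s.io:443   # -> API URL
      certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t...    # -> CA Certificate (base64)
contexts:
  - name: my-aks-cluster
    context:
      cluster: my-aks-cluster
      user: clusterUser_my-resource-group_my-aks-cluster
current-context: my-aks-cluster
users:
  - name: clusterUser_my-resource-group_my-aks-cluster
    user:
      client-certificate-data: LS0tLS1CRUdJTi...
      client-key-data: LS0tLS1CRUdJTi...
      token: 0123456789abcdef...                                             # -> Token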
These are the fields:
Kubernetes cluster name: it's the name of your cluster on Azure; it's in the .kube/config file too.
API URL: it's the URL in the server field of the .kube/config file.
CA Certificate: it's the certificate-authority-data field of the .kube/config file, but you will have to base64-decode it.
After you decode it, it should look something like this:
-----BEGIN CERTIFICATE-----
...
some base64 strings here
...
-----END CERTIFICATE-----
Token: it's the string of hexadecimal chars in the token field of the .kube/config file (it might also need to be base64-decoded?). You need to use a token belonging to an account with cluster-admin privileges, so GitLab can use it to authenticate and install things on the cluster. The easiest way to achieve this is by creating a new account for GitLab: create a YAML file with the service account definition (an example can be seen here under "Create a gitlab service account in the default namespace"; a minimal sketch is also shown after this list) and apply it to your cluster by means of kubectl apply -f serviceaccount.yml.
Project namespace (optional, unique): I left it empty; I don't yet know what this namespace can be used for or where.
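As mentioned under the Token field, a minimal sketch of such a service account definition for GitLab (the names are illustrative; binding to cluster-admin means the resulting token must be treated as highly sensitive):
# serviceaccount.yml -- apply with: kubectl apply -f serviceaccount.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gitlab-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: gitlab
    namespace: default
On older clusters a token secret is created automatically for the new service account; on newer Kubernetes versions you may need to create the token yourself before you can paste it into GitLab.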
Click in "Save" and it's done. Your GitLab project must be connected to your Kubernetes cluster now.
Deploy
In your deploy job (in the pipeline), you'll need some environment variables to access your cluster using the kubectl command. Here is a list of all the available variables:
https://docs.gitlab.com/ee/user/project/clusters/index.html#deployment-variables
To have these variables injected into your deploy job, there are some conditions:
You must have correctly added the Kubernetes cluster to your GitLab project, via the "Operations" > "Kubernetes" menu and the steps I described above.
Your job must be a "deployment job". In GitLab CI, to be considered a deployment job, your job definition (in your .gitlab-ci.yml) must have an environment key (take a look at line 31 in this example), and the environment name must match the name you used in the "Operations" > "Environments" menu.
Here is an example of a .gitlab-ci.yml with three stages:
Build: it builds a Docker image and pushes it to the GitLab private registry.
Test: it doesn't do anything yet; I just put an exit 0 there, to be changed later.
Deploy: it downloads a stable version of kubectl, copies the .kube/config file so it can run kubectl commands against the cluster, and executes kubectl cluster-info to make sure it is working. In my project I haven't finished writing the deploy script to actually execute a deployment, but the kubectl cluster-info command runs fine.
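A minimal sketch of a .gitlab-ci.yml along those lines; the images, the kubectl version, the environment name, and the assumption that the injected KUBECONFIG variable is available to the deployment job are all illustrative:
stages:
  - build
  - test
  - deploy

build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

test:
  stage: test
  script:
    - exit 0   # placeholder, as described above

deploy:
  stage: deploy
  image: alpine:latest
  script:
    # Download a pinned kubectl release (version is only an example).
    - apk add --no-cache curl
    - curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl
    - chmod +x kubectl && mv kubectl /usr/local/bin/
    # KUBECONFIG is one of the deployment variables GitLab injects once the
    # cluster integration above is configured.
    - kubectl cluster-info
    # Real deployment commands (kubectl apply, kubectl set image, ...) would go here.
  environment:
    name: production   # must match the environment name in "Operations" > "Environments"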
Tip: to take a look at all the environment variables and their values (Jenkins has a page with this view; GitLab CI doesn't), you can execute the env command in the script of your deploy stage. It helps a lot when debugging a job.
I logged into our GitLab-Ci backend today and saw a 'Kubernetes' button — along with an offer to save $500 at GCP.
GitLab Kubernetes
The URL for your repo's Kubernetes GitLab page is:
https://gitlab.com/^your-repo^/clusters
As I work through the integration process I will update this answer (but contributions are also welcome!).
Official GitLab Kubernetes Integration Docs
https://docs.gitlab.com/ee/user/project/clusters/index.html
