How does Crafter Delivery pull from Studio’s published local git repo remotely?

Referring to the 2nd diagram on this page:
https://docs.craftercms.org/en/3.0/developers/architecture.html
or
https://docs.craftercms.org/en/3.0/_images/detailed.png
specifically the arrow from Delivery to Authoring. Here I assume Delivery and Authoring do not share any file system.
In the crafter-deployer configuration for Delivery, what is the syntax for the url setting shown in this YAML example?
https://docs.craftercms.org/en/3.0/system-administrators/deployer/admin-guide.html#target-configuration

It's a path/URL pointing to the site's published Git repository on the authoring server.
If both authoring and delivery are on the same machine (simple deployments, developer machines, PoCs, etc.) this is just a file path.
In "real-world" deployments such as production and lower environments, authoring and delivery are typically installed on separate machines, so you need a URL/path that points to the authoring server, typically over SSH. It's secure and simple.
Example:
ssh://myserver/opt/crafter/sites/mysite
https://git-scm.com/book/en/v2/Git-on-the-Server-The-Protocols
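For context, here is a hedged sketch of where that URL lands in the delivery deployer's target YAML, assuming the gitPullProcessor pipeline shown in the linked admin guide (key names may vary between Crafter versions):

    # Hypothetical target file, e.g. crafter-deployer/targets/mysite.yaml
    target:
      env: prod
      siteName: mysite
      deployment:
        pipeline:
          - processorName: gitPullProcessor
            remoteRepo:
              # Same machine: a plain file path works, e.g.
              # url: /opt/crafter/data/repos/sites/mysite/published
              # Separate machines: an SSH URL to the authoring server
              url: ssh://myserver/opt/crafter/sites/mysite

In practice you rarely write this file by hand; the init-site tool mentioned next generates it for you.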
The best way to configure the deployer on the delivery instance is to use the init-site executable.
https://docs.craftercms.org/en/3.0/system-administrators/activities/setup-site-for-delivery.html

Related

GitLab runner needs access to private user/client certificate

We have an internally hosted GitLab instance and an internally hosted Nexus repository (neither of which touches the open internet). The Nexus repository uses client certificates for authentication. We have a repository in GitLab that is accessed by many developers, and we need a way to get the user's client certificate into the runner so we can access Nexus.
Is there a way to specify a user-specific mount in the .gitlab-ci.yml? Putting the user's certificate information in the repository's "variables" is not an option because we have many developers accessing the same project. We (as developers) also don't have access to the runners. I can, however, create a new container/image that the GitLab runner can execute. Any thoughts on how to get the CI pipeline to recognize the user's certificate would be greatly appreciated!
After reading the GitLab documentation and realizing how far behind we were in releases (a major version), I discovered that GitLab now integrates with Vault. This appears to work for exactly our use case.
https://docs.gitlab.com/ee/ci/examples/authenticating-with-hashicorp-vault/
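The pattern in the linked example is to exchange the job's JWT for a short-lived Vault token and read the secret at build time. A rough sketch, assuming a hypothetical Vault address, JWT auth role, and secret path:

    # .gitlab-ci.yml (sketch; the Vault address, role, and secret path are assumptions)
    get_client_cert:
      image: vault:latest
      variables:
        VAULT_ADDR: https://vault.example.internal:8200
      script:
        # Trade the per-job JWT for a Vault token scoped to this role.
        - export VAULT_TOKEN="$(vault write -field=token auth/jwt/login role=nexus-clients jwt=$CI_JOB_JWT)"
        # Pull the client certificate used to authenticate against Nexus.
        - vault kv get -field=certificate secret/nexus/client > client.pem

Certificates can then live under per-user paths in Vault, with Vault policies controlling access instead of GitLab variables.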

How can I centralize Gitlab CI Deployment / Environments information when a deploy job is initiated from a fork of the repository?

Each developer on our team forks the production repository, then opens MRs from their fork to merge into the master branch of the production one.
We have a fixed number of test environments which are shared by all developers. Eventually we hope to move to ephemeral environments for each MR but at the moment this is not possible.
We utilize the Environments / Deployments functionality of Gitlab: https://docs.gitlab.com/ee/ci/environments/ and the production repository is the central location where we see what is deployed to each environment.
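For context, a deploy job that records a deployment against one of these shared environments looks roughly like the sketch below (job, script, and environment names are hypothetical). Note that GitLab attributes the deployment to whichever project's pipeline ran the job:

    deploy_test1:
      stage: deploy
      script:
        - ./deploy.sh test1   # hypothetical deploy script
      environment:
        name: test1
      when: manual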
The problem is that when a developer opens an MR, they frequently choose to manually deploy to one of our shared test environments. But since the pipeline for their branch runs in their fork, it records the Environment / Deploy information there rather than in production. This makes it impossible for us to know who deployed what to each test environment, as that information is recorded in random developer forks rather than in the centralized location.
We only deploy to production hosts from the production repository (it is disabled in developer forks), so that information is centralized and up to date. But it is a headache for developers to determine who has deployed something to one of the shared test environments, and they frequently overwrite each other's changes by accident.
Is there any way for us to allow test deploys from developer branches, but centralize the information about what is deployed in each environment in the production repository?

How to automate IIS web applications using Jenkins with Team foundations server source code management tools

I am planning to automate IIS applications. Can you please point me to any documents that would be helpful?
If my understanding is correct, you want to queue a Jenkins build automatically when there is a change in IIS web applications whose code is hosted in TFS. You can check the steps below:
In Jenkins, add a new project, and in Source Code Management, select TFVC or Git (depending on which version control type you use in TFS). For details, refer to the link below:
https://github.com/jenkinsci/tfs-plugin/blob/master/README.md#job-configuration
In TFS/VSTS, add a new Service Hook, select Jenkins, and choose the Code checked in or Code pushed event (depending on which version control type you use in TFS).
With these two configurations, when there is a new check-in or push to the IIS web application's code, the Jenkins build will be triggered automatically. Here is a useful blog for your reference:
http://www.donovanbrown.com/post/Setting-up-CICD-with-the-TFS-Plugin-for-Jenkins

Continuous deployment to Azure using Bamboo

I'm working with Atlassian Bamboo on Demand for Continuous Integration and it works great.
Now I'm trying to use the "Deploy" feature, and the problem is that I'm working with Azure (FTP, publish, Git, Mercurial... I really don't care how) and I can't find a "task" that can perform it.
Has anyone achieved this?
I do automated deployments to AWS from Bamboo, but the concept is pretty much the same.
Bamboo has no specific options for deploying to the public cloud, so you have to build or call an existing deployment tool. At the end of the day, Bamboo deployments provide you with metadata on which build has been deployed to which environment, and security over who can do deploys, but it's up to you to make the actual deploy work. Bamboo does give you a totally extensible engine for controlling the "how" via scripting. The deployment engine is basically a cut-down version of the CI engine with a subset of tasks.
I resolved to build our own deployment tooling because it was fairly simple to get started and a worthwhile investment of time, since it will be used often and improved over time. Bamboo gives me authorization and access control, and my scripts give me fine-grained control of my deployments.
I'm assuming you are running a Bamboo agent on a Windows image like me, so PowerShell scripts are your friend. If you're running on Linux, you'll want to do the same with Bash.
I have PowerShell scripts controlling my deployments through a controller/agent model.
The controller script is source controlled and maintained in a Mercurial repo. This is pulled by the repository task.
The agent is a PowerShell script wrapped in a simple Web API REST service with a custom authentication mechanism. The agent is set up when an app server instance is provisioned in EC2. We use Puppet for server provisioning.
The controller does the following for a deployment:
connects to the VPC
determines the available nodes in my web farm using EC2
selects the first node and sends it an "upgrade database" command
then proceeds to send an "upgrade app server" command to each node
The logic for doing the deploy is parameterized so it can be re-used for deployments to different environments. I use Bamboo deploy variables to manage feeding parameters for the different environments.
DEV is deployed automatically; test, staging, and prod are all manual click deploys, locked down to specific users.
One option I considered, but did not invest the time to look at, was AWS Elastic Beanstalk as a deployment tool. It has a rich API for deploys. On the Azure side, it looks like Web Deploy supports deployment to Azure IIS sites.

Team Foundation Service publishing to Azure with multiple branches

I'm currently working with Team Foundation Service in combination with Windows Azure. I've created a Website in Azure and set up TFS publishing.
Everything is working perfectly, except that I was wondering how to configure different branches.
This article explains how you can use Git to configure different websites to point to different branches. It says:
In this example, I’ll be using GitHub.com to manage my source code.
You don’t have to use GitHub.com for this, so if you’re new to Git
don’t freak out, you have other options like CodePlex, BitBucket, or
TFS. Heck, you could even automate an FTP deployment if you want to.
However, I can't find the branching option for my TFS publishing configuration. Am I missing something? Is this a Git-only feature?
After searching the web and having some contact with Microsoft, I found the necessary configuration steps.
These are the things you have to do:
Link the two websites to your team project
Branch your code
Adjust the workspace mappings on the build definition for the staging site to map the branch
Change the solution to build property on the build definition for your staging site to point to the solution in your branch
I wrote a blog post about it that explains it in more detail: Branches, Team Foundation Service and Azure Websites
