By default, Synapse creates a new branch called workspace_publish when a publish is executed via the UI. There are two things about this that I am uncertain about:
The branch has no relation to or files from the collaboration branch (main)
The branch contains only a folder with the workspace name
It feels like I would want the published templates in my main branch together with everything else. I would also like the templates in a different folder, because having the workspace name at the root of my repository does not make sense to me. The current setup also makes it difficult to have a common release pipeline for both the resource ARM templates and the published workspace templates, because they are managed in different branches.
I know that I can change the publish branch via publish_config.json (a sketch of that file follows the list below). There is also a Microsoft blog post that describes publishing directly to main, but that approach has multiple issues:
The workspace name is still kept as the root folder
Pushing directly to main should only be possible via pull request
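For reference, the override lives in a publish_config.json file at the root of the collaboration branch; a minimal sketch, with an illustrative branch name:

{
    "publishBranch": "build/workspace_publish"
}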
The way the templates are published makes me feel like they are not supposed to be merged to main. Is my feeling incorrect? Should the publish branch always stay completely separate from main? If so, why?
When working with regular source code (Java, C++, etc.), there are commands like
git pull ..
git fetch ..
git push ..
to sync your remote Git repo branch with your local branch.
What is the equivalent of these in the Azure Data Factory world?
So, I am using Azure Data Factory with an Azure Git repo.
I am working in a particular feature branch, "feature branch".
My pipeline has a copy activity that hits a dataset in its "Sink" stage.
Here is a screenshot, but it's pretty simple and seems right.
I see that my dataset definition (JSON) in the remote Git repository is different from what I see in the Azure portal GUI (pointed at that same remote branch). The ADF GUI in the Azure portal is correct; the one in the Git repo contains some stuff that I already deleted, but it does not get deleted there (why?).
So, when I debug the pipeline I get errors which point to this discrepancy as the problem. I want to sync the environments, but given that I do not understand how the discrepancies came about, I don't know how to fix the issue. Any help is appreciated.
In the ADF world, we use Publish, and we create a new pull request to merge new changes from a feature branch into the main branch.
It seems like your Git repository version is not up to date with the live ADF.
If there are any pending changes in your main branch, you can click the Publish button to apply the changes.
And if you are working on feature branches, you can merge the changes using a new pull request.
If you have multiple feature branches, you will need to manually compare the different versions to resolve any conflicts.
I have a use case where I have a template project set up that I then use as a base for new microservices. While this works in building some basic stuff, such as base files, CI/CD pipeline, etc., it's still the template. I'm going through all the CI/CD variables now to check, but wanted to see if anyone else had this use case come up. Basically, I want to know if there's something like a "first run on repo creation" trigger that can run as soon as the repo is cloned from the template, but then never run again. This stage would modify some of the files in the repo to change the names of things like the service, etc.
Is there any way to currently do this? The only other way I can think of doing it would be to have another project that uses the API or something to get the new repo name and then check in the modified files.
Thanks!
You could use a rule that checks for a specific commit message crafted to be the message at the HEAD of the template project. Optionally, you can also check if the project ID is not the project ID of the template project (to avoid the job running in the template project itself).
rules:
  - if: '$CI_COMMIT_MESSAGE == "something very specific" && $CI_PROJECT_ID != "1234"'
When a new project is created, the rule will evaluate to true, but future commits that users make won't (or at least shouldn't, under normal circumstances) match the rule.
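Put together, a minimal first-run job could look roughly like this (a sketch: the commit message, the project ID, and the sed rename step are placeholders for your template's own values):

rename-service:
  rules:
    # run only on the commit inherited from the template,
    # and never in the template project itself
    - if: '$CI_COMMIT_MESSAGE == "something very specific" && $CI_PROJECT_ID != "1234"'
  script:
    # CI_PROJECT_NAME is a predefined GitLab CI variable;
    # the placeholder name and the file to rewrite are examples
    - sed -i "s/template-service/${CI_PROJECT_NAME}/g" README.md

Note that the job would still need to commit and push the renamed files back, for example with a project access token, which has its own caveats.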
Though, to hook into project creation, using a system hook (for self-managed instances) might be a better option.
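For example, on a self-managed instance, a project_create system hook POSTs a JSON payload along these lines (trimmed sketch; verify the exact field set against your instance's documentation):

{
    "event_name": "project_create",
    "name": "my-new-service",
    "path_with_namespace": "group/my-new-service",
    "project_id": 74
}

A small listener service could react to that event and push the renamed files into the new repo.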
In Git mode, when we want to test a pipeline, ADF forces us to publish first.
My understanding is that the Publish action does two things:
Saves to the live ADF (DEV) service, as given here
Creates ARM templates in a branch (adf_publish, or the branch we specify)
But to get the 'Publish' button enabled, we need to be in the collaboration branch. This means no two people can work at the same time on a DEV ADF, as both will be asked to publish by ADF before they can test the pipeline they are building.
If this is the case, then why is there an option for us to connect to a branch other than the collaboration branch (by changing it from the dropdown)?
Also, what is a 'working branch'?
As we know, we can only 'Publish' from the collaboration branch, and changes are pushed to the "adf_publish" branch by default. By default, the collaboration branch is named main.
1. If you want to work as a team, you need to create several branches.
2. Working on your own branch, you can validate and debug the pipeline to make sure everything is OK.
3. Then click Save all; this commits to your own branch. If you want to publish, you need to create a pull request to the main branch.
4. After the pull request is merged to the main branch, you can publish to the adf_publish branch.
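For reference, after a publish the adf_publish branch typically holds the generated ARM templates under a folder named after the factory, roughly like this (linkedTemplates only appears for larger factories):

<factory-name>/
    ARMTemplateForFactory.json
    ARMTemplateParametersForFactory.json
    linkedTemplates/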
We are building a set of serverless functions in Azure, but we are having difficulty deciding how to structure our source (Azure Git) and DevOps pipelines to support them.
I am thinking of a single Git repo, with all function apps housed independently within projects. We may have a lot of these function apps; we see great value in small code segments that do utility-type work, and I don't want dozens and dozens of independent repos just because of DevOps deployments. Is there a way to have a unique build and release process for each project, rather than for the repo entirely? We aren't clear how this can be done, and searches have come up empty. I thought it was possible to have unique build YAMLs per project across many projects in a single repo, but it's unclear how to implement the DevOps build and release pipelines to support this approach, i.e. only a single function gets updated and we need to deploy it. Any guidance on whether this is possible and how to approach it would be great.
I haven't done this myself, but I'm in a similar situation where I'd like to have multiple functions (and other stuff) in a single Git repo for simplicity, but only build/deploy them as needed when they change. It looks like you can have multiple pipelines on a single repo, with a different YAML file for each pipeline. The steps are documented in this link, and summarized below.
1. In Azure DevOps, create a new Pipeline.
2. On the "Where is your code?" page, choose the Use the classic editor option at the bottom.
3. Select your source repo and branch.
4. On the "Select a template" screen, choose the YAML option at the top. Hit Apply.
5. In the YAML file path field, specify the path and name of the YAML file for the pipeline.
You may want to set the pipeline to run manually if you don't want a build each time there's a commit to the repo.
EDIT: There may be an easier way to do this now. If you go through the New Pipeline wizard and select your source location, on the Configure tab you can choose the Existing Azure Pipelines YAML file option at the bottom. This lets you select a custom YAML file directly.
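If you also want a build only when that function's folder changes, each pipeline's YAML can scope its trigger with path filters. A sketch, with made-up folder names:

# functions/orders/azure-pipelines.yml
trigger:
  branches:
    include:
      - main
  paths:
    include:
      - functions/orders

pool:
  vmImage: ubuntu-latest

steps:
  - script: echo "build and deploy the orders function app here"
    displayName: Build orders function

This way, a commit that only touches functions/orders triggers just that function's pipeline.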
I’m trying to set up GitLab CI/CD for an old client-side project that makes use of Grunt (https://github.com/yeoman/generator-angular).
Up to now the deployment worked like this:
run '$ grunt build' locally, which built the project and created files in a 'dist' folder in the root of the project
commit the changes
the changes are pulled onto the production server
After creating the .gitlab-ci.yml and making a commit, the GitLab CI/CD job passes, but the files in the 'dist' folder in the repository are not updated. If I define an artifact, I get the changed files in the download. However, I would prefer the files in the 'dist' folder in the repository to be updated so we can carry on with the same workflow, which suits us. Is this achievable?
I don't think committing into your repo from inside a pipeline is a good idea. Version control wouldn't be as clear, and since some people trigger pipelines automatically when the repo is pushed to, it could set off a loop of pipelines.
Instead, you might reorganize your environment to use Docker; there are numerous reasons for using Docker in professional and development environments. To name just a few: it would enable you to save the freshly built project into a registry and reuse it whenever needed, with exactly the version you require and the desired /dist inside, so that you can easily run it in multiple places, scale it, manage it, etc.
If you changed to Docker, you wouldn't actually have to do a thing to keep dist persistent; just push the image to the registry after the build is done.
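As a sketch, the build-and-push could look like this in .gitlab-ci.yml (it uses GitLab's predefined registry variables; the Docker versions and the Dockerfile itself are assumptions):

build-image:
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    # the Dockerfile is assumed to run grunt build and serve the resulting dist/
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"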
But to actually answer your question:
There is a feature request that has been hanging for a very long time for the same problem you asked about: here. Currently there is no safe and professional way to do it, as GitLab members state. Although you can push back changes, as one of the GitLab members (Kamil Trzciński) suggested:
git push http://gitlab.com/group/project.git HEAD:my-branch
Just put it in the script section of your gitlab-ci file.
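A minimal sketch of such a job (PUSH_TOKEN is an assumed CI/CD variable holding a token with write access to the repository; the [skip ci] marker keeps the pushed commit from triggering another pipeline):

update-dist:
  image: node:16
  script:
    - npm ci
    - npx grunt build
    - git config user.name "ci-bot"
    - git config user.email "ci-bot@example.com"
    - git add dist
    - git commit -m "Update dist [skip ci]" || echo "dist unchanged"
    - git push "https://ci:${PUSH_TOKEN}@gitlab.com/group/project.git" HEAD:my-branch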
There are more hacky methods presented there, but be sure to acknowledge the risks that come with them: pipelines become more error prone, and if configured the wrong way they might, for example, publish confidential information or trigger an infinite pipeline loop.
I hope you found this useful.