My company has recently moved to a self-hosted GitLab instance, and now I'm trying to wrap my head around its CI/CD features to see how we could use them for our cases. For instance:
We have a PHP-based front-end project consisting of several PHP, CSS and JS files that, as of now, are copied to our Apache2 folder in the same structure we use for development.
Is it possible to configure GitLab to do this for us, say, by implementing the following workflow:
we make a DEPLOY branch in our repository
when we’re done with changes in other branches, we merge those branches into DEPLOY
GitLab CI/CD detects the new activity and automatically puts the latest version of the files from this branch into our Apache2 directory (connected as a remote folder)
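To make it concrete, I imagine something along these lines in a .gitlab-ci.yml (this is only my guess at what it might look like; the path and the rsync approach are hypothetical):

deploy:
  stage: deploy
  only:
    - DEPLOY
  script:
    # copy the checked-out files into the Apache2 directory (path is made up)
    - rsync -av --delete --exclude='.git' ./ /mnt/apache2-webroot/our-project/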
Any advice and guidance is appreciated. I'm currently lost in the many manuals that describe Docker deployment and the like, which we don't use in our project yet.
I have a DigitalOcean VPS running Ubuntu and a few Laravel projects. For each project's initial setup I do a git clone to create a folder with my application files from my online repository.
I do all development work on my local machine, where I have two branches (master and develop): I merge develop into my local master, then I push master to the online repository.
Now, back on my production server, when I want to bring all the changes into production I do a git pull from origin. So far this has resulted in git telling me to stash my changes. Why is this?
What would be the best approach to pull changes into the production server? Bear in mind that my production server has no working directory per se; all I do on my VPS is either clone or push upgrades into production.
You can take a look at CI/CD (continuous integration / continuous delivery) systems. GitLab, for example, offers a free-to-use plan for small teams.
You can create a pipeline with a manual deploy step (you have to press a button after the code is merged to the master branch) and use whatever tool you like to deploy your code (scp, rsync, ftp, sftp, etc.).
And the biggest benefit is that you can have multiple intermediate steps (even for the working branches) where you run unit tests, which prevents you from uploading failing builds whenever you merge non-working code.
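For example, a minimal .gitlab-ci.yml could look something like this (a sketch only: the test command, server address, user and paths are placeholders, and it assumes the runner has SSH access to the server):

test:
  stage: test
  script:
    - ./run-unit-tests.sh          # placeholder for your real test command

deploy_production:
  stage: deploy
  only:
    - master
  when: manual                     # the 'press a button' step in the GitLab UI
  script:
    - scp -r ./* deploy@example.com:/var/www/html/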
For the first problem, do a git status on production to see which files git sees as changed or added, and consider adding them to your .gitignore file (which should itself be part of your repo). Laravel generally has good defaults for these, but you might have added things or deviated from them in the process of upgrading Laravel.
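For example (a rough sketch; the ignored path is made up, and only discard local edits if you are sure you don't need them):

# on the production server: see what git considers changed or untracked
git status
# build output or machine-local files belong in .gitignore
echo "public/mix-manifest.json" >> .gitignore
# throw away local modifications that block the pull (destructive!)
git checkout -- .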
For the deployment, the best practice is to have something that is consistent, reproducible, loggable, and revertible. For this, I would recommend choosing a deployment utility. These usually do pretty much the same thing:
You define deployment parameters in code, which you can commit as a part of your repo (not passwords, of course, but things like the server name, deploy path, and deploy tasks).
You initiate a deploy directly from your local computer.
The script/utility SSH's into your target server and pulls the latest code from the remote git repo (authorized via SSH key forwarded into the server) into a 'release' folder.
The script does any additional tasks you define (composer install, npm run prod, systemctl restart php-fpm, soft-linking shared files like .env, etc.)
The script soft-links the document root to your new 'release' folder, which results in an essentially zero-downtime deployment. If any of the previous steps fail, or you find a bug in the latest release, you just soft-link to the previous release folder and your site still works.
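To illustrate, here is roughly what such a utility does under the hood, written as a plain shell sketch (host, paths and repo URL are placeholders):

ssh deploy@example.com <<'EOF'
RELEASE=/var/www/app/releases/$(date +%Y%m%d%H%M%S)
git clone --depth 1 git@gitlab.example.com:group/project.git "$RELEASE"
cd "$RELEASE"
composer install --no-dev
npm run prod
ln -sfn /var/www/app/shared/.env "$RELEASE/.env"  # soft-link shared config
ln -sfn "$RELEASE" /var/www/app/current           # switch the document root to the new release
sudo systemctl restart php-fpm
EOF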
Here are some solutions you can check out that all do this sort of thing:
Laravel Envoyer: A 1st-party (paid) service that allows you to deploy via a web-based GUI.
Laravel Envoy: A 1st-party (free) package that allows you to connect to your prod server and script deployment tasks. It's very bare-bones in that you have to write all of the commands yourself, but some may prefer that.
Capistrano: A (free) tried-and-tested, popular Ruby-based deployment utility.
Deployer: The (free) PHP equivalent of Capistrano. Easier to use, has a lot of built-in tasks (including a Laravel one), and doesn't require Ruby.
Using these utilities is not mutually exclusive with CI/CD if you want to go that route. You can use these tools to define the CD step in your pipeline while still doing other steps beforehand.
Netlify subdomains work based on branches of a repo. If I have a domain, say xyz.com, and repo Repo-A, the master branch will deploy to xyz.com and the dashboard branch will deploy to dashboard.xyz.com. However, the dashboard and master branches are very different except for a few visual elements.
I’m trying to figure out a clean way to structure the repo
Repo-A (master branch)
    src/app
    package.json
    webpack.config.js
Repo-A (dashboard branch)
    src/app
    package.json
    webpack.config.js
The problem with this approach is I’d have to change my webpack, package and src files extensively.
I believe switching back and forth between branches will generate a lot of junk in the dist/ folder too.
What’s the best repo structure to make this work? Are there tools to make life simpler for this use case?
Another approach -
Create a Release Repo that has release branches like master and dashboard.
commits to master of Repo A push the build to the master branch of the Release repo
commits to master of Repo B push the build to the dashboard branch of the Release repo
Is this a cleaner approach compared to first one? Any suggestions?
This feature seems to be more for staging/development/production (master) branches, when you are using them to track changes for review and doing pull requests to each sub-domain branch through the workflow. I don't use this feature, because it is easy to track the workflow by creating branch deploys anyway. Where I think this would really come in handy is for tracking different versions of my site at separate subdomains.
When using a sub-domain for a totally different project, you should consider moving it to its own repository and managing the project as its own site at the sub-domain, then entering a CNAME sub-domain entry into DNS to point to my-dashboard-site-name.netlify.com.
Mono-repo
You could keep them in the same mono-repo if you don't want to give them their own repositories; you would still separate each site's deploy. This is a little more complex than separate repositories, but tools like Lerna are there if you want to maintain it that way. It does make for a nice way to maintain projects that reuse the same libraries without publishing them to a package manager, keeping everything in the same mono-repo.
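As a sketch of the mono-repo route (assuming you create two Netlify sites from the same repo, one per sub-folder; the folder names here are made up), the dashboard site's netlify.toml could look like:

[build]
  base    = "dashboard/"     # sub-folder of the mono-repo this site builds from
  command = "npm run build"
  publish = "dist/"

The main site would get the same file with base pointed at its own folder.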
I’m trying to set up GitLab CI/CD for an old client-side project that makes use of Grunt (https://github.com/yeoman/generator-angular).
Up to now the deployment worked like this:
run ’$ grunt build’ locally which built the project and created files in a ‘dist’ folder in the root of the project
commit changes
changes pulled onto production server
After creating the .gitlab-ci.yml and making a commit, the GitLab CI/CD job passes, but the files in the 'dist' folder in the repository are not updated. If I define an artifact, I get the changed files in the download. However, I would prefer the files in the 'dist' folder in the repository to be updated so we can carry on with the same workflow, which suits us. Is this achievable?
I don't think committing to your repo from inside a pipeline is a good idea. Version control wouldn't be as clear, and some people have pipelines triggered automatically when the repo is pushed, which would set off a loop of pipelines.
Instead, you might reorganize your environment to use Docker; there are numerous reasons for using Docker in professional and development environments. To name just a few: it would let you save the freshly built project into a registry and reuse it whenever needed, at exactly the version you require and with the desired /dist inside, so you can easily run it in multiple places, scale it, manage it, etc.
If you changed to Docker, you wouldn't actually have to do a thing to keep dist persistent; just push the image to the registry after the build is done.
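A rough sketch of what that could look like in .gitlab-ci.yml, using GitLab's predefined registry variables (it assumes Docker-in-Docker is available on your runners and that your Dockerfile runs grunt build, so /dist ends up baked into the image):

build_image:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"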
But to actually answer your question:
There is a feature request that has been open for a very long time for the same problem you ask about: here. Currently there is no safe and professional way to do it, as GitLab team members state. You can, however, push changes back, as one of the GitLab team members (Kamil Trzciński) suggested:
git push http://gitlab.com/group/project.git HEAD:my-branch
Just put it in the script section of your .gitlab-ci.yml file.
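For example (a sketch only: it assumes you have set up credentials the runner can push with, e.g. a token of your own, and it uses [ci skip] in the commit message so the pushed commit doesn't re-trigger the pipeline):

build_dist:
  stage: build
  script:
    - grunt build
    - git config user.email "ci@example.com"   # placeholder identity for the runner
    - git config user.name "GitLab CI"
    - git add dist/
    - git commit -m "Rebuild dist [ci skip]"
    - git push http://gitlab.com/group/project.git HEAD:my-branch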
There are more hacky methods presented there, but be sure to acknowledge the risks that come with them (such pipelines are more error-prone and, if configured the wrong way, might for example publish confidential information or trigger an infinite loop of pipelines, to name a few).
I hope you found this useful.
I've been using GitLab on a private server for development. Unfortunately, requiring a dual-core, 2GB RAM VPS purely for the purpose of holding git repos for a couple of people is not cost-effective. I would like to migrate to the free GitLab-hosted accounts.
Is there a way to transfer a repo and its issues to the GitLab-hosted servers?
It depends on your GitLab version. They added an import/export feature in 8.9. If you have a lower version, you can update to the current version and export your data afterwards.
The following items will be exported:
Project and wiki repositories
Project uploads
Project configuration including web hooks and services
Issues with comments, merge requests with diffs and comments, labels, milestones, snippets, and other project entities
The following items will NOT be exported:
Build traces and artifacts
LFS objects
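If you only need the repository itself (without issues and the other items above), a plain git mirror also does the job (URLs are placeholders):

git clone --mirror https://old-server.example.com/group/project.git
cd project.git
git push --mirror https://gitlab.com/yourname/project.git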
I am trying to deploy a project to Azure via the "remote git repo" method. But in my repo, the actual Node application is a few directories in. Thus, Azure does not do anything when the repo is pushed.
Is there some way to configure the Azure website to run from a directory buried in the repo?
There's actually a super easy way. This scenario was anticipated by the Azure team, and there's a clean, simple way to handle it.
You simply create a text file at the root of your project called .deployment. In the .deployment file you add the following text...
[config]
project = mysubfolder
When you either Git deploy or use CI to deploy from source control, the entire repository is deployed, but the .deployment file tells Kudu (that's the engine that handles your website management) where the actual website (or node project) is.
You can find more info here.
Also, check out this post where I mention an alternative strategy for project repos in case that helps.
This isn't so much an Azure question as a Git question. What you want to know is whether there is a way to clone only a sub-directory or branch of a project. From doing some research on this just a couple of weeks ago, the best I could find were solutions for doing a sparse clone, which does allow one to restrict the files cloned (almost there) but does so within the entire project's directory structure (denied).
A couple of related SO questions & answers which you might find helpful:
How do I clone a subdirectory only of a Git repository?
(Short answer 'no')
Checkout subdirectories in Git?
(Answer describes the sparse checkout ability).
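For reference, the classic sparse-checkout recipe looks like this (repo URL and folder name are placeholders):

git clone --no-checkout https://github.com/user/project.git
cd project
git config core.sparseCheckout true
echo "mysubfolder/" >> .git/info/sparse-checkout
git checkout master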
I would love to see if a git guru might have a better answer based on updates to git, etc.
Good luck with it - I ended up just putting my node app in its own Git project as it seemed the most straightforward approach overall, though not ideal.