Disable specific puppet environments on puppetmasters - puppet

Just implemented puppet directory environments on our puppetmasters and I'm looking for a way to disable specific environments.
I have a production and a staging environment, and what I want to avoid is someone accidentally running puppet with the staging environment on a production server and vice versa. There is one puppetmaster in each of those environments, so I want to disable the "wrong" environment on each. Both masters use the same repo, which includes all the code.
I'm on puppet 3.8.7 if that's relevant but since we're planning on upgrading soon a solution that works for some other version is also welcome.
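For context, here is a sketch of the directory-environment layout involved (paths are assumptions for a Puppet 3.x confdir, not taken from the post). Because both masters serve the same checkout, both environments exist under environmentpath on each of them, which is exactly what allows the accidental cross-use. One naive workaround, sketched below and purely hypothetical, would be to point each master at a directory containing only its own environment:
# Assumed layout on each puppetmaster (Puppet 3.x, confdir /etc/puppet):
#   /etc/puppet/environments/production/{manifests,modules}
#   /etc/puppet/environments/staging/{manifests,modules}
# Hypothetical workaround on the production master: expose only one environment
mkdir -p /etc/puppet/allowed_environments
cp -a /etc/puppet/environments/production /etc/puppet/allowed_environments/
# then in /etc/puppet/puppet.conf on that master:
#   [main]
#   environmentpath = /etc/puppet/allowed_environments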

Related

Good practices for pulling from git repo into production server

I have a DigitalOcean VPS with Ubuntu and a few Laravel projects. For each project's initial setup I do a git clone to create a folder with my application files from my online repository.
I do all development work on my local machine, where I have two branches (master and develop); what I do is merge develop into my local master, then push master to my online repository.
Now, back on my production server, when I want to bring the changes into production I do a git pull from origin. So far this has resulted in git telling me to stash my changes. Why is this?
What would be the best approach to pull changes into the production server? Bear in mind that my production server has no working directory per se; all I do on my VPS is either clone the project or pull upgrades into production.
You can take a look at CI/CD (continuous integration / continuous delivery) systems. GitLab, for example, offers a free plan for small teams.
You can create a pipeline with a manual deploy step (you have to press a button after the code is merged into the master branch) and use whatever tool you like to deploy your code (scp, rsync, ftp, sftp, etc.).
And the biggest benefit is that you can have multiple intermediate steps (even for the working branches) where you can run unit tests, which prevents you from deploying failing builds (whenever you merge non-working code).
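As a rough illustration of what such a manual deploy step might actually run (host, user, paths, and the artisan call are placeholders, not from the answer):
# Executed by the manual "deploy to production" job on the runner
rsync -az --delete --exclude='.git' ./ deploy@prod.example.com:/var/www/my-app/
ssh deploy@prod.example.com 'cd /var/www/my-app && php artisan migrate --force'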
For the first problem, do a git status on production to see which files git sees as changed or added, and consider adding them to your .gitignore file (which itself should be part of your repo). Laravel generally has good defaults for these, but you might have added files or deviated from the defaults in the process of upgrading Laravel.
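For example (the ignored path below is only illustrative):
# On the production server: see what git considers modified or untracked
git status --short
# Then, in your local working copy, ignore those paths and commit the change
echo "public/mix-manifest.json" >> .gitignore
git add .gitignore && git commit -m "Ignore generated files"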
For the deployment, the best practice is to have something that is consistent, reproducible, loggable, and revertable. For this, I would recommend choosing a deployment utility. These usually do pretty much the same thing:
You define deployment parameters in code, which you can commit as a part of your repo (not passwords, of course, but things like the server name, deploy path, and deploy tasks).
You initiate a deploy directly from your local computer.
The script/utility SSH's into your target server and pulls the latest code from the remote git repo (authorized via SSH key forwarded into the server) into a 'release' folder.
The script does any additional tasks you define (composer install, npm run prod, systemctl restart php-fpm, soft-linking shared files like .env, etc.)
The script soft-links the document root to your new 'release' folder, which results in an essentially zero-downtime deployment. If any of the previous steps fail, or you find a bug in the latest release, you just soft-link to the previous release folder and your site still works.
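A minimal shell sketch of that flow, assuming a layout with releases/, shared/, and a current symlink (host, paths, and repo URL are placeholders; tools like Deployer or Capistrano automate exactly this):
#!/usr/bin/env bash
# Illustrative only - runs on the target server
set -euo pipefail
APP=/var/www/my-app
RELEASE="$APP/releases/$(date +%Y%m%d%H%M%S)"
git clone --depth 1 git@example.com:me/my-app.git "$RELEASE"
cd "$RELEASE"
composer install --no-dev --optimize-autoloader
ln -sfn "$APP/shared/.env" "$RELEASE/.env"   # shared files survive across releases
ln -sfn "$RELEASE" "$APP/current"            # point the document root at the new release
sudo systemctl restart php-fpm               # service name varies by distro
Rolling back is then just a matter of re-pointing the current symlink at the previous release directory.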
Here are some solutions you can check out that all do this sort of thing:
Laravel Envoyer: A 1st-party (paid) service that allows you to deploy via a web-based GUI.
Laravel Envoy: A 1st-party (free) package that allows you to connect to your prod server and script deployment tasks. It's very bare-bones in that you have to write all of the commands yourself, but some may prefer that.
Capistrano: This is a (free) tried-and-tested, popular Ruby-based deployment utility.
Deployer: The (free) PHP equivalent of Capistrano. Easier to use, has a lot of built-in tasks (including a Laravel one), and doesn't require Ruby.
Using these utilities is not necessarily exclusive of doing CI/CD if you want to go that route. You can use these tools to define the CD step in your pipeline while still doing other steps beforehand.

automate adding capabilities to Azure DevOps self-hosted agents

As far as I know, Azure DevOps agents are capable of automatically detecting their own capabilities. Based on the documentation, as long as I restart the host once a new piece of software has been installed, the capability should be registered automatically.
What I am having trouble with right now is getting the agent to detect the presence of Yarn on a self-hosted agent on a Windows host. Looking at the PATH environment variable shows the Yarn executable is there, but it is not listed as a capability even after restarting the host. My current workaround is to manually add Yarn to the capability list and set its value to true.
As a side note, Yarn was installed via Ansible using the win_chocolatey module. The install was successful with no errors.
I am wondering a few things
1) Am I missing something which is causing this issue?
2) Is this an inherent issue with Yarn? If this is an inherent issue with Yarn, is there a way to automate the process of manually adding yarn as a capability?
Capabilities for a Windows agent come from environment variables.
If you want to add a capability, you add a line that sets a machine-level environment variable:
[System.Environment]::SetEnvironmentVariable("CAPABILITYNAME", "value", "Machine")
When you restart the agent service, it picks this up.
I am currently trying to do something similar for a set of linux agents...
The interesting thing about capabilities is that they are not paths. For example, the agent might show that you have MSBuild for 2019 and 2017, but I have not been able to use those as pipeline variables.

Restrict people from running production pipelines

I'm evaluating GitLab Community Edition as a self-hosted option. Everything is fine with the product, except that anyone who has access to pipelines (Master or Admin) can run a deployment to production.
I looked at their issue board and see that this is a feature that will not be coming to GitLab anytime soon.
See https://gitlab.com/gitlab-org/gitlab-ce/issues/20261
For now, I plan on deploying my spring boot applications using the following strategy.
A separate runner installed on the production server
There will be an install script with instructions somewhere on the production server
The gitlab-runner user will only have permission to run that specific script (somehow)
Have validations in the script for the GITLAB_USER_NAME variable
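A rough sketch of what that validation could look like inside the script (the allowed names are placeholders):
#!/usr/bin/env bash
# deploy-prod.sh - the only script the gitlab-runner user is allowed to execute
set -euo pipefail
case "${GITLAB_USER_NAME:-}" in
    "Alice Doe"|"Bob Roe")   # placeholder names of people allowed to deploy
        ;;
    *)
        echo "User '${GITLAB_USER_NAME:-unknown}' may not deploy to production" >&2
        exit 1
        ;;
esac
# ... actual deployment of the Spring Boot artifact goes here ...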
But I also see that there are disadvantages in this approach.
GITLAB_USER_NAME is an environment variable which can be overridden easily thus compromising the validation.
Things are complicated when introducing new prod servers.
Too many changes apart from .gitlab-ci.yml... CI/CD was supposed to be simple not painful...
Are there any alternative approaches or hacks for this?

How does deployment on remote servers work?

I'm a bit new to version control and deployment environments and I've come to a halt in my learning about the matter: how do deployment environments work if developers can't work on the same local machine and are forced to always work on a remote server?
How should the flow of the deployment environments be set up according to best practices?
For this example I considered three deployment environments: development, staging and production; and three storage environments: local, repository server and final server.
This is the flow chart I came up with but I have no idea if it's right or how to properly implement it:
PS. I was thinking the staging tests on the server could have restricted access through login or ip checks, in case you were wondering.
I can give you (based on my experience) a good and straightforward practice; this is not the only approach, as there is no single standard that applies to all projects:
Use a distributed version control system (like git/github):
Make a private/public repository to handle your project
local Development:
Developers will clone the project from your repo and contribute to it; it is recommended that each one work on their own branch and create a new branch for each new feature
Within your team, one person is responsible for merging the branches that are ready into the master branch
I strongly suggest working in a Virtual Machine during development:
To isolate the dev environment from the host machine and deal with dependencies
To have a Virtual Machine identical to the remote production server
Easy to reset, remove, reproduce
...
I suggest VirtualBox as the VM provider and Vagrant for provisioning
I suggest making your project folder a shared folder between your host machine and your VM, so you write your source code on your host OS using the editor you love, and at the same time that code exists and runs inside your VM; isn't that awesome?!
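For example (the box name below is just a placeholder), bringing such a VM up with Vagrant takes only a few commands, and by default the project folder is shared into the VM at /vagrant:
# Run on the developer's host machine, inside the project folder
vagrant init ubuntu/focal64   # writes a Vagrantfile for the chosen base box
vagrant up                    # creates and boots the VirtualBox VM
vagrant ssh                   # log into the VM; the project folder is mounted at /vagrant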
If you are working with python I also strongly recommend using virtual environments (like virtualenv or anaconda) to isolate and manage inner dependencies
Then, after writing some code, each developer can commit and push their changes to the repository
I suggest using project automation/setup tools (like fabric/fabtools for Python):
Make a script that, with one click or a few commands, reproduces the whole environment, all the dependencies, and everything the project needs to be up and running, so that all developers (backend, frontend, designers...), no matter their knowledge or host machine type, can get the project running with very little effort. I also suggest doing the same thing for the remote servers, whether manually or with tools like fabric/fabtools
The script will mainly install OS dependencies, then project dependencies, then clone the project repo from your version control system. To do so, you need to give the remote servers (testing, staging, and production) access to the repository: add each server's SSH public key to the keys in your version control system (or use agent forwarding with fabric)
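A skeleton of such a setup script might look like this (package names, paths, and the repo URL are placeholders):
#!/usr/bin/env bash
# bootstrap.sh - one-shot environment setup, illustrative only
set -euo pipefail
sudo apt-get update
sudo apt-get install -y git python3 python3-venv          # OS dependencies (example set)
git clone git@example.com:team/project.git /srv/project   # needs this server's SSH key in your VCS
cd /srv/project
python3 -m venv .venv
.venv/bin/pip install -r requirements.txt                  # project dependencies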
Remote servers:
You will need at least a production server which makes your project accessible to the end-users
It is recommended that you also have testing and staging servers (I assume you know the purpose of each one)
Deployment flow: local → repo → remote server. How it works:
Give the remote servers (testing, staging, and production) access to the repository: add each server's SSH public key to the keys in your version control system (or use agent forwarding with fabric)
The developer writes the code on his machine
Ideally also writes tests for the code and runs them locally (and on the testing server)
The developer commits and pushes the code to the branch they are working on in the remote repository
Deployment:
5.1 If you would like to deploy a feature branch to testing or staging:
SSH into the server and cd to the project folder (cloned from the repo manually or by the automation script)
git checkout <the branch used>
git pull origin <the branch used>
5.2 If you would like to deploy to production:
Make a pull request; after the pull request has been validated by the manager and merged into the master branch:
SSH into the server and cd to the project folder (cloned from the repo manually or by the automation script)
git checkout master # not needed because the server should already be on master
git pull origin master
I suggest writing a script with fabric/fabtools, or using a tool like Jenkins, to automate the deployment task.
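For example, a tiny wrapper (the host user and project path are placeholders) that runs the pull on the target server:
#!/usr/bin/env bash
# deploy.sh <server> [branch] - illustrative only
set -euo pipefail
SERVER="${1:?usage: deploy.sh <server> [branch]}"
BRANCH="${2:-master}"
ssh "deploy@${SERVER}" "cd /srv/project && git checkout ${BRANCH} && git pull origin ${BRANCH}"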
Voilà! Deployment is done!
This is a somewhat simplified approach; there are plenty of other recommended tools, tasks, and best practices.

Is it possible to set up multiple developers to edit the same Liferay Portal remotely?

I am new to Liferay Portal and am using 6.1 CE. I am trying to find a way to allow multiple developers to make changes to a single Liferay Portal simultaneously. I know that I can set up a Staging environment and allow all developers to log into the Live site and develop inside the Staging environment on that instance. I also know that I can set up remote staging - allowing a developer to make changes on a separate staging environment (on a different Liferay instance) and then remotely publish the changes to the Live site. I also know that multiple developers can each log in to that single remote staging environment.
What I want to know is this: Can I set up multiple Liferay instances as remote staging environments (one for each developer) that all publish to the same Live Liferay Portal Instance (separate from all staging environments)? If so, will changes made in one remote staging environment and then published to Live be reflected in the other remote staging environments? E.g., if a page is changed in Staging Env. A and published to Live, will the change be seen in Staging Env. B, or would it be oblivious to the change?
I hope the question/scenario makes sense. If further clarification is needed, please let me know so that I can add detail. Thanks in advance.
Starting with Liferay 6.1 you're able to work with page variations - effectively branches of your content, so that you have multiple parallel versions that you can work with. This seems to come closest to what you describe, although it might not be an exact match.
You can also manually export/import pages and articles and move them around, but I have the feeling that you're looking for an automatism that works more like a distributed versioning system - I doubt you'll find that anywhere. A certain amount of manual work to disambiguate conflicts will still remain - and the interface for that is typically a bit hairy.
