Should I add ParcelJS `.parcel-cache` to CI bitbucket pipeline caches? Is there a benefit? - bitbucket-pipelines

BitBucket Pipelines have the ability to add custom caches which are reused between pipeline runs. ParcelJS makes use of its own .parcel-cache file cache to speed up builds between runs.
I have a team running several CI branches at the same time and we're looking to move to ParcelJS.
As far as I can tell, Bitbucket creates the same cache regardless of the branch you're on, so I was wondering whether it is possible, and whether it makes sense, to create a custom cache pointing to .parcel-cache.

As far as I understand, .parcel-cache is only useful for the development server:
If you restart the dev server, Parcel will only rebuild files that have changed since the last time it ran.
https://parceljs.org/features/development/#caching
I reckon you are running build, lint and test steps in Bitbucket Pipelines, so unless I misunderstand Parcel's cache purpose, you'd be better off not saving it. Otherwise you would be wasting build time uploading and downloading a cache that would not speed up your scripts in return.
You should test this on your workstation nonetheless: with a clean clone of your repo, does the presence of this cache speed up the commands that run in the pipeline? If it does, then you can tackle the question of how useful the cache is across different branches.
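If you do decide to experiment, defining a custom cache for .parcel-cache in bitbucket-pipelines.yml would look roughly like the sketch below (the step name and scripts are placeholders, not taken from the question):

definitions:
  caches:
    parcel: .parcel-cache        # custom cache pointing at Parcel's cache directory

pipelines:
  default:
    - step:
        name: Build              # placeholder step name
        caches:
          - node                 # Bitbucket's built-in node_modules cache
          - parcel               # the custom cache defined above
        script:
          - npm ci
          - npm run build

Time the pipeline with and without the parcel entry; the upload/download cost of the cache can easily outweigh whatever rebuild time it saves.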

Related

Intermediate step(s) between manual prod and CI/CD for Node/Next on EC2

For about 18 months now I've been working in Node; and for the last 6 months I've been slowly migrating my existing WordPress websites to NextJS.
To date, I've been deploying to production manually. I log into my production server, check out the latest release from GitHub, build, and do a pm2 restart.
Even though the above workflow seems to be the most commonly documented around the internet, it's always felt a little wrong to me.
Recently, I found myself in a situation where I needed to customise some 3rd party code. So, my main code now has a line in package.json that says
{
  ...
  "dependencies": {
    ...
    "react-share": "file:../react-share/react-share-4.4.1.tgz",
    ...
  },
  ...
}
which implies that I'm going to check out my custom react-share, build it somewhere on the production server, change this line to point to wherever I put it, and then rebuild.
Also, I'm using Prisma, which means that every time I deploy, before I do a build, I need to do an npx prisma generate to create the client.
This now all seems really, really wrong.
I don't know how a "simple" CI/CD environment might look, but whatever it looks like, it feels like overkill. It's just me doing development, and my production environment is a single EC2 server sitting behind AWS CloudFront.
It seems to me that I should be doing something more/different than what I'm currently doing, in service to someday moving to a CI/CD model, if/when I have a whole team working on this, or sufficient users that I have multiple load-balanced servers and need production to be continually up.
In particular, it feels like I shouldn't be building on the production server.
Are there any intermediary step(s) I can/should be taking for faster/less-error-prone/less-down-time deployment to a single EC2 instance for Next/Node apps, between manually deploying as I am currently, and some sort of CI/CD setup? Or are my only choices to do what I'm doing now, or go research how to do CI/CD?
You're approaching the early stages of what is technically called DevOps, if you aren't there already, as it appears from your context. What you're asking about is a broad topic, to put it mildly, and explaining everything here would almost amount to writing an article about it.
However, here is a broad overview of how to approach it.
I don't know how a "simple" CI/CD environment might look, but whatever it looks like, it feels like overkill.
Simplicity and complexity are relative terms: a system that is complicated for one person might be simple for another. CI/CD doesn't lay down laws that you need to follow in order to create a perfect deployment procedure, because everyone's deployment requirements are unique at some point.
In bullet points, here is what you need to figure out before you start setting up CI/CD:
The sequence of steps your deployment procedure needs in order to deploy the latest version. Since you've already been deploying manually, you already know the steps; all you need to do is fine-tune each one so it requires no manual intervention when the CI program executes it automatically.
Choose a CI program, such as Travis CI or Circle CI; if you're using GitHub, it has its own GitHub Actions for this purpose (see its documentation for details). Your CI program is responsible for executing the deployment steps you describe to it in whatever format it understands (usually .yml).
The CI program executes your steps on your behalf based on the conditions you provide (for example, when code is pushed to the prod branch). It runs the commands on a machine (such as your EC2 instance); specifically, a GitHub Actions runner is responsible for running your commands on your machine, and the runner must be set up beforehand on the instance you intend to deploy to. More details on runners can be found in the relevant documentation.
Since the runner actually executes the commands on your machine, make sure that all required commands and parameters, including the files and directories involved, are accessible to the runner process, at least from a permissions point of view. For example, running your npx prisma generate command requires that the npx command is available and executable on the system, and that the directories in which the command creates and modifies files are accessible to the runner. The same goes for every other command.
Get your hands dirty with bash scripting as well.
If your steps involve dynamic information, like the package.json entry you mentioned that needs to be updated, then a custom bash script that updates it automatically will help, for instance. There are, however, several other ways, depending on the specific nature of the dynamic changes.
The above points are a huge (by huge, I mean astronomically huge) oversimplification of how CI/CD pipelines are set up, but I hope they give you the idea; a minimal sketch of what this could look like with GitHub Actions follows.
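The sketch below rests on assumptions that aren't in the question: a self-hosted GitHub Actions runner already registered on the EC2 instance, main as the production branch, and a pm2 process named my-app.

# Hypothetical .github/workflows/deploy.yml -- a rough sketch, not a definitive setup
name: deploy
on:
  push:
    branches: [main]             # assumption: "main" is the production branch
jobs:
  deploy:
    runs-on: self-hosted         # the runner lives on the EC2 instance itself
    steps:
      - uses: actions/checkout@v4
      - run: npm ci              # install dependencies from the lockfile
      - run: npx prisma generate # regenerate the Prisma client before building
      - run: npm run build       # next build
      - run: pm2 restart my-app  # "my-app" is a placeholder pm2 process name

Each run line maps to one of the manual steps already being performed; moving the build off the production box is a separate improvement, discussed below.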
In particular, it feels like I shouldn't be building on the production server.
Your feeling is legitimate. You should replicate your production environment (including its deployment procedure) as closely as possible in a separate development environment, so that all your experiments, development and testing happen away from production; deploy to production only after successful evaluation in the development environment. Steps like building will most likely run in both environments, since a build is something your program needs regardless of where it runs. Your future team will appreciate this separation of environments.
if/when I have a whole team working on this, or sufficient users that I have multiple load-balanced servers and need production to be continually up.
Again, this small statement is a whole IT discipline in itself, known as system design: to put it simply, you or your team design an architecture for your whole system that supports your business requirements and scales as your audience grows. That is more than a simple Stack Overflow Q&A can cover.
Therefore,
or go research how to do CI/CD?
is what I'd recommend, and after reading everything above, it should also feel like the right way forward.
Useful references to begin with (not an endorsement of any particular resource; you can search for other or better ones too):
GitHub Actions self-hosted runners
System Design - Getting started
Bash scripting
Development, Staging, Production

About gitlab CI runners

I am new to GitLab CI and I am fascinated by it. I've already managed to get the pipelines working, even using Docker containers, so I am familiar with the flow of setting up jobs and artifacts. Now I just wish to understand how this works. My questions are about the following:
Runners
Where is everything actually happening? I mean, which computer is running my builds and executables? I understand that GitLab has its own shared runners that are available to users; does this mean that if a shared runner grabs my jobs, they will run wherever those runners are hosted? And if I register my own runner on my laptop and use that specific runner, will my builds and binaries run on my computer?
Artifacts
In order to run/test code, we need the binaries, which are grabbed as artifacts from the build stage. For the build part, if I use cmake, for example, in the script part of the CI .yml file I create a build directory, call cmake .. and so on. Once my job is successful, if I want the binary I have to go into GitLab and retrieve it myself. So my question is: where is everything saved? I notice that the runner, within my project, creates something like refs/pipeline/, but where is this actually? How could I get those files and new directories onto my laptop?
Working space
Pretty much: where is everything happening? The runners, the execution, the artifacts?
Thanks for your time
Everything that happens in each job/step of a pipeline happens on the runner host itself (which can be the GitLab server directly, if a runner is installed there), and the details depend on the executor you're using (shell, docker, etc.).
If you're using gitlab.com, there are a number of shared runners that the GitLab team maintains and that you can use for your project(s), but as they are shared with everyone on gitlab.com, it can take some time before your jobs run. However, whether you self-host or use gitlab.com, you can create your own runners specific to your project(s).
If you're using the shell executor, while the job is running you could see the files on the filesystem somewhere, but they are cleaned up after that job finishes. It's not really intended for you to access the filesystem while the job is running. That's what the job script is for.
If you're using the docker executor, the gitlab-runner service will start a docker instance from the image you specify in .gitlab-ci.yml (or use the default that is configurable). Then the job is run inside that docker instance, and it's deleted immediately after the job finishes.
You can add your own runners anywhere -- AWS, a spare machine lying around, even your laptop -- and jobs will be picked up by any of them. You can also turn off shared runners and force jobs to run on one of your runners if needed.
In cases where you need an artifact after a build/preparatory step, it's created on the runner as part of the job as above, but the runner then automatically uploads the artifact to the GitLab server (or to another service that implements the S3 protocol, such as AWS S3 or MinIO). Unless you're using S3/MinIO, it will only be accessible through the GitLab UI or through the API. In the UI, however, it shows up on any related MRs and on the pipeline page, so it's fairly accessible.
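To illustrate both points (where the job runs and where the artifact goes), here is a minimal .gitlab-ci.yml sketch for the cmake case from the question; the image, paths and binary name are placeholders:

build:
  image: gcc:13                  # container the docker executor starts on the runner host
  stage: build
  script:                        # everything below runs inside that throwaway container
    - mkdir -p build && cd build
    - cmake ..
    - make
  artifacts:
    paths:
      - build/myapp              # placeholder binary, uploaded to the GitLab server afterwards
    expire_in: 1 week

The container is removed when the job finishes; only the files listed under artifacts survive, downloadable from the job or pipeline pages or through the API.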

Keep Gitlab CI pipeline on single runner to ensure cache is shared

I have a local GitLab server running with a few GitLab CI runners. In the past, we've had each runner set up with concurrent = 1, and then, when a pipeline runs, any available runner takes any job in each stage.
However, I'd like to start caching dependencies between stages. This means that I must ensure an entire pipeline is run within a single runner instance (I'm trying to avoid uploading caches).
Is it possible for an entire pipeline to be assigned a runner? But have 2+ pipelines run concurrently on multiple runners?
The cache is always stored in the same location where the runner is installed and running[1]. So to share a cache across all your runners, you need to set up an S3-compatible store such as MinIO[2] and configure your runners to use it as their cache.
Without uploading (and downloading) the cache to central storage, it is not possible for one runner to access another runner's cache.
[1]https://docs.gitlab.com/ce/ci/caching/#cache-vs-artifacts
[2]https://docs.gitlab.com/runner/install/registry_and_cache_servers.html#install-your-own-cache-server
Is it possible for an entire pipeline to be assigned a runner?
Yes. Just give every runner a unique tag, then tag every job in your pipeline with the tag of a single runner. This ensures that your pipeline is executed by that one runner only. For more, see https://docs.gitlab.com/ce/ci/runners/#using-tags
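A rough sketch of what that looks like in .gitlab-ci.yml, assuming a runner registered with the (hypothetical) tag runner-a and a node_modules cache as the example:

build:
  stage: build
  tags:
    - runner-a                   # pin this job to the runner tagged "runner-a"
  script:
    - npm ci
  cache:
    key: deps
    paths:
      - node_modules/            # stored locally on that runner between jobs

test:
  stage: test
  tags:
    - runner-a                   # same tag, so the same runner (and its local cache) is used
  script:
    - npm test
  cache:
    key: deps
    paths:
      - node_modules/

Because both jobs carry the same tag, they land on the same runner host, so the local cache can be reused without uploading it anywhere.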
What you want is currently (GitLab 11.7) not possible (at least on Windows, it seems) without the significant administrative overhead of assigning a specific runner to each of your jobs. Pinning a specific runner to your project and disabling all shared ones would work too.
There are a handful of issues that prevent this use case, as it is not possible to share the runners' cache even with an S3 blob storage configuration (we tried MinIO).
One of them is a race condition that prevents the cache from being extracted correctly when subsequent jobs are executed on different nodes. This is especially the case for parallel jobs.
What we tried:
Sharing the cache using an SMB folder available on all runner machines
Using MinIO to share the cache
You can find our bug ticket here:
https://gitlab.com/gitlab-org/gitlab-runner/issues/3920

Deploying and scheduling changes with Ansible OSS

Please note: I am not interested in any enterprise/for-pay (Tower?) solutions here, only solutions available via Ansible's OSS offering.
OK so I've got my Ansible project configured and working perfectly, woo hoo! Looks something like this:
myansible01.example.com:/opt/ansible/
    site.yml
    fizz.yml
    buzz.yml
    group_vars/
    roles/
        common/
            tasks/
                main.yml
            handlers/
                main.yml
        foos/
            tasks/
                main.yml
            handlers/
                main.yml
There are several things I need to accomplish to get this working in a production environment:
I need to be able to automate the deployment of changes to this project
I need to schedule playbooks to be run, say, every 30 seconds (to ensure all managed nodes are always in compliance)
So my concerns:
How are changes usually deployed to live Ansible projects? Say the project is located at myansible01.example.com:/opt/ansible (my Ansible server). Is it sufficient to simply delete the Ansible project root (rm -rf /opt/ansible) and then copy the latest (containing changes) Ansible project back to the same location? What happens if Ansible is currently running any plays while I perform this "drop-n-swap"?
It looks like the commercial offering (Ansible Tower) has a scheduling feature built into it, but the OSS offering does not. How can I schedule Ansible OSS to run plays at certain times? For instance, I might want certain plays to be run every 30 seconds, so as to ensure nodes are always in compliance. Is cron sufficient for this, or is there a more standard approach?
For this kind of task you typically want an orchestration engine such as Jenkins to do all your, well, orchestration.
You can set Jenkins to run playbooks on timers or other events such as a push to an SCM such as git.
Typically a job starts by checking out a tag/branch of our Ansible code base and then applying it to all of our specified servers, so you always know what is being run. If you want, this can simply be the head of master (in git terms), so it always applies the most recent changes. If you also hook this into your SCM repo, then a simple push will force those changes to be applied to all of your servers.
Because of that immediacy you might want to consider only doing this on some test servers that then have some form of testing done against them (such as Serverspec) to verify that your changes are good before rolling them out to a production environment.
Jenkins, by default, will not run a job while the same job is already running (or if you are maxed out on executor slots), so you can always be sure that it will only pull the repo (including any changes) after your Ansible run is complete. If you have multiple jobs, you can use blocking to prevent them running at the same time (both trying to apply potentially different configurations to the servers), but you don't have to worry about a new job starting and pulling the repo into the already running job, because Jenkins separates these into separate workspaces.
We use Jenkins for manual runs of Ansible against our environment, but we also have a "self healing" Jenkins job that simply runs a tagged commit of our Ansible code base against the environment, forcing it back to an idempotent state to prevent natural drift of configurations. When we need to do something different to the environment, or are running a slightly newer commit of our code base into it, we can easily disable the self-healing job until we're happy with things, and then either re-enable the job to put things back, or advance the tag Jenkins uses to the more recent commit.
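If you want to define such a Jenkins job as code, one YAML-flavoured option (not something this answer prescribes, just a sketch) is Jenkins Job Builder; the names, URL and tag below are placeholders, and the timer runs every 30 minutes rather than every 30 seconds, since Jenkins timers, like cron, have one-minute granularity at best:

- job:
    name: ansible-self-heal                          # placeholder job name
    scm:
      - git:
          url: https://example.com/ops/ansible.git   # placeholder repo URL
          branches:
            - refs/tags/stable                       # run a known-good tagged commit
    triggers:
      - timed: "H/30 * * * *"                        # roughly every 30 minutes
    builders:
      - shell: |
          ansible-playbook -i inventories/production site.yml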

GitLab CI and Distributed Build Confusion

I'm relatively new to continuous integration servers. I've been using GitLab (v6.5) for a while to manage projects, but I'd like to begin using the GitLab CI to ensure tests pass and builds succeed.
My testing setup consists of two virtual machines: one machine for GitLab and another machine for the GitLab CI (and runners). However, in production I only have a single machine, which is running GitLab. The GitLab team posted an interesting blog post a while back that emphasized:
If you are running tests on the CI server you are doing it wrong!
It was a very informative post, but I didn't come away feeling like I understood this specific point. Does this mean one shouldn't run GitLab and GitLab CI on the same server? Does it mean one shouldn't run GitLab CI and GitLab CI runners on the same server? Or both? Do I need three servers, one for each task?
From the same post:
Anybody who can push to a branch that is tested on a CI server can easily own that server.
This implies to me that the runners are the security risk since they can run stuff contained in a commit. If that's the case, what's the typical implementation? Put GitLab and GitLab CI on the same machine, but the runners on a separate machine? Wouldn't it still suck if the runner machine was compromised? So people are okay losing their runner machine as long as their code machine is safe?
I would really like to understand this a bit more-- definitely before I implement it in production. Is there any possible yet safe way to implement GitLab, GitLab CI, and GitLab CI runners all on the same machine?
Ideally you're fine running GitLab CI and GitLab on the same host. Others may disagree with me, but the orchestrator (the GitLab CI node) doesn't do any of the heavy lifting; it's strictly job metadata I/O and warehousing the results.
With that being said, I would not put the runners on the same machine. GitLab CI runners are resource-intensive and will execute at full tilt on whichever machine you place them on. If you're running in production, it's a good idea to put them on spot instances to help curb the cost of the often CPU/memory-hungry builds, though that can be impractical since your instances are not always on.
I've had some success putting my GitLab CI runners on small DigitalOcean instances. I'm not doing HUGE builds, but the idea is to distribute the workload across several servers so your CI server:
Is responsive
Can build multiple project builds at once
Can exercise isolation (this is kind of arbitrary in this list)
and a few other things that don't come to mind right away.
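If you do put runners on separate machines, one common way to host one (just a sketch, with placeholder paths) is to run the official gitlab-runner image with Docker Compose and register it against your GitLab instance:

# Hypothetical docker-compose.yml on the dedicated runner machine
services:
  gitlab-runner:
    image: gitlab/gitlab-runner:latest
    restart: always
    volumes:
      - ./config:/etc/gitlab-runner                  # persists the runner's registration/config
      - /var/run/docker.sock:/var/run/docker.sock    # lets the runner start job containers

After bringing it up, a one-time gitlab-runner register (run inside the container) ties it to your GitLab URL and registration token, and from then on builds execute on that machine rather than on the GitLab host.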
Hope this helps!
