Call AWS CodeBuild project from CodePipeline with different parameters - node.js

Let's imagine that we have one CodePipeline with 2 stages in the following fashion:
new codepipeline.Pipeline(this, name + "Pipeline", {
  pipelineName: this.projectName + "-" + name,
  crossAccountKeys: false,
  stages: [{
    stageName: 'Source',
    actions: [codeCommitSourceAction]
  }, {
    stageName: 'Build',
    actions: [buildAction]
  }]
});
Here the Source stage is where we pull the changes from the repository, and the Build stage is a CodeBuild project whose buildspec file performs the following actions (a sketch follows the list):
Install the dependencies (npm i).
Run the tests (npm run test).
Pack the project (npm run pack).
Update/deploy lambda function (aws lambda update-function-code).
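A minimal sketch of what such a buildspec.yml might look like, assuming the npm scripts above exist; the function name and package path are hypothetical:
version: 0.2

phases:
  install:
    commands:
      - npm i
  build:
    commands:
      - npm run test
      - npm run pack
  post_build:
    commands:
      # my-function and package.zip are hypothetical names
      - aws lambda update-function-code --function-name my-function --zip-file fileb://package.zip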
In general it does what it is supposed to do; however, if the build fails, the only way to find out which part failed is to look at the logs. I would like this to be visible straight from CodePipeline. In that case CodePipeline must have more stages, each correlating with one action from CodeBuild. Based on my experience, I can only do that if I provide a different CodeBuild project for every stage.
Question: can I provide same CodeBuild project to the different CodePipeline stages so, that it will execute only part of buildspec file (for example, only running the tests)?

You can have your buildspec.yml perform different actions based on environment variables. You can then pass different environment variables to CodeBuildAction with environmentVariables.
new codepipeline_actions.CodeBuildAction({
  actionName: 'Build',
  project: buildProject,
  input: sourceInput,
  runOrder: 1,
  environmentVariables: {
    STEP: { value: 'test' }
  }
}),
And then check the STEP environment variable in buildspec.yml; a sketch follows.
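For example, a hedged sketch of a buildspec.yml that runs only the part selected by STEP; the step values and the deploy command are assumptions:
version: 0.2

phases:
  install:
    commands:
      - npm i
  build:
    commands:
      # run only the part of the build selected by the STEP variable
      - |
        case "$STEP" in
          test)   npm run test ;;
          pack)   npm run pack ;;
          deploy) aws lambda update-function-code --function-name my-function --zip-file fileb://package.zip ;;  # hypothetical function name
        esac
Each CodePipeline stage then passes a different STEP value while reusing the same CodeBuild project.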

Question: can I provide same CodeBuild project to the different CodePipeline stages so, that it will execute only part of buildspec file (for example, only running the tests)?
No, I don't think that is possible.
What you can do, however, is have different buildspec files called at different stages of your pipeline.
For example, you could have a CodePipeline stage called Init which calls the buildspec_init.yml of your project. If that succeeds, you could have a following stage Apply calling the buildspec_apply.yml file of your project.
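As a hedged sketch, buildspec_init.yml might then contain only the early steps; the commands are assumptions carried over from the question:
version: 0.2

phases:
  install:
    commands:
      - npm i
  build:
    commands:
      - npm run test
while buildspec_apply.yml would hold the packaging and deployment commands.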

Related

Unable to run Azure pipeline "A task is missing. The pipeline references a task called 'Cache'

I am setting up my fork of a Github project with the azure_pipeline.yaml configuration.
This seems to work just fine for everyone else in the community, but when I set up the pipeline it gives me the following exceptions:
A task is missing. The pipeline references a task called 'Cache'. This usually indicates the task isn't installed, and you may be able to install it from the Marketplace: https://marketplace.visualstudio.com. (Task version 2, job 'compile_ci_build', step ''.)
A task is missing. The pipeline references a task called 'Cache'. This usually indicates the task isn't installed, and you may be able to install it from the Marketplace: https://marketplace.visualstudio.com. (Task version 2, job 'test_ci_build', step ''.)
A task is missing. The pipeline references a task called 'Cache'. This usually indicates the task isn't installed, and you may be able to install it from the Marketplace: https://marketplace.visualstudio.com. (Task version 2, job 'e2e_ci_build', step ''.)
Specifically,
Here is my Azure pipeline link
I am creating a Flink CI build pipeline according to these instructions.
The repo already has an azure-pipeline.yml,
which uses a template to run the jobs parameterized in tools/azure-pipelines/jobs-template.yml.
[UPDATE]
I modified jobs-template.yml and commented out all steps with Cache@2, and it now runs fine.
Was able to get this working eventually.
Apparently, for my Azure account, I am not allowed to use Cache@2.
Changing all lines with
- task: Cache@2
to
- task: CacheBeta@1
resolved all my problems.
By design, Azure DevOps does not automatically make all tasks available when you run a pipeline.
You have to add them manually as part of the pipeline.
'cache' is one such task.
I'm doing this in classic gui (non-yaml) mode as I find it easier to search for things
But what you do is: in your pipeline, add a new task, and in the task search box type 'cache'.
This will bring up the task.
Click Add to include it in the pipeline.
For more information on this I would recommend reading:
https://learn.microsoft.com/en-us/azure/devops/pipelines/release/caching?view=azure-devops
I do not see that you have added the Cache task to your pipeline in your git repo.
It should look like the example here:
variables:
  YARN_CACHE_FOLDER: $(Pipeline.Workspace)/.yarn

steps:
- task: Cache@2
  inputs:
    key: 'yarn | "$(Agent.OS)" | yarn.lock'
    restoreKeys: |
      yarn | "$(Agent.OS)"
      yarn
    path: $(YARN_CACHE_FOLDER)
  displayName: Cache Yarn packages
- script: yarn --frozen-lockfile
Source
I was getting the same error (A task is missing. The pipeline references a...) on one of the Azure tasks, PublishCucumberReport@1. I resolved it by visiting https://marketplace.visualstudio.com/, going to the task, and clicking the Get it free button, which installs it for your pipeline.

Gulp-compiled CSS folder missing from the Azure DevOps pipeline Build Artifact

A little background...
I have a small dotnet core application that is hosted on Azure and is being built and deployed using Azure DevOps Pipelines. Before we started using the DevOps Pipelines the CI was hooked up directly to Azure which compiled fine but took an actual lifetime to deploy, hence the decision to move.
However, the build pipeline no longer compiles or outputs the sass/css folder
Everything else works okay - I check in, the Build pipeline picks up my commits and has the following steps:
Restore [.NET Core]
Build [.NET Core]
Publish [.NET Core]
Publish Build Artifact
Part of step 3 (Publish) uses a Gulp task:
gulp.task('prod', function (callback) {
  runSequence('clean', 'set-prod',
    ['icon-sprite', 'logo-sprite', 'images', 'sass', 'modernizr', 'mainjs', 'adminjs'],
    callback);
});
And locally (and previously) this generated five folders:
icons
img
js
logos
css (now mysteriously missing in action)
Variations I've tried
I've tried deleting my local css folder and running the CLI dotnet publish exactly the same way the Pipeline does and that appears to work fine locally.
I've also stripped the sass task way back in case that was causing an issue somewhere in the pipeline, so that now looks like this:
return gulp.src('src/sass/style.scss')
  .pipe(sass({ outputStyle: 'compressed' }))
  .pipe(gulp.dest('wwwroot/dist/css'));
I can see all of the output in the console logs on the Pipeline and it successfully executes the sass task:
2019-01-02T14:43:51.3558593Z [14:43:51] Starting 'sass'...
2019-01-02T14:43:51.9284145Z [14:43:51] Finished 'sass' after 524 ms
There are no other errors or warnings in the build script and everything completes and fires off the Release pipeline (which copies the artifact up to the Azure site).
Speculation
I would expect an error somewhere... but nothing - all of the green ticks are downright cheerful... so I'm a little stumped at what may or may not be happening! I can only think that there must be some dependency or something missing in the Pipeline environment? Orrrrr maybe I'm missing a Pipeline step?
Any help or nudges or ideas would be greatly appreciated! Thank you for sticking it out through my small essay and for any help you can provide :)
Something I've done in this situation before is changing the Publish Build Artifact task to upload everything in the build folder. My guess is that right now the 'Path to Publish' value in that task is set to $(build.artifactStagingDirectory). Change it to $(build.SourcesDirectory). After running the build again you'll see that the entire build directory was uploaded. This includes your source code and any other folders like you have on your local environment. From there you can figure out whether the CSS folder is actually missing or whether it ended up in some other folder location.
If the folder ends up in a weird location you can either add a file copy task to move the CSS folder to the proper folder in $(build.artifactStagingDirectory) or make a change to the Gulp task. Whatever is better for your scenario.
Once you find the location, you can fix the Publish Build Artifact task.
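In YAML, that temporary diagnostic change might look like the sketch below; the artifact name is an assumption:
steps:
- task: PublishBuildArtifacts@1
  inputs:
    # temporarily publish the whole sources directory instead of $(Build.ArtifactStagingDirectory)
    PathtoPublish: '$(Build.SourcesDirectory)'
    ArtifactName: 'drop'  # hypothetical artifact name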
I was having the exact same issue. I was able to get everything working locally without issue: gulp would generate the css folder just fine, and dotnet publish -c release would do the same. However, when run through the pipeline, there was no css folder.
The strangest thing is that there is a sibling folder (scripts) that is handled the same way as the css gulp task's output, but that folder makes it through just fine. Here's my css task:
gulp.task('min', function () {
  return gulp.src('wwwroot/css/**/*.css')
    .pipe(cssnano({ zindex: false }))
    .pipe(gulp.dest('wwwroot/dist/css/'));
});
but this task does work both locally and in the pipeline:
gulp.task('build-js', function () {
  return gulp.src('wwwroot/scripts/**/*.js')
    .pipe(concat('site.bundle.js'))
    .pipe(uglify())
    .pipe(gulp.dest('wwwroot/dist/scripts/'));
});
I ended up just giving up, since this is legacy code anyway, and settled on a workaround:
Add the Copy Files task right after your gulp task with the below configuration:
Or, if you like YAML:
steps:
- task: CopyFiles@2
  displayName: 'Copy Files to: wwwroot/dist/css'
  inputs:
    SourceFolder: wwwroot/css
    Contents: '*.css'
    TargetFolder: wwwroot/dist/css

Concourse: Use a semver resource to control which artifact to use from s3

My pipeline contains a task with the following prerequisites:
- get: version
  trigger: true
  params: { bump: patch }
  passed: ["trigger_job [CI]"]
- get: sdk-package
  passed: ["package_generation_job"]
  params:
    version: { path: "artifact_[I want to put the version here]" }
version is a semver stored in git; sdk-package is a build artifact stored in s3 where each run of the pipeline puts a new artifact using the version number as part of the name.
What I would like to do is use the version input to determine which version of the artifact is pulled from S3. Based on this, I suspect that Concourse doesn't allow it, but I couldn't find a definitive answer.
This is not currently possible; you will have to download the artifact you want in a task script. You can pass the version into that task.
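A hedged sketch of such a task, assuming an image that ships the AWS CLI, a hypothetical bucket name, and that the semver resource exposes the version in a file (commonly named number):
- task: fetch-sdk-package
  config:
    platform: linux
    image_resource:
      type: docker-image
      source: { repository: amazon/aws-cli }  # assumption: any image with the AWS CLI works
    inputs:
      - name: version
    outputs:
      - name: sdk-package
    run:
      path: sh
      args:
        - -ec
        - |
          VERSION=$(cat version/number)  # file written by the semver resource
          # my-artifact-bucket is a hypothetical bucket name
          aws s3 cp "s3://my-artifact-bucket/artifact_${VERSION}" sdk-package/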

GitLab CI/CD pull code from repository before building ASP.NET Core

I have GitLab running on computer A, a development environment (Visual Studio Pro) on computer B, and a Windows server on computer C.
I set up GitLab-Runner on computer C (the Windows server). I also set up a .gitlab-ci.yml file to build and run tests for the ASP.NET Core application on every commit.
I don't know how I can get the code onto computer C (the Windows server) so I can build it (dotnet msbuild /p:Configuration=Release "%SOLUTION%"). It bothers me that not a single example .gitlab-ci.yml I found on the net pulls code from GitLab before building the application. Why?
Is this the correct way to set up CI/CD?
The user creates a pull request (a new branch is created).
The user writes code.
The user commits code to the branch from computer B.
GitLab Runner is started on computer C.
It needs to pull the code from the current branch (CI_COMMIT_REF_NAME).
Build, test, deploy ...
Should I use common git commands to get the code, or is this something GitLab Runner already does? Where is the code?
Why does no one pull code from GitLab in .gitlab-ci.yml?
Edited:
I got the error
'"git"' is not recognized as an internal or external command
The solution in my case was to restart GitLab-Runner. Source.
@MilanVidakovic explained that the source is automatically downloaded (which I didn't know).
I just have one remaining problem: how to get the correct path to my .sln file.
Here is my complete .gitlab-ci.yml file:
variables:
  SOLUTION: missing_path_to_solution #TODO

before_script:
  - dotnet restore

stages:
  - build

build:
  stage: build
  script:
    - echo "Building %CI_COMMIT_REF_NAME% branch."
    - dotnet msbuild /p:Configuration=Release "%SOLUTION%"
  except:
    - tags
I need to set the correct value for the SOLUTION variable. My dir (where GitLab-Runner is located) currently holds these folders/files:
- config.toml
- gitlab-runner.exe
- builds/
  - 7cab42e4/
    - 0/
      - web/ # I think this is the project group in GitLab
        - test/ # I think this is the project name in GitLab
          - .sln
          - AND ALL OTHER PROJECT FILES # based on a first look
- testm.tmp
So, what are 7cab42e4 and 0? Better yet, how do I get the correct path to my project structure? Is there a predefined variable?
Edited2:
The answer is CI_PROJECT_DIR.
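For example, the variable could be set like this; the relative path and solution file name are hypothetical:
variables:
  # path\to\project.sln is a hypothetical relative path inside the repository
  SOLUTION: '%CI_PROJECT_DIR%\path\to\project.sln'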
I'm not sure I follow completely.
On every commit, GitLab Runner fetches your repository into C:\gitlab-runner\builds.. on the local machine (computer C) and builds/deploys or does whatever you've specified as the action for the stage.
Also, I don't see the need for building the source code again. If you're using computer C for both the runner and the tests/acceptance, just let the runner do the building and add an artifacts item to your .gitlab-ci.yml (a sketch follows). The paths defined in artifacts will retain your executables on computer C, which you can then use for whatever purpose.
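A hedged sketch of such an artifacts entry; the output path is an assumption:
build:
  stage: build
  script:
    - dotnet msbuild /p:Configuration=Release "%SOLUTION%"
  artifacts:
    paths:
      - bin/Release/ # hypothetical build output path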
Hope it helps.
Edit after comment:
When you push to the repository, GitLab CI/CD automatically checks your root folder for a .gitlab-ci.yml file. If it's there, the runner takes over, parses the file and starts executing jobs/stages.
As soon as the file itself is valid and contains proper jobs and stages, the runner fetches the latest commit (automatically) and does whatever the script items tell it to do.
To verify that everything works correctly, go to your GitLab -> CI / CD -> Pipelines and check what's going on. You should see something like this:
Maybe it would be best if you posted your .yml file; there could be a number of reasons why your runner is not picking up the code. For instance, maybe your .yml tags do not match what the runner was created to pick up, etc.

Concourse CI - Build Artifacts inside source, pass all to next task

I want to set up a build pipeline in Concourse for my web application. The application is built using Node.
The plan is to do something like this:
                                         ,-> build style guide -> dockerize
source code -> npm install -> npm test -|
                                         `-> build website -> dockerize
The problem is, after npm install, a new container is created so the node_modules directory is lost. I want to pass node_modules into the later tasks but because it is "inside" the source code, it doesn't like it and gives me
invalid task configuration:
you may not have more than one input or output when one of them has a path of '.'
Here's my job setup:
jobs:
- name: test
  serial: true
  disable_manual_trigger: false
  plan:
  - get: source-code
    trigger: true
  - task: npm-install
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: { repository: node, tag: "6" }
      inputs:
      - name: source-code
        path: .
      outputs:
      - name: node_modules
      run:
        path: npm
        args: [ install ]
  - task: npm-test
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: { repository: node, tag: "6" }
      inputs:
      - name: source-code
        path: .
      - name: node_modules
      run:
        path: npm
        args: [ test ]
Update 2016-06-14
Inputs and outputs are just directories. You put whatever you want to output into an output directory, and you can then pass it to another task in the same job. Inputs and outputs cannot overlap, so in order to do it with npm you'd have to copy either node_modules or the entire source folder from the input folder to an output folder, then use that in the next task. A sketch follows.
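A hedged sketch of that copy approach; the output directory name is an assumption:
- task: npm-install
  config:
    platform: linux
    image_resource:
      type: docker-image
      source: { repository: node, tag: "6" }
    inputs:
    - name: source-code
    outputs:
    - name: built-source  # hypothetical output name
    run:
      path: sh
      args:
        - -ec
        - |
          cd source-code
          npm install
          # copy the whole tree, including node_modules, into the output
          cp -R . ../built-source
The next task would then take built-source as its input instead of source-code.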
This doesn't work between jobs, though. The best suggestion I've seen so far is to push everything up to a temporary git repository or bucket. There has to be a better way of doing this, since part of what I'm trying to do is avoid huge amounts of network IO.
There is a resource specifically designed for this use case of npm between jobs. I have been using it for a couple of weeks now:
https://github.com/ymedlop/npm-cache-resource
It basically allows you to cache the first npm install and inject it as a folder into the next job of your pipeline. You could also quite easily set up your own caching resources by reading the source of that one, if you want to cache more than node_modules.
I am actually using this npm-cache-resource in combination with a Nexus proxy to speed up the initial npm install further.
Be aware that some npm packages have native bindings that need to be built against standard libraries matching the container's Linux version, so if you move between different types of containers a lot you may experience issues with musl libc etc. In that case I recommend either streamlining the pipeline to use the same container type throughout, or rebuilding the node_modules in question...
There is a similar one for Gradle (on which the npm one is based):
https://github.com/projectfalcon/gradle-cache-resource
This doesn't work between jobs though.
This is by design. Each step (get, task, put) in a Job is run in an isolated container. Inputs and outputs are only valid inside a single job.
What connects Jobs is Resources. Pushing to git is one way. It'd almost certainly be faster and easier to use a blob store (eg S3) or file store (eg FTP).
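For instance, a hedged sketch using the official Concourse s3 resource; the bucket and credential names are assumptions:
resources:
- name: node-modules-cache
  type: s3
  source:
    bucket: my-cache-bucket # hypothetical bucket
    regexp: node_modules-(.*).tgz # versioned tarball of node_modules
    access_key_id: ((aws_access_key)) # hypothetical credential vars
    secret_access_key: ((aws_secret_key))
One job would then put a tarball of node_modules to this resource, and downstream jobs would get it.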
