Codeship step keeps running long after its command has finished - node.js

I have the following codeship-steps.yml:
# test
- name: test
  type: serial
  services:
    - build
  steps:
    - command: npm test
What I experience is that even after the command has finished, the step keeps running for another 1 or 2 minutes without doing anything or moving on to the next step/group. Does anybody experience the same thing? What can I do here?
Even if I just run npm -v, it sometimes takes 3 minutes. That's weird, isn't it?
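One way to narrow this down (a sketch, assuming the codeship-steps.yml above; the time wrapper is my addition, not something Codeship requires) is to time the command inside the step, which separates "the command is slow" from "the step teardown is slow":

```yaml
# Sketch: wrap the command in `time` so the build log shows how long
# the command itself took versus how long the step stayed open.
- name: test
  type: serial
  services:
    - build
  steps:
    - command: time npm -v
```

If `time` reports a few seconds but the step still takes minutes, the delay is in the service/step lifecycle rather than in npm.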

Related

How to increase the file limit of GitHub Actions?

I have the following error:
Error: ENOSPC: System limit for number of file watchers reached, watch '/home/runner/work...
I tried every way I could find to increase the limit (ulimit -S -n unlimited, sysctl, etc.), but nothing seems to work, not even with sudo.
My website has a lot of markdown files (~80k) that Gatsby uses to build the final .html files.
On my machine I need to increase the file limit, of course, and then it works. But in GitHub Actions I can't figure out a way to do this.
My GitHub Actions workflow.yml:
name: Build
on: [push, repository_dispatch]
jobs:
  update:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Increase file limit
        run: sudo sysctl -w fs.file-max=65536
      - name: Debug
        run: ulimit -a
      - name: Set Node.js
        uses: actions/setup-node@master
        with:
          node-version: 12.x
      - name: Install dependencies
        run: npm install
      - name: Build
        run: npm run build
I think this could be related to this issue: https://github.com/gatsbyjs/gatsby/issues/17321
It sounds like these GitHub/Expo issues might be the problem:
https://github.com/expo/expo-github-action/issues/20
ENOSPC: System limit for number of file watchers reached
https://github.com/expo/expo-cli/issues/277
Handle ENOSPC error (fs.inotify.max_user_watches reached)
Thanks for testing!
I'm afraid this seems to be a GitHub Action
limitation. That docker image is forcing the
fs.inotify.max_user_watches limit to 524288, but apparently GHA is
overwriting this back to 8192. You can see this happen in a fork of
your repo (when we are done, I'll remove the fork ofc, let me know if
you want to have it removed earlier).
Continuing...
Yes, it's related to a limitation of the environment you are running
Expo CLI in. The Metro bundler apparently requires a high number of
watchers, and it fails if the host environment limits them. So
technically it's an environment issue, but I'm not sure if the CLI can
change anything about this.
I personally find the limit in GitHub Actions a little low. As I
tried to outline in an earlier comment on that CLI issue, the
limit in other CI vendors is actually set to the default max
listeners. Why they did not do this in GitHub Actions is unclear; that's
what I'm trying to find out. It might be a configuration issue on their
end, or an intentional limitation.
... And ...
So, there exists a fix that seemed to work for me when I tried it. What
I did was follow this guy's tip: “Increasing the number of
watchers” — #JNYBGR https://link.medium.com/9Zbt3B4pM0
I then did this in my main action.yml, with all the specifics
underneath the dev release:
steps:
  - uses: actions/checkout@v1
  - name: Setup kernel for react native, increase watchers
    run: echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
  - name: Run dev release fastlane inside docker action
Please let us know if any of this matches your environment/scenario, and whether you find a viable workaround.
UPDATE:
The OP tried fs.inotify.max_user_watches=524288 in his .yaml, and now Gatsby is failing with Error: EMFILE: too many open files, open '/home/runner/work/virtualizedfy.gatsby, and Node.js subsequently crashes with an assertion error:
node[3007]: ../src/spawn_sync.cc:460: v8::Maybe<bool> node::SyncProcessRunner::TryInitializeAndRunLoop(v8::Local<v8::Value>): Assertion `(uv_loop_init(uv_loop_)) == (0)' failed.
ADDITIONAL SUGGESTION:
https://github.com/gatsbyjs/gatsby/issues/12011
Google seems to suggest https://github.com/isaacs/node-graceful-fs as
a drop-in replacement for fs; I might also experiment with that to see
if it makes a difference.
EDIT: I can confirm that monkeypatching fs with graceful-fs at the top
of gatsby-node.js as in the snippet below fixes the issue for me.
// Patch the built-in fs module so EMFILE errors are queued and retried
// instead of crashing the build
const realFs = require('fs')
const gracefulFs = require('graceful-fs')
gracefulFs.gracefulify(realFs)
EDIT2: Actually after upgrading from Node 10 to Node 11 everything
seems to be fine without having to patch fs... So all is well!
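For anyone debugging this class of failure, the two limits involved can be read directly on the runner. A minimal check, assuming a Linux host (the 8192 figure is what the comments above report for GitHub-hosted runners):

```shell
# Print the inotify watcher limit (the ENOSPC culprit) and the
# open-file-descriptor limit (the EMFILE culprit).
cat /proc/sys/fs/inotify/max_user_watches
ulimit -n
```

Running this as an early workflow step makes it easy to confirm whether a sysctl tweak actually took effect before the build starts.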

GitLab CI: How to continue job even when script fails

I have a job in my pipeline whose script has three important steps:
- mvn test to run JUnit tests against my code
- junit2html to convert the XML result of the tests to an HTML format (the only way to see the results, as my pipelines aren't run through MRs), which is uploaded to GitLab as an artifact
- docker rm to destroy a container created earlier in the pipeline
My problem is that when my tests fail, the script stops immediately at mvn test, so the junit2html step is never reached and the test results are never uploaded on failure; docker rm is never executed either, so the container remains and messes up subsequent pipelines.
What I want is to be able to keep a job going till the end even if the script fails at some point. Basically, the job should still count as failed in GitLab CI / CD, but its entire script should be executed. How can I configure this?
In each job that needs to keep the pipeline going even if it fails, you can add a flag in your .gitlab-ci.yml. For example:
...
Unit Tests:
  stage: tests
  only:
    - branches
  allow_failure: true
  script:
    - ...
It's the allow_failure: true flag that lets the pipeline continue even if that specific job fails. The GitLab CI documentation for allow_failure is here: https://docs.gitlab.com/ee/ci/yaml/#allow_failure
Update from comments:
If you need the step to keep going after a failure, and be aware that something failed, this has worked well for me:
./script_that_fails.sh || FAILED=true
if [ "$FAILED" = "true" ]; then
  ./do_something.sh
fi
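Putting the question's three steps together with that pattern, a sketch of the job (job name, report paths, and container name are hypothetical) that runs every step, uploads the report even on failure via artifacts when: always, and still marks the job as failed at the end:

```yaml
test:
  stage: tests
  script:
    - mvn test || FAILED=true                      # record the failure, keep going
    - junit2html target/surefire-reports/*.xml report.html
    - docker rm -f my-container                    # cleanup always runs
    - if [ "$FAILED" = "true" ]; then exit 1; fi   # job still counts as failed
  artifacts:
    when: always        # upload the report even when the job fails
    paths:
      - report.html
```

Unlike allow_failure alone, this keeps the job red in the pipeline while still executing the conversion and cleanup steps.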

Gitlab-CI succeeds on non-zero exit

GitLab CI seems to allow the build to succeed even though the script returns a non-zero exit code. I have the following minimal .gitlab-ci.yml:
# Run linter
lint:
  stage: build
  script:
    - exit 1
Producing the following result:
Running with gitlab-runner 11.1.0 (081978aa)
on gitlab-runner 72348d01
Using Shell executor...
Running on [hostname]
Fetching changes...
HEAD is now at 9f6f309 Still having problems with gitlab-runner
From https://[repo]
9f6f309..96fc77b dev -> origin/dev
Checking out 96fc77bb as dev...
Skipping Git submodules setup
$ exit 1
Job succeeded
Running on GitLab Community Edition 9.5.5 with gitlab-runner version 11.1.0. The closest post doesn't propose a resolution, nor does this issue. A related question shows that this setup should fail.
What are the conditions for failing a job? Isn't it a non-zero return code?
The cause of the problem was that su was wrapped to call ksu, since the shared machines authenticate via Kerberos. The wrapped ksu succeeds even when the script command fails, so the job is reported as succeeded. This affected gitlab-runner because the shell executor runs su to execute the build as the indicated user.
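The failure mode is easy to reproduce without Kerberos: any wrapper that discards its child's exit status makes the shell executor see success. A minimal sketch (the /tmp/wrapped_su script is hypothetical, standing in for the ksu shim):

```shell
# A stand-in for the ksu wrapper: it runs the requested command
# but always exits 0, discarding the child's exit status.
cat > /tmp/wrapped_su <<'EOF'
#!/bin/sh
sh -c "$*"
exit 0
EOF
chmod +x /tmp/wrapped_su

/tmp/wrapped_su 'exit 1'     # inner command fails...
echo "wrapper exit code: $?" # ...but the wrapper reports 0, so the job "succeeds"
```

Since gitlab-runner only sees the wrapper's exit code, the fix is to make the wrapper propagate the child's status (e.g. end it with exit "$?" after the child command) rather than unconditionally exiting 0.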

Circle ci runs deploy section despite test failing

CircleCI is running the deploy section of my circle.yml despite the test section failing. I'd expect that if anything goes wrong in the test section, the deploy section wouldn't run, but it does.
There are two commands in the test section; the second one fails with:
npm run test:coverage -- --maxWorkers=2 returned exit code 1
I don't seem to be the first to hit this, and the oldest post is a few months old:
https://discuss.circleci.com/t/deployment-triggered-after-tests-fail/12356/3
Is this a bug, or am I doing something wrong?
Any ideas?
(CircleCI v1.0)
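For context, in CircleCI 1.0 syntax the documented behavior is that a non-zero exit in the test section stops the build before deployment runs. A sketch of such a circle.yml (the branch name and deploy command are hypothetical, since the question doesn't include the file):

```yaml
test:
  override:
    - npm run test:coverage -- --maxWorkers=2
deployment:
  production:
    branch: master
    commands:
      - ./deploy.sh   # should not run when the test section fails
```

If deployment runs anyway with a layout like this, comparing against the working/failing builds in the linked discussion thread is the next step.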

How to run Jasmine tests on Node.js from command line

How do I run Jasmine tests on Node.js from the command line? I have installed jasmine-node via npm and written some tests. I want to run the tests inside the spec directory and get the results in the terminal; is this possible?
This should get you going quickly:
install Node.js (obviously).
Next install Jasmine. Open a command prompt and run:
npm install -g jasmine
Next, cd to any directory and set up an example 'project':
jasmine init
jasmine examples
Now run your unit tests:
jasmine
If your jasmine.json file is somewhere other than spec/support/jasmine.json, point Jasmine at it with the JASMINE_CONFIG_PATH environment variable:
JASMINE_CONFIG_PATH=relative/path/to/your/jasmine.json jasmine
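For reference, a minimal spec/support/jasmine.json looks roughly like this (these are, as far as I know, the defaults that jasmine init generates; check your generated file for the exact glob patterns):

```json
{
  "spec_dir": "spec",
  "spec_files": ["**/*[sS]pec.js"],
  "helpers": ["helpers/**/*.js"],
  "stopSpecOnExpectationFailure": false,
  "random": true
}
```

spec_dir is resolved relative to the directory you run jasmine from, and spec_files/helpers are globs relative to spec_dir.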
For more info see:
https://www.npmjs.com/package/jasmine
http://jasmine.github.io/2.2/node.html
EDIT
It seems this is no longer the best answer, as the jasmine-node package is unmaintained. Please see the answer below.
You can do this
from your test directory
sudo npm install jasmine-node
This installs jasmine-node into ../node_modules/jasmine-node
then
../node_modules/jasmine-node/bin/jasmine-node --verbose --junitreport --noColor spec
which from my demo produces:
Player - 5 ms
should be able to play a Song - 2 ms
when song has been paused - 1 ms
should indicate that the song is currently paused - 0 ms
should be possible to resume - 0 ms
tells the current song if the user has made it a favorite - 1 ms
#resume - 0 ms
should throw an exception if song is already playing - 0 ms
Finished in 0.01 seconds
5 tests, 8 assertions, 0 failures, 0 skipped
The easiest way is to run this command in your project root:
$ npx humile
It finds all spec files whose names end with .spec.js.
If humile works well for your project, install it as a dev dependency; that speeds up the command:
$ npm install -D humile
Try Karma (formerly Testacular), a test-framework-agnostic test runner created by the AngularJS team:
http://karma-runner.github.io/0.12/index.html
Jasmine support is well established:
http://karma-runner.github.io/0.12/intro/how-it-works.html
