WebSocket error when using Playwright on GitLab CI - jestjs

I am using Playwright with Jest and jest-playwright-preset and trying to get my tests to run in GitLab CI.
My .gitlab-ci.yml:
ui_test:
  image: node:latest
  script:
    - npm install
    - npm run test:ui
When I run this locally on my Windows machine, everything works fine. However, if I try to run it in GitLab CI, I get the following error:
2020-06-26T21:01:39.770Z pw:api => chromium.launchServer started
2020-06-26T21:01:39.786Z pw:api <= chromium.launchServer succeeded
2020-06-26T21:01:40.182Z pw:api => chromium.connect started
2020-06-26T21:01:40.217Z pw:api <= chromium.connect failed
FAIL browser: chromium ./test.js
● Test suite failed to run
WebSocket error: connect ECONNRESET 127.0.0.1:38267
================== chromium.connect logs ==================
<ws connecting> ws://127.0.0.1:38267/b32ed779b7c87222e0b2b6aa117b0c79
<ws connect error> ws://127.0.0.1:38267/b32ed779b7c87222e0b2b6aa117b0c79 connect ECONNRESET 127.0.0.1:38267
<ws disconnected> ws://127.0.0.1:38267/b32ed779b7c87222e0b2b6aa117b0c79
============================================================
Note: use DEBUG=pw:api environment variable and rerun to capture Playwright logs.
at WebSocket.<anonymous> (node_modules/playwright/lib/transport.js:119:24)
at WebSocket.onError (node_modules/playwright/node_modules/ws/lib/event-target.js:128:16)
at ClientRequest.<anonymous> (node_modules/playwright/node_modules/ws/lib/websocket.js:568:15)
EDIT: This has nothing to do with Jest. The same problem arises when using Playwright on its own with the node image instead of the Playwright image.

The node:latest image does not have the system dependencies required to run the browsers. You can use the Playwright Docker image instead.
ui-test:
  stage: test
  image: mcr.microsoft.com/playwright:bionic
  script:
    - npm install # this should install playwright
    - npm run test:ui
(Edited to reflect the official docker image)
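A related caveat: the generic tag may pull a browser build that does not match the Playwright version in your package.json. A minimal sketch, assuming your project is on a Playwright 1.x release for which a matching image tag is published (the v1.12.3-focal tag below is only an example, substitute your own version):
ui-test:
  stage: test
  # example version tag - keep it in sync with the playwright version in package.json
  image: mcr.microsoft.com/playwright:v1.12.3-focal
  script:
    - npm ci
    - npm run test:ui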

Related

Astro 2.0 on AWS Amplify

I'm trying to use SSR with AWS Amplify, but when I enable Node.js and change the output type to server, the deployed site returns a 404 error page.
To build the project I have to run two npm commands: npm run build and, after that, npm run server. But the deployment is not working.
frontend:
  phases:
    preBuild:
      commands:
        - npm ci
    build:
      commands:
        - npm run build
    postBuild:
      commands:
        - npm run server
  artifacts:
    baseDirectory: /dist
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*
Astro has adapters for each SSR cloud solution, and I haven't seen AWS listed. You could use Vercel or Cloudflare and install the adapter for that platform:
npx astro add cloudflare
In your case, I think Amplify needs the Node.js adapter, and it already exists. Use this instead:
npx astro add node

Yarn install in GitLab pipeline

I'm using a pipeline to install my packages and this is my script:
default:
  image: node:latest
  before_script:
    - yarn

Install Dependencies:
  stage: install_deps
  script:
    - yarn
and I got this error:
error An unexpected error occurred:
"https://registry.yarnpkg.com/@ant-design%2ficons: tunneling socket
could not be established, statusCode=407".
What is the problem? What am I doing wrong?
The runner that you are running on - is that a private runner for your project? A 407 means that when it tries to reach the internet, it is hitting a proxy which requires authentication. You can verify this easily by running curl against that URL and checking the response.
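If the runner really is behind an authenticating proxy, one common workaround is to pass the proxy settings into the job and point Yarn at them. A minimal sketch, assuming a hypothetical proxy address and credentials (proxy.example.com and user:pass are placeholders, substitute your own):
variables:
  # hypothetical proxy address and credentials - substitute your own
  HTTP_PROXY: "http://user:pass@proxy.example.com:3128"
  HTTPS_PROXY: "http://user:pass@proxy.example.com:3128"

default:
  image: node:latest
  before_script:
    - yarn config set proxy "$HTTP_PROXY"
    - yarn config set https-proxy "$HTTPS_PROXY"
    - yarn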

Deploy NodeJS Express on amplify

I am trying to deploy an app on AWS Amplify.
The app has a React frontend and a Node.js Express backend.
The frontend works fine, but the backend is just stuck without any reasonable explanation.
My YML file is
version: 1
backend:
  phases:
    build:
      commands:
        - npm run build-backend
    postBuild:
      commands:
        - cd ..
frontend:
  phases:
    preBuild:
      commands:
        - npm ci
    build:
      commands:
        - npm run build-frontend
  artifacts:
    baseDirectory: ./client/build
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*
build-backend script:
"build-backend": "cd server && npm run start",
npm run start script:
"start": "npm install && node index.js"
The build gets stuck on the npm install and after 10-20 minutes just "gives up", with nothing logged after the following:
2021-04-22T11:49:20.693Z [INFO]: > server@1.0.0 start /codebuild/output/src650104622/src/myBlog/server
> npm install && node index.js
2021-04-22T11:49:26.976Z [INFO]: > bcrypt@5.0.0 install /codebuild/output/src650104622/src/myBlog/server/node_modules/bcrypt
> node-pre-gyp install --fallback-to-build
Thanks
I came across this thread while looking for a solution for my own project. Since there are no other answers here, I'll share what I was able to find myself.
Amplify works fine with SSG web applications (Gatsby, etc.), with SSR (React, NextJS, NuxtJS, etc.), and with simple NodeJS and ExpressJS applications (which only run on requests, because Amplify uses Lambda functions to handle them). So:
If you need a simple ExpressJS application for your API, you can use the following serverless Amplify solution:
https://docs.amplify.aws/guides/api-rest/express-server/q/platform/js/
If you need a socket.io application (or another constantly running application), you need to use AWS Fargate (which uses Docker images) or AWS EC2 (a plain virtual machine with SSH access).
P.S. If you have any other information on this subject, please post it here.

npm scripts hang indefinitely on Concourse CI

On Concourse, I am running integration tests where I run certain npm scripts. There is a particular script that builds my backend/frontend and then runs the tests. However, once the tests are done, the npm script does not stop. It doesn't error out; it hangs indefinitely whether the tests fail or succeed. I have run this script on a local machine and in a local container, and it works fine. Only on Concourse does the script hang forever.
To give more context to my setup, here is a sample of the npm scripts which are run on the frontend:
"ci:start:backend": "npm run --prefix ../emailservice/mock-service dev & npm run --prefix ../server-repo ci:start:server & sleep 3"
"ci:test:system": "npm run ci:start:backend && npm run build:dist:serve & sleep 90 && npm run test:browser:ci"
npm run ci:test:system is the main script that is run. It starts an email service, a server, and the frontend all at once in order to run the tests. It is a messy way of doing things, but it works both locally and in containers. The same approach has been used for similar server tests and runs fine on Concourse.
The pipeline task can be seen below:
# runs unit tests for frontend
- name: run-tests
  plan:
    - get: frontend-repo
    - get: server-repo
    - get: emailservice
    - task: run-npm-tests
      privileged: true
      config:
        platform: linux
        image_resource:
          type: docker-image
          source:
            repository: jonoc/techradar-integration
        inputs:
          - name: frontend-repo
          - name: server-repo
          - name: emailservice
        run:
          path: sh
          args:
            - -exc
            - |
              mongod --fork --logpath /var/log/mongodb.log
              export SHELL=/bin/bash
              cd server-repo
              npm install --silent
              cd ../emailservice/mock-service
              npm install --silent
              cd ../../frontend-repo
              npm install --silent
              npm rebuild node-sass --silent
              npm run postinstall --silent
              npm run ci:test:system
Nothing seems out of the ordinary, but Concourse refuses to give a green or red build. I suspect it is due to the other scripts that run forever and hang in the background, so Concourse does not want to end the task. However, running npm run ci:start:backend on Concourse works fine, while running npm run test:browser:ci hangs forever, which adds further confusion about what the problem is.
Concourse version: 3.3.2
Deployment type (BOSH/Docker/binary): Docker
Infrastructure/IaaS: AWS/EC2
Browser (if applicable): Chrome
Did this used to work? Never
Are you sure that your resources are available in the task's Docker container?
You specify multiple inputs here:
- name: frontend-repo
- name: server-repo
- name: emailservice
But concourse requires you to specify a proper path for each input if you have more than one.
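As a rough sketch of what that could look like (the path values here are simply the defaults made explicit, they are not taken from the original pipeline):
inputs:
  - name: frontend-repo
    path: frontend-repo   # directory the resource is mounted at inside the task
  - name: server-repo
    path: server-repo
  - name: emailservice
    path: emailservice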
Try to hijack the task container after execution and check whether the resources are available. You can also execute the script in the container, so you can debug it more easily.
fly -t <your_target> hijack -j demo_job/demo_task
My issue was resolved by changing my npm scripts. It turns out that chaining npm run --prefix ../emailservice/mock-service dev & npm run --prefix ../server-repo ci:start:server & sleep 3 with the other scripts causes some issues on Concourse.
I modified the npm scripts to use npm-run-all with the -r parameter so the script finishes when my tests are done.

.ebextensions with CodePipeline and Elastic Beanstalk

I started working on my first CodePipeline with a Node.js app which is hosted on GitHub. I would like to create a simple pipeline as follows:
GitHub repo triggers the pipeline
Test env (Elastic Beanstalk app) is built from the S3 .zip file
Test env runs npm test and npm lint
If everything is OK, the QA env (another EB app) is built
For the above pipeline I've created .config files under the .ebextensions directory:
I would like to use npm install --production for the QA and PROD environments, but it seems that EC2 can't find node or npm. I checked the logs: EC2 triggers npm install by default in a temporary folder, then it fails on my first script and the app directory is always empty.
container_commands:
  install-dev:
    command: "npm install"
    test: "[ \"$NODE_ENV\" = \"TEST\" ]"
    ignoreErrors: false
  install-prod:
    command: "npm install --production"
    test: "[ \"$NODE_ENV\" != \"TEST\" ]"
    ignoreErrors: false
Is it possible to run unit tests and linting without Jenkins?
container_commands:
  lint:
    command: "npm run lint"
    test: "[ \"$NODE_ENV\" = \"TEST\" ]"
    ignoreErrors: false
  test:
    command: "npm run test"
    test: "[ \"$NODE_ENV\" = \"TEST\" ]"
    ignoreErrors: false
I set NODE_ENV for each Elastic Beanstalk instance. No matter what I do, my pipeline fails every time because npm is not recognized, but how is that possible if I'm running 64bit Amazon Linux with Node.js? What's more, I cannot find any examples of CodePipeline with Node.js in the AWS docs. Thanks in advance!
If you're using AWS for CI/CD, you can use CodeBuild. However, GitHub provides a great feature called Actions for running unit tests, which I find much simpler than AWS. Anyway, I will walk you through both examples:
Using AWS for running Unit Tests
Essentially, you could add a new stage to your CodePipeline and configure CodeBuild to run the unit tests.
First, add a buildspec.yml file in the root folder of your app; you can use the following example:
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 10
    commands:
      - echo Installing Mocha globally...
      - npm install -g mocha
  pre_build:
    commands:
      - echo Installing dependencies...
      - npm install
      - npm install unit.js
  build:
    commands:
      - echo Build started on `date`
      - echo Run Unit Tests and so on
      - npm run test
      - npm run lint
  post_build:
    commands:
      - echo Build completed on `date`
# THIS IS OPTIONAL
artifacts:
  files:
    - app.js
    - package.json
    - src/app/*
    - node_modules/**/*
You can find everything you need at BackSpace Academy; this course is free:
AWS DevOps CI/CD - CodePipeline, Elastic Beanstalk and Mocha
Using Github for running Unit Tests
You could create your custom actions using GitHub; it will automatically set up everything you need in your root folder, e.g.
After choosing the appropriate workflow, it will automatically generate the file ".github/workflows/nodejs.yml".
So it will look like this:
name: Node CI
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [8.x, 10.x, 12.x]
    steps:
      - uses: actions/checkout@v1
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - name: npm install, build, and test
        run: |
          npm install
          npm run build --if-present
          npm test
        env:
          CI: true
I hope you could find everything you need in this answer. Cheers
Have you incorporated CodeBuild into your pipeline?
You should
1) Create a pipeline whose source is your GitHub account. Go through the setup procedure so that commits on a particular branch trigger the CodePipeline.
2) Create a test stage in your CodePipeline which leverages the CodeBuild service. In order to run your Node tests, you might need to provide a configured build environment, and you probably also need to provide a buildspec file that specifies the tests to run, etc.
3) Assuming that the test stage passes, you can determine whether the pipeline continues to another stage which is linked to an Elastic Beanstalk app environment that supports the Node platform. These environments are purely for artifacts that have passed testing, so I see no need for the .ebextensions commands written above.
Have a read of what CodeBuild can do to help you run tests for Node,
https://aws.amazon.com/codebuild/
Good luck!
