Bazel sh_test doesn't find node - node.js

I am trying to run a script which needs node. I have node installed on my machine.
I can run an sh_binary with bazel run //:sh_bin, and the script runs node just fine:
sh_binary(
    name = "sh_bin",
    data = [],
    srcs = [":script.sh"],
)
script.sh:
node -v
bazel run //:sh_bin:
v14.17.6
Now I want to convert this to sh_test:
sh_test(
    name = "sh_bin",
    data = [],
    srcs = [":script.sh"],
)
but now bazel test //:sh_bin cannot find node:
node: command not found
I also tried adding local = True to the test, but I still get the same issue.

Bazel tests are run in a more controlled environment than applications run via bazel run. One of the initial conditions the test runner establishes is the value of $PATH: https://docs.bazel.build/versions/main/test-encyclopedia.html#initial-conditions
If you are working with remote execution, another problem could be that your test is executed on a machine that does not have node installed.
It's always a great idea to strive for a hermetic build that builds and tests independently of the host's state. That means you'd need to make the node program available to your binary or test as a data dep.
A good alternative is to build on existing work such as https://github.com/bazelbuild/rules_nodejs.
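If you just need the test to see the host's node for now, a quick non-hermetic workaround is to forward the client's PATH into the test environment. A sketch using Bazel's --test_env flag; passing the variable name without a value forwards it from the environment in which bazel test was invoked:
bazel test --test_env=PATH //:sh_bin
Keep in mind this reintroduces a dependency on the host's state, which is exactly what a hermetic setup tries to avoid.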
That being said, your example actually works for me.
cd `mktemp -d`
touch WORKSPACE
echo "node -v" > script.sh
chmod +x script.sh
cat <<EOF > BUILD
sh_test(
    name = "sh_bin",
    srcs = [":script.sh"],
)
EOF
bazel test --test_output=all -- //:sh_bin
Starting local Bazel server and connecting to it...
INFO: Analyzed target //:sh_bin (24 packages loaded, 282 targets configured).
INFO: Found 1 test target...
INFO: From Testing //:sh_bin:
==================== Test output for //:sh_bin:
v17.1.0
================================================================================
Target //:sh_bin up-to-date:
bazel-bin/sh_bin
INFO: Elapsed time: 6.895s, Critical Path: 0.10s
INFO: 5 processes: 3 internal, 2 linux-sandbox.
INFO: Build completed successfully, 5 total actions
//:sh_bin PASSED in 0.0s
Executed 1 out of 1 test: 1 test passes.
INFO: Build completed successfully, 5 total actions

Related

Jest process quits before executing all tests, without any feedback

I am very new to Jest, and I'm trying to run my tests serially using the command yarn jest test -i (the same as yarn jest test --runInBand, according to the Jest documentation).
This is my test files tree:
❯ cd src/test
vipires in scheduling-engine-test/src/test on master
❯ tree
.
├── customer
│ └── clean.test.ts
├── engine
│ └── job.test.ts
└── schedule
├── create.test.ts
└── generateSchedules.test.ts
3 directories, 4 files
The issue is that my project has 4 test files but only one of them is tested when I run the command above: the run occasionally quits after the first file, e.g. generateSchedules.test.ts.
The reason I need to run them serially is that I'm using Jest for integration tests against a REST API that queries an Oracle database. When I run just yarn jest test, Jest spawns several workers and runs all the tests in parallel, which causes assertion failures, because every test file has beforeAll() and afterAll() hooks that delete all the test data generated during the run.
Below is my terminal output for the command above:
❯ yarn jest test --runInBand
knex:query update "ENGINE_TASK" set "DAT_EXECUTION" = :1 where "IDT_ENGINE_TASK" in (:2, :3, :4, :5, :6) trx1 +0ms
knex:query update "TASK" set "DAT_EXECUTION" = :1 where "IDT_TASK" in (:2, :3, :4, :5, :6) trx1 +63ms
knex:query select * from "ENGINE_TASK" where "IDT_ENGINE_TASK" in (:1, :2, :3, :4, :5) trx3 +165ms
knex:query delete from "ENGINE_TASK" where "IDT_CUSTOMER_ID" = :1 trx5 +6s
knex:query delete from "TASK" "T" where exists (select * from "SCHEDULE" "S" where "IDT_CUSTOMER_ID" = :1 and t.idt_schedule = s.idt_schedule) trx5 +55ms
knex:query delete from "CUSTOMER_EXECUTION_FLAGS" where "IDT_CUSTOMER_ID" = :1 trx5 +70ms
knex:query delete from "SCHEDULE" "S" where "IDT_CUSTOMER_ID" = :1 trx5 +50ms
PASS src/test/schedule/generateSchedules.test.ts (15.647 s)
✓ generate schedules for today (8644 ms)
✓ should block delete on scheduled task (2464 ms)
✓ should permit delete on scheduled task (3173 ms)
RUNS src/test/engine/job.test.ts
vipires in scheduling-engine-test on master took 21s
Notice that the last log line from Jest was RUNS src/test/engine/job.test.ts, and the next line is my shell prompt again, reporting that the execution took 21 seconds.
It was very hard for me to explain this behavior, since I couldn't find any mention of the problem in the Jest documentation or on Stack Overflow. I've tried to keep this post short, but if more code is needed to investigate, just let me know.
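One way to narrow this down (a diagnostic sketch, not part of the original run; the file path is taken from the tree above) is to run only the file where the run stopped, still serially and with verbose output:
yarn jest src/test/engine/job.test.ts --runInBand --verbose
If that file passes on its own, the early exit is more likely caused by interaction between files (for example the shared beforeAll()/afterAll() cleanup) than by that file itself.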
Node and OS info:
Node version: v14.16.1
Yarn version: v2.4.1
Jest version: 26.6.2
Model Name: MacBook Pro
Model Identifier: MacBookPro 11,4
Processor Name: Quad-Core Intel Core i7
Processor Speed: 2,2 GHz
Number of Processors: 1
Total Number of Cores: 4
L2 Cache (per Core): 256 KB
L3 Cache: 6 MB
Hyper-Threading Technology: Enabled
Memory: 16 GB
Boot ROM Version: 199.0.0.0.0
SMC Version (system): 2.29f24

How can I get more granular test results in AWS Device Farm? [Appium Node.js]

I run Appium Node.js tests on AWS Device Farm. I would like to get granular test results shown in Device Farm, but I always get one "Test Suite" result which includes all tests, so if one small test fails the whole test suite fails.
I read in the Device Farm docs that a standard environment displays more granular results, but I am not sure how to switch to or use the standard environment. I assume it has something to do with the YAML file, as the option to choose between the standard and custom environments is no longer available in the Device Farm UI.
This is my current YAML File:
version: 0.1
# Phases are collections of commands that get executed on Device Farm.
phases:
  # The install phase includes commands that install dependencies that your tests use.
  # Default dependencies for testing frameworks supported on Device Farm are already installed.
  install:
    commands:
      # By default, the Appium server version used is 1.7.2.
      # You can switch to an alternate supported version (1.6.5, 1.7.1, 1.7.2, 1.8.0, 1.8.1, 1.9.1) by using a command like "avm 1.7.1"
      # OR
      # To install a newer version of Appium, use the following commands:
      - export APPIUM_VERSION=1.9.1
      - avm $APPIUM_VERSION
      - ln -s /usr/local/avm/versions/$APPIUM_VERSION/node_modules/.bin/appium /usr/local/avm/versions/$APPIUM_VERSION/node_modules/appium/bin/appium.js
      # By default the Node version installed is 11.4.0.
      # You can switch to an alternate Node version using the command below.
      # - nvm install 10.13.0
      # Unpack and install the node modules that you uploaded in the test phase.
      - echo "Navigate to test package directory"
      - cd $DEVICEFARM_TEST_PACKAGE_PATH
      - npm install *.tgz
  # The pre-test phase includes commands that set up your test environment.
  pre_test:
    commands:
      # We recommend starting the Appium server process in the background using the command below.
      # The Appium server log will go to the $DEVICEFARM_LOG_DIR directory.
      # The environment variables below will be auto-populated at run time.
      - echo "Start appium server"
      - >-
        appium --log-timestamp --device-name $DEVICEFARM_DEVICE_NAME
        --platform-name $DEVICEFARM_DEVICE_PLATFORM_NAME --app $DEVICEFARM_APP_PATH
        --automation-name UiAutomator2 --udid $DEVICEFARM_DEVICE_UDID
        --chromedriver-executable $DEVICEFARM_CHROMEDRIVER_EXECUTABLE >> $DEVICEFARM_LOG_DIR/appiumlog.txt 2>&1 &
      - >-
        start_appium_timeout=0;
        while [ true ];
        do
          if [ $start_appium_timeout -gt 60 ];
          then
            echo "appium server never started in 60 seconds. Exiting";
            exit 1;
          fi;
          grep -i "Appium REST http interface listener started on 0.0.0.0:4723" $DEVICEFARM_LOG_DIR/appiumlog.txt >> /dev/null 2>&1;
          if [ $? -eq 0 ];
          then
            echo "Appium REST http interface listener started on 0.0.0.0:4723";
            break;
          else
            echo "Waiting for appium server to start. Sleeping for 1 second";
            sleep 1;
            start_appium_timeout=$((start_appium_timeout+1));
          fi;
        done;
  # The test phase includes commands that run your test suite.
  test:
    commands:
      # Go into the root folder containing your source code and node_modules.
      - echo "Navigate to test source code"
      # Change the directory to the node_modules folder, as it contains your test code and the dependency node modules.
      - cd $DEVICEFARM_TEST_PACKAGE_PATH/node_modules/*
      - echo "Start Appium Node test"
      # Enter the command below to start the tests. The command should be similar to what you use to run the tests locally.
      # For example, if you run your tests locally with "node YOUR_TEST_FILENAME.js", enter the same command below:
      - npm run test:android
  # The post-test phase includes commands that are run after your tests are executed.
  post_test:
    commands:
# The artifacts section lets you specify the location where your test logs and device logs will be stored.
# It also lets you specify which test logs and artifacts you want Device Farm to collect.
# These logs and artifacts will be available through the ListArtifacts API in Device Farm.
artifacts:
  # By default, Device Farm will collect your artifacts from the following directories:
  - $DEVICEFARM_LOG_DIR
AWS Device Farm's standard mode is independent of the YAML file. It's a setting configured in the "Configure" step when you schedule a run through the console, or via the CLI through the ScheduleRun API. Currently, AWS Device Farm does not support Appium Node in standard mode, which means that the granular reporting you are seeking is not available.
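For reference, when scheduling through the CLI the environment is determined by the test settings of the schedule-run call: supplying a test spec selects the custom environment, while omitting it selects the standard one. A rough sketch with placeholder ARNs (and, per the above, standard mode would still not apply to Appium Node today):
aws devicefarm schedule-run --project-arn <project-arn> --app-arn <app-arn> --device-pool-arn <device-pool-arn> --test type=APPIUM_NODE,testPackageArn=<test-package-arn>,testSpecArn=<test-spec-arn>
Dropping the testSpecArn field from --test is what would switch a supported framework to the standard environment.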
If you have further questions, you can head over to the AWS Device Farm Forums for additional assistance from their engineering team.
Andy

Haskell installation in docker container using stack failing: too many open files

I have a simple Dockerfile
FROM haskell:8
WORKDIR "/root"
CMD ["/bin/bash"]
which I run with the current directory mounted to /root. In that directory I have a Haskell project that uses stack (funblog). I configured stack.yaml to use the lts-7.20 resolver, which installs ghc-8.0.1.
Inside the container, after running stack update, I ran stack setup, but I get "Too many open files in system" during the GHC compilation.
This is my stack.yaml
flags: {}
packages:
- '.'
- location:
    git: https://github.com/agrafix/Spock.git
    commit: 2c60a48b2c0be0768071cc1b3c7f14590ffcc7d6
  subdirs:
  - Spock
  - Spock-core
  - reroute
- location:
    git: https://github.com/agrafix/Spock-digestive.git
    commit: 4c85647427e21bbaefbf04c4bc315d4bdfabba0e
extra-deps:
- digestive-bootstrap-0.1.0.1
- blaze-bootstrap-0.1.0.1
- digestive-functors-blaze-0.6.0.6
resolver: lts-7.20
One important note: I don't want to use Docker to deploy the app, just to compile it, i.e. as part of my dev process.
Any ideas?
Should I use another image, one without GHC pre-installed, with Docker? Which one?
Update
Yes, I could use the GHC built into the container, and that is a good idea, but I wondered whether there is any issue with building GHC inside Docker.
Update 2
For anyone wishing to reproduce (on macOS, by the way), you can clone the repo https://github.com/carlosayam/funblog and grab commit 9446bc0e52574cc574a9eb5f2733f69e07b874ef
(I will probably move on using the container's GHC)
By default, Docker for macOS limits the number of file descriptors to avoid hitting macOS system-wide limits (the default limit is 900). To increase the limit, run the following commands:
$ cd ~/Library/Containers/com.docker.docker/Data/database/
$ git reset --hard
HEAD is now at 9410b78 last-start-time changed at 1480947038
$ cat com.docker.driver.amd64-linux/slirp/max-connections
900
$ echo 1200 > com.docker.driver.amd64-linux/slirp/max-connections
$ git add com.docker.driver.amd64-linux/slirp/max-connections
$ git commit -s -m 'Update the maximum number of connections'
[master 227a248] Update the maximum number of connections
1 file changed, 1 insertion(+), 1 deletion(-)
Then check the notice messages by:
$ syslog -k Sender Docker
<Notice>: updating connection limit to 1200
To check how many files you got open, run: sysctl kern.num_files.
To check what's your current limit, run: sysctl kern.maxfiles.
To increase it system-wide, run: sysctl -w kern.maxfiles=20480.
Source: Containers become unresponsive due to "too many connections".
See also: Docker: How to increase number of open files limit.
On Linux, you can also try to run Docker with --ulimit, e.g.
docker run --ulimit nofile=5000:5000 <image-tag>
Source: Docker error: too many open files
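To confirm which limit the container actually sees, you can check from inside it; a quick sanity check using the haskell:8 image from the question:
docker run --rm --ulimit nofile=5000:5000 haskell:8 sh -c 'ulimit -n'
This should print 5000; without the --ulimit flag it prints whatever default the Docker daemon applies.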

Custom remote task is not executed with capistrano 3

I ran into a weird issue with Capistrano 3 and brunch.
I want to execute brunch on the remote server, but nothing happens. My custom remote task looks like this:
namespace :brunch do
  desc "Building assets with brunch.io"
  task :build do
    on roles(:web) do
      within "#{release_path}" do
        execute "node #{release_path}/node_modules/brunch/bin/brunch build --env=#{fetch(:stage)} #{release_path}"
      end
    end
  end
end
When I run "cap staging deploy", I can see command is executed:
INFO [a246858c] Running node /releases/20160303145521/node_modules/brunch/bin/brunch build --env=staging /releases/20160303145521 as web
INFO [a246858c] Finished in 0.159 seconds with exit status 0 (successful).
But my assets are not built; nothing is done.
And if I connect to my server and run the command by hand, everything works fine.
I don't understand this behaviour; has anyone seen it before?
Thanks a lot for your help.
I'm using Capistrano Version: 3.4.0 (Rake Version: 10.5.0)

R Command not recognized when submitted with SSH

I am submitting a shell script on a remote host that in turn submits an R script, but I get the error R: command not found or Rscript: command not found (depending on whether I tried R CMD BATCH or Rscript).
I have tried submitting in the following ways:
ssh <remote-host> exec $HOME/test_script.sh
ssh <remote-host> `sh $HOME/test_script.sh`
The script test_script.sh contains (have tried Rscript as well):
#!/bin/sh
Rscript --no-save --no-restore $HOME/greetme.R
exit 0
The script greetme.R contains only cat("Hello\n").
The reason I am getting flustered is that when I log into the remote-host and submit the original script with sh $HOME/test_script.sh, it runs as intended.
The system specs and R versions for both the local and remote hosts are identical:
> R.version
_
platform x86_64-unknown-linux-gnu
arch x86_64
os linux-gnu
system x86_64, linux-gnu
status
major 3
minor 1.0
year 2014
month 04
day 10
svn rev 65387
language R
version.string R version 3.1.0 (2014-04-10)
nickname Spring Dance
Why is Linux refusing to recognize the commands?
I would prefer solutions using R CMD BATCH or Rscript but if there are known workarounds using littler or %R_TERM% I would like to hear them too.
I used this related question as reference, as well as the documents referenced in the comments: R.exe, Rcmd.exe, Rscript.exe and Rterm.exe: what's the difference?
EDIT for solution:
As @merlin2011 suggested, once I specified the full path in test_script.sh, everything worked as intended:
#!/bin/sh
/opt/R/bin/Rscript --no-save --no-restore $HOME/greetme.R
exit 0
I also got the path using the suggested command:
$ which Rscript
/opt/R/bin/Rscript
It appears that you have a PATH issue, where R is not on your PATH when you try to run the command through ssh.
If you specify the full path to R and Rscript on the remote host, it should resolve the problem.
If you are not sure what the full path is, try logging into the server and running which R to get the path.
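A quick way to see the difference is to compare the PATH a non-interactive ssh command gets with the one you see after logging in (a diagnostic sketch; non-interactive ssh sessions typically skip the login shell's profile files where PATH is often extended):
ssh <remote-host> 'echo $PATH'
If /opt/R/bin is missing from that output but present when you run echo $PATH after logging in, hard-coding the full path (as above) or exporting PATH at the top of test_script.sh will fix it.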
