I'm trying to set up Appium using Node.js and WebdriverIO. I created my own TestSpec.yml file based on the AWS Device Farm default. I've hit the point where I think I have everything set up, except that AWS complains it cannot find the .ipa file.
Here's my index.js file that is run as a test against the .ipa file:
const wdio = require("webdriverio");

const opts = {
  path: '/wd/hub',
  port: 4723,
  capabilities: {
    platformName: "iOS",
    deviceName: "iPhone 13 Pro Max",
    automationName: "XCUITest",
    platformVersion: "15.5",
    autoAcceptAlerts: true,
    noReset: true,
    app: "$DEVICEFARM_APP_PATH"
  }
};

async function main () {
  const client = await wdio.remote(opts);
}

main();
Here is the test spec, which is mostly the same as the default AWS Device Farm Node.js one, except for some tweaks to use newer versions of Node and Appium. I also have an npm run ios command that just uses the package.json file to run the index.js script.
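(For reference, the relevant part of that package.json, as reflected in the run log further down:)

{
  "name": "appium",
  "version": "1.0.0",
  "scripts": {
    "ios": "node index.js"
  }
}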
# Phases are collections of commands that get executed on Device Farm.
phases:
  # The install phase includes commands that install dependencies that your tests use.
  # Default dependencies for testing frameworks supported on Device Farm are already installed.
  install:
    commands:
      - export NVM_DIR=$HOME/.nvm
      - . $NVM_DIR/nvm.sh
      - nvm install --lts
      # This test execution environment uses Appium version 1.9.1 by default, however we enable you to change it using the Appium version manager (avm)
      # An example "avm" command below changes the version to 1.19.0
      # For your convenience, we have pre-installed the following Appium versions: 1.9.1, 1.10.1, 1.11.1, 1.12.1, 1.13.0, 1.14.1, 1.14.2, 1.15.1, 1.16.0, 1.17.1, 1.18.0, 1.18.1, 1.18.2, 1.18.3, and 1.19.0
      # For iOS devices on OS version 14.2 and above, please use Appium version 1.19.0 or higher.
      # For iOS devices on OS version 14.0 and above, please use Appium version 1.18.0 or higher.
      # For iOS devices on OS version 13.4 through 13.7, please use Appium version 1.17.1 or higher.
      # Additionally, for iOS devices on OS version 13.0 through 13.3, please use Appium version 1.16.0 or higher.
      # To use one of these Appium versions, change the version number in the "avm" command below to your desired version:
      - export APPIUM_VERSION=1.22.3
      - avm $APPIUM_VERSION
      - ln -s /usr/local/avm/versions/$APPIUM_VERSION/node_modules/.bin/appium /usr/local/avm/versions/$APPIUM_VERSION/node_modules/appium/bin/appium.js
      # Device Farm provides different pre-built versions of WebDriverAgent, and each is suggested for different versions of Appium:
      # DEVICEFARM_WDA_DERIVED_DATA_PATH_V6: this version is suggested for Appium 1.18.2, 1.18.3, and 1.19.0. V6 is built from the following source code: https://github.com/appium/WebDriverAgent/releases/tag/v2.20.8
      # DEVICEFARM_WDA_DERIVED_DATA_PATH_V5: this version is suggested for Appium 1.18.0 and 1.18.1. V5 is built from the following source code: https://github.com/appium/WebDriverAgent/releases/tag/v2.20.2
      # DEVICEFARM_WDA_DERIVED_DATA_PATH_V4: this version is suggested for Appium 1.17.1. V4 is built from the following source code: https://github.com/appium/WebDriverAgent/releases/tag/v2.14.1
      # DEVICEFARM_WDA_DERIVED_DATA_PATH_V3: this version is suggested for Appium 1.16.0. V3 is built from the following source code: https://github.com/appium/WebDriverAgent/releases/tag/v2.3.2
      # DEVICEFARM_WDA_DERIVED_DATA_PATH_V2: this version is suggested for Appium 1.15.1. V2 is built from the following source code: https://github.com/appium/WebDriverAgent/tree/v1.3.5
      # DEVICEFARM_WDA_DERIVED_DATA_PATH_V1: this version is suggested for Appium 1.9.1 through 1.14.2. V1 is built from the following source code: https://github.com/appium/WebDriverAgent/tree/2dbbf917ec2e4707bae9260f701d43c82b55e1b9
      # We will automatically configure your WebDriverAgent version based on your Appium version using the following code.
      # For users of Appium versions 1.15.0 and higher, your Appium version requires that the UDID of the device not contain any "-" characters
      # So, we will create a new environment variable of the UDID specifically for Appium based on your Appium version
      - >-
        if [ $(echo $APPIUM_VERSION | cut -d "." -f2) -ge 19 ];
        then
        DEVICEFARM_DEVICE_UDID_FOR_APPIUM=$(echo $DEVICEFARM_DEVICE_UDID | tr -d "-");
        DEVICEFARM_WDA_DERIVED_DATA_PATH=$DEVICEFARM_WDA_DERIVED_DATA_PATH_V6;
        elif [ $(echo $APPIUM_VERSION | cut -d "." -f2) -ge 18 ];
        then
        DEVICEFARM_DEVICE_UDID_FOR_APPIUM=$(echo $DEVICEFARM_DEVICE_UDID | tr -d "-");
        DEVICEFARM_WDA_DERIVED_DATA_PATH=$DEVICEFARM_WDA_DERIVED_DATA_PATH_V5;
        elif [ $(echo $APPIUM_VERSION | cut -d "." -f2) -ge 17 ];
        then
        DEVICEFARM_DEVICE_UDID_FOR_APPIUM=$(echo $DEVICEFARM_DEVICE_UDID | tr -d "-");
        DEVICEFARM_WDA_DERIVED_DATA_PATH=$DEVICEFARM_WDA_DERIVED_DATA_PATH_V4;
        elif [ $(echo $APPIUM_VERSION | cut -d "." -f2) -ge 16 ];
        then
        DEVICEFARM_DEVICE_UDID_FOR_APPIUM=$(echo $DEVICEFARM_DEVICE_UDID | tr -d "-");
        DEVICEFARM_WDA_DERIVED_DATA_PATH=$DEVICEFARM_WDA_DERIVED_DATA_PATH_V3;
        elif [ $(echo $APPIUM_VERSION | cut -d "." -f2) -ge 15 ];
        then
        DEVICEFARM_DEVICE_UDID_FOR_APPIUM=$(echo $DEVICEFARM_DEVICE_UDID | tr -d "-");
        DEVICEFARM_WDA_DERIVED_DATA_PATH=$DEVICEFARM_WDA_DERIVED_DATA_PATH_V2;
        else
        DEVICEFARM_DEVICE_UDID_FOR_APPIUM=$DEVICEFARM_DEVICE_UDID;
        DEVICEFARM_WDA_DERIVED_DATA_PATH=$DEVICEFARM_WDA_DERIVED_DATA_PATH_V1;
        fi
      # By default the node version installed is 10.9.0
      # You can switch to an alternate node version using the command below.
      - nvm install 18.6.0
      # Unpackage and install the node modules that you uploaded in the test phase.
      - echo "Navigate to test package directory"
      - cd $DEVICEFARM_TEST_PACKAGE_PATH
      - npm install *.tgz
  # The pre-test phase includes commands that set up your test environment.
  pre_test:
    commands:
      # We recommend starting the appium server process in the background using the command below.
      # The Appium server log will go to the $DEVICEFARM_LOG_DIR directory.
      # The environment variables below will be auto-populated during run time.
      - echo "Start appium server"
      - >-
        appium --log-timestamp
        --default-capabilities "{\"usePrebuiltWDA\": true, \"derivedDataPath\":\"$DEVICEFARM_WDA_DERIVED_DATA_PATH\",
        \"deviceName\": \"$DEVICEFARM_DEVICE_NAME\", \"platformName\":\"$DEVICEFARM_DEVICE_PLATFORM_NAME\", \"app\":\"$DEVICEFARM_APP_PATH\",
        \"automationName\":\"XCUITest\", \"udid\":\"$DEVICEFARM_DEVICE_UDID_FOR_APPIUM\", \"platformVersion\":\"$DEVICEFARM_DEVICE_OS_VERSION\"}"
        >> $DEVICEFARM_LOG_DIR/appiumlog.txt 2>&1 &
      - >-
        start_appium_timeout=0;
        while [ true ];
        do
        if [ $start_appium_timeout -gt 60 ];
        then
        echo "appium server never started in 60 seconds. Exiting";
        exit 1;
        fi;
        grep -i "Appium REST http interface listener started on 0.0.0.0:4723" $DEVICEFARM_LOG_DIR/appiumlog.txt >> /dev/null 2>&1;
        if [ $? -eq 0 ];
        then
        echo "Appium REST http interface listener started on 0.0.0.0:4723";
        break;
        else
        echo "Waiting for appium server to start. Sleeping for 1 second";
        sleep 1;
        start_appium_timeout=$((start_appium_timeout+1));
        fi;
        done;
  # The test phase includes commands that start your test suite execution.
  test:
    commands:
      # Your test package is downloaded in $DEVICEFARM_TEST_PACKAGE_PATH
      # However, we must navigate to its subdirectory "node_modules/*", as this directory has your test code and dependency node modules
      - echo "Navigate to test code directory"
      - cd $DEVICEFARM_TEST_PACKAGE_PATH/node_modules/*
      - echo "Start Appium Node index test with npm run ios"
      # Enter the command below to start the tests. The command should be similar to what you use to run the tests locally.
      # For example, assuming you run your tests locally using the command "node YOUR_TEST_FILENAME.js", enter the same command below:
      - npm run ios
  # The post-test phase includes commands that are run after your tests are executed.
  post_test:
    commands:

# The artifacts phase lets you specify the location where your test logs and device logs will be stored,
# and also lets you specify the location of the test logs and artifacts which you want Device Farm to collect.
# These logs and artifacts will be available through the ListArtifacts API in Device Farm.
artifacts:
  # By default, Device Farm will collect your artifacts from the following directories
  - $DEVICEFARM_LOG_DIR
I've debugged a bunch of other issues, but now, no matter what I try, I cannot seem to find the .ipa file. I've tried using the default location. I've tried changing my directory to there, and it complains it does not exist. Yet in the testSpec file I seem to be setting it as a default capability via the shell commands. I've tried leaving the capabilities blank, and I've tried using my local path for the app file, but neither works (locally it all works). Here is a snippet of the output of running the TestSpec:
[DeviceFarm] npm run ios

> appium@1.0.0 ios
> node index.js

2022-08-03T23:08:33.435Z INFO webdriver: Initiate new session using the WebDriver protocol
2022-08-03T23:08:33.475Z INFO webdriver: [POST] http://127.0.0.1:4723/wd/hub/session
2022-08-03T23:08:33.475Z INFO webdriver: DATA {
  capabilities: {
    alwaysMatch: {
      platformName: 'iOS',
      deviceName: 'iPhone 13 Pro Max',
      automationName: 'XCUITest',
      platformVersion: '15.5',
      autoAcceptAlerts: true,
      noReset: true,
      app: '$DEVICEFARM_APP_PATH'
    },
    firstMatch: [ {} ]
  },
  desiredCapabilities: {
    platformName: 'iOS',
    deviceName: 'iPhone 13 Pro Max',
    automationName: 'XCUITest',
    platformVersion: '15.5',
    autoAcceptAlerts: true,
    noReset: true,
    app: '$DEVICEFARM_APP_PATH'
  }
}
2022-08-03T23:08:35.175Z WARN webdriver: Request failed with status 500 due to An unknown server-side error occurred while processing the command. Original error: Bad app: $DEVICEFARM_APP_PATH. App paths need to be absolute or an URL to a compressed app file: The application at '$DEVICEFARM_APP_PATH' does not exist or is not accessible
My .ipa file is uploaded, and I can see it installed on the test devices when I watch the video via AWS Device Farm. I just seem unable to find the path to it. The built-in fuzz testing works as well.
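(For context: Node does not expand shell variables inside JavaScript string literals, so a value exported by the test spec shell is normally read via process.env rather than written as a "$DEVICEFARM_APP_PATH" string. A minimal sketch of that pattern, for comparison, not necessarily the full fix here:)

const opts = {
  path: '/wd/hub',
  port: 4723,
  capabilities: {
    // ...other capabilities as above...
    // process.env picks up the value exported by the test spec shell;
    // a literal "$DEVICEFARM_APP_PATH" string is never expanded by Node.
    app: process.env.DEVICEFARM_APP_PATH
  }
};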
I am trying to run a script which needs node. I have node installed on my machine.
I can run an sh_binary via bazel run //:sh_bin and the script runs node just fine:
sh_binary(
    name = "sh_bin",
    data = [],
    srcs = [":script.sh"],
)
script.sh:
node -v
bazel run //:sh_bin:
v14.17.6
Now I want to convert this to sh_test:
sh_test(
    name = "sh_bin",
    data = [],
    srcs = [":script.sh"],
)
but now bazel test //:sh_bin cannot find node:
node: command not found
I also tried adding local = True to the test, and still the same issue.
Bazel tests are run in a more controlled environment than applications run via bazel run. One of the initial conditions that the test runner establishes is the value of $PATH: https://docs.bazel.build/versions/main/test-encyclopedia.html#initial-conditions
If you are working with remote execution, another problem could be that your test is executed on a machine that does not have node installed.
It's always a great idea to strive for a hermetic build that runs and tests independently of the host's state. That means you'd need to make the node program available to your binary or test as a data dep, as sketched below.
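A sketch of that approach (the //tools:node label is hypothetical; it stands for a node binary you check in or download through a repository rule):

sh_test(
    name = "sh_bin",
    srcs = [":script.sh"],
    # hypothetical label for a checked-in or downloaded node binary
    data = ["//tools:node"],
)

script.sh would then call the binary through its runfiles path (e.g. ./tools/node -v) instead of relying on whatever node happens to be on $PATH.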
A good alternative is to build on existing work such as https://github.com/bazelbuild/rules_nodejs.
That being said, your example actually works for me.
cd `mktemp -d`
touch WORKSPACE
echo "node -v" > script.sh
chmod +x script.sh
cat <<EOF > BUILD
sh_test(
    name = "sh_bin",
    srcs = [":script.sh"],
)
EOF
bazel test --test_output=all -- //:sh_bin
Starting local Bazel server and connecting to it...
INFO: Analyzed target //:sh_bin (24 packages loaded, 282 targets configured).
INFO: Found 1 test target...
INFO: From Testing //:sh_bin:
==================== Test output for //:sh_bin:
v17.1.0
================================================================================
Target //:sh_bin up-to-date:
bazel-bin/sh_bin
INFO: Elapsed time: 6.895s, Critical Path: 0.10s
INFO: 5 processes: 3 internal, 2 linux-sandbox.
INFO: Build completed successfully, 5 total actions
//:sh_bin PASSED in 0.0s
Executed 1 out of 1 test: 1 test passes.
INFO: Build completed successfully, 5 total actions
I installed tflint on my Mac, and when I try to execute --init it throws a 401 error.
Could you tell me if I need to export any env variables to fetch the git repo?
tflint --init
Installing `azurerm` plugin...
Failed to install a plugin. An error occurred:
Error: Failed to fetch GitHub releases: GET https://api.github.com/repos/terraform-linters/tflint-ruleset-azurerm/releases/tags/v0.14.0: 401 Bad credentials []
.tflint.hcl file
plugin "azurerm" {
  enabled = true
  version = "0.14.0"
  source  = "github.com/terraform-linters/tflint-ruleset-azurerm"
}
I searched the tflint documentation but could not find anything.
Thanks,
Santosh
tflint requires the azurerm plugin to be installed. For that, download the proper azurerm plugin binary here: https://github.com/terraform-linters/tflint-ruleset-azurerm/releases/tag/v0.16.0 (check the version that you need), unzip it, and then move it to your user's .tflint.d/plugins directory (create it if it doesn't exist):
mv ~/Downloads/tflint-ruleset-azurerm ~/.tflint.d/plugins/
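For example, assuming the darwin_amd64 build of v0.16.0 was downloaded (the exact filename depends on your platform):

cd ~/Downloads
unzip tflint-ruleset-azurerm_darwin_amd64.zip
mkdir -p ~/.tflint.d/plugins
mv tflint-ruleset-azurerm ~/.tflint.d/plugins/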
I was recently trying to use tflint behind a corporate firewall and was getting checksum errors. I was able to resolve it by:
Adding the following to my .zshrc file. Try open ~/.zshrc to open the file from the Mac terminal.
setup_local_tflint_plugin() {
  for PLUGIN in ${PLUGINS[@]}; do
    TFLINT_PLUGIN_NAME=${PLUGIN%|*}
    TFLINT_PLUGIN_VERSION=${PLUGIN#*|}
    TFLINT_PLUGIN_DIR=~/.tflint.d/plugins/terraform-linters/tflint-ruleset-${TFLINT_PLUGIN_NAME}/${TFLINT_PLUGIN_VERSION}
    mkdir -p $TFLINT_PLUGIN_DIR
    FILE=$TFLINT_PLUGIN_DIR/tflint-ruleset-${TFLINT_PLUGIN_NAME}
    if [ ! -f "$FILE" ]; then
      echo "Downloading version ${TFLINT_PLUGIN_VERSION} of the ${TFLINT_PLUGIN_NAME} plugin."
      curl -L "https://github.com/terraform-linters/tflint-ruleset-${TFLINT_PLUGIN_NAME}/releases/download/v${TFLINT_PLUGIN_VERSION}/tflint-ruleset-${TFLINT_PLUGIN_NAME}_${PLATFORM_ARCHITECTURE}.zip" > ${TFLINT_PLUGIN_DIR}/provider.zip
      unzip -o "${TFLINT_PLUGIN_DIR}/provider.zip" -d ${TFLINT_PLUGIN_DIR} && rm ${TFLINT_PLUGIN_DIR}/provider.zip
    fi
  done
  chmod -R +x ~/.tflint.d/plugins
}
# Valid values for PLATFORM_ARCHITECTURE are:
# 'darwin_amd64', 'darwin_arm64', 'linux_386', 'linux_amd64',
# 'linux_arm', 'linux_arm64', 'windows_386', 'windows_amd64'
PLATFORM_ARCHITECTURE="darwin_amd64"
PLUGINS=("azurerm|0.16.0" "aws|0.16.0")
setup_local_tflint_plugin
Opening up my code editor and navigating to my Terraform scripts.
Creating a .tflint.hcl configuration file in the same folder as my Terraform scripts (like below).
config {
  module = true
  force = false
  disabled_by_default = false
  plugin_dir = "~/.tflint.d/plugins/terraform-linters/tflint-ruleset-azurerm/0.16.0"
}

plugin "azurerm" {
  enabled = true
}
Opening a new terminal window (plugins should start installing).
Running tflint . --config ./.tflint.hcl.
Note: This only works for one plugin at a time (e.g. azurerm, aws, etc.).
To install a new plugin or plugin version, simply add more entries to the PLUGINS variable in the .zshrc file. To select the plugin, update the .tflint.hcl file's plugin_dir attribute to point to the right plugin and version.
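For example, switching to the aws plugin downloaded by the snippet above (it follows the same directory pattern) would look like:

config {
  module = true
  force = false
  disabled_by_default = false
  plugin_dir = "~/.tflint.d/plugins/terraform-linters/tflint-ruleset-aws/0.16.0"
}

plugin "aws" {
  enabled = true
}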
I run Appium node.js tests on AWS Device Farm. I would like to get granular test results shown in Device Farm, but I always get one "Test Suite" result which includes all tests. So if one small test fails, the whole test suite fails.
I read in the Device Farm docs that in a standard environment more granular results will be displayed, but I am not sure how to switch to or use the standard environment. I assume it has something to do with the YAML file, as the option to select between standard and custom environments is no longer given in the Device Farm UI.
This is my current YAML File:
version: 0.1

# Phases are collections of commands that get executed on Device Farm.
phases:
  # The install phase includes commands that install dependencies that your tests use.
  # Default dependencies for testing frameworks supported on Device Farm are already installed.
  install:
    commands:
      # By default, the Appium server version used is 1.7.2.
      # You can switch to an alternate supported version from 1.6.5, 1.7.1, 1.7.2, 1.8.0, 1.8.1, 1.9.1 by using a command like "avm 1.7.1"
      # OR
      # To install a newer version of Appium use the following commands:
      - export APPIUM_VERSION=1.9.1
      - avm $APPIUM_VERSION
      - ln -s /usr/local/avm/versions/$APPIUM_VERSION/node_modules/.bin/appium /usr/local/avm/versions/$APPIUM_VERSION/node_modules/appium/bin/appium.js
      # By default the node version installed is 11.4.0
      # You can switch to an alternate node version using the command below.
      # - nvm install 10.13.0
      # Unpackage and install the node modules that you uploaded in the test phase.
      - echo "Navigate to test package directory"
      - cd $DEVICEFARM_TEST_PACKAGE_PATH
      - npm install *.tgz
  # The pre-test phase includes commands that set up your test environment.
  pre_test:
    commands:
      # We recommend starting the appium server process in the background using the command below.
      # The Appium server log will go to the $DEVICEFARM_LOG_DIR directory.
      # The environment variables below will be auto-populated during run time.
      - echo "Start appium server"
      - >-
        appium --log-timestamp --device-name $DEVICEFARM_DEVICE_NAME
        --platform-name $DEVICEFARM_DEVICE_PLATFORM_NAME --app $DEVICEFARM_APP_PATH
        --automation-name UiAutomator2 --udid $DEVICEFARM_DEVICE_UDID
        --chromedriver-executable $DEVICEFARM_CHROMEDRIVER_EXECUTABLE >> $DEVICEFARM_LOG_DIR/appiumlog.txt 2>&1 &
      - >-
        start_appium_timeout=0;
        while [ true ];
        do
        if [ $start_appium_timeout -gt 60 ];
        then
        echo "appium server never started in 60 seconds. Exiting";
        exit 1;
        fi;
        grep -i "Appium REST http interface listener started on 0.0.0.0:4723" $DEVICEFARM_LOG_DIR/appiumlog.txt >> /dev/null 2>&1;
        if [ $? -eq 0 ];
        then
        echo "Appium REST http interface listener started on 0.0.0.0:4723";
        break;
        else
        echo "Waiting for appium server to start. Sleeping for 1 second";
        sleep 1;
        start_appium_timeout=$((start_appium_timeout+1));
        fi;
        done;
  # The test phase includes commands that run your test suite execution.
  test:
    commands:
      # Go into the root folder containing your source code and node_modules
      - echo "Navigate to test source code"
      # Change the directory to the node_modules folder as it has your test code and the dependency node modules.
      - cd $DEVICEFARM_TEST_PACKAGE_PATH/node_modules/*
      - echo "Start Appium Node test"
      # Enter the command below to start the tests. The command should be similar to what you use to run the tests locally.
      # For example, assuming you run your tests locally using the command "node YOUR_TEST_FILENAME.js", enter the same command below:
      - npm run test:android
  # The post-test phase includes commands that are run after your tests are executed.
  post_test:
    commands:

# The artifacts phase lets you specify the location where your test logs and device logs will be stored,
# and also lets you specify the location of the test logs and artifacts which you want Device Farm to collect.
# These logs and artifacts will be available through the ListArtifacts API in Device Farm.
artifacts:
  # By default, Device Farm will collect your artifacts from the following directories
  - $DEVICEFARM_LOG_DIR
AWS Device Farm's standard mode is independent of the YAML file. It's a setting configured in the "Configure" step when you schedule a run through the console, or via the CLI through the ScheduleRun API. Currently, AWS Device Farm does not support Appium Node in standard mode, which means that the granular reporting you are seeking is not available.
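For reference, a sketch of scheduling a run through the CLI (all ARNs elided). In the ScheduleRun API, supplying a testSpecArn inside --test selects the custom environment; omitting it requests the standard environment, where the test type supports it:

aws devicefarm schedule-run \
  --project-arn "arn:aws:devicefarm:..." \
  --app-arn "arn:aws:devicefarm:..." \
  --device-pool-arn "arn:aws:devicefarm:..." \
  --test '{"type": "APPIUM_NODE", "testPackageArn": "arn:aws:devicefarm:...", "testSpecArn": "arn:aws:devicefarm:..."}'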
If you have further questions, you can head over to the AWS Device Farm Forums for additional assistance from their engineering team.
Andy
I wonder what the best practice is for deploying a complex node.js app with Elastic Beanstalk without relying on the availability of an external npm repository (and without dealing with credentials and high availability of privately managed git repositories for internally developed packages).
It looks like there is one school of thought that preaches actually checking node_modules into the source tree for projects that are being deployed.
source 1: http://www.futurealoof.com/posts/nodemodules-in-git.html
source 2: http://eng.yammer.com/managing-node-js-dependencies-and-deployments-at-yammer/
So it sounds like checking them in is the right approach, but then there is the problem of different binary formats for some compiled packages (developing on Mac and deploying to Linux).
I've tried doing as the Yammer guys suggested (check in the modules except the bin folders), but even then a local "npm rebuild" command fails (it tries to chmod something in a bin folder that doesn't exist in the express.js module), so I haven't even gotten to see what the default beanstalk deployment environment will do with such a repository. I assume it runs "npm install" (which will do nothing), but will it run "npm rebuild"?
So, again, what is the best practice for deploying a complex project with multiple dependencies? It must be a solved problem by now in the node/beanstalk world, isn't it?
Thanks
Here's my configuration that does what you're talking about. Save it in the .ebextensions folder and you'll be set. The only difference between mine and the superior answer in https://stackoverflow.com/a/23242623/34340 is the NPM_CONFIG_UNSAFE_PERM=true line, which I learned from https://forums.aws.amazon.com/thread.jspa?messageID=534612
packages:
  yum:
    git: []
    gcc: []
    make: []
    openssl-devel: []
    libxml2: []
    libxml2-devel: []

files:
  "/opt/elasticbeanstalk/env.vars" :
    mode: "000775"
    owner: root
    group: users
    content: |
      export HOME=/home/ec2-user # ADDED EXPORT COMMAND
      export NPM_CONFIG_LOGLEVEL=error
      export NPM_CONFIG_UNSAFE_PERM=true
      export NODE_PATH=`ls -td /opt/elasticbeanstalk/node-install/node-* | head -1`/bin
  "/opt/elasticbeanstalk/hooks/appdeploy/pre/50npm.sh" :
    mode: "000775"
    owner: root
    group: users
    content: |
      #!/bin/bash
      . /opt/elasticbeanstalk/env.vars
      function error_exit
      {
        eventHelper.py --msg "$1" --severity ERROR
        exit $2
      }
      # install the app's node_modules if they are not installed yet
      if [ ! -d "/var/node_modules" ]; then
        mkdir /var/node_modules ;
      fi
      if [ -d /tmp/deployment/application ]; then
        ln -s /var/node_modules /tmp/deployment/application/
      fi
      OUT=$([ -d "/tmp/deployment/application" ] && cd /tmp/deployment/application && $NODE_PATH/npm install 2>&1) || error_exit "Failed to run npm install. $OUT" $?
      echo $OUT
  "/opt/elasticbeanstalk/hooks/configdeploy/pre/50npm.sh" :
    mode: "000666"
    owner: root
    group: users
    content: |
      # no need to run npm install during configdeploy
We have a fairly simple node.js app, but due to AWS Elastic Beanstalk's deployment mechanism, it takes about 5 minutes to roll out a new version (via git aws.push), even after a single-file commit.
I.e. the commit itself (and the upload) is fast (only 1 file to push), but then Elastic Beanstalk fetches the whole package from S3, unzips it and runs npm install, which causes node-gyp to compile some modules. Upon installation/build completion, Elastic Beanstalk wipes /var/app/current and replaces it with the new app version.
Needless to say, constantly rebuilding node_modules is unnecessary, and a rebuild that takes 30 seconds on my old MacBook Air takes >5 minutes on an ec2.micro instance; not fun.
I see two approaches here:
- tweak /opt/containerfiles/ebnode.py and play with the node_modules location to avoid its removal and rebuilding upon deployment.
- set up a git repo on the Elastic Beanstalk EC2 instance and basically rewrite the deployment procedure ourselves, so /var/app/current receives pushes and runs npm install only when necessary (which makes Elastic Beanstalk look like OpsWorks...).
Both options lack grace and are prone to issues when Amazon updates their Elastic Beanstalk hooks and architecture.
Maybe somebody has a better idea how to avoid constant rebuilding of node_modules that are already present in the app dir? Thank you.
Thanks Kirill, it was really helpful!
I'm just sharing my config file for people who are simply looking for the npm install solution. This file needs to be placed in the .ebextensions folder of the project; it is lighter since it doesn't include installation of the latest node version, and it is ready to use.
It also dynamically checks the installed node version, so there is no need for it to be included in the env.vars file.
.ebextensions/00_deploy_npm.config
files:
  "/opt/elasticbeanstalk/env.vars" :
    mode: "000775"
    owner: root
    group: users
    content: |
      export NPM_CONFIG_LOGLEVEL=error
      export NODE_PATH=`ls -td /opt/elasticbeanstalk/node-install/node-* | head -1`/bin
  "/opt/elasticbeanstalk/hooks/appdeploy/pre/50npm.sh" :
    mode: "000775"
    owner: root
    group: users
    content: |
      #!/bin/bash
      . /opt/elasticbeanstalk/env.vars
      function error_exit
      {
        eventHelper.py --msg "$1" --severity ERROR
        exit $2
      }
      # install the app's node_modules if they are not installed yet
      if [ ! -d "/var/node_modules" ]; then
        mkdir /var/node_modules ;
      fi
      if [ -d /tmp/deployment/application ]; then
        ln -s /var/node_modules /tmp/deployment/application/
      fi
      OUT=$([ -d "/tmp/deployment/application" ] && cd /tmp/deployment/application && $NODE_PATH/npm install 2>&1) || error_exit "Failed to run npm install. $OUT" $?
      echo $OUT
  "/opt/elasticbeanstalk/hooks/configdeploy/pre/50npm.sh" :
    mode: "000666"
    owner: root
    group: users
    content: |
      # no need to run npm install during configdeploy
25/01/13 NOTE: updated the scripts to run the npm -g version upgrade (only once, on initial instance roll-out or rebuild) and to avoid NPM operations during an EB configuration change (when the app dir is not present), to avoid errors and to speed up configuration updates.
Okay, Elastic Beanstalk behaves dodgily with recent node.js builds (including the presumably supported v0.10.10), so I decided to go ahead and tweak EB to do the following:
- to install ANY node.js version as per your env.config (including the most recent ones that are not yet supported by AWS EB)
- to avoid rebuilding existing node modules, including the in-app node_modules dir
- to install node.js globally (and any desired module as well).
Basically, I use env.config to replace the deploy & config hooks with customized ones (see below). Also, in a default EB container setup some env variables are missing ($HOME, for example) and node-gyp sometimes fails during rebuild because of it (it took me 2 hours of googling and reinstalling libxmljs to resolve this).
Below are the files to be included along with your build. You can inject them via env.config as inline code or via a source: URL (as in this example).
env.vars (desired node version & arch are included here and in env.config, see below)
export HOME=/root
export NPM_CONFIG_LOGLEVEL=error
export NODE_VER=0.10.24
export ARCH=x86
export PATH="$PATH:/opt/elasticbeanstalk/node-install/node-v$NODE_VER-linux-$ARCH/bin/:/root/.npm"
40install_node.sh (fetch and ungzip desired node.js version, make global symlinks, update global npm version)
#!/bin/bash
# source env variables including the node version
. /opt/elasticbeanstalk/env.vars
function error_exit
{
  eventHelper.py --msg "$1" --severity ERROR
  exit $2
}
# UNCOMMENT to update npm; otherwise it will be updated on instance init or rebuild
#rm -f /opt/elasticbeanstalk/node-install/npm_updated
# download and extract the desired node.js version
OUT=$( [ ! -d "/opt/elasticbeanstalk/node-install" ] && mkdir /opt/elasticbeanstalk/node-install ; cd /opt/elasticbeanstalk/node-install/ && wget -nc http://nodejs.org/dist/v$NODE_VER/node-v$NODE_VER-linux-$ARCH.tar.gz && tar --skip-old-files -xzpf node-v$NODE_VER-linux-$ARCH.tar.gz) || error_exit "Failed to UPDATE node version. $OUT" $?
echo $OUT
# make sure node binaries can be found globally
if [ ! -L /usr/bin/node ]; then
  ln -s /opt/elasticbeanstalk/node-install/node-v$NODE_VER-linux-$ARCH/bin/node /usr/bin/node
fi
if [ ! -L /usr/bin/npm ]; then
  ln -s /opt/elasticbeanstalk/node-install/node-v$NODE_VER-linux-$ARCH/bin/npm /usr/bin/npm
fi
if [ ! -f "/opt/elasticbeanstalk/node-install/npm_updated" ]; then
  cd /opt/elasticbeanstalk/node-install/node-v$NODE_VER-linux-$ARCH/bin/ && /opt/elasticbeanstalk/node-install/node-v$NODE_VER-linux-$ARCH/bin/npm update npm -g
  touch /opt/elasticbeanstalk/node-install/npm_updated
  echo "YAY! Updated global NPM version to `npm -v`"
else
  echo "Skipping NPM -g version update. To update, please uncomment 40install_node.sh:12"
fi
50npm.sh (creates /var/node_modules, symlinks it to the app dir and runs npm install. You can install any module globally from here; they will land in /root/.npm)
#!/bin/bash
. /opt/elasticbeanstalk/env.vars
function error_exit
{
  eventHelper.py --msg "$1" --severity ERROR
  exit $2
}
# install the app's node_modules if they are not installed yet
if [ ! -d "/var/node_modules" ]; then
  mkdir /var/node_modules ;
fi
if [ -d /tmp/deployment/application ]; then
  ln -s /var/node_modules /tmp/deployment/application/
fi
OUT=$([ -d "/tmp/deployment/application" ] && cd /tmp/deployment/application && /opt/elasticbeanstalk/node-install/node-v$NODE_VER-linux-$ARCH/bin/npm install 2>&1) || error_exit "Failed to run npm install. $OUT" $?
echo $OUT
env.config (note node version here too, and to be safe, put desired node version in env config in AWS console as well. I'm not certain which of these settings will take precedence.)
packages:
  yum:
    git: []
    gcc: []
    make: []
    openssl-devel: []

option_settings:
  - option_name: NODE_ENV
    value: production
  - option_name: RDS_HOSTNAME
    value: fill_me_in
  - option_name: RDS_PASSWORD
    value: fill_me_in
  - option_name: RDS_USERNAME
    value: fill_me_in
  - namespace: aws:elasticbeanstalk:container:nodejs
    option_name: NodeVersion
    value: 0.10.24

files:
  "/opt/elasticbeanstalk/env.vars" :
    mode: "000775"
    owner: root
    group: users
    source: https://dl.dropbox.com/....
  "/opt/elasticbeanstalk/hooks/configdeploy/pre/40install_node.sh" :
    mode: "000775"
    owner: root
    group: users
    source: https://raw.github.com/....
  "/opt/elasticbeanstalk/hooks/appdeploy/pre/50npm.sh" :
    mode: "000775"
    owner: root
    group: users
    source: https://raw.github.com/....
  "/opt/elasticbeanstalk/hooks/configdeploy/pre/50npm.sh" :
    mode: "000666"
    owner: root
    group: users
    content: |
      # no need to run npm install during configdeploy
  "/opt/elasticbeanstalk/hooks/appdeploy/pre/40install_node.sh" :
    mode: "000775"
    owner: root
    group: users
    source: https://raw.github.com/....
There you have it: on a t1.micro instance, deployment now takes 20-30 secs instead of 10-15 minutes! If you deploy 10 times a day, this tweak will save you 3 (three) weeks in a year.
Hope it helps and special thanks to AWS EB staff for my lost weekend :)
There's an npm package that overrides the default EB behaviour for the npm install command by truncating the following files:
/opt/elasticbeanstalk/hooks/appdeploy/pre/50npm.sh
/opt/elasticbeanstalk/hooks/configdeploy/pre/50npm.sh
https://www.npmjs.com/package/eb-disable-npm
This might be better than just copying a script from SO, since this package is maintained and will probably be updated when EB behaviour changes.
I've found a quick solution to this. I looked through the build scripts that Amazon is using, and they only run npm install if package.json is present. So after your initial deploy you can rename it to _package.json and npm install won't run anymore! It's not the best solution, but it's a quick fix if you need one!
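For example (a sketch; the rename just has to be part of the next deployed commit):

mv package.json _package.json
git add -A
git commit -m "rename package.json so EB skips npm install"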
I had 10+ minute builds whenever I would deploy. The solution was much simpler than what others have come up with... just check node_modules into git! See http://www.futurealoof.com/posts/nodemodules-in-git.html for the reasoning.
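A sketch of what that looks like (first remove any node_modules entry from .gitignore):

git add node_modules
git commit -m "check node_modules into the repo"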