From inside a yarn workspace subpackage, run a root-level script - node.js

I'm wondering: if your terminal's current working directory is inside a yarn workspace, is there a way to run a yarn script that's defined at the project root without changing the current directory to be outside of a workspace?
For instance, you can run a command for a particular workspace by running yarn workspace workspace-name script-name, but is it possible to use that yarn workspace command to target not a subpackage, but the root package itself?

I couldn't find a way to do it with yarn workspace, but you can do it by specifying the current working directory (cwd) when running the root command. Assuming you're running your command from ~/packages/subpackage, you'll need to go up two levels with ../..:
yarn --cwd="../.." my-root-script
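If the workspace root is also the git root, one variant (my assumption, not part of the original answer) is to let git compute the path, so the command works from any depth:
yarn --cwd "$(git rev-parse --show-toplevel)" my-root-script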

Scripts that contain a : in their name can be run from anywhere!
For example, your root script called "root:something" can be called from within any workspace by running yarn root:something.
Note that this even works if the : script is not a root script, but a workspace script. See yarn docs.
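For example, with a root package.json roughly like the following (a minimal sketch; the script name root:something comes from the answer above, the rest is illustrative), running yarn root:something from inside any workspace invokes the root script:
{
  "private": true,
  "workspaces": ["packages/*"],
  "scripts": {
    "root:something": "echo \"running from the workspace root\""
  }
}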

Related

How to parameterize the workspace name in Yarn workspace commands

I use Yarn for a monorepo project. We also use Lerna to help automate some of the commands across the various workspaces. I'm trying to create a Yarn script command that accepts a variable for the workspace name. I have a working command:
"deploy:dryrun": "dotenv -e .env.local yarn workspace project1 deploy:dryrun"
This can be invoked with yarn deploy:dryrun.
What I would like is to have the workspace name (project1) be a variable provided on the command line. I would like to be able to invoke it as yarn deploy:dryrun project1 and have the workspace name substituted into the script command like
"deploy:dryrun": "dotenv -e .env.local yarn workspace $workspacename deploy:dryrun"
Is there a way to achieve this without turning the script into an actual bash script file?
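One possible approach, not from the original thread and specific to Unix shells (yarn runs scripts through sh, which expands environment variables), is to keep the $workspacename placeholder and supply it from the environment:
"deploy:dryrun": "dotenv -e .env.local yarn workspace $workspacename deploy:dryrun"
and then invoke it as:
workspacename=project1 yarn deploy:dryrun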

How do I change directory to deployment-archive to run gulp?

I am using AWS CodeDeploy to deploy a Node.js server to an EC2 instance.
The CodeDeploy agent downloads the bundle to the following path, which means my gulpfile.js resides at
/opt/codedeploy-agent/deployment-root/deployment-group-id/deployment-id/deployment-archive/gulpfile.js
However, any bash command I run will run from
/opt/codedeploy-agent
How do I change directory to the dynamically generated
deployment-group-id/deployment-id/deployment-archive
and also install the node modules from package.json?
You can use the environment variables CodeDeploy provides to change into the directory:
$DEPLOYMENT_GROUP_ID/$DEPLOYMENT_ID/deployment-archive/
You can refer to this for how to use them.
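For example, an AfterInstall hook script might look roughly like this (a sketch, assuming the default deployment root of /opt/codedeploy-agent/deployment-root and that gulp is installed as a local dependency):
#!/usr/bin/env bash
# Change into the archive of the current deployment using the variables CodeDeploy exports to hook scripts.
cd /opt/codedeploy-agent/deployment-root/$DEPLOYMENT_GROUP_ID/$DEPLOYMENT_ID/deployment-archive
# Install dependencies from package.json, then run gulp from the local node_modules.
npm install
./node_modules/.bin/gulp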
You can also change the root directory where CodeDeploy puts deployment-group-id/deployment-id/deployment-archive by changing it in the agent configuration file: https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-agent-configuration.html
By default it's in /opt/codedeploy-agent/deployment-root but you can change that.
Furthermore you can tell CodeDeploy to copy your files over to a different directory in the appspec file by controlling each file's source and destination:
https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-files.html
As I understand it, you cannot run this from BeforeInstall at the root of your project; you have to move the code to an AfterInstall hook.
To find the full path of the project root in AfterInstall and navigate from there, check this out:
AWS CodeDeploy AfterInstall script is being run from code-deploy agent dir

CASSANDRA_HOME variable not found

I am not able to find the CASSANDRA_HOME variable being set anywhere under my Cassandra installation path.
My guess is that it is the installation directory of Cassandra, because the log files are created in installed_dir/logs.
Where can I find CASSANDRA_HOME being set?
You haven't provided a lot of information, but I'll try to answer.
CASSANDRA_HOME is set in cassandra.in.sh, or cassandra.bat if you are running on Windows. If CASSANDRA_HOME isn't already set, the script sets it to the parent of the directory that the script is running in.
I'm assuming that you are running from a tarball installation, since you say that the log files end up under your install directory, hence your bin directory is directly under the install directory.
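As a rough illustration, the logic in bin/cassandra.in.sh is along these lines (paraphrased, not a verbatim excerpt of any particular Cassandra version):
# If CASSANDRA_HOME wasn't set in the environment, default it to the parent of the script's directory.
if [ "x$CASSANDRA_HOME" = "x" ]; then
    CASSANDRA_HOME="`dirname "$0"`/.."
fi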

Alter where my Jenkins Project build execute shell CWD to a sub-folder of the GIT repository

We have a Jenkins build server running on Ubuntu 14.04 x64 which is processing three other projects just dandy. We are integrating a fourth Node.js project, but the outside contractor developing it put all the project files inside a folder of the repository. So to clarify: the root of the repository is a single folder, in which the actual project root is located.
Jenkins checks the repository out, but when it runs the npm commands, install and build fail because Jenkins is looking for package.json in the repository root rather than in the subfolder where all the necessary files are located.
There is lots of information out there on use cases that have some similarities to mine, but nothing which provided a solution that worked for me.
I've tried using the full path when executing the shell commands, altering the project's workspace to the subfolder, and even researched a way to check out just the specific folder using Git, which appears to not be a trivial thing.
I cannot believe there isn't a way to execute a Jenkins build inside the checked-out repository as if a specific subfolder were the root (CWD) for all the scripts being executed in the shell instance.
Any help would be greatly appreciated!!!
As hinted in the comment, the solution is to cd into the subfolder before the calls to npm.
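A minimal sketch of the "Execute shell" build step, assuming the project lives in a subfolder named app (the folder name here is hypothetical):
# Run the npm steps from the subfolder instead of the repository root.
cd app
npm install
npm run build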

Execute command after deploy AWS Beanstalk

I have a problem executing a command after deploy. I have a Node.js project and a script; this script uses a binary from node_modules. If I put the command for the script in .ebextensions/.config, it executes before npm install and returns an error ("node_modules/.bin/some": No such file or directory). How can I execute a command after deploy? Thanks.
I found the following solution.
I added the following to my Beanstalk config:
commands:
  create_post_dir:
    command: "mkdir /opt/elasticbeanstalk/hooks/appdeploy/post"
    ignoreErrors: true
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/some_job.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      cd /var/app/current
      export PATH=$PATH:$(ls -td /opt/elasticbeanstalk/node-install/node-* | head -1)/bin
      npm run some_script
This config creates (if it does not already exist) the folder for post-deploy hook scripts and adds a bash script to it. Scripts in this folder execute only after npm install, which was exactly what I needed.
Thanks to this guy http://junkheap.net/blog/2013/05/20/elastic-beanstalk-post-deployment-scripts/
Create a file called .ebextensions/post_actions.config:
container_commands:
  <name of container_command>:
    command: "<command to run>"
This will be executed after the code was extracted, but before it was launched.
A better approach would be to go with the AWS platform hooks, where you can define postdeploy hooks: AWS Platform Hooks.
Inside the project root directory you can add .platform/hooks/postdeploy/.
Inside this path you can create xxx-postdeploy-script.sh. Files here run after the Elastic Beanstalk platform engine deploys the application and proxy server. This is the last deployment workflow step.
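A minimal sketch of such a hook, assuming a hypothetical file at .platform/hooks/postdeploy/01_run_script.sh committed with execute permission (the script name some_script is taken from the earlier answer and is illustrative):
#!/bin/bash
# Runs after the platform engine has deployed the application and the proxy server.
cd /var/app/current
npm run some_script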
If you read the AWS .ebextensions documentation, it mentions the execution order; specifically, it states that all commands are executed before the application version is deployed.
"You can use the container_commands key to execute commands for your
container. The commands in container_commands are processed in
alphabetical order by name. They run after the application and web
server have been set up and the application version file has been
extracted, but before the application version is deployed."
If you deploy it a second time it should work, because your application is already unpacked. This, however, is not a working solution, because every new instance that is spawned will error.
