Can I tell which global node dependencies my app is using? - node.js

I'm deploying my first app to AWS Elastic Beanstalk and it's failing to run because some of the dependencies on my dev machine were installed globally, and so are missing from my app's node_modules folder that I have uploaded to AWS.
Is there a way to tell which dependencies my app is using that are not in package.json or node_modules?
Thanks

Try running this command in your terminal: npm list -g --depth=0. It will list all globally installed modules, and you can then manually check which ones your project requires.

As a rough first pass at gathering this information, you can find all modules your project requires with a simple grep:
grep -R --exclude-dir node_modules --include '*.js' require .
To extract just the module names and remove duplicates you can pipe the result through cut and sort:
grep -R --exclude-dir node_modules --include '*.js' require . |
cut -d '(' -f2 |
cut -d "'" -f2 |
sort -u
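On a line like const express = require('express');, the first cut keeps everything after the opening parenthesis and the second cut takes the quoted module name. A quick sanity check on a made-up require line (assuming single-quoted requires):

```shell
# Hypothetical require line piped through the same two cut calls:
echo "const express = require('express');" |
  cut -d '(' -f2 |
  cut -d "'" -f2
# → express
```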
You can then filter the result to print only what's globally installed by comparing it with the output of npm list -g. The comm command comes in handy:
grep -R --exclude-dir node_modules --include '*.js' require . |
cut -d '(' -f2 |
cut -d "'" -f2 |
sort -u > required.txt
npm list -g --depth=0 2> /dev/null |
grep @ |
cut -d ' ' -f2 |
cut -d@ -f1 |
sort -u > global_install.txt
comm -1 -2 required.txt global_install.txt
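For reference, comm -1 -2 (equivalently comm -12) suppresses the lines unique to each file and prints only the lines common to both sorted inputs. A minimal sketch with made-up module names:

```shell
# Two sorted lists of hypothetical module names:
printf 'express\nforever\nlodash\n' > required.txt
printf 'forever\nnodemon\n'         > global_install.txt

# Print only the modules present in both files:
comm -12 required.txt global_install.txt
# → forever
```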


How to get and use a docker container id from part of its name in a terminal pipe request?

I am trying to combine the following commands:
docker ps | grep track
that will give me
6b86b28a27b0 dev/jobservice/worker-jobtracking:3.5.0-SNAPSHOT "/tini -- /startup/s…" 25 seconds ago Up 2 seconds (health: starting) jobservice_jobTrackingWorker_1
So then, I grab the id and use it in the next request as:
docker logs 6b8 | grep -A 3 'info'
So far, the easiest way I could find was to send those commands separately, but I wonder if there is a simpler way to do it.
I think that the main issue here is that I am trying to find the name of the container based on part of its name.
So, to summarize, I would like to find and store the id of a container based on its name, then use it to explore its logs.
Thanks!
Perhaps there are cleaner ways to do it, but this works.
To get the ID of a partially matching container name:
$ docker ps --format "{{.ID}} {{.Names}}" | grep "partial" | cut -d " " -f1
Then you can use it in another bash command:
$ docker logs $(docker ps --format "{{.ID}} {{.Names}}" | grep "partial" | cut -d " " -f1)
Or wrap it in a function:
$ function dlog() { docker logs $(docker ps --format "{{.ID}} {{.Names}}" | grep "$1" | cut -d " " -f1); }
Which can then be used as:
$ dlog partial
In a nutshell, the pure bash approach to achieve what you want:
With sudo:
sudo docker ps | grep -i track - | awk '{print $1}' | head -1 | xargs sudo docker logs
Without sudo:
docker ps | grep -i track - | awk '{print $1}' | head -1 | xargs docker logs
Now let's break it down...
Let's see what containers I have running on my laptop for the Elixir programming language:
command:
sudo docker ps | grep -i elixir -
output:
0a19c6e305a2 exadra37/phoenix-dev:1.5.3_elixir-1.10.3_erlang-23.0.2_git "iex -S mix phx.serv…" 7 days ago Up 7 days 127.0.0.1:2000-2001->2000-2001/tcp Projects_todo-tasks_app
65ef527065a8 exadra37/st3-3211-elixir:latest "bash" 7 days ago Up 7 days SUBL3_1600981599
232d8cfe04d5 exadra37/phoenix-dev:1.5.3_elixir-1.10.3_erlang-23.0.2_git "mix phx.server" 8 days ago Up 8 days 127.0.0.1:4000-4001->4000-4001/tcp Staging_todo-tasks_app
Now let's find their ids:
command:
sudo docker ps | grep -i elixir - | awk '{print $1}'
output:
0a19c6e305a2
65ef527065a8
232d8cfe04d5
Let's get the first container ID:
command:
sudo docker ps | grep -i elixir - | awk '{print $1}' | head -1
NOTE: replace head -1 with head -2 | tail -1 to get the second line in the output...
output:
0a19c6e305a2
Let's see the logs for the first container in the list
command:
sudo docker ps | grep -i elixir - | awk '{print $1}' | head -1 | xargs sudo docker logs
NOTE: replace head -1 with tail -1 to get the logs for the last container in the list.
output:
[info] | module=WebIt.Live.Calendar.Socket function=mount/1 line=14 | Mount Calendar for date: 2020-09-30 23:29:38.229174Z
[debug] | module=Tzdata.ReleaseUpdater function=poll_for_update/0 line=40 | Tzdata polling for update.
[debug] | module=Tzdata.ReleaseUpdater function=poll_for_update/0 line=44 | Tzdata polling shows the loaded tz database is up to date.
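The head/tail selection from the notes above is plain line selection and can be checked on any list; a sketch with fabricated container IDs:

```shell
# Three hypothetical container IDs, one per line:
ids='0a19c6e305a2
65ef527065a8
232d8cfe04d5'

printf '%s\n' "$ids" | head -1            # → 0a19c6e305a2 (first)
printf '%s\n' "$ids" | head -2 | tail -1  # → 65ef527065a8 (second)
printf '%s\n' "$ids" | tail -1            # → 232d8cfe04d5 (last)
```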
Combining the different replies, I used:
function dlog() { docker ps | grep -i track - | awk '{print $1}' | head -1 | xargs docker logs | grep -i -A 4 "$1";}
to get the best of both worlds: a function that has me type 4 letters instead of 2 commands, with case-insensitive matching.
I can then use dlog keyword to get my logs.
I hardcoded track and -A 4 since I usually run that query, but I could have passed them as arguments for more modularity (my goal here was really simplicity).
Thanks for your help!

Most efficient way to get the latest version of an rpm via web

This is my attempt using wget to pull down the web page, dig out the latest tar file, and run a second wget to fetch it. In the example, I'm pulling down pip.
wget https://pypi.org/project/pip/#files
wget $(grep tar.gz index.html | head -1 | awk -F= '{print $2}' | sed 's/>//' | sed 's/\"//g')
gunzip -c $(ls | grep tar |tail -1) | tar xvf -
yum install -y python-setuptools
cd $(ls -d */ | grep pip)
python setup.py install
cd ..
I'm sure there is a better way, perhaps using only one wget or similar.
Do you mean like that?
wget $(curl -s "https://pypi.org/project/pip/#files"|grep -o 'https://[^"]*tar\.gz')
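The -o flag makes grep print only the matched part of the line, which is what extracts the .tar.gz URL from the page source. A sketch on a made-up HTML snippet:

```shell
# Hypothetical fragment of a download page:
html='<a href="https://files.example.org/pip-21.0.tar.gz">source</a>'

# Print just the URL matched by the pattern:
printf '%s\n' "$html" | grep -o 'https://[^"]*tar\.gz'
# → https://files.example.org/pip-21.0.tar.gz
```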

How to get current foldername and remove characters from the name in bash

I'm trying to write a single command in my makefile to get the current folder and remove all "." from the name.
I can get the current folder with $${PWD##*/} and I can remove the "."s with $${PWD//.} but I can't figure out how to combine these two into one.
The reason I need this is to kill my docker containers based on name of project. This is my command:
docker ps -q --filter name="mycontainer" | xargs -r docker stop
and I was hoping I could inject the project name before my container name like this:
docker ps -q --filter name=$${PWD##*/}"_mycontainer" | xargs -r docker stop
You can try:
var=$(echo ${PWD##*/} | sed "s/\.//g")
or:
var=$(tmp=${PWD##*/} && printf "${tmp//./}")
In your use case will be something like:
docker ps -q --filter name=$(tmp=${PWD##*/} && printf "%s_mycontainer" "${tmp//./}") | xargs -r docker stop
Note that there are more ways to do that (even more efficient).
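One of those other ways, sketched here: bash cannot nest ${PWD##*/} inside ${...//.}, but assigning to an intermediate variable avoids the subshell entirely (the directory name is made up):

```shell
# Hypothetical project directory containing dots:
mkdir -p /tmp/my.project.dir && cd /tmp/my.project.dir

name=${PWD##*/}   # strip the path: my.project.dir
name=${name//./}  # strip the dots: myprojectdir
echo "$name"
# → myprojectdir
```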

Yocto: bitbake command to regenerate all RPM files

I wanted to make some free space and deleted all directories in build/tmp/deploy/rpm/, thinking yocto would detect it and recreate them at the next bitbake call... it was a mistake! :(
Here's the bitbake error just in case:
bitbake <image_name>
[...]
ERROR: ... do_rootfs: minicom not found in the base feeds (<image_name> corei7-64-intel-common corei7-64 core2-64 x86_64 noarch any all).
[...list of every package...]
Is there any way to force the regeneration of every rpm using bitbake?
Forcing the regeneration with bitbake -f -c package_write_rpm <package> works, but I didn't find the command to do it all at once.
I tried cleaning the state of the native rpm packages, thinking it might invalidate the rpm files' state, but no luck:
bitbake -f -c cleanall nativesdk-rpm nativesdk-rpmresolve rpmresolve-native rpm-native
bitbake <image_name>
I also thought this would work, but it didn't:
bitbake -f -c package_write_rpm <image_name>
I will try to hack something with bitbake-layers show-recipes and xargs, but it would be cool to have a proper bitbake command.
I am using Yocto 2.1 (Krogoth).
Thanks!
I ended up writing the following script, using the bitbake dependency tree to get the list of packages (thanks to this yocto/bitbake reference page):
# bitbake -g <image> && cat pn-depends.dot | grep -v -e '-native' | grep -v digraph | grep -v -e '-image' | awk '{print $1}' | sort | uniq | grep -v "}" | grep -v cross | grep -v gcc | grep -v glibc > packages-list.txt
# cat packages-list.txt | xargs bitbake -f -c package_write_rpm
Maybe there is a more straightforward solution? For now, this worked.
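The filtering in the first command can be sanity-checked without a build tree; here is a sketch on a fabricated pn-depends.dot fragment (recipe names invented):

```shell
# Fabricated dependency graph in the format bitbake -g emits:
dot='digraph depends {
"busybox" -> "glibc"
"core-image-minimal" -> "busybox"
"rpm-native" -> "glibc"
}'

# Drop -native recipes, image recipes, and graph syntax,
# then keep the unique left-hand recipe names:
printf '%s\n' "$dot" |
  grep -v -e '-native' -e '-image' -e digraph |
  awk '{print $1}' | sort -u | grep -v '}'
# → "busybox"
```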

How to remove all packages from node_modules that are listed in package.json

Basically I am looking for the "opposite" of npm prune or this SO question.
More specifically:
I am looking to clean up node_modules folder from all packages that are listed in my root package.json file. Sort of a fresh start before npm install.
The reason I do not want to simply rm -rf node_modules/ is because I have some local modules that I don't want deleted.
It isn't possible to remove them all at once with a single npm command; you could write a shell script to loop through them. Alternatively, check npm ls and then remove each one manually with npm rm <name> or npm uninstall <name>. To update package.json at the same time, use npm rm <name> --save.
A better approach would be to have your permanent (local) modules in a directory of a higher level:
-node_modules (local)
-my_project
|-node_modules (npm)
That way, when you wipe the node_modules directory, the outer local modules remain.
As pointed out by others, there's no native npm command to do that.
I've taken this answer and modified the command:
$ npm ls | grep -v 'npm@' | awk '/@/ {print $2}' | awk -F@ '{print $1}' | xargs npm rm
This lists all of your local packages and then npm rm them one by one.
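The text processing can be checked against canned npm ls output (package names and versions made up): the first awk picks the name@version column, and the second splits off the name.

```shell
# Fabricated `npm ls` output: a root line plus two packages:
ls_output='npm@6.14.8 /usr/lib/node_modules/npm
├── express@4.17.1
└── lodash@4.17.20'

# Drop the root line, take the name@version field, keep the name:
printf '%s\n' "$ls_output" |
  grep -v 'npm@' |
  awk '/@/ {print $2}' |
  awk -F@ '{print $1}'
# → express
# → lodash
```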
For convenience, if you want, you can add the following line to your package.json:
"scripts": {
"uninstall": "npm ls | grep -v 'npm#' | awk '/#/ {print $2}' | awk -F# '{print $1}' | xargs npm rm"
}
and then run
$ npm run uninstall
