command-line: nodeJS not working on double-pipelining - node.js

I'm trying to execute my node app through a bash script, using some pipes to get nicely formatted output:
node ~/app.js 2>&1 | grep -v "something" | grep --color -Ei "^|Error"
But the output appears frozen: nothing is printed.
Everything works fine when I type node ~/app.js | command_1; the problem seems to appear only when using node ~/app.js | command_1 | command_2.
There is no problem with other commands, e.g. cat myscript.sh | command_1 | command_2.
I'm using node v0.10.15 and ubuntu 13.10 server.
TL;DR: No output from node ~/app.js | command_1 | command_2
How to handle this problem?
Let me know if my question is not clear.
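A likely culprit (assuming GNU grep, which the colored output suggests): when grep writes to a pipe rather than a terminal, it block-buffers its output, so the middle grep in a three-stage pipeline holds lines back until its buffer fills, which looks like a freeze for a long-running process. A minimal sketch of the workaround, using a hypothetical produce function as a stand-in for node ~/app.js:

```shell
# Stand-in for `node ~/app.js 2>&1` (hypothetical output):
produce() { printf 'starting\nsomething noisy\nError: boom\n'; }

# --line-buffered (GNU grep) flushes after every line, so the final
# pipeline stage receives data immediately instead of waiting for a
# full buffer:
produce | grep --line-buffered -v "something" | grep --color -Ei "^|Error"
```

For tools without such a flag, stdbuf -oL (GNU coreutils) can force line buffering on their stdout.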

Related

How to get and use a Docker container ID from part of its name in a terminal pipe?

I am trying to combine the following commands:
docker ps | grep track
that will give me
6b86b28a27b0 dev/jobservice/worker-jobtracking:3.5.0-SNAPSHOT "/tini -- /startup/s…" 25 seconds ago Up 2 seconds (health: starting) jobservice_jobTrackingWorker_1
So then, I grab the id and use it in the next request as:
docker logs 6b8 | grep -A 3 'info'
So far, the easiest way I could find was to run those commands separately, but I wonder if there is a simple way to combine them.
I think that the main issue here is that I am trying to find the name of the container based on part of its name.
So, to summarize, I would like to find and store the ID of a container based on its name, then use it to explore its logs.
Thanks!
Perhaps there are cleaner ways to do it, but this works.
To get the ID of a partially matching container name:
$ docker ps --format "{{.ID}} {{.Names}}" | grep "partial" | cut -d " " -f1
Then you can use it in another bash command:
$ docker logs $(docker ps --format "{{.ID}} {{.Names}}" | grep "partial" | cut -d " " -f1)
Or wrap it in a function:
$ function dlog() { docker logs $(docker ps --format "{{.ID}} {{.Names}}" | grep "$1" | cut -d " " -f1); }
Which can then be used as:
$ dlog partial
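A hedged alternative, assuming a reasonably recent docker CLI: docker ps has a built-in name filter that does substring matching, so the grep and cut stages can be dropped entirely (dlogf is a hypothetical name, not from the thread):

```shell
# --filter "name=..." does substring matching on container names;
# -q prints only the IDs, and head -1 keeps the first match:
dlogf() { docker logs "$(docker ps -q --filter "name=$1" | head -1)"; }

# Usage: print the logs of the first container whose name contains "partial":
#   dlogf partial
```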
In a nutshell, the pure bash approach to achieve what you want:
With sudo:
sudo docker ps | grep -i track - | awk '{print $1}' | head -1 | xargs sudo docker logs
Without sudo:
docker ps | grep -i track - | awk '{print $1}' | head -1 | xargs docker logs
Now let's break it down...
Let's see what containers I have running on my laptop for the Elixir programming language:
command:
sudo docker ps | grep -i elixir -
output:
0a19c6e305a2 exadra37/phoenix-dev:1.5.3_elixir-1.10.3_erlang-23.0.2_git "iex -S mix phx.serv…" 7 days ago Up 7 days 127.0.0.1:2000-2001->2000-2001/tcp Projects_todo-tasks_app
65ef527065a8 exadra37/st3-3211-elixir:latest "bash" 7 days ago Up 7 days SUBL3_1600981599
232d8cfe04d5 exadra37/phoenix-dev:1.5.3_elixir-1.10.3_erlang-23.0.2_git "mix phx.server" 8 days ago Up 8 days 127.0.0.1:4000-4001->4000-4001/tcp Staging_todo-tasks_app
Now let's find their ids:
command:
sudo docker ps | grep -i elixir - | awk '{print $1}'
output:
0a19c6e305a2
65ef527065a8
232d8cfe04d5
Let's get the first container ID:
command:
sudo docker ps | grep -i elixir - | awk '{print $1}' | head -1
NOTE: replace head -1 with head -2 | tail -1 to get the second line of the output...
output:
0a19c6e305a2
Let's see the logs for the first container in the list
command:
sudo docker ps | grep -i elixir - | awk '{print $1}' | head -1 | xargs sudo docker logs
NOTE: replace head -1 with tail -1 to get the logs for the last container in the list.
output:
[info] | module=WebIt.Live.Calendar.Socket function=mount/1 line=14 | Mount Calendar for date: 2020-09-30 23:29:38.229174Z
[debug] | module=Tzdata.ReleaseUpdater function=poll_for_update/0 line=40 | Tzdata polling for update.
[debug] | module=Tzdata.ReleaseUpdater function=poll_for_update/0 line=44 | Tzdata polling shows the loaded tz database is up to date.
Combining the different replies, I used:
function dlog() { docker ps | grep -i track - | awk '{print $1}' | head -1 | xargs docker logs | grep -i -A 4 "$1"; }
to get the best of both worlds: a function that lets me type four letters instead of two commands, with no case sensitivity.
I can then use dlog keyword to get my logs.
I hardcoded track and -A 4 since I usually use that query, but I could have passed them as arguments for more modularity (my goal here was really simplicity).
Thanks for your help!
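For reference, a sketch of the same function with the hardcoded parts turned into optional arguments; the defaults below (track, info, 4 context lines) are assumptions carried over from the thread, not requirements:

```shell
# dlog [NAME] [QUERY] [CONTEXT]: show the logs of the first container
# whose ps listing matches NAME, filtered case-insensitively for QUERY
# with CONTEXT trailing lines of context.
dlog() {
    local name="${1:-track}" query="${2:-info}" context="${3:-4}"
    docker ps | grep -i "$name" - | awk '{print $1}' | head -1 \
        | xargs docker logs 2>&1 | grep -i -A "$context" "$query"
}
```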

Killing multiple Mac OS processes dynamically in one command? [duplicate]

This question already has answers here:
How to kill all processes with the same name using OS X Terminal
(6 answers)
Closed 4 years ago.
I have this issue where sometimes .Net Core crashes when I run the debugger in VS Code, and I'm unable to restart the debugger. Even if I quit out of VS Code and go back in, the processes are still open. Rather than restart the machine, I can kill each process manually, but sometimes there are a whole bunch of them, and it gets pretty tedious. For example:
$ ps -eaf | grep dotnet | grep -v grep
16528 ?? 0:02.65 /usr/local/share/dotnet/dotnet /Users/ceti-alpha-v/Documents/NGRM/application/NGRM.Web/bin/Debug/netcoreapp2.0/NGRM.Web.dll
16530 ?? 0:02.75 /usr/local/share/dotnet/dotnet /Users/ceti-alpha-v/Documents/NGRM/application/NGRM.Web/bin/Debug/netcoreapp2.0/NGRM.Web.dll
16532 ?? 0:02.85 /usr/local/share/dotnet/dotnet /usr/local/share/dotnet/sdk/2.1.403/Roslyn/bincore/VBCSCompiler.dll
$ kill 16528 16530 16532
I'd like to delete the processes automatically with a single command, but I'm not sure how to pipe each PID to kill.
You can use xargs like this:
ps -eaf | grep dotnet | grep -v "grep" | awk '{print $2}' | xargs kill
or if you want to just kill all dotnet processes
killall dotnet
You can use command substitution:
kill $(ps -eaf | grep dotnet | grep -v grep | awk '{ print $2 }')
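A hedged alternative, assuming pgrep/pkill (procps) are available on the machine: they match process names directly, which removes the need for the grep -v grep step. Sketched here with a throwaway sleep process standing in for dotnet:

```shell
# Start a disposable process to act on:
sleep 297 &

# pgrep -f matches against the full command line; anchoring the pattern
# keeps it from matching anything but the sleep itself:
pgrep -f '^sleep 297$'

# pkill matches the same way and sends SIGTERM to every match in one step:
pkill -f '^sleep 297$'

# For the question's case, the equivalent would be:
#   kill $(pgrep dotnet)      # or simply: pkill dotnet
```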

Bash script works locally but not on a server

This Bash script is called from a SlackBot using a Python script that uses the paramiko library. It works perfectly when I run the script locally from the bot: it runs the application and then kills the process after the allotted time. But when I run this same script on a server from the SlackBot, it doesn't kill the process. Any suggestions? The script name is "runslack.sh", which is what the grep command below matches.
#!/bin/bash
slack-export-viewer -z "file.zip"
sleep 10
ps aux | grep runslack.sh | awk '{print $2}' | xargs kill
Please try this and let me know if it helps:
ps -ef | grep runslack.sh | grep -v grep | awk '{print $2}' | xargs -I {} kill -9 {}
Also make sure your user has sufficient permissions to kill these processes in your server environment.
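If pkill (procps) is available on the server, the whole pipeline can be replaced by one command; note that from inside runslack.sh, pkill -f runslack.sh would also signal the script itself, which matches what the original pipeline does. A minimal sketch using a throwaway process:

```shell
# Stand-in for the long-running process:
sleep 299 &

# Match the full command line and signal in one step (SIGTERM by default);
# the anchors keep the pattern from matching anything else:
pkill -f '^sleep 299$'

# For the real script:
#   pkill -f runslack.sh
```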

KSH - Linux - ps -ef - Return code versus number of processes found

Regarding the *nix ps -ef command: we have a number of shell scripts on an older AIX box that use ps -ef to check whether a specific process "name" is currently running or not. The typical usage I see is:
ps -ef | grep java | grep RUDaemon | grep -v grep
rc=$?
if (( rc > 0 ))
then
...do process-exists stuff...
else
...do process-does-not-exists stuff...
fi
Thing is, the code doesn't appear to be working on our newer Linux; i.e., rc now appears to hold the simple exit status of the command itself, not the number of processes found. Since I didn't write the original scripts, I'm not sure the original code EVER worked correctly. But the requirements state we need to utilize native *nix commands, so I re-wrote the code in the following manner and TESTED it for both the 'exists' and 'does-not-exist' conditions.
rc=$(ps -ef | grep java | grep RUDaemon | grep -v grep | wc -l)
if (( rc > 0 ))
then
...do process-exists stuff...
else
...do process-does-not-exists stuff...
fi
My question is, what is the proper usage of ps -ef to discover the number of processes running with a specific name or partial name?
tia, Adym
Since you want to discover not only processes running with a specific name, but also with a specific argument, the approach you showed is the principal usage. But it can be optimized:
ps -ef | grep java | grep RUDaemon | grep -v grep
can be replaced with
ps -fCjava | grep RUDaemon
if indeed java processes with RUDaemon among the arguments are wanted.
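On Linux with procps (whether pgrep exists on the older AIX box would need checking), pgrep can match and count in one step; -f matches against the full command line, covering both the java binary and the RUDaemon argument. A sketch:

```shell
# -c prints the number of matching processes; pgrep exits non-zero when
# there are no matches, hence the || true so the count still lands in rc:
rc=$(pgrep -fc 'java.*RUDaemon' || true)
if (( rc > 0 ))
then
    echo "process exists"
else
    echo "process does not exist"
fi
```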

How do I get "awk" to work correctly within a "su -c" command?

I'm running a script at the end of a Jenkins build to restart Tomcat. Tomcat's shutdown.sh script is widely known not to work in many instances, so my script is supposed to capture the PID of the Tomcat process and then attempt to shut it down manually. Here is the command I'm using to capture the PID:
ps -ef | grep Bootstrap | grep -v grep | awk '{print $2}' > tomcat.pid
When run manually, the command retrieves the PID perfectly. During the Jenkins build I have to switch users to run it, using "su user -c 'commands'" like this:
su user -c "ps -ef | grep Bootstrap | grep -v grep | awk '{print $2}' > tomcat.pid"
Whenever I do this however, the "awk" portion doesn't seem to be working. Instead of just retrieving the PID, it's capturing the entire process information. Why is this? How can I fix the command?
The issue is that $2 is processed by the original shell before being passed to the new user's shell. Since the value of $2 in the original shell is blank, the awk command in the target shell essentially becomes awk '{print }'. To fix it, just escape the $2:
su user -c "pushd $TOMCAT_HOME;ps -ef | grep Bootstrap | grep -v grep | awk '{print \$2}' > $TOMCAT_HOME/bin/tomcat.pid"
Note that you do want $TOMCAT_HOME to be processed by the original shell so that its value is set properly.
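The same expansion effect can be demonstrated with plain bash -c (standing in for su user -c): inside double quotes, the current shell expands $2 (normally empty in an interactive shell or argument-less script) before the inner shell ever runs awk.

```shell
# Unescaped: the outer shell replaces $2 with nothing, so the inner awk
# program is '{print }' and the whole line comes back:
echo "a b c" | bash -c "awk '{print $2}'"     # prints: a b c

# Escaped: \$2 survives to the inner shell, so awk prints field 2:
echo "a b c" | bash -c "awk '{print \$2}'"    # prints: b
```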
You don't need the pushd command, and you can replace the awk command with:
cut -d\  -f2
Note: there are two spaces between -d\ and -f2; the first is the escaped space delimiter.
