My Linux bash skills are rudimentary so I am seeking some help. I am attempting to insert some CPU temperature data into an influx database so that it can be displayed on a Grafana dashboard.
So far I have been able to retrieve the CPU temperatures via ipmitool in Linux; see the example below, which shows the command I run and the resulting temperature figures.
ipmitool sdr elist full | grep CPU | grep Temp | cut -d "|" -f 5 | cut -d " " -f 2
40
47
I want to feed these numbers into variables so I can insert them into the influx database using a command something like the one below.
curl -i -XPOST 'http://localhost:8086/write?db=mydb' --data-binary 'server_temps,sensor=CPU1 value=23'
The command above works manually, and inserts the value 23 into the influx database. What I'd like to do is automate the data collection and insertion into the influx database... but for both CPU1 and CPU2.
Initially I am thinking I may need several curl commands, so that the temperatures for CPU1 and CPU2 can each be added to the database every 30 seconds or so. If I can get this to work, I will possibly want to add additional data from ipmitool later.
I suspect this may not be the best method, so all ideas and help are very much appreciated.
Would a simple bash loop be sufficient?
for temperature in $(ipmitool sdr elist full | grep CPU | grep Temp | cut -d "|" -f 5 | cut -d " " -f 2)
do
    curl -i -XPOST "http://localhost:8086/write?db=mydb" --data-binary "server_temps,sensor=inlet value=$temperature"
done
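A simple loop works, but as written it tags every reading as sensor=inlet, so CPU1 and CPU2 become indistinguishable in the database. Below is a rough sketch that keeps them apart and repeats every 30 seconds; it assumes ipmitool reports the sensors in a stable order (CPU1 first, CPU2 second) and a database named mydb, as in the question.

#!/usr/bin/env bash
# Sketch only: assumes the sensors come back in a stable order, CPU1 first.
while true; do
    readarray -t temps < <(ipmitool sdr elist full | grep CPU | grep Temp | cut -d "|" -f 5 | cut -d " " -f 2)
    for i in "${!temps[@]}"; do
        sensor="CPU$((i + 1))"    # temps[0] -> CPU1, temps[1] -> CPU2
        curl -i -XPOST "http://localhost:8086/write?db=mydb" \
            --data-binary "server_temps,sensor=$sensor value=${temps[$i]}"
    done
    sleep 30
done

Note the line protocol: the tag set is joined to the measurement name with a comma, and the field set follows after a single space, with no comma in between.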
Related
I have a bash script with the following contents:
#!/bin/bash
while true; do
    netstat -antp | grep LISTEN | tr -s ' ' | cut -d ' ' -f 4 > /tmp/log
    sleep 100
done
Say I create a service which executes the script on boot. But when I use the ps -eo command, I'm able to see the commands being executed. For example:
netstat -antp
grep LISTEN
tr -s ' '
cut -d ' ' -f 4
But I wish to suppress this output and hide the execution of these commands. Is there a way to do it?
Any other suggestions are welcome too. Thanks in advance!
You can't hide running processes from the system, at least not without some kernel hooks. Doing so is something typically only found in malware, so you'll not likely get much help.
There's really no reason to hide those processes from the system. If something in your script gets hung up, you'll want to see those processes to give you an idea of what's happening.
If there's a specific problem the presence of those processes is causing, you need to detail what that is.
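If the underlying issue is simply the churn of short-lived helper processes rather than secrecy, one option is to collapse the pipeline into a single awk call, so that only netstat and awk ever show up in ps. A sketch that produces the same output as the original pipeline:

#!/bin/bash
while true; do
    # One awk invocation replaces grep, tr, and cut: keep LISTEN lines, print field 4
    netstat -antp | awk '/LISTEN/ { print $4 }' > /tmp/log
    sleep 100
done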
I have a log that I need to audit every hour, going back an hour. I currently run a command and just keep track of what I have accounted for in its output, but having an hourly audit would make things a lot easier. I've left off the additional grep filters for this output since I do not believe they are necessary.
How can I make this command output lines from the last hour up to now?
cat error.log | grep `date -u "+%Y-%m-%d"` | grep -i 'dangling'
You can use the -d option to adjust the current time.
cat error.log | grep `date -d '-1 hour' -u "+%Y-%m-%dT%H"` | grep -i 'dangling'
See https://unix.stackexchange.com/questions/49053/linux-add-x-days-to-date-and-get-new-virtual-date
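Note that matching only the previous hour's prefix shows the previous clock hour, not the last hour up to now; anything written since the top of the current hour is missed. A sketch that covers both, assuming the log timestamps are UTC in the same %Y-%m-%dT%H form used above:

# Match the previous hour's timestamp prefix and the current one
prev=$(date -u -d '-1 hour' '+%Y-%m-%dT%H')
curr=$(date -u '+%Y-%m-%dT%H')
grep -e "$prev" -e "$curr" error.log | grep -i 'dangling'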
I am doing a bash script and I am trying to show the processes of users who are not logged in, which are typically daemon processes. For this, the exercise recommends:
To process the command line, we will use the cut command, which allows selecting the different columns of the list through a filter.
I used:
ps -A | grep -v w
ps -A | grep -v who
ps -A | grep -v $USER
but with all of these options, the processes of all users are printed in the output file, and I only want the processes of users who are not logged in.
I appreciate your help
Thank you.
grep -v w will remove lines matching the regular expression w (which is simply any line containing the letter w). To run the command w you have to say so; but as hinted in the instructions, you will also need to use cut to post-process its output.
So as not to give the answer away completely, here's some rough pseudocode.
w | cut something >tempfile
ps -A | grep -Fvf tempfile
It would be nice if you could pass the post-processed results of w in a pipe, but standard input is already tied to ps -A. If you have a shell which supports process substitution, you can use that.
ps -A | grep -Fvf <(w | cut something)
Unfortunately, the output from w is not properly machine-readable -- you will probably want to cut away the header line(s), too. (On my machine, there are two header lines. Yours might differ.) You'll probably learn a bit of Awk later on in the course, but until then, maybe something like
ps -A | grep -Fvf <(w | tail -n +3 | cut something)
This still doesn't completely handle all possible situations. What if someone's account name is grep?
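To make the match exact on the user field (which sidesteps the account-named-grep problem, along with the header issue), awk can compare whole fields instead of substrings. A sketch only; it needs a shell with process substitution, and w's options and columns may differ on your system:

# Build the set of logged-in users from the first input, then print only
# ps lines whose user field is not in that set. NR==FNR is true while awk
# is still reading the first input (the logged-in user list).
ps -eo user=,pid=,comm= | awk 'NR==FNR { logged[$1]; next } !($1 in logged)' <(w -h | cut -d ' ' -f1 | sort -u) -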
Is there a way of counting the number of processes being run by a user in the unix/linux/os x terminal?
For instance, top -u taha lists my processes. I want to be able to count these.
This will show all of the logged-in users with their process counts (I believe this would be close enough for you. :)
ps -u "$(echo $(w -h | cut -d ' ' -f1 | sort -u))" o user= | sort | uniq -c | sort -rn
You can use ps to list the user's processes and count them with wc, as:
ps -u user | sed 1d | wc -l
You can also dump top output and grep it, something like:
top -b -n1 -u user | grep user | wc -l
I'm somewhat new to *nix, so perhaps I did not fully understand the context of your question, but here is a possible solution:
jobs | wc -l
The output of the above command is a count of all the processes reported by the jobs command. You can manipulate the parameters of the jobs command to change which processes get reported.
EDIT: Just FYI, this would only work if you are interested in commands originating from a particular shell. If you want more control in looking at system-wide processes, you probably want to use ps as others have suggested. However, if you use wc to do your counting, make sure you take into account any extraneous whitespace or headers that jobs, ps, or top may have generated, as that will affect the output of wc.
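One way to avoid the header-and-whitespace concern entirely is to ask ps for a single headerless column, so wc -l counts exactly one line per process. A small sketch, with taha standing in for the user from the question:

# The trailing "=" blanks the pid column header, so nothing extra is counted
ps -u taha -o pid= | wc -l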
I'm looking to monitor some aspects of a farm of servers that are necessary for the application that runs on them.
Basically, I'm looking to have a file on each machine which, when accessed via HTTP (on a VLAN) with curl, will spit out the information I'm looking for, which I can log into the database with a daemon that sits in a loop and checks the health of all the servers one by one.
The info I'm looking to get is:
<load>server load</load>
<free>md0 free space in MB</free>
<total>md0 total space in MB</total>
<processes># of nginx processes</processes>
<time>timestamp</time>
What's the best way of doing that?
EDIT: We are using Cacti and OpenNMS; however, what I'm looking for here is data that is necessary for the application that runs on these servers. I don't want to complicate it by having it rely on any third-party software to fetch this basic data, which can be gotten with a few Linux commands.
Make a cron entry that:
executes a shell script every few minutes (or whatever frequency you want)
saves the output in a directory that's published by the web server
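For example, a crontab entry like the following would regenerate the published file every five minutes. The script path and web root here are placeholders; adjust them to your setup:

# every 5 minutes, rewrite the health file in the web server's document root
*/5 * * * * /usr/local/bin/health-snapshot.sh > /var/www/html/health.xml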
Assuming your text is literally what you want, this will get you 90% of the way there:
#!/usr/bin/env bash
# 1-minute load average; the colon counting assumes uptime's usual output format
LOAD=$(uptime | cut -d: -f5 | cut -d, -f1)
# Free and total space in MB on the filesystem mounted at /
FREE=$(df -m / | tail -1 | awk '{ print $4 }')
TOTAL=$(df -m / | tail -1 | awk '{ print $2 }')
# The [n]ginx trick keeps the grep process itself out of the count;
# quote the pattern so the shell cannot glob-expand the brackets
PROCESSES=$(ps aux | grep '[n]ginx' | wc -l)
TIME=$(date)
cat <<-EOF
<load>$LOAD</load>
<free>$FREE</free>
<total>$TOTAL</total>
<processes>$PROCESSES</processes>
<time>$TIME</time>
EOF
Sample output:
<load> 0.05</load>
<free>9988</free>
<total>13845</total>
<processes>6</processes>
<time>Wed Apr 18 22:14:35 CDT 2012</time>
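On the collector side, the daemon from the question could then poll each machine over the VLAN with curl. A minimal sketch, with hypothetical host names, leaving the parsing and the database insert to your daemon:

# hypothetical hosts; parse each response and write it to the database
for host in web01 web02 web03; do
    curl -s "http://$host/health.xml"
done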