Can someone help me make this script work, please?
IFLOAD=`ssh root@$REMOTE_HOST 'awk '\''/$IFACE/ {i++; rx[i]=\$2; tx[i]=\$10}; END{print rx[2]-rx[1] " " tx[2]-tx[1]}'\'' <(cat /proc/net/dev; sleep 1; cat /proc/net/dev)'`
echo "$IFLOAD"
Now it returns
0 0
But columns $2 and $10 do contain real data with the IFACE rx and tx bytes. Without ssh, the script works fine as far as I can tell.
Or maybe you know an easier way to get the current interface load over ssh.
First, let's simplify the awk command a little, before worrying about how to pass it through ssh.
awk_script='
$1 ~ ifc {
    i++;
    rx[i] = $2;
    tx[i] = $10
};
END {
    print rx[2]-rx[1] " " tx[2]-tx[1]
}
'
awk -v ifc="$IFACE" "$awk_script" <(cat /proc/net/dev; sleep 1; cat /proc/net/dev)
Passing IFACE as an awk variable lets us enclose the entire awk script in one single-quoted string. As long as we double-quote the expansion, it will work as intended.
Now it should be relatively simple to send via ssh:
IFLOAD=$(ssh root@$REMOTE_HOST "awk -v ifc='$IFACE' '$awk_script' <(cat /proc/net/dev; sleep 1; cat /proc/net/dev)")
Note the entire command line is one double-quoted string. That means $IFACE and $awk_script will be expanded locally. That string, though, uses single quotes around the expansions, so that when the entire thing is sent to the remote shell, each value is seen as a single-quoted string, and won't be incorrectly processed by the remote shell.
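To illustrate (with my own example value, assuming IFACE=eth0), after local expansion the remote shell receives roughly this:
awk -v ifc='eth0' '
$1 ~ ifc {
    i++;
    rx[i] = $2;
    tx[i] = $10
};
END {
    print rx[2]-rx[1] " " tx[2]-tx[1]
}
' <(cat /proc/net/dev; sleep 1; cat /proc/net/dev)
Note that the <( ... ) process substitution runs on the remote side, so the remote login shell needs to be bash (or another shell that supports it).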
I am trying to translate a markdown file into Confluence markup, as a complete beginner.
I need to turn [Title](https://site.com) into [Title|https://site.com]. If it were just one link, I could put it in a variable and printf it, but I am having trouble figuring out how to do it when I have, say, 10 links.
Previously I used CONTENT=$(echo "${CONTENT//# /h1. }") to replace strings, but since now every string is different, I am stuck on how to solve this. I found a solution written in JavaScript: http://chunpu.github.io/markdown2confluence/browser but fail to understand how to do it in bash.
For this test file
$ cat file
[Title](https://site1.com)
[Title](https://site2.com)
[Title](https://site3.com)
[Title](https://site4.com)
[Title](https://site5.com)
[Title](https://site6.com)
[Title](https://site7.com)
[Title](https://site8.com)
[Title](https://site9.com)
[Title](https://site10.com)
Sed variant:
$ sed 's/\](/|/;s/)/\]/' file
[Title|https://site1.com]
[Title|https://site2.com]
[Title|https://site3.com]
[Title|https://site4.com]
[Title|https://site5.com]
[Title|https://site6.com]
[Title|https://site7.com]
[Title|https://site8.com]
[Title|https://site9.com]
[Title|https://site10.com]
Bash variant:
while read -r line; do
    line=${line//](/|}
    line=${line//)/]}
    echo "$line"
done < file
[Title|https://site1.com]
[Title|https://site2.com]
[Title|https://site3.com]
[Title|https://site4.com]
[Title|https://site5.com]
[Title|https://site6.com]
[Title|https://site7.com]
[Title|https://site8.com]
[Title|https://site9.com]
[Title|https://site10.com]
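If the text is already in a shell variable, as with the CONTENT example in the question, the same two replacements work directly on the variable (a sketch using the question's variable name):
CONTENT=${CONTENT//](/|}
CONTENT=${CONTENT//)/]}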
Awk variant:
$ awk '{ sub(/\]\(/, "|"); sub(/\)/, "]"); print }' file
[Title|https://site1.com]
[Title|https://site2.com]
[Title|https://site3.com]
[Title|https://site4.com]
[Title|https://site5.com]
[Title|https://site6.com]
[Title|https://site7.com]
[Title|https://site8.com]
[Title|https://site9.com]
[Title|https://site10.com]
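If a line can contain more than one link, or stray ]( and ) characters that should be left alone, a more anchored global substitution may be safer (a sketch, my own variation on the sed command above):
sed -E 's/\[([^]]*)\]\(([^)]*)\)/[\1|\2]/g' file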
As I do some SDR work but also want to watch TV on my computer from time to time, I need a script that comments out the blacklist lines in a file under modprobe.d when I want to use SDR, and uncomments those lines when I want to watch TV.
So I worked on a script, based on various things I read online, but unfortunately it doesn't work on my machine.
I hope that you can help me fix it.
Note: the individual sed commands did work on their own (with single quotes; in the bash script I used double quotes).
#!/bin/bash
# your target file
FILE="/etc/modprobe.d/blacklist-dvb.conf"
# comment target
comment() {
    sed -i "s/^/#/g" $FILE # comment all lines
}
# uncomment target
uncomment() {
    sed -i "s/^#//g" $FILE
}
I then launch the script as: ./my_script.sh comment (or uncomment, depending on the case)
The main problem is that your script doesn't really do anything. It defines a variable and two functions, and then just exits.
If you want your script to inspect command line arguments and invoke corresponding functions, you'll have to do that manually.
For example:
case "${1:?missing command argument}" in
comment) comment;;
uncomment) uncomment;;
*) echo "$0: bad command: $1" >&2; exit 1;;
esac
Other notes:
Don't use ALL_UPPERCASE for your shell variables. Those are by convention reserved for the system and the shell itself. Better:
file="/etc/modprobe.d/blacklist-dvb.conf"
As a general rule, variable expansions should be quoted ("$file") unless you really know what you're doing.
Your regexes are anchored to the beginning of the string (^), so they can only match once per line; the /g flag is pointless.
In general it's better to use single quotes than double quotes. There are fewer surprises with '...' because everything is taken literally:
sed -i 's/^/#/' "$file"
sed -i 's/^#//' "$file"
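Putting it together, a minimal sketch of the complete script (same file path as above; it simply combines the fragments shown earlier):
#!/bin/bash
# Toggle comment markers on every line of the modprobe blacklist file.
file="/etc/modprobe.d/blacklist-dvb.conf"

comment()   { sed -i 's/^/#/'  "$file"; }  # prefix every line with '#'
uncomment() { sed -i 's/^#//' "$file"; }   # strip one leading '#'

case "${1:?missing command argument}" in
    comment)   comment;;
    uncomment) uncomment;;
    *) echo "$0: bad command: $1" >&2; exit 1;;
esac
It is then run as ./my_script.sh comment or ./my_script.sh uncomment, as in the question.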
OK, as I couldn't get it working in bash, I used Python for that, and it does the job perfectly :)
For others who need help, here is my script:
#!/usr/bin/python
import subprocess
import sys

def comment():
    subprocess.call("sed -i 's/^/#/g' /etc/modprobe.d/blacklist-dvb.conf", shell=True)

def uncomment():
    subprocess.call("sed -i 's/^#//g' /etc/modprobe.d/blacklist-dvb.conf", shell=True)

# Print the script name and the first argument
print("Script name ", sys.argv[0])
print("Argument 1 ", sys.argv[1])

argument_1 = sys.argv[1]
if argument_1 == "comment":
    print("in comment")
    comment()
elif argument_1 == "uncomment":
    uncomment()
else:
    print("usage = python switcher.py comment")
Can someone tell me what I am doing wrong here? It seems to work in my Mac shell but not on the Linux box. Is it a different version of awk? I want to make sure my code works with the Linux version.
echo -e "${group_values_with_counts}" | awk '$1>='${value2}' { print "{\"count\":\""$1"\",\"type\":\""$2"\"}" }'
21:19:41 awk: $1>= { print "{\"count\":\""$1"\",\"type\":\""$2"\"}" }
21:19:41 awk: ^ syntax error
You're trying to pass the value of a shell variable into awk the wrong way and using a non-portable echo. The right way (assuming value2 doesn't contain any backslashes) is:
printf '%s\n' "$group_values_with_counts" |
awk -v value2="$value2" '$1>=value2{ print "{\"count\":\""$1"\",\"type\":\""$2"\"}" }'
If value2 can contain backslashes and you want them treated literally (e.g. you do not want \t converted to a tab character), then you need to pass it in using ENVIRON or ARGV. See http://cfajohnson.com/shell/cus-faq-2.html#Q24.
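For completeness, a sketch of the ENVIRON approach (my own illustration, not part of the original answer):
printf '%s\n' "$group_values_with_counts" |
value2="$value2" awk '$1 >= (ENVIRON["value2"] + 0) { print "{\"count\":\""$1"\",\"type\":\""$2"\"}" }'
Prefixing value2="$value2" exports the variable only for that awk command, and awk reads it back verbatim through ENVIRON, so backslashes are never touched.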
I have a text file of 75,000 items, 2 lines for each item: line 1 has an identifier, line 2 a text string.
I need to remove 130 items, random identifiers that I have in a list or can put in a file.
I can carry out the removal for one item, but not for more than one.
I tried piping the identifiers and got an empty output file.
I tried repeated commands of sed -e 'expression' inputfile > outfile. This works, but requires a new output file that then becomes the input file for the next iteration, and so on. This might be the last resort.
I tried sed -i in a loop; this crashes with an error saying there is no file by the name of the input file. Which is clearly not the case, as I can see it, ls it and grep the number of identifiers in it. Only sed can't seem to read it.
I even found a Python/Biopython script online for this exact problem; it is very simple and does not give error messages, but it also removes only the first item.
I think it has something to do with file properties/temporary files that don't really exist (?).
I am using Ubuntu 12.04 'Precise'
How can I get around this issue?
Quick and dirty (no check that the temporary file was created, ...)
sed
Assuming there are no special regex metacharacters in your exclusion list, this turns each identifier into a sed command that deletes the matching line and the line after it:
sed 's#.*#/&/{N;d;}#' YourListToExclude > /tmp/exclude.sed
sed -f /tmp/exclude.sed YourDataFile > /tmp/YourDataFile.tmp
mv /tmp/YourDataFile.tmp YourDataFile
rm /tmp/exclude.sed
awk
awk 'FNR==NR{ex=(ex==""?"":ex"|")$0;next}$0!~ex{print;getline;print;next}{getline}' YourListToExclude YourDataFile > /tmp/YourDataFile.tmp
mv /tmp/YourDataFile.tmp YourDataFile
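The awk one-liner above, expanded with comments (my annotation of the same logic):
awk '
    FNR == NR {                                 # first file: the exclusion list
        ex = (ex == "" ? "" : ex "|") $0        # build one big alternation regex
        next
    }
    $0 !~ ex { print; getline; print; next }    # kept item: print identifier and data line
    { getline }                                 # excluded item: also swallow its data line
' YourListToExclude YourDataFile > /tmp/YourDataFile.tmp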
I'm working on Red Hat Linux.
I have a file which looks like:
$vi filename
Jan,1,00:00:01,someone checked your file
Jan,3,09:38:02,applebee
Jan,16,10:20:03, ****************
Jan,18,03:04:03, ***************
I want the output to look like:
2015/01/01,00:00:01,someone checked your file
2015/01/03,09:38:02,applebee
2015/01/16,10:20:03, ****************
2015/01/18,03:04:03, ***************
Please help me do this. Thanks.
If you have GNU date, try:
$ awk -F, '{cmd="date -d \""$1" "$2"\" +%Y/%m/%d"; cmd|getline d; print d","$3","$4; close(cmd)}' file
2015/01/01,00:00:01,someone checked your file
2015/01/03,09:38:02,applebee
2015/01/16,10:20:03, ****************
2015/01/18,03:04:03, ***************
This approach cannot be used with the BSD (OSX) version of date because it does not support any comparable -d option.
How it works
awk implicitly loops over lines of input, breaking each line into fields.
-F,
This tells awk to use a comma as the field separator
cmd="date -d \""$1" "$2"\" +%Y/%m/%d"
This creates a string variable, cmd, that contains a date command. I am assuming that you have GNU date.
cmd|getline d
This runs the command and captures the output in variable d.
print d","$3","$4
This prints the output that you asked for.
close(cmd)
This closes the command.
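If spawning a date process for every line turns out to be slow, an awk-only variant is possible. A sketch, assuming the year is 2015 as in the desired output and that the message field contains no further commas (the month lookup table is my own addition):
awk -F, 'BEGIN {
    # build a month-name -> month-number lookup table
    split("Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec", m, " ")
    for (i = 1; i <= 12; i++) num[m[i]] = i
}
{
    printf "2015/%02d/%02d,%s,%s\n", num[$1], $2, $3, $4
}' file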