how to get some particular words from a line using cut command - linux

I have a file with the following text:

1343311371204,210,http://172.16.1.139/CC_WEB/jsp/Login.jsp,200,OK,true,9,9
1343311371044,304,http://172.16.1.139/CC_WEB/jsp/Login.jsp,200,OK,true,8,8
1343311371109,239,http://172.16.1.139/CC_WEB/jsp/Login.jsp,200,OK,true,8,8
1343311371083,263,http://172.16.1.139/CC_WEB/jsp/Login.jsp,200,OK,true,8,8

I want to get only the 1st, 2nd and 4th values (e.g. 1343311371204, 210 and 200) from every line. How can I do it?

Using cut
cut -d, -f1,2,4 your_file should do it fine.
Using read (bash builtin function)
Using read in a while loop, you can use variables to do things with those values :
while IFS=',' read -r timestamp value2 url code remains ; do
    # use those variables
done < your_file

awk -F, '{print $1,$2,$4}' your_file
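As a quick sanity check, both approaches can be tried on one sample line from the file (the variable names here are just for the demo):

```shell
# One sample line from the question's file
line='1343311371044,304,http://172.16.1.139/CC_WEB/jsp/Login.jsp,200,OK,true,8,8'

# cut keeps the delimiter between the selected fields
with_cut=$(printf '%s\n' "$line" | cut -d, -f1,2,4)

# awk prints the same fields; the commas are added explicitly, since awk's
# default output separator is a space
with_awk=$(printf '%s\n' "$line" | awk -F, '{print $1","$2","$4}')
```

Both commands yield the timestamp, the second value and the status code, comma-separated.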

Related

How to get first word of every line and pipe it into dmenu script

I have a text file like this:
first state
second state
third state
Getting the first word from every line isn't difficult, but the problem comes when adding the extra \n required to separate every word (selection) in dmenu, per its syntax:
echo -e "first\nsecond\nthird" | dmenu
I haven't been able to figure out how to add the separating \n. I've tried this:
state=$(awk '{for(i=1;i<=NF;i+=2)print $(i)'\n'}' text.txt)
But it doesn't work. I also tried this:
lol=$(grep -o "^\S*" states.txt | perl -ne 'print "$_"')
But same deal. Not sure what I'm doing wrong.
Your problem is in the AWK script. You need to identify each input line as a record. This way, you can control how each record in the output is separated via the ORS variable (output record separator). By default this separator is the newline, which should be good enough for your purpose.
Now to print the first word of every input record (each line in the input stream in this case), you just need to print the first field:
awk '{print $1}' textfile | dmenu
If you need the output to include the explicit \n string (not the control character), then you can just overwrite the ORS variable to fit your needs:
awk 'BEGIN{ORS="\\n"}{print $1}' textfile | dmenu
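A quick check of the ORS trick on the sample file contents (fed through a pipe here instead of a file):

```shell
# Turn the first word of each line into one string where the records are
# joined by a literal backslash-n ("\\n" in awk is backslash plus n)
menu=$(printf 'first state\nsecond state\nthird state\n' \
       | awk 'BEGIN{ORS="\\n"}{print $1}')
```

The resulting string contains the literal two-character sequence \n between words, as dmenu's echo -e example expects.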
This could be done more easily in a while loop; please try the following. It is simple: while reads the file line by line, splitting each line into two variables, first (the first field) and rest (the remainder of the line), and the first field is then handed to dmenu.
while read -r first rest
do
    printf '%s\n' "$first"
done < "Input_file" | dmenu
Based on the text file example, the following should achieve what you require:
awk '{ printf "%s\\n",$1 }' textfile | dmenu
Print the first space-separated field of each line followed by \n (the backslash needs to be escaped to stop awk interpreting \n as a newline).
In your code
state=$(awk '{for(i=1;i<=NF;i+=2)print $(i)'\n'}' text.txt)
you attempted to use ' inside your awk code; however, the program is what lies between the first ' and the next following ', so the code awk actually receives is {for(i=1;i<=NF;i+=2)print $(i), and this does not work. You should use " for strings inside awk code.
If you merely want to get the nth column, cut will be enough in most cases. Let the states.txt content be
first state
second state
third state
then you can do:
cut -d ' ' -f 1 states.txt | dmenu
Explanation: treat space as delimiter (-d ' ') and get 1st column (-f 1)
(tested in cut (GNU coreutils) 8.30)
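The same first-column extraction can be checked without a file, feeding cut from a pipe:

```shell
# Extract column 1 of each space-delimited line
firsts=$(printf 'first state\nsecond state\nthird state\n' | cut -d ' ' -f 1)
```

The result is one word per line, which is exactly the item-per-line format dmenu reads on stdin.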

How to get the rest of the Pattern using any linux command?

I am trying to update a file and do some transformation using any Linux tool.
For example, here I am trying with awk.
Would be great to know how to get the rest of the pattern?
awk -F '/' '{print $1"/raw"$2}' <<< "string1/string2/string3/string4/string5"
string1/rawstring2
Here I don't know how many "/" there are, and I want to get the output:
string1/rawstring2/string3/string4/string5
Something like
awk -F/ -v OFS=/ '{ $2 = "raw" $2 } 1' <<< "string1/string2/string3/string4/string5"
Just modify the desired field and print out the changed line. (OFS has to be set so awk uses a slash instead of a space to separate fields on output, and the pattern 1 triggers the default action of printing $0; it's an idiom you'll see a lot with awk.)
Also possible with sed:
sed -E 's|([^/]*/)|\1raw|' <<< "string1/string2/string3/string4/string5"
The \1 in the replacement string reproduces the bit matched inside the parentheses and appends raw after it.
Equivalent to
sed 's|\([^/]*/\)|\1raw|' <<< "string1/string2/string3/string4/string5"
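Both one-liners can be verified side by side on the example string:

```shell
s='string1/string2/string3/string4/string5'

# awk: rewrite field 2, keeping / as the output field separator
via_awk=$(printf '%s\n' "$s" | awk -F/ -v OFS=/ '{ $2 = "raw" $2 } 1')

# sed: capture everything up to the first /, re-emit it, then insert raw
via_sed=$(printf '%s\n' "$s" | sed 's|\([^/]*/\)|\1raw|')
```

Both produce string1/rawstring2/string3/string4/string5, leaving the remaining fields untouched no matter how many slashes follow.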

How do I add the first 2 letters of every line in a file to a list using bash?

I have a file ($ScriptName). I want the first 2 characters of every line to be in a list (Starters). I am using a bash script.
How would I do this?
I have declared my array like this:
array=() #Empty array
Using guidence from this: https://opensource.com/article/18/5/you-dont-know-bash-intro-bash-arrays
I am using manjaro 19 and the latest kernel.
To get the first two characters from each line, you can use
cut -c1,2 "$ScriptName"
-c1,2 means "output characters in positions 1 and 2"
I'm not sure what you mean by a "list". If you just want to create a file with the results, use redirection:
cut -c1,2 "$ScriptName" > Starters
If you want to populate an array, just use
while IFS= read -r starter ; do Starters+=("$starter") ; done < <(cut -c1,2 "$ScriptName")
Moreover, if you're interested in letters rather than characters, you can use sed to remove non-letters from each line and then use the solution shown above.
sed 's/[^[:alpha:]]//g' "$ScriptName" | cut -c1,2
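As a quick check of that pipeline, with a hypothetical line that starts with digits (so the letters-only filtering actually matters):

```shell
# sed strips digits and spaces, then cut takes the first two remaining characters
pairs=$(printf '12ab rest\ncdef rest\n' | sed 's/[^[:alpha:]]//g' | cut -c1,2)
```

The first line yields ab (the digits are removed before cutting) and the second yields cd.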
Try this Shellcheck-clean (except for a missing initialization of ScriptName) pure Bash code:
Starters=()
while IFS= read -r line || [[ -n $line ]]; do
Starters+=( "${line:0:2}" )
done < "$ScriptName"
See Arrays [Bash Hackers Wiki] for information about using arrays in Bash.
See BashFAQ/001 (How can I read a file (data stream, variable) line-by-line (and/or field-by-field)?)
for information about reading files line-by-line in Bash.
See Removing part of a string (BashFAQ/100, "How do I do string manipulation in bash?"), particularly the bit about "range notation", for an explanation of ${line:0:2}.
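A minimal illustration of that range notation (bash-specific substring expansion):

```shell
# ${var:offset:length} takes length characters starting at offset (0-based)
line='first state'
two=${line:0:2}
```

No external command is involved, which is why the pure-Bash loop above is cheap even on large files.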
The mapfile bash built-in command combined with cut makes it simple. Note that mapfile must not sit at the end of a pipeline (it would run in a subshell and the array would be lost when that subshell exits), so feed it with process substitution, and pass -t to strip the trailing newline from each entry:
mapfile -t Starters < <(cut -c1,2 "$ScriptName")

using grep lookup/cut function instead of source to load config file in bash

I have a script that I'm using now that loads all my config variables by means of the source command. It's simple, quick and effective. But I understand that it's not a very secure option.
I've read that I can use the $include directive to achieve the same results. Is that any different or safer than source or are they essentially the same?
As a final alternative if the above two options are not safe ones, I'm trying to understand a lookup function I found in a shell scripting book. It basically used grep, a delimiter and cut to perform a variable name lookup from the file and retrieve the value. This seems safe and I can use it to modify my scripts.
It almost works as is. I think I just need to change the delimiter from $TAB to "=", but I'm not sure how it works or whether it even will.
My config file format:
Param=value
Sample function (from notes)
lookup() {
grep "^$1$TAB" "$2" | cut -f2
}
Usage:
lookup [options] KEY FILE
-f sets field delimiter
-k sets the number of field which has key
-v specifies which field to return
I'm using Debian version of Raspbian Jessie Lite in case that matters on syntax.
Instead of grep and cut you should consider using awk that can do both search and cut operations based on a given delimiter easily:
lookup() {
key="$1"
filename="$2"
awk -F= -v key="$key" '$1 == key{print $2}' "$filename"
# use this awk if = can be present in value part as well
# awk -v key="^$key=" '$0 ~ key && sub(key, "")' "$filename"
}
This can be called as:
lookup search configfile.txt
-F= sets delimiter as = for awk command.
Also note that $1 and $2 inside the single quotes are awk's fields #1 and #2; they shouldn't be confused with the positional shell parameters $1, $2, etc.
You should look into getopts to make it accept -f, -k etc type arguments.
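A hypothetical sketch of what that getopts version could look like, with -f (delimiter), -k (key field) and -v (value field) defaulting to '=', field 1 and field 2; the option letters follow the book's usage notes above, but the implementation is an assumption:

```shell
lookup() {
    delim='=' keyfield=1 valfield=2
    OPTIND=1                          # reset so the function is re-entrant
    while getopts 'f:k:v:' opt; do
        case $opt in
            f) delim=$OPTARG ;;       # field delimiter
            k) keyfield=$OPTARG ;;    # which field holds the key
            v) valfield=$OPTARG ;;    # which field to return
        esac
    done
    shift $((OPTIND - 1))
    # exact match on the key field, print the value field
    awk -F "$delim" -v key="$1" -v kf="$keyfield" -v vf="$valfield" \
        '$kf == key { print $vf }' "$2"
}

# demo against a throwaway config file
cfg=$(mktemp)
printf 'Param=value\nDebug=true\n' > "$cfg"
val=$(lookup Param "$cfg")            # defaults: -f '=' -k 1 -v 2
dbg=$(lookup -f '=' -k 1 -v 2 Debug "$cfg")
rm -f "$cfg"
```

The exact-match comparison in awk avoids the partial-match risk of grep "^$1" (e.g. Param matching ParamExtra).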

how to cut CSV file

I have the following CSV file
more file.csv
Number,machine_type,OS,Version,Mem,CPU,HW,Volatge
1,HG652,linux,23.12,256,III,LOP90,220
2,HG652,linux,23.12,256,III,LOP90,220
3,HG652,SCO,MK906G,526,1G,LW1005,220
4,HG652,solaris,1172,1024,2Core,netra,220
5,HG652,solaris,1172,1024,2Core,netra,220
Please advise how to cut the CSV file (with the cut, sed or awk command)
in order to get a partial CSV file.
The command needs to take a value representing the number of fields to keep from the CSV;
according to example 1, that value should be 6.
Example 1
In this example we keep the first 6 fields, from left to right (the CSV will then look like this):
Number,machine_type,OS,Version,Mem,CPU
1,HG652,linux,23.12,256,III
2,HG652,linux,23.12,256,III
3,HG652,SCO,MK906G,526,1G
4,HG652,solaris,1172,1024,2Core
5,HG652,solaris,1172,1024,2Core
cut is your friend:
$ cut -d',' -f-6 file
Number,machine_type,OS,Version,Mem,CPU
1,HG652,linux,23.12,256,III
2,HG652,linux,23.12,256,III
3,HG652,SCO,MK906G,526,1G
4,HG652,solaris,1172,1024,2Core
5,HG652,solaris,1172,1024,2Core
Explanation
-d',' set comma as field separator
-f-6 print up to field number 6 based on that delimiter; it is equivalent to -f1-6, as 1 is the default start.
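The open-ended range can be checked on a single row from the sample file:

```shell
row='3,HG652,SCO,MK906G,526,1G,LW1005,220'

# -f-6 keeps fields 1 through 6, dropping HW and Volatge
first6=$(printf '%s\n' "$row" | cut -d',' -f-6)
```

To make the field count a parameter, the 6 can come from a variable, e.g. cut -d',' -f-"$n".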
Also awk can make it, if necessary:
$ awk -v FS="," 'NF{for (i=1;i<=6;i++) printf "%s%s", $i, (i==6?RS:FS)}' file
Number,machine_type,OS,Version,Mem,CPU
1,HG652,linux,23.12,256,III
2,HG652,linux,23.12,256,III
3,HG652,SCO,MK906G,526,1G
4,HG652,solaris,1172,1024,2Core
5,HG652,solaris,1172,1024,2Core
The cut command line is rather simple and well suited to your case:
cut -d, -f1-6 yourfile
So everybody agrees that the cut way is the best way to go in this case. But we can also talk about the awk solution. In fedorqui's answer, a clever trick (NF as a selection pattern) is used to silence empty lines, but it has the disadvantage of removing blank lines from the original file. I propose below another solution (using the -F option instead of passing the FS variable) that preserves any empty line and also respects lines with fewer than 6 fields, printing them without adding extra commas:
awk -F, '{min=(NF>6?6:NF); for (i=1;i<min;i++) printf "%s,", $i; printf "%s\n", $min}' yourfile
This works nicely because the final printf uses $min, the last field we actually want on that line, so lines with fewer than 6 fields are printed as-is instead of gaining a trailing comma. This is true with my gawk 4.0.1, at least...
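To see the short-line behaviour, here is a quick check of a min-capped variant (a sketch: the final printf uses $min rather than a hard-coded $6, so short lines don't gain a trailing comma):

```shell
# A 4-field line: min caps at NF, so all 4 fields come back unchanged
out=$(printf 'a,b,c,d\n' \
      | awk -F, '{min=(NF>6?6:NF); for (i=1;i<min;i++) printf "%s,", $i; printf "%s\n", $min}')
```

On a blank line NF is 0, the loop body never runs and $min is empty, so blank lines are passed through as blank lines.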
