How can I remove some numbers at the end of a line in a text file? [closed] - linux

I have a text file that contains a series of lines that are identical except at the end.
e.g.
lesi-1-1500-1
lesi-1-1500-2
lesi-1-1500-3
How can I remove the last number? It goes up to 250.

To change the file in place:
sed -i 's/[0-9]\+$//' /path/to/file
or, writing to a separate output file:
sed 's/[0-9]\+$//' /path/to/file > /path/to/output
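Note that this pattern leaves the trailing hyphen in place (lesi-1-1500-). If the hyphen should go too, a sketch (again assuming GNU sed for \+ and -i) is:
sed -i 's/-[0-9]\+$//' /path/to/file
which turns lesi-1-1500-1 into lesi-1-1500.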

You can do it with Awk by breaking it into fields.
echo "lesi-1-1500-2" > foo.txt
echo "lesi-1-1500-3" >> foo.txt
cat foo.txt | awk -F '-' '{print $1 "-" $2 "-" $3 }'
The -F switch sets the field delimiter to -. Then we just print the first three fields, joined with - for formatting.
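If the number of fields varies from line to line, a small sketch that drops whatever the last field is (this relies on awk rebuilding the record when NF is assigned, which GNU Awk does but POSIX does not strictly guarantee):
awk 'BEGIN { FS = OFS = "-" } { NF--; print }' foo.txt
This prints lesi-1-1500 for every line in the sample file.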

Related

How to remove a word and the following characters up to the next comma [closed]

I have a string. Part of it contains "Log":true, which I would like to remove using bash and sed.
Original line
[...]\"Date\":\"1661731200000\",\"Log\":true,\"$$type\":\"system\",\"created\":\"2022-08-01T13:37:43+0[...]
Modified line
[...]\"Date\":\"1661731200000\",\"$$type\":\"system\",\"created\":\"2022-08-01T13:37:43+0[...]
I'm struggling to find the right expression. Is it possible to achieve this with sed?
Match ,\"Log\": followed by any sequence of alphabetic characters.
sed 's/,\"Log\":[a-z]*//' filename
#!/bin/sh
# Write an ed script that deletes the second line and saves the file.
cat << EOF > edpop
2d
wq
EOF
tr ',' '\n' < file > file2   # one comma-separated field per line
ed -s file2 < edpop          # delete the second field (now line 2)
tr '\n' ',' < file2 > file   # join the fields back with commas
rm -v ./file2 ./edpop
This replaces the commas with newlines, deletes the second line with ed (which corresponds to the second comma-separated field), and then replaces the newlines with commas again.
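Since the question mentions bash, the same removal can also be done without sed by using parameter expansion on a variable that holds the line (a sketch; line is an illustrative variable name):
line=${line/',\"Log\":true'/}
The quoted pattern is matched literally, so the backslashes and quotes need no further escaping.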

How to create a Unix script to segregate data Line by Line? [closed]

I have some data in a MyFile.CSV file like this:
id,name,country
100,tom cruise,USA
101,Johnny depp,USA
102,John,India
What would a shell script look like that takes the above file as input and splits the data into two different files by country?
I tried a for loop with two ifs inside it, but I could not get it to work. How can I do it using awk? My attempt:
For LINE in MyFile.CSV
Do
If grep "USA" $LINE >0 Then
$LINE >> Out_USA.csv
Else
$LINE >> Out_India.csv
Done
You can try this:
grep -R "USA" /path/to/file >> Out_USA.csv
grep -R "India" /path/to/file >> Out_India.csv
There are many ways to do this.
One way:
$ for i in `awk -F"," '{if(NR>1)print $3}' MyFile.csv | sort -u`;
do
echo $i;
egrep "${i}|country" MyFile.csv > Out_${i}.csv;
done
This assumes that the country name does not clash with values in other columns.
If it does, you can fine-tune the match by adding to the regex.
For example, since country is the last field, you can anchor the grep pattern with $.
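Since the question asks for awk, here is a one-pass sketch that writes each data row to a per-country file named after the third column (it assumes the column values never contain a comma):
awk -F',' 'NR > 1 { print > ("Out_" $3 ".csv") }' MyFile.CSV
This produces Out_USA.csv and Out_India.csv directly from the sample data.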

Grep the most recent value of a particular column from a CSV file [closed]

"cola","colb","colc","cold","cole","colf"
"a","b","c","d","e","f"
"a1","b1","c1","d1","e1","f1"
"a2","b2","c2","d2","e2","f2"
Assuming this is the CSV file, I want to grep the value "e" from the column "cole" and store it in a shell variable, and then use the shell variable as part of a wget command.
How would I do this?
set -f # disable globbing
variable="$(awk -F, 'NR==2 {gsub(/"/, "", $5); print $5}' file)"
set +f
Awk is well suited to this. If you know the column number you can simply do:
$ awk -F, 'NR==2 {gsub(/"/, "", $5); print $5}' file.csv
e
This prints the fifth comma-separated field on the second line, stripping the surrounding quotes. If you want to use the column name instead:
$ awk -F, 'NR==1 {for (i=1; i<=NF; i++) {gsub(/"/, "", $i); c[$i]=i}} NR==2 {v=$(c[col]); gsub(/"/, "", v); print v}' col="cole" file.csv
e
Just set col="<name of column to use>".
You can use command substitution to store the value in a variable:
$ val="$(awk -F, 'NR==2 {gsub(/"/, "", $5); print $5}' file.csv)"
$ wget --what-ever-option "$val"
Or just use it in place:
$ wget --what-ever-option "$(awk -F, 'NR==2 {gsub(/"/, "", $5); print $5}' file.csv)"
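If "most recent" means the last data row rather than the second line, a hedged sketch is to remember the field as each row is read and print it once the file ends (the same quoted, comma-separated layout is assumed):
val="$(awk -F, 'NR > 1 { v = $5 } END { gsub(/"/, "", v); print v }' file.csv)"
wget --what-ever-option "$val"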

Change the path address in a text file by shell scripting [closed]

In my Bash script, I have to replace a placeholder name in a text file with a path (the new address):
MYADDRESS should change to /home/run1/c1, and the result should be saved as a new file.
I did it like this: I defined a variable holding the new address and tried to substitute it for the placeholder in the text file.
I used sed, but it has a problem.
My script was:
#!/bin/bash
# To debug
set -x
x=`pwd`
echo $x
sed "s/MYADDRESS/$x/g" < sample1.txt > new.txt
exit
The output of pwd is likely to contain / characters, making your sed expression look something like s/MYADDRESS//home/user/somewhere/. This makes it impossible for sed to sort out what should be replaced with what. There are two solutions:
Use a different delimiter for sed:
sed "s,MYADDRESS,$x,g" < sample1.txt > new.txt
...although this will have the same problem if the current path contains a comma character or something else that is a special character for sed, so the more robust approach is to use awk instead:
awk -v curdir="$(pwd)" '{ gsub("MYADDRESS", curdir); print }' < sample1.txt > new.txt
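Another option is to keep sed but escape the characters that are special in the replacement text first (a sketch; the escaped helper variable is illustrative):
x=$(pwd)
escaped=$(printf '%s\n' "$x" | sed 's/[\/&]/\\&/g')   # backslash-escape \, / and & for the replacement
sed "s/MYADDRESS/$escaped/g" sample1.txt > new.txt
The inner sed prefixes the delimiter and the ampersand with a backslash so the outer substitution treats them literally.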

Bash script manipulation [closed]

I am working with a Bash script and I want to extract a line from a big text file by matching a particular piece of text.
For example, I have these lines:
first fffffffffffffffffffffffffff
.................................
second ssssssssssssssssssssssssss
.................................
third ttttttttttttttttttttttttttt
and I want to get the string ssssssssssssssssssssssssss.
Can anybody help me?
Is this what you want?
echo "$longstring" | awk '$1 == "second" { print $2 }'
Since you don't seem to give any criterion for which line you want to output, I suggest something like:
echo "ssssssssssssssssssssssssss"
This is pretty robust regarding the content of your input, doesn't depend on a "file", and is a fast solution.
cat filename | grep "^second" | cut -d " " -f 2
Or, if you are ALF:
<filename grep "^second" | cut -d " " -f 2
Or
grep "^second" filename | cut -d " " -f 2
