Extract all unique URLs from a log using sed - linux

Can you help me with the correct regexp from the sed syntax point of view? So far, every regexp I write is rejected by the terminal as invalid.

If your log syntax is uniform, use this command
cut -f4 -d\" < logfile | sort -u
If you want to exclude the query string from the uniqueness check, use this
cut -f4 -d\" < logfile | cut -f1 -d\? | sort -u
Explanation
Filter the log with the cut command, taking the 4th field (-f4) using " as the separator (-d\"). The second variant adds another cut, this time using ? as the separator, to drop the query string before sorting.
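A quick sanity check, assuming a log where the URL of interest sits in the 4th double-quoted field (in a combined-format access log that is the referrer):
echo '127.0.0.1 - - [10/Oct/2000:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326 "http://example.com/start.html?x=1" "Mozilla/4.08"' | cut -f4 -d\"
# prints: http://example.com/start.html?x=1
# adding | cut -f1 -d\? strips the query string: http://example.com/start.html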

Related

How to find a substring from some text in a file and store it in a bash variable?

I have a file named config.txt which has following data:
ABC_PATH=xxx/xxx
IMAGE=docker.name.net:3000/apache:1.8.109.1
NAMESPACE=xxx
Now I am running a shell script in which I want to store 1.8.109.1 (this value may differ; the rest will remain the same) in a variable, maybe using sed, awk or any other Linux tool.
How can I achieve that?
The following will work.
ver="$(cat config.txt | grep apache: | cut -d: -f3)"
grep apache: will find the line that has the text 'apache:' in it.
-d specifies what delimiters to use. In this case : is set as the delimiter.
-f selects the specific field (indexed starting at 1) from the list obtained after splitting on :.
Thus, -f3 selects the 3rd field of the delimited list.
The version info is now captured in the variable $ver.
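A minimal walk-through, assuming config.txt contains exactly the three lines shown above:
grep apache: config.txt                  # IMAGE=docker.name.net:3000/apache:1.8.109.1
grep apache: config.txt | cut -d: -f3    # 1.8.109.1
ver="$(grep apache: config.txt | cut -d: -f3)"
echo "$ver"                              # 1.8.109.1
(grep can read the file directly; piping from cat as in the answer gives the same result.)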
I think this should work:
cat config.txt | grep apache: | cut -d: -f3

What do back brackets do in this bash script code?

So I'm doing a problem with a bash script, this one: ./namefreq.sh ANA should return a list of two names (on separate lines), ANA and RENEE, both of which have frequency 0.120.
Basically I have a file, table.csv, used in the code below, that has names with a frequency number next to each, e.g. Anna,0.120
I'm still unsure what the backticks (``) do in this code, and I'm also struggling to understand how this code is able to print out two names with identical frequencies. The way I read the code is:
grep matches the word (-w) typed by the user (./bashscript.sh Anna) and the cut command extracts the 2nd field of that line, split on the delimiter ",", i.e. the frequency from table.csv; the second grep then finds the lines with that frequency, and | cut -f1 -d"," prints the first fields, which are the names sharing that frequency.
^ would this be correct?
thanks :)
#!/bin/bash
a=`grep -w $1 table.csv | cut -f2 -d','`
grep -w $a table.csv | cut -f1 -d',' | sort -d
When a command is in backticks or $(), the output of that command is substituted into the surrounding command in its place. So if the file has the line Anna,0.120
a=`grep -w Anna table.csv | cut -f2 -d','`
will execute the grep and cut commands, which will output 0.120, so it will be equivalent to
a=0.120
Then the command looks for all the lines that match 0.120, extracts the first field with cut, and sorts them.
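A small worked example, assuming a hypothetical table.csv containing:
ANA,0.120
RENEE,0.120
JOHN,0.095
Running ./namefreq.sh ANA then performs, step by step:
grep -w ANA table.csv | cut -f2 -d','             # 0.120  (stored in $a)
grep -w 0.120 table.csv | cut -f1 -d',' | sort -d
# prints:
# ANA
# RENEE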

Sed, Awk for combining the output of two cut statements

I'm trying to combine the outputs below into one command. The issue is that the fields I'm trying to grab are in reverse order. I was told that cut doesn't support a "reverse" option and to use awk for this, but it didn't end up working for my purpose. I'm trying to take the output of ls -l against /dev/block to return the partitions and automatically build a dd if= / of= command for each output line.
I tried piping the output to awk:
cut -d' ' -f23,25 ... | awk '{print $2,$1}'
however, when I then used sed to add the prefix and suffix, the fields weren't in the appropriate order.
I built the two statements below, which individually return the expected output; I'm just looking for the "right" way to combine both of these statements in the most efficient manner using sed / awk.
ls -l /dev/block/platform/msm_sdcc.1/by-name/ | cut -d' ' -f 25 | sed "s/^/dd if=/"
ls -l /dev/block/platform/msm_sdcc.1/by-name/ | cut -d' ' -f 23 | sed "s/.*/of=\/external_sd\/&.dsk/"
Any assistance will be appreciated.
Thank you.
If you're already using awk, I don't think you'll need cut or sed. You can probably do something like the following, though I'll have to trust you on the field numbers:
ls -l /dev/block/platform/msm_sdcc.1/by-name | awk '{print "dd if=" $25 " of=/external_sd/" $23 ".dsk"}'
awk splits on all whitespace, not just the space character, so the field numbers may shift a bit compared to cut -d' ', though that can also make it more reliable.
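If the field numbers turn out not to be stable, here is a rough sketch (untested, and assuming the by-name entries are symlinks, so each ls -l line ends with "name -> target") that avoids hard-coding them:
ls -l /dev/block/platform/msm_sdcc.1/by-name | awk '$(NF-1) == "->" {print "dd if=" $NF " of=/external_sd/" $(NF-2) ".dsk"}'
# $NF is the symlink target (the partition device), $(NF-2) is the by-name entry; the "->" test skips the "total" line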

How to Substring a string in bash?

I want to cut the following string
abcd|xyz
The number of characters before and after the pipe "|" symbol can vary.
I know the cut command, but it requires specific 'from' and 'to' positions, and I don't think that will work here.
Can anyone help?
The command cut has the following parameters:
-d, --delimiter=DELIM use DELIM instead of TAB for field delimiter
-f, --fields=LIST select only these fields; also print any line
that contains no delimiter character, unless
the -s option is specified
Following on from this, the below commands should do what you wish:
echo 'abcd|xyz' | cut -d'|' -f1 # Prints abcd
echo 'abcd|xyz' | cut -d'|' -f2 # Prints xyz

How to grep only the content that contains x and y?

I have 2 million lines of content and all lines look like this:
--username:orderID:email:country
I already added a -- prefix to all usernames.
What I need now is to get ONLY the usernames from the file. I think it's possible with grep, matching content starting with "--" and ending with ":", but I have absolutely no idea how.
So the output should be:
username
Thank you all for the help.
THIS WORKED:
cut -d: -f1
Even without adding the prefix, you should be able to get the usernames with cut:
cut -d: -f1
-d says what the delimiter is, -f says which field(s) to return.
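For instance, assuming a line in the sample format:
echo 'username:orderID:email:country' | cut -d: -f1     # username
echo '--username:orderID:email:country' | cut -d: -f1   # --username (the prefix stays in field 1)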
Try this:
cat YOUR_FILE | sed "s/:/\n/g" | grep "\-\-"
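For reference, on one hypothetical sample line that pipeline behaves like this (it relies on GNU sed expanding \n in the replacement to a newline):
echo '--username:orderID:email:country' | sed "s/:/\n/g" | grep "\-\-"
# prints: --username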
