Display only the users in Linux [closed] - linux

I'm trying to display only the usernames from a downloaded file on Linux named users.csv. The format of the file is users; /home/directory.
I tried the following:
cut -d: -f1 users.csv
And also
awk -F: '{printf $1}' users.csv
Neither of them works; the home directory is printed as well.

Since the field delimiter is ";" and not ":", you need to specify it in the commands, like so:
cut -d\; -f1 users.csv
awk -F\; '{ print $1 }' users.csv
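For illustration, with a hypothetical users.csv in the format described (the names and paths below are made up), either command prints just the first field:
$ cat users.csv
alice; /home/alice
bob; /home/bob
$ cut -d\; -f1 users.csv
alice
bob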

Related

Using Awk, Cut and sed in the same pipe? [closed]

We are doing a Linux workshop for college, and I was looking for a way to demonstrate using awk, sed and cut in the same pipe. I've been thinking of using them in an Apache server context (Apache log files), but are there other contexts in which I can use awk, sed and cut?
Here is one use:
Assume we want to convert some vowels to uppercase and sort the words by length.
given file
$ cat file
apple
pear
banana
$ sed 'y/aeiu/AEIU/' file | awk '{print length "\t" $0}' | sort -n | cut -f2
pEAr
ApplE
bAnAnA
sed can be replaced with tr as well.
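For example, the same pipeline with tr in place of sed (a sketch, using the same file as above) produces identical output:
$ tr 'aeiu' 'AEIU' < file | awk '{print length "\t" $0}' | sort -n | cut -f2
pEAr
ApplE
bAnAnA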

How to extract file size of the files within .tar.gz using linux command? [closed]

I have a file named config.tar.gz. Inside this tar archive there are a couple of files, and I need to get the size of one of them.
I am trying the following:
tar -vtf config.tar.gz | grep sgr.txt
Output:
-rw-r--r-- root/DIAGUSER 109568 2019-11-26 10:16:21 sgr.txt
From this I need to extract only the size, in human-readable format, similar to what "ls -sh sgr.txt" prints.
You could try:
tar -ztvf file.tar.gz 'specific_file' | awk '{print $3}'
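If the size should be human-readable (similar to ls -sh), one possible approach, assuming GNU coreutils' numfmt is available, is to pipe the byte count through numfmt; with the 109568-byte entry shown above this would print 107K:
$ tar -ztvf config.tar.gz | grep sgr.txt | awk '{print $3}' | numfmt --to=iec
107K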

Exclude rows on "shuf" command [closed]

I have a CSV with 100 rows. I want to shuffle all the rows except the first 2, but I can't find how to exclude those first 2 lines.
Now it is like this:
shuf words.txt > shuffled_words.txt
Can somebody help me?
The shell lets you easily combine text and file manipulation commands using pipes.
sed 1,2d words.txt | shuf >shuffled_words.txt
There are many ways to skin this cat; tail -n +3 words.txt (tail -n +K starts printing at line K, so +3 skips the first two) or awk 'FNR>2' words.txt are also common and idiomatic ways to remove the first two lines.
Or something like this:
( head -n 2 words.txt ; tail -n +3 words.txt | shuf ) > shuffled_words.txt
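As a quick check with a hypothetical words.txt, the first two lines stay in place and only the remaining lines end up shuffled (their order will vary from run to run):
$ cat words.txt
id;name
count;value
alpha
beta
gamma
$ ( head -n 2 words.txt ; tail -n +3 words.txt | shuf ) > shuffled_words.txt
$ head -n 2 shuffled_words.txt
id;name
count;value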

Who tries to access root# [closed]

I couldn't find how to get the list of IPs that try to access my root# (there is a command in Linux for this, but I couldn't find it), and then how I can block an IP from such access.
Someone is trying to access my root# on the server. I need to resolve this problem.
I tried this, but it doesn't work:
cat access.log | awk '{print $1}' | sort | uniq -c | sort -n
Just type:
last root
This will give you details of the IP addresses of machines where users logged in as root.
Without knowing your Input_file, I am providing this solution; could you please try the following and let me know if it helps.
awk 'match($0,/[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/){array[substr($0,RSTART,RLENGTH)]++} END{for(i in array){print i,array[i]}}' Input_file
If the above does not help, kindly show us a sample Input_file and the expected output in code tags, so that we can help you further.
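As a rough illustration on a made-up Input_file where each line happens to contain one IPv4 address, the one-liner prints each distinct address with the number of lines it appears on (the output order of an awk array is not guaranteed):
$ cat Input_file
foo 203.0.113.5 bar
baz 203.0.113.5 qux
foo 198.51.100.7 bar
$ awk 'match($0,/[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/){array[substr($0,RSTART,RLENGTH)]++} END{for(i in array){print i,array[i]}}' Input_file
203.0.113.5 2
198.51.100.7 1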

Compare two files and get the positions in a third file in Linux [closed]

I need help comparing two files and writing the positions to a third file. Both files will have the same fields, but the second file will be unsorted; the third file should give the line number at which each entry of the first file is found in the second.
eg. file1.txt
A
B
C
D
file2.txt
B
D
A
C
outputfileposition.txt
3
1
4
2
Any help appreciated, thanks in advance
In awk
awk 'FNR==NR{a[$0]=FNR;next}{print a[$0] > "outputfileposition.txt"}' file{2,1}.txt
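Spelled out with comments, a sketch of what the one-liner above does:
awk '
    FNR == NR {                 # first pass: reading file2.txt
        a[$0] = FNR             # remember the line number of each line
        next
    }
    {                           # second pass: reading file1.txt
        print a[$0] > "outputfileposition.txt"   # where this line sits in file2.txt
    }
' file2.txt file1.txt
With the sample file1.txt and file2.txt from the question, outputfileposition.txt ends up containing 3, 1, 4, 2, matching the expected output.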
This will do the trick:
while read -r line
do
    grep -nx "$line" file2.txt | grep -o '^[0-9]*' >> outputfileposition.txt
done < file1.txt
