Piping stdout to a specific location, Linux [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
I hope you'll get my point here :)
Without piping, the command is:
aircrack-ng handshakes.cap -w wordlist.txt
I want to redirect crunch's stdout to aircrack-ng, but these commands are not working:
crunch 8 8 abc123 | aircrack-ng handshakes.cap -w -
crunch 8 8 abc123 | aircrack-ng handshakes.cap -w-
crunch 8 8 abc123 | aircrack-ng handshakes.cap -w

Try putting the -w option before the capture file and quoting the dash:
crunch 8 8 abc123 | aircrack-ng -w "-" handshakes.cap

The other thing to try would be process substitution:
aircrack-ng handshakes/PTCL-Broadband.cap -w <(crunch 8 8 abc123)
Whether this works will depend on exactly what aircrack wants to do with the input file.
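Process substitution is a bash/zsh feature: `<(cmd)` expands to a path such as `/dev/fd/63` that yields cmd's output when read. It works for tools that stream the file sequentially, but not for tools that need to seek or re-read it, which is the uncertainty with aircrack-ng. A generic illustration (using `wc`, which just streams its input, as a stand-in):

```shell
#!/bin/bash
# <(printf ...) expands to a readable /dev/fd path; redirecting it into
# wc -l counts the three generated candidate lines.
wc -l < <(printf 'aaa\nbbb\nccc\n')   # prints: 3
```

If aircrack-ng tries to rewind the wordlist, this form fails and the temp-file approach below is the fallback.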
You can also do it in two goes:
crunch 8 8 abc123 > /tmp/somefile
aircrack-ng handshakes/PTCL-Broadband.cap -w /tmp/somefile


Why does "ps -elf" return a username like "systemd+"? [closed]

Closed 7 months ago.
When I start a container with `docker run -itd mysql` and then check the process information with `ps -elf`, I see:
root@xx:/proc/257584/ns# ps -elf | grep mysqld
4 S systemd+ 257584 257561 1 80 0 - 712611 poll_s Jul17 ? 00:40:20 mysqld
root@xx:/proc/257584/ns# ps -el | grep mysqld
4 S 999 257584 257561 1 80 0 - 712611 poll_s ? 00:40:21 mysqld
But when I look through /etc/passwd with `cat /etc/passwd`, I can't find any username equal to "systemd+".
docker Version: 20.10.12
os ubuntu20.04
ps (sadly) truncates the username to fit its default 8-column USER field: the first 7 characters are kept and a + is appended. The full name is something you can find in /etc/passwd; since the second listing shows UID 999, on an Ubuntu host it is likely systemd-coredump (or another systemd-* account) that happens to share UID 999 with the container's mysql user.
From manual:
If the length of the username is greater than the length of the display column, the username will be truncated. See the -o and -O formatting options to customize the length.
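The truncation rule is easy to reproduce directly; the account name below (systemd-network) is a made-up stand-in, not taken from the question:

```shell
#!/bin/bash
# ps fits a long username into its 8-column USER field by keeping the
# first 7 characters and appending '+'. 'systemd-network' is a
# hypothetical stand-in account name.
name=systemd-network
if [ ${#name} -gt 8 ]; then
    printf '%s+\n' "${name:0:7}"   # prints: systemd+
else
    printf '%s\n' "$name"
fi
```

To see the untruncated name in practice, widen the column, e.g. `ps -e -o user:20,pid,cmd`.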

How to open all shortcut files from a directory with chromium-browser in one command [closed]

Closed 6 years ago.
On Ubuntu 16.04, I have a directory with these files:
-rw-rw-r-- 1 user0 user0  86 Jul  7 21:32 vim html picker.url
-rw-rw-r-- 1 user0 user0 104 Jul  7 21:32 cocoonjs build android apk.url
-rw-rw-r-- 1 user0 user0  61 Jul  7 21:32 Simple Modal Window - Codepad.url
-rw-rw-r-- 1 user0 user0  96 Jul  7 21:32 cocoon.js android build apk+++.url
-rw-rw-r-- 1 user0 user0  44 Jul  7 21:32 CodePen - Front End Developer Playground & Code Editor in the Browser (1).url
The file "vim html picker.url" has this content:
$ cat vim\ html\ picker.url
[InternetShortcut]
URL=https://github.com/KabbAmine/vCoolor.vim/blob/master/README.md
What I want to do is open all of these files from this directory as tabs in my chromium-browser.
I tried this in my gnome-terminal:
chromium-browser *.*
but Chromium opens the file content (URL=https://github.com/KabbAmine/vCoolor.vim/blob/master/README.md) rather than the URL itself: https://github.com/KabbAmine/vCoolor.vim/blob/master/README.md.
Which command gives the desired behaviour?
grep "^URL=" *.url | cut -d= -f2- | xargs chromium-browser
should do the trick.
Explanation:
grep "^URL=" *.url - print the line beginning with URL= from each file ending in .url
cut -d= -f2- - split each line on '=' and output everything after the first '=' (the -f2- keeps URLs that themselves contain '=' intact)
xargs chromium-browser - use the list of URLs as arguments to chromium.
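The extraction step can be tried safely without launching a browser; the two .url files below are throwaway fixtures with made-up URLs:

```shell
#!/bin/sh
# Build two disposable .url files and extract their URL= lines, the
# same way the grep | cut pipeline above does (-h suppresses the
# filename prefix grep adds when given multiple files).
dir=$(mktemp -d)
printf '[InternetShortcut]\nURL=https://example.com/a\n' > "$dir/one.url"
printf '[InternetShortcut]\nURL=https://example.com/b\n' > "$dir/two.url"
grep -h '^URL=' "$dir"/*.url | cut -d= -f2-
rm -r "$dir"
```

Pipe the output into `xargs chromium-browser` only once it prints the URLs you expect.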

Using nc to transfer large file [closed]

Closed 4 years ago.
I have a compressed file of about 9.5 GB and want to transfer it from one server to another. I tried the following:
server2:
nc -lp 1234 > file.tar.gz
server1:
nc -w 1 1234 < file.tar.gz
It's not working, and I have tried many variants.
One machine is CentOS 6.4 and the other one is Ubuntu 12.04 LTS.
Thanks in advance.
On receiving end:
nc -l 1234 > file.tar.gz
On sending end:
cat file.tar.gz | nc <receiver's ip or hostname> 1234
That should work. Depending on the speed, it may take a while but both processes will finish when the transfer is done.
From the nc(1) man page:
-l      Used to specify that nc should listen for an incoming connection rather than initiate a connection to a remote host. It is an error to use this option in conjunction with the -p, -s, or -z options.
So with this (OpenBSD-style) nc, your use of -p together with -l is wrong; traditional netcat, by contrast, expects -l -p 1234.
Use on server2:
nc -l 1234 > file.tar.gz
And on server1:
nc server2 1234 < file.tar.gz
From the sender (which listens in this variant):
nc -v -w 30 -l 1337 < filename
where -v is verbose, -w 30 allows a 30-second wait before and after the connection, 1337 is the port number, and -l tells nc to listen.
From the receiver:
nc -v -w 2 ip_add_of_sender 1337 > filename
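Whichever variant you use, a 9.5 GB transfer over plain nc has no integrity checking of its own, so it is worth comparing checksums afterwards. A local sketch of that comparison (the 1 MiB random file and the `cp` stand in for file.tar.gz and the nc transfer; in real use you would run sha256sum on each host):

```shell
#!/bin/sh
# Compare the SHA-256 of a file before and after a copy. With nc you
# would run sha256sum on the sending and receiving hosts and compare
# the two hex digests.
src=$(mktemp); dst=$(mktemp)
head -c 1048576 /dev/urandom > "$src"   # 1 MiB stand-in for file.tar.gz
cp "$src" "$dst"                        # stand-in for the nc transfer
a=$(sha256sum "$src" | cut -d' ' -f1)
b=$(sha256sum "$dst" | cut -d' ' -f1)
[ "$a" = "$b" ] && echo OK              # prints: OK
rm -f "$src" "$dst"
```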

Reading entropy_avail file appears to consume entropy [closed]

Closed 10 years ago.
This question has been asked at http://www.gossamer-threads.com/lists/linux/kernel/1210167 but I don't see an answer there.
AFAIK reading /proc/sys/kernel/random/entropy_avail should return the amount of available entropy but should not consume it; at least I don't see any reason why it would.
However, I have been noticing the same thing as the OP for at least a year, and I just executed in quick succession:
% cat /proc/sys/kernel/random/entropy_avail
3918
% cat /proc/sys/kernel/random/entropy_avail
3447
% cat /proc/sys/kernel/random/entropy_avail
2878
% cat /proc/sys/kernel/random/entropy_avail
2377
% cat /proc/sys/kernel/random/entropy_avail
1789
% cat /proc/sys/kernel/random/entropy_avail
1184
% cat /proc/sys/kernel/random/entropy_avail
577
% cat /proc/sys/kernel/random/entropy_avail
161
% cat /proc/sys/kernel/random/entropy_avail
133
% cat /proc/sys/kernel/random/entropy_avail
171
A while later I did the same with the same result, so I'm fairly sure the depletion is caused by running the cat command itself.
Can anyone explain why this happens?
Found an answer here: http://blog.flameeyes.eu/2011/03/entropy-broken
Starting a process consumes entropy: each cat invocation is a fork/exec, and the kernel draws from the pool for, among other things, the AT_RANDOM auxiliary-vector bytes it hands to every new process.
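Incidentally, the probe can be made without spawning a process at all: `read` is a shell builtin, so no fork/exec happens and the measurement does not charge the pool the way launching `cat` does. A small sketch (note that on recent kernels, 5.18 and later, entropy_avail is pinned at 256 anyway):

```shell
#!/bin/sh
# 'read' is a shell builtin: no new process is forked, so this probe
# does not itself consume pool entropy the way exec'ing cat does.
read -r avail < /proc/sys/kernel/random/entropy_avail
echo "entropy_avail: $avail"
```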

Removing old users’ home directories from Linux server [closed]

Closed 11 years ago.
We have an NFS server with thousands of users' home directories. I have done a lot of searching and man-page reading and I can't figure this out.
I want to remove the home directories of users who are no longer with us: basically anyone who hasn't logged in and changed anything in their home folder in over a year.
The snag I keep hitting is that every tool I try (ls, find, etc.) reports the last time the directory itself was modified, not the contents inside it.
Take the user Joe for example.
/data/Users/joe/Windows# ls -lt
drwxrwx---+ 2 1079 nhsstaff 4096 2008-07-31 15:13 Cookies
Judging from this output, you would think this folder had not been modified since July 31st, 2008.
But when you look inside the directory:
root@smb0:/data/Users/joe/Windows/Cookies# ls -ltr
-rwx------+ 1 1079 nhsstaff    92 2009-02-17 03:16 default#sun[1].txt
-rwx------+ 1 1079 nhsstaff    86 2009-02-17 03:16 default#ig[1].txt
-rwx------+ 1 1079 nhsstaff   136 2009-02-17 03:16 default#google[1].txt
-rwx------+ 1 1079 nhsstaff   104 2009-02-17 03:16 default#dell[1].txt
-rwxrwx---+ 1 1079 nhsstaff 32768 2010-04-26 07:53 index.dat
You can see files were changed as recently as April 26th, 2010.
So, to sum up: I need a way to find, for each home directory, the last time any file inside it was modified.
Run this command:
find /data/Users -mtime +365 | cut -d/ -f4 | sort -u
This lists every user who owns at least one file under /data/Users that has not been modified in over a year (the username is the fourth /-separated field of each path). One caveat: a user with both old and recent files will still appear, so treat the output as a candidate list rather than a deletion list.
If you want a script to auto-delete those folders, I can provide it as well.
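For the stricter criterion (directories with no recent file at all), here is a sketch; ROOT is a stand-in for /data/Users, and the loop is untested against a real NFS tree, so dry-run it before wiring in any deletion:

```shell
#!/bin/sh
# Print each top-level directory under ROOT that contains no regular
# file modified within the last 365 days. ROOT defaults to a
# placeholder path; set it to the real home-directory root.
ROOT=${ROOT:-/data/Users}
for dir in "$ROOT"/*/; do
    # -print -quit stops at the first recent file, so active trees
    # are abandoned early and only inactive ones are scanned in full
    if [ -z "$(find "$dir" -type f -mtime -365 -print -quit)" ]; then
        basename "$dir"
    fi
done
```

The -type f filter matters: without it, find would match the directory itself, whose mtime may be recent for unrelated reasons.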
