So, to tail a log and watch it in real time, one uses
tail -f <filename>
but what if you want to follow something else? "Follow" isn't really accurate here, I suppose — what I mean is refreshing the same output repeatedly. Is this possible?
I'm thinking of things like this: if you want to watch for a file moving from one folder to another, say as part of a function run through cron, you can run
ls -lR
repeatedly, but what I'm thinking of is something like the equivalent of:
ls -lR | tail -f
or
date | tail -f
to waste time watching the clock tick.
Is there anything like this, or is that just a limitation of the console?
Do you mean the watch command?
watch -n 1 'date'
This way you can watch the time change every second. The default interval is 2 seconds.
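If watch isn't available, the same effect can be approximated with a plain shell loop — a common portable fallback. This is a sketch: the 3-iteration cap is only so the demo terminates; use while true (and Ctrl-C to stop) for real watching.

```shell
# Plain-shell equivalent of `watch -n 1 date`: rerun a command on a fixed
# interval. Capped at 3 iterations so the demo terminates; replace the
# counter with `while true` for continuous watching.
i=0
while [ "$i" -lt 3 ]; do
  clear 2>/dev/null || true   # redraw from a blank screen when possible
  date
  sleep 1
  i=$((i + 1))
done
```

The same pattern works for the ls -lR case from the question: just swap date for the command whose output you want to refresh.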
I get output from a series of commands that returns me a huge list of servers,
which starts like this:
...linux.sapsmftexp01 ...linux.sappiftexp01 ...linux.sapbwftexp01
..linux.radiuswifiexp01 ..linux.gitlabexp01 ..linux.redisccexp01
I need to get only the name part, i.e.:
sapsmftexp01
sappiftexp01
sapbwftexp01
When I tried to do it with cut -d, it strips parts I need, and the same happens with awk. Someone told me it can be done from right to left (i.e. taking the last field), but I don't know how.
Could someone help me, please?
With sed:
sed -r 's/(linux)|(\.)//g; s/ /\n/g' file
First remove any occurrence of "linux" or a full stop, then replace the spaces with newlines.
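An alternative that follows the "right to left" idea from the question: split the space-separated tokens onto their own lines, then print only the last dot-separated field of each. This assumes the host names themselves contain no dots; file is a placeholder for your input.

```shell
# Split space-separated tokens onto separate lines, then take the last
# "."-separated field, e.g. "...linux.sapsmftexp01" -> "sapsmftexp01".
# The NF guard skips any empty lines produced by runs of spaces.
tr ' ' '\n' < file | awk -F. 'NF {print $NF}'
```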
I'm a newbie to Linux background processes. For example, I have the Linux command below. The question may be a duplicate, but I couldn't find an answer, so I'm posting it.
cat test.txt | grep hello
how many background processes will run? Any insight on this would be great.
There are two processes: cat and grep.
If you just launch the command line like that, neither process is a background process. (I guess you are asking about background jobs?)
However, this example is not ideal, since you can just grep hello test.txt to save one process.
But if you just want to count the processes, it's fine.
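To actually see background jobs, append & to the pipeline: the shell then runs the whole pipeline in the background as a single job that still contains both processes. A sketch, with sleep standing in for the real commands:

```shell
# A pipeline sent to the background with `&` is one job but two processes.
sleep 2 | sleep 2 &
jobs    # lists the single background job (the whole pipeline)
wait    # block until the background pipeline finishes
```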
I want to sort a file in Linux; sort -n file.txt doesn't work!
The file I want to sort is below. Between each pair of numbers there are 3 spaces. I want to sort by the last number of each row.
20.799999 13.760000 -15.200000 -10.560000 20.000000 -5.00000
3.90001 -9.7705E-02 -0.95687 -0.167488 0.12431613 -0.7140
How do I sort the file?
Use the -g (general numeric) option so numbers with exponents sort correctly. To sort on the 6th field, use -k6. Put together: sort -g -k6 file.txt.
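A minimal two-line demonstration (made-up input) of why -g matters for scientific notation like the -9.7705E-02 in the question's data:

```shell
# `sort -n` does not understand exponents, so it would compare only the
# leading digits. `sort -g` parses the full value: 2E1 = 20, 1E2 = 100,
# so the "b" line sorts first.
printf '%s\n' 'a 1E2' 'b 2E1' | sort -g -k2
# prints:
#   b 2E1
#   a 1E2
```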
On a Linux terminal, is there a way to list all the files in the current directory and any number of subdirectory levels below it, sorted by modification time across the entire file list?
ls -Rlt doesn't quite suffice, since it sorts by modification time only within each subdirectory.
This will do what you want:
find . -type f -printf '%TY%Tm%Td%TH%TM%TS %p\n'|sort
Sample output:
20130312134959.5090000000 ./servlets/servlet/target/reporting_app.servlets.servlet/WEB-INF/lib/java-foundation-1.1.20.jar
20130312134959.7580000000 ./servlets/servlet/target/reporting_app.servlets.servlet/WEB-INF/lib/log4j-rolling-appender-1.2.15.jar
20130312134959.8050000000 ./servlets/servlet/target/reporting_app.servlets.servlet/WEB-INF/lib/commons-logging-1.1.1.jar
20130312134959.9140000000 ./servlets/servlet/target/reporting_app.servlets.servlet/WEB-INF/lib/commons-digester-1.8.1.jar
...
What about using find like this:
find /my/dir/to/scan -type f -exec ls -lt --time-style=+"%F-%T" {} ";" | sort -k 6
This might take some time to return due to the final | sort.
If you change the output format by adding or removing ls options, you will probably need to adjust the sort column, which is currently 6, the date.
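Another variant along the lines of the first answer (GNU find): use the raw epoch timestamp %T@ as the sort key, so the comparison is purely numeric and independent of locale or date format, then strip the key.

```shell
# Print "epoch-seconds path" for every file, sort numerically by time, then
# cut away the timestamp column, leaving just the paths in mtime order.
find . -type f -printf '%T@ %p\n' | sort -n | cut -d' ' -f2-
```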
Kind of a stupid question, but I'm just covering my bases and a quick Google search hasn't turned up anything. I have an rsync that runs hourly, kicked off by cron. Then once a week I need to kick off this same rsync from another script to make sure I have the latest files I need.
What happens if the rsync is already running from cron and I kick the same rsync off again? In a quick test it looks like the second rsync picks up where the first left off, but I want to be sure, so I thought I would ask.
Each rsync will run independently and take its own "snapshot" of the local and remote sides to compare. In many cases where you are just pushing updates from one side to the other you will probably be fine; the only caveat is that you may end up transferring duplicate data.
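If that duplicate transfer is a concern, one common way to serialize the two schedules is a lock around the rsync, e.g. with flock(1). The lock path and rsync arguments below are placeholders, not from the question:

```shell
# Take an exclusive lock before running rsync; with -n the second invocation
# gives up immediately instead of queueing behind the running one.
flock -n /tmp/myrsync.lock \
  rsync -a /src/ user@host:/dest/ \
  || echo "another rsync is already running; skipping this run"
```

Using the same lock file in both the cron job and the weekly script guarantees at most one rsync runs at a time.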