On my Linux server, I need to copy all .html files recursively (from under the current directory) into a single file, "all.html", for testing purposes.
Could anyone advise on a command that might get me close to doing so?
I assume I would just use the 'copy' command, but what parameters do I need to pass?
Thanks in advance!
This will concatenate all the .html files in the current directory into a single file:
cat *.html > all.html
He said "recursively" so a simple "cat *.html" won't do. But try this:
find -name "*.php" -print0 | xargs -0 cat > all.php.new
(beware that you get an error if the output file also ends in *.php as "find" will then match it, too)
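An alternative sketch, if you would rather avoid xargs (assuming a find that supports -exec ... +, which GNU and BSD find both do): find batches the file names itself and hands them to a single cat.
find . -name '*.html' -exec cat {} + > all.html.new
The same caveat about the output file's name applies here.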
Say I have a number of files in the same directory that contain the word 'contain', and I want to find those files with a single command. I have tried it with the grep command, like
grep 'contain' file name
I also tried the following commands:
locate 'contain'
find 'contain'
grep 'contain' file name
Currently I am not able to find anything. Kindly help, as it is very important for me to find the files with a single command.
The answer is within the question.
cd to the directory, then run:
grep -l 'contain' *
Hope it helps.
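If the files might also sit in subdirectories, GNU grep can recurse on its own; a minimal sketch, assuming GNU grep (-r recurses, -l prints only the names of matching files):
grep -rl 'contain' .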
I'm new to Linux; I'm running CentOS 7 in a virtual machine (Parallels) on my Mac. When I ran the ls -al command, I was surprised to find some files that don't appear to have names.
Since these files all seem to have been generated at the same time, what are they, and how do I delete them?
On a *nix system, every file has an attribute called an inode. You can find a file's inode number with the command
ls -i
Once you have the inode number, you can delete the file with
find . -inum 782263 -exec rm -i {} \;
You can use other commands here as well, not only rm.
You can find more details here:
http://www.cyberciti.biz/tips/delete-remove-files-with-inode-number.html
As the d in drwxr-xr-x indicates, those are folders (or at least the filesystem thinks they are). You can use Midnight Commander to delete them. You may already have it installed on your machine; try running mc to see if it's there.
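If those nameless entries really are directories, as the d bit suggests, a plain rm will refuse to remove them; a hedged variant of the inode approach above (the inode number is only an example, and -maxdepth assumes GNU find):
find . -maxdepth 1 -inum 782263 -exec rm -ri {} \;
The -r lets rm descend into the directory and -i asks for confirmation before each removal.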
I tried
ls */ | grep "\.txt$"
to find all the .txt files in the subdirectories, but it doesn't seem to work reliably.
The pattern you want can easily be matched with a single glob:
ls */*.txt
The ls isn't necessary; it just demonstrates that it works. You can also use
echo */*.txt
printf '%s\n' */*.txt
files=( */*.txt )
for f in */*.txt; do ....
The pattern itself (*/*.txt) will expand to the list of the matching files; what you can do with that list is fairly broad.
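Note that */*.txt only matches files exactly one directory down. If the .txt files can be nested deeper, a couple of hedged alternatives (the first assumes bash 4+ with the globstar option enabled):
shopt -s globstar
printf '%s\n' **/*.txt
or, with find:
find . -name '*.txt'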
Can anyone help explain what the following command does on Ubuntu/Debian? Note that there is no file called default, but there is one called default-ssl.conf.
sed -i '/AllowOverride None/c AllowOverride All' /etc/apache2/sites-available/default
[Added] I already searched the help pages, but I am too new to understand them.
[Added 2] My conclusion is that the command is broken.
It won't do anything if there is no file called default; sed will simply report that it can't read the file and change nothing.
However, the -i flag means edit in place, so it changes the file sed was run on.
"In place" means the changes are written back into the file itself, rather than leaving the file intact and printing a modified copy to stdout (sed's default behaviour).
For more information on sed in general, I recommend reading the sed info page: info sed
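For context, the /AllowOverride None/c AllowOverride All part uses sed's c (change) command: every line matching the pattern AllowOverride None is replaced wholesale by the text AllowOverride All. A toy demonstration against a throwaway file (the path /tmp/demo.conf is only an example, and GNU sed is assumed for this one-line form of c and for -i):
printf 'AllowOverride None\n' > /tmp/demo.conf
sed -i '/AllowOverride None/c AllowOverride All' /tmp/demo.conf
cat /tmp/demo.conf
The last command now prints AllowOverride All.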
I was playing with the tail, head, cut and awk commands on a text file, and somehow these commands created empty files named "-d" and "-f2" (it could be due to ). Now I am not able to delete these files from the command line, since every command treats them as options. Of course I can delete them from Finder, but I am wondering how to delete them from the command line.
Use -- to mark the end of the options, so that everything after it is treated as a file name rather than a flag. That is:
rm -- -d -f2
Or, you can use the full path or a relative path containing at least a /:
rm ./-d ./-f2
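If you are not sure exactly which dash-named files were created, a glob anchored with ./ sidesteps option parsing entirely; a small sketch (-i makes rm prompt before each removal):
ls ./-*
rm -i ./-*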