If I have a directory containing hundreds of files, using ls, ls -l, or dir gives me a list that's too long for the terminal screen, so I'm unable to see most of the files in the directory.
I recall there being some argument for ls that allows one to scroll through the list in short increments, but can't seem to find it.
One option is to pipe the output to less or more:
ls | less
or
ls | more
Try doing this in a shell :
ls -1 | less
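Once the listing is open in less, a few standard keys handle the scrolling: Space for the next page, b for the previous page, /pattern to search forward, n for the next match, and q to quit.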
One more way is to redirect the output of ls into a temporary file and then view that file with any editor of your choice; that way you can do searches, etc. as well:
ls > res.tmp
vim res.tmp
emacs res.tmp
gedit res.tmp
grep "pattern" res.tmp
Related
I just want to save only the command into a file, without its long output. For example, I type ls and the terminal outputs a.txt b.txt. If I type ls > command.txt, the content of command.txt will be
a.txt
b.txt
command.txt
But what I want is:
ls
Can we achieve this?
Most interactive shells store a history of the commands you run in a dotfile in your home directory.
Assuming you're using bash, I think you should be looking at the ~/.bash_history file.
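In an interactive bash session you can also pull the last command straight out of the history with the built-in fc, without reading ~/.bash_history yourself; a minimal sketch:

fc -ln -1 >> command.txt   # -l list, -n suppress numbers, -1 last entry only

Note that fc's output may begin with a leading tab, so trim it if that matters.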
I am trying to get the file name of the most recent .CSV or .csv file among other comma-separated-value files, treating the extension as case-insensitive.
I am achieving this with the following command, provided by someone else without any explanation:
ls -t ~(i:*.CSV) | head -1
or
ls -t -- ~(i:*.CSV) | head -1
I have two questions:
What is the use of ~ and -- in this case? Does -- help here?
How can I get a blank response when there is no .csv or .CSV file in the folder? At the moment I get:
/bin/ls: cannot access ~(i:*.CSV): No such file or directory
I know I can test the exit code of the last command, but I was wondering maybe there is a --silent option or something.
Many thanks for your time.
PS: I did quite thorough research online and was unable to find an answer.
The ~ is just a literal character; the intent would appear to be to match filenames starting with ~ and ending with .csv, with i: being a flag to make the match case-insensitive. However, I don't know of any shell that supports that particular syntax. The closest thing I am aware of would be zsh's globbing flags:
setopt extended_glob # Allow globbing flags
ls ~(#i)*.csv
Here, (#i) indicates that anything after it should be matched without regard to case.
Update: as @baptistemm points out, ~(i:...) is syntax defined by ksh.
The -- is a conventional argument, supported by many commands, to mean that any arguments that follow are not options, but should be treated literally. For example, ls -l would mean ls should use the -l option to modify its output, while ls -- -l means ls should try to list a file named -l.
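You can see the difference with a file literally named -l; a small sketch:

touch ./-l   # the ./ prefix sidesteps option parsing while creating it
ls -l        # -l is taken as an option: long-format listing
ls -- -l     # -l is taken as a filename: lists the file named "-l"
rm ./-l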
~(i:*.CSV) tells the shell (apparently this is supported only in ksh93) that the text after the colon must be matched case-insensitively, so in this example it could match any of these possibilities, among others:
*.csv or
*.Csv or
*.cSv or
*.csV or
*.CSv or
*.CSV
Note this could have been written ls -t *.[Cc][Ss][Vv] in bash (one bracket expression per character; *.[CcSsVv] would only match a single-character extension).
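In bash you can also get the case-insensitive match, and a blank response when nothing matches, with the nocaseglob and nullglob shell options; a sketch:

shopt -s nocaseglob nullglob   # case-insensitive globs; a failed glob expands to nothing
files=(*.csv)
((${#files[@]})) && ls -t -- "${files[@]}" | head -1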
To silence the errors, I suggest you search this site for "standard error /dev/null"; that will help.
I tried running commands like what you have in both bash and zsh and neither worked, so I can't help you out with that. But if you want to discard the error, you can add 2>/dev/null to the end of the ls command, so your command would look like the following:
ls -t ~(i:*.CSV) 2>/dev/null | head -1
This will redirect anything written to STDERR to /dev/null (i.e. throw it out), which, in your case, would be /bin/ls: cannot access ~(i:*.CSV): No such file or directory.
If I have two strings, for example "class" and "btn", what is the Linux command that would allow me to search for these two strings in an entire directory?
To be more specific, let's say I have a directory that contains a few folders with a bunch of .php files. My goal is to search throughout those .php files and print only the files that contain both "class" and "btn" on one line. Hopefully this clarifies things.
Thanks,
I normally use the following to search for strings inside my source code. It searches for a string and shows the exact line number where that text appears, which is very helpful when searching source files. You can always pipe the output to another grep to filter the results.
grep -rn "text_to_search" directory_name/
example:
$ grep -rn "angular" menuapp
$ grep -rn "angular" menuapp | grep some_other_string
output would be:
menuapp/public/javascripts/angular.min.js:251://# sourceMappingURL=angular.min.js.map
menuapp/public/javascripts/app.js:1:var app = angular.module("menuApp", []);
grep -rE 'class|btn' /path/to/directory
grep is used to search for a string in files. With the -r flag it searches all files in a directory recursively, and with -E the pattern 'class|btn' is treated as an extended regular expression.
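Note that 'class|btn' matches lines containing either string. Since the goal is files where both appear on one line, cover both orders in the alternation; a sketch:

grep -rlE 'class.*btn|btn.*class' /path/to/directory   # -l prints only the names of matching files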
Or, alternatively, use the find command to "identify" the files to be searched instead of using grep in recursive mode:
find /path/to/your/directory -type f -exec grep "text_to_search" {} +
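If you only want the names of the matching files here as well, grep's -l flag combines with find the same way; a sketch:

find /path/to/your/directory -type f -exec grep -l "text_to_search" {} +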
Hello
I'm looking for a script or program that searches files (e.g. .php, .html, etc.) for keywords or patterns and shows which file they are in.
I use the command cat /home/* | grep "keyword"
but I have too many folders and files, and this command takes far too long. :/
I need this script to find fake websites (paypal, ebay, etc)
find /home -exec grep -s "keyword" {} \; -print
You don't really say what OS (and shell) you are using. You might want to retag your question to help us out.
Because you mention cat | ..., I am assuming you are using a Unix/Linux variant, so here are some pointers for looking at files. (bmargulies' solution is good too.)
I'm looking some script or program that use keywords or pattern search in files
grep is the basic program for searching files for text strings. Its usage is
grep [-options] 'search target' file1 file2 .... filen
(Note that 'search target' contains a space; if you don't surround a search target that contains spaces with double or single quotes, you will have a minor error to debug.)
(Also note that 'search target' can use a wide range of wild-card characters, like ., ?, +, *, and many more, but that is beyond the scope of your question.) ... anyway ...
As I guess you have discovered, you can only cram so many files at a time onto the command line, even when using wild-card filename expansion. Unix/Linux almost always has a utility that can help with that:
startDir=/home
find ${startDir} -print | xargs grep -l 'Search Target'
This, as one person will be happy to remind you, will require further enhancements if your filenames contain whitespace characters or newlines.
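With GNU find and xargs, the null-separated variants handle such names safely; a sketch assuming GNU tools:

find "${startDir}" -type f -print0 | xargs -0 grep -l 'Search Target'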
The options available for grep can vary wildly based on which OS you are using. If you're lucky, you can type the following to get the man page for your local grep:
man grep
If you don't have your page buffer set up for a large size, you might need to do
man grep | page
so you can see the top of the 'document'. Press any key to advance to the next page and when you are at the end of the document, the last key press returns you to the command prompt.
Some options that most greps have that might be useful to you are
-i (ignore case)
-l (list filenames only, where the text is found)
There is also fgrep, which is usually interpreted to mean 'fixed-string' grep, because it matches literal strings rather than regular expressions. With -f you can give it a file of search targets to scan for, and it is used like
fgrep [-other_options] -f srchTargetsFile file1 file2 ... filen
I need this script to find fake websites (paypal, ebay, etc)
Final solution
You can make a srchFile like
paypal.fake.com
ebay.fake.com
etc.fake.com
and then, combining it with the above, run the following:
startDir=/home
find ${startDir} -print | xargs fgrep -il -f srchFile
Some greps require that -f and the filename be written together, as -fsrchFile.
Now you are finding all files starting from /home and searching them with fgrep for paypal, ebay, etc. The -l says it will ONLY print the filename where a match is found. You can remove the -l and then you will see the matching output, prepended with the filename.
IHTH.
I'm using the ls -a command to get the file names in a directory, but the output is on a single line.
Like this:
. .. .bash_history .ssh updater_error_log.txt
I need a built-in alternative to get filenames, each on a new line, like this:
.
..
.bash_history
.ssh
updater_error_log.txt
Use the -1 option (note this is a "one" digit, not a lowercase letter "L"), like this:
ls -1a
First, though, make sure your ls supports -1. GNU coreutils (installed on standard Linux systems) and Solaris do; but if in doubt, use man ls or ls --help or check the documentation. E.g.:
$ man ls
...
-1 list one file per line. Avoid '\n' with -q or -b
Yes, you can easily make ls output one filename per line:
ls -a | cat
Explanation: The command ls senses if the output is to a terminal or to a file or pipe and adjusts accordingly.
So, if you pipe ls -a to python it should work without any special measures.
ls is designed for human consumption, and you should not parse its output.
In shell scripts, there are a few cases where parsing the output of ls does work and is the simplest way of achieving the desired effect. Since ls might mangle non-ASCII and control characters in file names, these cases are a subset of those that do not require obtaining an exact file name from ls.
In python, there is absolutely no reason to invoke ls. Python has all of ls's functionality built in. Use os.listdir to list the contents of a directory and os.stat (or os.lstat) to obtain file metadata. Other functions in the os module are likely to be relevant to your problem as well.
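For example, you can try os.listdir from the shell with a one-liner (assuming a python interpreter is on your PATH):

python -c 'import os; print("\n".join(sorted(os.listdir("."))))'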
If you're accessing remote files over ssh, a reasonably robust way of listing file names is through sftp:
echo ls -1 | sftp remote-site:dir
This prints one file name per line, and unlike the ls utility, sftp does not mangle nonprintable characters. You will still not be able to reliably list directories where a file name contains a newline, but that's rarely done (remember this as a potential security issue, not a usability issue).
In python (beware that shell metacharacters must be escaped in remote_dir):
command_line = "echo ls -1 | sftp " + remote_site + ":" + remote_dir
remote_files = os.popen(command_line).read().split("\n")
For more complex interactions, look up sftp's batch mode in the documentation.
On some systems (Linux, Mac OS X, perhaps some other unices, but definitely not Windows), a different approach is to mount a remote filesystem through ssh with sshfs, and then work locally.
You can use ls -1.
ls -l will also do the work, since it prints one entry per line (along with the long-format metadata).
You can also use ls -w1
This sets the output width; a width of 1 forces one entry per line.
From manpage of ls:
-w, --width=COLS
set output width to COLS. 0 means no limit
ls | tr "" "\n"
Easy, as long as your filenames don't include newlines:
find . -maxdepth 1
If you're piping this into another command, you should probably prefer to separate your filenames by null bytes, rather than newlines, since null bytes cannot occur in a filename (but newlines may):
find . -maxdepth 1 -print0
Printing that on a terminal will probably display as one line, because null bytes are not normally printed. Some programs may need a specific option to handle null-delimited input, such as sort's -z. Your own script similarly would need to account for this.
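In a shell script, a read loop with a null delimiter is the usual way to consume that output; a sketch, assuming bash:

find . -maxdepth 1 -print0 |
while IFS= read -r -d '' name; do
    printf '%s\n' "$name"   # handle each name safely here
done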
The -1 switch is the obvious way of doing it, but just to mention another option: use echo and a command substitution inside double quotes, which retains the whitespace (here, \n):
echo "$(ls)"
How the ls command behaves is also documented:
If standard output is a terminal, the output is in columns (sorted
vertically) and control characters are output as question marks;
otherwise, the output is listed one per line and control characters
are output as-is.
Now you can see why redirecting or piping makes it output one entry per line.