Why can't I detect this file? - linux

I have this file in a directory, say test.php, whose contents are below:
< ? php $XZKsyG=’as’;
I want to pick up the file test.php with a search based on its content. So from the directory containing it I do:
grep 'php \$[a-zA-Z]*=.as.;' *
However I get no result...what am I doing wrong?
Thanks

It works for me:
$ cat file
< ? php $XZKsyG=’as’;
$ grep 'php \$[a-zA-Z]*=.as.;' file
< ? php $XZKsyG=’as’;
Are you sure the contents of the file are exactly what you showed us?
Try cat -A file or od -c file to see whether the file really looks the way you think it does.
(Note that you don't need to escape the $ character; it's only a metacharacter at the end of a line. But escaping it should be ok.)
EDIT :
The characters around the as in your file are not ASCII apostrophes; they're Unicode RIGHT SINGLE QUOTATION MARK characters (U+2019). If the file is stored in UTF-8, each of them is represented as a 3-byte sequence. The grep command works for me because my locale settings ("en_US.UTF-8") are such that a UTF-8 character is matched by . in a regexp, even if it has a multi-byte representation. I suspect your locale is such that each quote would only be matched by ... (one . per byte).
Probably the simplest solution is to edit the file to use ASCII apostrophes.
You might also want to play around with your locale settings. Try the grep command with $LANG set to "en_US.UTF-8".
What's the output of the locale command?
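Here is one way to see both the bytes and the locale effect for yourself (a sketch; the printf spells out the U+2019 quotes as their UTF-8 bytes \xe2\x80\x99, and cat -A is the GNU coreutils option mentioned above):
$ printf '< ? php $XZKsyG=\xe2\x80\x99as\xe2\x80\x99;\n' > file
$ cat -A file
< ? php $XZKsyG=M-bM-^@M-^YasM-bM-^@M-^Y;$
$ LC_ALL=en_US.UTF-8 grep 'php \$[a-zA-Z]*=.as.;' file
< ? php $XZKsyG=’as’;
$ LC_ALL=C grep 'php \$[a-zA-Z]*=.as.;' file
$ LC_ALL=C grep 'php \$[a-zA-Z]*=...as...;' file
< ? php $XZKsyG=’as’;
In a single-byte locale each . matches one byte, so the 3-byte quotation mark needs three of them.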

That works fine for me, though you may want to look into those "funny" single quotes you have around as:
pax$ cat testfile
< ? php $XZKsyG='as';
pax$ grep 'php \$[a-zA-Z]*=.as.;' testfile
< ? php $XZKsyG='as';
Failing that, there are some things you can look at. Some of these may sound silly, but I'm really just covering all bases.
Are you sure the file contains only what you think it does? Executing od -xcb file will give you a hex dump of it for better checking.
Are you sure you're accessing the right file, in the right directory?
Have you done something silly like aliasing grep to be something else?
That's if you're looking for a file containing that string. If instead you're looking for a file named like that, you can use something like:
ls -1 | grep 'php \$[a-zA-Z]*=.as.;'
The ls -1 command gives you one file per line, and piping that through grep will filter out those not matching the pattern.
I suppose I should mention that I'm not really a big fan of file names with spaces in them, but I'm violently opposed to file names made up of PHP scripts :-)

Related

How do I exclude a character in Linux

Write a wildcard to match all files (it does not matter which directory the files are in; just give the wildcard) named by the following rule: starts with the string “image”, immediately followed by a one-digit number (in the range 0-9), then a non-digit char plus anything else, and ends with either “.jpg” or “.png”. For example, image7.jpg and image0abc.png should be matched by your wildcard while image2.txt or image11.png should not.
My folder contains these files: imag2gh.jpeg image11.png image1agb.jpg image1.png image2gh.jpg image2.txt image5.png image70.jpg image7bn.jpg Screenshot .png
If my command works it should only display: image1agb.jpg image1.png image2gh.jpg image5.png image70.jpg image7bn.jpg
This is the command I used: ls -ad image[0-9][^0-9]*{.jpg,.png} but I'm only getting image1agb.jpg image2gh.jpg image7bn.jpg, so I'm missing image1.png and image5.png. I also tried:
ls -ad image[0-9][!0-9]*{.jpg,.png}
Info
Character ranges like [0-9] do work in shell globs; what doesn't carry over from RegEx is [^0-9], whose portable glob spelling is [!0-9] (bash happens to accept both). The real problem is that the glob demands a non-digit character plus something more between the digit and the extension: in image1.png the [!0-9] consumes the . of .png, leaving nothing to match the literal extension, which is why image1.png and image5.png go missing.
Possible solution
Pipe the output of the command ls -a1 to the standard input of the grep command (which does support RegEx), and use a RegEx statement to make grep filter the filenames:
ls -a1 | grep 'image[[:digit:]]\+[[:alpha:]]*\.\(png\|jpg\)'
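Note that grep matches substrings and \+ allows more than one digit, so the pattern above also lets image11.png through. A stricter sketch, assuming GNU grep (where \? is supported as an extension in basic regular expressions), anchors the name and requires exactly one digit:
ls -a1 | grep '^image[[:digit:]]\([^[:digit:]].*\)\?\.\(png\|jpg\)$'
Read literally, the rule then also rejects image70.jpg, because its digit is followed by another digit.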

Is it possible to display a file's contents and delete that file in the same command?

I'm trying to display the output of an AWS lambda that is being captured in a temporary text file, and I want to remove that file as I display its contents. Right now I'm doing:
... && cat output.json && rm output.json
Is there a clever way to combine those last two commands into one command? My goal is to make the full combined command string as short as possible.
For cases where it is possible to control the name of the temporary text file, and the file is not used by other code, it is possible to pass /dev/stdout as the name of the output file.
Regarding portability, see the Stack Exchange question "how portable ... /dev/stdout".
POSIX 7 says they are extensions.
Base Definitions,
Section 2.1.1 Requirements:
The system may provide non-standard extensions. These are features not required by POSIX.1-2008 and may include, but are not limited to:
[...]
• Additional character special files with special properties (for example,  /dev/stdin, /dev/stdout,  and  /dev/stderr)
Using the mandatorily supported /dev/tty instead would force output to the current terminal, making it impossible to pipe the output of the whole command into a different program (or a log file), or to use the program when no terminal is connected (cron jobs and other automation tools).
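Applied to the Lambda case from the question, a minimal sketch (assuming the AWS CLI, where invoke takes the output file as a positional argument; my-function is a placeholder):
aws lambda invoke --function-name my-function /dev/stdout
The payload is printed directly and no temporary file needs cleaning up afterwards; be aware, though, that the CLI also writes its own invocation metadata to stdout, so the two outputs can interleave.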
No, you cannot easily remove the lines of a file while displaying them. It would be highly inefficient, as it would require removing characters from the beginning of the file each time you read a line. Current filesystems are pretty good at truncating a file at the end, but not at the beginning.
A simple but extremely slow method would look like this:
while [ -s output.json ]
do
    head -1 output.json      # print the first line
    sed -i 1d output.json    # then delete it (this rewrites the whole file)
done
While this algorithm is plain and simple, you should know that each time you remove the first line with sed -i 1d, sed copies the whole content of the file except the first line into a temporary file, resulting in approximately 0.5*n² lines written in total (where n is the number of lines in your file).
In theory you could avoid this by doing something like this:
while [ -s output.json ]
do
    line=$(head -1 output.json)
    printf -- '%s\n' "$line"
    # collapse the first ${#line}+1 bytes (the line plus its newline) in place
    fallocate -c -o 0 -l $((${#line}+1)) output.json
done
But this does not account for variable newline encodings (namely DOS-formatted newlines), and fallocate's collapse mode is not supported everywhere (it requires filesystem support and block-aligned ranges), among other issues.
Since you are trying to consume a file as it is created, without leaving a trace of its existence on disk, you are essentially asking for pipe functionality. In my opinion you should look into how your output.json file is produced; hopefully you can pipe it into a script of your own.
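A sketch of that pipe idea with a FIFO (assuming you control the producing command; producer here is a placeholder for whatever currently writes output.json):
mkfifo output.pipe          # a named pipe instead of a regular file
producer > output.pipe &    # the producer writes into the pipe in the background
cat output.pipe             # reading consumes the data; nothing persists on disk
rm output.pipe              # finally remove the pipe node itself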

Like a vlookup but in bash to match filenames in a directory against a ref file and return full description

I am aware there isn't a special bash function to do this and we will have to build this with available tools -- e.g. sed, awk, grep, etc.
We dump files into a directory and while their filename looks random, they can be mapped to their full description. For example:
/tmp/abcxyz.csv
/tmp/efgwaz.csv
/tmp/mnostu.csv
In filemapping.dat, we have:
abcxyz, customer_records_abcxyz
efgwaz, routernodes_logs_efgwaz
mnostu, products_campaign
We need to go through each of them in the directory recursively and rename the file with its full description. Final outcome:
/tmp/customer_records_abcxyz.csv
/tmp/routernodes_logs_efgwaz.csv
/tmp/products_campaign_mnostu.csv
I found something similar here, but I'm not sure how to make it work at the directory level with only one file as the lookup/reference file. Please help. Thanks!
I would try something like this:
sed 's/,/.csv/;s/$/.csv/' filemapping.dat | xargs -n2 mv
Either cd to /tmp beforehand, or modify the sed command to include the path name.
The two sed commands simply replace the comma and the line end with the string ".csv", turning each line into an "oldname.csv newname.csv" pair that xargs -n2 hands to mv.
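If you prefer something more explicit, a sketch of the same rename as a loop (assuming the two comma-separated fields shown above and files living in /tmp):
while IFS=', ' read -r key desc; do
    [ -e "/tmp/$key.csv" ] && mv "/tmp/$key.csv" "/tmp/$desc.csv"
done < filemapping.dat
For the recursive case you could generate the file list with find /tmp -name '*.csv' and match each basename against the mapping the same way.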

How to open a "-" dashed filename using terminal?

I tried gedit, nano, vi, leafpad and other text editors; it won't open. I tried cat and other file-viewing commands, and I assure you it's a file, not a directory!
This kind of filename causes a lot of confusion, because - as an argument conventionally refers to STDIN/STDOUT, i.e. /dev/stdin or /dev/stdout. So if you want to open this type of file, you have to specify a path to it, such as ./-. For example, if you want to see what is in that file, use cat ./-
Both cat < - (with redirection the shell opens the file itself, so the name is not treated specially) and cat ./- will give you the output.
You can use redirection:
cat < -file_name
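A short transcript showing the difference (a sketch; the file named - is created here just for the demonstration):
$ echo 'file contents' > ./-
$ echo 'from stdin' | cat -
from stdin
$ cat ./-
file contents
$ cat < -
file contents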
It looks like the rev command doesn't treat - as a special character.
From the man page
The rev utility copies the specified files to standard output, reversing the order of characters in every line.
so
rev - | rev
should show what's in the file in the correct order.
I tried the pico and vi commands; pico allowed me to open the file in the editor and read the contents.
cat ./- is the syntax that reveals the correct password for bandit; rev - reveals something else.

"grep" offset of ascii string from binary file

I'm generating binary data files that are simply a series of records concatenated together. Each record consists of a (binary) header followed by binary data. Within the binary header is an ASCII string 80 characters long. Somewhere along the way, my process of writing the files got a little messed up and I'm trying to debug this problem by inspecting how long each record actually is.
This seems extremely related, but I don't understand perl, so I haven't been able to get the accepted answer there to work. The other answer points to bgrep, which I've compiled, but it wants me to feed it a hex string, and I'd rather just have a tool where I can give it the ASCII string and it will find it in the binary data, print the string and the byte offset where it was found.
In other words, I'm looking for some tool which acts like this:
tool foobar filename
or
tool foobar < filename
and its output is something like this:
foobar:10
foobar:410
foobar:810
foobar:1210
...
i.e. the string that matched and the byte offset in the file where the match started. In this example case, I can infer that each record is 400 bytes long.
Other constraints:
ability to search by regex is cool, but I don't need it for this problem
My binary files are big (3.5 GB), so I'd like to avoid reading the whole file into memory if possible.
grep --byte-offset --only-matching --text foobar filename
The --byte-offset option prints the offset of each matching line.
The --only-matching option makes it print offset for each matching instance instead of each matching line.
The --text option makes grep treat the binary file as a text file.
You can shorten it to:
grep -oba foobar filename
This works in the GNU version of grep, which is the default on Linux. It won't work in BSD grep (the default on macOS).
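For example (a sketch; the offsets simply mirror the question, and note the output order is offset:match):
$ grep -oba foobar filename
10:foobar
410:foobar
810:foobar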
You could use strings for this:
strings -a -t x filename | grep foobar
Tested with GNU binutils.
For example, where in /bin/ls does --help occur:
strings -a -t x /bin/ls | grep -- --help
Output:
14938 Try `%s --help' for more information.
162f0 --help display this help and exit
I wanted to do the same task. Though strings | grep worked, I found gsar was the very tool I needed.
http://tjaberg.com/
The output looks like:
>gsar.exe -bic -sfoobar filename.bin
filename.bin: 0x34b5: AAA foobar BBB
filename.bin: 0x56a0: foobar DDD
filename.bin: 2 matches found
