My foremost results do not appear visually, but exist on the command line - Linux

I'm new to hacking, especially CTFs, and want to learn. But I'm having trouble here using foremost: when I tried to extract from a JPG file with the command foremost -v -i file.jpg,
the result said it finished with extracted data. But when I check the output folder visually, there are no results in it. However, when I check with the command ls -la, I can see a file named "00000000.png".
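For what it's worth, foremost normally writes its results into an output directory (output/ by default, or whatever -o names), with one subfolder per file type plus an audit.txt. The snippet below only simulates that layout to show how to inspect it from the shell; the directory and file names are stand-ins, not real foremost output:

```shell
# Simulated foremost output layout (per-type subfolders are an assumption
# about the default behaviour; the carved file here is a stand-in):
mkdir -p output/png
: > output/png/00000000.png

# ls shows the carved file even when a GUI file manager
# has not refreshed its view of the folder yet:
ls -la output/png
```

If the file really is invisible only in the GUI, refreshing the file manager view (or reopening the folder) usually makes it appear.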

Related

(Linux) Recursively overwrite all files in folder with data from another file

I find myself in a situation similar to this question:
Linux: Overwrite all files in folder with specified data?
The answers there work nicely, however, they are for typed-out text. Allow me to provide context.
I have a Linux terminal with the following file structure (files & folders irrelevant to the question removed):
root/
  empty.svg
  svg/
    257238.svg
    297522.svg
    a7yf872.svg
    236y27fh.svg
    38277.svg
    ... (~200 other .svg files with arbitrary names)
    2903852.svg
The framework I am working with requires those .svg files to exist with those specific file names, but it obviously does not care about the SVG content they contain. I do not plan on using these files, and they take up a hefty amount of disk space, so I wish to replace them all with an empty SVG, i.e. the empty.svg file in my root directory, which is a 12x12 transparent SVG (124 bytes). That way the framework shouldn't error out like it did when I simply overwrote the raw data of those SVGs with plain text, using the answer from the question linked above. I've tried several approaches, trying to be creative with my basic Linux command-line knowledge, but with no success. How do I accomplish this?
TL;DR: How to recursively overwrite all files in a folder with the raw data of another file from Linux CLI?
Similar to the linked question, you can use the tee command, but instead of echo use cat (the command that reads a file's contents) to copy the data:
cat empty.svg | tee svg/257238.svg svg/297522.svg <etc>
But if there are a lot of files in the svg directory, it is easier to use a loop to automate the previous command:
for f in svg/*; do
  if [[ "$f" == *.svg ]]; then
    cat empty.svg > "$f"
  fi
done
Here we use a redirection (>) to send cat's output into each file, overwriting its previous contents.
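An alternative sketch that skips the loop entirely: let find pick out the .svg files (it also catches any in subdirectories) and copy empty.svg over each one. The setup lines here are stand-ins for the real tree:

```shell
# Demo setup (stand-ins for the real files):
mkdir -p svg
printf '<svg width="12" height="12"/>' > empty.svg
printf 'lots of svg data' > svg/257238.svg
printf 'lots of svg data' > svg/297522.svg

# Overwrite every .svg under svg/ with the contents of empty.svg:
find svg -type f -name '*.svg' -exec cp empty.svg {} \;
```

Each matching file is overwritten in place, so its name and path are preserved, which is what the framework requires.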

How to preserve folder structure and filename while downloading using wget?

There's a file named links.txt which has various links that I am trying to fetch and download using wget.
The command is shown below:
wget -i links.txt -nc -c -x
The folder depth is quite deep, and the end result is that the downloaded files have their names shortened for some reason.
I don't know if it's a character limit in Linux or a wget thing, but I would ideally want the exact folder structure and file names to be retained.
Well, the folder structure is fine as of now; the only issue is that the file names are getting shortened.
Let's say, for example, the actual file name is this_is_the_whole_fileName.mp4;
the downloaded file would be named this_is.
The file name is only partially kept, and the file extension is also absent.
On another note: Aria2 doesn't shorten the file names and works well for the most part; the only issue there is that some files take time to start downloading, and Aria2 gives an error in such cases.

ripMIME not working when using variable as file name

I need to extract an attachment that I receive every day via email, on a Linux server.
I'm using ripMIME for this task and have a script like this:
theFile=$(ls -t * | head -n 1)
ripmime -i $theFile -d /home/myDirectory/
The first line assigns the name of the newest file (email) to the variable theFile.
The second line should extract its attachments to the /home/myDirectory/ path; however, it doesn't extract anything.
However, if I execute this line: (including the file name instead of the variable)
ripmime -i 1536138112.M623890P26484.myDomain.com,S\=1345977,W\=1363482:2,S -d /home/myDirectory/
...then the files are successfully extracted and copied to the specified directory.
I need to use a variable since I can't possibly know the name of the file, I just need to extract the files from the newest email using a script.
Also, I don't get any output when the instruction fails, so I'm in the dark here.
The ripMIME tool documentation can be found here
Any help will be appreciated.
When I included those lines inside a script file (.sh) and executed it, everything worked like a charm. That didn't happen when I was trying to execute them directly from the command line.
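One thing worth checking either way is quoting: maildir file names contain commas and = characters, and an unquoted $theFile is subject to word splitting. The sketch below shows the newest-file lookup with stand-in file names and pinned timestamps; ripmime itself is left commented out, since it is a third-party tool:

```shell
mkdir -p maildir && cd maildir

# Stand-in mail files; touch -d pins the timestamps so "newest" is deterministic
touch -d '2018-01-01 00:00' old.eml
touch -d '2018-09-05 00:00' '1536138112.M623890P26484.myDomain.com,S=1345977,W=1363482:2,S'

theFile=$(ls -t | head -n 1)   # newest entry in the current directory
echo "newest: $theFile"

# Quote the variable so the commas and '=' reach ripmime intact:
# ripmime -i "$theFile" -d /home/myDirectory/
```

Note that ls -t (without the *) is used here: with a glob, ls would also list the contents of any subdirectories, which can throw off the "newest file" logic.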

implementing ls -lh command

I am new to Linux bash. I was trying to do a problem which asks you to edit the .bashrc file and add an alias. The alias should get the space usage of all files and folders and display it on screen. After digging around on the internet I found that the command for this is ls -lh, but along with the file or folder name and disk usage it also shows the date the file was created and other details I don't need.
So how can I remove those extra columns so that I only get the file and folder names when I execute ls -lh?
You have the right command; you can use tools like awk and sed to parse the output.
See this topic for example:
How to get the second column from command output?
Edit
Like Benjamin W. said in the comments, the output of ls should not be parsed.
This has been discussed on the following page: https://unix.stackexchange.com/questions/128985/why-not-parse-ls
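Since the actual goal is space usage, one way to sidestep parsing ls entirely is du, which prints exactly one size and one name per entry. A sketch with stand-in files, assuming a du that supports -h (GNU and BSD versions both do):

```shell
# Demo files (stand-ins):
mkdir -p demo/subdir
printf 'some data' > demo/file.txt

# One human-readable size per file or folder, nothing else:
du -sh -- demo/*
```

For the alias exercise, something along the lines of alias space='du -sh -- *' in .bashrc would give the same output for the current directory.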

Adding timestamp to a filename with mv in BASH

Well, I'm a Linux newbie, and I'm having an issue with a simple bash script.
I've got a program that adds to a log file while it's running. Over time that log file gets huge. I'd like to create a startup script which will rename and move the log file before each run, effectively creating separate log files for each run of the program. Here's what I've got so far:
DATE=$(date +"%Y%m%d%H%M")
mv server.log logs/$DATE.log
echo program
When run, I see this:
: command not found
program
When I cd to the logs directory and run dir, I see this:
201111211437\r.log\r
What's going on? I'm assuming there's some syntax issue I'm missing, but I can't seem to figure it out.
UPDATE: Thanks to shellter's comment below, I've found the problem to be due to the fact that I'm editing the .sh file in Notepad++ in windows, and then sending via ftp to the server, where I run the file via ssh. After running dos2unix on the file, it works.
New question: How can I save the file correctly in the first place, to avoid having to perform this fix every time I resend the file?
mv server.log logs/$(date -d "today" +"%Y%m%d%H%M").log
The few lines you posted from your script look okay to me. It's probably something a bit deeper.
You need to find which line is giving you this error. Add set -xv to the top of your script. This will print out the line number and the command that's being executed to STDERR. This will help you identify where in your script you're getting this particular error.
BTW, do you have a shebang at the top of your script? When I see something like this, I normally expect it's an issue with the shebang. For example, if you had #!/bin/bash at the top, but your bash interpreter is located at /usr/bin/bash, you'll see this error.
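Putting those pieces together, a minimal version of the startup script with a shebang might look like this; the log-creation line is only there to make the sketch self-contained, and ./program is a hypothetical stand-in for the real binary:

```shell
#!/bin/bash
: > server.log                  # demo stand-in for a log left by a previous run

DATE=$(date +"%Y%m%d%H%M")
mkdir -p logs                   # make sure the target directory exists
mv server.log "logs/$DATE.log"  # rotate the old log out of the way
# ./program                     # hypothetical: launch the real program here
```

Saved with Unix line endings (see below), this gives one dated log file per run.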
EDIT
New question: How can I save the file correctly in the first place, to avoid having to perform this fix every time I resend the file?
Two ways:
Select the Edit->EOL Conversion->Unix Format menu item when you edit a file. Once it has the correct line endings, Notepad++ will keep them.
To make sure all new files have the correct line endings, go to the Settings->Preferences menu item, and pull up the Preferences dialog box. Select the New Document/Default Directory tab. Under New Document and Format, select the Unix radio button. Click the Close button.
A single line method within bash works like this.
[some output] > $(date "+%Y.%m.%d-%H.%M.%S").ver
will create a file with a timestamped name and a .ver extension.
A working example that snapshots a file listing to a date-stamped file name shows it in action:
find . -type f -exec ls -la {} \; | cut -d ' ' -f 6- >$(date "+%Y.%m.%d-%H.%M.%S").ver
Of course
cat somefile.log > $(date "+%Y.%m.%d-%H.%M.%S").ver
or even simpler
ls > $(date "+%Y.%m.%d-%H.%M.%S").ver
I use this command to do a simple rotation of a file:
mv output.log `date +%F`-output.log
In local folder I have 2019-09-25-output.log
Well, it's not a direct answer to your question, but there's a tool in GNU/Linux whose job is to rotate log files on a regular basis, keeping old ones compressed up to a certain limit: logrotate.
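For completeness, a minimal logrotate sketch for this use case might look like the following; the path, retention count, and schedule are assumptions, and copytruncate is included because the program keeps the log file open while running. It would go in a file under /etc/logrotate.d/:

```
/path/to/server.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    copytruncate
}
```

This keeps seven compressed daily logs and avoids having to restart the program just to rotate its log.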
You can write your scripts in Notepad, but just make sure you convert them
using this ->
$ sed -i 's/\r$//' yourscripthere
I use it all the time when I'm working in Cygwin and it works. Hope this helps.
First, thanks for the answers above! They lead to my solution.
I added this alias to my .bashrc file:
alias now='date +%Y-%m-%d-%H.%M.%S'
Now when I want to put a time stamp on a file such as a build log I can do this:
mvn clean install | tee build-$(now).log
and I get a file name like:
build-2021-02-04-03.12.12.log
