I have been trying to extract an ISO image with 7-Zip and WinRAR.
Here is the command line that I used:
7z x -y "%Isocontents%\ISO.iso" -o%Newfolder%
WinRar.exe x -y -ow "%Isocontents%\ISO.iso" "%Newfolder%"
The entire project uses generic batch / 4NT scripting commands. The above command lines work perfectly when run locally, but on the actual target machine the extraction is only partial.
This is really strange and I have no clue what the reason could be.
During extraction it reports only two files:
readme.txt
bootable_noemulation.img
I am totally frustrated, as this makes no sense to me at all.
If you're trying to extract the files from an ISO to a directory then 7zip can do this:
7z x -y -oC:\OutputDirectory X:\VRMPVOL_EN.iso
If you're not using the same ISO file, it's possible that yours is in UDF format.
Check the readme.txt file which might have a note indicating such.
For 7z, use the -t switch to force the format:
7z x -tudf -y "%Isocontents%\ISO.iso" -o%Newfolder%
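To confirm what 7-Zip detects, you can list the archive first; the header of the listing should report the container format (e.g. Type = Iso or Type = Udf):
7z l "%Isocontents%\ISO.iso"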
For anyone on macOS arriving at this question, you can run this command to extract ISOs:
7zz x -tiso -y alpine-standard-3.15.0-aarch64.iso -oNewfolder
(used an Alpine ISO)
Related
I've tried multiple ways of creating a zip or a tar.gz on the Mac, using the GUI or the command line, and I have tried decompressing on the Linux side and gotten various errors, from "File.XML" and "File.xml" both appearing in a directory to all sorts of complaints about something being truncated, etc.
Without listing all my command-line experiments on the Mac and Linux (using tcsh), what are two bulletproof commands to:
1) make a zip file of a directory (with no __MACOSX folders)
2) unzip / untar (whatever) the Mac zip on Linux with no errors (and no __MACOSX folders)
IT staff on the Linux side said they "usually use .gz and use gzip and gunzip commands".
Thanks!
After much research and experimentation, I found this works every time:
1) Create a zipped tar file with this command on the Mac in Terminal:
tar -cvzf your_archive_name.tar.gz your_folder_name/
2) When you FTP the file from one server to another, make sure you do so with binary mode turned on
3) Unzip and untar in two steps in your shell on the Linux box (in this case, tcsh):
gunzip your_archive_name.tar.gz
tar -xvf your_archive_name.tar
On my Mac and in ssh bash I use the following simple commands:
Create Zip File (-czf)
tar -czf NAME.tgz FOLDER
Extract Zip File (-xzf)
tar -xzf NAME.tgz
Best, Mike
First off, File.XML and File.xml cannot both appear on a (default, case-insensitive) HFS+ file system. It is possible, but very unusual, for someone to have formatted a case-sensitive HFSX file system that would permit that. Can you really create two such files and see them listed separately?
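A quick way to check whether your Mac volume is case-sensitive (run in an empty test directory):
touch File.xml File.XML
ls File.*
One listed file means the volume is case-insensitive; two means it is case-sensitive (HFSX).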
You can use the -X option with zip to prevent resource forks and extended attributes from being saved. You can also throw in a -x .DS_Store to get rid of those files as well.
For tar, precede it with COPYFILE_DISABLE=true or setenv COPYFILE_DISABLE true, depending on your shell. You can also throw in an --exclude=.DS_Store.
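For example, putting those pieces together with the names from the question (bash syntax for the tar line; in tcsh, run setenv COPYFILE_DISABLE true first):
zip -rX your_archive_name.zip your_folder_name -x '*.DS_Store'
COPYFILE_DISABLE=true tar --exclude='.DS_Store' -cvzf your_archive_name.tar.gz your_folder_name/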
Your "IT Staff" gave you a pretty useless answer, since gzip can only compress one file. gzip has to be used in combination with tar to archive a directory.
We currently have a challenge where the ideal solution would be to symlink a file to a web URL...
image.jpg -> http://www.host.com/images/image.jpg
Is this possible?
Maybe a named pipe that you feed with a wget for the file?
Edit: not wget. You can work with links -dump. So:
mkfifo reddit
links -dump reddit.com > reddit &   # the writer blocks until a reader opens the FIFO, so background it
cat reddit
There are several nice and interesting solutions here. I especially like #ArjunShankar's FUSE solution. In the spirit of keeping it simple though, perhaps a file in /etc/cron.daily with
#!/bin/sh
cd /your/dir && wget -N http://www.host.com/images/image.jpg
would be a lot simpler and Good Enough(TM)?
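If the filename mapping matters, you could then point a symlink at the cached copy (paths here are illustrative):
ln -s /your/dir/image.jpg image.jpg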
On Mac I successfully used this great tool by maxogden, which also uses FUSE:
https://github.com/maxogden/mount-url
brew install osxfuse
npm install -g mount-url
Then
mount-url "https://url-to-10-gb-video-file-on-some-external-cloud-storage/video.mp4?xxx=yyy"
This would create a symlink for the file named video.mp4 in the current directory.
Access speed is not great, but it works.
I would like to read the actual source code of the Linux commands. I've gained some experience using them and now I think it's time to interact with my machine at a deeper level.
I've found some commands here: http://directory.fsf.org/wiki/GNU. Unfortunately I wasn't able to find basic commands such as 'ls', which seems to me an easy enough place to begin.
How exactly do I read the source code of the simple shell commands like 'ls'?
I'm running on Ubuntu 12.04
All these basic commands are part of the coreutils package.
You can find all information you need here:
http://www.gnu.org/software/coreutils/
If you want to download the latest source, you should use git:
git clone git://git.sv.gnu.org/coreutils
To install git on your Ubuntu machine, you should use apt-get (git is not included in the standard Ubuntu installation):
sudo apt-get install git
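Once cloned, a rough sketch for building and trying the fresh binaries without installing them (the bootstrap step is only needed for git checkouts and assumes autoconf, automake and related developer tools are installed):
cd coreutils
./bootstrap
./configure
make
./src/ls -l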
Truth be told, here you can find the specific source for the ls command:
http://git.savannah.gnu.org/cgit/coreutils.git/tree/src/ls.c
Only 4984 lines of code for a command as 'easy enough' as ls... are you still interested in reading it? Good luck! :D
Source files for some popular programs in coreutils, with their line counts:
cat (767 lines)
chmod (570 lines)
cp (2912 lines)
cut (831 lines)
date (570 lines)
df (1718 lines)
du (1112 lines)
echo (272 lines)
head (1070 lines)
hostname (116 lines)
kill (312 lines)
ln (651 lines)
ls (4954 lines)
md5sum (878 lines)
mkdir (306 lines)
mv (512 lines)
nice (220 lines)
pwd (394 lines)
rm (356 lines)
rmdir (252 lines)
shred (1325 lines)
tail (2301 lines)
tee (220 lines)
touch (437 lines)
wc (801 lines)
whoami (91 lines)
The full list is in the src directory of the coreutils repository.
ls is part of coreutils. You can get it with git:
git clone git://git.sv.gnu.org/coreutils
You'll find coreutils listed with the other packages (scroll to the bottom) on the GNU directory page linked in the question.
Actually, saner sources are provided by http://suckless.org; look at their sbase repository:
git clone git://git.suckless.org/sbase
They are clearer, smarter, simpler and suckless; e.g., ls.c has just 369 LOC.
After that it will be easier to understand more complicated GNU code.
The coreutils package referred to in other posts does NOT contain the real implementation of most of the functionality you seek. In most cases it provides front-ends for the actual functions that retrieve the data. Coreutils is built upon Gnulib, and the actual source code is in Gnulib's lib subdirectory.
You can get it from GitHub (you need to have git installed) using the command
git clone https://github.com/coreutils/coreutils.git
You can find all the source code in the src folder.
Things have changed since 2012; the ls source code now has 5309 lines.
BSD distributions are actually a nice way of reading the source code through their repositories, since everything is packed into one place and you can see how the source has historically evolved. So why not use BSD repos such as NetBSD or OpenBSD for this task?
cd ~ && apt-get source coreutils && ls -d coreutils*
You should be able to use a command like this on Ubuntu to fetch the source of a package; you can omit sudo, assuming you're downloading to a location you own.
We're looking for a program that converts a doc or docx document to a txt file. We're working on Linux and we want to start a website that converts user-uploaded doc files. We don't want to use OpenOffice/LibreOffice because we've had bad experiences with them. Pandoc can't handle doc files :/
Anyone have an idea?
You will have to use two different command-line tools, depending if you are working with .doc or .docx format.
For .doc use catdoc:
catdoc foo.doc > foo.txt
For .docx use docx2txt:
docx2txt foo.docx
The latter will produce a file called foo.txt in the same directory as the original.
I'm not sure which Linux distribution you are using, but both catdoc and docx2txt are available from the Ubuntu repositories, for example:
apt-get install docx2txt
Or with Homebrew on Mac:
brew install docx2txt
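Since the question mentions converting user uploads, a tiny dispatch wrapper that picks the right tool by extension might look like this (a hypothetical sketch; convert.sh is not part of either package):
#!/bin/sh
# usage: convert.sh FILE
case "$1" in
  *.docx) docx2txt "$1" ;;                 # writes FILE.txt next to the input
  *.doc)  catdoc "$1" > "${1%.doc}.txt" ;;
  *)      echo "unsupported file type: $1" >&2; exit 1 ;;
esac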
Here is a Perl project which claims to do it. I have also done a lot of this by hand, using XSLT on the document.xml. The docx file itself is just a zip file; you can unzip it and inspect the elements. I will say that this is not hard to do for specific files, but very hard to do in the general case, because of the lack of documentation on how Word stores things internally, and the variance of the internal representation.
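For example, to peek inside a docx (the main text lives in word/document.xml):
unzip -o foo.docx -d foo_docx
less foo_docx/word/document.xml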
For doc files you may use antiword; it's available in Homebrew and the Ubuntu repositories.
You can also use pandoc:
Keep the layout (newlines as in the rendered view of the document):
pandoc -s mydocument.docx -o output.txt
Newlines only where the original text has an explicit line break:
pandoc --wrap=none -s mydocument.docx -o output.txt
Some software (for example, the NetBeans IDE) ships its Linux installers as .sh files. Curious about how exactly they 'package' a whole IDE into a 'shell script', I opened the file in an editor. I saw some plain-text shell scripting code and then some random gibberish, which I reckon is 'binary' or non-plain text.
I am wondering how they mix plain shell script with the 'non-readable' stuff, which would be the binaries, and presumably invoke it.
Any insight on this?
Basically, it's a shell script prepended to a compressed archive of some sort, such as a tar archive. You use the tail or sed command on yourself (the $0 variable in Bourne shell) to strip off the shell script at the front and pass the rest to your unarchiver.
For example, create the following script as self-extracting:
#!/bin/sh -e
sed -e '1,/^exit$/d' "$0" | tar xzf - && ./project/Setup
exit
The sed command above deletes all lines from the first line of the file through the first line consisting solely of "exit", and then passes the rest on through. If what starts immediately after the "exit" line is a tar file, the tar command will extract it. If that's successful, the ./project/Setup file (presumably extracted from the tarball) will be executed.
Then:
mkdir project
echo "#!/bin/sh" > project/Setup
echo "echo This is the setup script!" >> project/Setup
chmod +x project/Setup
tar czf - project >> self-extracting
Now, if you get rid of your old project directory, you can run self-extracting and it will extract that tar file and run the setup script.
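That is, continuing the example:
rm -rf project
sh self-extracting
The stub at the top strips itself off, extracts the tarball, and runs project/Setup, which prints "This is the setup script!".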
You might want to check out makeself.sh
From the authors' notes:
makeself.sh is a small shell script that generates a self-extractable tar.gz archive from a directory. The resulting file appears as a shell script (many of those have a .run suffix), and can be launched as is. The archive will then uncompress itself to a temporary directory and an optional arbitrary command will be executed (for example an installation script).
Makeself archives also include checksums for integrity self-validation (CRC and/or MD5 checksums).
The makeself.sh script itself is used only to create the archives from a directory of files. The resultant archive is actually a compressed (using gzip, bzip2, or compress) TAR archive, with a small shell script stub at the beginning. This small stub performs all the steps of extracting the files, running the embedded command, and removing the temporary files when it's all over. All the user has to do to install the software contained in such an archive is to "run" the archive [that is, execute the script].
I am trying to keep the code of this script as portable as possible, i.e it's not relying on any bash-specific features and only calls commands that are installed on any functioning UNIX-compatible system. This script as well as the archives it generates should run on any Unix flavor, with any compatible Bourne shell, provided of course that the compression programs are available.
Finally, the makeself package itself comes as a self-extracting script called makeself.run.
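A minimal invocation sketch, reusing the project directory from the first answer (see the makeself documentation for the full option list; the arguments are archive directory, output file, label, and startup command):
./makeself.sh ./project project-installer.run "Project installer" ./Setup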
Add a Binary Payload to your Shell Scripts
GNU sharutils:
http://www.gnu.org/software/sharutils/
is a toolset for creating shell archives, and provides some additional features that may be helpful (such as checksums for ensuring that the payload is not damaged in transit).
Protecting against malicious modifications is not really feasible when the final product has to be interpretable by the shell - anyone who understood the generation technique could modify the checksum as well.
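A quick sketch with sharutils, again using the project directory from the first answer:
shar project/* > project.shar
sh project.shar
Unpacking can also be done with the companion unshar tool.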
There are also other, commercial installer builders (like InstallAnywhere); they basically have their own versions of shar/makeself.
NetBeans has its own installer engine; the part of it that does the unpacking and launching is the NBI native launcher component: http://wiki.netbeans.org/NBINativeLaunchers
It creates a shell(-script) archive for Linux/Unix/macOS and a native executable for Windows. You can use that tool for your own projects as well.