How to pass parameters to a .run file in Linux

I have an example.run file (a binary) that installs a piece of software in my Linux environment. I want to automate the installation with Chef, but during the installation the software asks me to accept the license (so I have to type yes). I want to know whether there is a way to pass a parameter to the .run file, whether Chef can type the yes for me, etc.
file Talend-Installer-20150508_1414-V5.6.2-linux64.run
Talend-Installer-20150508_1414-V5.6.2-linux64.run: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), statically linked, stripped

It depends on the file (there is no reason for every *.run installer to behave the same way). Try Talend-Installer-20150508_1414-V5.6.2-linux64-installer.run --help or perhaps Talend-Installer-20150508_1414-V5.6.2-linux64-installer.run -h and read its documentation; sometimes there is an option to accept the license. You might also consider using yes(1) in a pipe:
yes | yourfile.run
But be cautious. What if yourfile.run asked politely:
can I remove every file in /home/ ? [yN]
(Of course, as for any script or executable, you'll need to make it readable and executable with chmod u+rx, and either change your PATH or invoke it as ./yourfile.run or by its absolute or relative file path, etc.)
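Putting that together, a minimal unattended attempt could look like this (whether --help prints anything useful is an assumption about this particular installer):
chmod u+rx Talend-Installer-20150508_1414-V5.6.2-linux64.run
# probe for a documented non-interactive / license-accept option first:
./Talend-Installer-20150508_1414-V5.6.2-linux64.run --help
# fallback: feed an endless stream of "y" answers to every prompt:
yes | ./Talend-Installer-20150508_1414-V5.6.2-linux64.run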
You might also try strings(1) on that executable, to perhaps guess (through some of the string messages inside) what is possible.
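For instance, a first probe could be (the keywords are only guesses):
strings Talend-Installer-20150508_1414-V5.6.2-linux64.run | grep -i -e license -e accept -e unattended -e silent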
Argument passing is done through execve(2), and your shell is in charge of globbing (before doing the execve), so there is nothing specific about running *.run files.
I strongly suggest taking a few days to learn more about Linux. Perhaps read Advanced Linux Programming and the Advanced Bash-Scripting Guide first (and, of course, the documentation of Chef and of the Talend product you are installing); if you experiment with sysadmin tasks without understanding them, you might mess up your system to the point of losing data and having to reinstall everything. Both echo(1) and strace(1) might also be useful.

Related

PE/ELF executable file format - Malware sample execution

I have downloaded some malware samples in both a Linux and a Windows VM. When I check the file type of the samples in Linux using the file * command, the type is displayed as PE32 executable. However, when I check the same files in the Windows VM, the file type is shown as "file".
Does that mean those samples are not executable?
Do I have to change the extension to .exe to make them executable?
I would recommend reading up on the PE format: .exe is not the only file extension for PE32 executables (.dll files are PE32 too, for example).
If I were you, I would also start by learning how to construct a safe lab environment and how to use some static-analysis tools before running anything (make sure your VM networking is set up safely).
To your question:
The Linux file command should be accurate in identifying the files you are looking at; I am not sure what exact check you did on Windows.
Yes, changing the file extension helps, but it doesn't guarantee the malware will run as you would expect: DLLs, for instance, need to be loaded with rundll32, and there can be sandbox/VM checks, packing that prevents execution, etc.
You can check which kind of file you are dealing with by opening it in a hex editor and comparing the magic bytes.
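For example, dumping the first bytes with xxd (sample.bin is a placeholder name):
xxd -l 4 sample.bin
# PE32 files start with the "MZ" DOS header (4d 5a ...);
# ELF files start with 7f 45 4c 46 ("\x7fELF").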

Program that runs on windows and linux

Is it possible to write a program (make executable) that runs on windows and linux without any interpreters?
Will it be able to take input and print output to console?
A program that runs directly on hardware, as pure machine code; this should be possible in theory.
edit:
OK, file formats are different, and system calls are different.
But how hard would it be, or is it even possible, for kernel developers to introduce another executable format called "raw", for fun and science? Maybe a raw program won't be able to report back, but it should be able to put a heavy load on the CPU and raise its temperature as evidence of running, for example.
Is it possible to write a program (make executable) that runs on windows and linux without any interpreters?
In practice, no!
Levine's book Linkers and Loaders explains why it is not possible in practice.
On recent Linux, an executable has the elf(5) format.
On Windows, it has the PE format.
The very first bytes of the executables are different, and the two OSes have different system calls. The Linux ones are listed in syscalls(2).
And even on Linux, an executable is in practice usually dynamically linked and depends on shared objects (which differ from one distribution to the next, so an executable built for Debian/Testing is unlikely to run on RedHat). You can use the objdump(1), readelf(1), and ldd(1) commands to inspect an executable, and strace(1) with gdb(1) to observe its runtime behavior.
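For instance, on any Linux binary at hand:
ldd /bin/ls        # the shared objects this executable depends on
readelf -h /bin/ls # ELF header: magic bytes, file type, entry point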
Portability of software is often achieved by publishing it (in source form) with some open source license. The burden of recompilation is then on the shoulders of users.
In practice, real software (in particular those with a graphical user interface) depends on lots of OS specific and computer specific resources (e.g. fonts, screen size, colors) and user preferences.
A possible approach could be to have a small OS-specific software base which generates machine code at runtime, as e.g. SBCL or LuaJIT do. You could also consider using asmjit. Another approach is to generate opaque or obfuscated C or C++ code at runtime, compile it (with the system compiler), and load it, at runtime, as a plugin. On Linux, use dlopen(3) with dlsym(3).
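A minimal sketch of that compile-then-load flow on Linux (file names are hypothetical):
# the program writes plugin.c at runtime, then compiles it as a shared object:
gcc -O2 -fPIC -shared plugin.c -o plugin.so
# it then calls dlopen("./plugin.so", RTLD_NOW) and dlsym() on the
# generated entry points to run the freshly built code.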
Pitrat's book Artificial Beings: The Conscience of a Conscious Machine describes a software system (an artificial mathematician) which generates all of its C source code (half a million lines). Contact me by email at basile@starynkevitch.net for more.
The Wine compatibility layer allows you to run some (but not all) simple Windows executables on Linux. The WSL layer enables you to run some Linux executables on Windows.
PS. Even open source projects like RefPerSys or GCC or Qt may be (and often are) difficult to build.
No, mainly because the executable formats are different, but...
With some care, you can use mostly the same code to create different executables, one for Linux and another for Windows. Depending on what you consider an interpreter, Java also runs on both Windows and Linux (in a Java Virtual Machine, though).
Also, it is possible to create scripts that can be interpreted both by PowerShell and by the Bash shell, such that running one of these scripts could launch a proper application compiled for the OS of the user.
You might require the Windows user to run your program under WSL, which is maybe an ugly workaround but lets you ship the same executable to both Windows and Linux users.

How to use mod_exec proftpd linux

I used this configuration to execute an external script via mod_exec in proftpd:
ExecEngine on
ExecLog /opt/proftpd_mod_exec.log
ExecOptions logStderr logStdout
<IfUser yogi>
ExecBeforeCommand STOR,RETR /home/yogi/Desktop/kab.sh EVENT=BeforeCommand FILE='%f'
ExecOnCommand STOR,RETR /home/yogi/Desktop/kab.sh EVENT=OnCommand FILE='%f'
</IfUser>
But I get an error like this in the proftpd_mod_exec.log file: STOR ExecBeforeCommand '/home/yogi/Desktop/kab.sh' failed: Exec format error
How can I fix it?
From http://www.proftpd.org/docs/contrib/mod_exec.html:
This module will not work properly for logins, or for logins that are affected by DefaultRoot. These directives use the chroot(2) system call, which wreaks havoc when it comes to scripts. The path to script/shell interpreters often assume a certain location that is no longer valid within a chroot. In addition, most modern operating systems use dynamically loaded libraries (.so libraries) for many binaries, including script/shell interpreters. The location of these libraries, when they come to be loaded, are also assumed; those assumptions break within a chroot. Perl, in particular, is so wrought with filesystem location assumptions that it's almost impossible to get a Perl script to work within a chroot, short of installing Perl itself into the chroot environment.
From the error message, it sounds like just that: you have chroot enabled, and the script cannot be executed because files are not available at the expected places within the chroot.
The author suggests not using the module because of this.
To get it to work, you need to figure out the dependencies required inside the chroot target and set them up there at the appropriate places. Or disable chroot for the users and try again. A third possibility: build a statically linked binary with almost no dependencies. (A quick dependency check is sketched below.)
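A quick way to see what the script needs (assuming DefaultRoot makes /home/yogi the chroot):
head -1 /home/yogi/Desktop/kab.sh   # shebang: which interpreter is expected?
ldd /bin/sh                         # shared libraries that interpreter loads
# both the interpreter and those libraries must exist at the same paths
# *inside* the chroot for the script to run there.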
Or, as the author of the module suggests, use a FIFO and proftpd's logging functionality to trigger the scripts outside of the chroot environment.
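A rough sketch of that approach (paths are assumptions; see the mod_exec documentation for the exact setup):
mkfifo /var/log/proftpd/exec.fifo          # FIFO that proftpd logs into
while read -r line; do
    /home/yogi/Desktop/kab.sh "$line"      # runs outside the chroot
done < /var/log/proftpd/exec.fifo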

How do I determine the commands being issued when I use a GUI?

I am working on a Linux machine (running openSUSE 13.1 w/ KDE, specifically) and I would like to determine what commands are actually being issued in the background when I do something with an application's GUI.
My question is very similar to the following one which has received no answer:
https://stackoverflow.com/questions/20930239/how-can-i-see-the-commands-being-passed-in-backend-of-a-gui-application
If it helps at all, the specific task I am trying to accomplish is figuring out the command-line equivalent of sending a file to the Trash in KDE's Dolphin utility. I would like to make an alias for this functionality in my .bashrc so that I have a "gentler" alternative to rm. But I would rather know the answer to my more general question so that I can do similar things in the future.
My naive guess was that a log file might exist somewhere. Then I could do a task with a GUI and just tail that log file afterward to see what the underlying commands were for what I just did in the GUI. As far as I can tell, however, no such log exists.
To move a file foo to your trash bin, try
mv foo $HOME/Trash/
so you could make that a shell function in your .bashrc:
function movetotrash() {
    mv -- "$@" "$HOME/Trash/"
}
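Note that KDE itself keeps the trash under ~/.local/share/Trash (per the freedesktop.org trash specification), so a plain $HOME/Trash/ directory is not what Dolphin uses. If kde-cli-tools is installed, the closest command-line equivalent to Dolphin's action is probably:
kioclient move foo trash:/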
AFAIK, most GUI applications don't have log files. They are generally free software (and use free-software libraries), so you could study their source code and improve it. Try to interact with their communities (and use strace, as I commented).
BTW, not every GUI application runs commands. Some do (IDEs, for example, do fork commands like gcc), but others make syscalls directly (a file manager probably won't fork an mv but will instead copy file contents itself or call the rename(2) syscall).
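For example, to watch which programs a GUI forks and which files it renames or removes (the syscall list is just a starting point):
strace -f -e trace=execve,rename,unlink,openat dolphin 2> dolphin.trace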

Is a core dump executable by itself?

The Wikipedia page on Core dump says
In Unix-like systems, core dumps generally use the standard executable
image-format:
a.out in older versions of Unix,
ELF in modern Linux, System V, Solaris, and BSD systems,
Mach-O in OS X, etc.
Does this mean a core dump is executable by itself? If not, why not?
Edit: Since @WumpusQ.Wumbley mentions a coredump_filter in a comment, perhaps the above question should be: can a core dump be produced such that it is executable by itself?
In older Unix variants, the default was to include the text as well as the data in the core dump, but that dump was in the a.out format, not ELF. Today's default behavior (in Linux for sure; not 100% sure about BSD variants, Solaris, etc.) is to produce the core dump in ELF format without the text sections, but that behavior can be changed.
However, a core dump cannot be executed directly in any case without some help. The reason is that two things are missing from a simple core file: one is the entry point, and the other is code to restore the CPU state to the state at or just before the dump occurred (by default, the text sections are missing too).
In AIX there used to be a utility called undump, but I have no idea what happened to it. It doesn't exist in any standard Linux distribution I know of. As mentioned in the comments (@WumpusQ), there is also an attempt at a similar project for Linux; however, that project is not complete and doesn't restore the CPU state to the original state. It is, however, still good enough for some specific debugging cases.
It is also worth mentioning that there are other ELF-formatted files that cannot be executed and are not core files, such as object files (compiler output) and .so (shared object) files. Those require a linking stage before being run, to resolve external addresses.
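You can see the difference in the ELF header's type field (the file names are placeholders):
readelf -h ./a.out | grep Type   # Type: EXEC (or DYN for PIE executables)
readelf -h ./core  | grep Type   # Type: CORE (core file)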
I emailed this question to the creator of the undump utility for his expertise, and got the following reply:
As mentioned in some of the answers there, it is possible to include
the code sections by setting the coredump_filter, but it's not the
default for Linux (and I'm not entirely sure about BSD variants and
Solaris). If the various code sections are saved in the original
core-dump, there is really nothing missing in order to create the new
executable. It does, however, require some changes in the original
core file (such as including an entry point and pointing that entry
point to code that will restore CPU registers). If the core file is
modified in this way it will become an executable and you'll be able
to run it. Unfortunately, though, some of the states are not going to
be saved so the new executable will not be able to run directly. Open
files, sockets, pipes, etc. are not going to be open and may even point
to other FDs (which could cause all sorts of weird things). However,
it will most probably be enough for most debugging tasks, such as
running small functions from gdb (so that you don't get "not running
an executable" errors).
As others have said, I don't think you can execute a core dump file without the original binary.
In case you're interested in debugging the binary (and it has debugging symbols included, in other words it is not stripped), then you can run gdb binary core.
Inside gdb, you can use the bt command (backtrace) to get the stack trace from the moment the application crashed.
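For example (the binary and core file names are placeholders):
gdb ./myprog ./core
(gdb) bt full         # backtrace with local variables
(gdb) info registers  # CPU state at the moment of the crash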
