Replace a file temporarily during an sh script run - linux

I use an sh script to start an application in the background after setting several environment variables. I use temporary variables and LD_LIBRARY_PATH to start binaries from different locations. The problem is that the application loads one *.so file from a hardcoded path which I cannot change. Currently I solve this manually by replacing the hardcoded file location with a symbolic link.
Can you tell me if there is a clean solution to this from the sh script? Basically, what I want is that a certain file location is switched to a different binary only for open() calls from the application the script starts; for all other processes it should stay the same.
Regards.

Example methods to replace open("/the/original") with open("/some/other"):
One simple method is to modify the pathname inside a copy of the executable.
First copy the original executable to something like "modified" and open the copy
with a binary editor such as bvi (similar to vim).
Consider two cases, depending on the length of the new vs. the original pathname:
when new-length <= original-length, overwrite the original pathname in place;
when new-length > original-length, create a short symlink that references
the new, longer pathname as in the example below, then overwrite the original pathname
with the short symlink pathname:
ln -s /full/path/some/new/file /shorter
In both cases, remember to terminate the new pathname with a trailing NUL byte.
After saving the changes in bvi, test the newly copied and altered executable.
Once "modified" works as planned, you could also rename it for convenience:
mv xyz xyz.orig
mv modified xyz
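A quick sanity check after the edit (assuming the strings utility is available) is to confirm the replacement pathname now appears in the binary; /shorter here is the example symlink path from above:
strings xyz | grep -F /shorter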
Another method satisfies a transparency requirement: the executable itself is never touched.
Create a dynamic library (e.g. custom.so) with a wrapper routine for
open() which conditionally replaces the pathname and then calls the real
libc open(). Run the unmodified original executable (xyz) with an extra
environment variable, e.g.:
LD_PRELOAD=/path/to/custom.so xyz
There are some tradeoffs between versatility and modest complexity:
the original xyz is left unchanged and can always be run with or without LD_PRELOAD;
some might consider the added overhead of a wrapper undesirable;
and it doesn't work with statically linked or set-uid executables.
Many articles provide instructions for creating a preload library that re-uses
an original symbol such as open() (the frequent example is malloc()): call dlsym() once to find the regular libc open(), save the result in a function pointer, and call libc open() indirectly through it.
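To make that concrete, here is a minimal sketch of such a preload library. The file name custom.c, the two paths, and the build command are illustrative assumptions, not taken from the question:

/* custom.c -- minimal sketch of a preload wrapper for open().
 * Build (typical): gcc -shared -fPIC -o custom.so custom.c -ldl
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <string.h>
#include <sys/types.h>

#define ORIG_PATH "/the/original"   /* hardcoded path the application opens */
#define NEW_PATH  "/some/other"     /* file we actually want it to open */

int open(const char *pathname, int flags, ...)
{
    static int (*real_open)(const char *, int, ...);
    mode_t mode = 0;

    if (!real_open)                        /* look up libc's open() once */
        real_open = dlsym(RTLD_NEXT, "open");

    if (flags & O_CREAT) {                 /* a mode argument is only passed with O_CREAT */
        va_list ap;
        va_start(ap, flags);
        mode = va_arg(ap, mode_t);
        va_end(ap);
    }

    if (strcmp(pathname, ORIG_PATH) == 0)  /* swap the hardcoded pathname */
        pathname = NEW_PATH;

    return real_open(pathname, flags, mode);
}

It is then used exactly as shown above with LD_PRELOAD.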

Related

Flag to Create Missing Directories During fs.promises.writeFile

As I review these file system flags, am I correct in concluding that there is no flag you can pass to fs.promises.writeFile that will automatically create all missing directories leading up to the filename? If I'm wrong, which flag does this?
I don't like solutions that check for the existence of the folders before attempting writeFile, because once the folders are created, that check still happens on every write to a file in that folder.
In my program, after the folders are created once, they should always be there, so it seems more efficient to only create the folders if there is an exception. However, I'm hoping there is a flag that avoids all this micro-management.
If a flag for auto-creating the folders doesn't exist for writeFile, then I'd like to attempt writeFile first, and then (only if there is an exception) create the folders recursively.
fs.promises.writeFile() does not automatically create the directory structure for you. That must exist first.
If you want to automatically create the path because you received an error indicative of a path problem, you can use fs.promises.mkdir() and pass the { recursive: true } option.
And you could, of course, create your own wrapper function that calls fs.promises.writeFile(), and if it fails with whatever error you get when the path doesn't exist (you'd have to test to see exactly what that error is), calls fs.promises.mkdir() and then repeats the fs.promises.writeFile(). It could all be wrapped in your own utility function.
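A minimal sketch of such a wrapper (the function name is invented, and it assumes the missing-path error is the usual ENOENT code, which you should verify in your own tests):

const fs = require('fs').promises;
const path = require('path');

// Hypothetical helper: try the write first, create the folders only on failure.
async function writeFileMakingDirs(file, data, options) {
    try {
        return await fs.writeFile(file, data, options);
    } catch (err) {
        if (err.code !== 'ENOENT') throw err;                     // some other problem: rethrow
        await fs.mkdir(path.dirname(file), { recursive: true });  // create the missing folders
        return fs.writeFile(file, data, options);                 // retry once
    }
}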

How to share a variable between 2 pyRevit scripts?

I am using the latest version of pyRevit, v45.
I'm writing some info in temporary files with
myTempFile = script.get_instance_data_file("id")
This creates a file named pyRevit_2018_xxxx_id.tmp in which I store useful info. If I'm not mistaken, the "xxxx" part changes every time I reload Revit. Now, I need to access this information from another pyRevit script.
How can I retrieve the name of the temp file I need to read? In other words, how do I access "myTempFile" from within the second script, which has no idea of the name of "myTempFile"?
I guess I can somehow share that variable between my scripts, but what's the proper way to do this? I know this must be a very basic programming question, but I'm indeed not a programmer ;)
Thanks a lot,
Arnaud.
Ok, I realise now that my variables in the 1st script cease to exist after its execution.
So for now I wrote the file name into another file, whose name I do know. That works.
But if there's a cleaner way to do this, I'd be glad to learn ;)
Arnaud
The pyrevit.script module provides 4 different methods for creating temporary files, based on their use case:
get_instance_data_file:
for data files marked with Revit instance pid. This means that scripts running on another instance will not see this temp file.
http://pyrevit.readthedocs.io/en/latest/pyrevit/script.html#pyrevit.script.get_instance_data_file
get_universal_data_file:
for temp files accessible to all Revit instances and versions
http://pyrevit.readthedocs.io/en/latest/pyrevit/script.html#pyrevit.script.get_universal_data_file
get_data_file:
base method to get a standard temp file for the current Revit version
http://pyrevit.readthedocs.io/en/latest/pyrevit/script.html#pyrevit.script.get_data_file
get_document_data_file:
for temp files marked with the active document (so scripts working on another document will not see this file)
http://pyrevit.readthedocs.io/en/latest/pyrevit/script.html#pyrevit.script.get_document_data_file
Each method uses a pattern to create the temp file name, so as long as the call to the method is the same in different scripts, the method generates the same file name.
Example:
Script 1:
from pyrevit import script
tfile = script.get_data_file('mydata')
Script 2:
from pyrevit import script
tempfile = script.get_data_file('mydata')
In this example tempfile and tfile refer to the same file, since the file id ('mydata') is the same.
There is documentation on each so make sure you take a look at those and pick the flavor that serves your purpose.

When I create a Temporary File/Directory, when will it be removed?

Julia contains a number of methods for making temporary files and directories.
I'm making fairly heavy use of them (and /dev/shm) to interface with libraries that really want to work with actual files (JLD/HDF5, and OpenStack Swift).
I had been assuming they would be deleted when the finalisers on the pointers to their names were called.
But then, after exiting Julia, it seemed like they were all still there.
Will Linux delete them?
If the app didn't clean up after itself, the OS will eventually delete the files. When temp files are deleted depends on system settings; for example, it can happen on boot or nightly (via a cron job) or in some other way.
See this answer, for example: How is the /tmp directory cleaned up?
What you are likely looking for, given your surprise that they were not removed on going out of scope, are the do-block versions of mktemp and mktempdir, in the very documentation you linked.
mktemp(f::Function[, parent=tempdir()])
Apply the function f to the result of mktemp(parent) and remove the temporary file upon completion.
mktempdir(f::Function[, parent=tempdir()])
Apply the function f to the result of mktempdir(parent) and remove the temporary directory upon completion.
Which you can use like:
mktempdir("/dev/shm") do tdir
fname = joinpath(tdir, name)
#Do some things with your new temp filename `fname` in your tempdir `tdir`
end
#the directory referenced by `tdir`, and `fname`, have now been deleted.

Reversing/Debugging - Identifying symlinks in applications

I was wondering, are there any guidelines for identifying symlink-related functions in an application binary?
Let's take BusyBox for example: /bin/ping is a symlink to /bin/BusyBox.
How can I identify the ping-related functions within the BusyBox binary?
Thanks in advance :)
You can't generally do that.
In the case of BusyBox, upon startup it checks which command line was used to invoke the binary (including the path to the binary itself). It then calls the functions that provide the selected functionality, based on the basename of the binary / symlink.
Again in the case of BusyBox, most of the time the function names are closely related to the command name. But this is basically just coincidence: it could well be that someone created an executable "A" that calls a function "X" when started via a symlink named "B" and a function "Y" when called as "C".
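As a rough illustration of that dispatch (this is not BusyBox's actual code; the applet names and functions here are invented):

#include <libgen.h>
#include <stdio.h>
#include <string.h>

/* Invented stand-ins for the per-command entry points. */
static int ping_main(int argc, char **argv) { (void)argc; (void)argv; return 0; }
static int ls_main(int argc, char **argv)   { (void)argc; (void)argv; return 0; }

int main(int argc, char **argv)
{
    const char *name = basename(argv[0]);  /* "ping" when run via the /bin/ping symlink */

    if (strcmp(name, "ping") == 0)
        return ping_main(argc, argv);
    if (strcmp(name, "ls") == 0)
        return ls_main(argc, argv);

    fprintf(stderr, "%s: applet not recognised\n", name);
    return 1;
}

When reversing the real binary, locating the equivalent lookup on argv[0] is therefore the natural starting point for finding the ping-specific code.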

build variable in Linux kernel Makefile

I am currently trying to understand the build process of the Linux kernel. While looking through the Makefiles, I found several rules of the form
scripts_basic:
	$(Q)$(MAKE) $(build)=scripts/basic
	$(Q)rm -f .tmp_quiet_recordmcount
which all recursively invoke other make processes and pass along the directory to process. At the same time, there seems to be a variable being passed that indicates what to do with the subdirectory (the $(build) part).
Looking at the make process, as far as I can see, this always seems to be obj; I cannot find any other value for this variable so far during the make process. Also, I cannot find any place where this variable is set.
So what exactly is this variable for, and how is it used (e.g., where is it set and processed)?
Not exactly. The relevant bit is in scripts/Kbuild.include, where it says
build := -f $(if $(KBUILD_SRC),$(srctree)/)scripts/Makefile.build obj
What this means is that if $(KBUILD_SRC) is not empty, the path to scripts/Makefile.build is given as an absolute path (or at least with a path that can be found from the working directory) by prepending the path to the top of the kernel source tree. As far as I can tell, this is to make the sub-makes all use the same Makefile and avoid having the same make code several dozen times.
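With that definition, the trailing obj joins up with the =scripts/basic supplied by the caller, so the rule quoted above effectively runs (for an in-tree build, where KBUILD_SRC is empty):

make -f scripts/Makefile.build obj=scripts/basic

and, for an out-of-tree build with KBUILD_SRC set:

make -f $(srctree)/scripts/Makefile.build obj=scripts/basic

scripts/Makefile.build then reads the obj variable to know which subdirectory to process.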
