Does "strings" against a binary list all usable variables? - string

I've had a request from a vendor to set a specific environment variable for their software. I'm currently awaiting an explanation of what it actually does. In the meantime, I decided to check exactly which environment variables the binary references, using "strings" (on Solaris in this case). It doesn't list the one they're talking about, though.
I think this means that the setting they're asking for isn't actually picked up in any way by the binary in question (or any of that vendor's binaries - I checked through the lot of them). However, I'm unsure, and I can't find an answer to whether running "strings" against a compiled binary will list all of the environment variables it can pick up and use from the OS.
Can anyone help to confirm this?
Thanks in advance.

The fact that the variable name does not appear as a readable string in the binary does not guarantee that the program does not get its value. The environment variable name may, for example, be constructed at runtime by concatenating substrings.

Related

Why do MacOS apps store a lot of strings?

Why is it that on MacOS my application has lots of strings in the executable? It's a bunch of binary non-human-readable nonsense, and then I see a bunch of function and variable names, type names like NSString and other NS-something strings, and a lot of OBJC-something IIRC. But why? Why does it store all that, other than to bloat the executable size?
I can't answer this specifically for MacOS binaries, but there are generally two reasons for a binary to contain things like function names:
It allows someone to debug the program. Without these symbols, every variable, function, and stack-trace entry would show up as a raw address rather than a useful name.
It allows, in some cases, other processes to call functions inside the binary. If the function name is not encoded, it's not possible for another program to look that function up by name (because it doesn't have one).
It's often possible to create binaries stripped of debug symbols, which reduces the binary size.

How to obfuscate string of variable, function and package names in Golang binary?

When I use the command "nm go_binary", the names of variables, functions and packages, and even the directory where my code is located, are all displayed. Is there any way to obfuscate the binary generated by "go build" and prevent the Go binary from being exploited by hackers?
Obfuscation can't stop reverse engineering, but it can in a way prevent information leakage.
That is what burrowers/garble (Go 1.16+, Feb. 2021) provides:
Literal obfuscation
Using the -literals flag causes literal expressions such as strings to be replaced with more complex variants, resolving to the same value at run-time.
This feature is opt-in, as it can cause slow-downs depending on the input code.
Literal expressions used as constants cannot be obfuscated, since they are resolved at compile time. This includes any expressions part of a const declaration.
Tiny mode
When the -tiny flag is passed, extra information is stripped from the resulting Go binary.
This includes line numbers, filenames, and code in the runtime that prints panics, fatal errors, and trace/debug info.
All in all this can make binaries 2-5% smaller in our testing, as well as prevent extracting some more information.
With this flag, no panics or fatal runtime errors will ever be printed, but they can still be handled internally with recover as normal.
In addition, the GODEBUG environment variable will be ignored.
But:
Exported methods are never obfuscated at the moment, since they could be required by interfaces and reflection. This area is a work in progress.
I think the best answer to this question is here How do I protect Python code?, specifically this answer.
While that question is about Python, it applies to all code in general.
I was gonna mark this question as a duplicate, but maybe someone will provide more insight into it.

Why Spreadsheet::XLSX parses dates inconsistently on different machines?

I created a Perl script reading information from an XLSX sheet. Since on one machine it worked well, and on another it did not, I included a short debug section:
$sheetdate = $sheet->{Cells}[0][$sheet->{MaxCol}]->value();
print "value: $sheetdate\n";
$sheetdate = $sheet->{Cells}[0][$sheet->{MaxCol}]->get_format();
print "getformat: $sheetdate\n";
On one machine it printed:
value: 2016-01-18
getformat: yyyy-mm-dd
While on the other:
value: 1-18-16
getformat: m-d-yy
Same script, same worksheet, different results. I believe that something in the environment makes the difference, but I do not know what exactly.
Any hints?
"Same script, same worksheet, different results. I believe that something in the environment makes the difference, but I don not know what exactly."
You sort-of indicate here yourself that you're not really seeking the solution to a perl or XLSX problem so much as some assistance with troubleshooting your environment.
Without access to the environment it's difficult to offer a solution per se, but I can say this - you need to:
1) Re-arrange things so that you do get the same result from both environments;
2) Identify a list of differences between the original, problem environment and the one that now "works"; and
3) Modify one thing on the list at a time - moving towards the environment that works - checking each time until it becomes clear what the key variable (not in a programming sense) is.
With regards to (1), take a look at Strawberry Perl. Using Strawberry, it's relatively easy to set up what some call "Perl on a stick" (see the Portable ZIP edition) - a complete perl environment on a USB stick. Put your document on the same USB stick and then try the two environments, this time with absolute certainty of having the same perl. If different results persist, try booting from a "live environment" DVD (Linux or Windows as appropriate) and then using the USB stick.
Ultimately, I'd suggest there's something (such as a spreadsheet template) at play that differs between the environments. You just need to go through a process of elimination to find out what it is.
With the benefit of hindsight, I think it's worth revisiting this to produce a succinct answer for those who come across this problem in the future.
The original question was how could a perl script produce two different results when the excel data file fed into it is identical (which was confirmed with MD5 checksums). As programmers, our focus tends to be on the scripts we write and the data that goes into them. What slips to the back of the mind is the myriad of ways that perl itself can be installed and configured.
The three things that should assist in determining where the difference between two installs lies are:
(1) Use strawberry perl on a stick as described above to take the environment out of the equation and thereby (if the problem "disappears") confirm that the problem is something to do with the environment.
(2) Use Data::Dumper liberally throughout to find where the flow of execution "forks."
(3) Compare the output of perl -V (note capital V) to find out if there are differences in how the respective perls were built and configured.
The root cause of the problem was an outdated Spreadsheet::XLSX CPAN module installed as an RPM from the distribution repository. The RPM included version 0.13 of the module, while CPAN already had version 0.15, and as of version 0.14 the module's behaviour had changed in this particular respect. Once I replaced the pre-packaged module with the version downloaded directly from CPAN and built locally, the problem was solved.

bash - print env vars in the order they were set

This question pertains to the bash shell
First off, I know how to look at the env vars that are currently set.
I want to know how to list the currently set environment variables in the order they were set. Kind of like "ls -lt" but for env vars.
Is this possible?
EDIT: many were asking why I need this.
I do a lot of debugging, code porting, fixing, etc. It requires me to experiment with third-party code that is not always well written. During the process of getting to a successful build, I might need to set or overwrite some env vars. I am pretty good at documenting what I am doing so I can retrace my steps, but sometimes I forget or fail to record a step.
For very good reasons, our env has a ton of env vars.
I can capture the entire set of env vars at a given moment, but that doesn't help me much. If bash had a way to list env vars in the order they were set, I could clearly identify what I had set.
Also, I agree that there is no reason for bash to track this. But I was hoping it keeps an internal stack of env vars, which would automatically be ordered last-in-first-out. I guess that was just too optimistic.
thanks to everyone.
As @pmos suggested in a comment, you might be able to hack together a shell function that manually tracks when you export something, but the shell itself cannot do this. Here's why. export makes a name available to the environment, and the environment is only meaningful to the exec*e family of functions. In other words, export really only matters to new processes created by the standard fork/exec pattern. But this also means the data structure holding the exported names is not up to the shell; it's defined by POSIX C. Here's a fragment of the documentation about exec environments:
The argument envp is an array of character pointers to null-terminated strings. These strings shall constitute the environment for the new process image. The envp array is terminated by a null pointer.
and
extern char **environ; is initialized as a pointer to an array of character pointers to the environment strings.
It might seem reasonable to assume that processes add strings to the environment in order, but it doesn't really seem to work that way in fact, and POSIX systems being as complex as they are, it's not surprising they do a lot of setting, resetting and unsetting.
Despite your question focusing on environment variables, your phrasing makes me think you're also interested in tracking when variables get set, which is different from when they get exported. That actually is entirely the shell's problem, but alas, bash (at least) seems not to track this either.
set seems to display the names in alphabetical order. I can't even figure out what ordering the external env command displays them in.
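The manual-tracking hack mentioned at the top can be sketched as a pair of bash functions (the names `track_export` and `list_exports_in_order` are mine, not a bash feature): wrap every export in a logger and replay the log when needed.

```shell
# Log each exported name in the order it was set; replay the log on demand.
EXPORT_LOG=()

track_export() {
    local assignment
    for assignment in "$@"; do
        EXPORT_LOG+=("${assignment%%=*}")   # record the name, in order
        export "$assignment"
    done
}

list_exports_in_order() {
    printf '%s\n' "${EXPORT_LOG[@]}"
}

track_export CC=gcc
track_export CFLAGS=-O2 LDFLAGS=-s
list_exports_in_order    # prints CC, CFLAGS, LDFLAGS in set order
```

The obvious limitation is discipline: anything exported without going through `track_export` (by a sourced script, say) never makes it into the log.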

Can a LabVIEW VI tell whether one of its output terminals is wired?

In LabVIEW, is it possible to tell from within a VI whether an output terminal is wired in the calling VI? Obviously, this would depend on the calling VI, but perhaps there is some way to find the answer for the current invocation of a VI.
In C terms, this would be like defining a function that takes arguments which are pointers to where to store output parameters, but will accept NULL if the caller is not interested in that parameter.
As was said, you can't do this natively, but there's a workaround using data value references (requires LV 2009). It is the same idea as passing a NULL pointer for an output argument: the result is passed in as a data value reference (the pointer), which the subVI checks for Not a Reference. If it is null, the subVI does nothing.
Here is the SubVI (case true does nothing of course):
And here is the calling VI:
Images are VI snippets so you can drag and drop on a diagram to get the code.
I'd suggest you're going about this the wrong way. If the compiler is not smart enough to avoid the calculation on its own, make two versions of this VI: one that does the expensive calculation and one that does not. Then make a polymorphic VI that lets you switch between them. You already know at design time which version you want (because you're either wiring the output terminal or not), so just use the correct instance of the polymorphic VI.
Alternatively, pass in a variable that switches on or off a Case statement for the expensive section of your calculation.
Like Underflow said, the basic answer is no.
You can have a look here to get what is probably the most official and detailed answer which will ever be provided by NI.
Extending your analogy, you can do this in LV, except LV doesn't have the concept of null that C does. You can see an example of this here.
Note that the code in the link Underflow provided will not work in an executable, because the diagrams are stripped by default when building an EXE and because the RTE does not support some of the properties and methods used there.
Sorry, I see I misunderstood the question. I thought you were asking about an input, so the idea I suggested does not apply. The restrictions I pointed do apply, though.
Why do you want to do this? There might be another solution.
Generally, no.
It is possible to do a static analysis on the code using the "scripting" features. This would require pulling the calling hierarchy, and tracking the wire references.
Pulling together a trial of this, there are some difficulties. Multiple identical subVIs on the same diagram are difficult to distinguish, and terminal references appear to be accessible mostly by name, which can lead to collisions with identically named terminals of other VIs.
NI has done a bit of work on a variation of this problem; check out this.
In general, the LV compiler optimizes the machine code in such a way that unused code is not even built into the executable.
This does not apply to subVIs (because there's no way of knowing that you won't try to use the value of the indicators somehow, although LV could do it if it removes the FP when building an executable, and possibly does), but there is one way you can get it to apply to a subVI: inline the subVI, which should allow the compiler to see that the outputs aren't used. You could also set its priority to subroutine, which might have the same effect, but I wouldn't recommend that.
Officially, inlining is only available in LV 2010, but there are ways of accessing the private VI property in older versions. I wouldn't recommend that, though, and it's likely that 2010 has some optimizations in this area that older versions did not.
P.S. In general, the details of the compiling process are not exposed and vary between LV versions as NI tweaks the compiler. The whole process is supposed to have been given a major upgrade in LV 2010 and there should be a webcast on NI's site with some of the details.
