Time virtualisation on Linux

I'm attempting to test an application that has a heavy dependency on the time of day. I would like the ability to execute the program as if it were running in normal (not accelerated) time, but at an arbitrary date/time.
My first thought was to abstract the time-retrieval calls behind my own library functions, which would let me alter the behaviour for testing, but I wondered whether it would be possible without adding conditional logic to my code base or building a test variant of the binary.
What I'm really looking for is some kind of localised time domain. Is this possible with a container (like Docker) or by using LD_PRELOAD to intercept the calls?
I also saw a patch that allowed time to be disconnected from the system time using unshare(COL_TIME), but it doesn't look like it ever got merged.
It seems like a problem that must have been solved numerous times before; is anyone willing to share their solution(s)?
Thanks
AJ

Whilst alternative solutions and tricks are great, I think you're severely overcomplicating a simple problem. It's perfectly common and acceptable to include certain command-line switches in a program for testing/evaluation purposes. I would simply include a command-line switch that accepts an ISO timestamp:
./myprogram --debug-override-time=2014-01-01T12:34:56Z
Then at startup, if the switch is set, subtract it from the current system time to get an offset, and provide a local apptime() function that corrects the regular system time by that offset; call that everywhere in your code instead.
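A minimal sketch of that idea in C, assuming the override is interpreted as UTC; the parsing helper and the apptime() name are just illustrations of the approach, not code from the original answer:

#define _GNU_SOURCE          /* for strptime() and timegm() */
#include <time.h>

/* Offset between the requested test time and the real system time,
 * computed once at startup. Zero means "no override". */
static time_t time_offset = 0;

/* Call once at startup with the value of --debug-override-time,
 * e.g. "2014-01-01T12:34:56Z". Returns 0 on success. */
static int set_time_override(const char *iso)
{
    struct tm tm = {0};
    if (strptime(iso, "%Y-%m-%dT%H:%M:%SZ", &tm) == NULL)
        return -1;
    time_offset = timegm(&tm) - time(NULL);
    return 0;
}

/* Use this everywhere in the application instead of time(NULL). */
static time_t apptime(void)
{
    return time(NULL) + time_offset;
}

With that in place, the rest of the code base never needs to know whether it is running against the real clock or a test clock.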
The big advantage of this is that anyone can reproduce your test results without a big read-up on custom Linux tricks, including an external testing team or a future co-developer who's good at coding but not at runtime tricks. When (unit) testing, it's a major advantage to be able to call your code with a simple switch and test the results for equality against a sample set.
You don't even have to document it; lots of production tools in enterprise-grade products have hidden command-line switches for this kind of behaviour that the 'general public' need not know about.

There are several ways to query the time on Linux. Read time(7); I know of at least time(2), gettimeofday(2), and clock_gettime(2).
So you could use LD_PRELOAD tricks to redefine each of these so that, for example, a fixed number of seconds (given by some environment variable) is subtracted from the seconds part (not the microsecond or nanosecond part). See this example as a starting point.
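A rough sketch of what such an interposed library could look like; the FAKETIME_OFFSET variable name and the overall shim are assumptions for illustration, only the intercepted libc functions are real:

#define _GNU_SOURCE          /* for RTLD_NEXT */
#include <dlfcn.h>
#include <stdlib.h>
#include <time.h>

/* Offset in whole seconds, read once from an environment variable. */
static long offset(void)
{
    static long cached;
    static int ready;
    if (!ready) {
        const char *s = getenv("FAKETIME_OFFSET");
        cached = s ? atol(s) : 0;
        ready = 1;
    }
    return cached;
}

time_t time(time_t *t)
{
    time_t (*real_time)(time_t *) = (time_t (*)(time_t *))dlsym(RTLD_NEXT, "time");
    time_t now = real_time(NULL) + offset();
    if (t)
        *t = now;
    return now;
}

int clock_gettime(clockid_t clk, struct timespec *ts)
{
    int (*real_cgt)(clockid_t, struct timespec *) =
        (int (*)(clockid_t, struct timespec *))dlsym(RTLD_NEXT, "clock_gettime");
    int rc = real_cgt(clk, ts);
    if (rc == 0 && clk == CLOCK_REALTIME)
        ts->tv_sec += offset();   /* shift only the seconds part */
    return rc;
}

gettimeofday(2) can be wrapped the same way. Something like gcc -shared -fPIC -o libfaketime.so faketime.c -ldl and then FAKETIME_OFFSET=-86400 LD_PRELOAD=./libfaketime.so ./myprogram would run the program a day in the past; note that this only catches dynamically linked callers going through libc, not statically linked binaries or raw syscalls.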

Related

Running a brief asm script inline for dynamic analysis

Is there any good reason not to run a brief unknown (30-line) assembly script inline in a user-mode C program for dynamic analysis, directly on my laptop?
There's only one system call, to time, and at this point I can tell that it's a function that takes a C string and its length and performs some sort of encryption on it in a loop, which only iterates through the string for as many bytes as the length argument specifies.
I know that the script is (supposed to be) from a piece of malicious code, but for the life of me I can't think of any way it could possibly pwn my computer barring some sort of hardware bug (which seems unlikely given that the loop is ~ 7 instructions long and the strangest instruction in the whole script is a shr).
I know it sounds bad running an unknown piece of assembly code directly on the metal, but given my analysis up to this point I can't think of any way it could bite me or escape.
Yes, you can, but I wouldn't recommend it.
The problem is not how dangerous the code is this time (assuming you really understand all of the code and can predict the outcome of every system call); the problem is that it's a slippery slope, and it's not worth it considering what's at stake.
I've done quite a few malware analyses, and it rarely happened that a piece of code caught me off guard, but it did happen.
Luckily I was working on a virtual machine within an isolated network: I just restored the last snapshot and stepped through the code more carefully.
If you do this analysis on your real machine you may get into the habit, and one day it will bite you back.
Working with VMs, albeit not as comfortable as using your OS's native GUI, is the way to go.
What could go wrong with running a 7-line assembly snippet?
I don't know, it really depends on the code, but here are a few things to be careful about:
Exceptions. An instruction may intentionally fault to pass control to an exception handler. This is why it is very important that you totally understand the code: both the instructions and the data.
System call exploits. A specially crafted input to a system call may trigger a 0-day or an unpatched vulnerability in your system. This is why it is important that you can predict the outcome of every system call.
Anti-debugger techniques. There are a lot of ways a piece of code can escape a debugger (I'm thinking of Windows debugging here); it's hard to remember them all, so be suspicious of everything.
I've just named a few. It is also possible, though catastrophic, that a hardware bug could lead to privileged code execution, but if that's really a possibility then nothing but a spare, sacrificeable machine will do.
Finally, if you are going to run the malware up to a breakpoint on your machine (because I assume extracting the code and its context is too much of a burden), think of what's at stake.
If you place the breakpoint in the wrong spot, if the malware takes another path, or if the debugger has a glitchy GUI, you may lose your data or the confidentiality of your machine.
In my opinion it's not worth it.
I had to make this point for generality's sake, but we all sin a little, don't we?
I've never run a piece of malware on my machine, but I've stepped through some with a virtual machine directly connected to the company network.
It was a controlled move, nothing happened, the appropriate personnel were advised, and it was a happy ending.
This may very well be your case: it can just be a decryption algorithm and nothing more.
However, only you have the final responsibility to judge if it is acceptable to run the piece of code or not.
As I remarked above, in general it is not a good idea and it presupposes that you really understand the code (something that is hard to do and be honest about).
If you think these prerequisites are all satisfied then go ahead and do it.
Before that I would:
Create an unprivileged user and deny it access to my data and common folders (ideally deny it everything except what is necessary to make the program work).
Back up the critical data, if any.
Optionally
Make a restore point.
Take a hash of the system folders, a list of installed services, and the values of the usual startup registry keys (Sysinternals has a tool to enumerate them all).
After the analysis, you can check that nothing important system-wide has changed.
It may be helpful to subst a folder and put the malware there so that a dummy path traversal stops in that folder.
Isn't there a better solution?
I like using VMs for their snapshotting capabilities, though you may stumble into an anti-VM check (but they are really dumb checks, so it's easy to skip them).
For a 7-line assembly snippet I'd simply rewrite it as a JS function and run it directly in a browser console.
You can simply turn each register into a variable and transcribe the code; you don't need to understand it globally, only locally (i.e. each instruction).
JS is handy if you don't have to work with 64-bit quantities, because you already have an interpreter in front of you :)
Alternatively, I use any programming language I have at hand (once even assembly itself; it seems paradoxical, but due to a nasty trick I had to convert a 64-bit piece of code to a 32-bit one and patch the malware with it).
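As an illustration of that register-to-variable transcription, here is a hedged sketch in C; the three instructions being transcribed are made up for the example and are not the script from the question:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical loop body:
 *     xor  byte [esi + ecx - 1], 0x5A
 *     shr  ecx, 1
 *     jnz  loop_body
 * transcribed instruction by instruction, one register per variable. */
static void transcribed(uint8_t *esi, uint32_t ecx)
{
    do {
        esi[ecx - 1] ^= 0x5A;   /* xor byte [esi + ecx - 1], 0x5A */
        ecx >>= 1;              /* shr ecx, 1 */
    } while (ecx != 0);         /* jnz: loop while the shifted value is non-zero */
}

int main(void)
{
    uint8_t buf[] = "hello";
    transcribed(buf, 5);
    printf("%02x %02x %02x %02x %02x\n", buf[0], buf[1], buf[2], buf[3], buf[4]);
    return 0;
}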
You can use Unicorn to easily emulate a CPU (if the architecture is supported) and play with your shellcode without any risk.
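For example, a minimal sketch using Unicorn's C API; the code bytes and the register values below are placeholders to show the shape of the setup, not the actual script:

#include <stdio.h>
#include <unicorn/unicorn.h>

/* Placeholder code: "inc ecx; shr ecx, 1" -- substitute the real bytes. */
#define CODE "\x41\xd1\xe9"
#define BASE 0x1000000

int main(void)
{
    uc_engine *uc;
    int ecx = 0x1234;           /* example initial register value */

    if (uc_open(UC_ARCH_X86, UC_MODE_32, &uc) != UC_ERR_OK)
        return 1;
    uc_mem_map(uc, BASE, 2 * 1024 * 1024, UC_PROT_ALL);    /* map 2 MB for the code */
    uc_mem_write(uc, BASE, CODE, sizeof(CODE) - 1);
    uc_reg_write(uc, UC_X86_REG_ECX, &ecx);

    uc_emu_start(uc, BASE, BASE + sizeof(CODE) - 1, 0, 0);  /* emulate to the end of the bytes */

    uc_reg_read(uc, UC_X86_REG_ECX, &ecx);
    printf("ECX = 0x%x\n", ecx);
    uc_close(uc);
    return 0;
}

Build with something like gcc demo.c -lunicorn; the emulated code has no access to your real machine, so you can single-step or dump memory freely.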

How to protect my script from being copied and modified?

I created an Expect script for a customer, and I fear that he will customize it however he wants without coming back to me, so I tried to encrypt it, but I didn't find a way to do that.
Then I tried to convert it to an executable, but some commands, like the send command, were not recognized by ActiveTcl, even though the script works perfectly on Red Hat.
So is there a way to protect my script from being read?
Thanks
It's usually enough to just package the code in a form that the user can't directly look inside. Even the smallest of speed bumps stops them.
You can use sdx qwrap to parcel your script up into a starkit. Those are reasonably resistant to random user poking, while being still technically open (the sdx tool is freely available, after all). You can convert the .kit file it creates into an executable by merging it with a packaged runtime.
In short, it's basically like this (with some complexity glossed over):
tclkit sdx.kit qwrap myapp.tcl
tclkit sdx.kit unwrap myapp.kit
# Copy additional assets into myapp.vfs if you need to
tclkit sdx.kit wrap myapp.exe -runtime C:\path\to\tclkit.exe
More discussion is here, the tclkit runtimes are here, and sdx itself can be obtained in .kit-packaged form here. Note that the runtime you use to run sdx does not need to be the same that you package; you can deploy code for other platforms than the one you are running from. This is a packaging phase action, not a compilation or linking.
Against more sophisticated users (i.e., not Joe Ordinary User) you'll want the Tcl Compiler from the ActiveState TclDevKit. Formally it's a code obscurer (it doesn't actually improve the performance of anything) and the TDK isn't particularly well supported any more, but it's the main current solution for commercial protection of Tcl code. I'm on a small team working on a true compiler that will effectively offer much stronger protection, but that's not yet released (and really isn't ready yet).
One way is to keep the essential code running on your server as a back end. Just give the user a front-end application that makes the requests. This way the essential processes are under your control, and the user cannot access that code.

Code instrumentation in Haskell

Suppose I maintain a complex application connected to external systems. One day it starts to return unexpected results for certain input and I need to find out why. It could be a DNS problem, a filesystem-related problem, an external system change, anything.
Assuming that the amount of processing is extensive, before I can identify possible locations of the problem I would need to obtain detailed traces, which the original application does not produce.
How can I instrument existing code so that I can (for example) provide non-volatile proof (not a live debug session) that a certain component or function has a bug?
This sounds like more of an architecture/best practices type question than anything Haskell-specific, unless I'm misunderstanding something.
It sounds like your application needs to use a logging system, such as hslogger. The general approach is to have each component of your code create logging messages with an attached priority. You can then have the application handle different priority levels differently, so for example critical errors could be displayed on the console, while debug and info-level errors go to logfiles.
It's sometimes useful to use Debug.Trace.traceEvent and Debug.Trace.traceEventIO instead of a logging system, particularly if you suspect a concurrency issue, as the GHC eventlog also logs information about thread spawning/switching and garbage collection. But in general it's not a substitute for an actual logging framework.
Also, you may want to make use of assert as a sanity check that "impossible" conditions really don't occur.

Catching resource filename errors at compile time

I'm doing my first iOS app with MonoTouch and I'm loading quite a lot of images from my resources directory. Every now and then I get a typo in a filename and the app will then crash on me, spewing out some unintelligible error message. (I'll try adding deciphering stack traces to my skill set any day now ...)
I was thinking that there must be a smarter way to handle this. For example one could have a utility script that goes through the resources directory and constructs a list of global constants based on its contents. Each file in the resources gets an entry.
So MyResources/Icons/HomeIcon.png would be represented by the constant MyResources.Icons.HomeIcon_png. Then one could have something like inotify (I don't know what the equivalent would be on Mac) watch the resources directory and regenerate the constants file on every change.
This would of course also give nice autocompletion for resources.
Maybe something like this already exists in MonoDevelop or online somewhere? Otherwise, how would I go about setting it up?
Or maybe there's some other smart way of mitigating the problem?
Your primary problem is that typos in resource names are not caught early, and only cause crashes when the app is actually run.
Your proposed solution of a list of global constants generated based on the available resources is kind of neat, but as far as I know this does not exist yet.
In the meantime, you could manually construct this list of global constants, and create a unit test that verifies all the elements in this list are valid resources (by looping through them automatically, of course - adding a resource to the list should not require a change to the test).
This way you can catch typos earlier (when you run the unit test rather than when you run the app), which is your primary concern. Additionally, if you ever find/write the script you envision, your application code is already prepared.
I filed an enhancement bug on Xamarin bugzilla for you: https://bugzilla.xamarin.com/show_bug.cgi?id=3760
So I spent four precious hours cooking up this little Python script that sort of solves my problem. For now it's the best solution to my problem.
http://github.com/oivvio/Monodevelop-Resources-as-Constants

Convert MFC Doc/View to?

My question will be hard to form, but to start:
I have an MFC SDI app that I have worked on for an embarrassingly long time, and it never seemed to fit the Doc/View architecture; i.e., there isn't anything useful in the Doc. It is multi-threaded and I need to do more with threading, etc.
I dream about also porting it to Linux X Windows, but I know nothing about that programming environment as yet. Maybe Mac also.
My question is where to go from here?
I think I would like to convert from MFC Doc/View to straight Win API stuff with message loops and window procedures, etc. But the task seems to be huge.
Does the Linux X Windows environment use a similar kind of message loop, window procedure architecture?
Can I go part way? Like convert a little at a time without rendering my program unusable for long periods of work?
Added later:
My program is a file compare program (sounds simple enough). So, stating my confusion in a simple way: normally a document can have multiple views, but in this app I have one view with multiple (two) documents (files). I have a "compare engine" that I first wrote back in the DOS days; it is the heart of the program, and the view is just looking at the output of that routine. Sometimes I think that some of my "view" code could make sense in a "document" class, but I hardly know where to begin to separate it into more classes. I have recently started reading "Programming Windows", 5th Ed., by Charles Petzold (I know it is quite out of date, copyright 1998), hoping to get a better understanding of direct Windows programming.
I get overwhelmed with the proliferation of options like C#, .NET, MFC, MVC, Qt, wxWidgets, etc.
I find I am often stuck trying to understand something going on in the MFC framework because something in my code doesn't work as it seems it should, but the problem is that I don't really understand how MFC is handling things in the background. That is why I am trying to learn "straight Windows programming" where my program has all the message passing code that I write. I hope this helps give enough insight into my question so someone can guide me on my way.
X works enough differently that a raw Windows program and a raw X program probably wouldn't be able to share much UI code at all.
If you want portability between the two, chances are pretty good that you want to use something like Qt or wxWidgets. Of the two, wxWidgets is more similar to MFC, so it would probably require less rewriting, but would maintain (more or less) the same "disconnect" you're seeing between what you want and what it provides.
Without knowing more about your application, and why it doesn't fit well with MFC, it's impossible to guess whether Qt would be a better fit or not. An immediate guess would be "probably not".
MFC uses a "document/view" architecture, whereas Qt uses the original Model-View-Controller architecture. For the most part, MFC's Document class is basically a Model and a Controller rolled into one -- so if your Document contains nothing useful, in Qt you'd apparently have both a Model and a Controller, neither of which did much that was useful.
That said, I have to raise a question about why your Document currently doesn't do much. The MVC pattern has proven applicable to a wide variety of problems, so while it's possible it can't work well for your problem, it's also possible that it could work well, and you're simply not using it. Without knowing more about what you're doing, it's impossible to even guess at that though.
Edit: Okay, the clarification helps quite a bit. The first thing to realize is that a Document does not necessarily equate to a file. Quite the contrary, a document can perfectly reasonably relate to an arbitrary number of files.
Just for example, consider a web browser. All the data needed to compose the page it's currently displaying would reasonably be part of the same document. Depending on your viewpoint, that's either zero files, or a whole bunch of them (it will start as an arbitrary number of files coming from the server(s), but won't necessarily be stored as files locally at all). Storing any of it as a file locally will be a (more or less) accidental by-product of caching, and mostly unrelated to browsing per se.
In your case, you're presumably reading the two (or three?) files into memory and storing them along with some sort of data structure to hold the result of the comparison. After the comparison is complete, you might or might not discard the contents of the files themselves. I think it's safe to say that the "normal" separation of responsibilities would be for that data and the code that produces that data to be in the Document.
The View should contain only the code to take that result from that data structure, and display it on screen. Nearly the only data you normally want to store in the View would be things related to how the data is presented (e.g., things like a zoom level or current scroll position). Likewise, the code in the view should relate only to displaying the result and reacting to user input, NOT to "creating" the data in the first place.
As such, I think your program could be rewritten to use the Document/View pattern more effectively, or could be rewritten to use MVC. That, in turn, means a port to Qt could/would probably work just fine -- provided you're willing to put some time and effort into understanding how it's intended to work and then make what may be fairly substantial changes to your code to work the way it's designed to.
As I commented previously, wxWidgets is more like MFC in this respect -- it uses a Document and View, not a Model, View, and Controller. It's also going to work best if you do some rewriting to separate responsibilities the way it's designed for. The good point is that it's probably a bit easier to do that one step at a time: rewrite the code in MFC, with which you're already familiar, and then port it to wxWidgets -- but given the similarity between the two, that "port" will probably be little more than minor editing -- often changing some names from C* to wx* is just about enough. To my recollection, the only place I've run into much work was in creating menus -- with MFC they're normally handled via resources, but (at least a few years ago when I used it) wxWidgets normally directly exposed the code that created the menu entries.
Porting to Qt would probably be more work -- you pretty much have to learn a new framework, and substantially reorganize your code at the same time. The good point is that when you're done, the result will probably be somewhat cleaner, though given what you're doing, the difference may be pretty minor. In a Document/View, the View displays data, and reacts to user input. In a Model/View/Controller, the View only displays data, but user input (that modifies the underlying data) goes through the Controller. Since you (presumably) don't expect to modify the underlying data, the only user input involved probably belongs in the view in any case (e.g., things like scrolling). It's barely possible you might have a few things you could put in the Document/Model that would be open to change (e.g., things like the current font or colors the user has selected).
