How to simulate change of system date/time for testing purposes? - linux

I use gitlab-ci to run my test suite, in one test I would like to simulate a
change of system date (like, going to the day after) in order to see if
everything works as expected, for example:
- redis keys are deleted (TTL)
- some files are written with a new 'run number' since the date has changed
- date and time calculations are correct
- ...
How do I simulate change of system date/time for unit-testing purposes?
EDIT: since my code base is in Python, I found freezegun and python-libfaketime; the former can probably do the trick for my Python code, and the latter is more powerful since it intercepts system calls, so in theory I could use it to test redis TTLs as well, provided I start redis after the patches are applied.

You can override gettimeofday and clock_gettime (for example via an LD_PRELOAD shim) to return any time you want. However, that would not affect pthread waiting functions with timeouts.

How about not reading the date from the system at all, and instead providing it from a variable that a script can change?

When writing a (unit) test suite, you need to clearly define the system under test and its boundaries. In this case your application belongs to the system under test; redis does not.
Things which do not belong to the system under test should be mocked. For Python, start here: https://docs.python.org/3/library/unittest.mock.html
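To make this concrete for the original time-freezing question, here is a minimal sketch using unittest.mock, assuming the code under test reads the clock through a small module-level helper (current_time and make_run_number are hypothetical names). Patching the helper, rather than the system clock, keeps the test inside the boundaries of the system under test:

```python
# Freeze "now" by patching a module-level clock helper, then simulate
# the day after by patching it again with a different date.
import datetime
from unittest import mock

def current_time():
    """Production code: read the real clock."""
    return datetime.datetime.now()

def make_run_number():
    """Code under test: derive a 'run number' from the current date."""
    return current_time().strftime("%Y%m%d")

# In the test, replace current_time with a fixed value, then with
# "the day after" to simulate the date change.
with mock.patch(f"{__name__}.current_time",
                return_value=datetime.datetime(2023, 5, 1)):
    before = make_run_number()
with mock.patch(f"{__name__}.current_time",
                return_value=datetime.datetime(2023, 5, 2)):
    after = make_run_number()

print(before, after)  # 20230501 20230502
```

This is essentially what freezegun automates for you; the important part is that the code under test obtains the time through something patchable rather than calling datetime.datetime.now() scattered throughout.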

Related

Best way to implement background “timer” functionality in Python/Django

I am trying to implement a Django web application (on Python 3.8.5) which allows a user to create “activities” where they define an activity duration and then set the activity status to “In progress”.
The POST action to the View writes the new status, the duration and the start time (end time, based on start time and duration is also possible to add here of course).
The back-end should then keep track of the duration and automatically change the status to “Finished”.
User actions can also change the status to “Finished” before the calculated end time (i.e. the timer no longer needs to be tracked).
I am fairly new to Python, so I need some advice on the smartest way to implement such a concept.
It needs to be efficient and scalable – I’m currently using a Heroku Free account so have limited system resources, but efficiency would also be important for future production implementations of course.
I have looked at the Python threading Timer, and this seems to work on a basic level, but I’ve not been able to determine what kind of constraints this places on the system – e.g. whether the spawned Timer thread might prevent the main thread from finishing and releasing resources (i.e. Heroku Dyno threads), etc.
I have read that persistence might be a problem (if the server goes down), and I haven’t found a way to cancel the timer from another process (the .cancel() method seems to rely on having the original object to cancel, and I’m not sure if this is achievable from another process).
I was also wondering about a more “background” approach, i.e. a single process which is constantly checking the database looking for activity records which have reached their end time and swapping the status.
But what would be the best way of implementing such a server?
Is it practical to read the database every second to find records with an end time of “now”? I need the status to change in real-time when the end time is reached.
Is something like Celery a good option, or is it overkill for a single process like this?
As I said I’m fairly new to these technologies, so I may be missing other obvious solutions – please feel free to enlighten me!
Thanks in advance.
To achieve this you need some kind of task-scheduling functionality. For a quick and simple implementation, the Timer object from the threading module is a good solution.
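A minimal, self-contained sketch of the threading.Timer approach, using a plain dict in place of the Django model (activities, finish, start_activity and cancel_activity are all hypothetical names):

```python
# Schedule a status flip with threading.Timer, and support early
# cancellation when the user finishes the activity manually.
import threading
import time

activities = {}          # activity id -> {"status": ..., "timer": ...}
_lock = threading.Lock()

def finish(activity_id):
    """Timer callback: flip the status when the duration elapses."""
    with _lock:
        act = activities.get(activity_id)
        if act and act["status"] == "In progress":
            act["status"] = "Finished"

def start_activity(activity_id, duration_seconds):
    timer = threading.Timer(duration_seconds, finish, args=[activity_id])
    timer.daemon = True  # don't block process exit on pending timers
    with _lock:
        activities[activity_id] = {"status": "In progress", "timer": timer}
    timer.start()

def cancel_activity(activity_id):
    """User finished early: cancel the pending timer."""
    with _lock:
        act = activities[activity_id]
        act["timer"].cancel()
        act["status"] = "Finished"

start_activity(1, 0.1)   # short activity, timer will fire
start_activity(2, 60)    # long activity, finished early by the user
cancel_activity(2)
time.sleep(0.3)          # give activity 1's timer time to fire
print(activities[1]["status"], activities[2]["status"])  # Finished Finished
```

Note that the caveats from the question still apply: these timers live in one process's memory, so they do not survive a restart and cannot be cancelled from another process.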
A more complete solution is to use Celery. Even if you are new to it, digging in will pay off: Celery works as a queue manager and easily distributes your work across several threads or processes.
You mentioned that you want this to be efficient and scalable, so I guess you will eventually need similar functionality that requires multiprocessing and scheduling; for that reason my recommendation is to use Celery.
You can integrate it into your Django application easily by following the documentation: Integrate Django with Celery.

NCA R12 with LoadRunner 12.02 - nca_get_top_window returns NULL

The connection is successfully established by nca_connect_server(), but when I try to capture the current open window with nca_get_top_window(), it returns NULL. Because of this, all subsequent requests fail.
It depends on how you obtained your script: whether it was recorded or written manually.
If the script was written manually, there is no guarantee that it can be replayed, since the sequence of API calls (and/or their parameters) may not be valid. If the script was recorded, there might be a missed correlation or something similar. The common way to spot the issue is to compare recording and replay behavior (by comparing the log files from these two stages; make sure you use the fully extended logging level) to find out what goes wrong on replay, why, and how it diverges from the recorded activity.

How to find the time when a Puppet manifest is executed

I'm wondering if anyone knows a good way to get the date and time when a portion of code in a Puppet manifest is actually executed. Sometimes my manifests take a long time to run, and I need to schedule a task to occur soon after the end of the run, no matter when that occurs.
I have tried the time() function, setting a variable using generate() (using the date function on the Puppet master), and even creating a custom fact, but everything I've tried gets evaluated when the manifests are parsed on the server, rather than when they actually execute on the client.
Any ideas? The clients are all Windows, FWIW.
Thanks in advance!
I am not sure I understand what you mean, but you can't get this information during catalog compilation (obviously), so you can't use it to change the way the catalog will be applied.
If you need to trigger another process on the same host, then you should use any IPC mechanism you have available. You can exec anything, and have it happen just after any other resource is applied, so it is just a matter of finding the proper command.

Control Linux Application Launch/Licensing

I need to employ some sort of licensing on some Linux applications that I don't have access to their code base.
What I'm thinking is having a separate process read the license key and check the availability of that application. I would then need to ensure that process runs during every invocation of the respective application. Is there some feature of Linux that can assist with this? For example, something like the sudoers file, in which I detect which user and which application is being launched, and if a combination is met, run the license check process first.
Or can I do something like not allow the user to launch the (command-line) application by itself, and force them to pipe it to my license process as so:
/usr/bin/tm | license_process   # whereas '/usr/bin/tm' would fail on its own
I need to employ some sort of licensing on some Linux applications
Please note that license checks will generally cost you way more (in support and administration) than they are worth: anybody who wants to bypass the check and has a modicum of skill will do so, and will not pay for the license if he can't anyway (that is, by not implementing a licensing scheme you are generally not leaving any money on the table).
that I don't have access to their code base.
That makes your task pretty much impossible: the only effective copy-protection schemes require that you rebuild your entire application and make it check the license in so many distinct places that the would-be attacker gets bored and goes away. You can read about such schemes here.
I'm thinking is having a separate process read the license key and check for the availability of that application.
Any such scheme will be bypassed in under 5 minutes by someone skilled with strace and gdb. Don't waste your time.
You could write a wrapper binary that does the checks, and then link in the real application as part of that binary, using some dlsym tricks you may be able to call the real main function from the wrapper main function.
IDEA
- Read up on ELF hacking: http://www.linuxforums.org/articles/understanding-elf-using-readelf-and-objdump_125.html
- Use ld to rename the main function of the program you want to protect access to: http://fixunix.com/aix/399546-renaming-symbol.html
- Write a wrapper that does the checks and uses dlopen and dlsym to call the real main.
- Link the real application together with your wrapper as one binary.
Now you have an application that has your custom checks that are somewhat hard to break, but not impossible.
I have not tested this (I don't have the time), but it would be a fun experiment.

Suspend Linux Date Time

I'm working on a product where the business logic changes based on the date and in order to help UAT testing it would be great if we could freeze the date/time on our Linux server.
Is it possible to stop the server's date/time from rolling over to the next day?
Maybe the only way is to create a script which runs daily to adjust the date/time, any thoughts appreciated.
Thanks
Use LD_PRELOAD to redirect the library functions that retrieve the time; an example can be found here.
I think your best bet is, as you suggest, setting up a script to reset the time. There may be more exotic ways to do this but in the end the result is the same. Just be aware that there will be side-effects to "freezing" the time. Build systems that rely on file modification dates may be confused, as well as daemon processes that assume the clock is always moving forward.
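If you do go the script route, here is a minimal sketch of such a daily reset script (FROZEN_DATE and build_reset_command are hypothetical names, and the date shown is just an example). Actually setting the clock requires root, so the sketch only builds and runs the date(1) command; schedule reset_clock() from root's crontab:

```python
# Daily clock-reset sketch: hold the server at a fixed date by
# setting the clock back with date -s (requires root).
import subprocess

FROZEN_DATE = "2030-01-01 09:00:00"  # date to hold the server at

def build_reset_command(frozen_date):
    """Build the date(1) invocation that sets the clock back."""
    return ["date", "-s", frozen_date]

def reset_clock():
    # Run this daily from root's crontab, e.g.:
    #   0 0 * * * /usr/bin/python3 /opt/uat/reset_clock.py
    subprocess.run(build_reset_command(FROZEN_DATE), check=True)
```

As noted above, expect side effects: anything that assumes a monotonically advancing wall clock (build systems, log rotation, daemons) may misbehave when the date jumps backwards each day.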
