I am testing some scripts to compare our Tenable.io reports. I need to compare today's report with yesterday's report on the same Linux target.
I need to plant some deliberate test vulnerabilities to show that the scripts actually work. Is there any short piece of code that Tenable.io can flag easily? Is there any piece of code, or signature file, that will show up in the Tenable.io scan report when placed on the Linux target?
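One harmless way to make a scan flag something deliberately is to drop the standard EICAR test string on the target. This is a sketch, not a guaranteed detection: whether Tenable.io reports it depends on your scan policy (malware scanning has to be enabled), so verify against your configuration; the file path is illustrative.

# Write the standard 68-byte EICAR test string to a file on the target.
# EICAR is harmless by design; anti-malware engines detect it as a test.
EICAR = (
    r"X5O!P%@AP[4\PZX54(P^)7CC)7}$"
    r"EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"
)
with open("/tmp/eicar_test.txt", "w") as f:
    f.write(EICAR)

An alternative that a pure vulnerability scan flags reliably is installing a deliberately outdated package version on the target, since version-check plugins will report it.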
I have an Excel workbook with a lot going on – macros, external and real-time data sources, etc. – that has been breaking roughly once a week for the last month.
The breakage usually manifests itself when running a macro and getting:
Run-time error '-2147319767 (80028029)':
Automation error
Invalid forward reference, or reference to uncompiled type.
The point of failure the debugger identifies never makes sense – the same code has been working for weeks. The fix I have been using is to roll back to a saved version of the workbook that didn't throw errors when running macros, and it always contains exactly the same VBA code that was breaking. So I conclude that something behind the scenes is getting corrupted.
What's going on? Is there a way to avoid this? Is there a way to fix it that's better than rolling back to an earlier saved version of the workbook?
There are plenty of questions regarding this error, and the code changes that fix them never make sense either. The one thing they all have in common is that they make a change to the VBA code that, one deduces, forces Excel to regenerate its pseudo-code.
Further research leads one to frequent mentions of Excel workbook corruption, as well as to a free utility called Excel VBA Code Cleaner. The authors of that utility explain what's going on:
During the process of creating VBA programs a lot of junk code builds up in your files. If you don't clean your files periodically you will begin to experience strange problems caused by this extra baggage. Cleaning a project involves exporting the contents of all its VBComponents to text files, deleting the components and then importing the components back from the text files.
Unfortunately they haven't published a version of their utility that works in 64-bit Excel. But one can do the same thing manually: export all the VBA code, delete all the modules, then recreate them and paste the code back in.
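If you'd rather script that manual clean than click through the VBE, here is a minimal sketch driving the VBE object model from Python via COM (it assumes pywin32 is installed and "Trust access to the VBA project object model" is enabled in Excel's Trust Center; all paths and the workbook name are illustrative):

import os
import win32com.client

excel = win32com.client.Dispatch("Excel.Application")
wb = excel.Workbooks.Open(r"C:\temp\broken.xlsm")
vbproj = wb.VBProject
export_dir = r"C:\temp\vba_export"
os.makedirs(export_dir, exist_ok=True)

# Component types: 1 = standard module, 2 = class module, 3 = UserForm.
# Document modules (sheets, ThisWorkbook) cannot be removed, so skip them.
for comp in list(vbproj.VBComponents):
    if comp.Type in (1, 2, 3):
        ext = {1: ".bas", 2: ".cls", 3: ".frm"}[comp.Type]
        comp.Export(os.path.join(export_dir, comp.Name + ext))
        vbproj.VBComponents.Remove(comp)

# Re-import the exported source; .frx binaries ride along with their .frm.
for name in os.listdir(export_dir):
    if name.endswith((".bas", ".cls", ".frm")):
        vbproj.VBComponents.Import(os.path.join(export_dir, name))

wb.Save()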
UPDATE: VBA Code Decompiler is another freeware utility that appears to accomplish the same thing. There is also a more detailed description of how Office compiles and persists VBA code in its files.
I am executing a Python script which analyzes system metrics against the thresholds defined in the threshold_config.ini file.
The program can analyze data for metrics like disk, memory, swap and CPU.
For each metric I have two threshold values: one for warning and another for critical.
The script analyzes the data and produces a report in a text file marking each system as Critical or Warning.
I want to display this in Jenkins like a JUnit result. Does anyone have an idea how to take custom reports and show them in the Jenkins JUnit format? I also need to mark the build stable or unstable based on the warning and critical values.
Please help.
For the first bit (the JUnit result format), you may want to translate the results of comparing your metrics against the thresholds into JUnit XML files, one per comparison. This requires some low-level implementation, but you wouldn't be the first person doing it. Whether there is a better way depends on the exact format of the results you've got.
For the second bit (passing/failing the build), you could use the popular Jenkins JUnit plugin, which will detect all your JUnit XML files and mark the build accordingly.
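A minimal sketch of that translation in Python (the metric names, threshold variables and output file name are illustrative, not taken from your script):

import xml.etree.ElementTree as ET

def write_junit(metric, value, warn, crit, path):
    # One testsuite with one testcase per metric comparison.
    suite = ET.Element("testsuite", name="system-metrics", tests="1")
    case = ET.SubElement(suite, "testcase", classname="thresholds", name=metric)
    if value >= warn:
        # A <failure> element is what the Jenkins JUnit plugin keys on.
        level = "critical" if value >= crit else "warning"
        failure = ET.SubElement(case, "failure", message=level)
        failure.text = f"{metric}={value} (warn={warn}, crit={crit})"
    ET.ElementTree(suite).write(path, xml_declaration=True, encoding="utf-8")

write_junit("disk", 92.5, warn=80.0, crit=95.0, path="junit-disk.xml")

Note that Jenkins marks a build containing test failures as unstable rather than failed; if criticals should fail the build outright, you will need an extra step, such as having the script exit non-zero when any critical is present.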
SpecFlow has the ability to generate a StepDefinitionReport. Unfortunately, it doesn't seem to list steps for which there is code but which are not actually used in any *.feature file. The SpecFlow source code doesn't look like it actually parses the C# code, only the *.feature files, so it will never report a step with 0 uses.
Is there any other tool out there that will report orphaned steps? We have several hundred steps and multiple feature files, and I'd rather not crawl them manually to find orphans.
I just tried the StepDefinitionReport with a trivial example in 5 minutes and it does report the orphaned steps. There must be another problem in your case. Also in the source code you can find the place where it collects the bindings: https://github.com/techtalk/SpecFlow/blob/master/TechTalk.SpecFlow.Reporting/StepDefinitionReport/StepDefinitionReportGenerator.cs#L38
I am trying to implement some additional functionality to the LibreOffice printing process (some special info should be added automatically to the margins of every printed page). I am using RHEL 6.4 with LibreOffice 4.0.4 and Gnome 2.28.
My purpose is to research the data flow between LibreOffice and system components and determine which source codes are responsible for printing. After that I will have to modify these parts of code.
Now I need advice on methods for researching the source code. I found plenty of tools, and from my point of view:
strace seems to be very low-level;
gprof requires binaries recompiled with "-pg" CFLAGS; I have no idea how to do that with LibreOffice;
systemtap can probe syscalls only, can't it?
callgrind + Gprof2Dot are quite good together but produce strange results (see below).
For instance, here is the call graph from the callgrind output, visualised with Gprof2Dot. I started callgrind with this command:
valgrind --tool=callgrind --dump-instr=yes --simulate-cache=yes --collect-jumps=yes /usr/lib64/libreoffice/program/soffice --writer
and received four output files:
-rw-------. 1 root root 0 Jan 9 21:04 callgrind.out.29808
-rw-------. 1 root root 427196 Jan 9 21:04 callgrind.out.29809
-rw-------. 1 root root 482134 Jan 9 21:04 callgrind.out.29811
-rw-------. 1 root root 521713 Jan 9 21:04 callgrind.out.29812
The last one (pid 29812) corresponds to the running LibreOffice Writer GUI application (I determined this with strace and ps aux). I pressed Ctrl+P and the OK button. Then I closed the application, hoping to see the function responsible for initialising the printing process in the logs.
The callgrind output was processed with the Gprof2Dot tool according to this answer. Unfortunately, I can see in the picture neither the actions I am interested in nor the call graph as such.
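(For reference, the usual pipeline from that answer is something like the following; the output file name is illustrative:)

gprof2dot -f callgrind callgrind.out.29812 | dot -Tpng -o callgraph.png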
I would appreciate any info about the proper way to solve this problem. Thank you.
The proper way of solving this problem is remembering that LibreOffice is open source. The whole source code is documented and you can browse documentation at docs.libreoffice.org. Don't do that the hard way :)
Besides, remember that the printer setup dialog is not LibreOffice-specific, rather, it is provided by the OS.
What you want is a tool to identify the source code of interest. Test Coverage (TC) tools can provide this information.
What TC tools do is determine what code fragments have run when the program is exercised; think of it as collecting a set of code regions. Normally TC tools are used in conjunction with (interactive/unit/integration/system) tests to determine how effective the tests are. If only a small amount of code has been executed (as detected by the TC tool), the tests are interpreted as ineffective or incomplete; if a large percentage has been covered, one has good tests and reasonable justification for shipping the product (assuming all the tests passed).
But you can use TC tools to find the code that implements features. First, you execute some test (or perhaps manually drive the software) to exercise the feature of interest, and collect TC data. This tells you the set of all the code exercised, if the feature is used; it is an overestimation of the code of interest to you. Then you exercise the program, asking it to do some similar activity, but which does not exercise the feature. This identifies the set of code that definitely does not implement the feature. Compute the set difference of the code-exercised-with-feature and ...-without to determine code which is more focused on supporting the feature.
You can naturally get tighter bounds by running more exercises-feature and more doesn't-exercise-feature and computing differences over unions of those sets.
There are TC tools for C++, e.g., "gcov". Most of them, I think, won't let/help you compute such set differences over the results; many TC tools seem not to have any support for manipulating covered-sets. (My company makes a family of TC tools that do have this capability, including computing coverage-set differences, for C++ among other languages.)
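Still, the set difference itself is easy to compute by hand once you have per-run coverage data. A minimal sketch, assuming each run's covered locations have already been dumped one per line as file:line (e.g., extracted from gcov output; the file names are illustrative):

def load_covered(path):
    # One covered location per line, formatted as "file:line".
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

with_feature = load_covered("covered_with_print.txt")
without_feature = load_covered("covered_without_print.txt")

# Code executed only when the feature (here: printing) is exercised.
for loc in sorted(with_feature - without_feature):
    print(loc)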
If you actually want to extract the relevant code, TC tools don't do that.
They merely tell you what code ran, by designating text regions in source files. Most test coverage tools only report covered lines as such text regions; this is partly because the machinery many test coverage tools use is limited to the line numbers recorded by the compiler.
However, one can have test coverage tools that are precise in reporting text regions in terms of starting file/line/column to ending file/line/column (ahem, my company's tools happen to do this). With this information, it is fairly straightforward to build a simple program to read source files, and extract literally the code that was executed. (This does not mean that the extracted code is a well-formed program! for instance, the data declarations won't be included in the executed fragments although they are necessary).
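A sketch of such an extractor, assuming a coverage report that yields 1-based (start line, start column, end line, end column) tuples per region:

def extract_region(path, start_line, start_col, end_line, end_col):
    # Read the source file and slice out one covered text region.
    with open(path) as f:
        lines = f.readlines()
    region = lines[start_line - 1:end_line]
    region[-1] = region[-1][:end_col]       # trim after the region's end
    region[0] = region[0][start_col - 1:]   # trim before the region's start
    return "".join(region)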
The OP doesn't say what he intends to do with such code, so the set of fragments may be all that is needed. If he wants to extract the code plus the necessary declarations, he'll need more sophisticated tools that can determine which declarations are needed. Program transformation tools with full parsers and name resolvers for the source language can provide the necessary capability for this. They are considerably more complicated to use than test coverage tools with ad hoc text extraction.
I'm attempting to test an application which has a heavy dependency on the time of day. I would like to have the ability to execute the program as if it was running in normal time (not accelerated) but on arbitrary date/time periods.
My first thought was to abstract the time retrieval function calls with my own library calls which would allow me to alter the behaviour for testing but I wondered whether it would be possible without adding conditional logic to my code base or building a test variant of the binary.
What I'm really looking for is some kind of localised time domain. Is this possible with a container (like Docker) or by using LD_PRELOAD to intercept the calls?
I also saw a patch that enabled time to be disconnected from the system time using unshare(CLONE_NEWTIME), but it doesn't look like this got in.
It seems like a problem that must have been solved numerous times before. Is anyone willing to share their solution(s)?
Thanks
AJ
Whilst alternative solutions and tricks are great, I think you're severely overcomplicating a simple problem. It's completely common and acceptable to include certain command-line switches in a program for testing/evaluation purposes. I would simply include a command line switch like this that accepts an ISO timestamp:
./myprogram --debug-override-time=2014-01-01Z12:34:56
Then at startup, if the switch is set, compute the offset from the current system time, and make a local apptime() function which corrects the output of the regular system clock by this offset; call that everywhere in your code instead.
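A minimal sketch of that approach in Python (the flag name matches the example above, but I've assumed a plain ISO-8601 timestamp like 2014-01-01T12:34:56 for easy parsing):

import argparse
import time
from datetime import datetime

parser = argparse.ArgumentParser()
parser.add_argument("--debug-override-time", type=datetime.fromisoformat)
args = parser.parse_args()

# Fixed offset between the pretended time and the real clock, set at startup;
# time still advances at the normal rate afterwards.
_offset = 0.0
if args.debug_override_time is not None:
    _offset = args.debug_override_time.timestamp() - time.time()

def apptime():
    # Call this everywhere instead of time.time().
    return time.time() + _offset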
The big advantage of this is that anyone can reproduce your testing results without a big read-up on custom Linux tricks – including an external testing team, or a future co-developer who's good at coding but not at runtime tricks. When (unit) testing, it's a major advantage to be able to just call your code with a simple switch and test the results for equality against a sample set.
You don't even have to document it, lots of production tools in enterprise-grade products have hidden command line switches for this kind of behaviour that the 'general public' need not know about.
There are several ways to query the time on Linux. Read time(7); I know at least time(2), gettimeofday(2), clock_gettime(2).
So you could use LD_PRELOAD tricks to redefine each of these to e.g. subtract a fixed number of seconds from the seconds part (not the micro-second or nano-second part), given e.g. by some environment variable. See this example as a starting point.