Ensure Linux (SUSE) program levels are consistent across multiple servers with cksum

We have a GOLD image that new servers are imaged from when they are created.
Over time some of these servers have drifted out of sync due to poorly managed rollouts.
I would like to scan the /bin folders of all of these servers and compare them against what the GOLD image has, writing the result to an output file (i.e. if a file is different, flag it one way; if it is the same, say Same; if it is missing, say Missing; if it is there but not on the GOLD image, say Addition).
I was planning to accomplish this roughly as below.
On the GOLD image, run something like the following:
for x in /bin/*
do
cksum "$x" >> /data/OnGold.lst
done
Distribute this file to all of the servers, along with another script that executes the same thing under a different log name.
After that script executes, another script will diff the two files and report on the differences, based on the cksum values or on files that are missing from, or in addition to, OnGold.lst.
(This is the part I could use some advice on: what is the best way to achieve it, and does anyone know of open-source tools that could accomplish the same thing? I'm pretty sure diff would do the trick, since it will show whether items are missing or in addition, but I don't know how to format that into a report.)
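For the comparison step, something like the sketch below is what I was picturing (OnServer.lst is just a placeholder name for the list produced on each server, and the awk is only a rough sketch):
# OnGold.lst and OnServer.lst each hold cksum output: checksum, size, filename
awk 'NR==FNR { gold[$3] = $1; next }
     {
         seen[$3] = 1
         if (!($3 in gold))  { print $3, "Addition"; next }
         if (gold[$3] != $1) { print $3, "Different"; next }
         print $3, "Same"
     }
     END {
         for (f in gold) if (!(f in seen)) print f, "Missing"
     }' /data/OnGold.lst /data/OnServer.lst > /data/report.txt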
Any help would be greatly appreciated?

Related

How to find text strings in a .xxx file

I'm working on a program that needs to find a tag in a .xxx file and just tell me whether or not it exists in the file. I've been doing quite a bit of troubleshooting, but I've realized there are three key things I don't know:
What a .xxx file is
Where to find help on how to work with .xxx files (Google didn't return anything useful)
How to read a string out of a .xxx file
I'm looking for help with these 3 things - specifically the 3rd, but help on the other two would mean I don't have to ask more questions later! I'm not in need of troubleshooting help yet - I'm not too worried about making my code run at this moment. This is more for reference and general knowledge so I don't have to ask 100 more questions about tedious specifics later on.
So, if anyone out there knows anything about these three problems, or has any knowledge on .xxx files, can you help me out?
(If you happen to know the code to do this, I'm writing in C#)
If you're using ReadLines, it assumes the input is a text file with line endings. If you try to use it on a binary file, it won't necessarily work, and the best you may get is a count of 0 or 1 if no line endings are found in the binary file at all.
In that case you'll have to load the bytes and do a more thorough search through the binary file for instances of your string.
But if you only want to know whether a LINE contains at least one instance (as you have written your code above), that won't work for binary files, where you can't guarantee that line endings exist.
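As a side note, if you just want to confirm by hand that the tag bytes are present in the file at all before worrying about the C# side, a quick check from a Linux shell (assuming the tag is stored as plain ASCII; 'MYTAG' is a made-up value) is:
# -a treats the binary file as text, -q suppresses output and just sets the exit status
grep -a -q 'MYTAG' file.xxx && echo "tag found"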

Merge, sort, maintain line order

This probably sounds contradictory, so let me explain. I have a number of log files written with log4j to different files, with rotation. What I want to do is merge them into fewer files.
How I started to go about doing this:
- use awk to concatenate multi-line entries into a single line, writing to a separate file.
- cat the awk output files into one file.
- sort the concatenated file.
- use awk to split the concatenated lines back apart.
But I see that the sort is putting entries with the same second/ms in a different order than they appeared in their original output file. It may not be a HUGE deal, but I don't like it. Any ideas for how to do what I want (maintaining the original line order while sorting)? I would rather not write my own program and would like to use native Linux utils if possible, but I am open to the "best" way of doing this (Perl, Python, etc.). A rough sketch of the pipeline so far is below.
I thought about cat'ing the output files from highest to lowest (log4j rotated files) so that I wouldn't have to sort, but that only solves the problem for files written to the same log file (file1.0.log, file1.1.log, etc.). It doesn't help when I need to merge file2 with file1.
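Here is roughly what I have so far, assuming each log entry starts with an ISO-style date (YYYY-MM-DD) and continuation lines do not; the file names are made up:
# 1. fold multi-line entries: a line that does not start with a date is
#    appended to the previous entry with a literal "\n" marker
fold_entries() {
    awk '/^[0-9][0-9][0-9][0-9]-/ { if (buf != "") print buf; buf = $0; next }
         { buf = buf "\\n" $0 }
         END { if (buf != "") print buf }' "$1"
}
fold_entries file1.0.log > file1.0.folded
fold_entries file2.0.log > file2.0.folded
# 2. merge and sort on the date and time fields
sort -k1,1 -k2,2 file1.0.folded file2.0.folded > merged.folded
# 3. unfold the "\n" markers back into real newlines
awk '{ gsub(/\\n/, "\n"); print }' merged.folded > merged.log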
Thank you,
Gregg
What you are talking about is "stable" sorting. There is a -s option on sort that should give you what you want.
Stability in sorting algorithms
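For example, with the folded one-entry-per-line files from the question (file names are placeholders):
# -s keeps lines with equal keys in their original input order;
# restricting the keys with -k stops sort from falling back to
# comparing the whole line as a tie-breaker
sort -s -k1,1 -k2,2 file1.folded file2.folded > merged.folded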

Picking Certain Documentation with DOXYGEN

I would like to achieve almost the exact opposite of what can be done with the \internal command. There is already a huge Doxygen documentation for a project, but now I would like to pick out a few blocks (functions, constants, etc.) to create a very small manual containing only the important stuff.
Instead of marking 99% of the comments as \internal, it would be nice to have a command like \external for the 1% of comments that need to be exported in my case.
Something like disabling the "default section" (everything that is not part of a section) would work too, of course. Then I could use ENABLED_SECTIONS...
Unfortunately the comments in question do not reside in a single file, and those files also contain a lot of other comments that should not be exported.
I have already thought about moving those comments into separate header files that could be included at their original positions, but that would mean restructuring a lot and tearing files apart.
Does anybody have an idea how to solve my problem?
Thanks in advance,
Nico
I think ENABLED_SECTIONS is the way forward, but there are a couple of things that might reduce the workload.
The first is to create a separate doxyfile for your particular requirement; then you can customise that without upsetting the master one.
In that new doxyfile, explicitly list in the INPUT file list only those files that contain content you need. Chances are it's currently set to pull in whole folder trees - edit that to cherry-pick the individual files, not forgetting any files you may need to define the 'structure' of the document.
After that, use ENABLED_SECTIONS with corresponding \if <SECTION_NAME> ... \endif markers to refine the selection to units smaller than a file.
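As a rough sketch (the file names and section label are made up), the cut-down doxyfile might only need a few overrides on top of the master one:
# Doxyfile.small - hypothetical config for the short manual
# Reuse all settings from the existing master Doxyfile
@INCLUDE         = Doxyfile
# Pull in only the files that contain content for the small manual
INPUT            = include/api.h src/constants.h
# Enable \if small_manual ... \endif blocks in the comments
ENABLED_SECTIONS = small_manual
OUTPUT_DIRECTORY = doc/small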

Merge two files in linux and ignore any repetition

Can anyone provide me with a Linux shell script that merges two files and saves the result in a third file? However, if there is any data common to both files, the common lines should only be saved once. Please ask if you need any more details. Thanks in advance!!
Simplest way:
cat one two | sort -u > third
But this is probably not what you want...
You mentioned merging in your question: what do you mean by that? If it's not as simple as I assumed in the code above, provide sample files and tell us what you want to achieve.
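If by "merge" you also want to keep the original line order instead of sorting the output (just a guess at the requirement), a common alternative is:
# print each line only the first time it is seen, preserving input order
awk '!seen[$0]++' one two > third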

How Can I View the Files Synced Since a Given Date in P4?

So, I'm trying to troubleshoot an error, and being able to see what files were synced to a given server since a certain date would go a long way toward helping me figure things out. I feel like there should be a way to do this through "p4 have", but I can't figure it out or find it in the documentation.
Important: I am NOT looking for what files were submitted into p4 during that time. I only want to see what files were synced to this server.
Thanks!
"p4 have" will indeed tell you what files are currently sync'd to this workspace. But it won't tell you when they were sync'd. This sounds like one of those questions where if you told us a little bit about why you thought you wanted to know this information, we could probably suggest a different technique, which would address your problem.
