How can I view CruiseControl.Net logs in real time?

I use CruiseControl.Net for continuous integration and I would like to read the log output of the current project in real time. For example, if it is running a compile command, I want to be able to see all the compile output so far. I can see where the log files are stored but it looks like they are only created once the project finishes. Is there any way to get the output in real time?

The CCTray app will let you see a snapshot of the last five or so lines of output of any command, refreshed on a regular interval.
It's not a live update, as that would be too resource-intensive, as would a full dump of the log to date.
Unless you write something to capture and store the snapshots, you're out of luck. Doing this also presents the possibility of missing messages that appear between snapshots, so it would not be entirely reliable. It would, however, give you a slightly better idea of what is going on.

You can run ccnet.exe as a command-line application instead of running ccservice as a Windows service. It will write its output to the console as it runs, which is useful for debugging.
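A minimal sketch of switching over, assuming the service is registered as CCService and a default install path (check services.msc and adjust both to your machine):

net stop CCService
cd "C:\Program Files\CruiseControl.NET\server"
ccnet.exe

Project activity then scrolls past in the console window as it happens.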

Related

Sending command-line parameters when using node-windows to create a service

I've built some custom middleware on Node.js for a client which runs great in user space, but I want to make it a service.
I've accomplished this using node-windows, which works great, but the client has occasional large bursts of data, so I'd like to allocate a little more memory using the --max-old-space-size command-line parameter. Unfortunately, I don't see how to configure that in my service set-up wrapper for node-windows.
Any suggestions?
FWIW, I'm also thinking about changing how I parse the data, e.g. treating it more as a stream, but since this is the first time I've used Node and the project goes live in a couple of days, I'm hoping for a quick and dirty option that'll get us up and running, to be adjusted later.
Thanks!
Use node-windows v0.1.14 or higher; the ability to pass flags to the Node process was merged in that release (per the project's docs, via a nodeOptions array in the service definition, which covers flags like --max-old-space-size). The relevant issue is https://github.com/coreybutler/node-windows/issues/159.
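If you installed through npm, picking up that release is a one-liner (the version is pinned here only as the minimum carrying the fix):

npm install node-windows@0.1.14

After upgrading, you will most likely need to uninstall and re-install the Windows service so the generated wrapper picks up the new options.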

How do I determine whether linux process accounting (accton) is currently running?

Is there any way to determine whether process accounting (accton) is running? There is no process listed in the process table ("ps"), and I see nothing under "/etc" that I can call with "status" to get a status of accounting.
I'm running a custom build based on "Linux From Scratch", so while I understand that CentOS has "psacct", I don't have that available.
I could watch the log file and see if it's growing - not ideal, but if it's all I got, then it's all I got. I'm hoping there's a better way.
Appreciate any info.
Maybe you can do it like this: enable logging for accton, then check the log file for updates, e.g. its modification or last-access timestamp.
Run /sbin/accton /var/log/accto.log and the log file will be created.
Then use a script to monitor accto.log. There are a lot of log-file monitoring scripts for Nagios if you are interested.
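A minimal sketch of such a check, assuming the log path above (note that accounting records are only appended when processes exit, so a quiet system can look idle):

#!/bin/sh
# Report accounting as active if the log was modified recently.
LOG=/var/log/accto.log
MAX_AGE=300    # seconds of allowed idleness
now=$(date +%s)
mtime=$(stat -c %Y "$LOG" 2>/dev/null) || { echo "no log at $LOG"; exit 2; }
age=$((now - mtime))
if [ "$age" -le "$MAX_AGE" ]; then
    echo "accounting appears active (log updated ${age}s ago)"
else
    echo "accounting appears inactive (log idle for ${age}s)"
fi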

How can I tell (in a bash script) if a Clonezilla batch-mode backup succeeded?

This is my first ever post to stackoverflow, so be gentle please. ;>
Ok, I'm using slightly customized Clonezilla-Live cd's to backup the drives on four PCs. Each cd is for a specific PC, saving an image of its disk(s) to a box-specific backup folder on a samba server. That's all pretty much working. But once in a while, Something Goes Wrong, and the backup isn't completed properly. Things like: the cat bit through a cat5e cable; I forgot to check if the samba server had run out of room; etc. And it is not always readily apparent that a failure happened.
I will admit right now that I am pretty much a noob as far as Linux system administration goes, even though I somehow managed to set up a CentOS 6 box (I wish I'd picked Ubuntu...) with Samba, git, SSH, and Bitnami GitLab back in February.
I've spent days and days and days trying to figure out if clonezilla leaves a simple clue in a backup as to whether it succeeded completely or not, and have come up dry. Looking in the folder for a particular backup job (on the samba server) I see that the last file written is named "clonezilla-img". It seems to be a console dump that covers the backup itself. But it does not seem to include the verification pass.
Regardless of whether the batch backup task succeeded or failed, I can run a post-process bash script automagically, which I place on my Clonezilla CDs. I have this set up to run just fine, though it's not doing a whole lot right now. What I would like this post-process script to do is determine whether the backup job succeeded, and then rename (mv) the backup job directory to include some word like "SUCCESS" or "FAILURE". I know how to do the renaming part. It's the test for success or failure that I'm at a loss about.
Thanks for any help!
I know this is old, but I've just started on looking into doing something very similar.
For your case I think you could do what you are looking for with ocs_prerun and ocs_postrun scripts.
For my setup I'm using a pen/flash drive for some test systems, and also PXE with an NFS mount. PXE and NFS are much easier to test and modify quickly.
I haven't tested this yet, but I was thinking that I might be able to search the logs in /var/log/{clonezilla.log,partclone.log} from an ocs_postrun script to determine success or failure. I haven't seen anything indicating that the result is exported in the environment, so the logs might be the quickest method, short of mounting the image or running a checksum. Clonezilla does have an option to validate the image, and the results of that pass might be in the local logs as well.
Another option might be to create a custom ocs_live_run script to do something similar. There is an example at this URL http://clonezilla.org/fine-print-live-doc.php?path=./clonezilla-live/doc/07_Customized_script_with_PXE/00_customized_script_with_PXE.doc#00_customized_script_with_PXE.doc
Maybe the exit code of ocs-sr can be checked in the script? As I said, I haven't tried any of this; just some thoughts.
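To make the log-grepping idea concrete, here is an untested sketch of an ocs_postrun script. The image directory and the grep pattern are assumptions; verify both against your own logs before trusting it:

#!/bin/sh
# Hypothetical ocs_postrun: tag the backup directory by result.
IMG_DIR=/home/partimag/my-backup    # wherever this box's image lands
if grep -qiE 'fail|error' /var/log/clonezilla.log /var/log/partclone.log 2>/dev/null; then
    mv "$IMG_DIR" "${IMG_DIR}-FAILURE"
else
    mv "$IMG_DIR" "${IMG_DIR}-SUCCESS"
fi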
I updated the above to reflect the log location (/var/log). The logs are in the log folder of course. :p
Regards

How can I get Forever to write to a different log file every day?

I have a cluster of production servers running a Node.JS app via Forever. As far as I can tell, my options for log files are as follows:
Let Forever do it on its own, in which case it will log to ~/.forever/XXXX.log
Specify one specific log file for the entire life of the process
What I'd like to do, however, is have it log to a different file every day, e.g. 20121027.log, 20121028.log, etc.
Is this possible? If so, how can it be done?
You can use a Linux tool like logrotate to handle log rotation.
People use logrotate to rotate logs for things like Apache, etc.
The config files usually live under /etc/logrotate.d.
man logrotate will give you more information, and here is a great tutorial: http://www.thegeekstuff.com/2010/07/logrotate-examples/
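A minimal sketch of a drop-in config, assuming Forever is writing to /home/app/.forever/app.log (path and filename are placeholders). copytruncate matters because Forever keeps the log file open, and dateext gives you the per-day filenames you asked about:

/home/app/.forever/app.log {
    daily
    dateext
    rotate 30
    missingok
    notifempty
    copytruncate
}

With dateext, rotated files come out as app.log-20121027, app.log-20121028, and so on.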

Call Visitors web stat program from PHP

I've been looking into different web statistics programs for my site, and one promising one is Visitors. Unfortunately, it's a C program and I don't know how to call it from the web server. I've tried using PHP's shell_exec, but my web host (NFSN) has PHP's safe mode on and it's giving me an error message.
Is there a way to execute the program within safe mode? If not, can it work with CGI? If so, how? (I've never used CGI before)
Visitors looks like a log analyzer and report generator. It's probably best set up as a cron job to create static HTML pages once a day or so.
If you don't have shell access to your hosting account, or some sort of control panel that lets you set up cron jobs, you'll be out of luck.
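For example, a crontab entry along these lines would regenerate a static page nightly at 02:15 (both paths are placeholders for your own layout):

# m  h  dom mon dow  command
15 2 * * * /usr/local/bin/visitors -A /home/logs/access_log > /home/www/stats.html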
Is there any reason not to just use Google Analytics? It's free, and you don't have to write it yourself. I use it, and it gives you a lot of information.
Sorry, I know it's not a "programming" answer ;)
I second Jonathan's answer: this is a log analyzer, meaning that you must feed it the web server's logfile as input, and it generates a summary of it. Given that you are on a shared host, it is unlikely that you can access that file, and even if you could, it probably contains entries for all the websites hosted on that machine (setting up separate logging for each VirtualHost is certainly possible with Apache, but I don't know whether it is common practice).
One possible workaround would be to write out a logfile from your own pages. However, this is rather difficult and can have a severe performance impact (for one, you have to serialize writes to the logfile if you don't want to get garbage from time to time). All in all, I would suggest going with an online analytics service, like Google Analytics.
As fortune would have it, I do have access to the log file for my site. I've been able to generate the HTML page on the server manually; I've just been looking for a way to make that happen automatically. All I need is to execute a shell command and have its output served as the page.
Sounds like a good job for an intern.
=)
Call your host and see if you can work out a deal for doing a shell execute.
I managed to solve this problem on my own. I put the following lines in a file named visitors.cgi:
#!/bin/sh
# Send the CGI response header first...
printf "Content-type: text/html\n\n"
# ...then replace this shell with the report generator, so its HTML
# output becomes the body of the page.
exec visitors -A /home/logs/access_log
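One follow-up note for anyone copying this: the script will typically also need the executable bit set, and your host must allow CGI execution in that directory (details vary by host):

chmod +x visitors.cgi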
