I'm wondering if anyone knows a good way to get the date and time when a portion of code in a Puppet manifest is actually executed. Sometimes my manifests take a long time to run, and I need to schedule a task to occur soon after the end of the run, no matter when that occurs.
I have tried the time() function, setting a variable using generate() (using the date function on the Puppet master), and even creating a custom fact, but everything I've tried gets evaluated when the manifests are parsed on the server, rather than when they actually execute on the client.
Any ideas? The clients are all Windows, FWIW.
Thanks in advance!
I am not sure I understand what you mean, but you can't get this information during catalog compilation (obviously), so you can't use it to change the way the catalog will be applied.
If you need to trigger another process on the same host, then you should use whatever IPC mechanism you have available. You can exec anything, and have it happen just after any other resource is applied, so it is just a matter of finding the proper command.
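For example, a minimal sketch (the command, the batch file path, and the Class['my_app'] reference are placeholders I'm assuming, not anything from the question): an exec that Puppet only applies after another resource, so whatever the command does, including reading the clock and scheduling the follow-up task, happens at apply time on the client:
# sketch only; the command and the require target are placeholders
exec { 'schedule-post-run-task':
  command => 'C:\Windows\System32\cmd.exe /c C:\scripts\schedule_followup.bat',
  require => Class['my_app'],  # applied only after everything in that class
}
Inside schedule_followup.bat you can read the current time on the client and create the scheduled task (for example with schtasks.exe), which sidesteps the compile-time evaluation problem entirely.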
Related
For example, I have a script that runs in a loop and makes a request to an API every 10 minutes. I want to change the parameters of the next request without stopping it.
The command prompt is blocked while the script is running.
Reading the arguments from an environment file or a database doesn't seem safe to me. I think it's possible that, while the values are being changed, the script may read them before they are fully written and cause errors.
Any thoughts?
Thanks
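One common way to avoid that partial-read problem (just a sketch under my own assumptions: that the script reads its parameters from a JSON file called config.json, which is not stated in the post) is to write the new parameters to a temporary file and atomically rename it over the old one, so the running script only ever sees a complete file:
import json
import os
import tempfile

CONFIG_PATH = "config.json"  # hypothetical parameter file

def write_params(params):
    # Write to a temp file in the same directory, then atomically swap it in.
    # os.replace is atomic, so read_params never sees a half-written file.
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(CONFIG_PATH) or ".")
    with os.fdopen(fd, "w") as tmp:
        json.dump(params, tmp)
    os.replace(tmp_path, CONFIG_PATH)

def read_params():
    # Called by the looping script before each request.
    with open(CONFIG_PATH) as f:
        return json.load(f)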
I have developed a CMIS server for a custom repository, and when I run TCK tests on it, Root Folder Test takes forever and I have never been able to wait until the end of it.
The blame is actually on my getObjectParents implementation. When I put a breakpoint there, I realized that the TCK tests have created a large number of documents in the root folder, and they keep calling getObjectParents for each of them. It takes so long that I have never managed to wait until the end to see what happens next! I don't think there's an infinite loop, because any time I pause I do stop at my getObjectParents breakpoint, and each time I get a different document id (I managed to track at least about 50 of them).
Also, as a P.S.: if I intentionally break my getObjectParents implementation and make it throw a CmisRuntimeException, the TCK tests run and pass OK.
Any similar experience or solution is really appreciated.
I don't think this is a TCK issue.
Have you checked how much time your getObjectParents implementation needs to respond for one document?
Some clients call this method frequently. If it consistently takes too long (> 2 seconds), clients may not be able to work with your repository.
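If you haven't measured it yet, a tiny timing wrapper is enough (a sketch only; CallTimer and doGetObjectParents are names I'm making up, not part of OpenCMIS or the TCK):
// Hedged sketch: log how long any call takes; the names here are placeholders.
public final class CallTimer {
    public static <T> T timed(String label, java.util.function.Supplier<T> call) {
        long start = System.nanoTime();
        try {
            return call.get();
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1000000;
            System.out.println(label + " took " + elapsedMs + " ms");
        }
    }
}

// e.g. inside your getObjectParents implementation:
// return CallTimer.timed("getObjectParents(" + objectId + ")",
//         () -> doGetObjectParents(repositoryId, objectId));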
The connection is established successfully by nca_connect_server(), but when I try to capture the currently open window with nca_get_top_window(), it returns NULL. Because of this, all subsequent requests fail.
It depends on how you obtained your script: whether it was recorded or written manually.
If the script was written manually, there is no guarantee that it can be replayed, since the sequence of API calls (and/or their parameters) may not be valid. If the script was recorded, there might be a missed correlation or something similar. A common way to spot the issue is to compare the recording and replay behavior (by comparing the log files from those two stages; make sure you are using the fully extended log level) to find out what goes wrong on replay, why, and how it diverges from the recorded activity.
I've recently taken over responsibility of a Cruise Control continuous integration server, although I know very little about either Cruise Control or Nant. But, such is life.
One of the regular build jobs this server is supposed to do is execute a Nant script that backs up files and data from one of the live servers to a backup server. I've discovered that this has been failing pretty much as far back as the user interface will let me see.
The error message is always the same:
Build Error: NAnt.Core.BuildException
Cannot copy '[filename].bak' to '[server]'.
But it doesn't always fail at exactly the same spot.
The Nant script that's executing is pretty much several iterations of this copy code:
<copy todir="${backup.root}\{dirname}">
    <fileset basedir="s:">
        <include name="**/*" />
    </fileset>
</copy>
Although some of the commands are 'move' rather than 'copy'.
The fact that this happens at different points in the script suggests to me that this is either down to a timeout, or to the script being unable to access files that are in use by the system when the script is running. But I've never been able to get a successful execution through, no matter what time of day I set it to run.
The error messages are not particularly helpful in identifying what the problem actually is. And googling them is not particularly enlightening. I'm not expecting a solution to this (though one would be nice) - but it'd be enormously helpful if I could just get some pointers on where to look next in terms of identifying the problem.
As this is a backup you could set the copy task to proceed on failure until you identify the problem. Obviously you don't want to leave it that way permanently.
See http://nant.sourceforge.net/release/0.85/help/tasks/copy.html
I would add the verbose='true' and failonerror='false' attributes to the copy task and see if that helps.
Setting overwrite based on your scenario may also help.
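For example, applied to the copy from the question (keep failonerror='false' only while you are diagnosing, since it hides real failures):
<copy todir="${backup.root}\{dirname}" verbose="true" failonerror="false" overwrite="true">
    <fileset basedir="s:">
        <include name="**/*" />
    </fileset>
</copy>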
First of all, extract the Nant section into a sandbox Nant file and run Nant yourself from the command line, so you don't have to wait until the scheduled backup time each day to test. Basically, set up a testbed for ease of testing.
Then run it and see where it fails. Then shorten the Nant task until it works, no matter how little work it's doing. Now you should know what first causes it to fail. Focus on that.
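Something like this, as a standalone build file pointed at one small directory, is enough to start with (the paths and names here are placeholders, not from the real script):
<?xml version="1.0"?>
<project name="backup-sandbox" default="copy-test">
    <property name="backup.root" value="\\backupserver\share" />
    <target name="copy-test">
        <copy todir="${backup.root}\test" verbose="true">
            <fileset basedir="s:\some\small\dir">
                <include name="**/*" />
            </fileset>
        </copy>
    </target>
</project>
Save it as sandbox.build and run nant -buildfile:sandbox.build from the command line whenever you want to test.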
I've used Nant a fair amount and:
Cannot copy '[filename].bak' to '[server]'
makes me think some properties aren't being resolved. Why would someone name a file [filename].bak? 'filename' looks like the name of a property that doesn't exist in Nant. Just going by gut feel here.
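A quick way to check that is to echo the property right before the copy; if it is not being resolved, that will show up immediately (backup.root is the property from the snippet in the question):
<echo message="backup.root = ${backup.root}" />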
I am writing a Rails 3.1 app, and I have a set of three cucumber feature files. When run individually, as with:
cucumber features/quota.feature
-- or --
cucumber features/quota.feature:67 # specifying the specific individual test
...each feature file runs fine. However, when all run together, as with:
cucumber
...one of the tests fails. It's odd because only one test fails; all the other tests in the feature pass (and many of them do similar things). It doesn't seem to matter where in the feature file I place this test; it fails if it's the first test or way down there somewhere.
I don't think it can be the test itself, because it passes when run individually or even when the whole feature file is run individually. It seems like it must be some effect related to running the different feature files together. Any ideas what might be going on?
It looks like there is coupling between your scenarios. Your failing scenario assumes that the system is in some state. When the scenario runs individually, the system is in that state, so the scenario passes. But when you run all the scenarios, the scenarios that ran previously change this state, so it fails.
You should solve it by making your scenarios completely independent. The work of one scenario shouldn't influence the results of other scenarios. This is highly encouraged in The Cucumber Book and Specification by Example.
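One common way to get that independence in a Rails/Cucumber setup (a sketch, assuming ActiveRecord and the database_cleaner gem, neither of which is mentioned in the question) is to reset the database around every scenario:
# features/support/hooks.rb (hypothetical file)
require 'database_cleaner'

DatabaseCleaner.strategy = :transaction

Before do
  DatabaseCleaner.start
end

After do
  DatabaseCleaner.clean
end
For scenarios that hit a separate server process (for example those tagged @javascript or @selenium), a :truncation strategy is usually needed instead, because transactional cleaning isn't visible across processes.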
I had a similar problem and it took me a long time to figure out the root cause.
I was using @selenium tags to test jQuery scripts on a selenium client.
My page had an ajax call that was sending a POST request. I had a bug in the JavaScript, and the POST request was failing. (The feature wasn't complete and I hadn't yet written steps to verify the result of the ajax call.)
This error was recorded in Capybara.current_session.server.error.
When the following non-selenium feature was executed, a Before hook within Capybara called Capybara.reset_sessions!
This then called:
def reset!
  driver.reset! if @touched
  @touched = false
  raise @server.error if @server and @server.error
ensure
  @server.reset_error! if @server
end
@server.error was not nil for each scenario in the following feature(s), and Cucumber reported each step as skipped.
The solution in my case was to fix the ajax call.
So Andrey Botalov and Doug Noel were right. I had carry over from an earlier feature.
I had to keep debugging until I found the exception that was being raised and investigate what was generating it.
I hope this helps someone else who didn't realise they had carry over from an earlier feature.
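If anyone else hits this, one way to make the carry over visible sooner (a debugging sketch of my own, using the server / error / reset_error! methods shown above rather than anything from Capybara's docs) is an After hook that reports and clears the error right after the scenario that caused it:
# features/support/hooks.rb (hypothetical file)
After do |scenario|
  server = Capybara.current_session.server
  if server && server.error
    puts "Rack server error during '#{scenario.name}': #{server.error.inspect}"
    server.reset_error!   # clear it so the next feature's reset! doesn't raise it
  end
end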