I am working on an existing CMS. We were wondering whether it would be useful to add a cron job. The only task we could come up with for it was updating the search index, so we have no idea whether we should add a cron job to our CMS or not ...
What do you think?
If you are asking whether or not it would be useful: I think it absolutely can be for certain tasks.
Having done some Drupal development, I know that it tends to use cronjobs for aggregating feeds, caching pages, creating logs and a whole host of other things.
See http://drupal.org/cron for a more in-depth example/explanation.
If you have a valid need to perform some batch type of operations off of your main server then a cronjob makes perfect sense, but I wouldn't start adding them just for the sake of having them.
Instead of using cron to execute your scripts directly, you could implement a scheduler in whatever language you're using and just have cron give it a heartbeat. I would do this if I had multiple tasks to schedule for my site and didn't want to scatter logic everywhere.
Something like this spec: http://code.google.com/p/djangotaskscheduler/wiki/OriginalRequirements
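To make the heartbeat idea concrete, here is a minimal Python sketch under assumed names and paths (the task list, intervals and state file are all made up); cron only fires the script once a minute, and the script decides which tasks are actually due:

```python
#!/usr/bin/env python3
"""Hypothetical heartbeat script: cron calls this every minute, e.g.
* * * * * /usr/bin/python3 /var/www/cms/heartbeat.py
and the script decides which scheduled tasks are actually due."""
import time

# Hypothetical task registry: (name, interval in seconds, callable)
def update_search_index():
    print("reindexing...")        # placeholder for the real CMS call

def purge_old_sessions():
    print("purging sessions...")  # placeholder for the real CMS call

TASKS = [
    ("update_search_index", 15 * 60, update_search_index),  # every 15 minutes
    ("purge_old_sessions", 24 * 3600, purge_old_sessions),  # once a day
]

LAST_RUN_FILE = "/tmp/cms_last_run_{}"  # assumed writable location for state

def last_run(name):
    try:
        with open(LAST_RUN_FILE.format(name)) as f:
            return float(f.read())
    except (OSError, ValueError):
        return 0.0

def mark_run(name, ts):
    with open(LAST_RUN_FILE.format(name), "w") as f:
        f.write(str(ts))

if __name__ == "__main__":
    now = time.time()
    for name, interval, func in TASKS:
        if now - last_run(name) >= interval:
            func()
            mark_run(name, now)
```

The crontab entry in the docstring is the only piece cron needs to know about; everything else stays in one place in your application code.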
If it is not a customer requirement, just leave it out. It only adds complexity and has to be tested without adding any real value.
In the documentation, it is written that it can be used for writing custom Django-admin commands. But my question is: why do we need to write custom Django admin commands? The example given in the official documentation is a bit dry to me. I would be really grateful if someone could give real-world examples that show how they are used in practice.
Django docs on management commands: https://docs.djangoproject.com/en/2.2/howto/custom-management-commands/
I mainly use them from cron / scheduled tasks.
Some potential examples would be (a sketch of one follows the list):
Sending out Reports/Emails
Running Scripts to Update+Sync some Values
Updating the Cache
Any large update to values: save it as a command to run on the production environment
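As a concrete illustration of the first example, here is a minimal sketch of a custom management command; the app name, model and report content are made up, but the BaseCommand structure is the standard one from the Django docs linked above:

```python
# yourapp/management/commands/send_report.py  (hypothetical app and paths)
from django.core.mail import mail_admins
from django.core.management.base import BaseCommand

from yourapp.models import Order  # hypothetical model


class Command(BaseCommand):
    help = "Email a daily summary of orders to the admins."

    def handle(self, *args, **options):
        count = Order.objects.count()  # replace with whatever report you need
        mail_admins(
            subject="Daily order report",
            message=f"There are now {count} orders in total.",
        )
        self.stdout.write(self.style.SUCCESS("Report sent."))
```

Dropped into yourapp/management/commands/send_report.py it becomes available as python manage.py send_report, which cron can call like any other script.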
I write and test it locally, but I don't want to copy and paste it into an SSH terminal because it sometimes gets all sorts of messed up in the paste.
I also have a management command dothing that sets up the entire project: runs migrations, collects static, imports the db, creates test users, creates required folder structures, etc.
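A setup-everything command like that can simply chain the built-in commands with call_command; this is only a rough sketch of the idea, not the actual dothing command described above:

```python
# yourapp/management/commands/dothing.py  (sketch only, hypothetical app name)
from django.core.management import call_command
from django.core.management.base import BaseCommand


class Command(BaseCommand):
    help = "Bootstrap the project: migrate, collect static files, etc."

    def handle(self, *args, **options):
        call_command("migrate", interactive=False)
        call_command("collectstatic", interactive=False, verbosity=0)
        # ...create test users, folder structures, import data, and so on
        self.stdout.write(self.style.SUCCESS("Project bootstrapped."))
```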
I also have a couple of commands that I use but haven't made into views: little tools to help me validate and clean data, spitting out a representation of it.
Django scheduled operations and report generation from cron is the obvious one.
Another I use is for loading data into the DB from csv files. It's easy in the management command environment to handle bad rows. I write the original csv row into an exceptions file (with an error-description column appended) and can then look at it and decide what to do about these rows. Sometimes it's just a trivial edit and then feeding it through the management command again. It's possible to do the same via a view, but that is extra work for, IMO, no gain.
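A rough sketch of that pattern as a management command (the model, columns and file naming are invented for illustration):

```python
# yourapp/management/commands/load_items.py  (illustrative only)
import csv

from django.core.management.base import BaseCommand

from yourapp.models import Item  # hypothetical model with name/price fields


class Command(BaseCommand):
    help = "Load items from a CSV file; bad rows go to an exceptions file."

    def add_arguments(self, parser):
        parser.add_argument("csv_path")

    def handle(self, *args, **options):
        exceptions_path = options["csv_path"] + ".exceptions"
        with open(options["csv_path"], newline="") as src, \
                open(exceptions_path, "w", newline="") as bad:
            writer = csv.writer(bad)
            for row in csv.reader(src):
                try:
                    Item.objects.create(name=row[0], price=row[1])
                except Exception as exc:  # keep the row plus the reason
                    writer.writerow(row + [str(exc)])
```

Running python manage.py load_items data.csv then leaves data.csv.exceptions with the rejected rows and the reason appended, ready to edit and feed through again.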
I am automating acceptance tests defined in a specification written in Gherkin using Elixir. One way to do this is an ExUnit addon called Cabbage.
Now ExUnit seems to provide a setup hook, which runs before each individual test, and a setup_all hook, which runs once before all the tests in a module.
Now when I try to isolate my Gherkin scenarios by resetting the persistence within the setup hook, it seems that the persistence is purged before each step definition is executed. But one scenario in Gherkin almost always needs multiple steps which build up the test environment and execute the test in a fixed order.
The other option, the setup_all hook, on the other hand, resets the persistence once per feature file. But a feature file in Gherkin almost always includes multiple scenarios, which should ideally be fully isolated from each other.
So the aforementioned hooks seem to allow me to isolate single steps (which I consider pointless) and whole feature files (which is far from optimal).
Is there any way to isolate each scenario instead?
First of all, there are alternatives, for example: whitebread.
If all your features need some similar initial step, maybe background steps are something to look into. Sadly those changes were mixed into a much larger rewrite of the library that never got merged. There is another PR which is also mixed in with other functionality and is currently waiting on a companion library update. So currently that doesn't work.
I haven't tested how the library behaves with setup hooks, but setup_all should work fine.
There is such a thing as tags, which I think haven't yet been published in a release but are in master. They work with a callback tag. You can take a closer look at the example in the tests.
There currently is a little bit of a mess. I don't have as much time for this library as I would like.
Hope this helps you a little bit :)
I am working on a BDD web development and testing project with other team members.
At the top we write feature files in Gherkin and run Cucumber to generate step functions. At the bottom we write Selenium page models and action library scripts. The rest is just filling in the step functions with Selenium scripts and finally running the Cucumber cases.
Sounds simple enough.
The problem comes starting when we write feature files.
Problem 1: Our client's requirements keep changing every week as the project proceeds, in terms of removing old ones and adding new ones.
Problem 2: On top of that, for some features, detailed steps keep changing too.
The problem gets really bad when we try to generate updated step functions from the updated feature files every day. There is quite a bit of housecleaning to do to keep step functions and feature files in sync.
To deal with problem 2, I remembered that one basic rule of writing Gherkin feature files is to use business domain language as much as possible. So I tried to persuade the BA to write the feature files a little more vaguely and not include too many UI-specific steps, so that we would not need to modify feature files/step functions as often. But she hesitates because the client's requirement document includes those details and she just tries to follow it.
To deal with problem 1, I have no solution.
So my question is:
Is there a good way to write feature files so that they are less impacted by the client's requirement changes? Can we write them vaguely enough to omit some details that may change (this way at least we can stabilize the step function prototypes), and if so, how far can we go?
When is a good time to generate the step definitions and fill in their content? From the beginning, or should we wait until the features stabilize a little? How often should we do it if the features keep changing? And is there a convenient way to clean up outdated step functions?
Any thoughts are appreciated.
Thanks,
If your client has specific UI requirements for which you are contracted to provide automated tests, then you ought to be writing those using actual test automation tools. Cucumber is not a test automation tool. If you attempt to use it as such, you are simply causing yourself a lot of pain for naught.
If, however, you are only contracted to validate that your application complies with the business rules provided by your client, during frequent and focused discovery sessions with them, then Cucumber may be able to help you.
In either case, you are going to ultimately fail if there's no real collaboration with your client. If they're regularly throwing new business rules or new business requirements over a transom through which you have limited or no visibility, then you are in a no-win situation.
I have a log format like this:
[26830431.7966868][4][0.013590574264526367][30398][api][1374829886.320353][init]
GET /foo
{"controller"=>"foo", "action"=>"index"}
[26830431.7966868][666][2.1876697540283203][30398][api][1374829888.4944339][request_end]
200 OK
Each entry is constructed using this pattern:
[request_id][user_id][time_from_request_started][process_id][app][timestamp][tagline]
payload
During a request I have many points where I log something - the app basically has complex behaviour. This helps me a lot in debugging user behaviour.
The way I would like to parse it is to build a directory structure like this:
req_id
|
|----[time_from_request_started][process_id][timestamp][tagline]
|
etc
Basically each directory will be named after the req_id, with files whose names are the rest of the tag line. These files will contain the payload.
And I will also have another directory, keyed by user id, which will contain symlinks to the requests made by that user.
First question: Is this structure correct? In my opinion it will allow easy, fast log access. The reason I want to use directories and files is that I like the Unix approach and want to try it (to feel its drawbacks and advantages for myself).
Second question: I would have no problem using Ruby to create this. But I would like to learn some new tool that is better suited for this. I am thinking about using just Unix tools (pipes, awk, etc.) to achieve this, or writing a parser in Go, which I am learning right now (I even have time to implement a simple map-reduce). What tool is best suited for this?
I would not store logs in a directory to see how the users behave.
Depending on what behaviour you want to keep track of you could use different tools. One of these could be mixpanel or keen.io.
Instead of logging what the user did in a log file you would send an event to either of those (they are pretty similar; pick the one you think has better docs / libraries), then you would graph those events to better understand the behaviour of your users. I've done this a lot recently; to display data in a nice way I've used rickshaw.
The key point why I'm suggesting this is that if you go the file route you will still have to find a way to understand your data, something that graphs help a lot with. Also, visualization is something keen.io does by default; you may still want to do your own graphs, but it's a good start.
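Roughly, the shift is from writing a line to a log file to emitting a structured event. The endpoint and helper below are purely hypothetical (not the actual Mixpanel or keen.io API); in practice you would use the official client library of whichever service you pick:

```python
import requests  # both services also provide official client libraries


def track(event_name, properties):
    # Hypothetical collector endpoint, for illustration only.
    requests.post(
        "https://analytics.example.com/events",
        json={"event": event_name, "properties": properties},
        timeout=2,
    )


# Instead of something like logger.info("user 666 hit GET /foo"):
track("request_end", {"user_id": 666, "path": "/foo", "duration": 2.19})
```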
Hope this helped.
Is this structure correct?
Only you can know that; it depends directly on how the data needs to be accessed and used.
What tool is best suited for this?
You could probably use Unix tools to achieve this, but it may also be a good exercise to practice your Go skills by writing it. It would also be more extensible.
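If you do go the file route, the parsing itself is small in any language. Here is a rough Python sketch of the idea, based only on the format shown in the question (the output layout and file naming are assumptions); it should translate to Go almost line for line:

```python
import os
import re
import sys

# Matches a header line: [request_id][user_id][elapsed][pid][app][timestamp][tagline]
HEADER = re.compile(r"^" + r"\[([^\]]*)\]" * 7 + r"$")


def split_entries(lines):
    """Yield (header_fields, payload_lines) pairs from the raw log."""
    fields, payload = None, []
    for line in lines:
        m = HEADER.match(line.rstrip("\n"))
        if m:
            if fields:
                yield fields, payload
            fields, payload = m.groups(), []
        elif fields:
            payload.append(line)
    if fields:
        yield fields, payload


def main(log_path, out_root="logs_by_request"):
    with open(log_path) as f:
        for (req_id, user_id, elapsed, pid, app, ts, tag), payload in split_entries(f):
            # One directory per request, one file per log point inside it.
            req_dir = os.path.join(out_root, req_id)
            os.makedirs(req_dir, exist_ok=True)
            name = f"[{elapsed}][{pid}][{ts}][{tag}]"
            with open(os.path.join(req_dir, name), "w") as out:
                out.writelines(payload)
            # Symlink the request directory under the user's directory.
            user_dir = os.path.join(out_root, "users", user_id)
            os.makedirs(user_dir, exist_ok=True)
            link = os.path.join(user_dir, req_id)
            if not os.path.islink(link):
                os.symlink(os.path.abspath(req_dir), link)


if __name__ == "__main__":
    main(sys.argv[1])
```

Run it as python split_logs.py app.log; the user directories then just hold symlinks back into the per-request directories, as described in the question.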
I am using SubSonic 2.1 and just starting out with it. I got everything loaded and it works for the simple tests I've been doing, but a general question: is there a way to update my build without having to rebuild the entire thing? An example would be if we change the layout of a table. Let's say we have an id and a name, and then later on add id, name and disabled. Is SubSonic smart enough to pick that up, or would it require a new build? Thank you very much for your time.
Cheers
I believe you use a command-line app to generate your mapping files, so that command-line app would have to be re-run for that to happen. Second, the mapping code would have to be compiled on the fly afterwards... most .NET applications do not do this.
But the biggest reason you would not want the mappings to be generated on the fly is speed. It takes time to do that, several seconds at least. And when would you trigger it? Not on every call -- that would be insane. Once a day? When during the day?
So no, SubSonic only generates the mapping files when you ask it to. If you change the database you risk breaking your application.
If you are using the build provider with ASP.NET, building your project will make SubSonic catch the change and update the generated classes.
Otherwise you will need to use SubCommander to generate the classes again.