OpenModelica internal time strictly seconds? - openmodelica

Trying to run a model with rates defined as "per day", while Modelica expects "per second". Changing the model rates is complex and potentially troublesome.
I wonder if Modelica can be set to a different time unit for simulation.
Probably using the wrong vocabulary to search as I can't find much helpful information on this.
It seems that the time unit of seconds is, or at least was, built into Modelica.
I did try to convert the model to rates "per second". It kind of works, but then flow rates and controllers also need to be set to this time unit, which is totally impractical from an engineering point of view.
It would be easier to change the internal time unit.
Can anyone confirm that seconds are compulsory for rates? Or even better: can anyone point me to how to change the default "internal" time unit?
Thank you!

In the plotting view you can change the simulation time unit.

Thank you for your answers. I am aware of the possibility to change the display time units. I take it as confirmation that models in OpenModelica should be set up with the internal time unit of seconds in mind, so I'll adapt the model I'm using. There might also be the option of using the unit attribute on the parameters that define process rates, and thus describe them properly. Will try out. Maybe.
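If the model does get converted after all, the only arithmetic involved is a fixed factor of 86400 s per day. A minimal sketch of the conversion (the parameter name and value are made up for illustration):

```python
# Hypothetical conversion of a "per day" rate to the "per second" rate that
# Modelica's built-in time unit (seconds) expects.
SECONDS_PER_DAY = 24 * 60 * 60   # 86400

growth_rate_per_day = 0.35                                 # hypothetical parameter
growth_rate_per_second = growth_rate_per_day / SECONDS_PER_DAY

print(f"{growth_rate_per_day} 1/d = {growth_rate_per_second:.3e} 1/s")
```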

Related

Recognize "ding-dong" sound

I'm building sound recognition model to detect "ding-dong" sound.
There are two procedures, training and testing.
Training data are "ding-dong" sounds generated by a device.
The model can detect "ding-dong" sounds generated by the same device, it works well.
But, when there is a new "ding-dong" sound generated by the second device, the performance will be bad.
I know the possible solution of this issue: record "ding-dong" sound generated by the second device and add it to training data.
But, there is always a new device, new "ding-dong" sound.
What should I do ?
You are facing an overfitting problem. Overfitting means that your model has been trained to work optimally on specific cases, namely the training data set. To overcome this problem you should train your model on many devices and then interpolate between them. Whether such interpolation is possible depends on the model you are using.
However, the previous advice is quite general. In your case, you may find a much easier way to do it. It all depends on how you define "ding-dong". If you could find a signature for the "ding-dong", it would be great. This signature should be invariant to all undesirable features.
For example, should "Diiiiing-doooooong" be accepted? If yes, you should find a signature which is invariant to the length of the audio clip. Is a "ding-dong" at a higher frequency acceptable? If yes, you should find a signature which takes frequencies as fractions of each other, not as absolute values, and so on...
BTW, I am sure you can google this and find many papers about your problem. They may be about "dang-dong" rather than "ding-dong", but you will still be able to benefit from them ;)
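For concreteness, here is a minimal sketch of one possible length- and pitch-invariant signature, assuming the clip is already loaded as a numpy waveform array; the harmonic-ratio idea is just one option, not a recommendation of a specific algorithm:

```python
import numpy as np

def dingdong_signature(waveform, sample_rate, n_harmonics=6):
    """Hypothetical signature: relative strengths of the first few harmonics
    above the dominant frequency. Using ratios makes it roughly invariant to
    absolute pitch, loudness and clip length."""
    spectrum = np.abs(np.fft.rfft(waveform))
    freqs = np.fft.rfftfreq(len(waveform), d=1.0 / sample_rate)

    peak = np.argmax(spectrum[1:]) + 1           # skip the DC bin
    peak_freq = freqs[peak]

    # Sample the spectrum at integer multiples of the dominant frequency and
    # normalise by the peak, so only relative harmonic strengths remain.
    sig = []
    for k in range(1, n_harmonics + 1):
        idx = np.argmin(np.abs(freqs - k * peak_freq))
        sig.append(spectrum[idx] / spectrum[peak])
    return np.array(sig)

# Two clips of different length, loudness or pitch can then be compared
# directly, e.g. np.linalg.norm(sig_a - sig_b) < threshold.
```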
So you want to recognize "ding dong sounds" from "other sounds".
One approach could be to also train the model to recognize "other sounds" as another class. That way, a new ding-dong is more easily associated with the "ding-dong sounds" class than with the "other sounds" class.
One drawback of this method could be a growth in the number of false alarms, but this task always involves a compromise between precision and recall.
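A minimal sketch of that two-class setup, assuming feature vectors have already been extracted for both "ding-dong" and "other" clips (the file names and the SVM choice are placeholders):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Hypothetical data: one feature vector per clip, label 1 = "ding-dong",
# label 0 = "other sounds" (door slams, speech, background noise, ...).
X = np.load("features.npy")   # shape (n_clips, n_features); assumed to exist
y = np.load("labels.npy")     # shape (n_clips,)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

clf = SVC(kernel="rbf", probability=True)
clf.fit(X_train, y_train)

# Precision/recall trade-off: raise or lower the decision threshold
# depending on how many false alarms are acceptable.
print("held-out accuracy:", clf.score(X_test, y_test))
```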

Is complete regression testing achievable with Behavior Driven Development (JBehave/Cucumber)?

Can we achieve regression testing coverage with BDD using JBehave/Cucumber?
Please share your input on whether complete regression testing is achievable with Behavior Driven Development (JBehave/Cucumber).
In all but the most trivial products, it's impossible to perform complete regression testing.
Consider these acceptance criteria:
Items can be replaced or refunded.
This leads to two scenarios: one where we refund the item, and one where we replace it. Now let's add a bit more to that:
Items are put in stock when returned or refunded, unless faulty.
Now we have four scenarios:
The one where we replace the item and it's faulty
The one where we refund the item and it's faulty
The one where we replace the item and put it back in stock
The one where we refund the item and put it back in stock.
Now let's add the criteria that a receipt must be in date. We need to check that refunds and replacements are both refused, but also that the item doesn't accidentally go back into stock, nor that any fault label is printed. So now we have eight scenarios.
Now let's think about the scenarios where we have a discount, and the ones where we can't scan the barcode, so we manually input the number, and the ones where the customer lost the receipt so we have to look it up using his loyalty card, and the ones where he paid by gift certificate...
Every scenario could, if the code was poorly designed, affect every other scenario. The number of potential combinations becomes exponential, very quickly.
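The growth is easy to see by enumerating the combinations directly; a small sketch (the concern names are just the ones from the example above):

```python
from itertools import product

# Each independent concern multiplies the number of scenarios.
outcome  = ["replace", "refund"]
stock    = ["faulty", "back in stock"]
receipt  = ["in date", "out of date"]
payment  = ["cash", "gift certificate"]

scenarios = list(product(outcome, stock, receipt, payment))
print(len(scenarios))   # 2 * 2 * 2 * 2 = 16 scenarios from just four concerns
```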
We hope that the code is well-designed, and that the different aspects of behaviour are well-encapsulated. We hope that all the scenarios have been considered. However, if that were the case, we wouldn't be accidentally changing behaviour we didn't mean to, and we wouldn't need regression testing at all. So we know that at least some of the time, in most teams, changes to one scenario do affect another.
Thinking about the responsibility of each piece of code can help to reduce this, which is why most teams practice both BDD and TDD (or BDD at a class level).
Additionally, it's impossible to ensure that every scenario has been thought of up-front, especially since every software project involves something new (or you wouldn't be doing it).
The only thing we can do is get confidence that the code works.
BDD is pretty good at giving us confidence. Not only does it help people to understand what the code does - so they are less likely to make mistakes and write bugs - but it also helps with automating the scenarios, so that there's less work for the testers, and they can focus more on looking for scenarios nobody's thought of yet (exploratory testing).
So, BDD can definitely help with regression testing... but nothing, not even BDD, can perform complete regression test coverage.

How to predict when next event occurs based on previous events? [closed]

Basically, I have a reasonably large list (a year's worth of data) of times that a single discrete event occurred (for my current project, a list of times that someone printed something). Based on this list, I would like to construct a statistical model of some sort that will predict the most likely time for the next event (the next print job) given all of the previous event times.
I've already read this, but the responses don't exactly help out with what I have in mind for my project. I did some additional research and found that a Hidden Markov Model would likely allow me to do so accurately, but I can't find a link on how to generate a Hidden Markov Model using just a list of times. I also found that using a Kalman filter on the list may be useful but basically, I'd like to get some more information about it from someone who's actually used them and knows their limitations and requirements before just trying something and hoping it works.
Thanks a bunch!
EDIT: So by Amit's suggestion in the comments, I also posted this to the Statistics StackExchange, CrossValidated. If you do know what I should do, please post either here or there
I'll admit it, I'm not a statistics kind of guy. But I've run into these kinds of problems before. Really, what we're talking about here is that you have some observed, discrete events and you want to figure out how likely it is that you'll see them occur at any given point in time. The issue is that you want to take discrete data and make continuous data out of it.
The term that comes to mind is density estimation, specifically kernel density estimation. You can get some of the effects of kernel density estimation by simple binning (e.g. count the number of events in a time interval such as every quarter hour or hour). Kernel density estimation just has some nicer statistical properties than simple binning (the produced data is often 'smoother').
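A minimal sketch of both options, assuming the event times are already available as plain numbers (here, hours since the start of the log); scipy's gaussian_kde is one common implementation of kernel density estimation:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical data: print-job timestamps in hours since the start of the log.
event_times = np.array([0.2, 0.9, 1.1, 1.3, 8.5, 9.0, 9.2, 24.1, 24.8])

# Option 1: simple binning -- count events per one-hour interval.
counts, edges = np.histogram(event_times, bins=np.arange(0, 26, 1.0))

# Option 2: kernel density estimation -- a smoother estimate of event density.
kde = gaussian_kde(event_times)
grid = np.linspace(0, 25, 200)
density = kde(grid)   # relative likelihood of an event at each point on the grid
```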
That only takes care of one of your problems, though. The next problem is still the far more interesting one: how do you take a timeline of data (in this case, only printer data) and produce a prediction from it? First things first: the way you've set up the problem may not be what you're looking for. While the miracle idea of having a limited source of data and predicting the next step of that source sounds attractive, it's far more practical to integrate more data sources to create an actual prediction. (E.g. maybe the printers get hit hard just after there's a lot of phone activity -- something that can be very hard to predict in some companies.) The Netflix Challenge is a rather potent example of this point.
Of course, the problem with more data sources is that there's extra legwork to set up the systems that collect the data.
Honestly, I'd consider this a domain-specific problem and take two approaches: Find time-independent patterns, and find time-dependent patterns.
An example time-dependent pattern would be that every weekday at 4:30 Suzy prints out her end-of-day report. This happens at specific times, on specific days of the week. This kind of thing is extremely simple to detect with predetermined intervals (every day, every weekday, every weekend day, every Tuesday, every 1st of the month, etc.): just create a curve of the estimated probability density function that's one week long, go back in time, and average the curves (possibly a weighted average via a windowing function for better predictions).
If you want to get more sophisticated, find a way to automate the detection of such intervals. (The data likely wouldn't be so overwhelming that you couldn't just brute-force this.)
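A minimal sketch of the fixed-interval idea, assuming the print times are Python datetimes; the one-hour bins and the plain (unweighted) average over weeks are arbitrary simplifications:

```python
import numpy as np
from datetime import datetime

# Hypothetical data: datetimes of past print jobs.
jobs = [datetime(2012, 5, 7, 16, 30), datetime(2012, 5, 14, 16, 35),
        datetime(2012, 5, 21, 16, 28), datetime(2012, 5, 9, 10, 2)]

# Fold every event onto a one-week axis: hour-of-week in [0, 168).
hours_of_week = [d.weekday() * 24 + d.hour + d.minute / 60 for d in jobs]

# One-hour bins over the week; the normalised histogram approximates the
# probability of a job in each hour-of-week slot, averaged over all past weeks.
hist, edges = np.histogram(hours_of_week, bins=np.arange(0, 169, 1.0))
weekly_profile = hist / hist.sum()

print("most likely hour-of-week for the next job:", edges[np.argmax(weekly_profile)])
```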
An example time-independent pattern is that every time Mike in accounting prints out an invoice list sheet, he goes over to Johnathan, who prints out a rather large batch of complete invoice reports a few hours later. This kind of thing is harder to detect because it's more free-form. I recommend looking at various intervals of time (e.g. 30 seconds, 40 seconds, 50 seconds, 1 minute, 1.2 minutes, 1.5 minutes, 1.7 minutes, 2 minutes, 3 minutes, ... 1 hour, 2 hours, 3 hours, ...) and subsampling them in a nice way (e.g. Lanczos resampling) to create a vector. Then use a vector-quantization style algorithm to categorize the "interesting" patterns. You'll need to think carefully about how you'll deal with the certainty of the categories, though -- if a resulting category has very little data in it, it probably isn't reliable. (Some vector quantization algorithms are better at this than others.)
Then, to create a prediction as to the likelihood of printing something in the future, look up the most recent activity intervals (30 seconds, 40 seconds, 50 seconds, 1 minute, and all the other intervals) via vector quantization and weight the outcomes based on their certainty to create a weighted average of predictions.
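A minimal sketch of the vector-quantization step, using k-means as the quantizer and random numbers in place of real activity counts; the interval choices and the number of clusters are arbitrary:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical data: one row per time window, columns are event counts in the
# trailing 30 s, 1 min, 5 min, 30 min and 2 h before that window.
activity_vectors = np.random.poisson(lam=2.0, size=(500, 5)).astype(float)

# Vector quantisation: k-means assigns each activity vector to one of k
# "interesting pattern" categories (the cluster centroids).
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0)
labels = kmeans.fit_predict(activity_vectors)

# Category certainty: clusters with very few members are not trustworthy and
# should be down-weighted in the final prediction.
print("members per category:", np.bincount(labels, minlength=8))
```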
You'll want to find a good way to measure certainty of the time-dependent and time-independent outputs to create a final estimate.
This sort of thing is typical of predictive data compression schemes. I recommend you take a look at PAQ since it's got a lot of the concepts I've gone over here and can provide some very interesting insight. The source code is even available along with excellent documentation on the algorithms used.
You may want to take an entirely different approach from vector quantization and discretize the data and use something more like a PPM scheme. It can be very much simpler to implement and still effective.
I don't know what the time frame or scope of this project is, but this sort of thing can always be taken to the N-th degree. If it's got a deadline, I'd emphasize getting something working first, and then making it work well. Something suboptimal is better than nothing.
This kind of project is cool. This kind of project can get you a job if you wrap it up right. I'd recommend you take your time, do it right, and post it up as functional, open source, useful software. I highly recommend open source, since you'll want to build a community that can contribute data source providers for more environments than you have access to, the will to support, or the time to support.
Best of luck!
I really don't see how a Markov model would be useful here. Markov models are typically employed when the event you're predicting is dependent on previous events. The canonical example, of course, is text, where a good Markov model can do a surprisingly good job of guessing what the next character or word will be.
But is there a pattern to when a user might print the next thing? That is, do you see a regular pattern of time between jobs? If so, then a Markov model will work. If not, then the Markov model will be a random guess.
As for how to model it, think of the different time periods between jobs as letters in an alphabet. In fact, you could assign each time period a letter, something like:
A - 1 to 2 minutes
B - 2 to 5 minutes
C - 5 to 10 minutes
etc.
Then go through the data and assign a letter to each time period between print jobs. When you're done, you have a text representation of your data, which you can run through any of the Markov examples that do text prediction.
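A minimal sketch of that encoding plus a first-order transition table, with made-up gaps in minutes and bucket boundaries taken from the list above:

```python
from collections import defaultdict

def gap_to_letter(gap_minutes):
    """Discretise the gap between two print jobs into a letter."""
    if gap_minutes < 2:
        return "A"
    if gap_minutes < 5:
        return "B"
    if gap_minutes < 10:
        return "C"
    return "D"

# Hypothetical data: minutes between consecutive print jobs.
gaps = [1.5, 3.0, 7.2, 1.1, 4.4, 12.0, 1.8, 2.2]
text = "".join(gap_to_letter(g) for g in gaps)   # -> "ABCABDAB"

# First-order Markov model: count transitions between consecutive letters.
transitions = defaultdict(lambda: defaultdict(int))
for a, b in zip(text, text[1:]):
    transitions[a][b] += 1

# Predict the most likely class of the next gap, given the last observed one.
last = text[-1]
prediction = max(transitions[last], key=transitions[last].get) if transitions[last] else None
print("after", last, "the most likely next gap class is", prediction)
```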
If you have an actual model that you think might be relevant for the problem domain, you should apply it. For example, it is likely that there are patterns related to day of week, time of day, and possibly date (holidays would presumably show lower usage).
Most raw statistical modelling techniques based on examining (say) time between adjacent events would have difficulty capturing these underlying influences.
I would build a statistical model for each of those known events (day of week, etc), and use that to predict future occurrences.
I think a predictive neural network would be a good approach for this task.
http://en.wikipedia.org/wiki/Predictive_analytics#Neural_networks
This method is also used for prediction in, for example, weather forecasting, the stock market, and sunspots.
There's a tutorial here if you want to know more about how it works.
http://www.obitko.com/tutorials/neural-network-prediction/
Think of a Markov chain as a graph with vertices connected to each other by a weight or distance. Moving around this graph eats up the sum of the weights or distances you travel. Here is an example with text generation: http://phpir.com/text-generation.
A Kalman filter is used to track a state vector, generally with continuous (or at least discretized continuous) dynamics. This is sort of the polar opposite of sporadic, discrete events, so unless you have an underlying model that includes this kind of state vector (and is either linear or almost linear), you probably don't want a Kalman filter.
It sounds like you don't have an underlying model, and are fishing around for one: you've got a nail, and are going through the toolbox trying out files, screwdrivers, and tape measures 8^)
My best advice: first, use what you know about the problem to build the model; then figure out how to solve the problem, based on the model.

How to estimate search application's efficiency?

I hope it belongs here.
Can anyone please tell me is there any method to compare different search applications working in the same domain with the same dataset?
The problem is that they are quite different - one is a web application which looks up a database where items are grouped in categories, and the other is a rich client which searches by keywords.
Are there any standard test guides for that purpose?
There are testing methods. You may use, for example, precision/recall or the F-beta measure to compute a rate that estimates the "efficiency". However, you need to build a reference set yourself. That means you will not really measure the efficiency in the domain, but rather the efficiency compared to your own judgment.
All the more reason to make sure that your reference set is representative of the data you have.
In most cases common sense will also get you the result.
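A minimal sketch of those measures, assuming a hand-labelled reference set of relevant item IDs for one query (the IDs are made up):

```python
def precision_recall_fbeta(retrieved, relevant, beta=1.0):
    """Compare what a search application returned against a hand-labelled
    reference set of relevant items for the same query."""
    retrieved, relevant = set(retrieved), set(relevant)
    true_positives = len(retrieved & relevant)
    precision = true_positives / len(retrieved) if retrieved else 0.0
    recall = true_positives / len(relevant) if relevant else 0.0
    if precision + recall == 0:
        return precision, recall, 0.0
    fbeta = (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
    return precision, recall, fbeta

# Hypothetical query: IDs returned by one application vs. the reference set.
print(precision_recall_fbeta(retrieved=[1, 2, 3, 9], relevant=[2, 3, 4, 5]))
```

Running the same reference queries through both applications gives comparable numbers, regardless of whether one is a web application and the other a rich client.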
If you want to measure the performance in terms of speed, you need to formulate a set of assumed queries against the search and query your search engine with these at a given rate. That's doable with every common load-testing tool.

How do you measure if an interface change improved or reduced usability?

For an ecommerce website how do you measure if a change to your site actually improved usability? What kind of measurements should you gather and how would you set up a framework for making this testing part of development?
Multivariate testing and reporting is a great way to actually measure these kind of things.
It allows you to test what combination of page elements has the greatest conversion rate, providing continual improvement on your site design and usability.
Google Web Optimiser has support for this.
The same methods you used to identify the usability problems to begin with: usability testing. Typically you identify your use cases and then run a lab study evaluating how users go about accomplishing certain goals. Lab testing is typically good with 8-10 people.
The more informative methodology we have adopted to understand our users is anonymous data collection (you may need user permission, make your privacy policies clear, etc.). This simply means evaluating which buttons/navigation menus users click on, or how users delete something (i.e. when changing quantity, are more users entering 0 and updating the quantity, or hitting X?). This is a bit more complex to set up; you have to develop an infrastructure to hold this data (which is actually just counters, i.e. "Times clicked x: 138838383, Times entered 0: 390393") and allow data points to be created as needed to plug into the design.
To push the measurement of an improvement of a UI change up the stream from end-user (where the data gathering could take a while) to design or implementation, some simple heuristics can be used:
Is the number of actions it takes to perform a scenario less? (If yes, then it has improved). Measurement: # of steps reduced / added.
Does the change reduce the number of kinds of input devices to use (even if the # of steps is the same)? By this I mean: if you take something that relied on both the mouse and keyboard and changed it to rely only on the mouse or only on the keyboard, then you have improved usability. Measurement: Change in # of devices used.
Does the change make different parts of the website consistent? E.g. if one part of the e-commerce site loses changes made while you are not logged on and another part does not, this is inconsistent. Changing it so that they have the same behavior improves usability (preferably toward the more fault-tolerant behavior, please!). Measurement: Make a graph (really a flow chart) mapping the ways a particular action could be done. Improvement is a reduction in the # of edges on the graph.
And so on... find some general UI tips, figure out some metrics like the above, and you can approximate usability improvement.
Once you have these design approximations of user improvement, and then gather longer term data, you can see if there is any predictive ability for the design-level usability improvements to the end-user reaction (like: Over the last 10 projects, we've seen an average of 1% quicker scenarios for each action removed, with a range of 0.25% and standard dev of 0.32%).
The first way can be fully subjective or partly quantified: user complaints and positive feedback. The problem with this is that you may have some strong biases when it comes to filtering that feedback, so you had better make it as quantitative as possible. Having a ticketing system to file every report from the users and gathering statistics about each version of the interface might be useful. Just get your statistics right.
The second way is to measure the difference in a questionnaire taken about the interface by end-users. Answers to each question should be a set of discrete values and then again you can gather statistics for each version of the interface.
The latter way may be much harder to set up (designing a questionnaire, and possibly the controlled environment for it as well as the guidelines to interpret the results, is a craft by itself), but the former makes it unpleasantly easy to mess up the measurements. For example, you have to consider the fact that the number of tickets you get for each version depends on how long it has been in use, and that all time ranges are not equal (e.g. a whole class of critical issues may never be discovered before the third or fourth week of usage, or users might tend not to file tickets during the first days of use, even if they find issues, etc.).
Torial stole my answer. Although, if there is a measure of how long it takes to do a certain task, and the time is reduced while the task is still completed, then that's a good thing.
Also, if there is a way to record the number of cancels, then that would work too.
