TestNumberer in OrigenTesters? - origen-sdk

I see that there is a TestNumberer class in OrigenTesters at https://github.com/Origen-SDK/origen_testers/blob/master/lib/origen_testers/generator/test_numberer.rb . However, it looks pretty bare and doesn't look like it's being used internally anywhere. So, my question is: does this TestNumberer... do anything? I don't see anything in the guides about automatically generating test numbers. What I'd like is something like:
test_numberer.set_base(1000) # for example
test_numberer.set_offset(5)
func (..., test_number: test_numberer.next) #=> test_number = 1000
func (..., test_number: test_numberer.next) #=> test_number = 1005
Possibly even embed incrementing the test number into the func function itself in the interface.
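For illustration, a rough sketch of the kind of helper I mean (purely hypothetical, not an existing OrigenTesters API):
class TestNumberer
  def set_base(base)
    @next_number = base
  end

  def set_offset(offset)
    @offset = offset
  end

  # Return the current number, then advance by the offset (default 1)
  def next
    n = @next_number
    @next_number += (@offset || 1)
    n
  end
end
The interface's func method could then fill in test_number from a shared instance of something like this whenever the caller doesn't pass one explicitly.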
Thanks!
(For the record, I actually already have this in one of my apps for personal use, but am wondering if OrigenTesters already has one, and if not, if it could use one)

No, that is old and dead code which should be removed.
There is a solution for generating test numbers though, and that is the TestIds plugin: http://origen-sdk.org/test_ids/
I'm still not totally happy with how it works, but I use it in production today in a large test flow module.
I would say it does solve these problems effectively:
How do you assign test (and/or bin) numbers within an IP test block, in such a way that it can be included in different SoC test programs which may each want to assign a different range or even a different numbering scheme to the tests for that IP.
How do you automatically assign test (and/or bin) numbers in such a way that they will stick and won't change when other tests are added or removed from the flow in future.
I don't really have anything specific that I know is wrong with it, just some niggles that have come up from time to time and it would be good to get other people using it and involved with it to help iron these out.
One of the things that I have come to realize is that it is easier to manage if you explicitly give tests a number (or bin) ID in the test flow like this:
func :blah, number: :blah_test1
func :blah, number: :blah_test2
This makes it easier to control when you want same-named tests to have the same number or not, whilst not locking down to any particular number.
Anyway, you should find the documentation of it pretty good and obviously ask further questions here if you have any.

Related

Getting started at tradingview and referencing balances

I may be new to TradingView, but their Pine Script programming language seems to be the best I've ever seen for automated trading. They seem to really want me to succeed, but I cannot find where it tells me how to access account balances. I am trying to make a script where I do not reinvest the extra I make, so I have to be able to reference the available amount. I have not quite finished the manual yet, but I do not see what variable or function allows me to do that, or at least not where I would expect it.
Have a look at strategy.equity. There are quite a few built-in variables for strategy values. You can inspect them from the refman by searching on "strategy".
You can also calculate your own metrics using a technique like this one if you don't find what you need in the built-ins.
And welcome to Pine! This is the best place to start your journey:
https://www.tradingview.com/?solution=43000561836

Reusing cucumber steps in a large codebase/team

We're using cucumberJS on a fairly large codebase with hundreds of cucumber scenarios and we've been running into issues with steps reuse.
Since all the steps in Cucumber are global, it's quite difficult to write steps like "and I select the first item in the list" or similarly high-level ones. We end up having to append "on homepage" (so: "I select the first item in the list of folders on homepage"), which just feels wrong and reads wrong.
Also, I find it very hard to figure out what the dependencies between steps are. For example, we use an "and I see " pattern for storing a page object reference on the cucumber World instance, to be used in some later steps. I find that very awkward, since those dependencies are all but invisible when reading the .feature files.
What are your tips on how to use cucumber within a large team? (Including "ditch cucumber and use instead" :) )
Write scenarios/steps that are about what you are doing and why you are doing it rather than about how you do things. Cucumber is a tool for doing BDD. The key word here is Behaviour, and its interpretation. The fundamental idea behind Cucumber and steps is that each piece of behaviour (the what) has a unique name and place in the application, and in the application context you can talk about that behaviour using that name without ambiguity.
So your examples should never be in steps, because they are about HOW you do something. Good steps never talk about clicking or selecting. Instead they talk about the reason Why you are clicking or selecting.
When you follow this pattern you end up with fewer steps at a higher level of abstraction that are each focused on a particular topic.
This pattern is easy to implement, and moderately easy to maintain. The difficulty is that to write the scenarios you have to have a profound understanding of what you are doing and why it's important, so you can discover/uncover the language you need to express yourself distinctly, clearly and simply.
I'll give my standard example about login. I use this because we share an understanding of What login is and Why it's important. Realise that before you can login you have to be registered, and that is complex.
Scenario: Login
Given I am registered
When I login
Then I should be logged in
The implementation of this is interesting in that I delegate all work to helper methods:
Given 'I am registered' do
  @i = create_registered_user
end

When 'I login' do
  login_as(user: @i)
end

Then 'I should be logged in' do
  should_be_logged_in
end
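For example, the helpers might live in a support module along these lines (the bodies here are hypothetical, assuming a Capybara-style web setup and a User model; adapt to your own factories and drivers):
module LoginHelpers
  # Create and return a user that is able to log in
  def create_registered_user
    User.create!(email: 'user@example.com', password: 'secret123')
  end

  # Drive the UI to log the given user in
  def login_as(user:)
    visit '/login'
    fill_in 'Email', with: user.email
    fill_in 'Password', with: 'secret123'
    click_button 'Log in'
  end

  # Assert that the current session is authenticated
  def should_be_logged_in
    expect(page).to have_content('Log out')
  end
end

World(LoginHelpers) # make the helpers available to every step definition
Keeping all the login-related helpers together like this is also what makes the naming discipline described below manageable.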
Now your problem becomes one of managing helper methods. What you have is a global namespace with a large number of helper methods. This is now a code and naming problem, and all you have to do is:
- keep the number of helper methods as small as possible
- keep each helper method simple
- ensure there is no ambiguity between method names
- ensure there is no duplication
This is still a hard problem, but
- it's not as hard as what you are dealing with
- getting to this point has a large number of additional benefits
- it's now a code problem, and lots of people have experience of managing code.
You can do all these things with
- naming discipline (all my methods above have login in their name)
- clever but controlled use of arguments
- frequent refactoring and code cleaning
The code of your helper methods will have
- the highest churn of all your application code
- the greatest need to be simple and clear
So currently your problem is not about Cucumber, it's about the debt you have with your existing scenarios and their implementation. You have to pay off your debt if you want things to improve. Good luck!

Lua - How would I spoof the GetFenv function, without changing it?

I have some code I would like to find a vulnerability in. I coded it, and I would be grateful if you could help me out with finding anything in it.
My one and only concern is that getfenv may be able to be spoofed in some way.
coroutine.wrap(function()
    while wait() do
        for i, v in func_table.pairs(func_table) do
            if func_table.getfenv()[i] ~= v then
                return ban_func(10, 23)
            end
        end
    end
end)()
To be clear, the ban_func is inside the func_table, this will automatically detect its change in data and will ban accordingly. The only way I think they, being the exploiter/cheater, would be able to change anything is by spoofing getfenv.
If you could explain to me how it would be possible to spoof such a function and/or how to patch a spoof on the function, all without changing any of its own data, I would be very happy!
I assume that this code is running on the exploiter/cheater's machine. Fundamentally, there is no way to guarantee the security of client code. Your checks can be removed, and the checks on your checks can be removed. Even the Lua binary itself can be changed internally, so that getfenv does anything in addition to what it normally does. Implementing a strong border between server and client logic is the only true way to secure applications.
One possible attack in this case is if the client runs Lua code in the same lua_State before your func_table is set up. In that case, they could sandbox you, like my Lua sandbox implementation found here.
Another attack is taking advantage of metamethods to make func_table.getfenv()[i] ~= v return true. This could be detected by using rawequal, checking the type of func_table.getfenv()[i], or using the original functions as keys in a table of true values and checking if the table at key func_table.getfenv()[i] is true.
Yet another attack would be to edit both the global state and your table. It is common, when changing the address of a function, to change ALL references to that address in RAM, which would include the internal reference inside your table.
Since you are using wait() I assume that you are running this code in Roblox. In that case, as I've emphasized on their developer forums, the Experimental Mode (previously FilteringEnabled) setting is the only way to secure your game from exploiters. This prevents clients from editing the game most of the time. They still have full control of their character position and a couple of other instance types (like Hats, I believe), can ask the server to make changes through RemoteEvent and RemoteFunction instances, and can wall hack (set wall transparency); to counter this, only send the client parts that it can see.

Generic graphing and charting solutions

I'm looking for a generic charting solution (ideally not a hosted one) that provides the following features:
Charting a tuple of values where the values are:
1) A service identifier (e.g. CPU usage)
2) A client identifier within that service (e.g. server IP)
3) A value
4) A timestamp with millisecond/second resolution.
Optional:
I'd also like to extend the concept of a client identifier: taking the above example, I'd like to store statistics for each core separately, so another identifier would be Core 1/Core 2, and so on.
Now, to make sure I'm clearly stating my problem: I don't want a utility that collects these statistics. I'd like something that stores them, but this is also not mandatory; I can always store them in MySQL or such.
What I'm looking for is something that takes values such as these and charts them nicely, in a multitude of ways (timelines, motion, and the usual ones [pie, bar..]). Essentially, a nice visualization package that allows me to make use of all this data. I'd be collecting data from multiple services and multiple applications, and the datapoints will be of varying resolution. Some of the data will include multiple layers of nesting, some none. (For example, CPU would go down to Server IP, CPU#, whereas memory would only be Server IP but would include a different identifier, i.e. free/used/cached, as the "secondary" identifier. Something like average request latency might not have a secondary identifier at all, in the case of ping.) What I'm trying to get across is that having multiple layers of identifiers would be great. To add one final example of where multiple identifiers would be great: adding an extra identifier on top of ip/cpu#, namely, process name. I think the advantages of that are obvious.
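To make the nesting idea concrete, the records I have in mind would look roughly like this (purely illustrative; written as Ruby-style hashes only because it's compact):
samples = [
  { service: 'cpu',    ids: ['10.0.0.5', 'core1', 'nginx'], value: 42.0,  at: Time.now },
  { service: 'memory', ids: ['10.0.0.5', 'free'],           value: 512.0, at: Time.now },
  { service: 'ping',   ids: ['10.0.0.5'],                   value: 3.2,   at: Time.now },
]
The point is that each metric can carry a different number of identifier levels.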
For some applications, we might collect data at a very narrow scope, focusing on every aspect, in other cases, it might be a more general statistic. When stuff goes wrong, both come in useful, the first to quickly say "something just went wrong", and the second to say "why?".
Further, it would be a nice thing if the charting application threw out "bad" values; that is, if for some reason our monitoring program started to report values of 300% CPU used on a single core for 10 seconds, it'd be nice if the charts themselves didn't reflect it in the long run. Some sort of smoothing, maybe? This could obviously be done at the data layer though, so it's not a requirement at all.
Finally, comparing two points in time, or comparing two different client identifiers of the same service etc without too much effort would be great.
I'm not partial to any specific language, although I'd prefer something in (one of the following) PHP, Python, C/C++, C#, as these are languages I'm familiar with. It doesn't have to be open source, it doesn't have to be a library, I'm open to using whatever fits my purpose the best.
More of a P.S than a requirement: I'd like to have pretty charts that are easy for non-technical people to understand, and act upon too (and like looking at!).
I'm open to clarifying, and, in advance, thanks for your time!
I am pretty sure that Protovis meets all your requirements, but it has a bit of a learning curve. You are meant to learn by example, and there are plenty of examples to work from. It makes some pretty nice graphs by default. Every value can be a function, so you can do things like get rid of your "bad" values.

Can TDD be a valid alternative to overkill data validation?

Consider these two data validation scenarios:
Check everything everywhere
Make sure that every method that takes one or more arguments actually checks them to ensure that they're syntactically valid.
Pros
Very fine check granularity.
If the code that is being written is for some kind of library, we make sure to limit the damage that can be done if the developers that will be using it fail to provide valid data.
Cons
It's costly to always perform checks that most of the time shouldn't be needed.
It's still possible to forget to add a check every now and then.
More code is being written and hence in need of maintenance.
Make use of TDD goodness
Validate data only when it enters your code from the external world.
To make sure that internally data will always be syntactically correct, create tests that check every method that returns a value, to make sure that if valid data enters, valid data exits.
The pros and the cons are practically switched with the ones from the former approach.
As of now I'm using the first approach, but since I'm employing test driven development I thought that maybe I could go with the second one.
The advantages are clear, still, I wonder if it's as secure as the first method.
It sounds like the first method is contract driven, and one aspect of that is that you also need to verify that what you return from any public interface meets the contract.
But, I think that both approaches are valid, but very different.
TDD only partially deals with the public interface, in that it should check that every input is properly validated. Unfortunately, unless you have all your validation in separate functions, it becomes very difficult to adequately test that a function of 3 or 4 parameters is checking every one of them for validity. The number of tests you have to write is quite high in either approach.
If you are using a library, then every function that can be called directly from the outside (outside being outside the library) will need to check that every input is valid and that invalid input is handled as per the contract, either returning a null or throwing an exception. But it must be in agreement with the documentation.
Once you have verified that, there is no reason to force the verification on private functions, as those can only be called from within the library, where you have already ensured that you are only dealing with valid data.
Lots of tests will be needed regardless, unfortunately. All these tests do is ensure that you don't have any surprise problems, but that should generally help justify the cost of writing and maintaining them.
As to your question, if your tests are really well written, and you ensure that all validity checks are done completely, then it should be as secure, but the risk is that if you believe it is secure and you have poorly written tests then it will actually be worse than no tests, as there is an assumption that your tests are well-written.
I would use both methods until you know your tests are well written, then just go with TDD.
My opinion is that in the first scenario, two of your Cons outweigh everything else:
It's costly to always perform checks that most of the time shouldn't be needed.
More code is being written and hence in need of maintenance.
Also, technically TDD has no bearing on this question, because it is not a testing technique. More later...
To mitigate the Cons I would strongly advocate (as I think you say) splitting the code into an outside and an inside: The outside is where all the validation occurs. Hopefully this is but a thin wrapper around the inside, to prevent GIGO. Once inside, data never needs to be validated again.
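A minimal sketch of that split, using a hypothetical Temperatures module (not from the question): the outside method validates everything, the inside never re-checks.
module Temperatures
  # Outside: the only public entry point; all validation happens here
  def self.record(reading)
    raise ArgumentError, 'reading must be numeric' unless reading.is_a?(Numeric)
    raise ArgumentError, 'reading out of range' unless (-100..100).cover?(reading)
    Internal.store(reading.to_f)
  end

  # Inside: assumes it only ever sees valid data, so it never re-validates
  module Internal
    def self.store(value)
      (@values ||= []) << value
    end
  end
end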
As for TDD, I would strongly advocate (as you are now doing) employing it to develop your code, with the added benefit of leaving a trail of tests that become a regression test suite. Now you will naturally develop your outside code to perform robust validation, with the promise of easily adding any checks that you might initially forget. Your inside code can be developed assuming it will only handle valid data, but TDD will still give you the confidence that it will function to spec.
I'm saying that I would go with the second approach, as I've described, independently of whether I'm developing with TDD, or not (but TDD is always my first choice).
The advantages are clear, still, I wonder if it's as secure as the first method.
This completely depends on how well you test it.
This could be just as secure, if the following two criteria are met:
Every publicly exposed means of adding data to the system is validated completely
Every internal method that translates data is completely and adequately tested
However, I question that this would be easier or that it would require less code. The amount of code required to check every public entry point is going to be very similar to the amount of code required to validate each method. You're going to need more checks in the entry points, since they'll have to check things that might otherwise be checked internally.
For the second method, you need two good sets of tests. You must not only check that
To make sure that if valid data enters, valid data exits.
You must also check that if invalid data enters, an exception is thrown. I suppose you still have to validate data and kick it out if it is invalid. This is really the only way if you don't want pesky ArgumentNullExceptions or other cryptic errors in your production application. However, TDD can really toughen up the quality of all that checking (especially with Fuzz Testing).
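A rough RSpec-style sketch of both sets of tests, against a hypothetical Account.deposit entry point (names invented here, not from the question):
# Hypothetical subject: a public entry point that validates its input
module Account
  def self.deposit(amount)
    raise ArgumentError, 'amount must be a positive number' unless amount.is_a?(Numeric) && amount.positive?
    amount # pretend we credited it
  end
end

RSpec.describe 'Account.deposit' do
  it 'returns the credited amount for valid input' do
    expect(Account.deposit(10)).to eq(10)
  end

  it 'raises for non-numeric input' do
    expect { Account.deposit('ten') }.to raise_error(ArgumentError)
  end

  it 'raises for non-positive input' do
    expect { Account.deposit(-5) }.to raise_error(ArgumentError)
  end
end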
One item is missing from your list of Pros and Cons, and it is important enough to make unit testing a much safer method than maniacal parameter checking.
You just have to consider the When and the Where.
For unit testing the when and the where are:
when: at design time
where: in a dedicated source file outside of the application code
For overkill data checking they are:
when: at runtime
where: entangled in the application source code, typically using asserts.
That is the point: code covered by unit tests detects errors at design time, when you run the tests. If you are the paranoid and schizophrenic kind of tester (the best kind), you write tests designed to break whatever can be broken, checking each data boundary and perverse input. You also use code coverage tools to ensure every branch of every alternative is tested. You have no limit: tests live in their own files and do not clutter the application. It doesn't matter if you end up with ten times as many test lines as actual application code; there is no runtime penalty and no readability penalty.
On the other hand, integrated overkill checking detects errors at runtime. In the worst case it will detect errors on the user's system, where you can do nothing about it (if you even ever hear of the error happening). Also, even if you are the paranoid kind, you will have to limit your checking: assertions just can't be 90 percent of the application code. They raise readability and maintenance issues, and often a heavy performance penalty. Where will you stop, then: only checking parameters for external input? Checking every possible or impossible input of inner functions? Checking every loop invariant? Also testing behaviour when out-of-flow data (globals, system files, etc.) is changed? You must also be conscious that assertion code can itself contain bugs. What if the formula of an assertion performs a division? You must ensure it will not lead to a divide-by-zero error or the like.
Another problem is that in many cases you just don't know what can be done when an assertion fails. If you are at a real entry point you can return something understandable to your user or the library user... but when you are checking inner functions, often there is nothing sensible you can do.
