So, with my limited knowledge of TestComplete scripting, it seems as though one should look at the Object Browser to see your windows, and drive the UI via Name Mapping of those objects by clicking, selecting, or populating their fields.
I have a question about how to do assertions in the JavaScript scripted tests. If I want to see whether a certain window looks like it did at an earlier point, what I have been doing is creating a checkpoint via keyword tests at that time. I feel like I should be doing this through the API, though. Is there an area of the documentation that explains how to do this via code, rather than using the keyword checkpoints?
Bob, the checkpoints idea is not limited to Keyword Tests. You can use checkpoints in scripts as well. When recording a script, you just create the needed checkpoint type via the Recording toolbar (I guess you need the Region Checkpoint in your case), and the corresponding script code is generated for you. From that generated code, you will see how checkpoints are called from a script.
As for the documentation, the "Region Checkpoints" help topic does a good job of explaining the basics and linking to other topics worth reading. The "Creating Region Checkpoints" help topic shows the procedure step by step.
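To give an idea of what the generated code boils down to, here is a minimal sketch of a scripted region checkpoint. It assumes the project uses TestComplete's Python script language (the JavaScript call is essentially the same), a Name Mapping alias Aliases.MyApp.MainWindow, and a stored region checkpoint named OrderSummary; all of those names are placeholders for whatever your own recording produces.

```python
# Minimal sketch: calling a recorded Region Checkpoint from a TestComplete script.
# Aliases.MyApp.MainWindow and the checkpoint name OrderSummary are placeholders.

def CheckMainWindowLooksAsBefore():
    window = Aliases.MyApp.MainWindow
    window.Activate()  # make sure the window is visible before comparing pixels

    # Compares the live window against the stored baseline image and
    # automatically posts a passed/failed checkpoint result to the test log.
    Regions.OrderSummary.Check(window)
```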
I hope this helps. Let me know if there are unclear points.
I am searching for a way to determine how a job was initiated on the HPCC cluster.
There are several ways to submit a job. For example:
1- a manual submission via the ECL IDE / ECL Watch
2- an external cron submission
3- an ECL submission of dynamically built code
4- if a file lands in a directory, it triggers a submission
etc.
I can retrieve some important information by calling STD.System.Workunit.WorkunitList, but I cannot find any function that returns an attribute indicating the source of that submission.
HPCC is a data-centric platform and ECL reflects that approach, so I am attempting to build a matrix that defines the code in relation to that data. A product is technically a bunch of data (files) that is the result of source input -> scrub and transformation processes -> final base files. Those files are then prepped / indexed for external use:
1- Roxie queries
2- PowerBI
3- webpage
4- reports ftp'd or emailed
etc.
I want to build this matrix that defines (by product) the initiating job(s), where they were initiated, any schedule (?), the associated input/output files (flagging whether they are source/intermediate/base/output). I am trying to design this so that the matrix can be dynamically built, because as we all know:
(1) this type of documentation never exists anywhere, so someone new coming in to work on a product cannot go and see the scope and life cycle of the data,
(2) nobody likes to document,
(3) the second any manual documentation is actually created and saved, it is out of sync with reality.
So far, the design will be a collection of files (defined by the level of detail) which would then be JOINed together to yield the final matrix. I am not sure whether this would end up as a PowerBI report or a webpage... still tossing that around. Either way, this might prove to be something useful for anyone using HPCC who wants a 30,000 ft view of their product.
I have attempted to programmatically scan a WUID's output, looking for the necessary attributes, but I have had little success.
I appreciate any assistance / comments.
No matter which component submits ECL to execute on the platform, they all ultimately end up going through the same WsWorkunits API, which is the public SOAP / REST interface.
While some client applications will leave a fingerprint so you can deduce where it came from, it is not a foolproof mechanism...
For example, in http://play.hpccsystems.com:8010/esp/files/index.html#/workunits/W20221115-075604/xml you can see that the ECL IDE appends some meta information to the workunit (it stores the IDE version number in the "Application" section).
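If you want to pull that "Application" fingerprint out programmatically rather than through the ECL Watch UI, one option is to query WsWorkunits from a small script. The sketch below uses Python's requests library against the WUInfo method; the parameter and field names are quoted from memory and should be verified against your own ESP's WsWorkunits service description, and the host and WUID are just the ones from the example above.

```python
import requests

# Placeholders: point these at your own ESP / ECL Watch instance.
ESP = "http://play.hpccsystems.com:8010"
WUID = "W20221115-075604"

# WsWorkunits methods are reachable over plain HTTP; asking for .json avoids SOAP.
# Parameter and field names (e.g. IncludeApplicationValues, ApplicationValues)
# should be checked against your ESP's WSDL, as they can vary between versions.
resp = requests.get(
    f"{ESP}/WsWorkunits/WUInfo.json",
    params={"Wuid": WUID, "IncludeApplicationValues": "1"},
    timeout=30,
)
resp.raise_for_status()

info = resp.json().get("WUInfoResponse", {}).get("Workunit", {})
app_values = info.get("ApplicationValues", {}).get("ApplicationValue", [])

# Entries typically carry Application / Name / Value triples; clients like the
# ECL IDE record their version here, which is the "fingerprint" mentioned above.
for av in app_values:
    print(av.get("Application"), av.get("Name"), av.get("Value"))
```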
I may be new to TradingView, but their Pine Script programming language seems to be the best I've ever seen for automated trading. They seem to really want me to succeed, but I cannot find where it tells me how to access certain balances. I am trying to write a script where I do not reinvest the extra I make, so I have to be able to reference the available amount. I have not quite finished the manual yet, but I do not see which variable or function allows me to do that, or at least not where I would expect it.
Have a look at strategy.equity. There are quite a few built-in variables for strategy values; you can find them in the reference manual by searching on "strategy".
You can also calculate your own metrics using a technique like this one if you don't find what you need in the built-ins.
And welcome to Pine! This is the best place to start your journey:
https://www.tradingview.com/?solution=43000561836
I am reading a lot about Gherkin, and I had already read that it is not good to repeat steps and that the "Background" keyword should be used for this. But in the example on this page, the same "Given" is repeated again and again. Could it be that I am doing something wrong? I would like to know your opinion about it:
As with several things, this is a topic that will generate different opinions. In this particular example I would have moved the "Given that I select the post" step to the Background section, as it seems to be a prerequisite for all scenarios in this feature. Of course, this would leave the scenarios in the feature without an actual Given section, but those steps would be incorporated from the Background section on execution.
I have also seen cases where the decision to move steps to the Background is a trade-off between having more or fewer feature files and how these are structured. For example, if there are 10 scenarios for a particular feature with a lot of similar steps between them, but 1 or 2 scenarios that do not require a particular step, then those 1 or 2 scenarios would have to be moved into a new feature file in order to keep the exact same steps in the Background section of the original feature.
Of course it is correct to keep the scenarios like this. From a tester's perspective, the scenarios/test cases should run independently; therefore, you can keep these tests separate for each piece of functionality.
But if you are doing integration testing, some of these test cases can be merged, so you can cover multiple test cases in one scenario.
And since the "Given" step repeats, you can put it in the Background so you don't have to call it in each scenario.
Note: these separate scenarios will be handy when you run the scripts selectively with annotation tags, when you just have to check a specific piece of functionality or a bug fix.
So, a bit of a general question. I work as a data analyst for a startup. My primary process involves taking a client's existing customer data and cleansing/normalizing it to fit into our platform, once, as part of our onboarding process. A member of our team exports the data from the system the client is transitioning from or, if they kept track of it in house, we receive the Excel log they used to track it. It is always in a different format and requires extensive cleansing (avg 1 min/record). We take what is usually one large table (.xlsx format) and, after cleansing, split it into four .csv files, which we load as four tables on our platform.
I feel I have optimized the process quite well in terms of the process steps and cleansing with Excel functions (IF, CONCAT, Text to Columns, etc.). I have beginner-to-intermediate skills in VBA and SQL and have just scratched the surface of R; what is frustrating is that I know there is the potential to automate this process, but I just don't know where to start. If anyone has experience with something like this, code, a link to an article / another thread, or just some general direction would be much appreciated. Please ask for clarification where you feel it is needed. Thanks.
This will be really hard to do in Excel. If you have the time, you can try out Optimus, a data-cleansing library written in Python and PySpark (you don't need to know Spark). Here is the webpage: https://hioptimus.com.
You can create data pipelines with it, and I recommend that you do: try to generalize your processes and ask the client for a more structured way of passing the data.
The good thing is that you don't need big data to run Optimus, but if you have it some day, the same code will work.
Check out the documentation for more:
http://optimus-ironmussa.readthedocs.io/en/latest/
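Even without Optimus, if Python feels like a natural next step from VBA, a plain pandas script already covers the "one big .xlsx in, four cleansed .csv files out" shape of the process. This is only a sketch: the file name, column names, and the four table splits below are placeholders for your real schema and cleaning rules.

```python
import pandas as pd

# Placeholder input/output names and column lists; swap in your real schema.
SOURCE_XLSX = "client_export.xlsx"
TABLES = {
    "customers.csv": ["customer_id", "name", "email"],
    "assets.csv":    ["customer_id", "asset_id", "asset_type"],
    "contracts.csv": ["customer_id", "contract_id", "start_date"],
    "notes.csv":     ["customer_id", "note"],
}

def clean(df: pd.DataFrame) -> pd.DataFrame:
    # Generic cleansing steps that tend to apply regardless of the client:
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    df = df.drop_duplicates()
    for col in df.select_dtypes(include="object"):
        df[col] = df[col].str.strip()
    return df

df = clean(pd.read_excel(SOURCE_XLSX))

# Split the one wide table into the four platform tables.
for filename, cols in TABLES.items():
    present = [c for c in cols if c in df.columns]   # tolerate missing columns
    df[present].to_csv(filename, index=False)
```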
Let me know if you have any questions!
Is there any way to automatically generate the result of an SAP transaction? Let's say I want to see the production orders for one MRP controller (I have the COOIS transaction for this). Is there any way to generate an XML feed with the result of that transaction and refresh it, say, every 10 minutes?
Or to auto-export an .xls file with the result somewhere...? I know I have jobs and spools, but I have to manually download the result from the SAP GUI.
I don't have access to ABAP, so I would like to know whether there are other methods to get data out of SAP.
Since "a transaction" might be anything from a simple report to a complex interactive application that does not even have a simple "result", I doubt that there's a way to provide any generic tool for this. You might try the following:
Schedule a job and have the result sent to some mailbox instead of printing it. Then use the programming language of your choice to grab and process the mail.
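For the mailbox route, the sketch below (Python, imaplib from the standard library) shows the general shape of grabbing the attached report from such a mail. The mailbox host, credentials, and subject filter are placeholders, and what the attachment looks like depends entirely on how the job output was configured.

```python
import email
import imaplib

# Placeholder connection details for the mailbox that receives the job output.
with imaplib.IMAP4_SSL("mail.example.com") as imap:
    imap.login("sap.reports@example.com", "secret")
    imap.select("INBOX")

    # Search for unread mails from the scheduled job (subject is a placeholder).
    _, data = imap.search(None, '(UNSEEN SUBJECT "COOIS production orders")')
    for num in data[0].split():
        _, msg_data = imap.fetch(num, "(RFC822)")
        msg = email.message_from_bytes(msg_data[0][1])

        # Save whatever attachment the job produced (e.g. the .xls export).
        for part in msg.walk():
            filename = part.get_filename()
            if filename:
                with open(filename, "wb") as fh:
                    fh.write(part.get_payload(decode=True))
```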
Check whether there are BAPIs available (BAPI_PRODORD_* or something like that - I'm not a CO expert, so I wouldn't know which one to use). You can call these BAPIs from an external program without having to write ABAP yourself - however, you'll most likely need the help of someone who knows ABAP in order to get the interface documentation and understand the concepts.
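And for the BAPI route, if RFC access can be arranged, SAP's open-source PyRFC library lets an external Python program call a BAPI directly. The sketch below is only illustrative: the connection parameters are placeholders, and, as noted above, the exact BAPI name and its selection/return structures need to be confirmed by someone who knows the CO side of your system.

```python
from pyrfc import Connection

# Placeholder RFC connection details; a BASIS colleague can supply real ones.
conn = Connection(
    ashost="sap-app-server.example.com",
    sysnr="00",
    client="100",
    user="RFC_USER",
    passwd="secret",
)

# Illustrative call only: the BAPI name, the selection parameter and its range
# structure, and the result table name are placeholders to be checked in your
# system (transactions BAPI / SE37).
result = conn.call(
    "BAPI_PRODORD_GET_LIST",
    PLANT_RA=[{"SIGN": "I", "OPTION": "EQ", "LOW": "1000"}],
)
for order in result.get("ORDER_HEADER", []):
    print(order)
```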