Testing Mass Update Scripts - NetSuite

I'm going to start writing my first Mass Update script and am currently using a Sandbox account. What is the best way to test the script, considering it will make changes to a lot of items? Is it possible to test the script on a small sample?

When you execute a Mass Update, you have to build a search in the UI first, then you manually select which search results the script should process. Thus, the best way to test your script is to only select one result at a time, or just a few results. Perhaps build a few test records that match your search criteria and process those.
At any rate, with Mass Updates, you are in full control over which results are processed each time you execute the script.

What I often do (this is dead simple in SuiteScript 1.0) is to create a companion Suitelet interface that just calls the mass update function, as in the sketch below.
That way you pass a record ID to the mass update function via the Suitelet parameters, and it runs immediately. In another tab you can make changes, then simply refresh the Suitelet whenever you want to run it again.
I find this more convenient during development than going through the Mass Update interface for each iteration.
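
A minimal sketch of that pattern, assuming SuiteScript 1.0; the function names and URL parameters (massUpdateItem, rectype, recid) are made up for illustration:

```javascript
// Mass Update entry point: NetSuite calls this once per selected search result.
function massUpdateItem(recType, recId) {
    var rec = nlapiLoadRecord(recType, recId);
    // ... make your field changes here ...
    nlapiSubmitRecord(rec);
}

// Companion Suitelet for development: pass a record through the URL, e.g.
// ...&rectype=inventoryitem&recid=123, then just refresh the tab to re-run.
function devSuitelet(request, response) {
    var recType = request.getParameter('rectype');
    var recId = request.getParameter('recid');
    massUpdateItem(recType, recId);
    response.write('Processed ' + recType + ' ' + recId);
}
```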

I always test it by simply feeding an internal ID to the script in the debugger. That is pretty much how things will run with a mass update.

Related

Best way to execute this task on Node.js?

Let's suppose I have a collection with 4k documents in MongoDB. Every 10 seconds, for example, I need to loop over each document and make some verifications. I'm wondering what the best way to do that is. Do I need multithreading or something like that to speed up the process?
Fire off your script using cron.
Your database will need to be able to handle the traffic, so you may have to look at horizontal or vertical scaling if there are issues, or consider caching options like Redis or Elasticsearch, which can handle faster hits; MongoDB will be able to handle it with the correct hardware otherwise.
Consider adding a lock while each run is in progress, in case one of them takes too long. That is, use a collection to check whether a lock is held before executing the query script, and release the lock once the run finishes (see the sketch below).
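
A minimal sketch of that lock pattern, assuming the official mongodb Node.js driver; the database, collection, and lock names are placeholders:

```javascript
const { MongoClient } = require('mongodb');

async function runVerifications() {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const db = client.db('mydb');
  const locks = db.collection('locks');

  try {
    // Acquire the lock: inserting a document with a fixed _id throws a
    // duplicate-key error (code 11000) if a previous run still holds it.
    await locks.insertOne({ _id: 'verification-job', startedAt: new Date() });
  } catch (err) {
    await client.close();
    if (err.code === 11000) {
      console.log('Previous run still in progress; skipping this one.');
      return;
    }
    throw err;
  }

  try {
    // Stream through the ~4k documents and verify each one.
    const cursor = db.collection('documents').find({});
    for await (const doc of cursor) {
      // ... verification logic for each document goes here ...
    }
  } finally {
    // Release the lock whether or not the verifications succeeded.
    await locks.deleteOne({ _id: 'verification-job' });
    await client.close();
  }
}

// The scheduler (cron or similar) invokes this script on each tick.
runVerifications().catch(console.error);
```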

Execute a particular function every time the date changes in the user's local time

I am saving a counter number in user storage.
I want to provide some content to the user which changes daily using this counter.
So every time the counter increases by 1 the content will change.
The problem is the timezone difference.
Is there any way to run a function daily which will increase this counter by 1? I could use setInterval(), which Node.js provides, but that won't be an accurate "daily" update for all users.
User storage is only available to you as a developer when the Action is active. This data is not available once the Action is closed, so you wouldn't be able to asynchronously update the field. If you do want asynchronous access, I'd suggest using an external database and only storing the database row key in the user's userStorage. That way you can access the data and modify it whenever you want.
The setInterval method will run a function periodically, but may not work the way you want. It only runs the function while the runtime is active, and a lot of services will shut down a runtime after a period; Cloud Functions, for example, run for a while but are shut down when not in use. Additionally, Cloud Functions can run as several parallel instances, which would execute a setInterval callback several times in parallel and increment the counter more times than you want.
Using a dedicated Cron service would help reduce the number of simultaneous executions while also ensuring it runs when you want.
You are unable to directly access the user's timezone within the Action, meaning you won't be able to determine the end of a day. You can get the content to change every day, but it'll have some sort of offset. To get around this, you could have several cron jobs which run for different segments of users.
Using the conv.user.locale field, you can derive the user's language. en-US is generally going to be for American users, who generally live in the US. While this could result in odd behavior for users who travel, you can use it to bucket users into a particular period of execution. If you run the task overnight, say at 1 AM or 4 AM, users will probably be unaware of the exact moment, but will know that the content updates overnight.
You could use the location helper to get the user's location more precisely. This may be a bit unnecessary, but you could use that value to determine the user's timezone, derive that user's "midnight", and put them in the correct cron bucket, as sketched below.
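
A rough sketch of that bucketing idea, assuming the counters live in an external database with a Firestore-style API; the locale-to-bucket mapping and field names are invented for illustration:

```javascript
// Map a locale to one of several overnight cron buckets (hour in UTC).
function bucketForLocale(locale) {
  const buckets = {
    'en-US': 9,   // 09:00 UTC lands at roughly 1-4 AM across US timezones
    'en-GB': 2,   // 02:00 UTC is overnight in the UK
    'ja-JP': 17,  // 17:00 UTC is 2 AM in Japan
  };
  return locale in buckets ? buckets[locale] : 0;
}

// A cron service (e.g. Cloud Scheduler) calls this once per bucket, at that
// bucket's UTC hour, incrementing the daily counter for every user in it.
async function incrementBucket(db, bucketHour) {
  const snapshot = await db.collection('users')
    .where('cronBucket', '==', bucketHour)
    .get();
  await Promise.all(snapshot.docs.map(doc =>
    doc.ref.update({ counter: (doc.data().counter || 0) + 1 })
  ));
}
```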

How to write a DB2 function for a Linux shell script

I have a trigger on one of the DB2 tables. What I need is: every time that trigger fires, it should invoke a Linux shell script.
How do I do this, the same as any other process? If my trigger were to invoke a Java process instead of a shell script, putting the bytecode (.class file) of that process into ..SQLLIB/function and defining a function for it would do the job.
Is it much different for a Linux script, any subtleties?
I won't have access to a Linux machine for another few days, but deployment is right around the corner, and I'm nervous about it just the same.
TIA.
You cannot invoke a shell script from SQL, including from a trigger. What you can do is create a Java UDF the way you described, then use Runtime.exec() to invoke the script.
Note that your approach, apart from introducing security risks, can affect data consistency (a transaction can be rolled back, but you cannot "unrun" the script) and concurrency (a transaction with all its acquired locks will have to wait until your script returns).
A better approach would be to use an asynchronous process to decouple the external action from the database transaction. As an example, your trigger might insert a record into a log table, and an external process would read that log table on a schedule, perform the required action when a new record is found, then delete the processed record (a sketch of such a process follows below).
There is a good article, "Making Operating System Calls from SQL", which includes sample code.
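
A rough sketch of the decoupled approach, assuming Node.js with the ibm_db driver; the table name, script path, and connection string are placeholders:

```javascript
const { execFileSync } = require('child_process');
const ibmdb = require('ibm_db');

const CONN = 'DATABASE=mydb;HOSTNAME=dbhost;PORT=50000;PROTOCOL=TCPIP;' +
             'UID=user;PWD=pass;';

// Poll the log table the trigger writes to: run the shell script for each
// new row, then delete the row once it has been handled.
function poll() {
  const conn = ibmdb.openSync(CONN);
  try {
    const rows = conn.querySync('SELECT id FROM action_log');
    for (const row of rows) {
      // The script runs outside the trigger's transaction; by the time the
      // row is visible here, the original transaction has committed.
      execFileSync('/opt/scripts/on_trigger.sh', [String(row.ID)]);
      conn.querySync('DELETE FROM action_log WHERE id = ?', [row.ID]);
    }
  } finally {
    conn.closeSync();
  }
}

setInterval(poll, 10000); // check for new trigger rows every 10 seconds
```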

Applying BDD testing to batch scenarios?

I'm trying to apply BDD practices in my organization. I work in a bank where the nightly batch is a huge orchestrated, multi-system flow of batch jobs running and passing data between one another.
During our tests, interactive online tests probably make up only 40-50% of test scenarios, while the rest are embedded inside the batch jobs. As an example, a test scenario may be:
Given that my savings account has a balance of $100 as of 10PM
When the nightly batch is run at 11PM
Then at 3 AM, after the batch run has finished, I should come back and see that I have an additional $0.001 of accrued interest.
And the general ledger of the bank should have an additional entry for accrued interest of $0.001.
So as you can see, this is an extremely asynchronous scenario. If I were to use Cucumber to trigger it, I could probably create a step definition to insert the $100 balance into the account by 10 PM, but it would not be realistic to have Cucumber trigger the batch at 11 PM, as batch jobs are usually executed by operators using their own scheduling tools, such as Control-M. And if Cucumber then has to wait and listen for a few hours before verifying the accrued interest, I'm not sure whether I'd run into a timeout.
This is just one scenario. Batch runs are very expensive for the bank, and we always tack on as many scenarios as possible to ride on a single batch run. We also have aging scenarios where we need to run 6 months of batches just to check whether the final interest at the end of a fixed-deposit term is correct (I definitely cannot make Cucumber wait and listen for that long, can I?)
My question is, is there any example where BDD practices were applied to large batch scenarios such as these? How would one approach this?
Edit, to explain why I am not aiming to execute isolated test scenarios where I am in control:
We do isolated scenarios at one of the test levels (we call it Systems Test in my bank), and BDD does indeed work in that context. But eventually we need to hit a test level that has an entire end-to-end environment, typically in SIT. In this environment, it is a requirement that multiple test scenarios run in parallel, none of which have complete control over the environment. Depending on the scope of the project, this environment may run up to 200 applications. Customer channels such as Internet Banking will run transactional scenarios, while at the core banking system, scenarios such as interest calculation and automatic transfers will be executed. There will also be accounting scenarios where a general ledger system consolidates and balances all the accounts in the environment. Manual testing in this environment frequently requires at least 30-50 personnel executing transactions and checking results.
What I am trying to do is find a way to leverage a BDD framework to automate test execution and capture the results, so that we do not have to track them all manually in the environment.
It sounds to me as if you are not in control over the execution of the scenario.
Obviously, waiting for a couple of hours before validating a result is not a great idea.
Is it possible to extract just the part of the batch that is interesting for this scenario? If so, I would not expect the execution time to be 4-6 hours.
If it isn't possible to execute the desired functionality in isolation, then you have a problem regarding test-ability of your system. This is very common and something you really want to address. If the only way to test is to run the entire system, then you are not able to confidently say that it is working properly since all combinations that need testing are hard, sometimes even impossible, to execute.
Unfortunately, there doesn't seem to be a quick fix. You need to be in a position where you are able to verify small parts of the system, in order to verify them fast and reliably. And it doesn't matter whether you use Cucumber or any other tool for the verification; all tools will have the same issue.
One approach you might consider would be to have a reporting process that queries the results of each batch run. It would then store the results you were interested in (i.e. those from your tests) into a test analysis database.
I'm assuming that each batch run has a unique identifier. This identifier would be used as the key for the test results.
Here is an example of how it might work:
We know when the batch runs are finished (say this is at 4am). We schedule a reporting job to start after batch run completion (say at 5am) that analyses the test accounts.
The reporting job looks at Account X and Account Y. It records the amount of money in their account in a table alongside the unique identifier for the batch run. This information is stored in a test results database.
A separate process matches up test scenarios with test results. It knows test scenario 29 was tied to batch run ZZ20 and so goes looking in the test results database for the analysis from batch run ZZ20.
In the morning the test engineer checks the results of the run. They see that test scenario 29 failed, as there was only $100 in Account X rather than the $100.001 that was expected.
This setup would allow you to synchronously process asynchronous batch runs. It would be challenging to configure though, as you would need to do a lot of automation around reporting and linking test scenarios with test results.
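
A toy sketch of that matching step, with invented data shapes, just to make the flow concrete:

```javascript
// Expectations recorded when the scenarios were set up before the batch run.
const scenarios = [
  { id: 29, batchRunId: 'ZZ20', account: 'X', expectedBalance: 100.001 },
];

// Balances the post-batch reporting job captured into the test results database.
const results = [
  { batchRunId: 'ZZ20', account: 'X', balance: 100.0 },
];

// Match each scenario to the analysis from its batch run and compare.
function checkScenarios(scenarios, results) {
  return scenarios.map(s => {
    const row = results.find(
      r => r.batchRunId === s.batchRunId && r.account === s.account
    );
    const passed = !!row && Math.abs(row.balance - s.expectedBalance) < 1e-9;
    return { scenario: s.id, passed, actual: row ? row.balance : null };
  });
}

console.log(checkScenarios(scenarios, results));
// -> [ { scenario: 29, passed: false, actual: 100 } ]
```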

Is Cron the right option for this?

I am trying to create a script that watches my college timetable and registers a student for a class when a seat opens up, kind of like an eBay auction sniper. I was wondering if cron is the right tool for this. I need to be able to run the script for every user: each user will enter their username and password, and the script will query the timetable.
Looking for some advice on whether cron is the right tool, or if there are other tools out there.
cron runs a particular program or script at a specified time. For example, if you wanted a report compiled and e-mailed every day at 2 a.m., that would be a cron job.
In this sense, cron has a timetable, but I am not sure it is the sort of timetable you are thinking of.
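
For reference, the crontab entry for that example might look like this (the script path is a made-up placeholder):

```
# minute hour day-of-month month day-of-week  command
0 2 * * * /usr/local/bin/compile_and_email_report.sh
```
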
From a system-design perspective, the clean way to achieve the effect you want would be to let the students' class requests join a queue, then have the college registrar's own system take requests from the queue as seats become available. However, I assume from your eBay reference that this is not possible in your case.
