Hybris -> how to create a cronjob that executes a BeanShell script? - groovy

I have a BeanShell script and I would like to automate it.
I need to create a cronjob and add my BeanShell code to it.
Has anyone done this before?
Does anyone know how to do this,
or how to hook my script up to a cronjob?

The answer is actually given in the Cronjob scripting documentation.
In a nutshell, you will use dynamic scripting to create the cronjob instead of using the slow old-fashioned way.
Concept
There are three new objects used (quoted from the doc):
Script - the item type where the script content is stored (a separate deployment table)
ScriptingJob - a new ServicelayerJob item, which additionally contains the scriptURI (so the stored script can be found at runtime from different locations: classpath, db, etc.)
ScriptingJobPerformable - the spring bean assigned to every ScriptingJob instance; it implements the usual perform() method (like for any other cronjob). This is where the "scripted" cronjob logic is executed
How to use Cronjob scripting
First step - save your Script instance in the database. For this example I've used code from http://www.beanshell.org/manual/quickstart.html
Note: you can create scripts with Groovy, BeanShell and JavaScript
INSERT_UPDATE Script; code[unique=true];content;scriptType(code)
;myBshScript;"foo = ""Foo"";
four = (2 + 2)*2/2;
print( foo + "" = "" + four ); // print() is a BeanShell command
// Do a loop
for (i=0; i<5; i++)
print(i);
// Pop up a frame with a button in it
button = new JButton( ""My Button"" );
frame = new JFrame( ""My Frame"" );
frame.getContentPane().add( button, ""Center"" );
frame.pack();
frame.setVisible(true);";BEANSHELL
Don't forget to escape " in the script with another " (ImpEx restriction).
Second step - create a ScriptingJob that uses the previously created Script
INSERT_UPDATE ScriptingJob; code[unique=true];scriptURI
;myBshDynamicJob;model://myBshScript
model://myBshScript is used to retrieve a Script stored in the DB
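As a side note, you can also resolve and run the stored script ad hoc from the hac Groovy console. This is only a sketch; it assumes the getExecutableByURI lookup described in the scripting documentation and that the scriptingLanguagesService bean is available in the console binding:
// Sketch: resolve the stored Script by its model:// URI and execute it directly
def executable = scriptingLanguagesService.getExecutableByURI('model://myBshScript')
executable.execute()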
Third step - create the CronJob
INSERT_UPDATE CronJob; code[unique=true];job(code);singleExecutable;sessionLanguage(isocode)
;myBshDynamicCronJob;myBshDynamicJob;true;en
Optional step - create a trigger for the CronJob
INSERT_UPDATE Trigger;cronjob(code)[unique=true];cronExpression
;myBshDynamicCronJob;0 0 0/1 1/1 * ? *
This Quartz cron expression executes the cronjob every hour, at minute 0.
Execute the CronJob by script
In the hac scripting tab, choose Groovy and run this in commit mode (the second argument, true, makes the cronjob run synchronously).
def dynamicCJ = cronJobService.getCronJob("myBshDynamicCronJob")
cronJobService.performCronJob(dynamicCJ,true)
After running the script, the frame with the button should be displayed, and in the console:
INFO [hybrisHTTP27] (myBshDynamicCronJob) [ScriptingJobPerformable] Foo = 4
0
1
2
3
4

In Hybris, go to HMC > System > CronJobs > search for your created cronjob or create a new cronjob > Time Schedule tab > Trigger > Create Trigger.
From the Trigger tab window you can schedule the interval, e.g. daily, weekly, etc. You can also set the start time and frequency.
You can also do this trigger setting via impex as below:
INSERT_UPDATE Trigger;cronJob(code)[unique=true];second;minute;hour;day;month;year;relative;active;maxAcceptableDelay
;CartRemovalJob;0;5;4;-1;-1;-1;false;true;-1
Here -1 means "every", so this trigger runs the CartRemovalJob every day at 04:05:00.
For more detailed information, have a look at this, in case you have access to Hybris Wiki.

If you have access to hybris wiki, here you can find how to create and execute a cronjob.
In order to execute BeanShell, you should do this in the cronjob's perform() method:
SimpleScriptContent content = new SimpleScriptContent("beanshell", "here your beanshell script code as string");
ScriptExecutable script = scriptingLanguagesService.getExecutableByContent(content);
ScriptExecutionResult result = script.execute();
...
Here is the import section:
import de.hybris.platform.scripting.engine.content.impl.SimpleScriptContent;
import de.hybris.platform.scripting.engine.ScriptExecutionResult;
import de.hybris.platform.scripting.engine.ScriptExecutable;
You should inject scriptingLanguagesService with the annotation:
@Autowired
ScriptingLanguagesService scriptingLanguagesService;
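Putting it together, a minimal JobPerformable could look roughly like this (written here in Groovy; the class name and the inline script are made up for illustration, and the bean would still need to be registered in Spring and referenced by your job):
import de.hybris.platform.cronjob.enums.CronJobResult
import de.hybris.platform.cronjob.enums.CronJobStatus
import de.hybris.platform.cronjob.model.CronJobModel
import de.hybris.platform.scripting.engine.ScriptExecutable
import de.hybris.platform.scripting.engine.ScriptingLanguagesService
import de.hybris.platform.scripting.engine.content.impl.SimpleScriptContent
import de.hybris.platform.servicelayer.cronjob.AbstractJobPerformable
import de.hybris.platform.servicelayer.cronjob.PerformResult
import org.springframework.beans.factory.annotation.Autowired

// Hypothetical example class - name and inline script are only for illustration
class MyBeanShellJobPerformable extends AbstractJobPerformable<CronJobModel> {

    @Autowired
    ScriptingLanguagesService scriptingLanguagesService

    @Override
    PerformResult perform(CronJobModel cronJob) {
        // Build the script content and execute it through the scripting engine
        def content = new SimpleScriptContent('beanshell', 'print("Hello from BeanShell");')
        ScriptExecutable script = scriptingLanguagesService.getExecutableByContent(content)
        script.execute()
        return new PerformResult(CronJobResult.SUCCESS, CronJobStatus.FINISHED)
    }
}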

I have tried the below steps for a Groovy script. You can try the same for BeanShell.
You have to create an instance of:
Script - the item type where the script content is going to be stored.
ScriptingJob - a new ServicelayerJob item, which additionally contains the scriptURI.
CronJob - this is where the "scripted" cronjob logic is executed.
1. Create a Script
Script code: HelloScript
Script engine type: beanshell
Content: log.info("Hello");
2. Create the Scripting Job
Again from HMC/Backoffice, find ScriptingJob and create a new instance of it. Here you have to define the Code and ScriptURI, like:
Code: HelloScriptJob
ScriptURI: model://HelloScript
You can access this job in the next step to create the CronJob
3. Create a CronJob
From HMC/Backoffice, create an instance of CronJob. Select the above-created job (HelloScriptJob) in the
Job definition dropdown and save the changes. Now you are good to schedule/run this cronJob.
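Once the CronJob exists, you can also kick it off from the hac Groovy console, same as in the first answer; the cronjob code used below is just a placeholder for whatever code you gave your CronJob instance:
def cronJob = cronJobService.getCronJob('helloScriptCronJob')   // placeholder cronjob code
cronJobService.performCronJob(cronJob, true)                    // true = run synchronously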
Refer to the detailed post cronjob-using-groovy-script-hybris

Related

JMeter tearDown Thread Group can't access a file used by a previous thread group

I have a Test plan with several thread groups that write summary report results to the same csv file hosted on a server. This works fine using a network drive (z:) and changing jmeter.properties -> resultcollector.action_if_file_exists=APPEND.
Finally I have a tearDown Thread Group that inserts the csv data into a SQL Server (the network drive used previously is hosted on this server as c:\jmeter\results.csv) and then deletes the csv.
The problem is that when I run the full test plan I always get this error: "Cannot bulk load because the file "c:\jmeter\results.csv" could not be opened. Operating system error code 32"
The strange thing is that if I run the tearDown Thread Group alone it works fine: it does the bulk insert in SQL Server and then deletes the csv.
I started with JMeter 2 days ago, so I'm sure I am misunderstanding something :S
(Screenshots: Summary Report config, JDBC Request, BeanShell PostProcessor that deletes the csv, Test plan structure)
It happens because the Summary Report (as well as other listeners) keeps the file(s) open until the test ends, so you need to trigger this "close" event somehow.
Since JMeter 3.1 you're supposed to be using JSR223 Test Elements and the Groovy language for scripting, so replace your Beanshell PostProcessor with a JSR223 PostProcessor and use the following code:
import org.apache.jmeter.reporters.ResultCollector
import org.apache.jorphan.collections.SearchByClass

// Get hold of the running engine and its (private) test plan tree
def engine = ctx.getEngine()
def test = engine.getClass().getDeclaredField('test')
test.setAccessible(true)
def testPlanTree = test.get(engine)

// Find all ResultCollector listeners (Summary Report, Simple Data Writer, etc.)
SearchByClass<ResultCollector> listenerSearch = new SearchByClass<>(ResultCollector.class)
testPlanTree.traverse(listenerSearch)
Collection<ResultCollector> listeners = listenerSearch.getSearchResults()

// Close the underlying file writers so the csv can be opened elsewhere
listeners.each { listener ->
    def files = listener.files
    files.each { file ->
        file.value.pw.close()
    }
}

new File('z:/result.csv').delete()
More information on Groovy scripting in JMeter: Apache Groovy - Why and How You Should Use It

What is the use of the 'Parameters' tab in the script record?

I have a scheduled script (running from a Suitelet script) that is getting parameters from a Suitelet. What do the parameters in the Parameter tab do?
my suitelet code chunk:
var params = [];
params['custscript_emp_accrual'] = empreq_id;
params['custscript_emp_months'] = rowCount;
nlapiLogExecution('debug', 'empreq_id:rowCount', empreq_id + ':' + rowCount);
nlapiScheduleScript('customscript_emp_accrual_sched', 'customdeploy_emp_accrual_sched', params);
my scheduled code chunk:
var empreq_Id = nlapiGetContext().getSetting('SCRIPT', 'custscript_emp_accrual');
var month = nlapiGetContext().getSetting('SCRIPT', 'custscript_emp_months');
var dateNow = nlapiLookupField('customrecord_payroll_period', month, 'custrecord_payperiod_enddate');
They're ways to set up script deployments. Check out the help doc "Creating Script Parameters Overview" for more information.
The Parameters allow you to pass data from the Suitelet to the Scheduled Script. If you don't set up the script parameters then the data won't be passed to the Scheduled Script.
You might want to focus your learning on SuiteScript 2.0, as it is the most widely used in industry and has more support online.
Script parameters are very useful for variables that could either change over time or be different between script deployments.
This is just a simple example of how you could use parameters:
You could have a script that is deployed to more than one record type, and on each record type you want certain code to be executed for a different user.
Now you can add a script parameter called USER, which is a select/multi-select field. This means that on each script deployment, you can specify a different user for which your code should run.
Now you could have the following code to read your parameter and restrict editing:
var allowedUser = runtime.getCurrentScript().getParameter({ name: 'custscript_user' });
var currentUser = runtime.getCurrentUser().id;
if (currentUser == allowedUser) {
    //do something
}
You can now select a different user on each of your script deployments.
Hope this helps!
For more info, have a look at page 110 of this document from Oracle: SuiteScript Developer Guide

How to implement multithreading in groovy?

I want to create 10 million customers for performance testing. I am running a basic groovy script for creating a customer with only mandatory attributes. Then I am running the script inside a loop.
How can I improve the performance of this groovy script?
I can't find multi-threading options corresponding to those that are available for ImpEx import.
Is there a better way of creating 10 million customers in Hybris?
Edit 1:
Sample groovy script for generating customers with different id's.
import de.hybris.platform.core.model.user.AddressModel
import de.hybris.platform.core.model.user.CustomerModel

// Setting only mandatory attributes
for (int i = 0; i < 100000; i++) {
    customerModel = new CustomerModel()
    id = new Random().nextInt(100000000)
    uid = 'TestCustomer_' + id
    customerModel.setUid(uid)
    name = 'Test Customer Name_' + id
    customerModel.setName(name)
    addressModel = new AddressModel()
    addressModel.setOwner(customerModel)
    customerModel.setDefaultPaymentAddress(addressModel)
    customerModel.setDefaultShipmentAddress(addressModel)
    try {
        modelService.save(customerModel)
    } catch (Exception e) {
        println('Creation of customer with uid = ' + uid + ' failed with error: ' + e.getMessage())
    }
}
I would say the logical answer is to use ImpEx files. This allows bulk creation and has support for multithreading: https://help.hybris.com/1811/hcd/44f79c4e604a4bff8456a852e617d261.html
Basically, you can configure the number of workers (threads):
impex.import.workers=4
You would be responsible for converting your input format to either *.csv or *.impex
Addition:
Regarding the Groovy script: you can set the uid and name with ImpEx, you would just have to provide the random numbers in advance. You could do this in Excel or in some scripting language.
You could even do it in the impex itself with code execution.
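For example, here is a rough Groovy sketch that pre-generates the random uids into an ImpEx file, which can then be imported with impex.import.workers set; the output path and the column set are assumptions for illustration:
// Sketch: generate an ImpEx file with random customers for a multithreaded import
def out = new File('/tmp/customers.impex')   // assumed output path
def rnd = new Random()
out.withPrintWriter('UTF-8') { w ->
    w.println('INSERT_UPDATE Customer; uid[unique=true]; name')
    100000.times {
        def id = rnd.nextInt(100000000)
        w.println("; TestCustomer_${id}; Test Customer Name_${id}")
    }
}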
But if you just want a lot of random customers: you could also just spin up 10 browser windows with /hac and run the script ten times.
I created multiple ScriptingJobs with the above Groovy script and attached them to 30 different CronJobs. Executing all of them in parallel achieved the same result.
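If you prefer real multithreading from a single Groovy script instead of many cronjobs, something along these lines could serve as a starting point. This is only a sketch: the worker count is arbitrary, and it assumes each worker thread needs the hybris tenant attached before it can use modelService:
import de.hybris.platform.core.Registry
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

def tenant = Registry.getCurrentTenant()
def pool = Executors.newFixedThreadPool(8)   // arbitrary worker count

(1..8).each { worker ->
    pool.submit {
        Registry.setCurrentTenant(tenant)    // attach the hybris tenant to this worker thread
        try {
            // create and save a slice of the customers here (e.g. the loop from the script above)
        } finally {
            Registry.unsetCurrentTenant()
        }
    }
}
pool.shutdown()
pool.awaitTermination(1, TimeUnit.HOURS)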

Azure Backend - Code First Migration - CREATE TRIGGER must be the first statement

When I create a new table for my Azure Backend using Code First Migration, the script looks like
CREATE TABLE(...)
CREATE TRIGGER ...
...
INSERT [dbo].[__MigrationHistory] ...
And it always fails saying "CREATE TRIGGER must be the first statement".
I have workarounds to make it work; however, I'd like to understand whether this is a bug in Code First Migration or whether I'm not using it properly.
The error says that CREATE TRIGGER must be the first statement of a batch. In SQL Server Management Studio you would put a "GO" just before your CREATE TRIGGER. "GO" is not a real T-SQL command; it's essentially a batch separator that splits your script.
If your script is in a SQL file, add the "GO" separators and do something like:
foreach (var script in fullScript.Split(new[] { "GO" }, StringSplitOptions.RemoveEmptyEntries))
{
    _dbContext.Database.ExecuteSqlCommand(script);
}

How to use a nested parameter in a SoapUI context.expand expression?

My use case is that I want to do a bulk update of request bodies in multiple SoapUI projects.
Example of request body.
{
    "name": "${#TestSuite#NameProperty}",
    "id": "${#TestSuite#IdProperty}"
}
I want to expand the property ${#TestSuite#NameProperty} through Groovy, and get the value stored at TestSuite level then modify it as necessary.
Suppose I have 50 test steps in my test case and I want to expand the request for each one from a Groovy script. To expand a specific test step, I would pass the test step's name. For example:
expandedProperty = context.expand('${testStep1#Request}')
But how can I achieve the same if I want to iterate over all 50 test steps? I tried to use a nested parameter inside the context.expand expression, but it did not work. For example:
currentTestStepName = "TestStep1"
expandedProperty = context.expand('${${currentTestStepName}#Request}')
This only returned the expanded request from the test step right above the one I am running the Groovy script from, rather than from the "TestStep1" step. (Which is madness!)
Also, context.expand only seems to work while executing via a Groovy script from the SoapUI workspace project. Is there any other way, or a method similar to context.expand, which can expand properties like "${#TestSuite#NameProperty}" during headless execution? E.g. a compiled Groovy jar file imported into SoapUI.
Thanks for any help in advance!
You can use the context.expand('${${currentTestStepName}#Request}') approach to get it.
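For example, assuming you take the step names from the test case itself, you can expand every step's request in a loop like this sketch:
// Sketch: expand the Request property of every step in the current test case
context.testCase.testStepList.each { step ->
    def expandedRequest = context.expand('${' + step.name + '#Request}')
    log.info "Expanded request of ${step.name}: ${expandedRequest}"
}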
There are other approaches as well, which do not use context.expand.
In order to get the request of any single given test step:
Here the user passes the step name in the variable stepName.
log.info context.testCase.testSteps[stepName].getPropertyValue('Request')
If you want to get all the requests of the test case, here is a simple way using the below script.
/**
 * This script loops through the test steps of SOAP Request type,
 * adds each step name and request to a map,
 * so that one can query the map later to get a request by step name.
 */
import com.eviware.soapui.impl.wsdl.teststeps.WsdlTestRequestStep

def requestsMap = [:]
context.testCase.testStepList.each { step ->
    log.info "Looking into soap request step: ${step.name}"
    if (step instanceof WsdlTestRequestStep) {
        log.info "Found a request step of required type"
        requestsMap[step.name] = context.expand(step.getPropertyValue('Request'))
    }
}
log.info requestsMap['TestStep1']
Update:
If the step you are interested in is a REST step, use the below condition instead of WsdlTestRequestStep in the above.
if (step instanceof com.eviware.soapui.impl.wsdl.teststeps.RestTestRequestStep) { //do the stuff }
