I want to read the error.jtl file that we get from the BlazeMeter logs (file name: Artifacts).
I am currently using Excel to read the file. Is there any other way I can view this JTL file, in the same manner we view the JMeter results JTL file (as an HTML report)?
error.jtl is a JMeter-specific file generated by the Taurus framework; it contains the request and response data for the samplers that failed during the test execution.
I don't know what you mean by "read"; the file is a normal XML file, so you can use any text or XML viewer/editor to inspect it.
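For reference, an abridged example of what the XML-format JTL looks like (the attribute values here are made up):

<?xml version="1.0" encoding="UTF-8"?>
<testResults version="1.2">
  <httpSample t="315" lt="300" ts="1622537280000" s="false"
              lb="HTTP Request" rc="500" rm="Internal Server Error">
    <responseData class="java.lang.String">...</responseData>
  </httpSample>
</testResults>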
Also, as per the How to Capture Response Data from a JMeter JTL File (NTC) article:
Unzip artifacts.zip and open the trace.jtl / error.jtl in JMeter's View Results Tree Listener. The samplers are listed. When selected, the response data can be examined.
so you can open it in View Results Tree or any other Listener of your choice.
You cannot generate the HTML Reporting Dashboard from a .jtl file in XML format; however, you can use, for example, the Filter Results Tool to extract only the failing sample results from the CSV-format kpi.jtl file and generate the dashboard from that.
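A minimal command-line sketch, assuming the Filter Results plugin is installed in JMeter's bin directory and your Taurus artifacts contain a CSV-format kpi.jtl (file names are placeholders; check the plugin's documentation for the exact flags):

# keep only the failed samples from the CSV results file
./FilterResults.sh --input-file kpi.jtl --output-file failures.jtl --success-filter false

# generate the HTML Reporting Dashboard from the filtered file
jmeter -g failures.jtl -o failures-dashboard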
I created a Logic App to export some data to a *.csv file.
The data to be exported contains German umlauts.
I read all the needed values into variables which are then concatenated and added to an array.
Finally, I get an array of semicolon-separated strings with the values in it.
This result is then added to an email as a file attachment:
All the values are handled correctly in the Logic App and are correct in the *.csv file, but as soon as I open the CSV with Excel, the umlauts are no longer shown correctly.
Is there a way to explicitly create a file with the correct encoding within the Logic App and attach that file to the email instead of the ExportString?
Or can I somehow encode the content of the ExportString variable?
Any hints?
I have reproduced this in my environment and followed the steps below to get the correct output in the CSV file:
My input is:
I sent the data into a CSV table and then created a file in a file share, as below:
Then, when I opened my file share and downloaded the content from there, I got the same incorrect output that you got:
Then I opened Azure Storage Explorer and downloaded the file from there:
When I open the downloaded file in Notepad:
I get the correct output, so try doing it this way.
And when I save it as hello.csv and keep the encoding as UTF-8 with BOM, like below:
Then I get the correct output in the CSV as well:
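If you want to avoid re-saving the file by hand, one option is to prepend the UTF-8 byte order mark to the content inside the Logic App itself. A minimal sketch, using the ExportString variable from the question (the three BOM bytes EF BB BF are produced with the decodeUriComponent function):

concat(decodeUriComponent('%EF%BB%BF'), variables('ExportString'))

Using this expression as the attachment content tells Excel to interpret the file as UTF-8, so the umlauts should display correctly.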
Experimenting with Taiko for UI automation. I am trying to upload a CSV file, but giving the id of the CSV file selector is not working. A red rectangle outline blinks on top of the file upload link when firing attach("/Users/username/Downloads/report.csv", $('*[id="some"]')), but the following error message shows in the console.
Error: Node is not a file input element, run `.trace` for more info.
HTML
I've tried the following fileField examples from https://docs.taiko.dev/#filefield
attach('report.csv', to(fileField('Upload CSV file (Optional)')))
fileField('Upload CSV file (Optional)').exists()
fileField({'id':'event-csv-upload'}).exists()
fileField({id:'event-csv-upload'},below('Upload CSV file (Optional)')).exists()
fileField(below('Upload CSV file (Optional)')).exists()
None of these work, so I finally tried the following:
attach("/Users/username/Downloads/report.csv",$('*[id="event-csv-upload"]'))
and
attach("/Users/username/Downloads/report.csv",fileField({id:'event-csv-upload'}))
Source: https://github.com/getgauge/taiko/issues/309
I am still not able to upload a file using Taiko.
Why is this file upload element difficult to locate in Angular code?
Is it too early to try Taiko for Angular web projects?
Do you recommend any other UI automation framework that works well with all Angular versions?
attach expects a file input field as the selector to perform the action on. In your case, that element seems to be a hidden element linked to a button; attaching to that hidden element should work.
Try:
await attach("/Users/username/Downloads/report.csv",fileField({id:'eventCSVFileInput'},{ selectHiddenElements: true }))
Try this:
await attach("/Users/username/Downloads/report.csv",fileField({id:'eventCSVFileInput'},{force:true}))
Hi, and thanks for any help. Is there a way to work with files larger than 10MB? I have to check for updates on items in a file that would be uploaded, but the file contains all the items in the system and is approximately 20MB. This 10MB limit is killing me. I see streaming for file saving and appending, but not for file reading, so I am open to any suggestions. The provider in this instance doesn't offer the facility to chunk the files. Thanks in advance for your help.
If you are using SS2 to process a file from the file cabinet and you use file.lines.iterator() to process it, the 10MB size limit applies per line rather than to the whole file.
I believe returning a file object from a Map/Reduce script's getInputData stage automatically parses the file into lines.
The 10MB file size limit also comes into play if you try to create a file larger than 10MB.
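A minimal sketch of the lines.iterator() approach, assuming a hypothetical file cabinet internal id:

/**
 * @NApiVersion 2.1
 * @NScriptType ScheduledScript
 */
define(['N/file'], (file) => {
    const execute = () => {
        const big = file.load({ id: 4321 }); // hypothetical internal id
        // lines.iterator() streams the file line by line, so the 10MB
        // limit applies to each line instead of the whole file
        big.lines.iterator().each((line) => {
            log.debug({ title: 'line', details: line.value });
            return true; // return true to keep iterating
        });
    };
    return { execute };
});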
If you are trying to read in an external file via script, then one approach that I've used is to proxy the call via an external service, e.g. query an AWS Lambda function that checks for the file and saves it to S3. Return the file path and size to your SuiteScript. The SuiteScript then asks for "pages" of the file that are less than 10MB and saves those. If you are uploading something like a .csv, the Lambda function can send the header row with each paged request.
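A hedged sketch of the SuiteScript side of that paging approach, assuming a hypothetical Lambda endpoint that accepts meta and page requests (the URL, query parameters, and folder id are illustrative, not a real API):

/**
 * @NApiVersion 2.1
 * @NScriptType ScheduledScript
 */
define(['N/https', 'N/file'], (https, file) => {
    // Hypothetical proxy endpoint: the Lambda described above
    const BASE_URL = 'https://example.execute-api.us-east-1.amazonaws.com/prod/file';
    const PAGE_SIZE = 5 * 1024 * 1024; // stay well under the 10MB limit

    const execute = () => {
        // First call: the Lambda fetches the source file to S3 and reports its size
        const meta = JSON.parse(https.get({ url: BASE_URL + '?action=meta' }).body);
        const pages = Math.ceil(meta.size / PAGE_SIZE);

        for (let i = 0; i < pages; i += 1) {
            // Each page is under 10MB, so it can be saved as a normal file
            const body = https.get({ url: BASE_URL + '?action=page&page=' + i }).body;
            file.create({
                name: 'import_part_' + i + '.csv',
                fileType: file.Type.CSV,
                contents: body,
                folder: 123 // hypothetical folder internal id
            }).save();
        }
    };
    return { execute };
});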
I have a REST test request that returns a MIME multipart/related message. One of the parts has type 'application/zip'.
In the SoapUI UI, I can see the zip file in the Attachments tab.
1- I would like to have this file attached to the request of another REST test step without manual intervention.
Is this possible with a Groovy script?
I guess it would start with:
def response = testRunner.testCase.getTestStepByName( "firstStep" ).testRequest.getResponse()
def firstStepAttachment = response.attachments[0] // first attachment of the first step's response
2- To make it a bit harder, the transferred file is a .zip, and I'd like a specific file from inside it (I know its name and path). Is there any way to do that during the file transfer?
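A hedged sketch of how both parts might look in a Groovy script test step, assuming the step name from above, a hypothetical second step name and entry path inside the zip, and that the response exposes its attachments the same way the Attachments tab shows them:

import java.util.zip.ZipInputStream

// 1) grab the application/zip attachment from the first step's response
def response = testRunner.testCase.getTestStepByName("firstStep").testRequest.getResponse()
def zipAttachment = response.attachments.find { it.contentType == "application/zip" }

// 2) extract the one entry we want into a temp file
def extracted = File.createTempFile("extracted", ".dat")
def zis = new ZipInputStream(zipAttachment.inputStream)
def entry
while ((entry = zis.nextEntry) != null) {
    if (entry.name == "some/path/inside.xml") { // hypothetical name and path
        extracted.withOutputStream { it << zis } // copies only this entry
    }
}
zis.close()

// attach the extracted file to the next step's request
def nextRequest = testRunner.testCase.getTestStepByName("secondStep").testRequest
nextRequest.attachFile(extracted, true) // true = cache the content in the request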
While running the CORB job, I am extracting 100,000 URIs and loading the data into one file on a Linux server. The expectation is that all the output records should be stored in one file with a 100k count. However, the data was stored in multiple files with different counts. Can anyone help me out with the root cause of why the CORB process is creating multiple files in the output directory?
Please find below the details of the CORB properties file that I configured in my local directory.
Properties file:
THREAD-COUNT=4
PROCESS-TASK=com.marklogic.developer.corb.extension.ResilientTransform
SSL-CONFIG-CLASS=com.marklogic.developer.corb.TwoWaySSLConfig
SSL-PROPERTIES-FILE=/eiestore/ssl-configs/common-corb-sslconfig.properties
DECRYPTER=com.marklogic.developer.corb.HostKeyDecrypter
MODULE-ROOT=/a/abcmodules/corb-process/
MODULES-DATABASE="abcmodules"
URIS-MODULE=corb-select-uris.xqy
XQUERY-MODULE=corb-get-process.xqy
PROCESS-TASK=com.marklogic.developer.corb.ExportBatchToFileTask
PRE-BATCH-TASK=com.marklogic.developer.corb.PreBatchUpdateFileTask
EXPORT-FILE-TOP-CONTENT=Id,value,type
EXPORT-FILE-DIR=/a/b/c/d/