Writing Groovy script results to a file - groovy

I would like to write Groovy script results into a file on my Mac machine.
How do I do that in Groovy?
Here is my attempt:
log.info "Number of nodes:" + numElements
log.info ("Matching codes:"+matches)
log.info ("Fails:"+fails)
// Exporting results to a file
today = new Date()
sdf = new java.text.SimpleDateFormat("dd-MM-yyyy-hh-mm")
todayStr = sdf.format(today)
new File( "User/documents" + todayStr + "report.txt" ).write(numElements, "UTF-8" )
new File( "User/documents" + todayStr + "report.txt" ).write(matches, "UTF-8" )
new File( "User/documents" + todayStr + "report.txt" ).write(fails, "UTF-8" )
Can anyone help in the exporting results part?
Thanks,
Regards,
A
OK, I've managed to create a file with
file = new File(dir + "${todayStr}_report.txt").createNewFile()
How do I add numElements, matches and fails? Like this:
File.append (file, matches)?
I get the following error:
groovy.lang.MissingMethodException: No signature of method: static java.io.File.append() is applicable for argument types: (java.lang.Boolean, java.util.ArrayList) values: [true, [EU 4G FLAT L(CORPORATE) SP, EU 4G FLAT M(CORPORATE), ...]] Possible solutions: append(java.lang.Object, java.lang.String), append(java.io.InputStream), append(java.lang.Object), append([B), canRead(), find() error at line: 33

Your file path is wrong. I don't have a Mac so I'm not 100% sure, but from my point of view you should have:
new File("/User/documents/${todayStr}report.txt").write(numElements, "UTF-8")
You are missing at least two slashes, the first before User and the second after documents in your path. With the approach you have now, it tries to write a file named documents<DATE>report.txt inside a relative directory User, which pretty surely does not exist.
Above I showed you the way with an absolute path. You can also simply write:
new File("${todayStr}report.txt").write(numElements, "UTF-8")
and if the file is created then you'll be 100% sure it is a problem with your file path :)
A few more things: since it is Groovy, try to use the advantages the language has over Java; there are several ways of working with files. I've also rewritten your logs to show you how simple it is to work with strings in Groovy:
log.info "Number of nodes: ${numElements}"
log.info "Matching codes: ${matches}"
log.info "Fails: ${fails}"
I hope it helps

Related

Formatting SQL queries in SoapUI JDBC test steps

How can I use groovy to keep the content of SQL Queries in SoapUI in sync with an external editor?
The solution might look like this:
1. Groovy script to export all the queries of a SoapUI TestSuite or TestStep into SQL file(s).
2. Edit and save the SQL with an external editor.
3. Groovy script to update the queries in SoapUI based on the changed files.
Initial issues:
- How do I access the query of a test step? It is not there as a property, is it?
- Is there a way to run steps 1 and 3 on a project file (XML) instead of from within a test itself (as setUp/tearDown scripts)?
Motivation
The SQL query input fields of JDBC test steps are very small and do not provide any code formatting like indenting, re-wrapping, or upper-casing of SQL keywords (there is just syntax highlighting). This is IMHO very cumbersome when writing a query that contains more than a couple of WHERE clauses or even joins.
Side note: If somebody could point me to some functionality (builtin, plugin?) to format the SQL code directly in SoapUI (not pro!), I would gladly pass on groovy scripts.
These are my Groovy scripts, implemented in the form of Groovy test steps (they might also be setUp/tearDown scripts, but I prefer separate test steps that are clearly visible and whose activity I can easily toggle):
Export all the JDBC/SQL queries of a SoapUI test case into SQL file(s). This might be the final step of the testcase.
def testCase = testRunner.testCase
def testSuite = testCase.testSuite
def project = testSuite.project
// Remove .xml extension from filename
String pathToProjectDir = project.path.replaceAll(~/\.\w+$/, '')
File projectDir = new File(pathToProjectDir)
File suiteDir = new File(projectDir, testSuite.name)
File caseDir = new File(suiteDir, testCase.name)
caseDir.mkdirs()
assert caseDir.exists()
assert caseDir.isDirectory()
log.info "Exporting to '${caseDir}'."
testCase.getTestStepsOfType(com.eviware.soapui.impl.wsdl.teststeps.JdbcRequestTestStep).each { testStep ->
    String filename = "${testStep.name}.sql"
    File file = new File(caseDir, filename)
    file.text = testStep.query
    log.info "'${filename}' written."
}
log.info "Files written."
Edit and save the SQL with an external editor.
Update the queries in SoapUI based on the changed files. Missing or empty files are ignored in order not to break too much in the existing project.
def testCase = testRunner.testCase
def testSuite = testCase.testSuite
def project = testSuite.project
// Remove .xml extension from filename
String pathToProjectDir = project.path.replaceAll(~/\.\w+$/, '')
File projectDir = new File(pathToProjectDir)
File suiteDir = new File(projectDir, testSuite.name)
File caseDir = new File(suiteDir, testCase.name)
assert caseDir.exists()
assert caseDir.isDirectory()
log.info "Importing from '${caseDir}'."
testCase.getTestStepsOfType(com.eviware.soapui.impl.wsdl.teststeps.JdbcRequestTestStep).each { testStep ->
    String filename = "${testStep.name}.sql"
    File file = new File(caseDir, filename)
    if (file.exists()) {
        if (file.text) {
            testStep.query = file.text
            log.info "'${filename}'"
        } else {
            log.warn "Ignoring '${filename}'"
        }
    } else {
        log.warn "'${filename}' does not exist."
    }
}
log.info "Files imported."
The files are placed in the following directory structure:
Starting from the SoapUI project file path/to/project.xml itself, a tree is (created and) populated with one SQL file per JDBC test step:
path/to/project/${name of TestSuite}/${name of TestCase}/${name of TestStep}.sql

Spark Streaming textFileStream not supporting wildcards

I set up a simple test to stream text files from S3 and got it to work when I tried something like
val input = ssc.textFileStream("s3n://mybucket/2015/04/03/")
and as log files went into the bucket everything worked fine.
But if there was a subfolder, it would not find any files that got put into the subfolder (and yes, I am aware that HDFS doesn't actually use a folder structure):
val input = ssc.textFileStream("s3n://mybucket/2015/04/")
So I tried to simply use wildcards, as I have done before with a standard Spark application:
val input = ssc.textFileStream("s3n://mybucket/2015/04/*")
But when I try this it throws an error
java.io.FileNotFoundException: File s3n://mybucket/2015/04/* does not exist.
at org.apache.hadoop.fs.s3native.NativeS3FileSystem.listStatus(NativeS3FileSystem.java:506)
at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1483)
at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1523)
at org.apache.spark.streaming.dstream.FileInputDStream.findNewFiles(FileInputDStream.scala:176)
at org.apache.spark.streaming.dstream.FileInputDStream.compute(FileInputDStream.scala:134)
at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:300)
at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:300)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:299)
at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:287)
at scala.Option.orElse(Option.scala:257)
.....
I know for a fact that you can use wildcards when reading file input in a standard Spark application, but it appears that streaming input neither does that nor automatically processes files in subfolders. Is there something I'm missing here?
Ultimately what I need is a streaming job running 24/7 that monitors an S3 bucket that has logs placed in it by date.
So something like
s3n://mybucket/<YEAR>/<MONTH>/<DAY>/<LogfileName>
Is there any way to hand it the top-most folder and have it automatically read files that show up in any subfolder (because obviously the date will increase every day)?
EDIT
So upon digging into the documentation at http://spark.apache.org/docs/latest/streaming-programming-guide.html#basic-sources it states that nested directories are not supported.
Can anyone shed some light as to why this is the case?
Also, since my files will be nested based upon their date, what would be a good way of solving this problem in my streaming application? It's a little complicated since the logs take a few minutes to get written to S3 and so the last file being written for the day could be written in the previous day's folder even though we're a few minutes into the new day.
Some "ugly but working solution" can be created by extending FileInputDStream.
Writing sc.textFileStream(d) is equivalent to
new FileInputDStream[LongWritable, Text, TextInputFormat](streamingContext, d).map(_._2.toString)
You can create CustomFileInputDStream that will extend FileInputDStream. The custom class will copy the compute method from the FileInputDStream class and adjust the findNewFiles method to your needs.
Changing the findNewFiles method from:
private def findNewFiles(currentTime: Long): Array[String] = {
  try {
    lastNewFileFindingTime = clock.getTimeMillis()
    // Calculate ignore threshold
    val modTimeIgnoreThreshold = math.max(
      initialModTimeIgnoreThreshold, // initial threshold based on newFilesOnly setting
      currentTime - durationToRemember.milliseconds // trailing end of the remember window
    )
    logDebug(s"Getting new files for time $currentTime, " +
      s"ignoring files older than $modTimeIgnoreThreshold")
    val filter = new PathFilter {
      def accept(path: Path): Boolean = isNewFile(path, currentTime, modTimeIgnoreThreshold)
    }
    val newFiles = fs.listStatus(directoryPath, filter).map(_.getPath.toString)
    val timeTaken = clock.getTimeMillis() - lastNewFileFindingTime
    logInfo("Finding new files took " + timeTaken + " ms")
    logDebug("# cached file times = " + fileToModTime.size)
    if (timeTaken > slideDuration.milliseconds) {
      logWarning(
        "Time taken to find new files exceeds the batch size. " +
        "Consider increasing the batch size or reducing the number of " +
        "files in the monitored directory."
      )
    }
    newFiles
  } catch {
    case e: Exception =>
      logWarning("Error finding new files", e)
      reset()
      Array.empty
  }
}
to:
private def findNewFiles(currentTime: Long): Array[String] = {
  try {
    lastNewFileFindingTime = clock.getTimeMillis()
    // Calculate ignore threshold
    val modTimeIgnoreThreshold = math.max(
      initialModTimeIgnoreThreshold, // initial threshold based on newFilesOnly setting
      currentTime - durationToRemember.milliseconds // trailing end of the remember window
    )
    logDebug(s"Getting new files for time $currentTime, " +
      s"ignoring files older than $modTimeIgnoreThreshold")
    val filter = new PathFilter {
      def accept(path: Path): Boolean = isNewFile(path, currentTime, modTimeIgnoreThreshold)
    }
    val directories = fs.listStatus(directoryPath).filter(_.isDirectory)
    val newFiles = ArrayBuffer[FileStatus]()
    directories.foreach(directory => newFiles.append(fs.listStatus(directory.getPath, filter) : _*))
    val timeTaken = clock.getTimeMillis() - lastNewFileFindingTime
    logInfo("Finding new files took " + timeTaken + " ms")
    logDebug("# cached file times = " + fileToModTime.size)
    if (timeTaken > slideDuration.milliseconds) {
      logWarning(
        "Time taken to find new files exceeds the batch size. " +
        "Consider increasing the batch size or reducing the number of " +
        "files in the monitored directory."
      )
    }
    newFiles.map(_.getPath.toString).toArray
  } catch {
    case e: Exception =>
      logWarning("Error finding new files", e)
      reset()
      Array.empty
  }
}
This will check for files in all first-degree subfolders; you can adjust it to use the batch timestamp in order to access only the relevant "subdirectories".
I created the CustomFileInputDStream as I mentioned and activated it by calling:
new CustomFileInputDStream[LongWritable, Text, TextInputFormat](streamingContext, d).map(_._2.toString)
It seems to behave as expected.
When I write a solution like this I must add some points for consideration:
You are breaking Spark's encapsulation and creating a custom class that you will have to maintain yourself as time passes.
I believe a solution like this is a last resort. If your use case can be implemented in a different way, it is usually better to avoid it.
If you have a lot of "subdirectories" on S3 and check each one of them, it will cost you.
It would be very interesting to understand whether Databricks doesn't support nested files just because of a possible performance penalty or not; maybe there is a deeper reason I haven't thought about.
We had the same problem. We joined the subfolder names with commas.
List<String> paths = new ArrayList<>();
SimpleDateFormat sdf = new SimpleDateFormat("yyyy/MM/dd");
try {
    Date start = sdf.parse("2015/02/01");
    Date end = sdf.parse("2015/04/01");
    Calendar calendar = Calendar.getInstance();
    calendar.setTime(start);
    while (calendar.getTime().before(end)) {
        paths.add("s3n://mybucket/" + sdf.format(calendar.getTime()));
        calendar.add(Calendar.DATE, 1);
    }
} catch (ParseException e) {
    e.printStackTrace();
}
String joinedPaths = StringUtils.join(",", paths.toArray(new String[paths.size()]));
val input = ssc.textFileStream(joinedPaths);
I hope that in this way your problem is solved.

SoapUI Load test groovy sequentially reading txt file

I am using the free version of SoapUI. In my load test, I want to read a request field value from a text file. The file looks like the following:
0401108937
0401109140
0401109505
0401110330
0401111204
0401111468
0401111589
0401111729
0401111768
In the load test, for each request I want to read this file sequentially. I am using the code mentioned in "Unique property per SoapUI request using groovy" to read the file. How can I use the values from the file in a sequential manner?
I have the following test setup script to read the file:
def projectDir = context.expand('${projectDir}') + File.separator
def dataFile = "usernames.txt"
try
{
    File file = new File(projectDir + dataFile)
    context.data = file.readLines()
    context.dataCount = context.data.size
    log.info " data count" + context.dataCount
    context.index = 0; //index to read data array in sequence
}
catch (Exception e)
{
    testRunner.fail("Failed to load " + dataFile + " from project directory.")
    return
}
In my test, I have the following script as a test step. I want to read the record at the current index from the array and then increment the index value:
def randUserAccount = context.data.get(context.index);
context.setProperty("randUserAccount", randUserAccount)
context.index = ((int)context.index) + 1;
But with this script, I always get the 2nd record of the array. The index value is not incrementing.
You set the variable context.index to 0 and only add 1 to it once.
You may need a loop to read all the values, something like this:
for (int i = 0; i < context.data.size; i++) {
    context.setProperty("randUserAccount", context.data.get(i))  // set the value at index i, not the index itself
    // your code
}
You can add this setup script to the setup script section of the load test and access the values in the Groovy script test step using:
context.LoadTestContext.index =((int)context.LoadTestContext.index)+1
This might be a late reply, but I was facing the same problem in my load testing for some time. Using the index as a global property solved the issue for me.
The index is set to -1 initially. The code below increments the index by 1, sets the incremented value as a global property, and then picks the context data for that index.
<confirmationNumber>${=com.eviware.soapui.SoapUI.globalProperties.setPropertyValue( "index", (com.eviware.soapui.SoapUI.globalProperties.getPropertyValue( "index" ).toLong()+1 ).toString()); return (context.data.get( (com.eviware.soapui.SoapUI.globalProperties.getPropertyValue( "index" )).toInteger())) }</confirmationNumber>
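Building on that idea, here is a hedged Groovy sketch for a test step script that hands out the file values sequentially across load test threads by keeping the index in a global property (it assumes context.data was filled by the setup script from the question and wraps around at the end of the list):
def globals = com.eviware.soapui.SoapUI.globalProperties
def value
synchronized (globals) {                                    // keep two threads from taking the same index
    def current = globals.getPropertyValue("index") ?: "-1"
    int next = current.toInteger() + 1
    globals.setPropertyValue("index", next.toString())
    value = context.data.get(next % context.data.size())    // wrap around when the list is exhausted
}
context.setProperty("randUserAccount", value)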

How to copy files in Groovy

I need to copy a file in Groovy and saw some ways to achieve it on the web:
1
new AntBuilder().copy(file: "$sourceFile.canonicalPath",
                      tofile: "$destFile.canonicalPath")
2
command = ["sh", "-c", "cp src/*.txt dst/"]
Runtime.getRuntime().exec((String[]) command.toArray())
3
destination.withDataOutputStream { os ->
    source.withDataInputStream { is ->
        os << is
    }
}
4
import java.nio.file.Files
import java.nio.file.Paths
Files.copy(Paths.get(a), Paths.get(b))
The 4th way seems cleanest to me, as I am not sure how good it is to use AntBuilder and how heavy it is; I saw some people reporting issues when the Groovy version changes.
The 2nd way is OS-dependent, and the 3rd might not be efficient.
Is there something in Groovy to just copy files like in the 4th approach, or should I just use Java for it?
If you have Java 7, I would definitely go with
Path source = ...
Path target = ...
Files.copy(source, target)
With the java.nio.file.Path class, it can work with symbolic and hard links. From java.nio.file.Files:
This class consists exclusively of static methods that operate on
files, directories, or other types of files. In most cases, the
methods defined here will delegate to the associated file system
provider to perform the file operations.
Just as references:
Copy files from one folder to another with Groovy
http://groovyconsole.appspot.com/view.groovy?id=8001
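As a small usage sketch of that approach in Groovy (file names are made up): Files.copy throws FileAlreadyExistsException if the target already exists, unless you pass REPLACE_EXISTING.
import java.nio.file.Files
import java.nio.file.Paths
import java.nio.file.StandardCopyOption

def source = Paths.get('orig.txt')    // example source
def target = Paths.get('copy.txt')    // example target
Files.copy(source, target, StandardCopyOption.REPLACE_EXISTING)  // overwrite if the target exists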
My second option would be the ant task with AntBuilder.
If you are doing this in code, just use something like:
new File('copy.bin').bytes = new File('orig.bin').bytes
If this is for build-related code, this would also work, or use the Ant builder.
Note, if you are sure the files are textual you can use .text rather than .bytes.
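For example, the textual variant of the same one-liner (file names made up):
new File('copy.txt').text = new File('orig.txt').text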
If it is a text file, I would go with:
def src = new File('src.txt')
def dst = new File('dst.txt')
dst << src.text
I prefer this way:
import java.nio.file.Files

def file = new File("old.file")
def newFile = new File("new.file")
Files.copy(file.toPath(), newFile.toPath())
To append to an existing file:
def src = new File('src.txt')
def dest = new File('dest.txt')
dest << src.text
To overwrite if the file exists:
def src = new File('src.txt')
def dest = new File('dest.txt')
dest.write(src.text)
I'm using AntBuilder for such tasks. It's simple, consistent, 'battle-proven' and fun.
The 2nd approach is too OS-specific (Linux-only in your case).
The 3rd is too low-level and eats up more resources; it's useful if you need to transform the file on the way, for example to change the encoding.
The 4th looks overcomplicated to me... the NIO package is relatively new in the JDK.
At the end of the day, I'd go for the 1st option. There you can switch from the copy task to the scp task without redeveloping the script almost from scratch.
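To illustrate that last point, a rough AntBuilder sketch (host, credentials and paths are made up, and the scp task additionally needs Ant's optional SSH tasks / jsch on the classpath):
def ant = new AntBuilder()
// plain local copy
ant.copy(file: 'src/report.txt', tofile: 'dst/report.txt')
// a remote copy keeps roughly the same shape
ant.scp(file: 'src/report.txt',
        todir: 'user@remote.host:/tmp/reports',
        password: 'secret',
        trust: true)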
This is a way to do it with a platform-independent Groovy script. If anyone has questions, please ask in the comments.
def file = new File("java/jcifs-1.3.18.jar")
this.class.classLoader.rootLoader.addURL(file.toURI().toURL())
def auth_server = Class.forName("jcifs.smb.NtlmPasswordAuthentication").newInstance("domain", "username", "password")
def auth_local = Class.forName("jcifs.smb.NtlmPasswordAuthentication").newInstance(null, "local_username", "local_password")
def source_url = args[0]
def dest_url = args[1]
def auth = auth_server
//prepare source file
if (!source_url.startsWith("\\\\"))
{
    source_url = "\\\\localhost\\" + source_url.substring(0, 1) + "\$" + source_url.substring(1, source_url.length());
    auth = auth_local
}
source_url = "smb:" + source_url.replace("\\", "/");
println("Copying from Source -> " + source_url);
println("Connecting to Source..");
def source = Class.forName("jcifs.smb.SmbFile").newInstance(source_url, auth)
println(source.canRead());
// Reset the authentication to default
auth = auth_server
//prepare destination file
if (!dest_url.startsWith("\\\\"))
{
    dest_url = "\\\\localhost\\" + dest_url.substring(0, 1) + "\$" + dest_url.substring(2, dest_url.length());
    auth = auth_local
}
def dest = null
dest_url = "smb:" + dest_url.replace("\\", "/");
println("Copying To Destination -> " + dest_url);
println("Connecting to Destination..");
dest = Class.forName("jcifs.smb.SmbFile").newInstance(dest_url, auth)
println(dest.canWrite());
if (dest.exists()) {
    println("Destination folder already exists");
}
source.copyTo(dest);
For copying files in Jenkins Groovy
For Linux:
try {
    echo 'Copying the files to the required location'
    sh '''cd /install/opt/
cp /install/opt/ssl.ks /var/local/system/'''
    echo 'File is copied successfully'
}
catch (Exception e) {
    error 'Copying file was unsuccessful'
}
For Windows:
try {
    echo 'Copying the files to the required location'
    bat '''@echo off
copy "C:\\Program Files\\install\\opt\\ssl.ks" "C:\\ProgramData\\install\\opt"'''
    echo 'File is copied successfully'
}
catch (Exception e) {
    error 'Copying file was unsuccessful'
}

Why is File.separator using the wrong character?

I'm trying to add functionality to a large piece of code and am having a strange problem with file separators. Reading a file with the following code works on my PC, but fails on a Linux server. On the PC I pass this and it works:
fileName = "C:\\Test\\Test.txt";
But on the server I pass this and get "File Not Found", because the BufferedReader/FileReader statement below swaps "/" for "\":
fileName = "/opt/Test/Test.txt";
System.out.println("fileName: "+fileName);
reader = new BufferedReader(new FileReader(new File(fileName)));
This produces the following output when run on the Linux server:
fileName: /opt/Test/Test.txt
File Not Found: java.io.FileNotFoundException: \opt\Test\Test.txt (The system cannot find the path specified)
When I create a simple Test.java file to try to replicate this, it behaves as expected, so something in the larger code base is causing the BufferedReader/FileReader line to behave as if it's on a PC, not a Linux box. Any ideas what that could be?
I don't see where you used File.separator. Try this instead of hard-coding the path separators:
fileName = File.separator + "opt" + File.separator + "Test" + File.separator + "Test.txt";
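For illustration only (not part of the original answer), the same idea in Groovy, joining path segments instead of hard-coding separators:
// builds "/opt/Test/Test.txt" on Linux and "\opt\Test\Test.txt" on Windows
def fileName = ["", "opt", "Test", "Test.txt"].join(File.separator)
def reader = new BufferedReader(new FileReader(new File(fileName)))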
