Groovy: datetime delta and JsonOutput.toJson

I'm trying to return some data from the database. When I call JsonOutput.toJson() on that data, my datetime values come back shifted.
This is my code:
def result = groovy.json.JsonOutput.toJson(data)
println(data)
println(result)
response.setContentType("application/json");
The first prints the following:
[[ DateStart:2019-01-14 09:34:51.0, DateEnd:2019-01-14 10:27:22.68]]
And the second, after the JSON formatting, shows the dates shifted by one hour:
[{"DateStart":"2019-01-14T08:34:51+0000","DateEnd":"2019-01-14T09:27:22+0000"}]
Any tips on formatting to JSON without shifting the dates?

Based on the data.toString() output format:
[[ DateStart:2019-01-14 09:34:51.0, DateEnd:2019-01-14 10:27:22.68]]
it looks like you are dealing with JSON serialization of the java.sql.Timestamp type. JsonOutput serializes dates in UTC (note the +0000 offset in your output), which is where the one-hour shift comes from. If you want to prevent the default timestamp serializer from being used, you will have to format the timestamps manually before JSON serialization happens.
Consider the following example:
import groovy.json.JsonOutput
import java.sql.Timestamp
def format = "yyyy-MM-dd'T'HH:mm:ss.S" // HH = 24-hour clock (hh would be 12-hour)
def date = new Date().parse(format, "2019-01-14T09:34:51.0")
def dateStart = new Timestamp(date.time)
def dateEnd = new Timestamp((date + 20).time)
def data = [[DateStart: dateStart, DateEnd: dateEnd]]
println "Raw data:"
println data
println "\nJSON without formatting:"
println JsonOutput.toJson(data)
data = data.collect { el ->
    el.collectEntries { k, v ->
        def value = v instanceof Timestamp ? v.format(format) : v
        return [(k): value]
    }
}
println "\nJSON after formatting:"
println JsonOutput.toJson(data)
The output:
Raw data:
[[DateStart:2019-01-14 09:34:51.0, DateEnd:2019-02-03 09:34:51.0]]
JSON without formatting:
[{"DateStart":"2019-01-14T08:34:51+0000","DateEnd":"2019-02-03T08:34:51+0000"}]
JSON after formatting:
[{"DateStart":"2019-01-14T09:34:51.0","DateEnd":"2019-02-03T09:34:51.0"}]
The most important part (formatting a timestamp value to its string representation) happens here:
data = data.collect { el ->
    el.collectEntries { k, v ->
        def value = v instanceof Timestamp ? v.format(format) : v
        return [(k): value]
    }
}
It assumes that your list of maps may contain other key-value pairs which don't hold a timestamp value. If a value is a Timestamp, we format it with the pattern defined above; otherwise we return the value unmodified.
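The one-hour shift itself is just a timezone conversion: the wall-clock time was recorded in a local zone, and JsonOutput prints it in UTC. A quick sketch of the arithmetic in Python (the UTC+1 zone is an assumption inferred from the question's output, not stated in it):

```python
from datetime import datetime, timezone, timedelta

# Assume the server runs in a UTC+1 zone, consistent with the
# one-hour shift shown in the question's output.
local_tz = timezone(timedelta(hours=1))
local_time = datetime(2019, 1, 14, 9, 34, 51, tzinfo=local_tz)

# Converting to UTC moves the clock back by the zone offset.
utc_time = local_time.astimezone(timezone.utc)
print(utc_time.isoformat())  # → 2019-01-14T08:34:51+00:00
```

Same instant, different rendering: formatting the timestamp yourself simply keeps the local rendering instead of the UTC one.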

Related

groovy saving lines from file into collection

Hi, I want to save lines from a text file into a collection in Groovy, keeping only chosen lines.
I have a file containing this plain text:
!!file-number1:
!!123.sql
!!123.jpeg
!!333.jpeg
!!texttextext.jpeg
and I want to save it to a collection with this result:
collection = ['123.jpeg', '333.jpeg', 'texttextext.jpeg']
Only the .jpeg entries, and without the "!!" prefix.
String filePath = "path/to/file.txt"
File myFile = new File(filePath)
def collection = myFile.collect().retainAll {it == '*.jpeg'}
println collection
And my question is how to remove or ignore things like "!!", and how to print that collection, because I only get the output "true".
You can use findResults to "filter" and "map" in one go. e.g.
def lines = """!!file-number1:
!!123.sql
!!123.jpeg
!!333.jpeg
!!texttextext.jpeg"""
println lines.readLines().findResults{ def m = it =~ /!!(.*\.jpeg)/; m ? m[0][1] : null }
// → [123.jpeg, 333.jpeg, texttextext.jpeg]
Or a little bit easier to read, without using the Matcher object:
String filePath = "path/to/file.txt"
def lines = new File(filePath)
    .readLines()
    .findAll { it ==~ /.*jpeg/ }
    .collect { it[2..-1] }
println lines
In your example, retainAll() modifies the initial collection and returns a boolean value. See here: https://docs.groovy-lang.org/latest/html/groovy-jdk/java/util/Collection.html
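For comparison, the same filter-and-map in one pass can be sketched in Python (the file contents are inlined as a string for the example):

```python
import re

lines = """!!file-number1:
!!123.sql
!!123.jpeg
!!333.jpeg
!!texttextext.jpeg"""

# Keep only the .jpeg lines and strip the leading "!!" in one pass.
collection = [m.group(1)
              for m in (re.match(r'!!(.*\.jpeg)$', line)
                        for line in lines.splitlines())
              if m]
print(collection)  # → ['123.jpeg', '333.jpeg', 'texttextext.jpeg']
```

Like Groovy's findResults, non-matching lines simply drop out instead of producing nulls.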

How to print many variables' names and their values

I have a big chunk of JSON. I assign the values I need to more than 10 variables. Now I want to print every variable_name = value pair using print. How can I accomplish this task?
Expected output is followed
variable_name_1 = car
variable_name_2 = house
variable_name_3 = dog
Here is my updated code example:
import json

leagues = open("../forecast/endpoints/leagues.txt", "r")
leagues_json = json.load(leagues)
data_json = leagues_json["api"]["leagues"]

for item in data_json:
    league_id = item["league_id"]
    league_name = item["name"]
    coverage_standings = item["coverage"]["standings"]
    coverage_fixtures_events = item["coverage"]["fixtures"]["events"]
    coverage_fixtures_lineups = item["coverage"]["fixtures"]["lineups"]
    coverage_fixtures_statistics = item["coverage"]["fixtures"]["statistics"]
    coverage_fixtures_players_statistics = item["coverage"]["fixtures"]["players_statistics"]
    coverage_players = item["coverage"]["players"]
    coverage_topScorers = item["coverage"]["topScorers"]
    coverage_predictions = item["coverage"]["predictions"]
    coverage_odds = item["coverage"]["odds"]
    print("leagueName:", league_name,
          "coverageStandings:", coverage_standings,
          "coverage_fixtures_events:", coverage_fixtures_events,
          "coverage_fixtures_lineups:", coverage_fixtures_lineups,
          "coverage_fixtures_statistics:", coverage_fixtures_statistics,
          "coverage_fixtures_players_statistics:", coverage_fixtures_players_statistics,
          "coverage_players:", coverage_players,
          "coverage_topScorers:", coverage_topScorers,
          "coverage_predictions:", coverage_predictions,
          "coverage_odds:", coverage_odds)
Since you have the JSON data loaded as Python objects, you should be able to use regular loops to deal with at least some of this.
It looks like you're adding underscores to indicate nesting levels in the JSON object, so that's what I'll do here:
import json

leagues = open("../forecast/endpoints/leagues.txt", "r")
leagues_json = json.load(leagues)
data_json = leagues_json["api"]["leagues"]

def print_nested_dict(data, *, sep='.', context=''):
    """Print a dict, prefixing all values with their keys,
    and joining nested keys with 'sep'.
    """
    for key, value in data.items():
        if context:
            key = context + sep + key
        if isinstance(value, dict):
            print_nested_dict(value, sep=sep, context=key)
        else:
            print(key, ': ', value, sep='')

# data_json is a list of leagues, so print each one
for item in data_json:
    print_nested_dict(item, sep='_')
If there is other data in data_json that you do not want to print, the easiest solution might be to add a variable listing the names you want, then add a condition to the loop so it only prints those names.
def print_nested_dict(data, *, sep='.', context='', only_print_keys=None):
    ...
    for key, value in data.items():
        if only_print_keys is not None and key not in only_print_keys:
            continue  # skip ignored elements
        ...
That should work fine unless there is a very large amount of data you're not printing.
If you really need to store the values in variables for some other reason, you could assign to global variables if you don't mind polluting the global namespace.
def print_nested_dict(...):
    ...
    else:
        # 'key' already carries the joined context prefix at this point
        print(key, ': ', value, sep='')
        globals()[key] = value
    ...
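A minimal self-contained sketch of this globals() idea, using a hypothetical helper named assign_nested and a made-up nested dict:

```python
def assign_nested(data, *, sep='_', context=''):
    """Recursively walk a nested dict, printing each leaf as
    'name = value' and binding it as a global variable."""
    for key, value in data.items():
        name = context + sep + key if context else key
        if isinstance(value, dict):
            assign_nested(value, sep=sep, context=name)
        else:
            print(name, '=', value)
            globals()[name] = value

assign_nested({"coverage": {"standings": True, "odds": False}})
print(coverage_standings)  # → True
```

This binds coverage_standings and coverage_odds at module level, which is convenient for quick scripts but pollutes the global namespace, as the answer notes.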

How can I export to CSV when JSON some records contain different keys

I am using an API to get JSON results, then converting them to CSV. However, some records in the results are missing keys, so the CSV ends up with values shifted into the wrong columns.
I have run my script and also ran the API in Postman, and the JSON output is the same. I used https://json-csv.com/ to convert the JSON to CSV and compared it to my output. Its output has the data in the correct columns, leading me to believe that some code in the background detects the missing key/value and fills it with a null value.
import json
import requests
import csv

def get_data():
    group_id = 9039
    api_token = 'xxxxxxxxxxxxxxxxxxxxxxxxxxx'
    api_url = 'https://api.samsara.com/v1'
    endpoint_url = api_url + '/fleet/drivers'
    my_params = {"access_token": api_token}
    my_data = {"groupId": group_id}
    resp = requests.post(url=endpoint_url, params=my_params, json=my_data)
    array = resp.json()
    text = json.dumps(array)
    return text

def write_file(filename, text):
    dataset = json.loads(text)
    drivers = dataset['drivers']
    csvFile = open(filename, 'w')
    csvwriter = csv.writer(csvFile)
    # write header
    if len(drivers) > 0:
        keys = drivers[0].keys()
        csvwriter.writerow(keys)
    # write data
    for line in drivers:
        csvwriter.writerow(line.values())
    csvFile.close()

text = get_data()
write_file('drivers.csv', text)
From the JSON output, here is a partial result.
{
    "drivers": [
        {
            "id": 158830,
            "groupId": 9039,
            "vehicleId": 212014918234731,
            "currentVehicleId": 212014918431705,
            "username": "rdoherty"
        },
        {
            "id": 134808,
            "groupId": 9039,
            "vehicleId": null,
            "username": "sbermingham"
        }
    ]
}
Note that the second record does not have the "currentVehicleId" key:value. The result is that when I convert to CSV, if there is a missing value, all other values are shifted to the column to the left of where it should be.
id groupId vehicleId currentVehicleId username
158830 9039 2.12015E+14 2.12015E+14 rdoherty
134808 9039 null sbermingham
I want the CSV conversion to ensure that all missing values are replaced with null.
I'd recommend modifying each driver's dictionary, inserting line[key] = None or line[key] = '' for any keys that are missing.
Step 1: get all possible keys
If you already know all possible keys you could have, this is pretty easy. Just store all the keys in a list.
If not, you'll have to loop through each driver and find all the unique keys.
# write header
driver_keys = []
for d in drivers:
    for key in d.keys():
        if key not in driver_keys:
            driver_keys.append(key)
csvwriter.writerow(driver_keys)
Step 2: Add the missing values to each line as you go, then write each row in the header's key order. Iterating over driver_keys for every row guarantees the values line up with the column headings. (Writing line.values() directly would not: dicts preserve insertion order, so keys added in this step would land at the end of the row rather than under their column.)
# write data
for line in drivers:
    for key in driver_keys:
        if key not in line.keys():
            line[key] = None  # or line[key] = '' if you like
    csvwriter.writerow([line[key] for key in driver_keys])
csvFile.close()
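Alternatively, the standard library's csv.DictWriter does both steps for you: it writes values in fieldnames order and fills missing keys with its restval parameter. A sketch using the two sample records from the question (written to an in-memory buffer here instead of a file):

```python
import csv
import io

drivers = [
    {"id": 158830, "groupId": 9039, "vehicleId": 212014918234731,
     "currentVehicleId": 212014918431705, "username": "rdoherty"},
    {"id": 134808, "groupId": 9039, "vehicleId": None,
     "username": "sbermingham"},
]

# Collect every key that appears in any record, preserving first-seen order.
fieldnames = list(dict.fromkeys(k for d in drivers for k in d))

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=fieldnames, restval='null')
writer.writeheader()
writer.writerows(drivers)
print(out.getvalue())
```

The second row comes out as 134808,9039,,null,sbermingham: the missing currentVehicleId becomes 'null' in its own column, and username stays under its heading.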

How to append comma separated value dynamically in groovy

I have comma-separated values which I want to iterate over, appending to an output string like below:
def statusCode = '1001,1002,1003'
Output should look like this:
[item][code]=1001|[item][code]=1002|[item][code]=1003
If statusCode has only two values, for example:
def statusCode = '1001,1002'
Then output should be
[item][code]=1001|[item][code]=1002
I tried something like the code below; since I'm new to Groovy, I'm not sure how to achieve this in the best way:
def statusCode = '1001,1002,1003'
String[] myData = statusCode.split(",")
def result
for (String s : myData) {
    result <<= "[item][code]=" + s + "|"
}
System.out.println("result :" + result)
You can use collect and join to simplify the code:
def result = statusCode.split(',').collect{"[item][code]=$it"}.join('|')
That returns [item][code]=1001|[item][code]=1002|[item][code]=1003
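For comparison, the same split/map/join pattern in Python, which also avoids the trailing | that the loop-and-append attempt would produce:

```python
status_code = '1001,1002,1003'

# Split on commas, wrap each code, then join the pieces with '|'.
result = '|'.join(f'[item][code]={s}' for s in status_code.split(','))
print(result)  # → [item][code]=1001|[item][code]=1002|[item][code]=1003
```

The join-based form scales to any number of codes, so the two-value case falls out for free.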

How to Trim the first two characters of a column in csv using groovy

<taskdef name="groovy" classname="org.codehaus.groovy.ant.Groovy"/>
<groovy>
new File("C://RxBen//exports//Control_Exception__c_Exportupdated.csv").withWriter { out ->
    new File("C://RxBen//exports//Control_Exception__c_Export.csv").splitEachLine(",(?=(?:[^\"]*\"[^\"]*\")*[^\"]*\$)") { ID, Don_t_Work__c, Forwarding_fax_number__c, No_Go__c ->
        out.println "${ID},${Forwarding_fax_number__c}AAA,${s}"
    }
}
</groovy>
You can call substring(2) on the String in that column; this takes everything from the third character to the end of the string.
Example:
def s = "AATestString"
def newString = s.substring(2)
assert newString == "TestString"
Updated:
Try this code, which follows Ozsaffer's approach:
@Grab('com.xlson.groovycsv:groovycsv:1.1')
import static com.xlson.groovycsv.CsvParser.parseCsv

def df = new FileReader('filename.csv')
def data = parseCsv(df, readFirstLine: false)
for (def row : data) {
    println row.values[0].substring(2) // row.values[0] is the first column
}
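For comparison, the same trim in Python is a slice on the first column (the CSV content below is made up for the example, and read from an in-memory buffer instead of a file):

```python
import csv
import io

# Hypothetical CSV where the first column starts with two characters to strip.
rows = csv.reader(io.StringIO("AA12345,foo\nBB67890,bar\n"))

# row[0][2:] drops the first two characters, like substring(2) in Groovy.
trimmed = [row[0][2:] for row in rows]
print(trimmed)  # → ['12345', '67890']
```

Using the csv module rather than a hand-rolled split keeps quoted fields containing commas intact, the same reason the groovycsv parser is used above.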
