I want to specify the logging timestamp in a particular format, %d{YYYY-MM-dd HH:mm:ss.SSS}, but no matter how I manipulate the pattern, the timestamp is displayed as '2015-10-19 00:47:15,423'.
Specifying %d{ISO8601} or %d{ABSOLUTE} does take effect. I am wondering how the timestamp format is picked when a custom pattern is specified.
If I want to change the comma separator to a period, is there a way to accomplish this?
I used %d{yyyy-MM-dd'T'HH:mm:ss.SSSZ} and it worked perfectly.
Note that the T is within single quotes so that it is output as is.
The Z at the end adds the timezone but it can be omitted too.
I had used the year format in capital letters (YYYY instead of yyyy), which is why the timestamp was falling back to the default format.
With this change I can manipulate the timestamp with a custom format specifier without any issue.
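To illustrate why the capitalisation matters: in Java-style date patterns, lowercase y is the calendar year while uppercase Y is the week-based year, so %d{YYYY-...} is at best a different value and at worst a pattern the layout refuses and falls back from. A minimal sketch in plain Java (not the logging framework itself) showing the difference:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class YearPatternDemo {
    public static void main(String[] args) {
        // 29 December 2014 falls in week 1 of the 2015 week-based year
        LocalDate d = LocalDate.of(2014, 12, 29);
        // Lowercase y = calendar year, uppercase Y = week-based year
        System.out.println(DateTimeFormatter.ofPattern("yyyy-MM-dd").format(d)); // 2014-12-29
        System.out.println(DateTimeFormatter.ofPattern("YYYY-MM-dd").format(d)); // 2015-12-29
    }
}
```

Around the turn of a year the two patterns disagree, which is the usual symptom of using YYYY by accident.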
I'm looking for a way to specify a field format when converting an .xlsx file to CSV using the ssconvert tool on Linux.
I need this because I have a field that is a floating-point type, but after conversion it comes out as a string.
For example: 99,8923 becomes "99,8923", but other values in the same field are parsed correctly, e.g. 54 stays 54, which is expected.
The problem is the " added around the number. When parsing this number with Logstash, it becomes 998923 even if I specify in Logstash that it is a float.
Thanks for your help.
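For what it's worth, the digits are lost downstream because 99,8923 uses a decimal comma, which a default float parse either rejects or strips. A locale-aware parse, or rewriting the comma to a dot before ingestion, keeps the value; a small, purely illustrative sketch in Java (not ssconvert or Logstash itself):

```java
import java.text.NumberFormat;
import java.util.Locale;

public class DecimalComma {
    public static void main(String[] args) throws Exception {
        String raw = "99,8923";                          // comma used as the decimal separator
        // Option 1: parse with a locale whose decimal separator is the comma
        Number viaLocale = NumberFormat.getInstance(Locale.FRANCE).parse(raw);
        // Option 2: normalise the comma to a dot before parsing
        double viaReplace = Double.parseDouble(raw.replace(',', '.'));
        System.out.println(viaLocale + " / " + viaReplace);  // 99.8923 / 99.8923
    }
}
```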
I have a CSV file that I'm opening in Excel. It has dates in this format:
2019-09-15T00:11:57.4030000Z
I want this:
2019-09-15 00:11:57.403+00
I think I may be able to use the "Format Cells > Custom" option in Excel, but what do I need to specify as the 'Type' for the format? I tried using this to get most of it, but it doesn't work:
yyyy-mm-dd hh:mm:ss.000
And when I try to apply some of the pre-existing types built into Excel, they don't seem to change the date format - it's as if the original date format isn't being recognised as valid.
My OP asks how to do this in Excel; however, I found it easier to transform the file with a regex. I used the excellent dnGrep tool for this, with the following pattern, which captures the date and time in substitution groups and then uses them in the replacement as $1 and $2:
search - ([0-9]{4}[-][0-9]{2}[-][0-9]{2})[T]([0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3})[0-9]{4}Z
replace - $1 $2+00
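If anyone would rather script this than use an editor, the same pattern works with any PCRE-style regex engine; a rough equivalent in Java (file reading and writing omitted, class name illustrative):

```java
import java.util.regex.Pattern;

public class FixTimestamps {
    public static void main(String[] args) {
        String line = "2019-09-15T00:11:57.4030000Z";
        // Keep the date and the time down to milliseconds; drop the extra digits and the trailing Z
        Pattern p = Pattern.compile(
                "([0-9]{4}-[0-9]{2}-[0-9]{2})T([0-9]{2}:[0-9]{2}:[0-9]{2}\\.[0-9]{3})[0-9]{4}Z");
        System.out.println(p.matcher(line).replaceAll("$1 $2+00")); // 2019-09-15 00:11:57.403+00
    }
}
```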
I have two date formats for validStartDtTm (the name used for the valid axis) stored in one collection. I want to harmonize all of the dates into one uniform format, and wanted to know whether there are any best practices for bi-temporal date formats in MarkLogic.
Current formats are as follows:
2019-04-09T10:54:37.861434Z - generated by front end users and stored without transformation
2019-04-09T10:54:37.8614534-04:00 - ingested from the back end in the format 'DD/MM/YYYY HH:MM:SS' and transformed using xdmp.parseDateTime
Thanks!
These are both valid xs:dateTime values so they don't need harmonization per se, but each indicates a different time zone, so you should check to be sure those are what is intended.
Both times are in the same format. The "Z" trailing the first time indicates "Zulu" time, i.e. UTC, the equivalent of 2019-04-09T10:54:37.861434-00:00. The "-04:00" trailing the second time indicates the time is behind UTC by 4 hours and 0 minutes.
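If you do decide you want every validStartDtTm stored with a single canonical offset, normalising to UTC at ingest is the usual approach. Here is the idea sketched in plain Java (java.time) rather than MarkLogic's own APIs, using the two sample values above:

```java
import java.time.OffsetDateTime;
import java.time.ZoneOffset;

public class NormalizeToUtc {
    public static void main(String[] args) {
        OffsetDateTime fromFrontEnd = OffsetDateTime.parse("2019-04-09T10:54:37.861434Z");
        OffsetDateTime fromBackEnd  = OffsetDateTime.parse("2019-04-09T10:54:37.8614534-04:00");
        // Same instants, expressed with a single canonical offset (UTC)
        System.out.println(fromFrontEnd.withOffsetSameInstant(ZoneOffset.UTC)); // 2019-04-09T10:54:37.861434Z
        System.out.println(fromBackEnd.withOffsetSameInstant(ZoneOffset.UTC));  // 2019-04-09T14:54:37.861453400Z
    }
}
```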
How can we convert a date in YYYYWWD format into the normal date format YYYY-MM-DD using Syncsort?
I think you are out of luck. Syncsort has the same features as DFSORT, and according to this - https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.2.0/com.ibm.zos.v2r2.icea100/ice2ca_DFSORT_data_formats.htm - DFSORT does not recognise a YYYYWWD date format. It may be possible to do the maths yourself with Syncsort, but I can't see any way to do this.
YYYYWWD is a non-standard date format, so this is not really surprising. The best solution (if you cannot get the data in the correct format initially) would be to process the data with REXX before sorting it, if the volume of data allows this.
Unless, of course, this is a 'homework' question and you have to use Syncsort? (which would imply that it is possible)
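If the records can be pre-processed outside the sort step (the REXX route, or anything else available), the week-date arithmetic itself is straightforward. A sketch in Java, assuming ISO week numbering, which may or may not match how the source system defines WW and D:

```java
import java.time.LocalDate;
import java.time.temporal.WeekFields;

public class WeekDate {
    public static void main(String[] args) {
        String raw = "2019427";                          // YYYYWWD: year 2019, week 42, day 7
        int year = Integer.parseInt(raw.substring(0, 4));
        int week = Integer.parseInt(raw.substring(4, 6));
        int day  = Integer.parseInt(raw.substring(6, 7));

        LocalDate date = LocalDate.of(year, 1, 4)        // 4 January is always in ISO week 1
                .with(WeekFields.ISO.weekOfWeekBasedYear(), week)
                .with(WeekFields.ISO.dayOfWeek(), day);  // 1 = Monday ... 7 = Sunday
        System.out.println(date);                        // 2019-10-20
    }
}
```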
A reporting service generates a CSV file, and certain columns (oddly enough) have mixed date/time formats: some rows contain datetimes expressed as m/d/y, others as d.m.y.
When applying =TYPE() it will return either 1 or 2 (Excel recognizes the value either as a number - the Excel timestamp - or as text).
How can I convert any kind of wrong date/time format into a "normal" format that can be used, and ensure some consistency in the data?
I am thinking of two solutions at the moment:
I should somehow process the odd data with existing Excel functions
I should ask for the report to be generated correctly from the very beginning and avoid this hassle in the first place
Thanks
Certainly your second option is the way to go in the medium-to-long term. But if you need a solution now, and if you have access to a text editor that supports Perl-compatible regular expressions (like Notepad++, UltraEdit, EditPad Pro etc.), you can use the following regex:
(^|,)([0-9]+)/([0-9]+)/([0-9]+)(?=,|$)
to search for all dates in the format m/d/y, surrounded by commas (or at the start/end of the line).
Replace that with
\1\3.\2.\4
and you'll get the dates in the format d.m.y.
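The same substitution can also be scripted if a suitable editor isn't to hand; a rough Java equivalent of the pattern above (the \1 ... \4 backreferences become $1 ... $4):

```java
import java.util.regex.Pattern;

public class SwapDates {
    public static void main(String[] args) {
        String row = "id,10/19/2015,value";
        // m/d/y between commas (or at line start/end) -> d.m.y
        Pattern p = Pattern.compile("(^|,)([0-9]+)/([0-9]+)/([0-9]+)(?=,|$)");
        System.out.println(p.matcher(row).replaceAll("$1$3.$2.$4")); // id,19.10.2015,value
    }
}
```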
If you can't get the data changed, then you may have to resort to another column that translates the dates (this assumes the date you want to change is in A1):
=IF(ISERR(DATEVALUE(A1)),DATE(VALUE(RIGHT(A1,LEN(A1)-FIND(".",A1,4))),VALUE(MID(A1,FIND(".",A1)+1,2)),VALUE(LEFT(A1,FIND(".",A1)-1))),DATEVALUE(A1))
It tests whether it can read the text as a date; if that fails, it chops up the string and converts it to a date, otherwise it reads the date directly. Either way, it should give you a date you can use.