Iterating through a cursor in Sybase ASE seems to do multiple loops

I am creating a cursor with a SELECT statement that returns 4 values (18, 13, 14 and 15). I am trying to iterate through the cursor and, for now, just print each value. I am expecting 4 print statements, but I see a lot more than that.
Here is the code:
PRINT '***** *****'
GO
DECLARE curs CURSOR FOR
SELECT ID FROM CUSTOMER WHERE SSN LIKE '%1803'
GO
DECLARE @ID int
OPEN curs
FETCH NEXT curs INTO @ID
WHILE @@sqlstatus = 0
BEGIN
    PRINT '* current value: %1!', @ID
    FETCH NEXT curs INTO @ID
END
CLOSE curs
DEALLOCATE CURSOR curs
GO
Here is the output:
***** *****
* current value: 18
* current value: 18
* current value: 13
* current value: 18
* current value: 13
* current value: 14
* current value: 18
* current value: 13
* current value: 14
* current value: 15
* current value: 18
* current value: 13
* current value: 14
* current value: 15
It seems like a simple iteration over a cursor, and I cannot understand why I am seeing so many print statements; I only want to see 18, 13, 14 and 15. I am using Sybase ASE 15.5 and the RazorSQL client. Can someone help me with this?
Edit:
I didn't see the issue when I used Sybase Central (for ASE); the results were inconsistent when I used other IDEs.

First, make sure you always use an ORDER BY with any SQL, so that you get rows back in the order you expect. You don't know how the database will optimise the query at the back end, so be specific.
Also run a direct SELECT to check how many rows match SSN LIKE '%1803' in the first place, so you can cross-check that row count against what the cursor returns.


Python QtSql.QSqlQuery result wrong in connection with SQL Server database and float values

I create a GUI with PyQt5 and display a SQL Server database table in a tableView widget.
The id, date and text columns are OK, but I also have four float columns. The values from the float columns come back as None when there is a value in the database, and as 0 when the value is NULL.
Developer system is Win11 + VSCode + Python 3.9.6 32Bit with PyQt5 v5.15.4
Database runs on: Win10 x86 + SQL Server 2012 Express, access over TCP/IP port 1433
Here is my code to get the values from the DB
from PyQt5.QtSql import *
SERVER = '127.0.0.1'
DATABASE = 'DbName'
USERNAME = 'user'
PASSWORD = 'password'
db = QSqlDatabase.addDatabase('QODBC')
db.setDatabaseName(f'Driver={{SQL SERVER}}; Server={SERVER}; Database={DATABASE}; UID={USERNAME}; PWD={PASSWORD}')
db.open()
GET_RESULTS = '''SELECT Id, ModifiedAt, TreadDepthFL, TreadDepthFR FROM Measurement
WHERE Id < 4;
'''
data = QSqlQuery(db)
data.prepare(GET_RESULTS)
data.exec()
while (data.next()):
print(" | " + str(data.value(0)) + " | " + str(data.value(1)) + " | " + str(data.value(2))+ " | " + str(data.value(3))+ " | ")
db.close()
The result of this is:
id | ModifiedAt                                           | TreadDepthFL | TreadDepthFR
1  | PyQt5.QtCore.QDateTime(2021, 9, 16, 19, 9, 13, 990)  | 0.0          | 0.0
2  | PyQt5.QtCore.QDateTime(2021, 9, 16, 19, 16, 2, 137)  | None         | None
3  | PyQt5.QtCore.QDateTime(2021, 9, 17, 8, 36, 41, 607)  | None         | None
If I check the database with a database tool like HeidiSQL, the values are:
Id | ModifiedAt              | TreadDepthFL | TreadDepthFR
1  | 2021-09-16 19:09:13,990 | NULL         | NULL
2  | 2021-09-16 19:16:02,137 | 6.5414       | 7.1887
3  | 2021-09-17 08:36:41,607 | 6.31942      | 6.41098
If I move the ModifiedAt to the end, I get the following strange result:
GET_RESULTS = '''SELECT Id, TreadDepthFL, TreadDepthFR, ModifiedAt FROM Measurement
WHERE Id < 4;
'''
Id | TreadDepthFL | TreadDepthFR | ModifiedAt
1  | 0.0          | 0.0          | PyQt5.QtCore.QDateTime(2021, 9, 16, 19, 9, 13, 990)
2  | None         | None         | PyQt5.QtCore.QDateTime()
3  | None         | None         | PyQt5.QtCore.QDateTime()
Is there something missing in the code to handle float values with PyQt5.QtSql?
I experience exactly the same behavior: the float fields are not read correctly when they are non-NULL, and the fields following the float field are read incorrectly too.
I am using C++ Qt 4.8.7 on Win10 x64. The problem appeared with a recent Windows security update, KB5019959; uninstalling the update helps. I am still searching for a better solution.
It seems that only the ordering of the query matters (not the order of the fields in the database). So, reordering the fields in the query and accessing them by name will help at least to read the rest of the fields.
UPDATE: There seems to be an easy solution. Just change the type of the column into decimal (adjust the precision to your needs), i.e.
alter table TableName alter column ColumnName decimal(18,6);

Convert UTC time to specific time zone in Azure ADX Kusto query

I have millions of records in Azure Data Explorer. Each of these records has a timestamp value associated with it. I want to be able to convert this timestamp value into a specific time zone.
For example in SQL I use AT TIME ZONE to convert timestamp value from one zone into another -
Select CONVERT(datetime, timestampvalueColumn) AT TIME ZONE 'UTC' AT TIME ZONE 'US Eastern Standard Time' as 'TimeInEST' from Table;
I am not willing to use a fixed offset value, as that doesn't account for daylight saving changes.
How can I do this with the Kusto query language in ADX?
Well, the Kusto team is moving fast :-)
Support for timezones conversion has been added:
datetime_local_to_utc()
datetime_utc_to_local()
// Sample generation. Not part of the solution
let t = materialize(range i from 1 to 15 step 1 | extend dt_utc = ago(rand()*365d*10));
// Solution Starts here
t
| extend dt_et = datetime_utc_to_local(dt_utc, "US/Eastern")
| extend offset = dt_et - dt_utc
i  | dt_utc                       | dt_et                        | offset
5  | 2012-12-03T17:24:51.6057076Z | 2012-12-03T12:24:51.6057076Z | -05:00:00
14 | 2012-12-10T05:04:17.8507406Z | 2012-12-10T00:04:17.8507406Z | -05:00:00
10 | 2013-03-23T14:42:00.4276416Z | 2013-03-23T10:42:00.4276416Z | -04:00:00
15 | 2013-10-01T06:28:36.4665806Z | 2013-10-01T02:28:36.4665806Z | -04:00:00
11 | 2017-07-18T06:10:30.9963876Z | 2017-07-18T02:10:30.9963876Z | -04:00:00
3  | 2017-11-17T21:57:58.4443366Z | 2017-11-17T16:57:58.4443366Z | -05:00:00
6  | 2018-05-09T03:36:24.7533896Z | 2018-05-08T23:36:24.7533896Z | -04:00:00
12 | 2018-06-05T17:36:41.7970716Z | 2018-06-05T13:36:41.7970716Z | -04:00:00
4  | 2018-08-03T16:25:19.9323686Z | 2018-08-03T12:25:19.9323686Z | -04:00:00
8  | 2019-02-21T17:33:52.9957996Z | 2019-02-21T12:33:52.9957996Z | -05:00:00
2  | 2020-09-24T18:37:08.0049776Z | 2020-09-24T14:37:08.0049776Z | -04:00:00
1  | 2020-12-09T19:57:23.7480626Z | 2020-12-09T14:57:23.7480626Z | -05:00:00
7  | 2021-01-17T13:42:55.0632136Z | 2021-01-17T08:42:55.0632136Z | -05:00:00
9  | 2021-03-04T23:44:01.7192366Z | 2021-03-04T18:44:01.7192366Z | -05:00:00
13 | 2022-06-04T16:26:57.8826486Z | 2022-06-04T12:26:57.8826486Z | -04:00:00
Usually the answer is "don't do it in Kusto": do it in the client that reads the results from Kusto, which almost certainly has a utc-to-local-time or utc-to-this-timezone function.
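For instance, in a Python client the conversion (including DST) is a few lines with the standard zoneinfo module; a minimal sketch, not tied to any particular Kusto client library, using the IANA zone America/New_York for US Eastern:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

def utc_to_eastern(dt_utc: datetime) -> datetime:
    """Convert a UTC datetime (naive datetimes are assumed UTC) to US Eastern, DST-aware."""
    if dt_utc.tzinfo is None:
        dt_utc = dt_utc.replace(tzinfo=timezone.utc)
    return dt_utc.astimezone(ZoneInfo("America/New_York"))

# Winter date falls in EST (UTC-5), summer date in EDT (UTC-4)
print(utc_to_eastern(datetime(2020, 12, 9, 19, 57)))  # 2020-12-09 14:57:00-05:00
print(utc_to_eastern(datetime(2018, 6, 5, 17, 36)))   # 2018-06-05 13:36:00-04:00
```

The two sample dates match rows from the Kusto table above, so the client-side result agrees with datetime_utc_to_local.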
You can build a convenience function using an idea similar to the function given below. Note that the conversion handles DST (Daylight Saving Time) as well. You just need a way to map a place to its timezone string; in the function that follows, the mapping is from a Brazilian state abbreviation to its timezone string.
See the documentation for a list of available timezones.
.create-or-alter function with (
docstring = 'Given a state (UF) and a UTC datetime, returns the local time. GMT-5 west of AM is not handled.'
) ToLocalDatetime(state: string, dtutc: datetime) {
let selected_tz = iff('GO,DF,MG,ES,RJ,SP,PR,SC,RS' has state, 'America/Sao_Paulo',
iff('MA,PI,CE,RN,PB' has state, 'America/Fortaleza',
iff('AL,SE' has state, 'America/Maceio',
iff('BA' == state, 'America/Bahia',
iff('RR' == state, 'America/Boa_Vista',
iff('MS' == state, 'America/Campo_Grande',
iff('MT' == state, 'America/Cuiaba',
iff('AM' == state, 'America/Manaus',
iff('PA,AP' has state, 'America/Belem',
iff('AC' == state, 'America/Rio_Branco',
iff('RO' == state, 'America/Porto_Velho',
iff('PE' == state, 'America/Recife',
iff('TO' == state, 'America/Araguaina', '')))))))))))));
let localdt = datetime_utc_to_local(dtutc, selected_tz);
let dt_hr = split(format_datetime(localdt, "yyyy-MM-dd HH:mm:ss"), " ");
iff(isnotempty(localdt),
strcat(dt_hr[0], "T", dt_hr[1], format_timespan(localdt - dtutc, "HH:mm")),
'')
}
A couple of tests around the moment DST ended in the Brazilian DF state:
print(ToLocalDatetime('DF', datetime('2019-02-17 01:00:00')))
Output: 2019-02-16T23:00:00-02:00
print(ToLocalDatetime('DF', datetime('2019-02-17 02:00:00')))
Output: 2019-02-16T23:00:00-03:00
I agree with the other answers stating that it is better to do this on the client side in most cases. Additionally, the iff sequence in the function is ugly. For a more elegant solution, it is possible to define a datatable such as:
datatable(state:string, tz:string) [
'GO,DF,MG,ES,RJ,SP,PR,SC,RS', 'America/Sao_Paulo',
'MA,PI,CE,RN,PB', 'America/Fortaleza',
......
However, if you do it this way you cannot use the function in some scenarios, due to documented restrictions.
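The same lookup-table idea is straightforward client-side as well; a hedged Python sketch (an abridged subset of the state-to-zone mapping from the function above, reproducing the DF test case):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Brazilian state -> IANA timezone, as in the Kusto function above (abridged)
STATE_TZ = {
    **dict.fromkeys(['GO', 'DF', 'MG', 'ES', 'RJ', 'SP', 'PR', 'SC', 'RS'], 'America/Sao_Paulo'),
    **dict.fromkeys(['MA', 'PI', 'CE', 'RN', 'PB'], 'America/Fortaleza'),
    'BA': 'America/Bahia', 'AM': 'America/Manaus', 'AC': 'America/Rio_Branco',
}

def to_local(state: str, dt_utc: datetime) -> str:
    """Return the ISO local time for a UTC instant, or '' for an unknown state."""
    tz = STATE_TZ.get(state)
    if tz is None:
        return ''
    return dt_utc.replace(tzinfo=timezone.utc).astimezone(ZoneInfo(tz)).isoformat()

print(to_local('DF', datetime(2019, 2, 17, 2, 0)))  # 2019-02-16T23:00:00-03:00
```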

String with newline special characters is not working as I expected

I just created a string such as
str = "TZ=Europe/Berlin\n* * 1-5\n0 5 * * *\n"
I expected that to produce
TZ=Europe/Berlin
* * 1-5
0 5 * * *
in a Jenkins cron spec, but it was not working. Any solutions?
Your cron expression doesn't appear valid. The validation error is "Day of month values must be between 1 and 31". Check it here: https://www.freeformatter.com/cron-expression-generator-quartz.html
Also check out https://plugins.jenkins.io/parameterized-scheduler/ for Jenkins specific help.
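A quick way to see the problem is to count fields per line; a standard cron schedule line has five (minute, hour, day-of-month, month, day-of-week). A rough, Jenkins-agnostic Python sketch:

```python
# Each non-TZ line of a Jenkins cron spec should have 5 fields:
# minute hour day-of-month month day-of-week
spec = "TZ=Europe/Berlin\n* * 1-5\n0 5 * * *\n"

for line in spec.splitlines():
    if line.startswith("TZ="):
        continue  # the timezone line is not a schedule line
    fields = line.split()
    status = "ok" if len(fields) == 5 else f"invalid ({len(fields)} fields)"
    print(f"{line!r}: {status}")
# '* * 1-5' has only 3 fields, so it cannot be parsed as a valid schedule line
```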

How to join records in Easytrieve internal SORT?

I have a requirement where I need to extract two types of records from a single input file and join them for EZT report processing.
Currently, I've written an ICETOOL step to perform the extraction followed by the join. The output of the ICETOOL step is fed to the Easytrieve report step.
Extraction card is as below -
SORT FIELDS=(14,07,PD,A)
OUTFILE FNAMES=FILE010,INCLUDE=(25,03,CH,EQ,C'010')
OUTFILE FNAMES=FILE011,INCLUDE=(25,04,CH,EQ,C'011')
OPTION DYNALLOC=(SYSDA,05)
Here is the join card -
SORT FIELDS=(14,07,PD,A)
JOINKEYS F1=FILE010,FIELDS=(14,07,A),SORTED,NOSEQCHK
JOINKEYS F2=FILE011,FIELDS=(14,07,A),SORTED,NOSEQCHK
REFORMAT FIELDS=(F1:14,07,
                 F2:25,10)
OUTREC BUILD=(1,17,80:X),VTOF
OPTION DYNALLOC=(SYSDA,05)
I'm wondering if it is possible to perform the above SORT/ICETOOL operations within Easytrieve. I've used Easytrieve's internal SORT, but only for simple extractions. Can the join operation be performed within Easytrieve?
Note - The idea is to have a single EZT step.
You can make use of the Synchronized File Processing (SFP) facility in Easytrieve to achieve this. Read more about it here.
FILE FILE010
KEY1 14 7 N
*
FILE FILE011
KEY2 14 7 N
FIELD1 25 10 A
*
FILE OUTFILE FB(80 0)
OKEY 1 7 N
OFIELD 8 10 A
*
WS-COUNT W 5 N VALUE 0
*
JOB INPUT FILE010 KEY KEY1 FILE011 KEY KEY2 FINISH(DIS)
*
IF EOF FILE010
STOP
END-IF
*
IF MATCHED
OKEY = KEY1
OFIELD = FIELD1
WS-COUNT = WS-COUNT + 1
PUT OUTFILE
END-IF
*
DIS. PROC
DISPLAY 'RECORDS WRITTEN: ' WS-COUNT
END-PROC
Please note:
- The above code isn't tested; it's just a draft showing the idea of file matching using Easytrieve to achieve the task.
- Data types for the data items are assumed; you may have to change them suitably.
- You may have to define the input datasets in the FILE statements.
- You may add more statements within the IF MATCHED condition to build the report.
Hope this helps!
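The matched-key logic that SFP performs can be illustrated with an ordinary keyed join in Python; a rough sketch of the idea only (not of Easytrieve itself), with invented keys and payloads:

```python
# Two record sets keyed on the same field (like FILE010/FILE011 keyed at 14,7)
file010 = {"0000001": "A", "0000002": "B", "0000004": "D"}          # key -> payload
file011 = {"0000001": "field-1", "0000003": "x", "0000004": "field-4"}

# Keep only keys present in BOTH files (the IF MATCHED branch),
# taking the output field from the second file, as the EZT draft does
matched = {k: file011[k] for k in sorted(file010) if k in file011}
for key, field in matched.items():
    print(f"{key} {field}")
print(f"RECORDS WRITTEN: {len(matched)}")
```

SFP reads both files in key order and flags each cycle as matched or unmatched; the dictionary intersection above is the same result for the matched case.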

Records of each version in list

I have a list of versions
1.0.0.1 - 10
1.1.0.1 - 10
1.2.0.1 - 10
That is 30 entries in my list, but I only want to show the 5 highest numbers of each kind:
1.0.0.5 - 10
1.1.0.5 - 10
1.2.0.5 - 10
How can I do that? The last number can be anything, but the first three parts are only:
1.0.0
1.1.0
1.2.0
CODE:
import groovy.json.JsonSlurperClassic
def data = new URL("http://xxxx.se:8081/service/rest/beta/components?repository=Releases").getText()
/**
* 'jsonString' is the input json you have shown
* parse it and store it in collection
*/
Map convertedJSONMap = new JsonSlurperClassic().parseText(data)
def list = convertedJSONMap.items.version
list
Version numbers alone usually don't sort well as strings, so I'd split them into numbers and work from there. E.g.
def versions = [
"1.0.0.12", "1.1.0.42", "1.2.0.666",
"1.0.0.6", "1.1.0.77", "1.2.0.8",
"1.0.0.23", "1.1.0.5", "1.2.0.5",
]
println(
versions.collect{
it.split(/\./)*.toInteger() // turn into array of integers
}.groupBy{
it.take(2) // group by the first two numbers
}.collect{ _, vs ->
vs.sort().last() // sort the arrays and take the last
}*.join(".") // piece the numbers back together
)
// => [1.0.0.23, 1.1.0.77, 1.2.0.666]
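If you actually need the five highest of each kind (not just the single highest, which is what the Groovy snippet returns), the same group-and-sort idea works. A Python sketch with made-up version strings covering the three prefixes from the question:

```python
from itertools import groupby

# 1.0.0.1-10, 1.1.0.1-10, 1.2.0.1-10, as in the question
versions = [f"1.{minor}.0.{patch}" for minor in range(3) for patch in range(1, 11)]

def as_ints(v):  # "1.0.0.7" -> (1, 0, 0, 7), so comparisons are numeric
    return tuple(int(p) for p in v.split("."))

top5 = {}
# groupby needs its input sorted by the grouping key; sorting by the full
# tuple sorts by the 3-part prefix first, so that is satisfied
for prefix, group in groupby(sorted(versions, key=as_ints), key=lambda v: as_ints(v)[:3]):
    top5[prefix] = [".".join(map(str, t)) for t in sorted(map(as_ints, group))[-5:]]

for prefix, vs in top5.items():
    print(prefix, vs)
# (1, 0, 0) ['1.0.0.6', '1.0.0.7', '1.0.0.8', '1.0.0.9', '1.0.0.10'] ... etc.
```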
