How to UNION in SuiteQL? - netsuite

I'm trying to union two queries together. I copied and pasted this from the advanced queries section of the documentation but keep getting a 500 error (shown below).
SELECT TOP 1 id FROM transaction UNION SELECT TOP 1 id FROM transaction
Why doesn't this query work?
{
"type": "https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.1",
"title": "Internal Server Error",
"status": 500,
"o:errorDetails": [
{
"detail": "An unexpected error occurred. Error ID: ld3pklv4n4wk140q60is",
"o:errorCode": "UNEXPECTED_ERROR"
}
]
}

For reasons the documentation doesn't explain, the solution is to wrap the union in parentheses and select from it as a derived table.
SELECT * FROM (
SELECT TOP 1 id FROM transaction
UNION ALL
SELECT TOP 1 id FROM transaction
)
CREDIT: I found this example buried in an archived Slack thread.
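For anyone hitting this through the REST API, here's a minimal sketch of sending the wrapped query to the SuiteQL endpoint. The account host is a placeholder and the OAuth 1.0a auth headers are omitted; the endpoint path and the Prefer: transient header come from NetSuite's REST docs, while the helper names are my own.

```javascript
// Sketch: wrapping a UNION query for NetSuite's SuiteQL REST endpoint.
// Auth headers (OAuth 1.0a) are omitted; host is a placeholder.

function wrapUnion(unionQuery) {
  // SuiteQL rejects a bare UNION at the top level, so wrap it in
  // parentheses and select from the derived table.
  return `SELECT * FROM (${unionQuery})`;
}

function buildSuiteQLRequest(accountHost, unionQuery) {
  return {
    url: `https://${accountHost}/services/rest/query/v1/suiteql`,
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json', Prefer: 'transient' },
      body: JSON.stringify({ q: wrapUnion(unionQuery) }),
    },
  };
}

const req = buildSuiteQLRequest(
  'demo.suitetalk.api.netsuite.com',
  'SELECT TOP 1 id FROM transaction UNION ALL SELECT TOP 1 id FROM transaction'
);
console.log(JSON.parse(req.options.body).q);
// → SELECT * FROM (SELECT TOP 1 id FROM transaction UNION ALL SELECT TOP 1 id FROM transaction)
```

Pass the resulting url and options to whatever HTTP client you use, after adding your auth headers.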

Related

Why does this Cosmos SQL query require a subquery?

I'm trying to understand why my query below will only work when using a subquery.
Sample document structure:
{
"id": "78832-fsdfdf-3242",
"type": "Specific",
"title": "JavaScript vs TypeScript",
"summary": "Explain the differences between JavaScript and TypeScript.",
"products": [
"javascript v6",
"typescript v1",
"node.js"
]
}
Query requirements:
Find the id of all documents where the terms 'javascript' or 'csharp' or 'coding' are contained in either the title, the summary, or one of the listed products.
To solve this, I'm using CONTAINS(). To avoid repeating the CONTAINS() for each combination of field and search term, I create a concatenation of the fields in question and name it searchField.
Working query
This is the query I came up with. It's using a subquery sub to add the concatenated fields and products to the result set. Then, I can use CONTAINS() on sub.searchField.
SELECT sub.id
FROM
(
SELECT
o.id,
o.type,
CONCAT(o.title, " ", o.summary, " ", p) as searchField
FROM o
JOIN p in o.products
) sub
WHERE
sub.type = "Specific"
AND
(
CONTAINS(sub.searchField, "javascript", true)
OR CONTAINS(sub.searchField, "csharp", true)
OR CONTAINS(sub.searchField, "coding", true)
)
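To make the semantics of the working query concrete, here's a plain-JavaScript sketch of the same matching logic, assuming the sample document shape above (the function names are mine, not part of any Cosmos API):

```javascript
// Plain-JavaScript sketch of what the working Cosmos query computes.
// The JOIN fans each document out into (document, product) pairs; for
// each pair the query builds CONCAT(title, " ", summary, " ", product)
// and runs a case-insensitive CONTAINS for each term. A document
// matches if any pair/term combination matches.

const terms = ['javascript', 'csharp', 'coding'];

function matches(doc) {
  if (doc.type !== 'Specific') return false;
  return doc.products.some((p) => {
    const searchField = `${doc.title} ${doc.summary} ${p}`.toLowerCase();
    return terms.some((t) => searchField.includes(t.toLowerCase()));
  });
}

const sample = {
  id: '78832-fsdfdf-3242',
  type: 'Specific',
  title: 'JavaScript vs TypeScript',
  summary: 'Explain the differences between JavaScript and TypeScript.',
  products: ['javascript v6', 'typescript v1', 'node.js'],
};

console.log(matches(sample)); // → true ('javascript' appears in the title)
```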
Non-working query
Originally, I had the query written as seen below. I expected it to work as in other SQL dialects, but I cannot access searchField in the WHERE clause.
"Error: Identifier 'searchField' could not be resolved."
SELECT o.id, CONCAT(o.title, " ", o.summary, " ", p) as searchField
FROM o
WHERE
o.type = "Specific"
AND
(
CONTAINS(searchField, "javascript", true)
OR CONTAINS(searchField, "csharp", true)
OR CONTAINS(searchField, "coding", true)
)
Questions
Is there a better way to achieve the result needed? (Although, surprisingly, the query consumes only 230 RUs)
Why is the subquery needed? I really want to understand this so I can learn when to use subqueries and potentially write queries that would otherwise not be possible.

Maximum Event Hub receivers exceeded. Only 5 receivers per partition are allowed

I have an Azure Stream Analytics job that receives some raw events, transforms them, and then writes them to several different outputs. I get the following error:
Maximum Event Hub receivers exceeded. Only 5 receivers per partition are allowed.
Please use dedicated consumer group(s) for this input. If there are multiple queries using same input, share your input using WITH clause.
This is weird, because I use a common table expression (the WITH clause) at the beginning to get all the data, and then I don't access the event hub anymore. Here is the query:
WITH
ODSMeasurements AS (
SELECT
collectionTimestamp,
analogValues,
digitalValues,
type,
translationTable
FROM EventhubODSMeasurements
),
-- Combine analog and digital measurements
CombineAnalogAndDigital AS (
SELECT
CAST(CONCAT(SUBSTRING(ODS.collectionTimestamp, 1, 10), ' ', SUBSTRING(ODS.collectionTimestamp, 12, 12)) AS datetime) AS "TimeStamp",
ROUND(AV.PropertyValue.value / (CAST(TT.ConversionFactor AS float)), 5) AS "ValueNumber",
NULL AS "ValueBit",
CAST(TT.MeasurementTypeId AS bigint) AS "MeasurementTypeId",
TT.MeasurementTypeName AS "MeasurementName",
TT.PartName AS "PartName",
CAST(TT.ElementId AS bigint) AS "ElementId",
TT.ElementName AS "ElementName",
TT.ObjectName AS "ObjectName",
TT.LocationName AS "LocationName",
CAST(TT.TranslationTableId AS bigint) AS "TranslationTableId",
ODS.Type AS "Status"
FROM ODSMeasurements ODS
CROSS APPLY GetRecordProperties(analogValues) AS AV
INNER JOIN SQLTranslationTable TT
ON
TT.Tag = AV.PropertyName AND
CAST(TT.Version as bigint) = ODS.translationTable.version AND
TT.Name = ODS.translationTable.name
UNION
SELECT
CAST(CONCAT(SUBSTRING(ODS.collectionTimestamp, 1, 10), ' ', SUBSTRING(ODS.collectionTimestamp, 12, 12)) AS datetime) AS "TimeStamp",
CAST(-9999.00000 AS float) AS "ValueNumber",
CAST(DV.PropertyValue.value AS nvarchar(max)) AS "ValueBit",
CAST(TT.MeasurementTypeId AS bigint) AS "MeasurementTypeId",
TT.MeasurementTypeName AS "MeasurementName",
TT.PartName AS "PartName",
CAST(TT.ElementId AS bigint) AS "ElementId",
TT.ElementName AS "ElementName",
TT.ObjectName AS "ObjectName",
TT.LocationName AS "LocationName",
CAST(TT.TranslationTableId AS bigint) AS "TranslationTableId",
ODS.Type AS "Status"
FROM ODSMeasurements ODS
CROSS APPLY GetRecordProperties(digitalValues) AS DV
INNER JOIN SQLTranslationTable TT
ON
TT.Tag = DV.PropertyName AND
CAST(TT.Version as bigint) = ODS.translationTable.version AND
TT.Name = ODS.translationTable.name
)
-- Output data
SELECT *
INTO DatalakeHarmonizedMeasurements
FROM CombineAnalogAndDigital
PARTITION BY TranslationTableId
SELECT *
INTO FunctionsHarmonizedMeasurements
FROM CombineAnalogAndDigital
SELECT Timestamp, ValueNumber, CAST(ValueBit AS bit) AS ValueBit, ElementId, MeasurementTypeId, CAST(TranslationTableId AS bigint) AS TranslationTableId
INTO SQLRealtimeMeasurements
FROM CombineAnalogAndDigital
SELECT *
INTO EventHubHarmonizedMeasurements
FROM CombineAnalogAndDigital
PARTITION BY TranslationTableId
And this is the event hub input that I use:
{
"Name": "EventhubODSMeasurements",
"Type": "Data Stream",
"DataSourceType": "Event Hub",
"EventHubProperties": {
"ServiceBusNamespace": "xxx",
"EventHubName": "xxx",
"SharedAccessPolicyName": "xxx",
"SharedAccessPolicyKey": null,
"ConsumerGroupName": "streamanalytics",
"AuthenticationMode": "ConnectionString"
},
"DataSourceCredentialDomain": "xxx",
"Serialization": {
"Type": "Json",
"Encoding": "UTF8"
},
"PartitionKey": null,
"CompressionType": "None",
"ScriptType": "Input"
}
I use a separate consumer group for this as well. As far as I can see, I'm doing everything right. Does anyone know what's up?
Edit: I enabled diagnostic logs, and they say this:
Exceeded the maximum number of allowed receivers per partition in a
consumer group which is 5. List of connected receivers - nil, nil,
nil, nil, nil.
Turns out the issue was PEBKAC: there was another job that was accidentally pointed at the same input Event Hub.

Node.js/Sequelize findAndCountAll with offset and limit doesn't work when it contains "include" and "where: array[]" options

I'm trying to fetch paginated messages from a database given the ids of different chats. It works if I do not provide limit and offset, but when I provide the limit and offset parameters, it stops working. I'm using MariaDB.
Message.findAndCountAll({
  where: { chat_id: ids }, // ids => array of ints
  offset: limit * page,
  limit: limit,
  include: {
    model: UnreadMessage, as: 'unreadMessages',
    where: { participant_id: userId }
  }
})
The error I see is this
"(conn=12896, no: 1064, SQLState: 42000) You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near ''20') AS `messages` INNER JOIN `unread_message` AS `unreadMessages` ON `messa...' at line 1\nsql: SELECT `messages`.*, `unreadMessages`.`message_id` AS `unreadMessages.message_id`, `unreadMessages`.`participant_id` AS `unreadMessages.participant_id` FROM (SELECT `messages`.`id`, `messages`.`chat_id`, `messages`.`content`, `messages`.`sender_id`, `messages`.`created_at` FROM `messages` AS `messages` WHERE `messages`.`chat_id` IN (3, 5) AND ( SELECT `message_id` FROM `unread_message` AS `unreadMessages` WHERE (`unreadMessages`.`participant_id` = 10 AND `unreadMessages`.`message_id` = `messages`.`id`) LIMIT 1 ) IS NOT NULL LIMIT 0, '20') AS `messages` INNER JOIN `unread_message` AS `unreadMessages` ON `messages`.`id` = `unreadMessages`.`message_id` AND `unreadMessages`.`participant_id` = 10; - parameters:[]"
My first suspicion on seeing the error turned out to be right. The error says it all:
...right syntax to use near ''20') AS `mess....
limit is a string, so the generated SQL ends up with LIMIT 0, '20'. Cast it to a number, e.g. with +limit.
If I'm right, you're passing it straight from the request without casting it to an integer.
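A minimal sketch of that fix, assuming the values arrive as strings on the request's query object (the parameter names and defaults here are my assumptions):

```javascript
// Sketch: coerce pagination query params to integers before handing
// them to Sequelize, so the generated SQL gets LIMIT 0, 20 rather
// than the invalid LIMIT 0, '20'.

function parsePagination(query, defaults = { limit: 20, page: 0 }) {
  const rawLimit = Number.parseInt(query.limit, 10);
  const rawPage = Number.parseInt(query.page, 10);
  const limit = Number.isNaN(rawLimit) ? defaults.limit : rawLimit;
  const page = Number.isNaN(rawPage) ? defaults.page : rawPage;
  return { limit, offset: page * limit };
}

const { limit, offset } = parsePagination({ limit: '20', page: '2' });
console.log(limit, offset); // → 20 40
```

You would then pass limit and offset into findAndCountAll instead of the raw request values.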

Azure Log Analytics - Join Fails - Inconsistent datatypes

I'm very new to Azure and the query language.
I've created a few alerts and queries that seem to work, but now I'm trying to build an hourly sweep that looks for specific HTTP errors and displays them with a description as well as the code.
I used a join against a datatable, but when I run the query it fails on the join:
Inconsistent data types for the join keys (responseCode_d, responseCode_d) (R64, I32)
responseCode_d is numeric according to the Azure Analytics query schema, and I declare it as an integer in my datatable.
responseCode_d schema
Below is the query.
let codes = datatable(responseCode_d:int, description:string)
[ 400, "Endpoint - Not found",
500, "Internal server error",
415, "Unsupported Media"
];
AzureDiagnostics
| join kind = inner
codes on responseCode_d
| where responseCode_d == 500 or responseCode_d == 415 or responseCode_d == 400
| where TimeGenerated >= ago(1h)
| summarize count(responseCode_d) by description
The error message gives a hint: (R64, I32).
There is a mismatch between the table column's type (R64, a 64-bit real) and the type you specified in your datatable, responseCode_d:int (I32, a 32-bit integer).
Change datatable(responseCode_d:int, description:string)
to
datatable(responseCode_d:double, description:string)

DocumentDb "where" clause with mathematical expression

I would like to understand how to create query where clauses on DocumentDB with mathematical comparator inside.
For example, I used a demonstrator to understand how to write a "greater than" comparison: the expression AND food.version > 0 seems to work very well.
Below is what I tried in the portal.azure.com DocumentDB Query Explorer, along with the results. I don't understand why I get an error in some cases (QUERY3), and, as a side question, how do I get error details on portal.azure.com?
Tested:
>>> QUERY1 >>
SELECT d.id,
d.name,
d.lastUpdateTime
FROM d
>>> RESULT1 >>
[
{
"id": "558d6007b909e8dfb2286e7b",
"name": "cSimpleSIMS_ici",
"lastUpdateTime": 1435589982672
},
{
"id": "558d6009b909e8df18296e7b",
"name": "didier",
"lastUpdateTime": 1435330811285
},
{
"id": "558d600ab909e8df28296e7b",
"name": "cDoubleSIMD_ici",
"lastUpdateTime": 1435331176750
},
{
"id": "558d600bb909e8df55296e7b",
"name": "george",
"lastUpdateTime": 1435330813519
}
(...)
]
>>> QUERY2 >>
SELECT d.id,
d.name,
d.lastUpdateTime
FROM d
WHERE (d.name='george')
>>> RESULT2 >>
[
{
"id": "558d600bb909e8df55296e7b",
"name": "george",
"lastUpdateTime": 1435330813519
}
]
>>> QUERY3 >>
SELECT d.id,
d.name,
d.lastUpdateTime
FROM d
WHERE (d.lastUpdateTime > 14)
>>> RESULT3 IN ERROR!
>>> QUERY4 >>
SELECT d.id,
d.name,
d.lastUpdateTime
FROM d
WHERE (d.name='george' AND d.lastUpdateTime > 14)
>>> RESULT4 >>
[
{
"id": "558d600bb909e8df55296e7b",
"name": "george",
"lastUpdateTime": 1435330813519
}
]
>>> QUERY5 >>
SELECT d.id,
d.name,
d.lastUpdateTime
FROM d
WHERE (d.name='george' AND d.lastUpdateTime > 1435330813519)
>>> RESULT5 >>
[]
Here's the gist...
Today, all JSON properties in DocumentDB are automatically indexed with a hash index, which means queries with equality operators (e.g. WHERE d.name = "george") are extremely fast.
On the other hand, range queries (e.g. WHERE d.lastUpdateTime > 14) require a range index to operate efficiently. Without a range index, the range query will require a scan across all documents (which we allow if the header, x-ms-documentdb-query-enable-scan, is passed in by the request).
The queries you issued that had both an equality and a range filter (e.g. WHERE d.name='george' AND d.lastUpdateTime > 14) succeeded because the equality filter greatly narrowed down the set of documents to scan through.
TL;DR: There are two things you can do here to get rid of the error:
Create a custom index policy to add a range index for numeric types. The documentation for indexing policies can be found here.
Issue your query programmatically (not through the Azure Portal) to set the x-ms-documentdb-query-enable-scan header to allow scans on range queries.
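For option 2, here's a minimal sketch of building the raw REST request with the scan header set. The account host and collection link are placeholders, the auth signature headers are omitted, and the helper name is mine; the header names and content type come from the DocumentDB REST API.

```javascript
// Sketch: issuing a query through the DocumentDB REST API with the
// scan header set. Authorization/x-ms-date headers are omitted here.

function buildScanQueryRequest(accountHost, collectionLink, sql) {
  return {
    url: `https://${accountHost}/${collectionLink}/docs`,
    method: 'POST',
    headers: {
      'Content-Type': 'application/query+json',
      'x-ms-documentdb-isquery': 'true',
      // This is the header the answer refers to: it lets the query
      // fall back to a scan when no range index covers the filter.
      'x-ms-documentdb-query-enable-scan': 'true',
    },
    body: JSON.stringify({ query: sql, parameters: [] }),
  };
}

const request = buildScanQueryRequest(
  '<account>.documents.azure.com',
  'dbs/<db>/colls/<coll>',
  'SELECT d.id FROM d WHERE d.lastUpdateTime > 14'
);
console.log(request.headers['x-ms-documentdb-query-enable-scan']); // prints "true"
```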
P.S. There appear to be a few issues in the Azure Portal here, which I will push to get fixed for you.
Bug: Exception message is truncated
Looks like the meaningful part of the exception message gets truncated when using the Azure Portal - which is no bueno. What SHOULD have been displayed is:
Microsoft.Azure.Documents.DocumentClientException: Message: {"Errors":["An invalid query has been specified with filters against path(s) that are not range-indexed. Consider adding allow scan header in the request."]}
Missing Feature: Enabling scans in query explorer
The ability to set the x-ms-documentdb-query-enable-scan header is currently not exposed in the Azure Portal's Query Explorer. We will add a checkbox or something for this.
To add to aliuy's answer, we're working on a change that will improve the developer experience here - Default indexing policy for numbers will be changed from Hash to Range index, so you do not need the header or override indexing policy in order to perform range queries.