An error in Report Studio when I try to aggregate a field - Cognos

I have Cognos on Windows Server connecting to Postgres via the PostgreSQL ODBC driver.
I created a report in Report Studio. Whenever I try adding a numeric field to the report, I get this error:
RQP-DEF-0177 An error occurred while performing operation 'sqlPrepareWithOptions' status='-9'.
UDA-SQL-0107 A general exception has occurred during the operation "prepare".
No query has been executed with that handle
If I change the field's Aggregate Function to 'None', everything works fine.
Any ideas, anyone?

Do all measures fail aggregation? Which specific data types are involved? Are they correctly seen as those data types from within Framework Manager?
I would generate the SQL from the report without the aggregation, edit it to add the aggregation, and run it directly against the database to rule out a data overflow or similar issue. [You will not be able to generate the SQL with the aggregation in place.]
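For example, one quick way to run the edited SQL directly against Postgres is a small Node script (a minimal sketch, assuming the pg client; the table and column names are placeholders, not from the actual report):

import { Client } from "pg";

// Run the report's generated SQL with the aggregate added back by hand,
// directly against Postgres, to rule out overflow or driver issues.
async function main() {
  const client = new Client({ connectionString: process.env.PG_URL });
  await client.connect();
  const result = await client.query(
    "SELECT group_col, SUM(numeric_col) AS total FROM fact_table GROUP BY group_col"
  );
  console.log(result.rows);
  await client.end();
}

main().catch(console.error);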
If not that, my next guess would be the driver itself. What version of Cognos? What OS? 32- or 64-bit for each? What version of Postgres?

Related

[node-odbc] Error allocating or reallocating memory when fetching data. No ODBC error information available

My system
Database Version: 6.1.0
Database Name: Sybase
Node.js Version: 12.18.3
node-odbc package Version: 2.4.1
Node.js OS: Windows 10 Pro
The bug
Running the same query, which returns a lot of data, many times in a row, the odbc package returns the following error: "[odbc]Error allocating or reallocating memory when fetching data. No ODBC error information available".
The heap fills up with each query, despite the result being returned in the service response and the variable being cleared.
If I run the query with the parameter (cursor=true) and close the cursor without fetching data, there are no problems.
The same problem occurs when I have longvarchar fields in the tables.
Although I close the connections, the DB still shows them as open.
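The cursor workaround mentioned above would look roughly like this (a sketch, assuming node-odbc's 2.4.x cursor API; the DSN and query are placeholders):

import odbc from "odbc";

// Request a cursor and close it without fetching; with this pattern the
// memory growth described above does not occur.
const connection = await odbc.connect("DSN=MyDsn"); // placeholder DSN
const cursor = await connection.query("SELECT * FROM wide_table", {
  cursor: true,
  fetchSize: 100, // illustrative value
});
await cursor.close(); // close without fetching any rows
await connection.close();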
Expected behavior
I do not understand why the system goes out of memory despite the fact that the queries are executed one after the other. Running the query once does not cause the problem.
To Reproduce
Prepare a table with 22 fields (2 of them varchar(32767)) and 5633 records.
Call the service 15 times by pressing a key; each call opens a connection, executes the SELECT query on the table, closes the connection, and returns the result.
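In code, the reproduction loop is roughly the following (a sketch, assuming node-odbc's promise API; the DSN and table name are placeholders):

import odbc from "odbc";

// Open a connection, run the wide SELECT, close, and repeat. Heap usage
// grows on every iteration until the allocation error is thrown.
async function run() {
  for (let i = 0; i < 15; i++) {
    const connection = await odbc.connect("DSN=MyDsn"); // placeholder DSN
    const rows = await connection.query("SELECT * FROM wide_table"); // 22 columns, 5633 records
    console.log(`call ${i + 1}: ${rows.length} rows`);
    await connection.close();
  }
}

run().catch(console.error);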
EDIT 27/01/2023 ⇒ the 2.4.7 release solves the problem
node-odbc v2.4.7 => https://github.com/markdirish/node-odbc/releases/tag/v2.4.7
We ran into the same issue here.
If we're talking about the same project ⇒ https://github.com/markdirish/node-odbc
This is a known issue related to a memory leak during SELECT queries:
https://github.com/markdirish/node-odbc/issues/304
It seems to appear with multiple drivers: SQLite, HFSQL, etc.
The memory is never freed, and eventually your program hits the heap limit, which causes it to exit.
A pull request is in progress to solve this ⇒ https://github.com/markdirish/node-odbc/pull/306
If you need to read data in the meantime, I can suggest an alternative package, pending a fix:
https://github.com/wankdanker/node-odbc
A bit difficult to launch, but works like a charm afterward.
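For reference, opening and querying with that older package looks roughly like this (a sketch based on my reading of its README, so treat the API as an assumption; the DSN and table name are placeholders):

// Legacy callback-style API from wankdanker/node-odbc; note it is not
// interchangeable with the promise API of markdirish/node-odbc.
const db = require("odbc")();

db.open("DSN=MyDsn", (err: Error | null) => {
  if (err) throw err;
  db.query("SELECT * FROM wide_table", (err: Error | null, rows: unknown[]) => {
    if (err) throw err;
    console.log(rows.length);
    db.close(() => {});
  });
});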

Entity Framework Core 2.1 - Getting StackOverflowException when executing queries with many columns

Since updating to Entity Framework Core 2.1 (currently on 2.1.1), I have run into a number of queries that now throw a StackOverflowException. These queries worked in v2.0.x. It only happens on larger queries - 50+ columns. Most of the queries aren't complex - they are all a single select statement with 10-15 joins and a filter or two.
An unhandled exception of type 'System.StackOverflowException' occurred in Microsoft.EntityFrameworkCore.Relational.dll
This is all I get. I am unable to get any stack trace or other helpful information.
I have a DebugLoggerProvider set up on the DbContext and it appears that the query is compiled and executed just fine:
Microsoft.EntityFrameworkCore.Database.Command: Information: Executed DbCommand (162ms) [Parameters=[#__Id_0='1'], CommandType='Text', CommandTimeout='30']
Is there some issue materializing the results? The object I'm projecting into has no self-references, just simple properties. I can run the exact same query perfectly fine in LINQPad. I only see the StackOverflowException when running in my ASP.NET MVC (not Core) website.
I can comment out a chunk of property projections and get the query to work, but which properties I remove does not seem to matter. It's as if there is some column limit, yet it works fine in LINQPad and worked fine before.
Any ideas on what is happening or even how I could get more information?

Unable to process multiple tables with an ODBC connection in SSAS Tabular 2017

I'm currently building a cube in SSAS Tabular with compatibility level 1400 (on an Azure workspace server) and here is my problem. I have an ODBC connection to source my cube, and I have to use a connection string and a SQL query for each table I need (the connection string is always the same and the SQL query is always different).
When I have my first table (and only one table), I can build, process, and deploy easily without any problem. But when I add a new table, I can't process anymore. I get this kind of message for both tables: Failed to save modifications to the server. Error returned: 'Column' column does not exist in the rowset.
I think the problem comes from the connection string, which is the same for every table. I only have one data source in the end, because I only have one connection string for every table. In my opinion, it might be the cause of my problem, but I'm not sure about that. Any idea?
I hope I made myself clear.
Thanks a lot.
I found the solution to my problem. It was not related to my data source but to the table properties of each table.
Indeed, there was only the connection string in it and not the SQL query. I had to replace it with the correct M language query. It's still a bit strange, because I had to do the same "Get Data" in Power BI to get the right M query and then copy and paste it into the table properties in SSAS. There should be a way to do it automatically, I guess, but I didn't find how.

Multiple N1QL queries in Couchbase with Node.js: out-of-memory condition error

I want to migrate data from MySQL to Couchbase.
I have imported companies with _id=UUID.
Now I want to import other data related to each company; I need the company's _id in the new import. But when I run a N1QL query in a loop to find the related data, it shows the error below.
Error: An unknown N1QL error occured. This is usually related to an
out-of-memory condition
What I am doing:
First I get the other data, then use a for loop to run a N1QL query for each item to get its related data. That is when the error occurs. I am using Node.js.
When I put LIMIT 0,200 it works, but with more than 300 it gives this error.
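Given that LIMIT 0,200 works, one workaround is to page through the related data in batches instead of issuing one unbounded query per company (a sketch, assuming the Couchbase Node SDK 3.x; the bucket, type, and field names are placeholders):

import * as couchbase from "couchbase";

// Fetch related documents in pages of 200 rather than all at once.
async function importRelated(companyId: string) {
  const cluster = await couchbase.connect("couchbase://localhost", {
    username: "user",
    password: "pass",
  });
  const pageSize = 200; // the batch size that worked above
  for (let offset = 0; ; offset += pageSize) {
    const result = await cluster.query(
      "SELECT d.* FROM `mybucket` AS d " +
        "WHERE d.type = 'related' AND d.companyId = $companyId " +
        "LIMIT $pageSize OFFSET $offset",
      { parameters: { companyId, pageSize, offset } }
    );
    if (result.rows.length === 0) break;
    // ...import result.rows into the new documents here...
  }
  await cluster.close();
}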
Share the exact N1QL query, sample document, and your code.
Can you check what errors/warnings you have in query.log? Also, provide your h/w setup and Couchbase cluster setup details. How many documents are in the bucket, and what is the average size of the docs?
-Prasad

Weird issue with Azure SQL Database v12: the database is always slow on the first insert or delete execution, but not with v11

We are using MVC4, ASP.NET 4.5, Entity Framework 6.
When we used Azure SQL Database v11, initial record inserts and deletes via EF worked fine and quickly. However now, on v12, I notice that initial inserts and deletes can be very slow, especially if we choose a new value when inserting. If we insert a new record with the same value, the response is rapid. The delay I am talking about can be about 30 secs on S1, 15 secs on S2, and 7 secs on S3.
As I say, we never encountered this on v11.
Any ideas gratefully received.
EDIT1
Just been doing some diagnostics, and it seems that a view that I was using now runs very slowly the first time:
db.ExecuteStoreCommand("DELETE FROM Vw_Widget where Id={0}", ID);
Do I need to rejig views in any way for Azure SQL Database v12?
EDIT2
Looking at the code a little more, I see that I have added a delete trigger to the view; basically, I have set up a view so I can use this trigger code in certain situations. I am now trying to take the trigger code out and run it from the app, which does run a lot quicker. Perhaps this code should be a stored procedure.
You definitely need to do some diagnostics on your view to check the performance of your query, and you may need to tune it. The times you mention are far too high for any such operation. Please make sure to do inserts or deletes on your target tables and not on views. The best practice is not to use views for inserts or deletes.
You can use views only in select statements.
I had a similar problem when migrating a SQL database from v2 to v12. I was actually working with the Business model and tried to migrate to S0, and the performance of the DB was not good. After some time I discovered that the DTU model has particular views to monitor what type of provisioning model you need. If the problem only occurs the first time, your application is probably making a lot of queries to load data into memory, and these can affect the performance of your CRUD statements.
SELECT end_time,
       (SELECT MAX(v)
          FROM (VALUES (avg_cpu_percent),
                       (avg_data_io_percent),
                       (avg_log_write_percent)) AS value(v)) AS [avg_DTU_percent]
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
More information about that can be found on this page:
https://azure.microsoft.com/en-us/documentation/articles/sql-database-upgrade-server-portal/
