Greenplum database "relation does not exist" - database-administration

I am getting a "relation does not exist" error while trying to truncate a particular table. The table actually exists in the database.
Also, when I click on this table in pgAdmin I get a warning about vacuum.
Are these things related?
------ Adding a few more details ------
The truncate statement is called within a Greenplum function. This job truncates and loads the table on a daily basis (the table is queried in reports). The issue pops up once in a while, and if we restart the same job after a few minutes it succeeds.

Please try the following:
select * from schemaname.tablename limit 10;
If you don't use the schema name, then you have to set the search path as below and then run your select:
set search_path = schemaname;
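If the unqualified name turns out to be the cause, a minimal sketch of the fix inside the function would be to schema-qualify the statements rather than rely on the session's search path (schemaname, tablename, and staging_table are placeholder names, not taken from the question):
-- qualify every reference so the function does not depend on search_path
truncate table schemaname.tablename;
insert into schemaname.tablename
select * from schemaname.staging_table;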

Related

Databricks auto merge schema

Does anyone know how to resolve this error?
I have put the following before my merge, but it seems to not like it.
%sql set spark.databricks.delta.schema.autoMerge.enabled = true
Also, the reason for putting this in was because my notebook was failing on schema changes to a Delta Lake table. I have an additional column on one of the tables I am loading into. I thought that Databricks was able to auto-merge schema changes.
The code works fine in my environment. I'm using Databricks Runtime 10.4.
TL;DR: add a semicolon to the end of the separate SQL statements:
set spark.databricks.delta.schema.autoMerge.enabled = true;
The error is actually a more generic SQL error; the IllegalArgumentException is a clue - though not a very helpful one :)
I was able to reproduce your error:
set spark.databricks.delta.schema.autoMerge.enabled = true
INSERT INTO records SELECT * FROM students
gives: Error in SQL statement: IllegalArgumentException: spark.databricks.delta.schema.autoMerge.enabled should be boolean, but was true
and was able to fix it by adding a ; to the end of the first line:
set spark.databricks.delta.schema.autoMerge.enabled = true;
INSERT INTO records SELECT * FROM students
succeeds.
Alternatively you could run the set in a different cell.
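Applied back to the original MERGE scenario, a hedged sketch would look like the following (target and updates are hypothetical table names, and the join key id is assumed):
%sql
set spark.databricks.delta.schema.autoMerge.enabled = true;
MERGE INTO target AS t
USING updates AS u
ON t.id = u.id
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *;
With autoMerge enabled, an extra column in the source table is added to the target schema instead of failing the merge.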

Oracle select not returning all columns under sqlplus

I have the below SQL select :
select TST_CODE ||'|'||UTI_CODE ||'|'||TST_NAME ||'|'||TST_NAME_REDUIT ||'|'||TST_GROUP ||'|'||TST_MET ||'|'||TST_MET_CODE ||'|'||TST_MET_FAMILY ||'|'||TST_MET_CALCUL ||'|'||TNS_STATUS_PAR_NM2 ||'|'||TNS_STATUS_PART_NM1 ||'|'||TNS_STATUS_PART_N ||'|'||STR_CODE ||'|'||FOUR_CODE ||'|'||TST_SIREN ||'|'||MEMO_ASC ||'|'||NAV_FICID
from TEST_TABLE;
When I run it in SQL Developer it returns all the columns of the table.
But when I put the same query in a SQL file, like TEST_TABLE.sql, and run it under sqlplus on Linux, it returns only the first 14 columns, that is, it stops at FOUR_CODE.
Any idea why?
Edited:
After investigation, it is because one of the columns is of data type CLOB. Any idea how to solve this? My TEST_TABLE.sql is dynamically created.
Try enlarging LINESIZE, for example:
SQL> set linesize 200
If it is not enough, enlarge it even more.
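A minimal sketch of what the top of the generated TEST_TABLE.sql might set; note that the LONG setting is an addition here (it controls how many characters of a CLOB SQL*Plus prints), and both sizes are guesses to adjust:
set linesize 200
set long 100000
-- the concatenated select from the question follows, abbreviated here
select TST_CODE ||'|'|| MEMO_ASC ||'|'|| NAV_FICID from TEST_TABLE;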

Selecting rows with a specific column as null gives column does not exist error

I'm using psycopg2, and the query is a simple string.
Q= "SELECT * FROM POST WHERE PUBLISH_TIME IS NULL"
When I execute it in pgAdmin, it gives me the correct results, but psycopg2 throws
psycopg2.ProgrammingError: column "publish_time" does not exist
I tried this solution, but it's still the same error output.
Leaving this answer here in case anyone has the same problem. This cannot be done via psycopg2 due to a design feature. I added a flag to my model that depends on the publish_date column, created a DB script that writes this flag, and went ahead with:
Q= "SELECT * FROM POST WHERE PUBLISH_FLAG = False;"

What am I missing in trying to pass Variables in an SSIS Execute SQL Task?

I am creating an SSIS Execute SQL Task that will use variables, but it is giving me an error when I try to use it. When I try to run the below, it gives me an error, and when I try to build the query, it gives me the error "SQL Syntax Errors encountered and unable to parse query". I am using an OLEDB connection. Am I not able to use variables to specify the tables?
You can't parameterize a table name.
Use the Expressions editor in your Execute SQL Task to select the SqlStatementSource property.
Try "SELECT * FROM " + @[User::TableName]
After clicking OK twice (to exit the Task editor), you should be able to reopen the editor and find your table name in the SQL statement.
Add a string cast in case the variable is a plain Object: "SELECT * FROM " + (DT_WSTR, 100) @[User::TableName]
You are using only a single parameter (?) in the query but assigning 3 inputs to it, which will not work. Either supply only a single input and assign a variable to it, or add one placeholder per input and adjust the variable values accordingly.
The parameter names should start at 0 and increment by 1, because they are the indexes of the "?" placeholders written in the query window.
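As a hedged illustration of that mapping (the table, columns, and variable names are made up), an Execute SQL Task statement with three placeholders pairs with its Parameter Mapping like this:
-- SQLStatement: one ? per mapped input
UPDATE dbo.SampleTable SET Col1 = ?, Col2 = ? WHERE Id = ?;
-- Parameter Mapping (Parameter Name is the zero-based index of each ?):
--   User::Var1 -> 0, User::Var2 -> 1, User::IdVar -> 2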

Syntax error at position 7: unexpected "*" for `Select * FROM mytable;`

I'm writing because I have a problem with Cassandra; after having imported the data from Pentaho as shown here
http://wiki.pentaho.com/display/BAD/Write+Data+To+Cassandra
when I try to execute the query
Select * FROM mytable;
Cassandra gives me an error message:
Syntax error at position 7: unexpected "*" for Select * FROM mytable;.
and doesn't show the results of the query. Why? What does that error mean?
The steps I take are the following:
start the cassandra-cli utility;
use the keyspace added from Pentaho (use tpc_h);
select to show the added data (Select * FROM mytable;)
The cassandra-cli does not support any CQL version. It has its own syntax, which you can find on DataStax's website.
Just for clarity, in CQL, to select everything from a table (aka column family) called mytable stored in a keyspace called myks, you would use:
SELECT * FROM myks.mytable;
The equivalent in cassandra-cli would roughly be:
USE myks;
LIST mytable;
Note that in the cli you are limited to selecting the first 100 rows. If this is a problem, you can use the limit clause to specify how many rows you want:
LIST mytable limit 10000;
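For comparison, CQL also supports a limit clause; a small sketch using the same keyspace and table names as above:
SELECT * FROM myks.mytable LIMIT 10000;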
As for this:
in Cassandra I have read that it isn't possible to do joins as in SQL; isn't there a shortcut to get around this disadvantage?
There is a reason why joins don't exist in Cassandra: it's the same reason C* isn't ACID compliant. It sacrifices that functionality for its performance and scalability, so it's not a disadvantage; you just need to re-think your model if you need joins. Also take a look at this question / answer.
