Cognos 11 with ST_GEOMETRY column type

I am trying to import metadata from a schema that has tables with an ST_GEOMETRY column type. The import throws
ORA-01427: Subquery returns more than one row
Tables without ST_GEOMETRY columns import fine.
Please suggest whether we need some extra setting for importing this type of data.
Regards
Panna Modi

Do one of two things:
Have your data architect design, and your database administrator implement, views over these tables that omit the geometry columns. Alternatively, if you need the geometry in Cognos for some reason, they can convert the geometry to text.
Or, in the query that defines the query subject, specify the columns you want rather than using SELECT *. Again, you can use the appropriate SQL functions to convert the geometry to WKT if needed.
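As a rough sketch of the first option, the views below either hide the geometry column or convert it to WKT text. The table and column names are made up for illustration, and sde.st_astext() assumes the Esri SDE implementation of ST_GEOMETRY in Oracle; use whatever conversion function your spatial library provides.
-- Hide the geometry column entirely (illustrative names only)
CREATE OR REPLACE VIEW parcels_no_geom AS
SELECT parcel_id, parcel_name, area_sq_m
FROM   parcels;

-- Or expose the geometry as WKT text instead
CREATE OR REPLACE VIEW parcels_wkt AS
SELECT parcel_id, parcel_name, area_sq_m,
       sde.st_astext(shape) AS shape_wkt   -- assumes Esri SDE ST_GEOMETRY
FROM   parcels;
Cognos can then import the views instead of the base tables.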

Related

Why Cassandra is called an unstructured database

Why is Cassandra called unstructured even though a table/column family has to be defined with columns and their data types?
For a defined table with some fixed columns, we can choose to fill some columns in one particular row and leave them unfilled in another row. But the same thing can be done in an RDBMS, where we can leave some columns out of the insert statement as long as the omitted columns allow NULL.
Mongo stores data in JSON documents, where we can store different keys every time we insert a new document; we don't need to define anything. But for Cassandra we need to reconfigure our table to accommodate new columns being added.
Some articles cover this, but it is still not clear to me. Can someone pinpoint the reason?
Basically it is not about "how it works" but about how the files are stored; this is why Cassandra has no structure for its files, and you can have the same records in different folders.

How can I insert data from Excel file to Oracle using INSERT INTO SQL statement?

I have created types using Oracle objects and created a table
CREATE OR REPLACE TYPE OttawaAddress_Ty AS OBJECT
(StrtNum NUMBER(9),
Street VARCHAR2(20),
City VARCHAR2(15),
Province CHAR(2),
PostalCode CHAR(7));
/
CREATE OR REPLACE TYPE OttawaOfficesInfo_Ty AS OBJECT
(Name VARCHAR(35),
OfficeID VARCHAR2(2),
Phone VARCHAR2(15),
Fax CHAR(15),
Email CHAR(30));
/
CREATE TABLE OttawaOffices
(OfficeAddress OttawaAddress_Ty,
OfficeInfo OttawaOfficesInfo_Ty,
Longitude_DMS NUMBER (10,7),
Latitude_DMS NUMBER (10,7),
SDO_GEOMETRY MDSYS.SDO_GEOMETRY);
I have an Excel file which holds the data, and I need to import it into this Oracle table using INSERT INTO SQL statements. How can I do this? As you can see, I have a column called SDO_GEOMETRY which will hold the decimal degrees of the records. These decimal degrees are saved in two separate columns in my Excel file.
I am not sure whether I can programmatically insert the values from Excel or whether I need to go through every record and create
INSERT INTO ... VALUES... statements. And if so, how do I add values when I have created types?
Oracle has a really neat feature called External Tables. These look like regular tables from inside the database, so we can execute SELECT statements against them. The trick is that the table's data comes from OS files (hence "external"). We just define the table to have the same structure as the spreadsheet's columns. It doesn't work with the Excel binary format, but it does work for CSV files (so Save As...).
The advantage of external tables is that manipulating data is easy in SQL - it's what it does best - and we don't need to load anything into a staging table. Something like
insert into OttawaOffices
select ottawaaddress_ty(ext.strtnum,ext.street,ext.city,ext.province,ext.postalcode)
, ottawaofficesinfo_ty (ext.name,ext.officeid,ext.phone,ext.fax,ext.email)
, ext.longitude
, ext.latitude
, MDSYS.SDO_GEOMETRY(2001, 4326, MDSYS.SDO_POINT_TYPE(ext.col1, ext.col2, NULL), NULL, NULL) -- 2-D point; SRID 4326 is an assumption
from your_external_table ext
/
The limitation of external tables is the need to get the source file onto the database server and create a database directory object. Some places are funny about this.
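For completeness, the directory object and external table the example selects from might be declared something like this. It is a rough sketch only; the directory path, file name, access parameters, and the extra col1/col2 columns (the two decimal-degree values from the spreadsheet) are assumptions, not taken from the question.
CREATE DIRECTORY ext_data_dir AS '/data/loads';   -- a path on the database server

CREATE TABLE your_external_table (
  strtnum    NUMBER(9),
  street     VARCHAR2(20),
  city       VARCHAR2(15),
  province   CHAR(2),
  postalcode CHAR(7),
  name       VARCHAR2(35),
  officeid   VARCHAR2(2),
  phone      VARCHAR2(15),
  fax        CHAR(15),
  email      CHAR(30),
  longitude  NUMBER(10,7),
  latitude   NUMBER(10,7),
  col1       NUMBER,   -- decimal-degree X from the spreadsheet
  col2       NUMBER    -- decimal-degree Y from the spreadsheet
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY ext_data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('offices.csv')
);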
Anyway, find out more.
I'm not going to pass judgement on declaring table columns as user-defined types. It's usually considered bad practice, but maybe it works in your use case.
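As an aside, if you do end up hand-writing individual statements, the object constructors are used inline in INSERT ... VALUES as well. Every literal value below is made up purely for illustration:
INSERT INTO OttawaOffices VALUES (
  OttawaAddress_Ty(123, 'Laurier Ave', 'Ottawa', 'ON', 'K1A 0A1'),
  OttawaOfficesInfo_Ty('Head Office', '01', '613-555-0100', '613-555-0101', 'info@example.com'),
  -75.6972000,   -- Longitude_DMS
  45.4215000,    -- Latitude_DMS
  MDSYS.SDO_GEOMETRY(2001, 4326, MDSYS.SDO_POINT_TYPE(-75.6972, 45.4215, NULL), NULL, NULL)
);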

full table join in powerpivot

In powerpivot, Related(Othertable[field]) retrieves the associated column from a related table.
I would like to import ALL such columns, doing the equivalent of a join.
Is it possible to do this?
nicolas,
the smartest thing to do, from my perspective, is to merge your queries into one so that you can keep your original tables.
I would suggest using the new Power Query Merge functionality, which is very easy and works reliably (it also supports loading data directly into your PowerPivot data model).
Or you can write your own custom query in PowerPivot - if you use an MSSQL (or any other) database as your source, you can actually use a JOIN directly in the PowerPivot window via the Table Import Wizard, which makes things a bit easier.
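A rough sketch of that second option - the Sales and Products tables and their columns here are hypothetical, so substitute your own schema:
SELECT s.SaleID,
       s.SaleDate,
       s.Amount,
       p.ProductName,
       p.Category
FROM   dbo.Sales    AS s
JOIN   dbo.Products AS p
  ON   p.ProductID = s.ProductID;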
So the answer is: keep your original data tables intact, and create a new one that merges them together just for the purpose of your desired report.
Hope this helps.

Create a Volatile table in teradata

I have a SharePoint list which I have linked to in MS Access.
The information in this table needs to be compared to information in our data warehouse, based on keys both sets of data have.
I want to be able to create a query which will upload the ishare data into our data warehouse under my login, run the comparison, and then export the details to Excel somewhere. MS Access seems to be the way to go here.
I have managed to link the ishare list (with difficulties due to the attachment fields) and then create a local table based on it.
I have managed to create the temp table in my Volatile space.
How do I append the newly created table that I created from the list into my temporary space?
I am using Access 2010 and SharePoint 2007.
Thank you for your time.
If you can avoid using Access, I'd recommend doing so, since it adds an extra step for what you are trying to do. You can easily manipulate or mesh data within the Teradata session and export the results.
You can run the following types of queries using the standard Teradata SQL Assistant:
CREATE VOLATILE TABLE NewTable (
column1 DEC(18,0),
column2 DEC(18,0)
)
PRIMARY INDEX (column1)
ON COMMIT PRESERVE ROWS;
Change your assistant to Import Mode (File -> Import Data), then run:
INSERT INTO NewTable VALUES (?,?);
Browse for your file. This example would be a comma-delimited file with two numeric columns, with column one being the index.
You can now query or join this table to any information in the uploaded database.
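For example, a comparison against a warehouse table might look like the following - the warehouse database, table, and key column names are made up for illustration:
-- warehouse rows whose key appears in the uploaded list
SELECT w.*
FROM   warehouse_db.SomeTable AS w
JOIN   NewTable               AS n
  ON   w.key_column = n.column1;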
When you are finished you can drop with:
DROP TABLE NewTable
You can export results using File->Export Data as well.
If this is something you plan on running frequently, there are many ways to easily do these types of imports and exports. The Python module pandas has simple functionality for reading a query directly into DataFrame objects and dropping those objects into Excel, through the pandas.io.sql.read_frame() and DataFrame.to_excel() functions.

Rendering STAR schema in Excel (a 2-Dimension table)

Please move to an appropriate forum if it doesn't belong here.
I have a data feed that represents some multidimensional data in a star schema.
e.g. /Products /SalesYear /SalesContact /Region /Salesdata
Now I want to render this data in a simple tabular view
example:
             2005   2006   2007
Product1
  Category1  27m$   30m$   35m$
  Category2   9m$    1m$   11m$
Product2
  Category1  27m$   30m$   35m$
  Category2   9m$    1m$   11m$
Are there any standard algorithms or techniques that can be used to display this kind of data?
[EDIT]
What I need, essentially, is an efficient method to build an in-memory cube like PowerPivot does, but at a smaller scale.
I think you could use a modified version of this answer by Dick Kusleika: Convert row with columns of data into column with multiple rows in Excel 2007. Note that this solution does not produce the nested rows under Product1/Product2 that you have above, but my guess is you could pretty easily modify the solution to handle two row headings: column A would contain the product name and column B would contain the category.
EDIT: I misunderstood and thought you were trying to get the data out of that format, not in to that format.
If you have Excel 2010, the PowerPivot plugin can consume OData feeds directly (found the answer on the OData.org consumers page). If you have an older Excel, you might still be able to pull the data in with Get External Data From Web. You may need to put a proxy page (ASP.NET, PHP, whatever you're comfortable with) in between that understands JSON and transforms it into an HTML table. Get External Data From Web will definitely understand how to read data from a standard table.
Once you have the data in a normalized sheet in Excel, it should only be a matter of inserting a Pivot Table that uses that range as its data source.
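If the feed can be queried with SQL before it reaches Excel, one way to produce that normalized sheet is a plain join-and-group query over the star schema - the fact and dimension table and column names below are hypothetical:
SELECT p.ProductName,
       p.Category,
       t.SalesYear,
       SUM(f.Amount) AS TotalSales
FROM   SalesFact  AS f
JOIN   ProductDim AS p ON p.ProductKey = f.ProductKey
JOIN   TimeDim    AS t ON t.TimeKey    = f.TimeKey
GROUP BY p.ProductName, p.Category, t.SalesYear;
A Pivot Table over that result then gives the Product/Category rows against year columns shown in the question.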
