We use commodity trading software linked to an Oracle database to export reports to Excel. I'm connecting to this Oracle database using PowerPivot as well as SQL Developer. This lets me connect to Oracle directly and create live, refreshable reports that no longer need to be constantly exported.
I located an Oracle view responsible for generating one of the most important reports we export to Excel. What's strange is that all of the columns are completely empty: when I open the view in PowerPivot or SQL Developer, I just see the headers, with no data beneath them. The view populates with data just fine when the report is exported from our trading software, however.
Does anyone know why this might be and how I can get this view to populate the data (using PowerPivot for example)?
Is this a materialized view I'm dealing with?
My first guess would be that it has to do with permissions or row-level security on the view. Whether it is a materialized view is impossible to determine from the information you've provided, but it should make no difference in accessing the data from Power Pivot.
This is a question for your DBAs and unlikely to be a problem in Power Pivot.
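If you want to do some checking yourself before going to the DBAs, a few data dictionary queries in SQL Developer can narrow it down. This is just a sketch; APP_SCHEMA and RPT_VIEW are hypothetical names, so substitute the owner and view name you found:

    -- Is the object a plain view or a materialized view?
    SELECT object_type
    FROM   all_objects
    WHERE  owner = 'APP_SCHEMA'       -- hypothetical owner
    AND    object_name = 'RPT_VIEW';  -- hypothetical view name

    -- Does the view return any rows at all under your account?
    SELECT COUNT(*) FROM app_schema.rpt_view;

    -- If VPD/row-level security is applied, a policy should show up here.
    SELECT object_owner, object_name, policy_name
    FROM   all_policies
    WHERE  object_name = 'RPT_VIEW';

If the count is zero but the trading software sees data, a VPD policy or missing grants on the underlying tables are the likely culprits.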
I'm connecting Spotfire to a Postgres database. After connecting to the db, some functions are no longer available in Spotfire. I want to know if there is a way I can stay connected to the database and still use all of Spotfire's functionality. I have a huge amount of data and need to store it in a database, as using Excel is not a viable solution. I understand that when I connect to a database I can only use the limited functions supported by the database; I want to know whether there is a way around this situation.
Is there a workaround similar to QVDs in Qlik, or can I store data in another dxp file and use that to import data?
I think you kept your data external, and that's why you can't use some Spotfire functionality.
To have your data embedded, just select that option when you add a data table: for the load method, select "import data" instead of "keep data external".
If you have too much data in your database, consider using the "load on demand" functionality to only get the data you want.
It is not because of external data. I spoke to Spotfire support, and they don't support this: if I'm connecting to a database, I'll only be able to use limited functions. I found an alternative solution, though, using information links.
We are using MS CRM 2015 and would like to know our options and best practices for archiving audit logs. Any suggestions please? Thanks!
You can use the MSCRM Toolkit at http://mscrmtoolkit.codeplex.com/, which has a tool called Audit Export Manager to aid in archiving audit logs. The documentation for the tool is available at http://mscrmtoolkit.codeplex.com/documentation#auditexportmanager . The key things this tool allows you to do are filter by entities, metadata, summary or detail, and pick individual users, actions, and/or operations to include in your export. Exports can be limited to a particular date range and can be exported to CSV, XML, or XML Spreadsheet 2003 format. Note that I've had trouble exporting with a couple of the formats, but I typically get good results when exporting to CSV.
This is one of the tools I've found that gives you some flexibility when exporting audit records, since Microsoft CRM allows you to filter the audit data but doesn't provide a good built-in means to export it.
You can try the new Stretch Database feature in SQL Server 2016:
Stretch Database migrates your cold data transparently and securely to the Microsoft Azure cloud.
Stretch warm and cold transactional data dynamically from SQL Server to Microsoft Azure with SQL Server Stretch Database. Unlike typical cold data storage, your data is always online and available to query. You can provide longer data retention timelines without breaking the bank for large tables like Customer Order History.
There is a helpful hands-on review, SQL Server 2016 Stretch Database, with a very interesting SWITCH TABLE example.
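For reference, enabling Stretch on a single table is a one-line change once the instance and database have been prepared. A minimal sketch, assuming a hypothetical dbo.AuditBase table:

    -- Prerequisites (instance and database level), set up beforehand:
    --   EXEC sp_configure 'remote data archive', 1; RECONFIGURE;
    --   ALTER DATABASE ... SET REMOTE_DATA_ARCHIVE = ON (SERVER = ...);

    -- Start migrating the table's rows to Azure.
    ALTER TABLE dbo.AuditBase
        SET (REMOTE_DATA_ARCHIVE = ON (MIGRATION_STATE = OUTBOUND));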
There should also be a solution that moves archived audit data to a separate filegroup. Take a look at Transferring Data Efficiently by Using Partition Switching:
You can use the Transact-SQL ALTER TABLE...SWITCH statement to quickly and efficiently transfer subsets of your data in the following ways:
Assigning a table as a partition to an already existing partitioned table.
Switching a partition from one partitioned table to another.
Reassigning a partition to form a single table.
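A minimal sketch of the switch-out pattern, with hypothetical audit table names (the archive table must match the source's columns and sit on the same filegroup as the partition being switched):

    -- Hypothetical partition function/scheme splitting audit rows by year.
    CREATE PARTITION FUNCTION pfAuditYear (datetime)
        AS RANGE RIGHT FOR VALUES ('2014-01-01', '2015-01-01');

    CREATE PARTITION SCHEME psAuditYear
        AS PARTITION pfAuditYear ALL TO ([PRIMARY]);

    CREATE TABLE dbo.AuditLog (
        AuditId   int            NOT NULL,
        CreatedOn datetime       NOT NULL,
        Detail    nvarchar(4000) NULL
    ) ON psAuditYear (CreatedOn);

    -- Standalone archive table: same columns, same filegroup.
    CREATE TABLE dbo.AuditLogArchive (
        AuditId   int            NOT NULL,
        CreatedOn datetime       NOT NULL,
        Detail    nvarchar(4000) NULL
    ) ON [PRIMARY];

    -- Metadata-only move of the oldest partition (rows before 2014-01-01).
    ALTER TABLE dbo.AuditLog SWITCH PARTITION 1 TO dbo.AuditLogArchive;

The switch is a metadata operation, so it completes almost instantly regardless of row counts.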
I am working on an Excel to Access Application, where the front-end will be an Excel workbook provided to 60 users who will be filling and submitting a form. On submitting the form, the form data will be inserted into a table (only one table) in an Access mdb file using VBA and ADO. The users will be only inserting records; NO UPDATES. (Basically it is a data entry application and to speed up data entry, multiple users will be using Excel files to input records into a common backend mdb database.)
Will there be any locking or concurrency issues given that only INSERTs are going to be done? If yes, are there ways to handle them?
(I am aware that an SQL Express or other solution would be better than the MS-Access solution. But business requires an mdb file as it can easily be mailed to another person periodically for analysis.)
Consider building the form within Access instead of Excel. Forms are an integral component of Access and can update, append, and delete table data without VBA or ADO, since the table will be the form's RecordSource. Access forms still allow VBA coding and embedded macros, with interactivity (e.g., the OnClick, OnOpen, AfterUpdate, BeforeChange, and AfterDelConfirm events) more advanced than Excel UserForms.
Plus, Access allows up to 255 simultaneous users. You can even split Access into two files, frontend and backend, and distribute multiple frontends to all 60 users while maintaining one backend data file. Access backends can even upsize to any standard RDBMS (Oracle, SQL Server, MySQL, PostgreSQL, DB2) using ODBC/OLEDB drivers.
Please don't settle for Excel just because it is a very popular, easy-to-use application. Building a UserForm, connecting to a database via ODBC, and running looped inserts with VBA recordsets amounts to quite a bit of code, when all of this is native to Access and can be built without a single line of VBA. Use Excel as an end-use document, like a Word or PDF file, that interacts with static, exported data for formulas, statistics, reports, and tables/graphs.
CONCURRENCY
Depending on the DAO or ADO calls or general database settings, various locking mechanisms can be set in MS Access tables, stored queries, or VBA recordsets:
Pessimistic locking - an individual record is locked while the first user is in edit mode.
Optimistic or no locking - an individual record is locked only while the first user is in save mode; usually used in environments with a low chance of concurrent edits to the SAME record.
All-records locking - the entire table is locked while the first user is in edit mode.
Usually, pessimistic locking is the default (Access forms employ this setting), where any user after the first receives an error message when attempting to edit the SAME record. For your insert-only situation, locking should not pose an issue; it would only come into play if your users could also browse and edit previous data.
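Since the workload is insert-only, each insert should hold its lock only for the instant it commits. One way to keep that window short from the Excel side is to have ADO execute a parameterized saved query rather than building SQL strings in VBA. A minimal sketch in Access SQL; the query, table, and column names are hypothetical:

    -- Hypothetical saved query (e.g. qryInsertEntry) the Excel front-end
    -- can call through ADO, supplying the three parameter values.
    PARAMETERS pEntryDate DateTime, pUserName Text, pAmount Currency;
    INSERT INTO tblFormEntries (EntryDate, UserName, Amount)
    VALUES (pEntryDate, pUserName, pAmount);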
I'm sitting in on a training class and the instructor just asked the new hires a question I don't know the answer to myself. "If you don't have direct access to the database, how would you see some rows in the table through Cognos v9.5?"
Anybody have a solution? I'm pretty new to Cognos still so please be specific!
Thanks!
v9.5 is TM1 only; there was no such Framework Manager release.
You can write custom SQL in TurboIntegrator using an ODBC data source.
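For example, the data source query you paste into the TurboIntegrator Query box is plain SQL passed through ODBC. A hypothetical sketch (table and column names are made up):

    SELECT account_code, period_name, amount
    FROM   gl_balances
    WHERE  fiscal_year = 2015

TurboIntegrator then maps the returned columns to variables that your Data and Metadata procedures can load into cubes.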
If we're talking BI (not TM1), then you would see rows through Cognos secured data sources, i.e., the Cognos administrator creates a data source in Cognos (using Cognos Connection) which has access to the database, then grants you (as a Cognos user) access to that data source. You don't have direct access to the database, but you have access to a Cognos data source via Cognos security, and you can see the data that way.
I need to combine data from the Project Server reporting database with data from custom lists in SharePoint workspaces. The results need to be displayed within a single report. How should this be done? Options I've thought of:
Extend the reporting database with the custom list data (if this is possible). Use Reporting Services to display the output.
Query the reporting database and the SharePoint workspaces and combine results in memory. Write custom code to display the output.
Any other ideas? I have the skills to develop this but am very open to purchasing a product if it solves the problem.
I've had this sort of problem as well. My approach:
Create a custom reporting db.
Run regular jobs from SQL Server to query SharePoint (via web services) and store the results in the db.
I use ListItemsChangesSinceToken in Lists.asmx to improve efficiency. I also utilise the SiteDataQuery toolset; I wrote a really simple interface to it so I can call a SiteDataQuery remotely, returning a DataTable.
Use Reporting Services / any tool to extract and report on the data.
The reasons I opted for a staging db were:
Performance - the web service calls are pretty slow.
Service continuity - if SharePoint is down or slow for any reason, queries will fail.
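A rough sketch of what the staging side can look like, with hypothetical table and column names (MSP_EpmProject_UserView is a Project Server reporting-database view, but verify the exact name for your version):

    -- Staging table populated by the scheduled job that calls Lists.asmx.
    CREATE TABLE dbo.SharePointListItems (
        ListName   nvarchar(128)    NOT NULL,
        ItemId     int              NOT NULL,
        ProjectUid uniqueidentifier NULL,   -- key back to Project Server
        Title      nvarchar(255)    NULL,
        ModifiedOn datetime         NOT NULL,
        CONSTRAINT PK_SharePointListItems PRIMARY KEY (ListName, ItemId)
    );

    -- Reporting query joining staged list data to the Project Server reporting db.
    SELECT p.ProjectName, s.Title, s.ModifiedOn
    FROM   dbo.MSP_EpmProject_UserView AS p
    JOIN   dbo.SharePointListItems     AS s
           ON s.ProjectUid = p.ProjectUID;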
Hope this helps.
I also found the tool SharePoint Data Miner, which appears to do the same as DJ's answer.