What are the differences between "Cognos TM1" and "Cognos 10 BI"?
Which one is considered a BI tool by IBM?
There are huge differences between "Cognos TM1" & "Cognos 10 BI"!
Cognos TM1 is a multidimensional (OLAP) database which can be queried from Excel and the web through "TM1 Web", "TM1 Executive Viewer" and "Cognos 10 BI". Within this database, you'll be able to create almost any kind of OLAP / decision-support application.
Cognos 10 BI is a web-based reporting application. Users, depending on their rights (licence), will be able to run reports, schedule reports and/or create ad hoc analyses against relational and/or multidimensional databases. In order to do so, a logical layer has to be built using "Framework Manager". This logical layer is used to hide database complexity and to provide end users with relevant information.
In simplest terms:
TM1 is a database engine and a collection of applications for accessing and managing its databases.
Cognos BI is a collection of web applications that provide polished interfaces for viewing and working with data.
More often than not, Cognos BI uses TM1 databases as its data source, but it does not have to. If Cognos BI is using TM1, most of the same user functions that are possible in TM1 applications are also available through Cognos BI, except they are available in a more user-friendly manner. Cognos BI also adds functionality not in TM1 applications to allow additional data management. IBM's marketing is confusing, but generally Cognos BI and Cognos TM1 collectively are considered to be the BI Tools package that they offer.
Now to be a little more technical about TM1, it is not just a plain old database. As others here have mentioned, it is a multidimensional OLAP database. It is able to handle numeric and string data, but it does not have a concept of NULL values. Numeric data can be summarized using consolidated elements. It has attributes to store metadata in. It has a built-in rules engine to handle business logic and custom calculations. It has processes for ETL and database maintenance tasks. It has chores for scheduling processes at various intervals. It links to Excel workbooks. Lastly, in addition to these features and more that are provided out of the box, TM1 exposes an API for programming against using 3GLs such as C++, C#, Java, and even VBA.
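To make the API point concrete: newer TM1 versions also expose an OData-style REST API, so any of those languages can talk to the server over plain HTTP. Below is a minimal C# sketch, assuming a TM1 server with the REST API enabled; the host, port and credentials are placeholders, not values to rely on:

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;
    using System.Threading.Tasks;

    class Tm1RestSketch
    {
        static async Task Main()
        {
            // Hypothetical server address; the REST API is OData-based (api/v1/...).
            var client = new HttpClient { BaseAddress = new Uri("https://tm1server:8881/") };

            // Placeholder credentials sent as HTTP basic auth.
            var token = Convert.ToBase64String(Encoding.ASCII.GetBytes("user:password"));
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Basic", token);

            // List the names of the cubes on the server, returned as JSON.
            string json = await client.GetStringAsync("api/v1/Cubes?$select=Name");
            Console.WriteLine(json);
        }
    }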
The key difference is that Cognos TM1 is meant to work with Excel easily, whereas Cognos 10 BI is mainly browser-based.
Have a look here for gory details.
http://www.tm1forum.com/viewtopic.php?f=5&t=1442
Cognos TM1 is also referred to by IBM as Financial Performance Management (FPM). It's an in-memory MOLAP cube application with real-time rules calculation. As it provides end-user writeback, it's often used by the Office of Finance for budgeting and modelling - therefore the MS Excel interface is frequently used. However, it also comes with a zero-footprint web front end, as well as Cognos Insight, which permits distributed or disconnected access to the TM1 cubes (located server side).
TM1 may be integrated into Cognos BI, so that Cognos BI reports from TM1 cubes; or TM1 may be accessed from a portlet within a Cognos BI dashboard.
The difference between Cognos 10 and Cognos TM1: BI is a reporting tool, whereas TM1 is an analysis and reporting tool. In TM1 you build the cubes, and BI generates the reports.
I am using a web server service where I am able to get real-time data using REST API calls. Now I want to be able to collect the data, store it somehow, and then visualise it in a nice way (produce graphs, basically). My approach would be to store it in a database and then use Power BI's internal "Get Data" feature with a "SQL Server Database" source. No idea if this is the correct approach. Can anyone advise here?
Hello and welcome to Stack Overflow!
I agree with Andrey's comment above. But if you want to know about all the data sources that Power BI supports connecting to, please check the following resources:
Data sources in Power BI Desktop
Power BI Data Source Prerequisites
Connect to a web page from Power BI Desktop
Real-time streaming in Power BI
Additionally, you may also go through Microsoft Power BI Guided Learning to understand the next steps for visualization.
Hope this helps!
Another approach is to build a custom data connector for that API in Power BI.
That allows you to fetch the data inside Power BI and build the visuals. You can store it in Excel files or SQL (you can use Python scripts for this) and schedule refreshes on the service.
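To sketch the store-then-visualise route from the question in code: a small C# job could poll the REST endpoint and append each reading to a SQL Server table, which Power BI's "Get Data" can then read. The endpoint URL, connection string, and table/column names below are all placeholders:

    using System;
    using System.Data.SqlClient;
    using System.Net.Http;
    using System.Threading.Tasks;

    class RestToSql
    {
        static async Task Main()
        {
            // Placeholder REST endpoint returning the latest reading as JSON.
            using var http = new HttpClient();
            string payload = await http.GetStringAsync("https://example.com/api/readings/latest");

            // Store the raw JSON plus a timestamp; you can normalise it into
            // typed columns later, or parse it in Power BI.
            using var conn = new SqlConnection(
                "Server=.;Database=Telemetry;Integrated Security=true");
            await conn.OpenAsync();
            using var cmd = new SqlCommand(
                "INSERT INTO dbo.Readings (CollectedAt, Payload) VALUES (@t, @p)", conn);
            cmd.Parameters.AddWithValue("@t", DateTime.UtcNow);
            cmd.Parameters.AddWithValue("@p", payload);
            await cmd.ExecuteNonQueryAsync();
        }
    }

Run it on a schedule (Windows Task Scheduler, an Azure Function timer, etc.) and the table grows into the history your graphs need.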
Right now I have created a Power BI dashboard (using Power BI Desktop) which retrieves data from an Excel file. Later on I will replicate the same data model in an Azure Analysis Services tabular model.
Is it possible to switch my Power BI dashboard's data source to Azure Analysis Services seamlessly? What I mean is that I shouldn't have to do major rework on my dashboard (re-create the visualizations again, etc.). How do I do that?
Thank you
The hard fact is that it may not be seamless. There may be ways of minimizing the extra work, but you need to answer a few questions yourself and then decide:
Do you plan to use SSAS Tabular in Power BI using live connection or import mode? (I assume you probably have this cube on-premises.)
Is the data layout in the Excel file (think of it as flattened data) going to be the same as in SSAS Tabular?
One option worth considering would be to load the SSAS Tabular cube with the data from the Excel file first, and only then start development in Power BI. That way, source changes in Power BI will not be an issue going forward.
Hope this helps!
Our team has just recently started using Application Insights to add telemetry data to our Windows desktop application. This data is sent almost exclusively in the form of events (rather than page views, etc.). Application Insights is useful only up to a point; to answer anything other than basic questions, we are exporting to Azure storage and then using Power BI.
My question is one of data structure. We are new to analytics in general and have just been reading about star/snowflake structures for data warehousing. This looks like it might help in providing the answers we need.
My question is quite simple: Is this the right approach? Have we over complicated things? My current feeling is that a better approach will be to pull the latest data and transform it into a SQL database of facts and dimensions for Power BI to query. Does this make sense? Is this what other people are doing? We have realised that this is more work than we initially thought.
Definitely pursue Michael Milirud's answer, if your source product has suitable analytics you might not need a data warehouse.
Traditionally, a data warehouse has three advantages: it integrates information from different data sources, both internal and external; data is cleansed and standardised across sources; and the history of change over time ensures that data is available in its historic context.
What you are describing is becoming a very common case in data warehousing, where star schemas are created for access by tools like Power BI, Qlik or Tableau. In smaller scenarios the entire warehouse might be held in the Power BI data engine, but larger data might need pass-through queries.
In your scenario, you might be interested in some tools that appear to handle at least some of the migration of Application Insights data:
https://sesitai.codeplex.com/
https://github.com/Azure/azure-content/blob/master/articles/application-insights/app-insights-code-sample-export-telemetry-sql-database.md
Our product Ajilius automates the development of star schema data warehouses, speeding the development time to days or weeks. There are a number of other products doing a similar job; we maintain a complete list of industry competitors to help you choose.
I would continue with Power BI - it actually has a very sophisticated and powerful data integration and modeling engine built in. Historically I've worked with SQL Server Integration Services and Analysis Services for these tasks - Power BI Desktop is superior in many aspects. The design approaches remain consistent - star schemas etc, but you build them in-memory within PBI. It's way more flexible and agile.
Also, are you aware that Application Insights can be connected directly to the Power BI web service? This connects to your AI data in minutes and gives you Power BI content ready to use (dashboards, reports, datasets). You can customize these and build new reports from the datasets.
https://powerbi.microsoft.com/en-us/documentation/powerbi-content-pack-application-insights/
What we ended up doing was not sending events from our WinForms app directly to AI, but to an Azure Event Hub.
We then created a job that reads from the Event Hub and sends the data to:
AI, using the SDK
Blob storage, for later processing
Azure Table storage, to create Power BI reports
You can of course add more destinations.
So basically all events are sent to one destination and from there stored in many destinations, each for its own purpose. We definitely did not want to be restricted to 7 days of raw data; storage is cheap, and blob storage can be used in many Azure and Microsoft analytics solutions.
The Event Hub can be linked to Stream Analytics as well.
More information about Event Hubs can be found at https://azure.microsoft.com/en-us/documentation/articles/event-hubs-csharp-ephcs-getstarted/
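For reference, sending an event into the hub from the client app is only a few lines. A minimal sketch using the Microsoft.Azure.EventHubs SDK; the connection string and event payload are placeholders:

    using System.Text;
    using System.Threading.Tasks;
    using Microsoft.Azure.EventHubs;

    class TelemetrySender
    {
        static async Task Main()
        {
            // Placeholder connection string from the hub's shared access policy.
            var client = EventHubClient.CreateFromConnectionString(
                "Endpoint=sb://mynamespace.servicebus.windows.net/;" +
                "SharedAccessKeyName=send;SharedAccessKey=<key>;EntityPath=telemetry");

            // One JSON event; the reader job fans it out to AI, blob and table storage.
            var body = Encoding.UTF8.GetBytes("{\"event\":\"AppStarted\",\"user\":42}");
            await client.SendAsync(new EventData(body));
            await client.CloseAsync();
        }
    }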
You can start using the recently released Application Insights Analytics feature. In Application Insights, we now let you write any query you would like so that you can get more insights out of your data. Analytics runs your queries in seconds, lets you filter / join / group by any possible property, and you can also run these queries from Power BI.
More information can be found at https://azure.microsoft.com/en-us/documentation/articles/app-insights-analytics/
I'm about to develop an application in SharePoint.
I've got experience in ASP.NET and C#, Domino, Java, etc.
Now my $1000 question: where can I store data in SharePoint? I'm aware there are list definitions, so is it good practice to store the data "natively" in SharePoint using lists, or traditionally in an external data container, e.g. MS SQL Server 2008?
Because SharePoint is essentially a .NET Web Application, the options are virtually limitless for how you store data used in your application. The two most common practices would be to use SharePoint lists to store your data, or to store the data in a SQL database.
I would suggest that each has its advantages. A SharePoint list is advantageous because it can be seen by the users, and you can leverage out-of-the-box features to allow users to do CRUD operations. A SQL database makes more sense when the data is large and does not fit well within the constructs of SharePoint lists. SQL is going to perform much faster when doing bulk operations.
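To make the list option concrete, here is a minimal sketch of adding an item to a SharePoint list from custom code via the client object model (CSOM); the site URL, list title and field are placeholders:

    using Microsoft.SharePoint.Client;

    class ListWriteSketch
    {
        static void Main()
        {
            // Placeholder site URL; defaults to the current Windows credentials.
            var ctx = new ClientContext("http://sharepoint/sites/app");
            List clients = ctx.Web.Lists.GetByTitle("Clients");

            // Queue up the creation of one list item.
            ListItem item = clients.AddItem(new ListItemCreationInformation());
            item["Title"] = "Contoso Ltd";
            item.Update();

            // CSOM batches operations; nothing is sent until ExecuteQuery.
            ctx.ExecuteQuery();
        }
    }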
Hope this helps!
My company is moving from Client/Server applications (thick client apps that makes database calls directly) to a Service Oriented Architecture (SOA) (thin or thick clients that call a web service that then does business logic and calls the database).
Part of this includes using SharePoint as our client (not our only client type, but the major one). I have been watching the Pluralsight training on SharePoint and I am starting to see a lot about SharePoint Lists.
SharePoint lists seem to be a core part of SharePoint. However, they also seem to be a huge step backward, architecturally speaking. These are my concerns:
Using these lists, I will have my SharePoint web parts hitting the data directly again (much like where we were with two-tier client/server apps).
This confuses the data layer big time. Do I store my list of clients in the SQL Server database? Or in a SharePoint list? Or both? (Say it ain't so!) If both, how do I keep them in sync?
If I store the data in SharePoint lists, do I then have to have my web services use the SharePoint client object model to get at the lists (for non-SharePoint clients)?
Basically, SharePoint lists seem like a very, very bad idea, but what I hear is that they are one of the big benefits of SharePoint. (Though I know that there are things like resource management and permissions that are also useful in SharePoint.)
SharePoint lists seem like an attempt at low-grade data storage (without all the benefits of a full data management solution like SQL Server).
So here are my questions: what are the right/best-practice reasons why I would use SharePoint lists over web services that access SQL Server? And can SharePoint even work normally using web services to get and update data? (Basically, if I don't use lists, do I lose a lot of functionality?)
SharePoint lists are not a one-size-fits-all solution to data storage. There are a great many scenarios where you'll want to use data available from an external system, like an existing CRM database, inside of SharePoint.
SharePoint 2007 used a concept called Business Data Catalog to address some of these scenarios, allowing a read-only view of external system data in SharePoint lists.
SharePoint 2010 greatly expands on the SharePoint 2007 capabilities with Business Connectivity Services, allowing for full read/write from SharePoint lists, with API access allowing custom connectors to be implemented in code for whatever backend system you may be trying to access (a SQL Server provider is provided out of the box). Here's a pretty thorough primer on the BCS, and there's a lot more information to be found on MSDN.
Be wary of trying to use SharePoint lists as traditional tables in an RDBMS; that isn't their purpose, and it will only lead to intense headaches down the road.
While I agree with the answer by OedipusPrime, I feel your question "Why would I use lists" warrants a more detailed answer.
The short version is, you probably shouldn't. What SharePoint gives you are lists which are a bit 'database-like', but simple enough that your ordinary user can cope with them. They're quite flexible for users. It also gives you a user interface to interact with the lists and data.
You're not using the UI, and you're probably quite happy with SQL, so SQL should probably be your choice. There's nothing that you can do in SharePoint that you can't do yourself in SQL (often faster) - but SQL isn't as user friendly for non-techies to set things up. SharePoint isn't a "full data management solution" like SQL - it's more like ASP.NET on steroids, and it has different advantages. (That's why its back end is ... SQL)
So, where would you store your data? SQL or Lists. One or the other - don't do both, that never works out well. If your data is in SQL, you can expose it in SharePoint with the BCS as mentioned already.
If your data is in a SharePoint list, yes, you can use the Client Object model. Or you can use Web Services directly. Or the REST API. All those are valid options.
Or, you could expose data from your database via your own Web Services, and then consume those via SharePoint's BCS, allowing you to present your data in SharePoint (with full CRUD, if you want) without your application becoming dependent upon it.
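As a sketch of the REST option mentioned above (using the SharePoint 2013-style _api endpoint; the site URL and list title are placeholders), list items can be read with plain HTTP and no SharePoint assemblies:

    using System;
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;

    class ListReadSketch
    {
        static async Task Main()
        {
            // Authenticate as the current Windows user against an on-premises site.
            var handler = new HttpClientHandler { Credentials = CredentialCache.DefaultCredentials };
            using var http = new HttpClient(handler);
            http.DefaultRequestHeaders.Add("Accept", "application/json;odata=verbose");

            // Placeholder site URL and list title.
            string json = await http.GetStringAsync(
                "http://sharepoint/sites/app/_api/web/lists/getbytitle('Clients')/items");
            Console.WriteLine(json);
        }
    }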
You are partially right. Regarding your options, here is the route you should go:
You should store the data only in lists, not in SQL Server. The data in lists is ultimately stored in the SharePoint content database in SQL Server, so there is no point syncing it.
To have your clients access the data, your web services can call the out-of-the-box web services exposed by SharePoint, which can operate on list data.
See these articles for an overview of the web services exposed by SharePoint:
http://msdn.microsoft.com/en-us/library/ee538665.aspx
http://www.sharepointmonitor.com/2007/01/sharepoint-web-service/
This is one of the big questions that faces someone new to SharePoint - should I store X in a SharePoint list, or in a SQL Server table?
SharePoint lists have similarities to database tables in that they have rows (items) and columns and the equivalent of triggers (event receivers). Data management pages are part of SharePoint so there is no need to build pages for updating and adding items to the table, and additionally the lists can be exposed as RSS feeds or through web services. They can also be versioned and participate in a workflow. Another advantage is that the contents of the lists are automatically included in content backups, so there is no need to manage a separate backup and restore process – everything is in the content database. There may not necessarily be a performance impact because there are several caching mechanisms which come into play.
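To illustrate the "equivalent of triggers" point, here is a minimal sketch of a list event receiver in the server object model; the class name and the Notes field are hypothetical:

    using System;
    using Microsoft.SharePoint;

    // Runs server side after an item is added to the list it is bound to.
    public class ClientListReceiver : SPItemEventReceiver
    {
        public override void ItemAdded(SPItemEventProperties properties)
        {
            EventFiringEnabled = false;  // avoid re-triggering via our own Update()
            SPListItem item = properties.ListItem;
            item["Notes"] = "Created " + DateTime.UtcNow.ToString("u");  // hypothetical field
            item.Update();
            EventFiringEnabled = true;
        }
    }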
A SharePoint list should certainly be considered as a storage mechanism, even for large datasets with appropriate treatment. In one sense the SharePoint list is acting as an effective data access layer, bearing in mind that ultimately the data is being stored in a SQL Server database anyway. What you do not get is the rigour of relational modelling, normalisation, referential integrity, optimisation of the execution plan, and all the other tools of the DBA’s craft. If the efficient management of the data is dependent on those skills then storing the data directly in its own database is probably a better choice. You can still get at the data through BCS, as well as through custom code.
A word of warning: on no account be tempted to interact with the SharePoint content databases directly.