Connecting from TIBCO Spotfire to SAS datasets on Windows

How do I create a connection from Spotfire to SAS datasets?

Good morning, su919. Your question doesn't provide enough detail, but the following link will help you out:
http://support.spotfire.com/sr_spotfire65.asp
Select your version at the top to read the requirements for Spotfire. Regarding SAS in particular:
SAS Providers for OLE DB 9.22 or higher
It is possible to import SAS data files (*.sas7bdat, *.sd2, *.sd7) into TIBCO Spotfire directly. The requirement for this functionality is that the SAS Providers for OLE DB 9.22 or higher must first be installed on the client machine.
SAS Providers for OLE DB can be downloaded from the SAS website (free registration required).
NOTE: SAS Providers for OLE DB are not supported for use in Spotfire Web Player (the SAS driver is not thread-safe, which can cause general platform instabilities).

Related

Azure CosmosDB Spark OLTP Connector with Managed Identity

We are trying to ingest some data from DataLake to Azure Cosmos DB and Spark OLTP Connector seems to be the easiest to use.
But due to the company's policy, we are not supposed to use the master keys and we usually use managed identity for the applications. I see the Cosmos DB Java client builder has the 'TokenCredential' option with sample code as:
// Requires the azure-cosmos and azure-identity libraries on the classpath.
CosmosAsyncClient client = new CosmosClientBuilder()
    .endpoint("https://<your-account>.documents.azure.com:443/") // account endpoint is required
    .credential(new DefaultAzureCredentialBuilder().build())
    .buildAsyncClient();
Is there any way to set up the connector to use the same authentication mechanism with a managed identity?
I see the Cosmos DB Java client builder has the 'TokenCredential' option with sample code
With CosmosAsyncClient you also have to provide the master key; there is no such way to use managed identities.
we are not supposed to use the master keys and we usually use managed identity for the applications.
As you want to transfer data from Data Lake to Cosmos DB with managed identities, you can use the Copy Data tool in Azure Data Factory. Create a linked service for Cosmos DB and, under authentication type, select managed identity (either system-assigned or user-assigned).
You can refer to this SO thread by @KarthikBhyresh-MT for more on the Copy Data tool.
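For illustration, a Cosmos DB linked service that authenticates with the factory's system-assigned managed identity looks roughly like this (a sketch following the Data Factory linked-service JSON format; the name, endpoint, and database are placeholders):

{
  "name": "CosmosDbLinkedService",
  "properties": {
    "type": "CosmosDb",
    "typeProperties": {
      "accountEndpoint": "https://<account>.documents.azure.com:443/",
      "database": "<database>"
    }
  }
}

Note that no key or connection string appears; the factory's managed identity still needs an appropriate Cosmos DB role assignment on the target account.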
Currently, the Spark Connector does not support MSI. I see you correctly created the issue on the repo that holds the source code: https://github.com/Azure/azure-sdk-for-java/issues/29958
That will surely be used for tracking purposes, or at least for linking to the work item that tracks progress in that area. The feature will be available in the future, but there is currently no ETA.

Why do LibreOffice Base, Microsoft Excel and Tableau require unnecessary permissions to query table data from BigQuery?

I have a BigQuery instance and I have shared a view with a service account. This service account has the "bigquery.user" role. I set up the Simba ODBC drivers on my Ubuntu machine, installed LibreOffice Base on it, and also modified the odbc.ini file to use the above service account. I'm able to connect to BigQuery, but when I try to query the shared view, it throws an error saying "user does not have bigquery.tables.create permission for table ...". It looks like LibreOffice Base is trying to create some temp tables. I tried with MS Excel and the same error is thrown.
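For reference, the relevant odbc.ini entry looks roughly like this (a sketch; the key names follow the Simba BigQuery ODBC driver documentation, where OAuthMechanism=0 selects service-account authentication, and the paths and identifiers are placeholders):

[BigQuery]
Driver=/opt/simba/googlebigqueryodbc/lib/64/libgooglebigqueryodbc_sb64.so
Catalog=<gcp-project-id>
OAuthMechanism=0
Email=<service-account>@<project>.iam.gserviceaccount.com
KeyFilePath=/path/to/service-account-key.json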
My questions:
Isn't the "bigquery.user" role enough to query data from shared datasets/tables?
Why does LibreOffice Base require such extra permissions?
What I tried:
I shared the data with a user account (someuser@gmail.com). I gave this account the same role, i.e. "bigquery.user". I was able to query data successfully from this account.
I also tried Tableau. Tableau has native support for BigQuery and also supports ODBC connections (to connect to BigQuery, MySQL, etc.). I tried both, i.e. connecting to BigQuery using Tableau's native BigQuery support and using an ODBC connection. It worked with native BigQuery but not with the ODBC connection (maybe it has the same issue as LibreOffice Base and MS Excel).

Spotfire Connection to Blob (Storage Accounts)

I am looking for the process of connecting Spotfire to Blob Storage Accounts.
Could anyone help me out by explaining the steps to connect Spotfire to Blob storage?
Thanks.
You can use the Spotfire Connector for OData.
For more information on the product, see the TIBCO Spotfire Connectors system requirements and documentation.

Connect Azure Data Marketplace to SQL Server 2008 R2

Is there a way to connect SQL Server 2008 R2 to the Azure Data Marketplace to enable data import?
Are there any ODBC or JDBC drivers for the Azure Data Marketplace?
I'm a bit confused by the question. Is this about publishing data through the Windows Azure Marketplace and sourcing it from SQL Server? Or is it about accessing published data from an application and bringing that data into your own app?
If the former:
You may choose to host your data in SQL Server. When you sign up for data hosting in the Windows Azure Marketplace, you'll provide the requisite connection strings for your servers. You don't have to worry about ODBC/JDBC drivers. See the data publishing documentation for more details.
If the latter: Data may be accessed via HTTP/OData, not ODBC/JDBC. It's a metered consumption model, so you need to subscribe to a particular data feed, which then gives you an access token. Check out this video from TechEd last year to see more about this, along with a .NET code sample. You can easily access data from any other language as well.
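As a rough sketch of that consumption pattern (assuming the Marketplace's HTTP Basic scheme, where any user name is accepted and the account key is the password, and using a placeholder feed URL), fetching a feed page needs nothing beyond a plain HTTP client:

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Base64;

public class DataMarketFeedSample {
    public static void main(String[] args) throws Exception {
        // Placeholder feed URL and account key; substitute the values from your subscription.
        String feedUrl = "https://api.datamarket.azure.com/<publisher>/<dataset>/v1/<collection>?$top=10";
        String accountKey = "<your-account-key>";

        // Basic auth: empty user name, account key as the password.
        String basic = Base64.getEncoder().encodeToString((":" + accountKey).getBytes("UTF-8"));

        HttpURLConnection conn = (HttpURLConnection) new URL(feedUrl).openConnection();
        conn.setRequestProperty("Authorization", "Basic " + basic);
        conn.setRequestProperty("Accept", "application/atom+xml"); // OData Atom payload

        try (InputStream in = conn.getInputStream()) {
            System.out.write(in.readAllBytes()); // dump the raw OData response
        }
    }
}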
If your goal is to access the data feed directly from SQL Server: I'm no expert in CLR stored procedures, but if CLR SPs support code that can access a web service endpoint, I guess you could write a CLR SP to access a data feed, pull the data down, and populate local tables. I have no idea if this is supported or advisable...

Alternative to Windows Azure tables out of the cloud

I'm developing a .NET app which needs to run both on Azure and on regular Windows Servers (2003). It needs to store a few GB of data, and SQL Azure is too expensive for me, so I'll use Azure tables in the cloud version. Can you recommend a storage solution which will run on standalone servers and has an API and behavior similar to Azure tables? From what I've seen, Server AppFabric does not include Tables.
If you think about what Windows Azure Table Storage is, it is a key-value-pair-based non-relational database which is accessible through a REST API. Please download this document about Windows Azure and NoSQL database details.
If I were in your situation, my approach would be to find something similar to Azure Table Storage which I can access over REST and which has a similar API. So if you try to find a similar database to run on a machine, you really need to look for:
A key-value-pair DB
Support for basic operations, i.e. add, delete, insert, and modify an entity
Partition-key and row-key based access
A RESTful interface to connect to
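Expressed as code (a Java sketch with invented names, just to pin down the access pattern those requirements imply), the API you are looking for is essentially:

// Illustrative only: the minimal surface of an Azure-Tables-like store.
public interface TableStore<T> {
    void insert(String partitionKey, String rowKey, T entity);
    void update(String partitionKey, String rowKey, T entity);
    void delete(String partitionKey, String rowKey);
    T get(String partitionKey, String rowKey);        // point lookup by both keys
    Iterable<T> queryPartition(String partitionKey);  // scan a single partition
}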
If you want to try something, you can look at:
DBreeze (a C#-based key-value-pair NoSQL DB); I just saw it and it looks exciting
Google's LevelDB (a key-value-pair DB, open source and available on Windows); I have no idea about its API
Redis (a great key-value-pair DB, but I'm not sure about its Windows compatibility and API)
Here is a list of key/value databases without additional indexing facilities:
Berkeley DB
HBase
MemcacheDB
Redis
SimpleDB
Tokyo Cabinet/Tyrant
Voldemort
Riak
If none of these works, you can take any open-source DB, modify it to fit your requirements, and then make it available to others as your contribution to the community.
ADDED
Now you can use a Windows Azure Virtual Machine to run any kind of key-value-pair DB on a Linux or Windows machine and connect it to your application.
I'm not sure which storage solution to recommend, but just about any database solution would work, provided that you write an interface to abstract all your data-storage code. Then write implementations of that interface for Azure Table Storage and for whatever other database you want to use on the non-cloud server.
You should be doing that anyway, so that your code isn't tightly coupled to the Azure Table Storage APIs.
If you combine coding against that interface with an IoC container, then a single line of code or a single configuration setting would enable you to switch between data implementations based on which platform the code is running on.
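As a concrete illustration of that idea (continuing the hypothetical TableStore interface sketched earlier; all names are invented), a trivial in-memory implementation can stand in on a plain Windows server while an Azure Tables implementation serves the cloud deployment:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.Collectors;

// Hypothetical stand-in for the non-cloud deployment; the cloud build would supply
// an implementation backed by the Azure Table Storage REST API instead.
public final class InMemoryTableStore<T> implements TableStore<T> {
    private final Map<String, T> rows = new ConcurrentHashMap<>();

    private static String key(String pk, String rk) { return pk + "|" + rk; }

    @Override public void insert(String pk, String rk, T entity) { rows.put(key(pk, rk), entity); }
    @Override public void update(String pk, String rk, T entity) { rows.put(key(pk, rk), entity); }
    @Override public void delete(String pk, String rk) { rows.remove(key(pk, rk)); }
    @Override public T get(String pk, String rk) { return rows.get(key(pk, rk)); }

    @Override public Iterable<T> queryPartition(String pk) {
        return rows.entrySet().stream()
                   .filter(e -> e.getKey().startsWith(pk + "|"))
                   .map(Map.Entry::getValue)
                   .collect(Collectors.toList());
    }
}

The IoC container (or even a plain factory reading one configuration setting) then decides which implementation to hand out, so the rest of the application never references a concrete store.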
