I'm trying to connect to Cosmos DB from Databricks using the Maven connector (com.azure.cosmos.spark:azure-cosmos-spark_3-1_2-12:4.14.0). Here is the setup:
cosmosEndpoint = "https://myendpoint.documents.azure.com:443/"
cosmosMasterKey = dbutils.secrets.get(scope = "mykv", key = "my_key")
cosmosDatabaseName = "mydb"
cfg_oro = {
    "spark.cosmos.accountEndpoint": cosmosEndpoint,
    "spark.cosmos.accountKey": cosmosMasterKey,
    "spark.cosmos.database": cosmosDatabaseName,
    "spark.cosmos.container": "mycontainer",
}
spark.conf.set("spark.sql.catalog.cosmosCatalog", "com.azure.cosmos.spark.CosmosCatalog")
spark.conf.set("spark.sql.catalog.cosmosCatalog.spark.cosmos.accountEndpoint", cosmosEndpoint)
spark.conf.set("spark.sql.catalog.cosmosCatalog.spark.cosmos.accountKey", cosmosMasterKey)
When I run the read statement, it keeps running and shows no result. Where is the problem coming from?
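For context, a typical read against this configuration uses the connector's `cosmos.oltp` data source. The sketch below is illustrative only; the helper function names are mine, and the endpoint/key values are placeholders:

```python
# Sketch: assemble the connector options and read the container as a DataFrame.
# Assumes a live SparkSession (`spark`) with the azure-cosmos-spark connector
# installed on the cluster.

def build_cosmos_options(endpoint, key, database, container):
    """Assemble the option map expected by the Cosmos DB Spark connector."""
    return {
        "spark.cosmos.accountEndpoint": endpoint,
        "spark.cosmos.accountKey": key,
        "spark.cosmos.database": database,
        "spark.cosmos.container": container,
    }

def read_cosmos(spark, options):
    """Read the configured container via the 'cosmos.oltp' data source."""
    return spark.read.format("cosmos.oltp").options(**options).load()
```

If a plain load like this hangs indefinitely, one common cause is network-level blocking (firewall or VNet rules) between the cluster and the Cosmos endpoint, which is worth ruling out before debugging the connector configuration itself.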
It seems that specifying a JDBC_DRIVER_JAR_URI connection property when defining an AWS Glue connection in Terraform does nothing. When I test the Glue connection, the CloudWatch logs show that Glue is still using the version 9.4 JDBC driver for Postgres:
resource "aws_glue_connection" "glue_connection_2" {
  connection_properties = {
    JDBC_DRIVER_JAR_URI = "s3://scripts/postgresql.jar"
    JDBC_CONNECTION_URL = var.jdbc_connection_url
    JDBC_ENGINE_VERSION = "14"
    PASSWORD            = var.glue_db_password
    USERNAME            = var.glue_db_user_name
  }

  name            = "${local.glue_connection_name}-custom"
  connection_type = "JDBC"

  physical_connection_requirements {
    availability_zone      = var.database_availability_zone
    security_group_id_list = var.security_group_id_list
    subnet_id              = sort(data.aws_subnets.vpc_subnets.ids)[0]
  }
}
Is it possible to specify a custom JAR for AWS Glue connections without creating a custom connector?
Hi, I want to establish a connection to a Microsoft Azure Databricks Delta table from my Spring Boot application. I have the cluster URL, username, and password (token) for the Delta table from which I need to pull data into my application. Kindly shed some light on this.
You can access the cluster and its underlying tables using JDBC (see the documentation). You need to get the corresponding driver, add it to your application, and then use the normal JDBC API, like this:
String jdbcConnectPassthroughCluster = "jdbc:spark://<server-hostname>:443/default;transportMode=http;ssl=1;httpPath=sql/protocolv1/o/0/xxxx-xxxxxx-xxxxxxxx;AuthMech=3;UID=token;PWD=";
String PAT = "<personal token>";
String JDBC_DRIVER = "com.simba.spark.jdbc.Driver";
String DB_URL = jdbcConnectPassthroughCluster + PAT;

// Load the driver and open the connection
Class.forName(JDBC_DRIVER);
System.out.println("Getting connection");
Connection conn = DriverManager.getConnection(DB_URL);
Statement stmt = conn.createStatement();

System.out.println("Going to execute query");
ResultSet rs = stmt.executeQuery("select * from table");
System.out.println("Query is executed");

int i = 0;
while (rs.next()) {
    System.out.println("Row " + i + "=" + rs.getLong(1));
    i++;
}

rs.close();
stmt.close();
conn.close();
As the title describes, I'm trying to change the TTL of a Cosmos DB table.
I couldn't find anything for this in C#, PowerShell, or ARM templates.
Here is what I'm trying to achieve
The only thing I was able to find is the API call that is triggered in the Azure portal, but I'm wondering whether it is safe to use this API directly.
In the Cosmos DB Table API, tables are essentially containers, so you can use the Cosmos DB SQL API SDK to manipulate the table. Here's sample code to do so:
var cosmosClient = new CosmosClient(CosmosConnectionString);
var database = cosmosClient.GetDatabase(Database);
var container = database.GetContainer("test");

var containerResponse = await container.ReadContainerAsync();
var containerProperties = containerResponse.Resource;
Console.WriteLine("Current TTL on the container is: " + containerProperties.DefaultTimeToLive);

containerProperties.DefaultTimeToLive = 120; // seconds
containerResponse = await container.ReplaceContainerAsync(containerProperties);
containerProperties = containerResponse.Resource;
Console.WriteLine("Current TTL on the container is: " + containerProperties.DefaultTimeToLive);
Console.ReadKey();
Setting TTL is now supported through Microsoft.Azure.Cosmos.Table directly with version >= 1.0.8.
// Get the table reference for table operations
CloudTable table = <tableClient>.GetTableReference(<tableName>);
table.CreateIfNotExists(defaultTimeToLive: <ttlInSeconds>);
I am new to Ignite and Kubernetes. I have a .NET Core 3.1 web application hosted on Azure Linux App Service.
I followed the instructions (Apache Ignite official site) and got Apache Ignite running on Azure Kubernetes. I could create a sample table, and read/write actions worked successfully. Here is a screenshot of my successful tests in PowerShell.
Please see the success test
Now I'm trying to connect to Apache Ignite from my .NET Core web app, but I can't make it work.
My code is below. I tried connecting with both IgniteConfiguration and a Spring XML config, and both fail with errors.
private void Initialize()
{
    var cfg = GetIgniteConfiguration();
    _ignite = Ignition.Start(cfg);
    InitializeCaches();
}

public IgniteConfiguration GetIgniteConfiguration()
{
    var appSettingsJson = AppSettingsJson.GetAppSettings();
    var igniteNodes = appSettingsJson["AppSettings:IgniteNodes"];
    var nodeList = igniteNodes.Split(",");

    var config = new IgniteConfiguration
    {
        Logger = new IgniteLogger(),
        DiscoverySpi = new TcpDiscoverySpi
        {
            IpFinder = new TcpDiscoveryStaticIpFinder
            {
                Endpoints = nodeList
            },
            SocketTimeout = TimeSpan.FromSeconds(5)
        },
        IncludedEventTypes = EventType.CacheAll,
        CacheConfiguration = GetCacheConfiguration()
    };
    return config;
}
The first error I get:
Apache.Ignite.Core.Common.IgniteException HResult=0x80131500
Message=Java class is not found (did you set IGNITE_HOME environment variable?): org/apache/ignite/internal/processors/platform/PlatformIgnition
Source=Apache.Ignite.Core
Also, I have no idea what I should set IGNITE_HOME to, or which username and secret to use for authentication.
Solution :
I finally connect the Ignite on Azure Kubernetes.
Here is my connection method.
public void TestConnection()
{
    var cfg = new IgniteClientConfiguration
    {
        Host = "MyHost",
        Port = 10800,
        UserName = "user",
        Password = "password"
    };

    using (IIgniteClient client = Ignition.StartClient(cfg))
    {
        var employeeCache1 = client.GetOrCreateCache<int, Employee>(
            new CacheClientConfiguration(EmployeeCacheName, typeof(Employee)));
        employeeCache1.Put(1, new Employee("Bilge Wilson", 12500, 1));
    }
}
To find the host IP, user name, and client secret, please check the images below.
Client Id and Secret
IP Addresses
Note: I didn't need to set any IGNITE_HOME or JAVA_HOME variables.
The simplest way is to download the Apache Ignite binary distribution (of the same version as the one you use), unzip it to a directory, and point the IGNITE_HOME environment variable or the IgniteConfiguration.IgniteHome configuration property to the absolute path of the unzipped apache-ignite-n.n.n-bin/ directory.
We support doing that automatically for Windows-hosted apps but not for Linux-based deployments.
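A minimal sketch of the second option (the path below is a placeholder; use the actual absolute path where you unzipped the distribution):

```csharp
var cfg = new IgniteConfiguration
{
    // Placeholder path: point this at the unzipped
    // apache-ignite-n.n.n-bin/ directory on your host.
    IgniteHome = "/opt/apache-ignite-n.n.n-bin"
};
var ignite = Ignition.Start(cfg);
```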
I am getting the below error when I try to connect to the Azure Cosmos DB Mongo API from my ASP.NET Core app:
Error: TimeoutException: A timeout occurred after 30000ms selecting a server using CompositeServerSelector
Code used:
readonly string _cn = @"mongodb://testdb12321:f2iORCXklWzAHJLKbZ6lwesWU32SygfUN2Kud1pu1OtMBoe9lBPqHMWdbc0fF7zgl7FryTSCGv7EGCa6X8gZKw==@testdb12321.documents.azure.com:10255/?ssl=true&replicaSet=globaldb";

var client = new MongoClient(_cn);
var db = client.GetDatabase("researchdb");
var cns = db.ListCollectionNames().First();