How to react to a message from Microsoft regarding updating APIs - azure

I got a message from Microsoft in the last few days:
Azure SQL Database 2014-04-01 APIs will be retired on 31 October 2025.
You're receiving this email because you use Azure SQL Database APIs.
To improve performance and security, we're updating Azure SQL Database
APIs. As part of this, all version 2014-04-01 APIs will be retired on
31 October 2025. You'll need to update your resources, including
templates, tools, scripts, and programs, to use a newer API version by
then. Any API calls still using the older versions after that date
will stop working until you've updated them.
I access my Azure SQL databases in the following ways.
From the web app, via a Java connection using the JDBC driver (mssql-jdbc)
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public final class DBConnection {
    private static DataSource ds = null;
    private static DBConnection instance = null;

    private DBConnection() throws NamingException {
        InitialContext ic = new InitialContext();
        ds = (DataSource) ic.lookup(Monitor.getDsName());
    }
}
<dependency>
    <groupId>com.microsoft.sqlserver</groupId>
    <artifactId>mssql-jdbc</artifactId>
    <version>10.2.1.jre11</version>
</dependency>
Via sqlcmd
Via node.js
const sql = require("mssql");

const configDB = {
    user: "",
    password: "",
    server: "myserver.database.windows.net",
    database: "mydb",
    connectionTimeout: 3000,
    parseJSON: true,
    options: {
        encrypt: true,
        enableArithAbort: true
    },
    pool: {
        min: 0,
        idleTimeoutMillis: 3000
    }
};
const poolDB = new sql.ConnectionPool(configDB);
const aLine = "EXEC ...";
await poolDB.connect();
let resultDB = await poolDB.request().query(aLine);
Via Azure Logic Apps (using an API Connection)
Via Azure Function Apps (connecting similar to the WebApp Above)
Via SSMS
Which of these are possibly triggering the message about Azure SQL Database APIs?
Also, I started using Azure after 2020, so it does not make sense to me that I would be using APIs from 2014.

The API mentioned in the email (I received the same one) is for managing the SQL Database servers and the databases themselves (in other words, it is the control-plane API), not the data inside them. You would use the SQL REST API to perform management operations on SQL Database resources.
It has no impact on how you connect to the databases and work with the data inside them, which is what your code is currently doing.
So unless you are using version 2014-04-01 of the REST API to manage the SQL Database servers and SQL databases (and not the data inside them), you can safely ignore the email.
You can learn more about the SQL Database REST APIs here: https://learn.microsoft.com/en-us/rest/api/sql/.
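For illustration, this is the kind of place the affected version shows up if you deploy databases with ARM templates (the resource names here are placeholders):
{
  "type": "Microsoft.Sql/servers/databases",
  "apiVersion": "2014-04-01",
  "name": "myserver/mydb"
}
Bumping that apiVersion to a currently supported one and redeploying is the sort of update the email is asking for.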

I started using Azure after 2020, so it does not make sense to me that I would be using APIs from 2014
2014-04-01 refers to a specific version of the Azure SQL Database APIs. Azure API HTTP requests specify their version explicitly, usually via a header or query parameter. It looks like Azure SQL Database specifies the API version in an api-version query parameter. API versions are not updated very often, so it's normal to use a version that's years old. E.g., the next stable version after 2014-04-01 is 2021-11-01.
Whatever libraries you're using are probably tied to a specific version, and you can probably just upgrade those libraries to use a later API version. If you're not sure which library is using the old version, you can try using an HTTP proxy to sniff the traffic and inspect the api-version query parameter.
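For example, a control-plane request against a database looks roughly like this, with the affected version in the query string (the subscription, resource group, and server names are placeholders):
GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.Sql/servers/{serverName}/databases/{databaseName}?api-version=2014-04-01
Whatever tool, template, or library issues calls like that just needs to send a supported value such as api-version=2021-11-01 instead.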

Related

No default service level objective found of edition "GeneralPurpose"

I am getting the error "No default service level objective found of edition 'GeneralPurpose'" in SSMS when creating a database in Azure SQL.
Please download the latest SQL Server Management Studio version from here. Version 18.0 has many fixes related to Azure Managed Instances.
It is a limitation of the free subscription you are using at this time: "Free Trial subscriptions can provision Basic, Standard S0 through S3 databases, up to 100 eDTU Basic or Standard elastic pools and DW100 through DW400 data warehouses."
You can also try to create the database using T-SQL as shown below.
CREATE DATABASE Testdb
( EDITION = 'Standard', SERVICE_OBJECTIVE = 'S3' );
GO
In my case it was because I had the wrong connection string in my app settings (.NET).
To find your connection string, go to your database in the Azure portal and look for "Connection strings" on the Overview blade.

How can I connect my azure function with my azure sql

I developed a cron-triggered Azure Function that needs to search for some data in my database.
Locally I can connect to SQL Server, so I changed the connection string in local.settings.json to connect to Azure SQL and published the function, but the function can't connect to the database.
Do I need to do something more than configure local.settings.json?
The local.settings.json file is only used for local testing; it is not even deployed to Azure.
You need to create a connection string in your application settings.
In the Azure Functions portal, click Platform features and then Configuration.
Set the connection string
A function app hosts the execution of your functions in Azure. As a best security practice, store connection strings and other secrets in your function app settings. Using application settings prevents accidental disclosure of the connection string with your code. You can access app settings for your function app right from Visual Studio.
You must have previously published your app to Azure. If you haven't already done so, Publish your function app to Azure.
In Solution Explorer, right-click the function app project and choose Publish > Manage application settings.... Select Add setting, in New app setting name, type sqldb_connection, and select OK.
(Screenshot: application settings for the function app.)
In the new sqldb_connection setting, paste the connection string you copied in the previous section into the Local field and replace {your_username} and {your_password} placeholders with real values. Select Insert value from local to copy the updated value into the Remote field, and then select OK.
(Screenshot: add SQL connection string setting.)
The connection strings are stored encrypted in Azure (Remote). To prevent leaking secrets, the local.settings.json project file (Local) should be excluded from source control, such as by using a .gitignore file.
https://learn.microsoft.com/en-us/azure/azure-functions/functions-scenario-database-table-cleanup
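As a minimal sketch of how that setting is then consumed (following the pattern in the linked tutorial; the class name and query placement are illustrative):

using System.Configuration;
using System.Data.SqlClient;
using System.Threading.Tasks;

public static class DatabaseCleanupFunction
{
    public static async Task RunAsync()
    {
        // Resolves the "sqldb_connection" string from the function app's application
        // settings in Azure, or from local.settings.json when running locally.
        var str = ConfigurationManager.ConnectionStrings["sqldb_connection"].ConnectionString;

        using (var conn = new SqlConnection(str))
        {
            await conn.OpenAsync();
            // Run your query or cleanup command here.
        }
    }
}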
If you are using Entity Framework Core to make the connection, another way to connect to SQL is via dependency injection from the .NET Core library.
You can keep the connection string in Azure Key Vault or in a config file and read it from an Azure Functions Startup class, which needs the setup below in your function app.
using System.Diagnostics.Contracts;
using System.IO;
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

[assembly: FunctionsStartup(typeof(TEST.Startup))]

namespace TEST
{
    internal class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            Contract.Requires(builder != null);
            builder.Services.AddHttpClient();

            // Build configuration from local.settings.json (local runs) and Key Vault.
            var configBuilder = new ConfigurationBuilder()
                .SetBasePath(Directory.GetCurrentDirectory())
                .AddJsonFile("local.settings.json", optional: true, reloadOnChange: true)
                .AddAzureKeyVault($"https://XYZkv.vault.azure.net/");
            var configuration = configBuilder.Build();

            builder.Services.AddDbContext<yourDBContext>(
                options => options.UseSqlServer(configuration["connectionString"]));
        }
    }
}
After that, wherever you inject this DbContext, you can do all CRUD operations with the context object by following Microsoft's Entity Framework Core documentation.
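As a rough sketch of that injection (yourDBContext is the placeholder from the setup above; the DbSet, timer schedule, and function name are made up for illustration):

using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.EntityFrameworkCore;

public class CleanupFunction
{
    private readonly yourDBContext context;

    // The DbContext registered in Startup is injected by the Functions host.
    public CleanupFunction(yourDBContext context)
    {
        this.context = context;
    }

    [FunctionName("CleanupFunction")]
    public async Task Run([TimerTrigger("0 0 * * * *")] TimerInfo timer)
    {
        // Example CRUD usage; replace SomeEntities with your own DbSet and logic.
        var expired = await context.SomeEntities.Where(e => e.IsExpired).ToListAsync();
        context.SomeEntities.RemoveRange(expired);
        await context.SaveChangesAsync();
    }
}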
Having just dealt with this beast (using a custom handler on Linux), I believe the simple way is to upgrade your app to a Premium plan, which gives you access to the "Networking" page under "App Service plans". That should let you put both the SQL server and the app in the same virtual network, which probably makes things easier (but what do I know?).
Instead, if you don't have the extra cash lying around, you can try what I did: set up a private endpoint and use the proxy connection policy for your database:
Create a virtual network
I used Address space: 10.1.0.0/16 (default I think)
Add subnet 10.1.0.0/24 with any name (adding a subnet is required)
Go to "Private link center" and create a private endpoint.
any name, resource-group you fancy
use resource type "Microsoft.Sql/Server" and you should be able to select your sql-server (which I assume you have created already) and also set target sub-resource to "sqlServer" (the only option)
In the next step your virtual network and submask should be auto-selected
set Private DNS integration to yes (or suffer later).
Update your firewall by going to Sql Databases, select your database and click "Set Server Firewall" from the overview tab.
Set Connection Policy to proxy. (You either do this, or upgrade to premium!)
Add existing virtual network (rule with any name)
Whitelist IPs
There probably is some other way, but the Azure CLI makes it easy to get all possible IPs your app might use: az functionapp show --resource-group <group_name> --name <app_name> --query possibleOutboundIpAddresses
https://learn.microsoft.com/en-us/azure/app-service/overview-inbound-outbound-ips
whitelist them all! (copy-paste exercise; see the example command after these steps)
Find your FQDN from Private link center > Private Endpoints > DNS Configuration. It's probably something like yourdb.privatelink.database.windows.net
Update your app to use this url. You just update your sql server connection string and replace the domain, for example as ADO string: Server=tcp:yourdb.privatelink.database.windows.net,1433;Initial Catalog=somedbname;Persist Security Info=False;User ID=someuser;Password=abc123;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;
Also note that at some point during all of this I switched to TrustServerCertificate=True, and now I can't be bothered to figure out whether it makes a difference. I'll leave that as an exercise for the reader.
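For the whitelisting step, one firewall rule looks roughly like this with the Azure CLI (the group, server, rule name, and IP values are placeholders); repeat it for each outbound IP:
az sql server firewall-rule create --resource-group <group_name> --server <server_name> --name allow-functionapp-ip-1 --start-ip-address 20.42.0.10 --end-ip-address 20.42.0.10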
So what have we done here?
We have forced your function app to go outside the "Azure sphere" by connecting through the private endpoint. I think that if you bounce between Azure services directly, you'll need some sort of authentication (like logging in to your DB using AD), and in my case, using a custom handler and a Linux base for my app, I think that means you need some trust negotiation (Kerberos perhaps?). I couldn't figure that out, so I came up with this instead.

How to do Azure Blob storage and Azure SQL Db atomic transaction

We have a Blob Storage container in Azure for uploading application-specific documents, and we have an Azure SQL Db where metadata for particular files is saved during the upload process. This upload process needs to be consistent, so we should not have files in storage with no metadata record in SQL Db, and vice versa.
We are uploading a list of files which we get from the front end as multipart HttpContent. From the Web API controller we call the upload service, passing the HttpContent, the file names, and a folder path where the files will be uploaded. The Web API controller, service method, and repository are all async.
var files = await this.uploadService.UploadFiles(httpContent, fileNames, pathName);
Here is the service method:
public async Task<List<FileUploadModel>> UploadFiles(HttpContent httpContent, List<string> fileNames, string folderPath)
{
    var blobUploadProvider = this.Container.Resolve<UploadProvider>(
        new DependencyOverride<UploadProviderModel>(new UploadProviderModel(fileNames, folderPath)));

    var list = await httpContent.ReadAsMultipartAsync(blobUploadProvider).ContinueWith(
        task =>
        {
            if (task.IsFaulted || task.IsCanceled)
            {
                throw task.Exception;
            }

            var provider = task.Result;
            return provider.Uploads.ToList();
        });

    return list;
}
The service method uses a customized upload provider derived from System.Net.Http.MultipartFileStreamProvider, which we resolve using a dependency resolver.
After this, we create the metadata models for each of those files and then save them in the Db using Entity Framework. The full process works fine in the ideal situation.
The problem is that if the upload succeeds but the Db operation fails, we end up with files in Blob Storage that have no corresponding entry in SQL Db, and thus the data is inconsistent.
Following are the different technologies used in the system:
Azure Api App
Azure Blob Storage
Web Api
.Net 4.6.1
Entity framework 6.1.3
Azure MSSql Database (we are not using any VM)
I have tried using TransactionScope for consistency, but it does not seem to work across Blob and Db (it works for the Db only).
How do we solve this issue?
Is there any built in or supported feature for this?
What are the best practices in this case?
Is there any built in or supported feature for this?
As of today, no. Blob storage and SQL Database are two separate services, hence it is not possible to get the "atomic transaction" behaviour you're expecting.
How do we solve this issue?
I can think of two ways to solve this issue (I am sure there are others as well):
Implement your own transaction functionality: basically, check for a database transaction failure and, if that happens, delete the blob manually (a rough sketch follows after this list).
Use some background process: Here you would continue to save the data in blob storage and then periodically find out orphaned blobs through some background process and delete those blobs.
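Here is a minimal sketch of the first option (a compensating delete), assuming the classic Microsoft.WindowsAzure.Storage SDK that matches your .NET 4.6.1 stack; the handler class and the saveMetadataAsync callback are illustrative names, not your actual code:

using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage.Blob;

public class FileUploadHandler
{
    private readonly CloudBlobContainer container;

    public FileUploadHandler(CloudBlobContainer container)
    {
        this.container = container;
    }

    public async Task UploadWithMetadataAsync(string blobName, Stream content, Func<Task> saveMetadataAsync)
    {
        // 1. Upload the blob first.
        var blob = container.GetBlockBlobReference(blobName);
        await blob.UploadFromStreamAsync(content);

        try
        {
            // 2. Save the metadata, e.g. your EF SaveChangesAsync wrapped by the caller.
            await saveMetadataAsync();
        }
        catch
        {
            // 3. Compensate: delete the blob so storage and SQL stay consistent, then rethrow.
            await blob.DeleteIfExistsAsync();
            throw;
        }
    }
}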

Connecting to Rackspace Cloud using ASP.net

I am trying to connect to Rackspace Cloud using ASP.NET.
I've downloaded the Rackspace.CloudFiles assembly from NuGet, and I am trying to connect to the server:
UserCredentials userCred = new UserCredentials("username", "api_key");
Connection connection = new Connection(userCred);
var containers = connection.GetContainers();
This works, but it always connects to only one storage location. In the Rackspace control panel, I have more locations where I have containers.
Is there a way to specify the location when I connect to Rackspace?
You may want to get the entire OpenStack .NET SDK via NuGet; it allows you to connect to "the cloud" and then select containers based on region (or all regions, of course).
Such as this:
// Get a list of containers
CloudFilesProvider cfp = new CloudFilesProvider(_cloudIdentity);
IEnumerable<ContainerCDN> listOfContainers = cfp.ListCDNContainers(region: "DFW");
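For completeness, the _cloudIdentity above would be built from your Rackspace credentials, roughly like this (the username and API key are placeholders):

using net.openstack.Core.Domain;

// Placeholder credentials; use your Rackspace username and API key.
var _cloudIdentity = new CloudIdentity { Username = "username", APIKey = "api_key" };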
If you do decide to use the OpenStack .NET SDK, please don't hesitate to ask questions; I'm here to help.
-- Don Schenck, OpenStack .NET Developer Advocate, Rackspace

Use Hadoop SDK with local HDInsight Server

Is it possible to use the Hadoop SDK, especially LINQ to Hive, with a local installation of HDInsight Server? Note that I am not referring to the HDInsight Service hosted on Azure.
I tried to use LINQ to Hive from the Microsoft.Hadoop.Hive NuGet package, but was unable to get it working, because LINQ to Hive seems to require that results are stored in Azure Blob Storage rather than on my hosted instance.
var hiveConnection = new HiveConnection(
    new Uri("http://hadoop-poc.cloudapp.net:50111"),
    "hadoop", "hgfhdfgh", "hadoop", "hadooppartner", "StorageKey");
var metaData = hiveConnection.GetMetaData().Result;
var result = hiveConnection.ExecuteQuery(@"select * from customer limit 1");
Even with a storage key, I cannot get this to work, because the MapReduce job fails with:
AzureException: org.apache.hadoop.fs.azure.AzureException: Container a7e3aa39-75ba-4cc2-a8aa-301257018146 in account hadooppartner not found, and we can't create it using anoynomous credentials.
I also added the credentials once more to the core-site.xml file, as follows:
<property>
    <name>fs.azure.account.key.hadooppartner.blob.core.windows.net</name>
    <value>Credentials</value>
</property>
However, I would rather get rid of storing results in Azure Storage, if possible.
Thank you for your help!
You can use the HiveConnection constructor without the storage account options to connect to a local install. This works against a default install of the HDInsight developer preview on a local box:
var db = new HiveConnection(
webHCatUri: new Uri("http://localhost:50111"),
userName: (string) "hadoop", password: (string) null);
var result = db.ExecuteHiveQuery("select * from w3c");
Of course you can then use that connection for any LINQ queries as well.
It turned out that in the HiveConnection constructor you have to specify the full storage account name, i.e. hadooppartner.blob.core.windows.net.
I am still interested in using the .NET LINQ API without the need for a storage account. Furthermore, is it possible to use the .NET API with other Hadoop distributions?
