Can you clone a pipeline by using already existing JSON? - azure

I'm new to Data Factories but in reading over the basics it looks like the solution to my problem is very simple -- too good to be true.
The existing pipeline successfully transforms data in a test environment into tables in SQL Azure. There are 4 blob objects whose data will end up in one table in SQL Azure.
The database is for a DNN site so it will be copied now to Dev, Test, possibly also UAT but ultimately to production.
It looks as simple as adding new pipelines to the existing Data Factory and just altering the database name in the connection strings. In production I'll set up a new user account so that it's unique and no one can easily hack it. That's simple enough.
The object names in the databases remain the same. There are just 3 sites (Dev, Test, Production).
So it should just be that easy, right? Create a new pipeline, copy and paste the JSON, alter the Database connection strings in the pipeline JSON and call it a day, right?
Thanks!

Instead of cloning the pipeline JSON and altering the database connection strings, you should try to automate things, which will help you a lot.
Manual deployment is always error-prone.
You can follow the steps below:
You can import your ADF into Visual Studio using the VS plugin.
You can then use configuration files in Visual Studio to configure properties for linked services/tables/pipelines differently for each environment (Dev, Test, UAT/Production).

I would recommend storing the database credentials in an Azure Key Vault. You can reference them as a parameter:
{
    "parameters": {
        "azureSqlReportingDbPassword": {
            "reference": {
                "keyVault": {
                    "id": "/subscriptions/<subId>/resourceGroups/<resourcegroupId>/providers/Microsoft.KeyVault/vaults/<vault-name>"
                },
                "secretName": "<secret-name>"
            }
        }
    }
}
See also the documentation and the blog post for more details.

Related

Contentful Content Migration API try/catch on field creation?

We are trying to setup a workflow for delivering content model changes to our other environments (stage & prod).
Right now, our approach is this:
1. Create a new Contentful field as a migration script using the Contentful CLI.
2. Run the script in local dev to make sure the result is as desired, using contentful space migration migrations/2023-01-12-add-field.ts.
3. Add the script to Git in the folder migrations/[date]-[description].js.
4. When releasing to prod, run all scripts in the migrations folder, in order, as part of the build process.
5. When the folder contains "too many" scripts, and we are certain all changes are applied to all envs, manually remove the scripts from Git and start over with an empty folder.
Where it fails
But between points 4 and 5 there will be cases where a script has already been run in an earlier release, and that throws an error.
I would like the scripts to continue more gracefully without throwing an error, but I can't find support for it in the space migration docs. I have tried wrapping the code in try/catch without any luck.
Contentful recommends using the Content Migration API in favour of the Content Management API since it is faster. I guess we could use the Content Management API, but at the same time we want to follow "best practice".
Are we missing something here?
I would like to do something like:
module.exports = function (migration) {
    // Create a new category field in the blog post content type.
    const blogPost = migration.editContentType('blogPost')
    if (blogPost.fieldExists('testField')) {
        console.log('Field already exists')
    } else {
        blogPost.createField('testField').name('Test field').type('Symbol')
    }
}

Use different connectionString in production environment

I am new to web development, so this is the first time I've tried to work with different environments.
In my web app, which is deployed on Azure, I want to use a connection string for my local database when I am working locally in the development environment; when I deploy it, it should use a different database with another connection string.
I have seen that there are two files in my ASP.NET Core project: "appsettings.json" and "appsettings.Development.json". If I understand correctly, appsettings.Development.json should override the settings in appsettings.json when I work in the Development environment. But it doesn't. When I debug the app, making really sure that the environment is set to Development, it still uses appsettings.json.
You are on the right track with multiple configuration files. appsettings.json is the default configuration file in which you declare settings shared by both Development and Production. The override pattern is appsettings.{Environment}.json: any matching keys replace the values from appsettings.json. Remember that the {Environment} segment must match ASPNETCORE_ENVIRONMENT for the file to be used at all; for appsettings.Development.json, ASPNETCORE_ENVIRONMENT must be set to Development (on case-sensitive file systems the casing matters too).
For example, we have a default appsettings.json:
{
    "ConfigurationString": "mongodb://localhost:27017",
    "MongoOptions": {
        "AllowWireUp": true,
        "AutoConnect": true
    }
}
Then, when you want to work in Development, you create appsettings.Development.json with this content:
{
    "ConfigurationString": "mongodb://192.168.1.1:27017",
    "MongoOptions": {
        "AllowWireUp": false
    }
}
Later, when you run in the Development environment, you get the combined configuration:
{
    "ConfigurationString": "mongodb://192.168.1.1:27017",
    "MongoOptions": {
        "AllowWireUp": false,
        "AutoConnect": true
    }
}
Important: MongoOptions.AutoConnect is still true in Development, because .NET Core flattens each file into colon-delimited keys (e.g. MongoOptions:AllowWireUp) and overrides them key by key. appsettings.Development.json only replaces the keys it actually defines; it does not replace the entire MongoOptions section from appsettings.json.
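As a rough, self-contained sketch of that key-by-key merge (not part of the original answer; it assumes the two files above sit next to the executable and that the Microsoft.Extensions.Configuration.Json package is referenced):
using System;
using Microsoft.Extensions.Configuration;

// Build configuration the same way the default host does: base file first, then the
// environment-specific file; later providers override earlier ones per individual key.
var config = new ConfigurationBuilder()
    .SetBasePath(AppContext.BaseDirectory)
    .AddJsonFile("appsettings.json", optional: false)
    .AddJsonFile("appsettings.Development.json", optional: true)
    .Build();

Console.WriteLine(config["ConfigurationString"]);      // overridden by the Development file
Console.WriteLine(config["MongoOptions:AllowWireUp"]); // overridden by the Development file
Console.WriteLine(config["MongoOptions:AutoConnect"]); // still the value from appsettings.json (true)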
There is a way to do that. I guess you are using Azure App Service? If so, follow these steps:
Create multiple appsettings files inside your project (dev / staging / uat / prod).
The file names should be appsettings.Development.json, appsettings.uat.json and appsettings.Production.json.
Each file should contain its own configuration.
Then go to your App Service in Azure > Configuration > Application settings and add the environment segment of the appsettings.{Environment}.json file you want as the value of ASPNETCORE_ENVIRONMENT.
Restart the App Service; it should work now.
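To pick the per-environment value up in code, something along these lines should work (a minimal sketch; the "DefaultConnection" name is just an example, not taken from the question):
var builder = WebApplication.CreateBuilder(args);

// Reads appsettings.json, then appsettings.{ASPNETCORE_ENVIRONMENT}.json, then environment variables,
// so the connection string resolves differently per environment without any code changes.
var connectionString = builder.Configuration.GetConnectionString("DefaultConnection");

var app = builder.Build();
app.MapGet("/", () => $"Running as: {app.Environment.EnvironmentName}");
app.Run();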

Substitute Service Fabric application parameters during deployment

I'm setting up my production environment and would like to secure my environment-related variables.
For the moment, every environment has its own application parameters file, which works well, but I don't want every dev in my team knowing the production connection strings and other sensitive stuff that could end up in there.
So I'm looking for every possibility available.
I've seen that in Azure DevOps, which I'm using at the moment for my CI/CD, there is some possible variable substitution (xml transformation). Is it usable in a SF project?
I've seen in another project something similar through Octopus.
Are there any other tools that would help me manage my variables by environment safely (and easily)?
Can I do that with my KeyVault eventually?
Any recommendations?
Thanks
EDIT: here is an example of how I'd like to manage those values (this is a screenshot from Octopus): something similar to this, which separates and injects the values, is what I'm looking for.
You can do XML transformation to the ApplicationParameter file to update the values in there before you deploy it.
The other option is to use PowerShell to update the application and pass the parameters as arguments to the script.
The Start-ServiceFabricApplicationUpgrade command accepts a hashtable of parameters. Technically, the built-in task in VSTS\DevOps transforms the application parameters into a hashtable, so the script would be something like this:
#Get the existing parameters
$app = Get-ServiceFabricApplication -ApplicationName "fabric:/AzureFilesVolumePlugin"
#Create a temp hashtable and populate with existing values
$parameters = @{ }
$app.ApplicationParameters | ForEach-Object { $parameters.Add($_.Name, $_.Value) }
#Replace the desired parameters
$parameters["test"] = "123test" #Here you would replace with your variable, like $env:username
#Upgrade the application
Start-ServiceFabricApplicationUpgrade -ApplicationName "fabric:/AzureFilesVolumePlugin" -ApplicationParameter $parameters -ApplicationTypeVersion "6.4.617.9590" -UnmonitoredAuto
Keep in mind that the existing VSTS task also performs other operations, like copying the package to SF and registering the application version in the image store; you will need to replicate those. You can copy the full script from the Deploy-FabricApplication.ps1 file in the Service Fabric project and apply your changes there. The other approach is to get the source of the VSTS task and add your changes.
If you are planning to use Key Vault, I would recommend that the application access the values directly in Key Vault instead of passing them to SF; this way, you can change the values in Key Vault without redeploying the application. In the deployment, you would only pass the Key Vault credentials\configuration.
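As a rough sketch of that direct-access approach, using the Azure.Identity and Azure.Security.KeyVault.Secrets packages (the vault URI and secret name below are placeholders, not values from this question):
using System;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

// DefaultAzureCredential resolves to a managed identity when running in Azure and to the
// signed-in developer account locally, so no secret needs to ship with the application package.
var client = new SecretClient(
    new Uri("https://<vault-name>.vault.azure.net/"),
    new DefaultAzureCredential());

KeyVaultSecret secret = client.GetSecret("<secret-name>");
string connectionString = secret.Value;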

DocumentDB Data migration Tool, can't migrate from db to db

I'm using the DocumentDB Data Migration Tool to migrate a DocumentDB database to a newly created DocumentDB database. The connection string verification says it is OK.
It doesn't work: no data is transferred (0 records), but no failure is written in the log file either (Failed = 0).
Here is what I've done. I've tried many things, such as:
migrate / transfer a collection to a JSON file
migrate to a partitioned / non-partitioned DocumentDB database
for the target indexing policy, I've used the source indexing policy (the JSON taken from the Azure DocumentDB collection settings)
...
Actually nothing works, but I have no error logs; maybe it's a problem with the DocumentDB version?
Thanks in advance for your help.
After debugging the solution from the tool's repo, I figured out that the tool fails silently if you mistype the database's name, as I did.
DocumentDBClient just returns an empty async enumerator:
var database = await TryGetDatabase(databaseName, cancellation);
if (database == null)
    return EmptyAsyncEnumerator<IReadOnlyDictionary<string, object>>.Instance;
I can import from an Azure Cosmos DB DocumentDB API collection using the DocumentDB Data Migration Tool.
Besides, based on my test, if the name of the collection that we specify for the source DocumentDB does not exist, no data is transferred and no error log is written.
Please make sure the source collection that you specified exists. If possible, you can also try to create a new collection, import data from this new collection, and check whether data can be transferred.
I've faced the same problem, and after some investigation found that the internal document structure had changed. Therefore, after migrating with the tool, the documents are present but can't be found with Data Explorer (although with Query Explorer, using select *, they are visible).
I've migrated the collection through the Mongo API using MongoChef.
@fguigui: To help troubleshoot this, could you please re-run the same data migration operation using the command-line option? Just launch dt.exe from the Data Migration Tool folder to see the required syntax. Then, after you launch it with the required parameters, please paste the output here and I'll take a look at what's broken.

What are some recommended patterns for managing production deployments when using OrmLite?

We're currently using ServiceStack with Entity Framework and are investigating moving to ServiceStack.OrmLite.
One major concern we have with it is how best to manage production deployments.
We use AppVeyor/Octopus for continuous deployment. With pure code-first EF we can use Migrations. With a DB-first approach, we've used DB Projects, SSDT & MsDeploy, or tools like DbUp. But with OrmLite, we're not sure what the smoothest setup is to handle deployments to our test, staging & production DBs respectively. Should we go code-first and roll our own migration logic? Should we go DB-first and use T4 templates to generate POCOs?
I would be very interested to hear some success stories from people who've used OrmLite effectively in a continuous deployment setup.
Whilst a crude approach, the way I've been handling schema migrations recently is to keep all migrations in a test fixture that I manually run before deployment, which just uses OrmLite to execute custom SQL DDL statements to modify table schemas or populate table data.
To switch between environments I just uncomment the environment I want to run it against, e.g. here's an example of what my DatabaseMigrations test class looks like:
[TestFixture, Explicit]
public class DatabaseMigrations
{
    IDbConnectionFactory LocalDbFactory = ...;
    IDbConnectionFactory TestDbFactory = ...;
    IDbConnectionFactory LiveDbFactory = ...;

    private IDbConnectionFactory DbFactory;
    private IDbConnection Db;

    public DatabaseMigrations()
    {
        //DbFactory = LocalDbFactory;
        //DbFactory = TestDbFactory;
        DbFactory = LiveDbFactory;
        Db = DbFactory.Open();
    }

    //...

    [Test]
    public void Add_ExternalRef_to_Subscription()
    {
        Db.ExecuteNonQuery("ALTER TABLE subscription ADD COLUMN external_ref text");

        var subIds = Db.Column<int>(Db.From<Subscription>().Where(
            x => x.ExternalRef == null).Select(x => x.Id));

        foreach (var subId in subIds)
        {
            Db.UpdateOnly(new Subscription {
                    ExternalRef = Guid.NewGuid().ToString("N")
                },
                where: x => x.Id == subId,
                onlyFields: x => new { x.ExternalRef });
        }
    }
}
It doesn't support rollbacks, but it's quick and easy to create, and it keeps all schema changes in sequence in source control and runnable in code.
Bespoke solution with Custom Migrations Table
Other custom migration solutions I've previously used successfully involve a bespoke solution that maintains a custom table in the RDBMS, e.g. Migrations, listing all the migrations that have been run on that database. The CI task would compare the rows in the database with the files in a local folder, e.g.:
/Migrations
01_Add_ExternalRef_to_Subscription.sql
and automatically run any missing custom SQL migration scripts. This approach did a great job of keeping the RDBMS table in sync with the migration scripts, since the state of which migrations have been run on each DB is kept in the DB itself.
The primary drawback of this solution was that the migrations were kept in custom .sql files, which lack the flexibility of a proper programming language.
Optimal solution
I think the ideal solution would be a combination of these approaches, i.e. a custom DB Migrations table, but with C# DB migration NUnit tests run against it instead of .sql scripts. The DB connection settings would also need to be moved out into external configuration, and it could potentially include a solution for rolling back schema migrations (e.g. tests ending with '_Rollback'), although on the very few occasions I've needed to roll back, it was less effort to do it manually than to maintain rollback scripts for every schema change.
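A rough sketch of that hybrid approach, assuming an NUnit fixture and a hypothetical AppliedMigration tracking table (OrmLite has no built-in migrations, so the POCO and the Apply helper below are illustrative names only, and the MIGRATIONS_DB environment variable stands in for whatever external configuration you use):
using System;
using System.Data;
using NUnit.Framework;
using ServiceStack.Data;
using ServiceStack.OrmLite;

public class AppliedMigration
{
    public int Id { get; set; }
    public string Name { get; set; }        // e.g. "01_Add_ExternalRef_to_Subscription"
    public DateTime AppliedAt { get; set; }
}

[TestFixture, Explicit]
public class DatabaseMigrationRunner
{
    // Connection string comes from external configuration rather than being hard-coded per environment.
    IDbConnectionFactory DbFactory = new OrmLiteConnectionFactory(
        Environment.GetEnvironmentVariable("MIGRATIONS_DB"), SqlServerDialect.Provider);

    // Runs the migration only if it hasn't been recorded in the tracking table yet.
    void Apply(IDbConnection db, string name, Action<IDbConnection> migration)
    {
        db.CreateTableIfNotExists<AppliedMigration>();
        if (db.Exists<AppliedMigration>(x => x.Name == name))
            return; // already applied to this database

        migration(db);
        db.Insert(new AppliedMigration { Name = name, AppliedAt = DateTime.UtcNow });
    }

    [Test]
    public void Run_pending_migrations()
    {
        using (var db = DbFactory.Open())
        {
            Apply(db, "01_Add_ExternalRef_to_Subscription", d =>
                d.ExecuteSql("ALTER TABLE subscription ADD COLUMN external_ref text"));
        }
    }
}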
