I want to automate deployments to my relational database. I usually use SQL Developer to compile, execute, and save my PL/SQL scripts to the database. Now I wish to build and deploy the scripts directly through GitLab, using a CI/CD pipeline. I am supposed to use Oracle Cloud for this purpose. I don't know how to achieve this; any help would be greatly appreciated.
Requirements: build and deploy PL/SQL scripts to the database using GitLab, where the username and password for the database connection are picked up from a vault on the cloud, not hardcoded. Oracle Cloud should be used for this purpose.
If anyone knows how to achieve this, please guide.
There are tools like Liquibase and Flyway. Those tools do not work miracles.
Liquibase has a list of changes (XML or YAML) to be applied to a database schema (optionally with a rollback step).
Then it has a journal table in each database environment, so it can track which changes were applied and which were not.
It cannot do the powerful schema comparisons that SQL Developer or Toad can.
It also cannot prevent the situation where a DML change applied to the prod database blows up, because the change was only tested successfully on a data set 1000x smaller.
Still, it is better than nothing, and it can be integrated with Ansible/GitLab and other CI/CD tools.
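For illustration, a minimal changelog in Liquibase's YAML format might look like this (a sketch only; the table and column names are invented):

```yaml
# changelog.yaml -- a minimal Liquibase changelog sketch (names invented)
databaseChangeLog:
  - changeSet:
      id: 1
      author: demo
      changes:
        - createTable:
            tableName: app_user
            columns:
              - column:
                  name: id
                  type: number
                  constraints:
                    primaryKey: true
              - column:
                  name: username
                  type: varchar2(100)
      # the rollback step mentioned above
      rollback:
        - dropTable:
            tableName: app_user
```

Each applied changeSet is recorded in Liquibase's journal table (DATABASECHANGELOG), which is how it knows what has and hasn't been run in a given environment.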
You have a functional sample using the Liquibase integration with SQLcl in my project Oracle CI/CD demo.
To be totally honest, it's a little out of date: I used a trick for rollback, because at the moment of writing, Liquibase tagging was not supported. Currently it is supported.
The final integration with Jenkins is not done, but it is straightforward from there.
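To give a rough idea of the GitLab side, here is a minimal .gitlab-ci.yml sketch (not taken from the demo project; the image name is an assumption, and DB_USER/DB_PASSWORD/DB_URL are masked CI/CD variables, which GitLab can also populate from a vault integration instead of hardcoding them):

```yaml
stages:
  - deploy

deploy_db:
  stage: deploy
  # assumed image; any image with SQLcl on the PATH would do
  image: container-registry.oracle.com/database/sqlcl:latest
  script:
    # run Liquibase through SQLcl; exact flag names vary by SQLcl version
    - echo "lb update -changelog-file controller.xml" | sql "$DB_USER/$DB_PASSWORD@$DB_URL"
  only:
    - main
```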
Related
Is there anyone out there who has implemented CI/CD with GitLab for Snowflake on Azure? Is this even possible?
Our DB development is growing fast, and it's turning out to be a challenging experience to develop, maintain, and deploy.
We use Visual Studio Code as our IDE, which is now bound to a Git repository whose branches I would like to point at Prod, Dev, and Test depending on commits to the respective branches. Also, is it even possible to have something like a Config.sql (similar to SQLCMD in SQL Server, or application.properties in a Java Spring Boot project), where one can maintain three different config files with environment-specific variables whose values are substituted dynamically depending on the BUILD step of the CI/CD pipeline? I want (at least for now) to keep database and schema names as config variables, which will differ depending on where one deploys.
Yes, you can install and configure the snowsql tool as part of your CI/CD pipeline. snowsql is to Snowflake what sqlcmd is to SQL Server.
Installing snowsql
Configuring snowsql
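As a sketch of the environment-specific substitution asked about above (account, database, and file names invented; snowsql reads the password from the SNOWSQL_PWD environment variable), a GitLab job for the test environment could look like this:

```yaml
deploy_test:
  stage: deploy
  script:
    # -D defines variables that deploy.sql references as &database and &schema
    - snowsql -a "$SNOWFLAKE_ACCOUNT" -u "$SNOWFLAKE_USER" -o variable_substitution=true -D database=TEST_DB -D schema=TEST_SCHEMA -f deploy.sql
```

Inside deploy.sql you would then write, e.g., USE DATABASE &database; and USE SCHEMA &schema;, and the dev/prod jobs simply pass different -D values.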
I'm coming from a long SSIS background, and we're looking to use Azure Data Factory v2, but I'm struggling to find any (clear) way of working with multiple environments. In SSIS we would have project parameters tied to the Visual Studio project configuration (e.g. development/test/production); say there were two parameters, SourceServerName and DestinationServerName, these would point to different servers depending on whether we were in development or test.
From my initial playing around I can't see any way to do this in Data Factory. I've searched Google, of course, but any information I've found seems to be about CI/CD, then talks about Git 'branches', and is difficult to follow.
I'm basically looking for a very simple explanation and example of how this would be achieved in Azure Data Factory v2 (if it is even possible).
It works differently. You create an instance of data factory per environment and your environments are effectively embedded in each instance.
So here's one simple approach:
Create three data factories: dev, test, prod
Create your linked services in the dev environment pointing at dev sources and targets
Create the same named linked services in test, but of course these point at your test systems
Now when you "migrate" your pipelines from dev to test, they use the same logical name (just like a connection manager)
So you don't designate an environment at execution time or map variables or anything... everything in test just runs against test, because that's the way the linked services have been defined (see the sketch below).
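As a sketch (all names invented), the dev factory would hold a linked service like the one below, while the test factory holds one with the same name whose connection string points at the test server:

```json
{
  "name": "LS_SourceSql",
  "properties": {
    "type": "AzureSqlDatabase",
    "typeProperties": {
      "connectionString": "Server=tcp:dev-sqlserver.database.windows.net,1433;Database=SourceDb;"
    }
  }
}
```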
That's the first step.
The next step is to connect only the dev ADF instance to Git. If you're a newcomer to Git it can be daunting, but it's just a version control system: you save your code to it and it remembers every change you made.
Once your pipeline code is in git, the theory is that you migrate code out of git into higher environments in an automated fashion.
If you go through the links provided in the other answer, you'll see how you set it up.
I do have an issue with this approach, though: you have to look up all of your environment values in a keystore, which to me is silly, because why do we need to designate the test server's hostname every time we deploy to test?
One last thing: if you have a pipeline that doesn't use a linked service (say, a REST pipeline), I haven't found a way to make it environment-aware. I ended up building logic around the current data factory's name to dynamically change endpoints.
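For example (factory name and URLs invented), a Web activity's URL can be an expression keyed on the factory name via the pipeline().DataFactory system variable:

```json
"url": {
  "value": "@if(equals(pipeline().DataFactory, 'adf-myproject-dev'), 'https://api.dev.example.com/v1/load', 'https://api.test.example.com/v1/load')",
  "type": "Expression"
}
```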
This is a bit of a brain dump, but feel free to ask questions.
Although it's not recommended - yes, you can do it.
Take a look at the Linked Service; in this case, I have a connection to an Azure SQL Database:
You can use dynamic content for both the server name and the database name.
Just add a parameter to your pipeline, pass it to the Linked Service, and use it in the required field.
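A sketch of what such a parameterized Linked Service can look like (names invented); the @{linkedService().…} expressions pick up whatever parameter values are passed in at runtime:

```json
{
  "name": "LS_AzureSqlDatabase",
  "properties": {
    "type": "AzureSqlDatabase",
    "parameters": {
      "serverName": { "type": "String" },
      "databaseName": { "type": "String" }
    },
    "typeProperties": {
      "connectionString": "Server=tcp:@{linkedService().serverName}.database.windows.net,1433;Database=@{linkedService().databaseName};"
    }
  }
}
```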
Let me know whether I explained it clearly enough.
Yes, it's possible, although not as simple as it was in Visual Studio for SSIS.
1) First of all: there is no desktop application for developing ADF, only the browser.
Therefore developers should make their changes in the DEV environment, and for many reasons the best way to do that is to work with a Git repository connected.
2) Then, you need "only":
a) Publish the changes (this creates/updates the adf_publish branch in Git).
b) With Azure DevOps, deploy the code from adf_publish, replacing the required parameters for the target environment (sketched below).
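A rough sketch of step b) as an Azure DevOps pipeline step (the service connection, resource group, and override parameter names are placeholders; the ARMTemplate* file names are what an ADF publish generates into adf_publish):

```yaml
- task: AzureResourceManagerTemplateDeployment@3
  inputs:
    deploymentScope: 'Resource Group'
    azureResourceManagerConnection: 'my-arm-connection'   # placeholder
    subscriptionId: '$(SubscriptionId)'                   # placeholder
    action: 'Create Or Update Resource Group'
    resourceGroupName: 'rg-adf-test'                      # placeholder
    location: 'West Europe'
    templateLocation: 'Linked artifact'
    csmFile: '$(Pipeline.Workspace)/adf_publish/ARMTemplateForFactory.json'
    csmParametersFile: '$(Pipeline.Workspace)/adf_publish/ARMTemplateParametersForFactory.json'
    overrideParameters: '-factoryName "adf-myproject-test" -LS_Sql_connectionString "$(TestConnectionString)"'
    deploymentMode: 'Incremental'
```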
I know that at the beginning it sounds horrible, but the sooner you set up an environment like this, the more time you save while developing pipelines.
How to do these things step by step?
I describe all the steps in the following posts:
- Setting up Code Repository for Azure Data Factory v2
- Deployment of Azure Data Factory with Azure DevOps
I hope this helps.
There are tons of resources online on how to replace JSON configuration files in a release pipeline like this one. I configured this. It works. However, we have multiple integration tests which reach the database too. These tests are run during build time. I haven't seen any option yet to replace config values in the build pipeline. Does it exist? Or do I really have to use this custom task (see screenshot below)?
There is a recent out-of-the-box task from Microsoft called File Transform. It's currently in preview, but it works really well! I haven't had any issues whatsoever with it, and you configure it the same way as in the release pipeline. Would recommend it any day!
Below is roughly how I configured it, shown in pipeline YAML (the folder path and target file pattern are placeholders for your own project layout):
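```yaml
- task: FileTransform@1
  inputs:
    folderPath: '$(System.DefaultWorkingDirectory)/tests/MyApp.IntegrationTests'  # placeholder
    fileType: 'json'
    targetFiles: '**/appsettings.json'  # placeholder
```

Values are substituted from pipeline variables whose names match the JSON keys (use a dotted path such as ConnectionStrings.Default for nested keys).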
There is no out-of-the-box task just for replacing tokens/values in files (in the release pipeline, too, the task is Azure App Service Deploy, which does more than just replace JSON configuration).
You need to use an external extension from here or write a PowerShell script for that.
We are looking at removing developers from production and want a simple kind of deployment-management tool. One suggestion, which some members are using with Salesforce, is Jenkins. I have never used Jenkins or any kind of deployment tool before. I normally just copied my code from the IDE and updated the file in the SuiteScript file cabinet.
Does Jenkins work for NetSuite? Or what do you recommend for this purpose?
We are planning to use Bitbucket (which runs Git in the background) as our version control, in case that matters.
Thank you for any help
IMO the greatest challenge in integrating with any CI environment (be it Jenkins or any other) is the fact that you can move code files from one system to another using code/APIs, but NOT things like scripts, custom records, fields and their deployments, etc., for which you need a bundling process and hence manual intervention.
At the recent SuiteWorld 2015, NetSuite said it is coming up with "Change Management", which would allow you to put everything that is part of your app into a version control system such as Git. Please see SuiteAnswers Id 42387. When this feature is rolled out, you can integrate with your CI tool to automatically copy/deploy your app details to another NetSuite account, run your tests there, and accordingly pass/fail your build.
Why do you want to remove developers from Production? This will severely hamper their ability to create solutions for your NetSuite account and will create a ton of overhead for them.
If you must have them out of Production, then probably your "best" option would be to have them build their solutions in Sandbox and then use SuiteBundles for deployment to Production. A Production Admin would need to update the appropriate Bundle(s) for all Production migrations.
NetSuite has also built a SuiteCloud IDE plugin for Eclipse which allows uploading and downloading files (no copy-paste necessary), so if you're not using that I would recommend it.
We are using Jenkins for our own internal automated testing, but not for deployment into NetSuite. I do not know whether someone has already built a NetSuite plugin for Jenkins; it is likely you would have to build your own file-upload mechanism using the NetSuite Web Services SOAP API, but that would still only allow deployment of source files. Developers will most likely also need to create and update custom records, fields, and lists, as well as Script records and Script Deployment records, which you will not be able to do through Jenkins or any other tool that I know of.
Is there a systematic way to incorporate data migration scripts with an SSDT publish?
For instance, deploying a new feature that requires a new permission be added to the database in the form of a series of inserts.
I know I can just edit the generated publish script, but if the script needs to be regenerated you have to remember to re-add the data migration scripts.
The recommended practice is to use a post-deployment script. This blog post describes the approach further:
http://blogs.msdn.com/b/ssdt/archive/2012/02/02/including-data-in-an-sql-server-database-project.aspx
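As a sketch of the pattern (table and permission names invented): add a script to the project with its Build Action set to PostDeploy, and write it idempotently, since post-deployment scripts run on every publish:

```sql
-- Script.PostDeployment.sql (Build Action: PostDeploy)
-- Guard against duplicates: this script runs on every publish.
IF NOT EXISTS (SELECT 1 FROM dbo.Permission WHERE PermissionName = N'NewFeatureAccess')
BEGIN
    INSERT INTO dbo.Permission (PermissionName, Description)
    VALUES (N'NewFeatureAccess', N'Grants access to the new feature');
END
```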