I am using SVN (Subversion), a file versioning system that stores changes to code artifacts as revisions.
I want to use similar software for DB2 objects such as tables and indexes, so that I can track changes to them through revisions.
Has anyone done this before?
You can use the IBM Data Studio client, which creates a deployment group for database changes.
Once you create a deployment group, you can save it and apply it to several databases.
Take a look at
http://pic.dhe.ibm.com/infocenter/dstudio/v3r2/topic/com.ibm.datatools.deployment.manager.ui.doc/topics/c_deploy_mgr.html
Related
My organization formerly used TFS, but has now upgraded to the latest version of ADOS 2020. Collections created after switching to Azure DevOps Server 2020 can do several things that still require a manual process on the collections brought over from TFS, such as updating WITs. What is the process for converting these collections into ADOS ones? Do we have to recreate them and dump the data from one to the other? If that is the case, how do we transfer all of the historical data on our work items and CI/CD pipelines?
We have experience using the query tool to import new projects into ADOS, but want to know whether there is an easier process, since this is TFS to ADOS and we want to keep our data.
I want to automate my relational database deployments. I usually use SQL Developer to compile, execute, and save my PL/SQL scripts to the database. Now I wish to build and deploy the scripts directly through GitLab, using a CI/CD pipeline. I am supposed to use Oracle Cloud for this purpose. I don't know how to achieve this; any help would be greatly appreciated.
Requirements: build and deploy PL/SQL scripts to the database using GitLab, where the username and password for the database connection are picked up from a vault in the cloud, not hardcoded. Oracle Cloud should be used for this purpose.
If anyone knows how to achieve this, please guide me.
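The shape such a pipeline usually takes can be sketched as a single GitLab CI job. This is only a hypothetical sketch: the image name, script path, and variable names (`ORACLE_USER`, `ORACLE_PASSWORD`, `ORACLE_CONNECT_STRING`) are assumptions, and the variables would be masked CI/CD variables populated from your OCI Vault rather than committed to the repo.

```yaml
# .gitlab-ci.yml -- hypothetical sketch, names are assumptions
stages:
  - deploy

deploy_plsql:
  stage: deploy
  # Assumes a container image with Oracle SQLcl on the PATH
  image: my-registry.example.com/tools/sqlcl:latest
  script:
    # ORACLE_USER / ORACLE_PASSWORD / ORACLE_CONNECT_STRING are masked
    # CI/CD variables sourced from the vault, never hardcoded here
    - sql -S "${ORACLE_USER}/${ORACLE_PASSWORD}@${ORACLE_CONNECT_STRING}" @scripts/deploy.sql
  only:
    - main
```

The key point is that the credentials exist only as protected pipeline variables; the repository contains nothing but the scripts and this job definition.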
There are tools like Liquibase and Flyway. These tools do not work miracles.
Liquibase keeps a list of changes (XML or YAML) to be applied to a database schema, optionally with an undo step.
It also keeps a journal table in each database environment, so it can track which changes were applied and which were not.
It cannot do powerful schema comparisons the way SQL Developer or Toad can.
Nor can it prevent the situation where a DML change applied to the production database blows up, because the change was only tested successfully on a data set 1000x smaller.
But it is still better than nothing, and it can be integrated with Ansible, GitLab, and other CI/CD tools.
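To make the "list of changes with an undo step" concrete, here is a minimal Liquibase changelog sketch in YAML. The table and column names are made up for illustration; the `rollback` block is the undo step mentioned above.

```yaml
# changelog.yaml -- minimal Liquibase changelog sketch (names are illustrative)
databaseChangeLog:
  - changeSet:
      id: 1
      author: demo
      changes:
        - createTable:
            tableName: customer
            columns:
              - column:
                  name: id
                  type: number
                  constraints:
                    primaryKey: true
              - column:
                  name: name
                  type: varchar2(100)
      # The undo step: how Liquibase reverts this changeSet on rollback
      rollback:
        - dropTable:
            tableName: customer
```

When this is applied, Liquibase records the changeSet in its journal table (DATABASECHANGELOG) in that environment, which is how it knows what has and has not been run.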
There is a working sample using Liquibase's integration with SQLcl in my project Oracle CI/CD demo.
To be totally honest:
It's a little out of date, because I used a trick for rollback; at the time of writing, Liquibase tagging was not supported. It is supported now.
The final integration with Jenkins is not done, but that part is straightforward.
I am writing a little webapp that allows users to browse a VisualSVN server. I would like to add an online editor like GitHub's to this webapp, so users can edit files online, leave a message, and have the changes appear in the repository.
For that I need to check the files out locally. My idea was to check them out into a MongoDB, so I can save the changes per user like a local working copy.
Is there a way (without reimplementing the SVN protocol) to make a checkout into a database, or even just into memory, and then write it to the database?
If there are any questions, just ask :)
Btw, if someone is interested, here is the code: https://bitbucket.org/Knerd/svn-browser
There is no way to make svn checkout go directly to a database, but there are some options.
First of all, you can simply create a virtual disk that resides in memory and perform checkouts to that disk. Then you can store the checked-out files in the database.
Another option is to use the rich Subversion API directly. Note that Subversion is written in C, so you will need a bridge between Node.js and SVN (as far as I can remember, there are no official Subversion bindings for Node.js, but there are for Python and Java, and there is an unofficial nodesvn package available for Node.js). Using the API you can implement your own 'in-database' working copy.
You can also use the svnmucc utility (which ships with VisualSVN Server) to make commits directly in the repository, without even making a working copy. If you combine it with svn ls, svn info, etc., you can implement repository browsing and editing of files.
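As a sketch of the svnmucc approach: the webapp writes the user's edited file to a temp location, then commits it straight to the repository URL. The server URL, paths, and commit message below are hypothetical.

```shell
# Commit an edited file directly to the repository -- no working copy needed.
# URL and paths are hypothetical examples.
svnmucc -U https://svn.example.com/svn/MyRepo \
        -m "Edited via web UI" \
        put /tmp/edited-readme.txt trunk/README.txt

# Browsing and reading files, also without a working copy:
svn ls  https://svn.example.com/svn/MyRepo/trunk
svn cat https://svn.example.com/svn/MyRepo/trunk/README.txt
```

This keeps the whole edit-and-commit cycle server-side: the database only needs to hold the in-flight edits per user, not a full working copy.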
I want to copy an entire TFS project to another project, e.g. MyProj to MyProjSev. Then I want to rollback MyProjSev to a changeset that corresponds to when the client stopped paying. Then I will make MyProjSev available, including the history of the source files, to the client for a period of time as part of the severance agreement. The access/security aspect I know. I can easily make a branch, but if the client views the branch from a TFS Explorer Client, then the history is not available. There are a couple of approaches that involve cloning the entire collection, and lacking an answer to this question here, I will use one of them.
https://stackoverflow.com/a/4918289/553593 [TFS admin detach the collection, back up SQL Server database, TFS admin attach collection, SQL Server restore database to new database name, then TFS admin attach the restored collection]
http://msdn.microsoft.com/en-us/library/ee349263(v=vs.100).aspx [Collection command with /clone]
New Information: http://visualstudiogallery.msdn.microsoft.com/eb77e739-c98c-4e36-9ead-fa115b27fefe TFS Integration Tools is what finally worked for me. This 2012 release is a very nice product. It was easy to do a TFS-to-TFS transfer from my MyProj to the new MyProjSev that I created for the client to access. Upon completion I simply did some rollbacks and set the security in the new project. It would have been easier if TFS 2010 allowed renaming projects (TFS 2012 does).
Neither TFSConfig Collection /clone nor the procedure described in the MS docs for splitting a team project collection will work for this task. The issue is that even though you end up with two collections, their projects have the same names, and that is not allowed (and in TFS 2010 you cannot rename projects).
The best option is to clone the collection as you suggested.
I answered my own question, as described under New Information above in the text of the question. Note that all of this is specific to TFS 2010.
I have an app that I have been working on and I did a bunch of changes and then realized later I should have been adding versioning to the Core Data model. So I'm trying to go back and do that now.
Basic information:
I think everything I've done would fall under the lightweight migration feature.
I'm using git
I already have the app in user's hands
My question is: what is the easiest way to do this?
Since I'm using git, could I simply check out the data model from when I submitted it to Apple, create a new version of it, and add my changes? My main fear with this idea is that my project.pbxproj file would be incorrect. Would this be an issue? Is there a way to get around it?
IF I could do this, would I need to recreate my class files, or would they be OK (assuming I get the model back to being identical to what I currently have)?
IF I CAN'T do this, then what can I do? If it's a matter of starting from the last version I pushed to Apple and applying changes, I guess I should look into doing it with git rebase, right?
This has nothing to do with git.
You need to create a new version of your app, provide the new data model, set it for lightweight migration and then release it as an update. Core Data will basically assume that any model without version info is version zero and attempt a migration to the new version.
When the user downloads the update, the automatic migration will trigger the first time the app runs.
Creating a new version means nothing more than changing the version number in the project info. When submitted, that will trigger the upgrade and the migration.
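Setting it for lightweight migration boils down to two options when the persistent store is added. A minimal sketch in Swift, where the store filename and surrounding setup are hypothetical:

```swift
import CoreData

// Merge all versions of the model found in the app bundle
let model = NSManagedObjectModel.mergedModel(from: nil)!
let coordinator = NSPersistentStoreCoordinator(managedObjectModel: model)

// Hypothetical store location
let storeURL = FileManager.default
    .urls(for: .documentDirectory, in: .userDomainMask)[0]
    .appendingPathComponent("Model.sqlite")

let options: [AnyHashable: Any] = [
    // Migrate the existing (version-zero) store automatically on open
    NSMigratePersistentStoresAutomaticallyOption: true,
    // Infer the mapping model from the two versions -- lightweight migration
    NSInferMappingModelAutomaticallyOption: true
]

try coordinator.addPersistentStore(
    ofType: NSSQLiteStoreType,
    configurationName: nil,
    at: storeURL,
    options: options
)
```

With both options set, the first launch of the updated app performs the migration before any fetch runs, which is the "automatic migration will trigger" behavior described above.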