I was setting up continuous integration of SQL Server scripts with VSTS. I have two script files in my Visual Studio 2015 database project:
createStudentTable.sql => simple create table script
Script.PostDeployment1.sql => :r .\createStudentTable.sql (pointing to the above script)
After a successful build in Visual Studio Online I noticed that a .dacpac file is also created.
My database has around 100 tables plus views and stored procedures. Does this .dacpac file contain the entire schema details? If so, it would be a huge overhead to carry this .dacpac with every build.
Please advise.
A dacpac file only contains the schema model definition of your database; it does not contain any table data unless you add INSERT statements to the post-deployment script.
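If you do need seed data, a minimal sketch of a post-deployment script with guarded INSERT statements might look like this (the Student columns are only assumed for illustration):

    -- Script.PostDeployment1.sql (runs in SQLCMD mode; :r includes other scripts)
    :r .\createStudentTable.sql

    -- Hypothetical seed data; the IF NOT EXISTS guard keeps redeployments idempotent
    IF NOT EXISTS (SELECT 1 FROM dbo.Student WHERE StudentId = 1)
    BEGIN
        INSERT INTO dbo.Student (StudentId, FirstName, LastName)
        VALUES (1, 'Jane', 'Doe');
    END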
The overhead of a dacpac is that the model in the dacpac is compared against your target database when the actual deployment happens.
This is a trade-off. If you don't use a dacpac, you will end up managing database versions and migrations yourself, either manually or with another tool that makes change management with ALTER statements somewhat easier.
BTW, a database on the scale of 100 tables can be handled well by a dacpac.
Related
I'm currently using Azure Data Factory for an ETL job, and at the end I want to start a U-SQL job. I've created my datalake.usql script, and the UDFs the script uses are in the datalake.usql.cs file, the same structure a U-SQL project has in Visual Studio (which is where I developed the U-SQL job and successfully ran it).
After that, I uploaded them both to Azure Blob Storage and set up the U-SQL step in Azure Data Factory to use the U-SQL script, but it doesn't see the datalake.usql.cs with my UDFs.
How can I do this?
I figured this one out, so I'll post the answer here in case someone else has this problem in the future.
When you're developing locally, you have a script.usql and a script.usql.cs file. When you run it, Visual Studio does all the heavy lifting: it compiles everything into a usable form and your script runs. But when you run a script from Azure Data Factory, that on-the-fly compilation Visual Studio performs isn't available.
The solution to this problem is to build an assembly (a .dll file) from your script.usql.cs, upload it to the data storage you're using, and register it in the U-SQL catalog. Then you can reference this assembly and use it normally, as you would on your local machine.
All the steps needed for this are presented in these short guides:
https://saveenr.gitbooks.io/usql-tutorial/content/usql-catalog/intro.html
https://saveenr.gitbooks.io/usql-tutorial/content/usql-catalog/usql-databases.html
https://saveenr.gitbooks.io/usql-tutorial/content/usql-catalog/assemblies.html
https://www.c-sharpcorner.com/UploadFile/1e050f/creating-and-using-dll-class-library-in-C-Sharp/
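Putting those guides together, a rough sketch of the two steps might look like the following U-SQL (the database, folder, namespace and UDF names are placeholders, not taken from the original project):

    // Step 1: one-time registration of the compiled UDF assembly in the U-SQL catalog
    CREATE DATABASE IF NOT EXISTS UdfDb;
    USE DATABASE UdfDb;
    CREATE ASSEMBLY IF NOT EXISTS MyUdfs FROM @"/assemblies/MyUdfs.dll";

    // Step 2: in datalake.usql, reference the registered assembly instead of the .usql.cs code-behind
    USE DATABASE UdfDb;
    REFERENCE ASSEMBLY MyUdfs;

    @rows =
        EXTRACT Name string
        FROM "/input/names.csv"
        USING Extractors.Csv();

    @cleaned =
        SELECT MyCompany.Udfs.CleanName(Name) AS CleanName
        FROM @rows;

    OUTPUT @cleaned
    TO "/output/cleaned.csv"
    USING Outputters.Csv();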
I am pretty new to the Azure cloud and stuck at a point where I want to replicate one table into another database with the same schema and table name.
By replication I mean that the new table in the other database should automatically be synced with the original table. I can do this using an elastic table, but the queries are taking way too long and sometimes time out, so I am thinking of having a local table in the other database instead of an elastic table, but I am not sure how I can do this in Azure.
Note: Both databases reside on the same DB server.
Any examples or links would be helpful.
Thanks
To achieve this you can use a DACPAC (data-tier application package). A DACPAC can be created in Visual Studio or extracted from an existing database; it contains the database creation scripts and manages your deltas for you. More information can be found here. For information about how to build and deploy a DACPAC, both using VS and extracted from a database, see this answer.
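For example, a rough sketch with SqlPackage.exe (server, database and credential values below are placeholders) is to extract the source schema into a dacpac and publish it to the other database; note this copies the schema, not the data:

    REM Extract the schema of the source database into a dacpac
    SqlPackage.exe /Action:Extract ^
        /SourceServerName:"myserver.database.windows.net" ^
        /SourceDatabaseName:"SourceDb" ^
        /SourceUser:"myuser" /SourcePassword:"mypassword" ^
        /TargetFile:"SourceDb.dacpac"

    REM Publish that dacpac to the other database on the same server
    SqlPackage.exe /Action:Publish ^
        /SourceFile:"SourceDb.dacpac" ^
        /TargetServerName:"myserver.database.windows.net" ^
        /TargetDatabaseName:"TargetDb" ^
        /TargetUser:"myuser" /TargetPassword:"mypassword"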
I have an SSIS package that does a Bulk Insert, then executes a SQL Task, and then finally writes some database data to a flat file on our network. The package runs fine if I run it from Visual Studio 2012. However, if I deploy the package to the Integration Services Catalog on a 2012 SQL Server and run it from there, the Bulk Insert and SQL Task run fine, but when the package tries to output to the flat file, I get these error messages:
Cannot open the datafile "\\nyfil006\Projects\Accounting\CostRecovery\Cafe de Novo\HospitalityCharges.csv".
HospitalityCharges Flat File failed the pre-execute phase and returned error code 0xC020200E.
I'm able to output the System::UserName to an errorlog, and it's what I think it should be: an account that has full permissions to the folder in the flat file destination (and its parent folders). I've tried creating a blank version of HospitalityCharges.csv, and DelayValidation is set to True for the Data Flow Task that outputs the flat file. My system admin has granted Network Service permissions to the folder as per this link and this link, but that doesn't help. I've also made the connection string an expression as described here. We've also created a mapped drive and used that for the Destination Connection String instead of a UNC path. No joy. Does anyone know why this is happening?
Another note: if I change the flat file destination to point to the C: drive, the package runs fine, whether I run it from Visual Studio or from the Integration Services Catalog.
Please bear with me as I give a little history, then I will get to the problem:
I am rewriting a web app in Visual Studio 2012 Professional using C# and SQL Server 2012 Express. I am creating new pages and logic but am reusing the data access objects from the old app, which was written in VS2008 and SQL Server 2008 Express by another developer years ago. I got a backup of the db from the SQL Server 2008 db server and restored it into the SQL Server 2012 db server.
Almost everything works fine in the new application, but I have recently added a few columns to the Contacts table in SQL Server, and now I am stuck on how to update the .xsd and related files for Contacts. When I said that I reused the data access objects from the old app, what I meant was that I simply copied all of the .xsd, .xsc, .xss, .xsx and designer.cs files for Contacts and other objects from the old project (VS2008) to the new one (VS2012).
I think all of these .xs* files are auto-generated when you use a data source configuration wizard or something in Visual Studio? The problem is that I just copied the files into the new project to get the code-behind working so that I could get and update data. But now I must update these files somehow so they are aware of the newly added columns, and I think I have to add a new data source in this new project? I don't want to perform surgery by updating the files manually; that could get messy, I would think...
I can probably figure out how to add the data source, but my concern is whether that process will complete successfully given that the files already exist. Should I remove all of the .xs* files I copied in and add the data source so it recreates all those files, this time with the new columns? What should I do? I have backed up my new project in case something goes terribly wrong, so I can torpedo this project as many times as I need to until I get it right!
Thanks in advance, and thanks for reading this far.
We have a few databases running together, with synonyms between them. There are also two-way synonyms between the databases. VS database projects can't seem to handle these. Two-way synonyms don't work in VS; I can only reference another database in VS one way, otherwise there's a circular reference. I tried creating a snapshot of the database project in VS, but to be able to take a snapshot I need to build the project, and to be able to build the project I need to reference the other database project, which doesn't compile because it doesn't recognise the synonyms, and so on. It seems multiple databases (on the same server) with two-way synonyms on each other is too complicated for VS to manage. Has anyone managed to get something like this to work?
Use the SQLPackage command line to create the dacpac (it's a bit more forgiving of the cross-database references than the GUI). Add those as DB references.
There's a section here about using SQLPackage to extract the dacpac from an existing database.
http://schottsql.blogspot.com/2012/10/ssdt-importing-existing-database.html
I've written about external references here:
http://schottsql.blogspot.com/2012/10/ssdt-external-database-references.html
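Once each project references the other database's dacpac (rather than the project) and exposes it through a SQLCMD variable, the synonyms can be written against that variable instead of a hard-coded database name. A rough sketch, with placeholder object names and assuming the reference defines a $(OtherDb) variable:

    -- In this project, after adding OtherDb.dacpac as a database reference
    -- with the SQLCMD variable $(OtherDb)
    CREATE SYNONYM [dbo].[Orders] FOR [$(OtherDb)].[dbo].[Orders];

    -- The other database's project does the same in reverse against this project's dacpac,
    -- which is how the two-way synonyms can build without a circular project reference.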
We have a lot of cross-DB dependencies, and once we get past the initial builds or start from a restored DB, we don't have any issues with the references.