Azure SQL bacpac export fails after retirement of Azure Scheduler - azure

After Azure Scheduler was retired in January 2022 (see more here), I am unable to export an Azure SQL database to a bacpac.
I get the following error:
Export bacpac: One or more unsupported elements were found in the schema used as part of a data package.
Error SQL71501: Error validating element [jobs_internal].[visible_targets_formatted]: SqlView: [jobs_internal].[visible_targets_formatted] contains an unresolved reference to an object. Either the object does not exist or the reference is ambiguous because it could refer to any of the following objects: [jobs_internal].[database_credentials].[C], [jobs_internal].[database_credentials].[name] or [jobs_internal].[targets].[C].
Error SQL71501: Error validating element [jobs]: SqlSchema: [jobs] has an unresolved reference to object [##MS_JobAccount##].
Error SQL71501: Error validating element [jobs_internal]: SqlSchema: [jobs_internal] has an unresolved reference to object [##MS_JobAccount##].
What exactly do I need to drop from the database to fix this?
Creation of the bacpac is part of the Azure Web App backup (when the option to back up the database is selected).

I found a solution.
You have to drop all objects in the [jobs] and [jobs_internal] schemas.
I did mine in the following order: stored procedures, functions, views, foreign keys, tables, custom types, schemas, and finally users.
After this was done, I was able to export bacpac and also perform a web application backup that included the database.
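For anyone facing the same cleanup, here is a minimal T-SQL sketch (not part of the original answer) that generates DROP statements for the procedures, functions, views, and tables in those schemas; foreign keys, custom types, the schemas themselves, and the users still have to be dropped separately, in the order above. Review the generated statements before running anything.
-- Generate DROP statements for objects in the Elastic Job schemas (verify before running).
SELECT 'DROP ' +
       CASE o.type
            WHEN 'P' THEN 'PROCEDURE'
            WHEN 'V' THEN 'VIEW'
            WHEN 'U' THEN 'TABLE'
            ELSE 'FUNCTION'                    -- FN/IF/TF: scalar and table-valued functions
       END + ' [' + s.name + '].[' + o.name + '];'
FROM sys.objects o
JOIN sys.schemas s ON s.schema_id = o.schema_id
WHERE s.name IN ('jobs', 'jobs_internal')
  AND o.type IN ('P', 'V', 'U', 'FN', 'IF', 'TF');
-- Afterwards, drop custom types, then the schemas (DROP SCHEMA [jobs]; DROP SCHEMA [jobs_internal];) and any related users.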

Related

Cannot update identity column 'scope_local_id'

Update: I revised my approach to use one sync scope per database schema. This eliminated the problem for all schemas except dbo. Then I split dbo into several sync scopes, which seems to have eliminated the problem in dbo as well. It is not clear to me why having a large number of tables in a single sync scope would lead to this particular error message.
I am using Microsoft Sync Framework 2.1 to set up synchronization between two databases. I am provisioning the Azure database using the following code:
// Set up destination server for sync
var destinationScopeProvisioning = new SqlSyncScopeProvisioning(DestinationConnection, scope);
if (!destinationScopeProvisioning.ScopeExists(ScopeName))
{
    destinationScopeProvisioning.Apply();
}
This code throws an exception after running for several minutes:
An unhandled exception of type 'Microsoft.Synchronization.Data.DbPartiallyProvisionedException' occurred in Microsoft.Synchronization.Data.SqlServer.dll
Additional information: Cannot update identity column 'scope_local_id'.
Ordinarily, this kind of error is caused by a scope that already exists, so I've tried the following three methods (several times) to clean up the scopes and start again:
Deprovisioning the scope by invoking deprovisioning.DeprovisionScope(ScopeName)
Deprovisioning the store by invoking deprovisioning.DeprovisionStore()
Dropping and recreating the database.
Unfortunately, none of those worked.
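One thing that can help when debugging this kind of provisioning failure is checking whether any Sync Framework metadata objects survived the deprovisioning attempts. A hedged T-SQL sketch, assuming the default table names created by SqlSyncScopeProvisioning:
-- List any sync metadata left behind after deprovisioning.
SELECT name
FROM sys.tables
WHERE name IN ('scope_info', 'scope_config')   -- sync scope metadata tables
   OR name LIKE '%[_]tracking';                -- per-table change-tracking tables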

Azure .NET SDK Error: FsOpenStream failed with error 0x83090aa2

We are trying to download a file present in Data Lake Store. I have been following the tutorial below, which uses the Azure .NET SDK.
https://azure.microsoft.com/en-us/documentation/articles/data-lake-analytics-get-started-net-sdk/
As the file is already present in Azure Data Lake Store, I just added the code to download it:
FileCreateOpenAndAppendResponse beginOpenResponse = _dataLakeStoreFileSystemClient.FileSystem.BeginOpen("/XXXX/XXXX/test.csv", DataLakeStoreAccountName, new FileOpenParameters());
FileOpenResponse openResponse = _dataLakeStoreFileSystemClient.FileSystem.Open(beginOpenResponse.Location);
But it fails with the below error:
{"RemoteException":{"exception":"RuntimeException","message":"FsOpenStream
failed with error 0x83090aa2 ().
[83271af3c3a14973ad7814e7d9d201f6]","javaClassName":"java.lang.RuntimeException"}}
While debugging, we inspected the beginOpenResponse.Location value used in the second line of code. It seems to be the correct value, as below:
https://XXXXXXXX.azuredatalakestore.net/webhdfs/v1/XXXX/XXX/test.csv?op=OPEN&api-version=2015-10-01-preview&read=true
The error does not provide much information to track down the problem.
I agree that the store errors are currently non-printable. We are working on improving this.
According to my store developer, 0x83090aa2 means the access check failed. Can you please check whether you have access to the storage account and whether the path is correct?

Validation error when a project parameter's Sensitive property is set to true in SSIS 2012

I am using SSIS 2012 and deploying projects via the project deployment model. I have 3 project connection managers and am passing the password information to the connection managers through a project parameter. When I set the Sensitive property of the password parameter to False, the package runs fine, but when I set it to True it gives the below error:
Error: SSIS Error Code DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER. The AcquireConnection method call to the connection manager "ConnManager" failed with error code 0xC0202009
It is erroring because you are trying to touch a parameter that is marked as Sensitive. You cannot use the "old" approach for configuring connection managers. For the project deployment model and connection managers, in the SSISDB catalog you right-click on the project and select Configure.
That is where you overlay the password.
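If you prefer to script that instead of using the SSMS dialog, the SSISDB catalog exposes a stored procedure for setting parameter values. A minimal sketch; the folder, project, and parameter names below are placeholders:
EXEC SSISDB.catalog.set_object_parameter_value
     @object_type     = 20,              -- 20 = project-level parameter
     @folder_name     = N'MyFolder',     -- placeholder folder name
     @project_name    = N'MyProject',    -- placeholder project name
     @parameter_name  = N'Password',     -- the sensitive project parameter
     @parameter_value = N'NewPassword';  -- stored encrypted because the parameter is Sensitive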
Otherwise, you then need to use the GetSensitiveValue method to access the value instead of the standard Getter property.
Dts.Variables["$Package::FtpPassword"].GetSensitiveValue().ToString();
See Matt's article Retrieving the Value of a Sensitive Parameter in a Script

SQL Azure unexpected database deletion/recreation

I've been scratching my head on this for hours, but can't seem to figure out what's wrong.
Here's our project basic setup:
MVC 3.0 Project with ASP.NET Membership
Entity Framework 4.3, Code First approach
Local environment: local SQL Server with 2 MDF database files attached (aspnet.mdf + entities.mdf)
Server environment: Windows Azure + 2 SQL Azure databases (aspnet and entities)
Here's what we did:
Created local and remote databases, modified web.config to use SQLEXPRESS connection strings in debug mode and SQL Azure connection strings in release mode
Created a SampleData class extending DropCreateDatabaseAlways<Entities> with a Seed method to seed data.
Used System.Data.Entity.Database.SetInitializer(new Models.SampleData()); in Application_Start to seed data to our databases.
Ran app locally - tables were created and seeded, all OK.
Deployed, ran remote app - tables were created and seeded, all OK.
Added pre-processor directives to stop destroying the Entity database at each application start on our remote Azure environment:
#if DEBUG
System.Data.Entity.Database.SetInitializer(new Models.SampleData());
#else
System.Data.Entity.Database.SetInitializer<Entities>(null);
#endif
Here's where it got ugly
We enabled Migrations using NuGet, with AutomaticMigrationsEnabled = true;
Everything was running smoothly. We left it cooking for a couple of days.
Today, we noticed an unknown bug on the Azure environment:
we have several classes deriving from a superclass SuperClass
the corresponding Entity table stores all of these objects in the same SuperClass table, using a discriminator to know which column to feed from when loading the various classes
While the loading went just fine before today, it doesn't anymore. We get the following error message:
The 'Foo' property on 'SubClass1' could not be set to a 'null' value. You must set this property to a non-null value of type 'Int32'.
After a quick check, our SuperClass table has columns Foo and Foo1. Logical enough, since SuperClass has 2 subclasses SubClass1 and SubClass2, each with a Foo property. In our case, Foo is NULL but Foo1 has an int32 value. So the problem is not with the database - rather, it would seem that the link between our Model and Database has been lost. The discriminator logic was corrupted.
Trying to find indications on what could've gone wrong, we noticed several things:
Even though we never performed any migration on the SQL Azure Entity database, the database now has a _MigrationHistory table
The _MigrationHistory table has one record:
MigrationID: 201204102350574_InitialCreate
CreatedOn: 4/10/2012 11:50:57 PM
Model: <Binary data>
ProductVersion: 4.3.1
Looking at other tables, most of them were emptied when this migration happened. Only the tables that were initially seeded with SampleData remained untouched.
Checking in with the SQL Azure Management portal, our Entity database shows the following creation date: 4/10/2012 23:50:55.
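(For reference, these two checks can be reproduced with queries along these lines; the table the text above calls _MigrationHistory is typically named dbo.__MigrationHistory by EF 4.3, and the sys.databases query may need to be run against master on SQL Azure.)
SELECT name, create_date
FROM sys.databases
WHERE name = 'entities';

SELECT MigrationId, CreatedOn, ProductVersion
FROM dbo.__MigrationHistory;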
Here is our understanding
For some reason, SQL Azure deleted and recreated our database
The _MigrationHistory table was created in the process, registering a starting point to test the model against for future migrations
Here are our Questions
Who / What triggered the database deletion / recreation?
How could EF re-seed our sample data since Application_Start has System.Data.Entity.Database.SetInitializer<Entities>(null);?
EDIT: Looking at what could've gone wrong, we noticed one thing we didn't respect in this SQL Azure tutorial: we didn't remove PersistSecurityInfo from our SQL Azure Entity database connection string after the database was created. Can't see why on Earth it could have caused the problem, but still worth mentioning...
Never mind, we found the cause of our problem. In case anybody wonders: we hadn't made any Azure deployment since the addition of the pre-processor directives. MS must have restarted the machine our VM resided on, and the new VM recreated the database using seed data.
Lesson learned: always do frequent Azure deployments.

How to re-enable Orchard CMS features without Dashboard or command line

I just disabled the Comments feature on my Orchard installation, not realising it was a dependency of Disqus, and now the entire site including admin dashboard fails with this error:
None of the constructors found with policy 'Orchard.Environment.AutofacUtil.DynamicProxy2.ConstructorFinderWrapper' on type 'Disqus.Comments.Services.DisqusCommentUpdateService' can be invoked with the available services and parameters: Constructor 'Void .ctor(Orchard.IOrchardServices, Disqus.Comments.Services.IDisqusMappingService, Orchard.Comments.Services.ICommentService)' parameter resolution failed at parameter 'Orchard.Comments.Services.ICommentService commentService'.
Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
Exception Details: Autofac.Core.DependencyResolutionException: None of the constructors found with policy 'Orchard.Environment.AutofacUtil.DynamicProxy2.ConstructorFinderWrapper' on type 'Disqus.Comments.Services.DisqusCommentUpdateService' can be invoked with the available services and parameters: Constructor 'Void .ctor(Orchard.IOrchardServices, Disqus.Comments.Services.IDisqusMappingService, Orchard.Comments.Services.ICommentService)' parameter resolution failed at parameter 'Orchard.Comments.Services.ICommentService commentService'.
The Orchard installation is running on a web host and I do not have access to the command line there. I have FTP access, and access to the MS SQL database. Is there any way I can re-enable the Comments feature without access to the command line or web admin interface?
There is a file, /orchard.web/app_data/cache.dat, which is an XML file containing a list of which features are enabled.
The documentation warns that modifying it may have unpredictable results, so be warned: http://docs.orchardproject.net/Documentation/Developer-FAQ#What'sinApp_Data?
There is a table in the database called Settings_ShellFeatureStateRecord, which stores the state for each module's feature. I re-enabled Orchard.Comments on my local installation (using SQL Server Compact Edition) with the following SQL:
update Settings_ShellFeatureStateRecord
set InstallState = 'Up',
EnableState = 'Up'
where Name = 'Orchard.Comments'
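To check the current state before and after the update, a simple select on the same table should do (assuming the default table name without a tenant prefix):
SELECT Name, InstallState, EnableState
FROM Settings_ShellFeatureStateRecord
WHERE Name IN ('Orchard.Comments', 'Disqus.Comments');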
Good luck!
