I don't see a way to do the following; does anyone know how to achieve this in Spanner?
drop index testindex1 if exists
Our scenario:
On day 10, we created an index testindex1, and this change (schema file) may get deployed to some or all production environments.
On day 30, we decided we actually don't need testindex1, so we would like to drop it if it is there. We are not sure which production databases it has been created in.
Is there a way to ignore the not-found error in the middle when running a batch of DDL statements?
DROP ... IF EXISTS is not supported in Cloud Spanner.
What is the use case of this? Can you ignore the not-found error from the drop?
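If you run the DDL through a client library rather than a plain DDL batch, one option is to catch the not-found error yourself and carry on. A minimal sketch with the Python client (instance and database IDs are placeholders, and the exact exception class raised for a missing index may differ):

from google.api_core import exceptions
from google.cloud import spanner

client = spanner.Client()
database = client.instance("my-instance").database("my-database")

try:
    # update_ddl returns a long-running operation; wait for it to complete.
    database.update_ddl(["DROP INDEX testindex1"]).result()
except (exceptions.NotFound, exceptions.FailedPrecondition):
    # The index was never created in this environment, so there is nothing to drop.
    pass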
I have (several) database projects in a solution. In one I have a reference to a dacpac (this is ACTUALLY a copy of one of the main databases, as we take a SQL snapshot at end of day and some code needs to reference this (DBNAME_Daily) rather than DBNAME).
Now this builds correctly: the code with SELECT * FROM DBNAME_DAILY.schema.table all compiles and builds with no error.
On deployment, however, I get an unresolved reference to DBNAME_DAILY.schema.table.
You want to add a dacpac for that database as a reference, using a variable for the database. It should be set up as a different database with a different name. You'd then use that variable in your code and pass in a value for it in your build/publish tasks depending on the environment.
This is a little older article, but still pretty accurate:
http://schottsql.com/2012/10/31/ssdt-external-database-references/
You can tweak that a little bit by using a variable for the database name. When I wrote this, it was mostly just a different database with the same name across environments. For your case, you'd just use the DB variable, then replace your "DBName.schema." with "[$(DBNameVariable)].schema." (or something similar).
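For example, assuming a SQLCMD variable named DBNameDaily is defined on the database reference and in the publish profile (the variable name here is purely illustrative), the query would end up looking something like:

SELECT *
FROM [$(DBNameDaily)].schema.table;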
Sorted. My mistake was that the dacpac reference only allows the project to BUILD; for deployment, the database DBNAME_DAILY MUST EXIST.
Lesson Learnt.
I'm quite new to Data Factory and Logic Apps (but I have been working with SSIS for many years).
I succeeded in loading a folder with 100 text files into Azure SQL with Data Factory, but the files themselves are untouched.
Now, another requirement is that I loop through the folders to get all files with a certain file extension.
In the end I should move (=copy & delete) all the files from the 'To_be_processed' folder to the 'Processed' folder
I cannot find where to put 'wildcards' and such:
For example, get all files with file extensions .001, .002, .003, .004, .005, ... up to .996, .997, .998, .999 (a thousand files)
--> also searching in the subfolders.
Is it possible to call a Data Factory from within a Logic App? (although this seems unnecessary)
Please find some more detailed information in this screenshot:
Thanks in advance for helping me out exploring this new technology!
Interesting situation.
I agree that using Logic Apps just for this additional layer of file handling seems unnecessary, but Azure Data Factory may currently be unable to deal with exactly what you need...
In terms of adding wildcards to your Azure Data Factory datasets, you have 3 attributes available within the JSON type properties block, as follows.
Folder Path - to specify the directory. Which can work with a partition by clause for a time slice start and end. Required.
File Name - to specify the file. Which again can work with a partition by clause for a time slice start and end. Not required.
File Filter - this is where wildcards can be used for single and multiple characters. (*) for multi and (?) for single. Not required.
More info here: https://learn.microsoft.com/en-us/azure/data-factory/data-factory-onprem-file-system-connector
I have to say that separately none of the above are ideal for what you require and I've already fed back to Microsoft that we need a more flexible attribute that combines the 3 above values into 1, allowing wildcards in various places and a partition by condition that works with more than just date time values.
That said, try something like the below.
"typeProperties": {
"folderPath": "TO_BE_PROCESSED",
"fileFilter": "17-SKO-??-MD1.*" //looks like 2 middle values in image above
}
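If the goal is specifically the numeric extensions .001 through .999, something along these lines might also work (assuming the extension is always exactly three characters; as far as I know, fileFilter on its own won't recurse into subfolders):

"typeProperties": {
    "folderPath": "TO_BE_PROCESSED",
    "fileFilter": "*.???"
}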
On a side note, there is already a Microsoft feedback item that's been raised for a file move activity, which is currently under review.
See here: https://feedback.azure.com/forums/270578-data-factory/suggestions/13427742-move-activity
Hope this helps
We have used a C# application which we call through 'app services' -> WebJobs.
It is much easier to iterate through folders that way. To load the data into SQL we used SQL bulk insert.
I am going to make some modifications to methods and the biosphere3 database. As I might break things (I have before), I would like to create backups.
Thankfully, there exist backup() methods for just this. For example:
from brightway2 import Database

myBiosphere = Database('biosphere3')
myBiosphere.backup()
According to the docs, this "Write[s] a backup version of the data to the backups directory." Doing so indeed creates a backup, and the location of this backup is conveniently returned when calling backup().
What I wish to do is to load this backup and replace the database I have broken, if need be. The docs seem to stay silent on this, though the docs on serialize say "filepath (str, optional): Provide an alternate filepath (e.g. for backup)."
How can one restore a database with a saved version?
As a bonus question: how is increment_version(database, number=None) called, and how can one use it to help with database management?
The code to backup is quite simple:
def backup(self):
    """Save a backup to ``backups`` folder.

    Returns:
        File path of backup.
    """
    from bw2io import BW2Package
    return BW2Package.export_obj(self)
So you would restore the same as with any BW2Package:
from brightway2 import *
BW2Package.import_file(filepath)
However, I would recommend using backup_project_directory(project) and restore_project_directory(filepath) instead, as they don't go through an (older) intermediate format.
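A minimal sketch of that approach (the project name and the restore path are placeholders; backup_project_directory writes a .tar.gz archive of the project directory):

from bw2io import backup_project_directory, restore_project_directory

# Snapshot the whole project directory before making risky changes.
backup_project_directory("default")

# Later, if biosphere3 (or anything else) gets broken, restore the snapshot.
restore_project_directory("/path/to/brightway2-project-default-backup.tar.gz")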
increment_version is only for the single file database backend, and is invoked automatically every time the database is saved. You could add versioning to the sqlite database backend, but this is non-trivial.
I am getting an import error in a specific environment with a managed CRM 2011 solution. The solution has been imported before into many other environments, but the one in particular where it is failing is throwing the following error:
Dependency Calculation
role With Id = 9e2d2d9b-645f-409f-b31d-3a9c39fcc340 Does Not Exist
I am a bit confused about this. I searched within the solution XML and was not able to find any reference to this particular GUID of 9e2d2d9b-645f-409f-b31d-3a9c39fcc340. I cannot really find it in SQL either, just wandering through the different tables, but perhaps I do not know exactly where to look there.
I have tried importing the solution multiple times. As a desperation effort, I tried renaming all of the security roles in the destination environment prior to importing, but this did not help.
Where is this reference to a security role actually stored? Is this something that is supposed to be within my solution, which my existing CRM deployment is expecting me to import?
How do I fix the problem so that I am able to import this solution?
This is the code we used to fix the issue. We had to run two different scripts. Script A we had to run a total of four times: run it once, attempt the import, and then consult the log to find the next role causing the problem if you receive another error for another role.
To run script A, you must use a valid RoleTemplateId from your database. We just picked a random one. It should not matter precisely which one you use, because you will erase that data element with script B.
After all of the roles were fixed, we got a different error (complaining that the RoleTemplateId was already related to a role) and had to run Script B, which removes the RoleTemplateId from multiple different roles and sets it to NULL.
Script A:
insert into RoleBaseIds(RoleId)
values ('WXYZ74FA-7EA3-452B-ACDD-A491E6821234')
insert into RoleBase(RoleId
,RoleTemplateId
,OrganizationId
,Name
,BusinessUnitId
,CreatedOn
,ModifiedOn
,CreatedBy
)
values ('WXYZ74FA-7EA3-452B-ACDD-A491E6821234'
,'ABCD89FF-7C35-4D69-9900-999C3F605678'
,(select organizationid from Organization)
,'ROLE IMPORT FIX'
,(select BusinessUnitID from BusinessUnit where ParentBusinessUnitId is null)
,GETDATE()
,GETDATE()
,null
)
Script B:
update RoleBase
set RoleTemplateId = NULL
where RoleTemplateID='ABCD89FF-7C35-4D69-9900-999C3F605678'
Perfect solution, worked for me! My only comment would be about an error in Script B: it shouldn't clear the template IDs of all roles for the given template, only the template ID of the newly created "fix" role, as follows:
update RoleBase
set RoleTemplateId = NULL
where RoleID='WXYZ74FA-7EA3-452B-ACDD-A491E6821234'
I would've gladly put this in a comment to the answer, but not enough rep as of now.
We have a job which takes a backup of a VSAM file, followed by the standard Delete-Define-Repro of the same VSAM file. To handle the scenario of trying to delete a non-existing file, we follow the standard practice of setting MAXCC/LASTCC to 0 if DELETE returns a non-zero return code, and then continue the process as if there were no errors.
But sometimes we face a situation where DELETE does not work because the file is open by some user or being read by some other job. In that case the job fails while defining the new VSAM file, because the file is still present (DELETE could not purge it).
Are there any workarounds for this situation? Or can we force-delete a file even if it is held by some other process/user?
Thanks for reading!
You should be able to work out that it would not be a good idea to delete a VSAM file (or any other) whilst it is being used by "something else".
Why don't you test for the specific value from the DELETE?
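For example, a common IDCAMS pattern (dataset and step names here are just placeholders) only resets the condition code when DELETE reports 'entry not found' (LASTCC 8), instead of masking every possible failure:

//DELSTEP  EXEC PGM=IDCAMS
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  DELETE MY.VSAM.KSDS CLUSTER PURGE
  IF LASTCC = 8 THEN SET MAXCC = 0
/*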
If you are doing a backup, then delete/define, it would be a really, really good idea to get exclusive control of the file, else something is going to get messed-up.
You could put a DD with DSN being the VSAM file in question with DISP=OLD, so that your job would only be selected when nothing is using the file.
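Something along these lines (dataset and step names are made up), so the initiator won't start the job until it can get exclusive use of the dataset:

//* DISP=OLD requests an exclusive ENQ on the dataset for this job.
//GATE     EXEC PGM=IEFBR14
//VSAMDD   DD  DSN=MY.VSAM.KSDS,DISP=OLD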
How are you doing the backup? Why are other jobs accessing the file at the same time anyway? Is this in a "test" environment? What type of VSAM file is it? Why are you doing the REPRO, and do you feel that that is the best way to do it?
An actual answer is difficult without knowing all this, and more.