I recently spun up a SQL database on Windows Azure. I created a couple of tables and loaded some data. I then went into SQL Server Management Studio on my local machine and typed
Truncate table XXXX
And now it has been sitting at "Executing Query" ever since. Is TRUNCATE not supported by Azure?
Tony:
I think you are right. I used the suggestion here and after resizing the db, all of the connections went away and I could truncate without a problem.
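For anyone hitting the same thing later: the truncate was presumably waiting on a lock held by another open connection, and a query like the following (standard DMVs, available on Azure SQL) should show which session is blocking before you resort to resizing. This is only a sketch; the KILL at the end is illustrative and 55 is a placeholder session id.

-- Show blocked requests and the session blocking them
SELECT
    r.session_id,
    r.blocking_session_id,
    r.status,
    r.wait_type,
    t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;

-- If the blocker turns out to be a stale connection you recognise,
-- it can be ended (55 is a placeholder session id)
KILL 55;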
Thanks
I have an Azure VM with Pentaho Data Integration installed. I'm trying to build some ETL that loads a dimensional model from the staging area, but when I start a transformation, the load speed of PDI into any Azure database is painfully slow.
Is it possible to have PDI working in the cloud with Azure databases? Is there some configuration step needed to achieve a reasonable loading speed?
PS:
VM and databases are in the same region
There is a firewall rule to allow port access
Reading speed is working just fine
PDI 8.1, using the Table Output step
I've been experiencing the same speed problem, but I'll tell you my workarounds for it.
First of all: download and install the latest JDBC driver that lets you connect to Azure SQL Database. The documentation links it here, but the way I do it is to keep it synced from here on GitHub; either way will let you use the latest driver in PDI.
Second workaround: for large files, what I've found most powerful is the BCP utility driven from PowerShell or a Linux batch script. It doesn't matter whether the files are local or in Azure Blob Storage, but you might need credentials for this (see the sketch after these workarounds).
Last but not least: use Azure Data Factory V2 to move and load files (if you're like me and try to keep everything in PDI until you have to load it, the HTTP GET step will let you trigger an ADF pipeline).
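On the Blob Storage point above, a rough T-SQL alternative to BCP is BULK INSERT against an external data source. Treat this as a hedged sketch only: the credential, SAS token, storage account, container, file and table names are all placeholders, and the database needs a master key before the scoped credential can be created.

-- One-time setup: credential and external data source for the container
-- (requires CREATE MASTER KEY in the database first; SAS token is a placeholder)
CREATE DATABASE SCOPED CREDENTIAL BlobCredential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = 'sv=...';   -- SAS token without the leading '?'

CREATE EXTERNAL DATA SOURCE StagingBlob
WITH (
    TYPE = BLOB_STORAGE,
    LOCATION = 'https://mystorageaccount.blob.core.windows.net/staging',
    CREDENTIAL = BlobCredential
);

-- Load a CSV from the container straight into the target table
BULK INSERT dbo.FactSales
FROM 'fact_sales.csv'
WITH (DATA_SOURCE = 'StagingBlob', FORMAT = 'CSV', FIRSTROW = 2);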
Good luck and let me know if you get it.
I am currently on a project where I have two VMs (virtual machines), one Windows and one Linux.
I also have an Oracle database where I have a simple table called "Material".
On the two VMs, I want to connect to my Oracle database without any client or libraries. The thing is, I want to create a script that runs on the VMs and can connect to my database and insert some data into my table "Material", but I can't install anything on my VMs (like the mysqlclient, for example).
So is it possible to connect to a database without installing anything on my VM? Or perhaps can I access an online client to send my SQL to my Oracle database?
I know it's quite difficult to understand my problem so if you have any question, feel free to ask.
I am working at a business in New Zealand. We currently use a remote server (Plexus) to store a large amount of data (some tables > 2 billion rows). We have started down the SharePoint route, and I have created a number of databases and apps in SharePoint that use this data. Currently, I have to run a program in New Zealand that downloads the data to our local server and then pushes that data up into an Azure database, which the web apps connect to. I would like to remove this middle step for many reasons, but the biggest reason is that the web connection between NZ and the US tends to result in a lot of timeouts and long pulls due to having to pull large data sets across the Pacific. The remote database we are using is Plexus.
Ideally, I would like to have my C# code sitting in Azure and have it connect to the remote server directly. This way I could simply send the SQL request to Plex and have the data go directly into the Azure databases. The major advantage is that everything would be based in the US, which would make things a lot faster.
The major hurdle is that we need to install an ODBC driver given to us by the remote server into Azure so it recognises the calls as genuine. Our systems administrator has said he has looked into it and it seems this can't be done?
I was hoping someone in the Stack Overflow community has encountered a similar issue and resolved it?
Note: please don't think I am asking whether Azure has an ODBC connection, because I know it does. I am not asking if I can connect TO Azure; I am asking if I can connect Azure to another external data source.
In a Worker Role/Cloud Service in Azure you can install the ODBC driver in a startup task using PowerShell's ODBC cmdlets.
More info here: PowerShell Add-OdbcDsn and here: PowerShell startup task in cloud services
One option is to create a virtual machine in the same Azure data center as your database and install your ODBC driver and your C# app.
I have a problem with an Azure SQL DB.
A bacpac export of the DB is only 18 MB, but the billed DB size on the server already exceeds 5 GB.
Is there any way to see the actual size of the data?
Is there any way to switch the DB to the simple recovery model?
Or is there any other way to shrink the log files?
Or should I just drop the database and restore it from a backup?
The problem was caused by fragmented indexes.
You can find good scripts for fixing those from here:
http://blogs.msdn.com/b/dilkushp/archive/2013/07/28/fragmentation-in-sql-azure.aspx
After running the scripts (and waiting about 24 hours), the size of the DB went back to 300 MB.
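For context, a minimal sketch of the kind of check and rebuild those scripts perform, using sys.dm_db_index_physical_stats and ALTER INDEX REBUILD (the table and index names in the rebuild are placeholders):

-- List indexes in the current database with heavy fragmentation
SELECT
    OBJECT_NAME(ips.object_id)        AS table_name,
    i.name                            AS index_name,
    ips.avg_fragmentation_in_percent,
    ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- Rebuild one heavily fragmented index (placeholder names);
-- ONLINE = ON keeps the table available during the rebuild
ALTER INDEX IX_MyTable_MyIndex ON dbo.MyTable REBUILD WITH (ONLINE = ON);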
The bacpac file is going to be significantly smaller than the DB, as it's a compressed version of the data, and I believe it strips out things like index content and only stores the index definitions, which are re-indexed on restore, so one size shouldn't be taken as indicative of the other.
For example, I have a database on SQL Azure configured as a 10 GB Premium DB which is currently using 2.7 GB, and it exports to a BACPAC of about 300 MB.
What kind of database have you configured?
What Edition, Size and Usage settings are you currently being shown?
** Edit ** Image wasn't loading so here's the external link - http://i.snag.gy/JfsPk.jpg
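If it's easier to answer from a query window than from the portal blade in that image, something like the following (DATABASEPROPERTYEX, documented for Azure SQL; treat it as a sketch) returns the edition, service objective and size cap:

-- Current database's edition, service tier and maximum size
SELECT
    DATABASEPROPERTYEX(DB_NAME(), 'Edition')          AS edition,
    DATABASEPROPERTYEX(DB_NAME(), 'ServiceObjective') AS service_objective,
    DATABASEPROPERTYEX(DB_NAME(), 'MaxSizeInBytes')   AS max_size_in_bytes;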
The next thing to check is the size breakdown in the database by table/object.
Connect to your Azure environment with Management Studio and run the following query, which will give a per-table breakdown of the database with sizes in MB.
select
    o.name as object_name,
    sum(p.reserved_page_count) * 8.0 / 1024 as reserved_mb
from
    sys.dm_db_partition_stats as p
    join sys.objects as o
        on p.object_id = o.object_id
group by
    o.name
What are your disaster recovery plans for Windows SharePoint Services 3.0?
Currently we are backing up all databases (one content, admin, search and config) using SQL backup tools, and backing up the front-end server via Data Protector.
To test our backups, we use another server farm, restore the content database (following the procedure on TechNet) and create a new application that uses this database. We just have to redeploy solutions on the newly created SharePoint application.
However, we have to change the database access credentials (on SQL Server): the user accounts used in production aren't the same as those used on our "test" farm.
At the end, we can restore our content database and access all our sites. Searching doesn't work, but we're investigating.
Is this restore scenario reliable (as in, supported by Microsoft)?
You can't really back up / restore both the config database and the search database:
restoring the config database only works if your new farm has exactly the same server names
when you restore the search database, the full-text index is not synchronized; however, this is not a problem, as you can just reindex.
As a result, I would say that yes, this is reliable for content. But take care of the following:
You may have to redo some configuration (AAM, managed paths...).
This does not include customizations; you will want to keep a backup of your solutions.
Reliability is in the eye of the beholder. In this case, if your tests of the restore process are successful, then yes, it is reliable.
A number of my clients run SharePoint (both MOSS and WSS) in virtual environments; SQL Server is also virtualised and backed up both with SQL tools and with Volume Shadow Copy.
The advantage of a Virtual Environment is downtime is only as long as it takes your Virtual Server host to boot the images.
If you are not using virtualisation, then remember to back up transaction logs regularly, as this will make it easier to restore to a given point in the day - it also means that your transaction logs don't grow too big!
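As a rough sketch (the database name and file path are placeholders, and this assumes the full recovery model on an on-premises SQL Server), a scheduled log backup is just:

-- Back up the transaction log of the content database
-- (WSS_Content and the path are placeholders)
BACKUP LOG [WSS_Content]
TO DISK = N'D:\Backups\WSS_Content_log.trn'
WITH CHECKSUM, STATS = 10;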
I prefer to use the stsadm -o backup command 'for catastrophic backup', as it says in the help. This can be scheduled, but it requires some maintenance of the backup metadata XML file when you start running out of disk space and need to archive older backups. It has the advantage of carrying over timer jobs (usually) and other configuration because, as Nico says, restoring the config database won't work in most situations.
To restore, you can use the user interface, which is nice, and not have to mess around with much else. I think it restores your solutions as well, but I haven't tested that extensively.