Since installing Kaspersky Total Security two days ago, my Azure backups keep failing. The job gets through 'Taking snapshot of volumes', 'Preparing storage' and 'Estimating size of backup items', then fails. The error message for each volume is 'Unable to find changes in a file. This could be due to various reasons (0x07EF8)'.
Data in my files has definitely been changing. I have tried two things: 1. Disabled Kaspersky. 2. Completely deleted the backup and rebuilt it from scratch. Neither made any difference at all.
Try excluding all VHD files and adding cbengine.exe to the Trusted applications list in Kaspersky; see this article:
http://www.cantrell.co/blog/2016/3/21/microsoft-azure-backup-feat-kaspersky
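If you want to double-check where the MARS agent binary and its staging VHDs actually live before adding those exclusions, a quick PowerShell check like this works; the paths below are the usual install defaults, so adjust them if your agent was installed elsewhere:

    # Default install location of the Azure Recovery Services (MARS) agent - adjust if needed
    Get-ChildItem "C:\Program Files\Microsoft Azure Recovery Services Agent" -Recurse -Filter cbengine.exe |
        Select-Object FullName

    # The scratch folder is where backup VHDs are staged during a job
    Get-ChildItem "C:\Program Files\Microsoft Azure Recovery Services Agent\Scratch" -Recurse -Include *.vhd, *.vhdx |
        Select-Object FullName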
Posting here as Server Fault doesn't seem to have the detailed Azure knowledge.
I have an Azure storage account with a file share. The file share is connected to an Azure VM as a mapped drive. An FTP server on the VM accepts a stream of files and stores them directly in the file share.
There are no other connections. Only I have Azure admin access, and a limited number of support people have access to the VM.
Last week, for unknown reasons, 16 million files, nested in many sub-folders (by origin and date), instantly moved into an unrelated subfolder three levels deep.
I'm baffled as to how this could happen. There is a clear, instant cut-off point at which the files moved.
As a result, I'm seeing increased costs on LRS. I'm assuming this is because Azure storage is internally replicating the change at my expense.
I have attempted to copy the files back using a VM and AzCopy. That process crashed midway through, leaving me with a half-completed copy operation. The failed attempt took days, which makes me confident the move wasn't one of the support guys dragging a folder by accident: a manual move of that many files would not have been instant.
Questions:
Is it possible to instantly move so many files, and if so, how?
Is there a solid way I can move the files back, taking into account the half-copied files? I mean an Azure backend operation rather than writing an app / PowerShell / AzCopy.
Is there a cost-efficient way of doing this? (I'm on the Transaction Optimised tier.)
Do I have a case to get Microsoft to do something here? We didn't move the files... I assume something internal messed up.
Thanks
A tool that supports server-side copy (like AzCopy) can move the files back quickly, because the copy happens inside the storage service rather than through your VM. If you want to investigate the root cause, your best bet is to open a support ticket; the Azure support team can guide you on this on a best-effort basis.
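As a rough sketch, a service-to-service AzCopy run within the same share would look something like the following; the account, share, folder names and SAS tokens are placeholders, not anything from your environment:

    # Server-to-server copy back to the original location (all names and SAS tokens are placeholders)
    azcopy copy "https://<account>.file.core.windows.net/<share>/wrong-subfolder/<path>?<SAS>" `
                "https://<account>.file.core.windows.net/<share>/original-location/?<SAS>" `
                --recursive `
                --overwrite=false   # skip files the earlier half-finished attempt already restored; use true to just re-copy everything

Once you've verified the restore, the misplaced tree can also be deleted server-side with azcopy remove --recursive.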
We have a VPS running on Google Cloud which had a very important folder in a user directory. An employee of ours deleted that folder, and we can't figure out how to recover it. I came across extundelete, but it seems the partition needs to be unmounted for it to work, and I don't understand how I would do that on Google Cloud. This project took more than a year, and that was the latest copy after a fire took out the last copy on our local servers.
Could anyone please help or guide me in the right direction?
Getting any files back from your VM's disk may be tricky (at best) or impossible (most probably) if the files got overwritten.
The easiest way would be to get them back from a copy or snapshot of your VM's disk. If you have a snapshot of your disk (taken either manually or automatically) from before the folder in question got deleted, then you can get your files back.
If you don't have any backups then you may try to recover the files. I've found many guides and tutorials; let me link the ones I believe would help you the most:
Unix/Linux undelete/recover deleted files
Recovering accidentally deleted files
Get list of files deleted by rm -rf
------------- UPDATE -----------
Your last chance in this battle is to make two clones of the disk,
then detach the original disk from the VM and attach one of the clones to keep your VM running. Use the second clone for any experiments, and keep the original untouched in case you mess up the second clone.
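A sketch of that with the gcloud CLI could look like this; every disk, VM, snapshot name and zone below is a placeholder, and the VM has to be stopped before its boot disk can be swapped:

    # Snapshot the original first so there is always a known-good copy
    gcloud compute disks snapshot original-disk --zone=us-central1-a --snapshot-names=pre-recovery-snap

    # Create two clones from that snapshot
    gcloud compute disks create clone-running --source-snapshot=pre-recovery-snap --zone=us-central1-a
    gcloud compute disks create clone-recovery --source-snapshot=pre-recovery-snap --zone=us-central1-a

    # Stop the VM, swap its boot disk for the first clone, then start it again
    gcloud compute instances stop my-vm --zone=us-central1-a
    gcloud compute instances detach-disk my-vm --disk=original-disk --zone=us-central1-a
    gcloud compute instances attach-disk my-vm --disk=clone-running --zone=us-central1-a --boot
    gcloud compute instances start my-vm --zone=us-central1-a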
Now create a new Windows VM and attach your second clone as an additional disk. At this moment you're ready to try various data recovery software:
UFS Explorer
Virtual Machine Data Recovery
There are plenty of others to try from too.
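Creating that throwaway Windows VM and attaching the second clone can be done along these lines (names, zone and image family are again placeholders):

    # Spin up a scratch Windows VM and attach the second clone as an extra data disk
    gcloud compute instances create recovery-vm --zone=us-central1-a --image-family=windows-2022 --image-project=windows-cloud
    gcloud compute instances attach-disk recovery-vm --disk=clone-recovery --zone=us-central1-a --mode=rw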
Another approach would be to create an image from the original disk and export it as a VMDK image (saving it to a storage bucket). Then download it to your local computer and use, for example, VMware VMDK Recovery or other specialized software for extracting data from virtual machine disk images.
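A rough sketch of that export path (bucket and names are placeholders; the image export relies on the Cloud Build based export tooling being enabled in the project):

    # Turn the original disk into an image, then export the image as a VMDK into a bucket
    gcloud compute images create recovery-image --source-disk=original-disk --source-disk-zone=us-central1-a
    gcloud compute images export --image=recovery-image --destination-uri=gs://my-recovery-bucket/recovery.vmdk --export-format=vmdk

    # Pull it down locally for the offline recovery tools
    gsutil cp gs://my-recovery-bucket/recovery.vmdk .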
We are scanning ADLS Gen 2 data lake successfully with Purview. However, if a folder is deleted in the lake and you re-scan, the scan does not remove the deleted folder. The deleted folder remains in Purview, but the last modified date (from the scan) remains as the previous scan date/time from when it was present. How can I purge these now invalid entries? Removing the previous scan does not work. Removing the entire source from Purview leaves the scan results behind in the register and a new scan does not clean them up. There is also no manual delete/purge option. The only option seems to be to remove the entire purview account from Azure, redeploy and reconfigure everything.
Am I missing a trick?
Reading this (https://learn.microsoft.com/en-us/azure/purview/concept-detect-deleted-assets), this mostly seems like expected behaviour. Did you try scanning more than twice, leaving at least 5 minutes between scans?
To keep deleted files out of your catalog, it's important to run regular scans. The scan interval is important, because the catalog can't detect deleted assets until another scan is run. So, if you run scans once a month on a particular store, the catalog can't detect any deleted data assets in that store until you run the next scan a month later. 😕
I have an Azure Service Fabric development cluster running locally with two applications.
After a two-week holiday I came back to find my hard drive completely full; consequently, nothing really works any more.
The sfdevcluster\log\traces folder has many *.etl files, all larger than 100 MB,
and all kinds of other log files larger than 250 MB are also present.
So my questions: how do I disable tracing/logging on Azure Service Fabric, and are there tools to administer the log files?
The PowerShell script that does the cluster setup magic is:
Program Files\Microsoft SDKs\Service Fabric\ClusterSetup\DevClusterSetup.ps1
Looking inside, there is a function called DeployNodeConfiguration which sets the log and data paths using the PowerShell cmdlet New-ServiceFabricNodeConfiguration. Unfortunately, there does not seem to be a way to limit the size of those folders.
I believe that your slowness / freezes are due to insufficient space on the OS drive (happened to me too, haha). A workaround can be to set the location of those folders to a non-OS drive, so a full log folder no longer takes the OS down with it.
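If your SDK's copy of DevClusterSetup.ps1 accepts the data/log root parameters (recent SDK versions do, but check yours), a sketch of moving the cluster off the OS drive looks like this; the D:\ paths are just examples and the script paths assume the default SDK install location:

    # Tear down the existing dev cluster first (this is what actually frees the space on C:)
    & "C:\Program Files\Microsoft SDKs\Service Fabric\ClusterSetup\CleanCluster.ps1"

    # Recreate it with the data and log roots on a non-OS drive
    & "C:\Program Files\Microsoft SDKs\Service Fabric\ClusterSetup\DevClusterSetup.ps1" `
        -PathToClusterDataRoot "D:\SfDevCluster\Data" `
        -PathToClusterLogRoot "D:\SfDevCluster\Log"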
Hope this helps
This turned out to be a bug in Service Fabric. Upgrade your local cluster to the latest version, 6.1.472.9494, which fixes the issue. More details here.
I updated the production deployment yesterday morning, then made changes to the service files over a remote connection,
adding and updating files, and everything was OK.
This morning all the changes I had made after the deployment were undone, customers are using the old version, and this has cost us hundreds of thousands of pounds.
I need to know what happened; nothing appeared in the operations log.
Probably what has happened is that Microsoft has updated your servers at the cloud datacentre and re-deployed your application from the original deployment package. This is in their terms and conditions: you should not make any important manual changes to the deployment after it is deployed unless they are stored in the portal (environment settings etc.); otherwise they might be lost during updates or reboots.
I learned this the hard way too. I had a cache role with only one instance (I thought it only made sense with one instance) and while updates happened, my whole site went down several times over several days!
PaaS services are stateless, which means the VMs running your service can be destroyed and recreated at any time, at which point the VM will be recreated with the content from your original .cspkg.
For more information see http://blogs.msdn.com/b/kwill/archive/2012/09/19/role-instance-restarts-due-to-os-upgrades.aspx and http://blogs.msdn.com/b/kwill/archive/2012/10/05/windows-azure-disk-partition-preservation.aspx.
As others have said, PaaS Web Roles are stateless. If you're making manual configuration changes to your deployed solution package after it has been auto-deployed then any re-deployment by the Azure fabric will simply deploy the package minus your manual changes. To solve this issue you could use startup tasks to apply your manual changes using a PowerShell script or similar (depending on what you're changing). See http://msdn.microsoft.com/en-us/library/jj129544.aspx.
Note that startup tasks don't just run when a machine gets re-imaged or rebooted; they run every time a role instance starts, so whatever they do needs to be safe to repeat.
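As a sketch of what such a startup task's script might do (the script name, registry key and folder layout here are made up for illustration; the task itself would be registered as an elevated <Task> in ServiceDefinition.csdef), the idea is simply to re-apply your tweaks in a way that is safe to run on every role start:

    # ApplyCustomizations.ps1 - hypothetical startup script, runs on every role start
    # 1. Copy tweaked files shipped inside the package over the deployed defaults
    #    (%RoleRoot%\approot is typically the worker-role app folder; adjust for web roles)
    Copy-Item -Path "$PSScriptRoot\custom\*" -Destination "$env:RoleRoot\approot" -Recurse -Force

    # 2. Idempotent machine-level change (safe to repeat)
    New-Item -Path "HKLM:\SOFTWARE\Contoso\MyService" -Force | Out-Null
    Set-ItemProperty -Path "HKLM:\SOFTWARE\Contoso\MyService" -Name "FeatureFlag" -Value 1 -Type DWord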