I'm trying to restore a file from a backup content database in SharePoint 2016 by using Get-SPContentDatabase -ConnectAsUnattachedDatabase and drilling down to the item level to call OpenBinary(). I believe this is failing because the BLOB is externalized via StoragePoint, but I'm not sure how to allow this command to access the external BLOB data. Any ideas on what permissions might be necessary? The BLOB and endpoint still exist in SharePoint and on the file share, and I am able to see the item and its properties in PowerShell.
I found a similar issue where the OP said they solved it by giving explicit permissions to the StoragePoint databases, but I'm not sure what permissions or which databases need them: listItem.File.OpenBinary() not working - Remote Blob Storage / FileStreaming not enabled on SQL Server the culprit?
I was able to figure this out: I was testing from a server that didn't have the full StoragePoint installation. Running the same call from one of the web servers in the farm, I was able to open and download the file.
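For reference, a minimal sketch of the kind of call that worked once run from a farm web server; the database, server, site, file, and output paths below are placeholders, not the original values:

# Load the SharePoint snap-in if the script isn't run from the Management Shell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
# Attach to the backup content database without mounting it to the farm
$db = Get-SPContentDatabase -ConnectAsUnattachedDatabase -DatabaseName "WSS_Content_Backup" -DatabaseServer "SQLBACKUP01"
# Drill down from the unattached database to the site, web, and file
$site = $db.Sites | Where-Object { $_.Url -like "*/sites/TeamSite" }
$web  = $site.OpenWeb()
$file = $web.GetFile("Shared Documents/Report.docx")
# OpenBinary() has to resolve the externalized BLOB, so per the resolution above this
# should be run on a web server that has the full StoragePoint installation
$bytes = $file.OpenBinary()
[System.IO.File]::WriteAllBytes("C:\Restore\Report.docx", $bytes)
$web.Dispose()
$site.Dispose()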
I've tried removing a file in an Azure File Share using both the az CLI and Azure Storage Explorer. Both yield the error:
The specified resource may be in use by an SMB client. (ErrorCode: SharingViolation)
I've tried listing file handles with the Azure PowerShell and az CLI commands, but no file handles are shown. Supposedly, this should reveal any file locks.
I've also tried rebooting everything (that I know of!) that is connected to this file share. Other files in the same directory can be deleted. Everything else with this file share seems normal.
Any idea how I can find the source of the lock, and how to delete it?
Can you check whether any other client is accessing the share?
Create another test file in the same storage account (file share) and see whether you hit the same issue.
SharingViolation: The operation failed because the object is already opened and does not allow the sharing mode that the caller requested.
Based on the error message, you may refer to this article, which provides detailed information on file locks: https://learn.microsoft.com/en-us/rest/api/storageservices/managing-file-locks
Also try to unlock all Azure file share locks. The Azure Files troubleshooting article for Windows clients lists common problems, their possible causes, and resolutions, and includes a section on being unable to delete files.
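If you want to retry the handle check and force-close anything it finds, here is a minimal Az PowerShell sketch; the storage account name, key, share name, and file path are placeholders:

# Build a context for the storage account that hosts the file share
$ctx = New-AzStorageContext -StorageAccountName "mystorageacct" -StorageAccountKey "<key>"
# List every open handle on the share, recursively
Get-AzStorageFileHandle -Context $ctx -ShareName "myshare" -Recursive
# Force-close all handles on the stuck file (a directory path also works)
Close-AzStorageFileHandle -Context $ctx -ShareName "myshare" -Path "folder/stuck-file.bin" -CloseAll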
I am looking for a solution to my issue: I would like to use one shared folder with files for my VMs.
I have tested a few solutions, but I always get the same result: my shared folder is disconnected after every VM restart.
The problem is getting Windows Server to keep the credentials in Credential Manager. I have tried doing this with net use, PowerShell, and cmdkey, which is supposed to be the easiest way to establish a persistent connection.
Has anybody had the same issue and found a solution?
I'm using Azure Files on my laptop and it reconnects just fine after months of use, rebooting, and shutting down (I never hibernate). I think I set it up with net use, PowerShell, and manually from Explorer; all paths lead to the same outcome.
Another option is Azure File Sync. Quote:
Use Azure File Sync to centralize your organization's file shares in Azure Files, while keeping the flexibility, performance, and compatibility of an on-premises file server. Azure File Sync transforms Windows Server into a quick cache of your Azure file share. You can use any protocol that's available on Windows Server to access your data locally, including SMB, NFS, and FTPS. You can have as many caches as you need across the world.
Have you looked at the "Persisting Azure file share credentials in Windows" section in the following document: https://learn.microsoft.com/en-us/azure/storage/files/storage-how-to-use-files-windows. Let me know if you have additional questions.
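A minimal sketch of the approach from that document, run once on the server; the storage account name, key, share name, and drive letter are placeholders:

# Store the storage account credentials so Windows can reuse them after a reboot
cmdkey /add:mystorageacct.file.core.windows.net /user:AZURE\mystorageacct /pass:<storage-account-key>
# Map the share persistently; with the cached credential the mapping survives restarts
net use Z: \\mystorageacct.file.core.windows.net\myshare /persistent:yes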
I am trying to quickly access text files via URL. The Azure portal (http://portal.azure.com) can (at best) link to the explore view of a specific folder, but I have not found any way to deep link into a specific file.
I also tried Azure Storage Explorer, which does support adl:// URLs, but (apart from opening slowly) it only browses to the containing folder; it doesn't actually open the file.
My use case is that at the end of each data processing job, I want to print a URL to open a text file for browsing.
Any ideas or workarounds?
In fact, there is no anonymous access allowed for files stored in ADLS. Access needs to be authorized, so you can't open a file via its URL directly.
Based on your situation, I suggest creating your own endpoint (for example, an Azure Function) as a proxy that accesses the resources with authorization. You would call the Azure Function with the URL (or path) of the file you want to open as a parameter; the function then makes the authorized request and returns the content of the file for browsing.
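As a sketch of the authorized call such a proxy would make, assuming an ADLS Gen1 account and the Az.DataLakeStore PowerShell module; the account name and file path are placeholders:

# Sign in with an identity that has been granted access to the Data Lake Store
Connect-AzAccount
# Read the text file's content through an authorized request
Get-AzDataLakeStoreItemContent -Account "myadlsaccount" -Path "/output/results.txt"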
In addition, considering the security of accessing files, you should pay attention to access control in Azure Data Lake Store.
Hope it helps you.
I'm starting to use Windows Azure to manage my Azure databases. I don't have much experience in the IT world; I'm just looking for a way to back up my database (preferably to a local computer) and restore it.
I started reading from here:
http://msdn.microsoft.com/en-us/library/jj650016.aspx#copy
And I ran this code:
CREATE DATABASE destination_database_name
AS COPY OF [source_server_name].source_database_name
But I'm not sure if it's working: in the image below, contoso2 is my original database and the other one is the copy, which does not have any of the tables from the original source.
So please advise on how to back up my databases without using commercial products.
If you need additional data, please let me know.
I recommend reading Business Continuity in Windows Azure SQL Database, which explains the underlying infrastructure available to you and the two main mechanisms for backup: copy database and export/import.
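Note that CREATE DATABASE ... AS COPY OF runs asynchronously, which is why the copy can look empty right after the statement returns. You can check the copy's progress from the master database; a minimal sketch, assuming the SqlServer PowerShell module and placeholder server name and credentials:

# Query the copy status; percent_complete shows how far along the copy is
Invoke-Sqlcmd -ServerInstance "yourserver.database.windows.net" -Database "master" `
    -Username "youradmin" -Password "<password>" `
    -Query "SELECT database_id, start_date, percent_complete, replication_state_desc FROM sys.dm_database_copies"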
There are also third-party products available, some of which don't require you to purchase anything. Here is a good summary, which is still valid. You can also use the Export/Import feature available right from the Windows Azure management portal.
Well, it is easy if you are using SQL Server 2012. If you are not, you can install the Express edition.
Select the database you want to back up in the new Windows Azure portal: https://manage.windowsazure.com
In the footer you will have an option to import/export. Click Export. This opens a modal popup. Select the storage account you want to use and type in an appropriate name for the *.bacpac file.
Once the file is saved to storage, download it locally and open SQL Server 2012 Management Studio. Connect to your local database server, right-click it, and in the context menu you will find Import Data-Tier Application. Select the bacpac file from your local machine and follow the wizard.
At the end you will have your data residing on your local machine.
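If you prefer scripting the last step instead of clicking through Management Studio, the SqlPackage.exe tool that ships with SQL Server 2012 can import the same bacpac. A sketch only; the installation path, file names, and database names are placeholders:

# Import the downloaded bacpac into a new local database
& "C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe" `
    /Action:Import `
    /SourceFile:"C:\backups\contoso2.bacpac" `
    /TargetServerName:"(local)" `
    /TargetDatabaseName:"Contoso2Restored"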
I have an application that is deployed on Windows Azure, in the application there is a Report part, the reports works as shown below.
1. The application generates the report as a PDF file and saves it in a certain folder within the application.
2. A PDF viewer in the application takes the URL of the file and displays it.
As you know, in Windows Azure I will have several VMs behind a load balancer, so I cannot ensure that the request in step 2 will go to the same VM as in step 1, and this causes a problem for me.
Any help is very appreciated.
I know that I can use blob storage, but this is not the problem.
The problem is that after creating the file on a certain VM, I give the PDF viewer the URL of the PDF file as "http://..../file.pdf". This generates a new request that I cannot control, and I cannot know which VM will serve it, so even if I saved the file in blob storage it would not solve my problem.
As in any farm environment, you have to consider saving files in storage that is common to all machines in the farm. In Windows Azure, that common storage is Windows Azure Blob Storage.
You have to make some changes to your application so that it saves the files to blob storage. If these are public files, you just mark the blob container as public and provide the full URL of the file in blob storage to the PDF viewer.
If your PDF files are private, you have to mark your container as private. The second step is to generate a Shared Access Signature (SAS) URL for the PDF and provide that URL to the PDF viewer, as in the sketch below.
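A minimal sketch of generating such a SAS URL with the current Az PowerShell module (the Windows Azure SDK of the time exposed an equivalent API); the account, key, container, and blob names are placeholders:

# Context for the storage account that holds the private container
$ctx = New-AzStorageContext -StorageAccountName "youraccount" -StorageAccountKey "<key>"
# Create a read-only SAS URL for the PDF that expires in one hour
$sasUrl = New-AzStorageBlobSASToken -Context $ctx -Container "reports" -Blob "file.pdf" `
    -Permission r -ExpiryTime (Get-Date).AddHours(1) -FullUri
# Hand $sasUrl to the PDF viewer
$sasUrl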
Furthermore, while developing you can explore your Azure storage using any of the (freely and not so freely) available tools for Windows Azure Storage. Here are some:
Azure Storage Explorer
Azure Cloud Storage Studio
There are a lot of samples showing how to upload a file to Azure Storage; just search with your favorite search engine. Check out these resources:
http://msdn.microsoft.com/en-us/library/windowsazure/ee772820.aspx
http://blogs.msdn.com/b/windowsazurestorage/archive/2010/04/11/using-windows-azure-page-blobs-and-how-to-efficiently-upload-and-download-page-blobs.aspx
http://wely-lau.net/2012/01/10/uploading-file-securely-to-windows-azure-blob-storage-with-shared-access-signature-via-rest-api/
The Windows Azure Training Kit has a great lab named "Exploring Windows Azure Storage".
Hope this helps!
UPDATE (following question update)
The problem is that after creating the file on a certain VM, I give the PDF viewer the URL of the PDF file as "http://..../file.pdf". This generates a new request that I cannot control, and I cannot know which VM will serve it, so even if I saved the file in blob storage it would not solve my problem.
Try changing your logic a bit and follow my instructions. When your VM creates the PDF, upload the file to a blob, then give the full blob URL of your PDF file to the PDF viewer. That way the request does not go to any VM, but straight to the blob, and the full blob URL will be something like http://youraccount.blob.core.windows.net/public_files/file.pdf
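A minimal sketch of that upload step with the current Az PowerShell module (the era's Windows Azure SDK offers the same operation); the account, key, container, and paths are placeholders:

# Upload the generated PDF to a public container and get back its URL
$ctx  = New-AzStorageContext -StorageAccountName "youraccount" -StorageAccountKey "<key>"
$blob = Set-AzStorageBlobContent -Context $ctx -Container "public_files" -File "D:\reports\file.pdf" -Blob "file.pdf"
# This is the URL to hand to the PDF viewer
$blob.ICloudBlob.Uri.AbsoluteUri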
Or am I missing something? From what I understand, your process flow is as follows:
User makes a special request which would cause PDF file generation
File is generated on the server
The full URL to the file is sent back to the client so that a client-side PDF viewer can render it
If this is the flow, then with the suggested changes it will look like the following:
User makes a special request which causes PDF file generation
File is generated on the server
File is uploaded to blob storage
The full URL of the file in blob storage is returned to the client so that it can be rendered there
What is not clear, or what is different in your process flow? I do exactly the same for on-the-fly report generation and it works quite well. The only difference is that my app is Silverlight-based and I force a file download instead of displaying the report inline.
An alternative approach is not to persist the file at all.
Rather, generate it in memory, set the content type of the response to "application/pdf", and return the binary content of the report. This is particularly easy if you're using ASP.NET MVC, but you can use an HttpHandler instead. It is a technique I regularly use in similar circumstances (though lately with Excel reports rather than PDF).
The usefulness of this approach does depend on how you're generating the PDF, how big it is and what the load is on your application.
But if the report is to be served only once, persisting it just so the browser can make another request to retrieve it is wasteful (and you have to provide the persistence mechanism).
If the same file is to be served multiple times and is resource-intensive to create, then it makes sense to persist it.
You want to save your PDF to storage that is both centralized and persistent; a VM's hard drive is neither. Azure Blob Storage is likely the simplest and best solution. It is dirt cheap to store and access, and the API for storing and accessing files is very simple.
There are two things you could consider.
Windows Azure Blob + Queue Storage
Blob Storage is a cost-effective way of storing binary data and sharing it between instances. You would most likely use a worker role to create the report, which would store the report to Blob Storage and drop a completed message on the Queue.
Your web role instance could monitor the queue looking for reports that are ready to be displayed.
It would be similar to the concept used in the Windows Azure Guest Book app.
Windows Azure Caching Service
Similarly (and much more expensively) you could share the binary using the Caching Service. This gives you a common layer between your VMs in which to store things; however, you won't be able to provide a URL to the PDF, so you'd have to download the binary and either use an HttpHandler or change the content type of the response.
This would be much harder to implement, very expensive to run, and is not guaranteed to work in your scenario. I'd still suggest blobs over any other approach.
Another option would be to implement a sticky session handler of your own. Take a look at:
http://dunnry.com/blog/2010/10/14/StickyHTTPSessionRoutingInWindowsAzure.aspx