I am currently researching how to migrate the EPiServer 11.10.1 media blobs from a Windows DFS share to an Azure Storage Account.
The configuration I have tried is as follows:
web.config (Note: only relevant sections are shown)
<dependentAssembly>
<assemblyIdentity name="EPiServer.Azure" publicKeyToken="8fe83dea738b45b7" culture="neutral" />
<bindingRedirect oldVersion="0.0.0.0-9.4.4.0" newVersion="9.4.4.0" />
</dependentAssembly>
<episerver.framework updateDatabaseSchema="false">
<clientResources debug="false" />
<appData basePath="" />
<scanAssembly forceBinFolderScan="true" />
<blob defaultProvider="azureblobs">
<providers>
<add name="azureblobs" type="EPiServer.Azure.Blobs.AzureBlobProvider,EPiServer.Azure" connectionStringName="EPiServerAzureBlobs" container="mycontainer"/>
</providers>
</blob>
connectionStrings.config (Note: only relevant sections are shown)
<connectionStrings>
<clear />
<add name="EPiServerAzureBlobs" connectionString="DefaultEndpointsProtocol=https;AccountName=storage00001;AccountKey=NuJBkcpuCbPKH+lcw65OwELkJ1nptJ7CY2Hn4MqNwqwL4WY4C3caSSSJYgH91J6MH9qZPPOOSbAzFZrNk8eIHt6PA==" />
</connectionStrings>
When starting the site, the following error is shown in the logs:
(Note: only relevant sections are shown)
2019-02-19 13:12:41,875 [94] [94a2e50f-06c6-4ddc-a6f7-2d1c43b0735d] ERROR
EPiServer.Global: Unhandled exception in ASP.NET
Microsoft.WindowsAzure.Storage.StorageException: The remote server returned
an error: (404) Not Found. ---> System.Net.WebException: The remote server
returned an error: (404) Not Found.
at System.Net.HttpWebRequest.GetResponse()
Request Information
RequestID:5e731c27-d01e-00cc-4254
RequestDate:Tue, 19 Feb 2019 13:12:41 GMT
StatusMessage:The specified blob does not exist.
I am unable to see an error that would explain why the media blob (an image) is not displayed.
The following has been tried already but to no avail:
Permissions: Azure Storage Account - Blobs - Container (anonymous read access for containers and blobs)
Permissions: The media blobs (images) are accessible in a browser independently of the EPiServer platform
Microsoft Support has confirmed there are no known issues affecting the Storage Account
In case it makes a difference, EPiServer itself is running on a dedicated VM (IaaS) and uses Azure SQL for its databases.
Does appData basePath="" need to contain a value to work with an Azure Storage Account?
Any suggestions on what might be (or what I am doing) wrong are welcome.
Thank you.
Thank you for your proposed answer, Ted, but the solution was more straightforward. I posted a similar question to the official EPiServer forums:
https://world.episerver.com/forum/developer-forum/-Episerver-75-CMS/Thread-Container/2019/2/migrating-to-azure-storage-accounts-media-blob---image---not-displayed-in-browser/
It was simply a matter of adding the suffix to the connection string:
;EndpointSuffix=core.windows.net
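For reference, the corrected connection string entry (with the account key replaced by a placeholder) then looks like this:
<add name="EPiServerAzureBlobs" connectionString="DefaultEndpointsProtocol=https;AccountName=storage00001;AccountKey=<account-key>;EndpointSuffix=core.windows.net" />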
However, whilst researching I did read suggestions similar to your answer, so I think upvoting it will benefit users Googling a similar problem.
The 404 is because you've changed the blob provider without migrating the actual blobs. Thus, when an existing blob (referenced in the Episerver database) is requested, it is no longer found.
You may also be interested in the blob converter package mentioned here: How to move blobs from App_Data folder of episerver cms site to azure blob storage hosted in azure cloud
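If you only need to copy the existing blob files themselves, one rough option (a sketch using the older AzCopy syntax shown elsewhere on this page; the DFS path is hypothetical, while the account and container names are taken from the question) is to upload the blobs folder recursively to the target container:
AzCopy "\\dfs-server\share\EPiServer\Blobs" "https://storage00001.blob.core.windows.net/mycontainer" /DestKey:<account-key> /S
The relative folder structure under the blobs root needs to be preserved so that the paths Episerver has stored in the database still resolve.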
Related
I can deploy an app using the Azure extension in Visual Studio Code, which also creates the web.config file, and the app works fine. But when I try to upload the ZIP file using ZipDeployUI and manually add the web.config, the app throws the error "You do not have permission to view this directory or page". The difference is that with Visual Studio Code the files are uploaded straight into the /wwwroot folder, whereas ZipDeployUI creates another folder named after the one that was zipped on the local system.
You do not have permission to view this directory or page.
That is basically a hint that Azure encountered an error while running your web app. Since it's in production, it does not show any useful error messages. For testing/debugging purposes you can turn on Azure's detailed messaging, and turn it back off when the app is ready for production. To do so, follow these two steps:
Log in to Azure > App Services (left side menu) > Your Web App > App Service logs (search box is at the top if you can't find it), then turn on Detailed Error Messages or turn on all of the logging options, up to you.
Now add the following in your Web Config file,
In your web.config file, add <customErrors mode="Off" /> BEFORE the closing </system.web> tag. Similarly, add <httpErrors errorMode="Detailed"></httpErrors> BEFORE </system.webServer>. Finally, upload the web.config to Azure and cross your fingers.
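Placed in context, the relevant parts of the web.config would look roughly like this:
<system.web>
  <!-- existing settings -->
  <customErrors mode="Off" />
</system.web>
<system.webServer>
  <!-- existing settings -->
  <httpErrors errorMode="Detailed"></httpErrors>
</system.webServer>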
If you follow the steps correctly, the detailed error messages will be shown, which should help you figure out the issue.
A few error cases and their resolutions were:
The "You do not have permission to view this directory or page" error also appears when you have restricted IPs in the IIS configuration. Check your web.config file and add your IP address in the security section, like:
<security>
<ipSecurity allowUnlisted="false">
<clear />
<add ipAddress="192.168.250.70" allowed="true" />
</ipSecurity>
</security>
Remove it if you do not want to restrict any IP address.
Check that the zip is unpacked under site > wwwroot; otherwise, try restarting your function or web app.
You might need to tweak this depending on what your application structure looks like after the build, for example site\wwwroot\dist\; if the app name is part of the folder structure, you might need site\wwwroot\dist\<app-name>\.
Sometimes Azure Active Directory authentication is created automatically by the Function App's Authentication feature (e.g. MS Graph - User.Read, Azure Service Management - user_impersonation). If so, removing those will let you access the directory again.
I've got the Redis Session State Provider working fine locally with my ASP.Net site and in Azure with my Azure Website. But I've got a question about configuration...
Is there any way to store the configuration for that in the Azure Website itself using the App Settings (or Configuration Strings) section in the Website Properties screen?
That would be very convenient because it would mean that I don't have to modify the web.config file when I publish. I already do this for connection strings and app settings, but I just don't see a way to do that for anything in the <system.web> node of the web.config file, like the <sessionState> node.
There isn't a way to change the behaviour of provider-based session state so that it stops taking its configuration from the web.config file.
You could write your own provider and modify where it finds the connection details from so you can publish those details somewhere other than in the web.config, but this wouldn't be standard behaviour.
This question shows a way to make this work.
<appSettings>
<add key="REDIS_CONNECTION_STRING" value="[your dev connection string]" />
</appSettings>
<system.web>
<sessionState mode="Custom" customProvider="RedisProvider">
<providers>
<add name="RedisProvider" type="Microsoft.Web.Redis.RedisSessionStateProvider" connectionString="REDIS_CONNECTION_STRING" />
</providers>
</sessionState>
</system.web>
Then, in the portal, you can create an app setting named 'REDIS_CONNECTION_STRING' containing the correct connection string. You cannot use the connection strings section of the web.config or of the Azure portal; it must be app settings. I'm not sure why, but the connection strings section just uses whatever is in the web.config and is not replaced with what is in the portal.
I want to use AzCopy to copy a blob from account A to account B. But instead of using an access key for the source, I only have a Shared Access Signature (SAS). I've tried appending the SAS to the URL, but it throws a 404 error. This is the syntax I tried:
AzCopy "https://source-blob-object-url?sv=blah-blah-blah-source-sas" "https://dest-blob-object-url" /destkey:base64-dest-access-key
The error I got was
Error parsing source location "https://source-blob-object-url?sv=blah-blah-blah-source-sas":
The remote server returned an error: (404) Not Found.
How can I get AzCopy to use the SAS URL? Or does it simply not support SAS?
Update:
With the SourceSAS and FilePattern options, I'm still getting the 404 error. This is the command I use:
AzCopy [source-container-url] [destination-container-url] [file-pattern] /SourceSAS:"?sv=2013-08-15&sr=c&si=ReadOnlyPolicy&sig=[signature-removed]" /DestKey:[destination-access-key]
This will get me a 404 Not Found. If I change the signature to make it invalid, AzCopy will throw a 403 Forbidden instead.
You're correct. A copy operation using SAS on both source and destination blobs is only supported when the source and destination blobs are in the same storage account. Copying across storage accounts using SAS is still not supported by Windows Azure Storage. This has been covered (though only in a one-liner) in this blog post from the storage team: http://blogs.msdn.com/b/windowsazurestorage/archive/2013/11/27/windows-azure-storage-release-introducing-cors-json-minute-metrics-and-more.aspx. From the post:
Copy blob now allows Shared Access Signature (SAS) to be used for the
destination blob if the copy is within the same storage account.
UPDATE
So I tried it, and one thing I realized is that it is meant for copying all blobs from one container to another. Based on my trial and error, a few things you need to keep in mind are:
The source SAS is for the source container and not the blob. Also ensure that the SAS grants both Read and List permissions on the blob container.
If you want to copy a single file, please ensure that it is specified as the "filepattern" parameter.
Based on these, can you please try the following:
AzCopy "https://<source account>.blob.core.windows.net/<source container>?<source container sas with read/list permission>" "https://<destination account>.blob.core.windows.net/<destination container>" "<source blob name to copy>" /DestKey:"destination account key"
UPDATE 2
Error parsing source location [container-location]: Object reference
not set to an instance of an object.
I was able to recreate the error. I believe the reason for this error is the version of the storage client library (and thus the REST API) used to create the SAS token. If I try to list the contents of a blob container using a SAS token created with version 3.x of the library, this is the output I get:
<?xml version="1.0" encoding="utf-8"?>
<EnumerationResults ServiceEndpoint="https://cynapta.blob.core.windows.net/" ContainerName="vhds">
<Blobs>
<Blob>
<Name>test.vhd</Name>
<Properties>
<Last-Modified>Fri, 17 May 2013 15:23:39 GMT</Last-Modified>
<Etag>0x8D02129A4ACFFD7</Etag>
<Content-Length>10486272</Content-Length>
<Content-Type>application/octet-stream</Content-Type>
<Content-Encoding />
<Content-Language />
<Content-MD5>uflK5qFmBmek/zyqad7/WQ==</Content-MD5>
<Cache-Control />
<Content-Disposition />
<x-ms-blob-sequence-number>0</x-ms-blob-sequence-number>
<BlobType>PageBlob</BlobType>
<LeaseStatus>unlocked</LeaseStatus>
<LeaseState>available</LeaseState>
</Properties>
</Blob>
</Blobs>
<NextMarker />
</EnumerationResults>
However, if I try to list the contents of a blob container using a SAS token created with version 2.x of the library, this is the output I get:
<?xml version="1.0" encoding="utf-8"?>
<EnumerationResults ContainerName="https://cynapta.blob.core.windows.net/vhds">
<Blobs>
<Blob>
<Name>test.vhd</Name>
<Url>https://cynapta.blob.core.windows.net/vhds/test.vhd</Url>
<Properties>
<Last-Modified>Fri, 17 May 2013 15:23:39 GMT</Last-Modified>
<Etag>0x8D02129A4ACFFD7</Etag>
<Content-Length>10486272</Content-Length>
<Content-Type>application/octet-stream</Content-Type>
<Content-Encoding />
<Content-Language />
<Content-MD5>uflK5qFmBmek/zyqad7/WQ==</Content-MD5>
<Cache-Control />
<x-ms-blob-sequence-number>0</x-ms-blob-sequence-number>
<BlobType>PageBlob</BlobType>
<LeaseStatus>unlocked</LeaseStatus>
<LeaseState>available</LeaseState>
</Properties>
</Blob>
</Blobs>
<NextMarker />
</EnumerationResults>
Notice the difference in the <EnumerationResults> element.
Now, AzCopy uses version 2.1.0.4 of the storage client library. As part of the copy operation it first lists the blobs in the source container using the SAS token. As we saw above, the XML returned differs between the two versions, so storage client library 2.1.0.4 fails to parse the XML returned by the storage service. Because it fails to parse the XML, it is not able to create a Blob object, and thus you get the NullReferenceException.
Solution:
One possible solution to this problem is to create the SAS token using version 2.1.0.4 of the library. I tried doing that and was able to successfully copy the blob. Do give it a try; that should fix the problem you're facing.
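As an illustration (a rough sketch rather than the exact code used above), generating a container SAS with Read and List permissions using the 2.x client library would look something like this; the connection string and container name are placeholders:
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class SasExample
{
    static void Main()
    {
        // Assumes the Microsoft.WindowsAzure.Storage 2.1.0.4 package is referenced.
        var account = CloudStorageAccount.Parse("<source account connection string>");
        var client = account.CreateCloudBlobClient();
        var container = client.GetContainerReference("<source container>");

        // Read + List are the permissions AzCopy needs to enumerate and read the blobs.
        var policy = new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Read | SharedAccessBlobPermissions.List,
            SharedAccessExpiryTime = DateTime.UtcNow.AddHours(2)
        };

        // The returned token starts with '?' and can be appended to the container URL.
        string sasToken = container.GetSharedAccessSignature(policy);
        Console.WriteLine(sasToken);
    }
}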
Make sure you are using the latest version of AzCopy, and check this post: http://blogs.msdn.com/b/windowsazurestorage/archive/2013/09/07/azcopy-transfer-data-with-re-startable-mode-and-sas-token.aspx
/DestSAS and /SourceSAS: these options allow access to storage containers and blobs with a SAS (Shared Access Signature) token. A SAS token, which is generated by the storage account owner, grants access to specific containers and blobs with specific permissions and for a specified period of time.
Example: upload all files from a local directory to a container using a SAS token that grants List and Write permissions:
AzCopy C:\blobData https://xyzaccount.blob.core.windows.net/xyzcontainer /DestSAS:"?sr=c&si=mypolicy&sig=XXXXX" /s
/DestSAS is where you specify the SAS token used to access the storage container; it should be enclosed in quotes.
You can use IaaS Management Studio to generate the PowerShell script for you. It is a commercial tool, but you can do that in the trial version. It does not use AzCopy, though, but the classic blob API in PowerShell.
Just "Share the VHD" to get the SAS link. Then use "Import from shared link" and paste the SAS link you got earlier. Check at the bottom; you'll see a script icon. Put your cursor on it and the script shows up.
However, in the trial you can't copy the script; you'll need to type it by hand, but it is not very long.
I've got a web.config that contains my SQL connection string and my Azure Blob storage connection string.
A Web.Config transformation replaces my Local SQL connection string with the Azure one.
When I publish the site to Azure, the Blob storage connection string is deleted and replaced with a duplicate SQL connection string, but with the blob storage string's name.
The only way I've found to fix it is to log in via FTP and manually replace the erroneous storage connection string with the correct one from my local machine.
How do I get VS to publish my web.config to Azure and leave it alone?!
Web.Config
<connectionStrings>
<add name="DefaultConnection" connectionString="Data Source=(LocalDB)\v11.0;AttachDbFilename=|DataDirectory|\.mdf;Integrated Security=True" providerName="System.Data.SqlClient" />
<add name="StorageConnectionString" connectionString="DefaultEndpointsProtocol=https;AccountName=;AccountKey=" />
</connectionStrings>
Web.Release.Config
<connectionStrings>
<add name="DefaultConnection"
connectionString="Server=.database.windows.net,1433;Database=;User ID=#;Password=!;Trusted_Connection=False;Encrypt=True;Connection Timeout=30;"
providerName="System.Data.SqlClient"
xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/>
I had a similar issue to yours. I'm not sure why, but when you define the connection strings in the "Configure" tab in the Azure portal and associate a "Linked Resource" on the linked resources tab, it may override certain properties in the Web.config transform, causing unexpected results. One of the options when you set up a new Azure website is linking to (or creating) a database to associate with your website, thereby automatically assigning the related connection string, which may override the transform operation defined in the Web.Release.config.
Check whether removing all connection strings and linked resources inside the Azure portal fixes your problem. Just make sure that you have both your production database and storage connection strings defined properly in the Web.Release.config, for example with a transform like the one sketched below.
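For example, a transform entry for the storage connection string (a sketch; substitute your real account name and key) can sit alongside the existing DefaultConnection transform in Web.Release.config:
<add name="StorageConnectionString"
     connectionString="DefaultEndpointsProtocol=https;AccountName=<your-account>;AccountKey=<your-key>"
     xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />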
I struggled with this problem this morning and I came up with a solution for VS2015/17.
I have an Azure VM, and to publish my web app on this machine, I used the "Web Deploy to an Azure VM" option proposed by VS.
I put my connection strings in an external file, so the relevant part of my web.config looks like this:
</entityFramework>
<connectionStrings configSource="ConnectionStrings.config">
</connectionStrings>
</configuration>
In order to prevent VS from adding some connection strings during publication (the ADO.NET Code First MSSQL database connection string in my case), you can edit the following file in your project:
...\MyProject\Properties\PublishProfiles\YourPublishProfile - WebDeploy.pubxml
In this file, look at the ItemGroup part and edit it to delete the connection strings you don't need:
<PublishDatabaseSettings>
<Objects xmlns="">
<ObjectGroup Name="MyProject.Models.MSSQL_DB" Order="1" Enabled="False">
<Destination Path="" />
<Object Type="DbCodeFirst">
<Source Path="DBContext" DbContext="MyProject.Models.MSSQL_DB, MyProject" Origin="Convention" />
</Object>
</ObjectGroup>
</Objects>
</PublishDatabaseSettings>
</PropertyGroup>
<ItemGroup>
<!-- here are some entries; delete the ones you don't need -->
</ItemGroup>
Be careful: if you add a file in this directory, there is a chance that it breaks the publication process in VS. Don't add files, just edit.
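For reference, the entries in that ItemGroup are typically MSDeployParameterValue items, something like the following (the exact names depend on your project and publish profile, so treat this only as an illustration):
<ItemGroup>
  <MSDeployParameterValue Include="$(DeployParameterPrefix)DefaultConnection-Web.config Connection String" />
  <MSDeployParameterValue Include="$(DeployParameterPrefix)StorageConnectionString-Web.config Connection String" />
</ItemGroup>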
I'm using Windows Azure to host my Python project and I'm trying to enable diagnostics, without success.
As I'm using Python and not .NET, the only way I can actually configure it is through config files.
Below my config files:
ServiceDefinition.csdef
...
<Imports>
<Import moduleName="Diagnostics" />
</Imports>
...
ServiceConfiguration.Cloud.cscfg
....
<Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="DefaultEndpointsProtocol=https;AccountName=<my-account-name>;AccountKey=<my-account-key>"/>
....
diagnostics.wadcfg:
<DiagnosticMonitorConfiguration xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration"
configurationChangePollInterval="PT10M"
overallQuotaInMB="1200">
<DiagnosticInfrastructureLogs bufferQuotaInMB="100"
scheduledTransferLogLevelFilter="Warning"
scheduledTransferPeriod="PT5M" />
<Logs bufferQuotaInMB="200"
scheduledTransferLogLevelFilter="Warning"
scheduledTransferPeriod="PT5M" />
<Directories bufferQuotaInMB="600"
scheduledTransferPeriod="PT5M">
<CrashDumps container="wad-crash-dumps" directoryQuotaInMB="200" />
<FailedRequestLogs container="wad-frq" directoryQuotaInMB="200" />
<IISLogs container="wad-iis" directoryQuotaInMB="200" />
</Directories>
<WindowsEventLog bufferQuotaInMB="200"
scheduledTransferLogLevelFilter="Warning"
scheduledTransferPeriod="PT5M">
<DataSource name="System!*" />
</WindowsEventLog>
</DiagnosticMonitorConfiguration>
In Diagnostics Manager, I can't actually see any data.
Thanks.
May I ask where your diagnostics.wadcfg is located? For a regular worker role the diagnostics.wadcfg must be in the root folder, and because you don't have a worker role module in your project, the layout of your role folder is very important. Be sure to have exactly the same folder structure in your Python application as a regular worker role, and then drop the diagnostics.wadcfg in the role root folder. (Add that info back to your question to verify.)
Do you see a diagnostics configuration XML created in the Windows Azure Blob storage that is configured in the *.Diagnostics.ConnectionString? This check confirms that the diagnostics component in the Azure role was able to read the provided configuration and could create the configuration XML in the destination blob storage (the same Azure Storage account will be used to write logs to Azure Table storage). Please verify.
Finally, your diagnostics.wadcfg needs some more work. As this is a non-.NET worker role, you have configured IIS logging (do you really have IIS running in the worker role?) and have the System event log scheduled to transfer warnings only, so nothing will be transferred if there are no warnings. Also, the log transfer period is set to 5 minutes, which is long while testing.
Here is what I can suggest to test whether diagnostics is working or not:
Remove the IIS logs if you don't have IIS running in the Azure VM
Change the event log DataSource from System!* to Application!* and set the filter to Information level
Shorten the log transfer period (for example to a minute) while testing
Run the exact same code in the Development Fabric with the diagnostics connection string pointing at actual Azure Storage
Write custom event log entries on your machine and see whether they are transferred to Azure Table storage within the time limit and whether the specific tables are created
The above should help you troubleshoot the problem; a trimmed-down diagnostics.wadcfg reflecting these suggestions is sketched below.
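This is only a sketch: PT1M is used as the transfer period, and the IIS/Directories sections are dropped on the assumption that IIS is not running in the role.
<DiagnosticMonitorConfiguration xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration"
                                configurationChangePollInterval="PT10M"
                                overallQuotaInMB="1200">
  <DiagnosticInfrastructureLogs bufferQuotaInMB="100"
                                scheduledTransferLogLevelFilter="Information"
                                scheduledTransferPeriod="PT1M" />
  <Logs bufferQuotaInMB="200"
        scheduledTransferLogLevelFilter="Information"
        scheduledTransferPeriod="PT1M" />
  <WindowsEventLog bufferQuotaInMB="200"
                   scheduledTransferLogLevelFilter="Information"
                   scheduledTransferPeriod="PT1M">
    <DataSource name="Application!*" />
  </WindowsEventLog>
</DiagnosticMonitorConfiguration>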