My 'LocalClient' app is in a corporate LAN behind an HTTP proxy server (ISA). The first Azure API call I make - CloudQueue.CreateIfNotExist() - throws an exception: (407) Proxy Authentication Required. I tried the following things:
Added the <system.net> defaultProxy element to app.config, but it doesn't seem to be working (reference: http://geekswithblogs.net/mnf/archive/2006/03/08/71663.aspx).
I configured 'Microsoft Firewall Client for ISA Server', but that did not help either.
Used a custom proxy handler as suggested here: http://dunnry.com/blog/2010/01/25/SupportingBasicAuthProxies.aspx. I was not able to get this working; I get a configuration initialization exception.
As per MSDN, an HTTP proxy server can be specified in the connection string only in the case of development storage (see http://msdn.microsoft.com/en-us/library/ee758697.aspx):
UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://myProxyUri
Is there any way to connect to Azure Storage through a proxy server?
I actually found that the custom proxy solution was not required.
Adding the following to app.config (just before the </configuration>) did the trick for me:
<system.net>
  <defaultProxy enabled="true" useDefaultCredentials="true">
    <proxy usesystemdefault="true" />
  </defaultProxy>
</system.net>
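If you'd rather do this in code than in app.config, here is a minimal sketch of the equivalent (assuming the storage client goes through WebRequest.DefaultWebProxy, which the .NET HTTP stack uses by default):
using System.Net;

static void ConfigureDefaultProxy()
{
    // Take the system-configured proxy and answer the 407 challenge with the
    // current Windows credentials - the programmatic equivalent of
    // <defaultProxy useDefaultCredentials="true"><proxy usesystemdefault="true" />.
    IWebProxy proxy = WebRequest.GetSystemWebProxy();
    proxy.Credentials = CredentialCache.DefaultNetworkCredentials;
    WebRequest.DefaultWebProxy = proxy;
}
Call ConfigureDefaultProxy() once at startup, before the first storage call.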
The custom proxy solution (the third thing I tried, as mentioned in my original question) worked perfectly. The mistake I was making earlier was not putting the <configSections> element at the beginning of <configuration> in app.config, as required. Once I did that, the custom proxy solution given here solved my problem.
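For anyone hitting the same configuration initialization exception, the required ordering looks like this. The section name and handler type below are placeholders, not the actual ones from the linked article; substitute the section it defines:
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <!-- <configSections> must be the FIRST child of <configuration>;
       placing it anywhere else causes the configuration initialization exception. -->
  <configSections>
    <!-- placeholder name/type: use the section from the linked article -->
    <section name="customProxy" type="MyAssembly.CustomProxySection, MyAssembly" />
  </configSections>
  <customProxy>
    <!-- custom proxy settings go here -->
  </customProxy>
</configuration>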
To bypass the proxy from Java, use something like the code below; it works as expected and has been tested.
import java.io.File;
import java.io.FileInputStream;

import com.microsoft.azure.storage.CloudStorageAccount;
import com.microsoft.azure.storage.blob.BlobContainerPermissions;
import com.microsoft.azure.storage.blob.BlobContainerPublicAccessType;
import com.microsoft.azure.storage.blob.CloudBlobClient;
import com.microsoft.azure.storage.blob.CloudBlobContainer;
import com.microsoft.azure.storage.blob.CloudBlockBlob;

public class AzureUpload {

    // Define the connection string with your values, e.g.
    // "DefaultEndpointsProtocol=http;AccountName=your_storage_account;AccountKey=your_storage_account_key"
    public static final String storageConnectionString =
        "DefaultEndpointsProtocol=http;" +
        "AccountName=test2rdrhgf62;" +
        "AccountKey=1gy3lpE7Du1j5ljKiupjhgjghjcbfgTGhbntjnRfr9Yi6GUQqVMQqGxd7/YThisv/OVVLfIOv9kQ==";

    // Path to a single local file and to the local upload directory.
    static final String filePath = "D:\\Project\\Supporting Files\\Jar's\\Azure\\azure-storage-1.2.0.jar";
    static final String file_Path = "D:\\Project\\Healthcare\\Azcopy_To_Azure\\data";

    public static void main(String[] args) {
        try {
            // Route all HTTP(S) traffic through the corporate proxy via JVM-wide
            // system properties. (Alternative: pass a java.net.Proxy, e.g.
            // new Proxy(Proxy.Type.HTTP, new InetSocketAddress("132.186.192.234", 8080)).)
            System.setProperty("http.proxyHost", "102.122.15.234");
            System.setProperty("http.proxyPort", "80");
            // Note: proxyUser/proxyPassword are not always honored automatically;
            // an authenticating proxy may also require a java.net.Authenticator.
            System.setProperty("https.proxyUser", "ad001\\empid001");
            System.setProperty("https.proxyPassword", "pass!1");

            // Retrieve the storage account from the connection string.
            CloudStorageAccount storageAccount = CloudStorageAccount.parse(storageConnectionString);

            // Create the blob client.
            CloudBlobClient blobClient = storageAccount.createCloudBlobClient();

            // Get a reference to a container. The container name must be lower case.
            CloudBlobContainer container = blobClient.getContainerReference("rpmsdatafromhospital");

            // Create the container if it does not exist.
            container.createIfNotExists();

            // Include public access in the container's permissions.
            BlobContainerPermissions containerPermissions = new BlobContainerPermissions();
            containerPermissions.setPublicAccess(BlobContainerPublicAccessType.CONTAINER);
            container.uploadPermissions(containerPermissions);

            // Single-file upload, if needed:
            // CloudBlockBlob blob = container.getBlockBlobReference("azure-storage-1.2.0.jar");
            // File source = new File(filePath);
            // blob.upload(new FileInputStream(source), source.length());

            // Upload the list of files in the directory named by AZURE_FILE_PATH.
            String envFilePath = System.getenv("AZURE_FILE_PATH");
            File folder = new File(envFilePath);
            File[] listOfFiles = folder.listFiles();
            for (int i = 0; i < listOfFiles.length; i++) {
                if (listOfFiles[i].isFile()) {
                    System.out.println("File " + listOfFiles[i].getName());
                    CloudBlockBlob blob = container.getBlockBlobReference(listOfFiles[i].getName());
                    File source = new File(envFilePath + "\\" + listOfFiles[i].getName());
                    blob.upload(new FileInputStream(source), source.length());
                    System.out.println("File " + listOfFiles[i].getName() + " upload successful");
                }
                // Directory entries are skipped; recurse here if directory upload is required.
            }
        } catch (Exception e) {
            // Output the stack trace.
            e.printStackTrace();
        }
    }
}
For .NET or C#, add the code below to App.config:
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5.2" />
  </startup>
  <system.net>
    <defaultProxy enabled="true" useDefaultCredentials="true">
      <proxy usesystemdefault="true" />
    </defaultProxy>
  </system.net>
</configuration>
I am using Azure.Storage.Blobs, Version=12.1.0.0.
BlobClient works fine with the access key, but we wanted to use a SAS connection string.
It throws an exception here:
var blobClient = new BlobServiceClient(connectionString);
"No valid combination of account information found." is the exception I get at the above line.
I am using the below SAS connection string format:
BlobEndpoint=xxxxxxx;QueueEndpoint=xxxxxxx;FileEndpoint=xxxxxxx;TableEndpoint=xxxxxxx;SharedAccessSignature=xxxxxxx
For a SAS connection, follow the steps below to generate the SAS URL: navigate to the Azure portal -> your storage account -> Shared access signature.
Then copy the "Blob service SAS URL" (if you want to operate on file shares / queues, use the respective SAS URL).
Then use it in code with the library Azure.Storage.Blobs, Version=12.1.0.0:
using Azure.Storage.Blobs;
using System;
using System.IO;

namespace ConsoleApp16
{
    class Program
    {
        static void Main(string[] args)
        {
            // Replace the sas_url with the one you copied in the above steps.
            string sas_url = "https://xxx.blob.core.windows.net/?sv=2019-02-02&ss=bfqt&srt=sco&sp=rwdlacup&se=2020-01-07T17:04:27Z&st=2020-01-07T09:04:27Z&spr=https&sig=xxx";
            Uri uri = new Uri(sas_url);
            BlobServiceClient blobServiceClient = new BlobServiceClient(uri);
            var blobContainer = blobServiceClient.GetBlobContainerClient("test1");
            var blobclient = blobContainer.GetBlobClient("yy1.txt");
            // Upload from a stream (the string-path Upload overload may not
            // be available in 12.1.0).
            using (var fs = File.OpenRead(@"d:\aa.txt"))
            {
                blobclient.Upload(fs);
            }
            Console.WriteLine("completed");
            Console.ReadLine();
        }
    }
}
I am using Azure Blob Storage to store images, and I am trying to display one in my Xamarin.Forms application. I found a simple tutorial and the code on GitHub.
I succeeded in implementing it by following the steps and creating an Azure Blob Storage account.
The problem is: I can see the name of the file but not the image.
Here is the error:
read started: <Thread Pool> #9
[0:] HTTP Request: Could not retrieve https://xxxxxx.blob.core.windows.net/yyyy/kakashi.jpg, status code NotFound
[0:] ImageLoaderSourceHandler: Could not retrieve image or image data was invalid: Uri: https://lxxxxxx.blob.core.windows.net/yyyy/kakashi.jpg
Thread finished: <Thread Pool> #4
I get this error when I open the URL of the image (https://lxxxxxx.blob.core.windows.net/yyyy/kakashi.jpg) in my browser:
This XML file does not appear to have any style information associated with it. The document tree is shown below.
<Error>
<Code>ResourceNotFound</Code>
<Message>
The specified resource does not exist. RequestId:97933c69-a01e-014f-6669-f0502e000000 Time:2018-05-20T18:33:28.4774584Z
</Message>
</Error>
The error means you haven't set the public access level to Blob.
See this requirement in your tutorial.
The code you are using requires this setting because it accesses the blob directly via the blob URI.
See PhotosBlobStorageService.cs
return blobList.Select(x => new PhotoModel { Title = x.Name, Uri = x.Uri }).ToList();
If you do want to keep the Private level, you have to make some changes to the statement above. Here's the reference:
return blobList.Select(x => new PhotoModel {
    Title = x.Name,
    Uri = new Uri(x.Uri + x.GetSharedAccessSignature(
        new SharedAccessBlobPolicy {
            Permissions = SharedAccessBlobPermissions.Read | SharedAccessBlobPermissions.Write,
            // you can modify the expiration to meet your requirement
            SharedAccessExpiryTime = DateTime.UtcNow.AddYears(1)
        }))
}).ToList();
This change allows you to visit private blob with a SAS.
1. Please check your subscription first.
2. Check the access policy of your container.
3. Here are the steps to save and get blobs through code.
1) Using NuGet, install the required assembly packages: go to "Manage Packages for Solution", search for WindowsAzure.Storage and WindowsAzure.ConfigurationManager, and click Install.
2) Put the access keys in the configuration (a sample appSettings entry is shown after the code below).
3) Sample code to create a blob:
public async Task<string> SaveImagesToAzureBlob(HttpPostedFileBase imageToUpload)
{
    try
    {
        CloudStorageAccount cloudStorageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
        CloudBlobClient cloudBlobClient = cloudStorageAccount.CreateCloudBlobClient();
        CloudBlobContainer cloudBlobContainer = cloudBlobClient.GetContainerReference("sampleimage");

        if (await cloudBlobContainer.CreateIfNotExistsAsync())
        {
            await cloudBlobContainer.SetPermissionsAsync(
                new BlobContainerPermissions
                {
                    PublicAccess = BlobContainerPublicAccessType.Blob
                }
            );
        }

        // GUID plus the original extension (Path.GetExtension includes the dot).
        string imageName = Guid.NewGuid().ToString() + Path.GetExtension(imageToUpload.FileName);

        CloudBlockBlob cloudBlockBlob = cloudBlobContainer.GetBlockBlobReference(imageName);
        cloudBlockBlob.Properties.ContentType = imageToUpload.ContentType;
        await cloudBlockBlob.UploadFromStreamAsync(imageToUpload.InputStream);

        string imageFullPath = cloudBlockBlob.Uri.ToString();
        return imageFullPath;
    }
    catch (Exception)
    {
        // Rethrow, preserving the stack trace ("throw ex" would reset it).
        throw;
    }
}
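For step 2, CloudConfigurationManager.GetSetting("StorageConnectionString") falls back to appSettings when the code is not running inside a cloud service. A sample entry with placeholder account name and key:
<appSettings>
  <add key="StorageConnectionString"
       value="DefaultEndpointsProtocol=https;AccountName=your_account;AccountKey=your_key" />
</appSettings>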
Now check your storage account; you can see the container "sampleimage" has been generated.
By default, the container is private, and no one can access it from outside. To set the permissions, use the SetPermissions method as below:
cloudBlobContainer.SetPermissions(new BlobContainerPermissions { PublicAccess = BlobContainerPublicAccessType.Blob });
Try the different permission levels in the list (Off, Blob, Container). Note the permission level setting; in your case it may be causing the issue.
For more details, see:
https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-deployment-model
https://learn.microsoft.com/en-us/azure/storage/blobs/storage-dotnet-how-to-use-blobs
I have a network access control list in my cloud service similar to the below. How do I configure this programmatically instead of from the config file?
Some of these IP addresses can change. I want to resolve the IP address from a domain name and add the configuration:
<NetworkConfiguration>
  <AccessControls>
    <AccessControl name="security">
      <Rule action="permit" description="Allow access from A" order="100" remoteSubnet="xxx.xxx.xxx.xxx/32" />
      <Rule action="permit" description="Allow access from B" order="200" remoteSubnet="xxx.xxx.xxx.xxx/32" />
      <Rule action="permit" description="Allow access from C" order="300" remoteSubnet="xxx.xxx.xxx.xxx/32" />
      <Rule action="deny" description="Deny access to everyone else" order="400" remoteSubnet="0.0.0.0/0" />
    </AccessControl>
  </AccessControls>
</NetworkConfiguration>
You could create a separate role or an Azure Function that generates the new configuration and updates the service through REST: https://msdn.microsoft.com/en-us/library/azure/ee460812.aspx
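If you go the REST route, a rough sketch with the classic Service Management API follows. Certificate authentication is assumed; the subscription ID, service/deployment names, and the x-ms-version value are placeholders that may need adjusting for your environment:
using System;
using System.IO;
using System.Net;
using System.Security.Cryptography.X509Certificates;
using System.Text;

static class DeploymentConfigUpdater
{
    // Sketch: POST a new .cscfg via the classic "Change Deployment Configuration"
    // operation. All identifiers below are placeholders.
    public static void UpdateConfiguration(string subscriptionId, string serviceName,
        string deploymentName, string cscfgPath, X509Certificate2 managementCert)
    {
        var url = "https://management.core.windows.net/" + subscriptionId +
                  "/services/hostedservices/" + serviceName +
                  "/deployments/" + deploymentName + "/?comp=config";

        var request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "POST";
        request.ContentType = "application/xml";
        request.Headers.Add("x-ms-version", "2014-06-01");
        request.ClientCertificates.Add(managementCert);

        // The request body wraps the base64-encoded .cscfg contents.
        string config = Convert.ToBase64String(File.ReadAllBytes(cscfgPath));
        byte[] body = Encoding.UTF8.GetBytes(
            "<ChangeConfiguration xmlns=\"http://schemas.microsoft.com/windowsazure\">" +
            "<Configuration>" + config + "</Configuration>" +
            "</ChangeConfiguration>");

        using (Stream stream = request.GetRequestStream())
        {
            stream.Write(body, 0, body.Length);
        }

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            // The operation is asynchronous; 202 Accepted means it was queued.
            Console.WriteLine("Status: " + (int)response.StatusCode);
        }
    }
}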
OK. I ended up writing a console application, called during the build, which gets the IP address of the remote cloud service and checks whether it corresponds to what is in the configuration file.
If not, I update it. Pretty straightforward.
Here is the build command:
$(SolutionDir)<MyProjectName>\$(OutDir)$(ConfigurationName)\MyExeName Update-FrontEnd-IPAddress-For-Azure-MicroService "$(SolutionDir)<AzureDeploymentProjectName>\ServiceConfiguration.Cloud.cscfg"
The console application does:
private static void HandleCheckRoleEnvironment(string[] args)
{
    if (args[0] == "Check-Role-Environment")
    {
        Console.WriteLine("Found Command: Check-Role-Environment");
        if (RoleEnvironment.IsAvailable && !RoleEnvironment.IsEmulated)
        {
            Console.WriteLine("Running in Azure Cloud Environment");
            Environment.Exit(0);
        }
        else
        {
            Console.WriteLine("NOT Running in Azure Cloud Environment");
            Environment.Exit(1);
        }
    }
}
Here is the code to update the config file:
private static void ExecuteUpdateFrontEndIPAddressForAzureMicroService(string configFilePath)
{
    if (!File.Exists(configFilePath))
    {
        return;
    }

    var ipAddressList = Dns.GetHostAddresses("MyDomainName");
    Console.WriteLine($"The IP address for MyDomainName is {ipAddressList[0]}");
    var correctValue = $"{ipAddressList[0]}/32";

    var document = new XmlDocument();
    document.Load(configFilePath);

    // Rule nodes: NetworkConfiguration/AccessControls/AccessControl/Rule
    // (positional navigation; assumes the standard .cscfg layout).
    var rules = document.ChildNodes[1].LastChild.FirstChild.FirstChild.ChildNodes;
    var rule = (from XmlNode p in rules
                where p.Attributes["description"].Value == "Allow access from MyDomainName"
                select p).FirstOrDefault();

    var ipAddressValue = rule.Attributes["remoteSubnet"].Value;
    Console.WriteLine($"The IP address in the config file is {ipAddressValue}");

    if (correctValue != ipAddressValue)
    {
        rule.Attributes["remoteSubnet"].Value = correctValue;
        document.Save(configFilePath);
        Console.WriteLine("The config file has been updated with the correct IP address.");
    }
    else
    {
        Console.WriteLine("The config file is up to date and will not be updated.");
    }
}
I can't get shared access policies to work with virtual directories in blob storage. It works fine for containers. As far as I know, virtual directories are containers so SAS should work?
When I attempt to access a resource in a virtual directory using a SAS I get this response:
<?xml version="1.0" encoding="utf-8"?>
<Error>
<Code>AuthenticationFailed</Code>
<Message>Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature. RequestId:XXXXXXXX-000X-00XX-XXXX-XXXXXX000000 Time:2016-08-15T13:28:57.6925768Z</Message>
<AuthenticationErrorDetail>Signature did not match. String to sign used was r 2016-08-15T13:29:53Z /blob/xxxxxxxxxx/mycontainer 2015-12-11</AuthenticationErrorDetail>
</Error>
Example code to demonstrate:
public static async Task<string> GetFilePath()
{
    var storageAccount = CloudStorageAccount.Parse( "DefaultEndpointsProtocol=https;AccountName=xxxxxxxxxx;AccountKey=xxxxxxxxxx" );
    var blobClient = storageAccount.CreateCloudBlobClient();

    var containerName = "mycontainer/myvd"; // remove /myvd and SAS will work fine
    var containerReference = blobClient.GetContainerReference( containerName );

    var blobName = "readme.txt";
    var blobReference = await containerReference.GetBlobReferenceFromServerAsync( blobName );

    var sasConstraints = new SharedAccessBlobPolicy();
    sasConstraints.SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes( 1 );
    sasConstraints.Permissions = SharedAccessBlobPermissions.Read;

    var sasContainerToken = containerReference.GetSharedAccessSignature( sasConstraints );
    var path = $"{blobClient.BaseUri}{containerName}/{blobName}{sasContainerToken}";
    return path;
}
The reason for this error is that Shared Access Signatures are only supported at the blob container level or the blob level. In fact, in Azure Blob Storage there's no such thing as a virtual directory; it supports only a 2-level hierarchy: blob container and blob. A virtual directory is simply a prefix applied to a file (blob) name.
Based on this, I would recommend making following changes to your code:
var containerName = "mycontainer"; // container only; no virtual directory in the name
var containerReference = blobClient.GetContainerReference( containerName );
var blobName = "myvd/readme.txt"; // your blob name is actually "myvd/readme.txt"
var blobReference = await containerReference.GetBlobReferenceFromServerAsync( blobName );
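If you only need to expose that single blob, you can also issue the SAS at the blob level instead of the container level. A minimal sketch, reusing containerReference from the corrected code above:
var blobRef = containerReference.GetBlockBlobReference( "myvd/readme.txt" );
var sasToken = blobRef.GetSharedAccessSignature( new SharedAccessBlobPolicy
{
    SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes( 1 ),
    Permissions = SharedAccessBlobPermissions.Read
} );
// The blob URI already contains the "myvd/" prefix.
var path = blobRef.Uri + sasToken;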
I am new to SharePoint and the Client Object Model. I am stuck with a problem and have not been able to fix the issue. I want to upload files larger than 10 MB using the Client Object Model in SharePoint 2013. I get the following exception:
The request message is too large. The server does not allow messages
that are larger than 2097152 bytes.
I have tried everything. Here is the list of things that I did:
1- Changed the settings in the web.config file of my local web application:
<system.web>
  <httpRuntime useFullyQualifiedRedirectUrl="true" maxRequestLength="2147483647" requestLengthDiskThreshold="2147483647" executionTimeout="18000"/>
</system.web>
<system.webServer>
  <modules runAllManagedModulesForAllRequests="true"/>
  <security>
    <requestFiltering>
      <requestLimits maxAllowedContentLength="2147483647" />
    </requestFiltering>
  </security>
</system.webServer>
2- In PowerShell on my server, ran the following commands, restarted the application in IIS, and even restarted IIS entirely.
$ws = [Microsoft.SharePoint.Administration.SPWebService]::ContentService
$ws.ClientRequestServiceSettings.MaxReceivedMessageSize = 2147483647
$ws.Update()
Here is my code :
private void UploadDataToSharepointTest(List<UploadData> pDataObjList)
{
    string lServerUrl = @"http://xxxxxxx:2000/";
    string lFolderName = DateTime.Now.ToString(@"yyyyMMddHHmmss");

    ClientContext context = new ClientContext(lServerUrl);
    context.AuthenticationMode = ClientAuthenticationMode.Default;
    context.Credentials = new System.Net.NetworkCredential("user", "password", "domain");

    Web web = context.Web;
    List docs = web.Lists.GetByTitle("ABC");
    Folder lNewFolder = web.Folders.Add(lServerUrl + "ABC/" + lFolderName + "/");
    docs.Update();

    int fileIndex = 1;
    foreach (var item in pDataObjList)
    {
        FileCreationInformation newFile = new FileCreationInformation();
        newFile.Content = System.IO.File.ReadAllBytes(item.CompleteFilePath);
        newFile.Url = fileIndex.ToString() + "-" + item.fileName;
        fileIndex++;

        Microsoft.SharePoint.Client.File uploadFile = lNewFolder.Files.Add(newFile);
        context.Load(uploadFile);
        context.ExecuteQuery();

        Dictionary<string, string> metadata = new Dictionary<string, string>();
        metadata.Add("Comments", item.comments);
        metadata.Add("Plan_x0020_Size", item.planSize);
        metadata.Add("Density", item.density);
        metadata.Add("First_x0020_Name", txtFirstName.Text.Trim());
        metadata.Add("Last_x0020_Name", txtLastName.Text.Trim());
        metadata.Add("Company", txtCompany.Text.Trim());
        metadata.Add("Contact", txtContact.Text.Trim());
        metadata.Add("Additional_x0020_Comments", txtAdditionalComments.Text.Trim());

        Microsoft.SharePoint.Client.ListItem items = uploadFile.ListItemAllFields;
        context.Load(items);
        context.ExecuteQuery();

        foreach (KeyValuePair<string, string> metadataitem in metadata)
        {
            items[metadataitem.Key.ToString()] = metadataitem.Value.ToString();
        }
        items.Update();
        context.ExecuteQuery();
    }
}
Note: I am able to upload small files.
There is a file size limit if you use the built-in upload function. To upload a large file, upload it with a file stream (see the sketch after the article link below).
Take a look at the article below:
http://blogs.msdn.com/b/sridhara/archive/2010/03/12/uploading-files-using-client-object-model-in-sharepoint-2010.aspx
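For example, with CSOM you can stream the upload via File.SaveBinaryDirect, which sends the bytes over a direct PUT rather than inside the CSOM request body, so the 2 MB MaxReceivedMessageSize limit does not apply. A minimal sketch reusing the context, lFolderName and item variables from the question (requires System.IO, and assumes the "ABC" library sits at the web root):
using (var fs = new FileStream(item.CompleteFilePath, FileMode.Open, FileAccess.Read))
{
    // Server-relative URL of the target file; the folder must already exist.
    Microsoft.SharePoint.Client.File.SaveBinaryDirect(
        context, "/ABC/" + lFolderName + "/" + item.fileName, fs, true);
}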
SharePoint allows you to configure this via Central Admin; I'd stick with that to make sure it makes all the appropriate changes for you. You need farm-level permissions. Also, in SharePoint 2013 you can have different max file limits for different file types, so make sure your file type wasn't changed by anyone. Different Max based on File Types
Accessing SharePoint Webapp properties via central Admin