Faced a data copy issue using Azure Fabric Data with MinIO
In Azure Fabric Data, I set up a connection to my MinIO server. The test connection succeeds and I can see all the buckets,
but when I try to access the contents of a bucket I get an error:
"The operation has timed out"
or "The file operation is failed. A WebException with status NameResolutionFailure was thrown. The remote name could not be resolved: 'bucket-2.miniomyserverhost.com' Activity ID: 0794a825-7ba4-dfec4cfc8846"
Judging by the error, I need to configure "path style" URLs,
but I can't find an example of how to do it correctly.
Can you suggest one?
I registered two parameters in the configuration file:
MINIO_DOMAIN=(host)domain
MINIO_SERVER_URL=http://(host)domain
Here is my suggestion. To resolve the "The operation has timed out" error, or:
"The file operation is failed. A WebException with status NameResolutionFailure was thrown. The remote name could not be resolved"
…while accessing the contents of a bucket in Azure Fabric Data with MinIO, you need to set up "path style" URLs correctly.
You have registered two parameters in the configuration file, MINIO_DOMAIN and MINIO_SERVER_URL. To use path-style URLs, set MINIO_SERVER_URL to http://(host):9000.
Here is an example:
MINIO_DOMAIN=myminioserver.com
MINIO_SERVER_URL=http://myminioserver.com:9000
Make sure to replace myminioserver.com with the actual hostname or IP address of your MinIO server.
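Note that MINIO_DOMAIN enables virtual-hosted-style addressing (bucket.domain), which only works if a wildcard DNS record (*.yourdomain) resolves to the MinIO host, whereas path-style requests only need the bare host name. A quick sanity check you can run from the machine that makes the requests, using the host names from your error message as placeholders:
# Virtual-hosted-style needs *.miniomyserverhost.com to resolve; this is the
# lookup that produced the NameResolutionFailure above.
Resolve-DnsName 'bucket-2.miniomyserverhost.com'
# Path-style only needs the bare host, so this lookup should succeed.
Resolve-DnsName 'miniomyserverhost.com'
# Reachability check for a path-style bucket URL; without credentials MinIO
# answers with 403/AccessDenied (PowerShell shows it as an error), which still
# proves that DNS and the port are reachable.
Invoke-WebRequest 'http://miniomyserverhost.com:9000/bucket-2' -UseBasicParsing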
I am deploying a Service Fabric application and encountered this error for a resource of type Microsoft.ServiceFabric/clusters/applicationTypes/versions:
Status: Failed
Error:
Code: ClusterChildResourceOperationFailed
Message: Resource operation failed. Operation: CreateOrUpdate. Error details: {
"Details": "FABRIC_E_IMAGEBUILDER_VALIDATION_ERROR: DOWNLOAD PATH SANITIZED"
}
Has anyone run into this issue before? If so, what was the root cause of the error?
When I encountered this error, my application type name in my manifest did not match the application type name that I was deploying to.
It is possible to view far more useful/relevant error messages under these scenarios by going to the Service Fabric Explorer.
e.g.
https://{my-service-fabric-clustername.example.com}:19080/Explorer/old.html#
NOTE: The "new" UI does not show these useful error details; you need to select the "View old SFX" interface option.
Then clicking on the "Type" that I was uploading the application to revealed far more descriptive and helpful errors.
From my experience, this is an issue with the version number of the sfpkg not aligning with the version in the template's Microsoft.ServiceFabric/clusters/applicationTypes/versions resource. Check the ApplicationTypeVersion in the application package's ApplicationManifest.xml for the right version.
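If it helps, here is a quick way to see the values actually baked into the package (the name MyApp.sfpkg is a placeholder; an .sfpkg is just a zip archive):
# Copy to a .zip extension so Expand-Archive accepts it, then unpack.
Copy-Item .\MyApp.sfpkg .\MyApp.zip
Expand-Archive .\MyApp.zip -DestinationPath .\MyApp
# ApplicationTypeName and ApplicationTypeVersion must match what the template deploys.
Select-String -Path .\MyApp\ApplicationManifest.xml -Pattern 'ApplicationTypeName|ApplicationTypeVersion'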
Today I tried to create a read replica for a MySQL 5.5.53 RDS instance, and it gave me the error below:
Cannot find version 5.5.53 for mysql (Service: AmazonRDS; Status Code:
400; Error Code: InvalidParameterCombination;
Creating the read replica through the UI did not work, so I tried the AWS CLI instead:
aws rds create-db-instance-read-replica --db-instance-identifier <read_replica_name> --source-db-instance-identifier <master-server-name> --db-instance-class <class-name> --availability-zone <zone> --no-multi-az --auto-minor-version-upgrade --no-publicly-accessible --vpc-security-group-ids <vpc-id>
And it worked.
I was getting this error today when trying to load the "Modify" page for one of my RDS instances. I discovered that this happens when I navigate to the instance from the "Resources" tab in a CloudFormation stack, but not when I navigate to the instance from the "Instances" list in the RDS console. (The two paths do result in different URLs but what looks like the same page.)
Thought I'd add this in case it's what was behind your error message, or for someone else who searches and finds this question as I did.
I'm trying to create a project in the labeling tool from Azure Form Recognizer. I have successfully deployed the web app, but I'm unable to start a project. I get this error every time I try:
I have tried several app instances and changed the project name and connection name; none of those work. The only common factor I have found is that the issue is related to the connection.
As I see it, I can either start a new project or open one from the cloud.
First I tried to create a new project:
I filled the fields with these values:
Display Name: Test-form
Source Connection: <previously created connection>
Folder Path: None
Form Recognizer Service Uri: https://XXX-test.cognitiveservices.azure.com/
API Key: XXXXX
Description: None
And got the error from the question's title:
"Invalid resource name creating a connection to azure storage "
I tried several combinations of names; none of them worked.
Then I tried with the option: "Open a cloud project"
Got the same error instantly, hence I deduce the issue is with the connection settings.
Now, in the connection settings I have this:
At first glance, since the values are accepted and the connection is created, I guess it is correct, but it is the only point of failure I can think of.
Regarding the storage container settings, I added the required CORS configuration, and I have used this container to train models with Form Recognizer, so that part does work.
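For reference, the CORS rule applied to the blob service looks roughly like this (account name and key are placeholders, and origins are left open for testing):
# Az.Storage module; the rule lets the labeling tool call the blob endpoint.
$ctx = New-AzStorageContext -StorageAccountName "mystorageaccount" -StorageAccountKey "<key>"
$corsRules = (@{
    AllowedOrigins  = @("*");
    AllowedMethods  = @("Get","Put","Post","Delete","Options","Head");
    AllowedHeaders  = @("*");
    ExposedHeaders  = @("*");
    MaxAgeInSeconds = 200
})
Set-AzStorageCORSRule -ServiceType Blob -CorsRules $corsRules -Context $ctx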
At this point I am pretty much stuck, since the error message does not give me many clues about where the error is.
I was facing a similar error today.
You have to add the container name before "?sv..." in your SAS URI in the connection settings:
https://****.blob.core.windows.net/**trainingdata**?sv=2019-10-10..
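In other words, the container segment goes between the host and the query string. For example (all values below are placeholders):
$account   = "mystorageaccount"
$container = "trainingdata"
$sasToken  = "?sv=2019-10-10&ss=b&srt=co&sp=rl&sig=XXXX"
# The connection's SAS URI should point at the container, with the token appended after it.
"https://$account.blob.core.windows.net/$container$sasToken"
# -> https://mystorageaccount.blob.core.windows.net/trainingdata?sv=2019-10-10&ss=b&srt=co&sp=rl&sig=XXXX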
Per this very helpful Q&A on StackOverflow I am able to build a private conda package and then install it by placing it in a specific folder. I can also host it somewhere on the web and simply use the URL as the channel, with the prefix url://.
In order to keep my code private, I put the conda channel in Azure Blob Storage and created an SAS to access it. So in theory, the way to keep it private is that only someone with the full SAS URL including the token can access it.
The problem is, the SAS format is in the form of a URL query: https://<storage-name>.blob.core.windows.net/<container-name>?se=2019-07-24T02%3A53%3A48Z&sp=rl&sv=2018-03-28&sr=c&comp=list&restype=container&sig=REDACTED_TOKEN, so when I pass it to conda it breaks the URL after the ? and doesn't use the full URL, and gets a 404 in response. See the Microsoft docs for the full specification.
PowerShell example:
PS C:\Users\ydima> $sas = "https://REDACTED.blob.core.windows.net/conda-channel-1?se=2019-07-24T02%3A53%3A48Z&sp=rl&sv=2018-03-28&sr=c&comp=list&restype=container&sig=REDACTED"
PS C:\Users\ydima> conda install -c "url:///"$sas crawford-utils
Collecting package metadata (current_repodata.json): failed
CondaHTTPError: HTTP 000 CONNECTION FAILED for url <https://url/win-64/current_repodata.json>
Elapsed: -
An HTTP error occurred when trying to retrieve this URL.
HTTP errors are often intermittent, and a simple retry will get you on your way.
ConnectionError(MaxRetryError("HTTPSConnectionPool(host='url', port=443): Max retries exceeded with url: /win-64/current_repodata.json (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x0000026183E54EB8>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))"))
'sp' is not recognized as an internal or external command,
operable program or batch file.
'sv' is not recognized as an internal or external command,
operable program or batch file.
'sr' is not recognized as an internal or external command,
operable program or batch file.
Name of second file to compare:
Any idea how I can get conda to use a URL that includes queries in the body?
According to the source code of the Channel class in the conda tool, it does not support a query string in a channel URL.
So if you want to use a container in Azure Blob Storage as the channel for a private conda mirror, you need to set a public access level on the container, or use the static website hosting feature of Azure Storage.
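For example, both options can be set up with the Az PowerShell module (account name, key, and container name are placeholders):
$ctx = New-AzStorageContext -StorageAccountName "mystorage" -StorageAccountKey "<key>"
# Option 1: blob-level public read access on the container that holds the channel.
Set-AzStorageContainerAcl -Name "conda-channel-1" -Permission Blob -Context $ctx
# Option 2: serve the channel from the account's static website endpoint instead.
Enable-AzStorageStaticWebsite -Context $ctx -IndexDocument "index.html"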
Otherwise, a possible workaround is to set up a custom proxy for conda that automatically appends the SAS token query string to each resource URL of the channel; please refer to the document Using the .condarc conda configuration file to learn how to set a proxy server in the .condarc file.
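Conda reads proxy settings from .condarc, so the client side of that workaround might look like this (the proxy address is a placeholder; the proxy itself, which appends the SAS token, you would have to run yourself):
# Append a proxy_servers section to the user-level .condarc file.
@"
proxy_servers:
  http: http://localhost:3128
  https: http://localhost:3128
"@ | Add-Content "$env:USERPROFILE\.condarc"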
Hope it helps.
So I found a cool solution: I create an Azure Storage account and then a blob container that is publicly accessible, but to protect it I give the container a random name (a long random string), which in effect can act as a token. For example, in PowerShell:
PS C:> $azStorageName = "mystorage"
PS C:> $blobName = -join ((97..122) | Get-Random -Count 26 | ForEach-Object {[char]$_})
PS C:> "https://$azStorageName.blob.core.windows.net/$blobName"
https://mystorage.blob.core.windows.net/fwsjtizbpvaerukdomqhlgnycx
The code to generate the random string is based on this post.
Since Get-Random -Count 26 draws each of the 26 letters exactly once, the name is a random permutation of the alphabet, giving 26! (roughly 4 x 10^26) possible combinations, which I think is good enough for a secure token for this purpose.
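Creating the container with that name and blob-level public read access can then be done with the Az module (the account key is a placeholder):
$ctx = New-AzStorageContext -StorageAccountName $azStorageName -StorageAccountKey "<key>"
# Blob-level access: blobs are readable by URL, but the container cannot be listed,
# so the random name stays secret.
New-AzStorageContainer -Name $blobName -Permission Blob -Context $ctx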
I also plan on setting up CI in say Azure Pipelines, so that I can automate all this whenever I push code to the private GitHub repo.
(I'm quite new to Windows Azure development, so I hope I'm using the right terms.)
We have an Azure Worker Role that is supposed to fetch data stored in Blob Storage.
Somehow we occasionally get the following error message:
Microsoft.WindowsAzure.StorageClient.StorageServerException: The server encountered an unknown failure: The remote name could not be resolved: 'XXX.blob.core.windows.net' ---> System.Net.WebException: The remote name could not be resolved: 'XXX.blob.core.windows.net'
This seems strange, since requests made only a second before and/or after work as expected.
If I understand things correctly, the CloudBlob class has internal retry functionality, but it seems that this is not considered a "retryable" error. Is this perhaps handled by the Transient Fault Handling Application Block (Topaz), or do we have to handle this specific error in some other way?