I am debugging a cluster where the nodes are not coming online after deployment using an ARM template. I think the issue has something to do with the certificate.
I have the following events that might help figure out what the issue is:
SecurityUtil::GetX509SvrCredThumbprint(LocalMachine, My, FindByThumbprint:6a187334b4ba95589cd5ee733b9ca1c3499eab5f) failed: FABRIC_E_INVALID_CREDENTIALS
Unable to acquire ssl credentials: FABRIC_E_INVALID_CREDENTIALS
failed to set security settings to { provider=SSL protection=EncryptAndSign certType = 'cluster' store='LocalMachine/My' findValue='FindByThumbprint:6a187334b4ba95589cd5ee733b9ca1c3499eab5f' remoteCertThumbprints='6a187334b4ba95589cd5ee733b9ca1c3499eab5f' certChainFlags=40000000 isClientRoleInEffect=false claimBasedClientAuthEnabled=false }: FABRIC_E_INVALID_CREDENTIALS
Failed to set security on transport: FABRIC_E_INVALID_CREDENTIALS
federation open failed with FABRIC_E_INVALID_CREDENTIALS
Fabric Node open failed with error code = FABRIC_E_INVALID_CREDENTIALS
HostedService: _nt1vm_0 on node id 72e0ec579b75d9847ba5a43d6b365d7c terminated unexpectedly with code 7167 and process name Fabric.exe
The thumbprint matches the expected cert used in the template deployment.
The certificate was created in C# and stored in a secret with var certBase64 = Convert.ToBase64String(x509Certificate.Export(X509ContentType.Pkcs12)); and content type = application/x-pkcs12.
I was using expired certificates, which caused this. :(
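For anyone hitting this later: it is easy to sanity-check the validity window of the exported PFX before deploying. A minimal sketch in Python using the cryptography package (cert_base64 is a placeholder for the same base64 string that was stored in the secret; on cryptography versions before 42 the properties are named not_valid_before/not_valid_after):
import base64
from datetime import datetime, timezone
from cryptography.hazmat.primitives.serialization import pkcs12

cert_base64 = "<base64 PFX from the Key Vault secret>"  # placeholder
# load_key_and_certificates returns (private_key, certificate, additional_certs)
key, cert, extra = pkcs12.load_key_and_certificates(base64.b64decode(cert_base64), password=None)
now = datetime.now(timezone.utc)
print("certificate valid:", cert.not_valid_before_utc <= now <= cert.not_valid_after_utc)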
I am following the steps from this guide to connect to the on-premises database using Spark.
https://learn.microsoft.com/en-us/azure/synapse-analytics/spark/data-sources/apache-spark-sql-connector
I tried this code:
servername = "XXXXX"
dbname = "poplesdb"
url = servername + ";" + "databaseName=" + dbname + ";"
dbtable = "Test"
user = "test\user"
password = mssparkutils.credentials.getSecret('xxxx','xxxxxxx')
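For context, once getSecret works, the guide then reads the table through the connector roughly like this (a sketch reusing the variables above; spark is the session Synapse provides):
df = spark.read \
    .format("com.microsoft.sqlserver.jdbc.spark") \
    .option("url", url) \
    .option("dbtable", dbtable) \
    .option("user", user) \
    .option("password", password) \
    .load()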
I got this error:
Py4JJavaError: An error occurred while calling z:mssparkutils.credentials.getSecret.
: com.twitter.finagle.NoBrokersAvailableException: No hosts are available for XXXX.vault.azure.net:443, Dtab.base=[], Dtab.local=[]. Remote Info: Not Available
I have been trying this connection test for days; can anyone help me?
Here are my screenshots of the linked service. I need to connect to the source inside the retail database.
This error indicates that a request failed because no servers were available. It typically occurs under one of the following conditions:
- The cluster is actually down; no servers are available.
- A service discovery failure. This can be due to a number of causes, such as the client being constructed with an invalid cluster destination name or a failure in the service discovery system (e.g. DNS).
A good way to diagnose a NoBrokersAvailableException is to reach out to the owners of the service the client is attempting to connect to and verify that the service is operational. If so, investigate the service discovery mechanism that the client is using.
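In this Synapse case specifically, if the vault itself is up, it is worth checking that the first argument to getSecret is the vault name (the XXXX in XXXX.vault.azure.net) rather than the linked service name, or going through the Key Vault linked service explicitly, since getSecret also accepts the linked service name as a third argument. A sketch with placeholder names:
password = mssparkutils.credentials.getSecret('myKeyVaultName', 'mySecretName', 'myKeyVaultLinkedService')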
While applying terraform plan, we are getting the error logs highlighted in the panic output box. It had been working well across multiple terraform plan, apply, and destroy runs.
We are unable to make a meaningful summary of this error. We have searched Stack Overflow and the GitHub issue forums, yet nothing matched what we are facing, even closely.
Terraform version: 1.1.8
Azure Provider Plugin Version: 3.3.0
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.3.0"
    }
  }
  backend "azurerm" {
    resource_group_name  = "abcd"
    storage_account_name = "tfstate30303"
    container_name       = "tfstate"
    key                  = "terraform.tfstate"
    access_key           = "abcdsample....."
  }
}
provider "azurerm" {
  features {}
  skip_provider_registration = true
}
Error:
Failed to load plugin schemas
Error while loading schemas for plugin components: Failed to obtain provider schema: Could not load the schema for provider registry.terraform.io/hashicorp/azurerm: Plugin did not respond. The plugin encountered an error, and failed to respond to the plugin. (*GRPCProvider).getProviderSchema call. The plugin logs might contain more details.
TF_LOG=TRACE error log:
[ERROR] plugin (*GRPCProvider).getProviderSchema: error="rpc error: code=Unavailable desc = connection error: desc = "transport:error while dialing: dial tcp 127.0.0.1:10000: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. [WARN]: plugin failed to exit gracefully
Note: the Azure region is Central India. It was working fine until yesterday. We are working in an air-gapped environment, so plugins are downloaded manually and placed in the plugin directory of the code folder.
Please let me know if there is any mistake on my end. I am unable to make meaning out of this error; I have never faced it with Terraform before.
Thank you CK5, you got it working by simply rebooting the system. Posting this as the solution, with an RCA for why you were getting the error.
RCA: You may earlier have installed another plugin for the same Terraform files and later deleted it; now that you are pinning a specific provider version, that stale plugin state can break communication with Azure even though the plugin is installed.
So if you hit this kind of error, reboot your system and run the VS Code editor as administrator so the plugin syncs properly and can communicate with Azure.
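If a reboot alone does not help in an air-gapped setup like this one, it may also be worth re-initializing against the local plugin directory so the provider binary is re-linked (the path here is just an example):
terraform init -plugin-dir=./plugin-dir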
When using the Data Factory connector to Snowflake, I consistently get the error message below. Does anyone have any idea how to fix this?
I am using an Azure-managed integration runtime.
ERROR [HY000] [Microsoft][Snowflake] (4) REST request for URL
https://xxxxxxx.east-us-2.azure.snowflakecomputing.com.snowflakecomputing.com:443/session/v1/login-request?requestId=2fb149b1-5f57-47ad-a471-8a8db718336c&request_guid=25dcec4f-f680-4f18-b018-363084843708&databaseName=DEMO_DB&warehouse=COMPUTE_WH failed: CURLerror (curl_easy_perform() failed) - code=60 msg='SSL peer
certificate or SSH remote key was not OK'.
Activity ID: 376547c0-6604-454d-b881-544cb6e7811a.
Probably not a good idea, from a security perspective, to leave your account ID visible like this.
Anyway, the issue is probably that you have misconfigured your connection, as snowflakecomputing.com is repeated: ...snowflakecomputing.com.snowflakecomputing.com
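Assuming the duplicated suffix comes from the linked service's account name field, that field should contain only the account identifier, since the connector appends snowflakecomputing.com itself. Roughly:
Incorrect: xxxxxxx.east-us-2.azure.snowflakecomputing.com
Correct: xxxxxxx.east-us-2.azure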
I'm trying to create a project in the labeling tool from Azure Form Recognizer. I have successfully deployed the web app, but I'm unable to start a project. I get this error every time I try:
I have tried with several app instances and changing the project name and connection name; none of those work. The only common factor I have found is that the error is related to the connection.
As I see it:
1) I can either start a new project or use one on the cloud:
First I tried to create a new project:
I filled the fields with these values:
Display Name: Test-form
Source Connection: <previously created connection>
Folder Path: None
Form Recognizer Service Uri: https://XXX-test.cognitiveservices.azure.com/
API Key: XXXXX
Description: None
And got the error from the question's title:
"Invalid resource name creating a connection to azure storage "
I tried several combinations of names; none of those worked.
Then I tried with the option: "Open a cloud project"
Got the same error instantly, hence I deduce the issue is with the connection settings.
Now, In the connection settings I have this:
At first glance, since the values are accepted and the connection is created, I assume it is correct, but it is the only point of failure I can think of.
Regarding the storage container settings, I added the required CORS configuration, and I have used the container to train models with Form Recognizer, so that part does work.
At this point I am pretty much stuck, since the error message does not give me many clues about where the error is.
I was facing a similar error today.
You have to add the container name before the "?sv..." part of the SAS URI in your connection settings:
https://****.blob.core.windows.net/**trainingdata**?sv=2019-10-10..
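In other words, the tool appears to expect a SAS URI scoped to the container rather than the whole account, so the container path segment (trainingdata in the example above) has to be present in the URI.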
I am trying to declare the following Terraform provider:
provider "mysql" {
endpoint = "${aws_db_instance.main.endpoint}:3306"
username = "root"
password = "root"
}
I get the following error:
Error refreshing state: 1 error(s) occurred:
* dial tcp: lookup ${aws_db_instance.main.endpoint}: invalid domain name
It seems that Terraform is not performing interpolation on my endpoint string, yet I don't see anything in the documentation about this -- what gives?
Yes, Terraform does perform interpolation there. There's an example in the docs at https://www.terraform.io/docs/providers/mysql/
# Configure the MySQL provider based on the outcome of
# creating the aws_db_instance.
provider "mysql" {
  endpoint = "${aws_db_instance.default.endpoint}"
  username = "${aws_db_instance.default.username}"
  password = "${aws_db_instance.default.password}"
}
I ran into a similar set of error messages ("connect failed," "invalid domain lookup") and looked into this a bit. I hope this helps you or someone else working across cloud and database providers in Terraform.
This seems to come down to the MySQL provider attempting to establish a database connection as soon as it's initialized, which could be a problem if you're trying to build a database server and configure the database / grants on it as part of the same Terraform run. Providers get initialized based on Terraform finding a resource owned by that provider in your Terraform code, and since this connection attempt happens when the provider gets initialized, you can't work around this with -target=<SPECIFIC RESOURCE>.
The workarounds I can think of would be to have a codebase for setting up the database server and a different codebase for setting up the database grants and suchlike ... or to have Terraform kick off a script that does that work for you (with dynamic parameters, of course!). Either way, you're effectively removing mysql_* resources from your initial Terraform run and that's what fixes this.
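As a concrete sketch of the split-codebase option (the directory names are made up), the idea is simply to apply the two configurations in order:
cd db-server && terraform apply    # creates the aws_db_instance; no mysql_* resources here
cd ../db-grants && terraform apply # the mysql provider initializes now that the endpoint exists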
There are a couple of code changes that probably need to happen here - the Terraform MySQL provider would need to delay connecting to the database until Terraform tells it to run an operation on a resource, and it may be necessary to look at how Terraform handles dependencies across providers. I tried hacking in deferred connection logic just for the mysql_database resource to see if that solved all my problems and Terraform still complained about a dependency loop in the graph.
You can track the MySQL provider issue here:
https://github.com/terraform-providers/terraform-provider-mysql/issues/2
And the comments from before providers were split into their own releasable codebases:
https://github.com/hashicorp/terraform/issues/5687