How to upload a certificate to a VM (node) of an Azure cluster - azure

This line is creating a problem, as it requires the certificate to be present on the machine on which it is currently executing:
topologyConfigurationManager = new TopologyConfigurationManager(new Uri("https://int2.metrics.nsatc.net"), GenevaCertThumbprint, StoreLocation.LocalMachine, TimeSpan.FromMinutes(1));
I have gone through this link: deploying-application-certificates-to-the-cluster,
but I am still not able to work out how to upload a certificate to the VMs (nodes) of an Azure cluster. Can someone give me detailed steps on where to upload the cert (.pfx file)?

I had this same problem a few days ago. I needed to change to a new certificate because the old one had expired, and I solved it by deploying the Azure resource template for Service Fabric again, which means I basically recreated the whole environment.
In the template I only changed the certificate link and the thumbprint.
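For context, the pieces of the template that typically change are the Key Vault certificate URL that gets installed on the VMs and the thumbprint that the cluster resource trusts. A rough sketch of the VM scale set part (the vault resource ID and secret URL are placeholders):

"osProfile": {
  "secrets": [
    {
      "sourceVault": { "id": "<key-vault-resource-id>" },
      "vaultCertificates": [
        {
          "certificateUrl": "<key-vault-secret-url-of-the-new-certificate>",
          "certificateStore": "My"
        }
      ]
    }
  ]
}

The cluster resource (and the Service Fabric node extension) also carries the certificate thumbprint, which has to be updated to the new certificate's thumbprint in the same deployment.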

Finally got the answer:
Log in to a node of the remote cluster using the following command in cmd: mstsc /v:mycluster.eastus.cloudapp.azure.com:3389
where "mycluster.eastus.cloudapp.azure.com" is the cluster name. After logging in, install the certificates manually.
3389 is the first node, 3390 the second node, and so on.
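For the manual install step, a minimal sketch from an elevated cmd prompt on the node (assuming the .pfx has already been copied there; the path and password are placeholders):

certutil -f -p "PfxPassword" -importpfx "C:\temp\GenevaCert.pfx"

Run as administrator, this should land the certificate in the LocalMachine\My store, which matches the StoreLocation.LocalMachine used in the code snippet above.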

Related

az login issue when using the Azure command-line interface

I am using the Azure command-line interface on my Linux machine to run an image with Azure Container Instances.
I am facing an issue logging in with the az login command. I understand the issue is because I am working behind a corporate proxy, and I tried appending certificates to the cacert.pem file, but the issue is not resolved. I guess I might be doing something wrong that I am not able to identify; see the error in the screenshot. Please suggest which CA certificate I have to add to the cacert.pem file and how to get that certificate. Thanks in advance! Command used: az login
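For reference, a sketch of the CA-bundle approach being attempted here (the proxy CA file name and paths are placeholders): because the Azure CLI is Python-based, you can point it at a custom bundle through the REQUESTS_CA_BUNDLE environment variable instead of editing the bundled cacert.pem in place:

# build a bundle containing the default CAs plus the corporate proxy's root CA (PEM format)
cat /path/to/default/cacert.pem /path/to/corporate-proxy-root-ca.pem > ~/azure-ca-bundle.pem
export REQUESTS_CA_BUNDLE=~/azure-ca-bundle.pem
az login

The certificate to append is usually the root CA the corporate proxy uses to re-sign TLS traffic, obtainable from the proxy/IT team or exportable from a browser's certificate chain view.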

Installed certificates on Batch account and Pool not available for task

I have an Azure Batch account set up with a system-assigned identity (the account was created through Terraform, and user-assigned identities are not yet supported).
A certificate is available to the Batch account and on the pool as well.
When inspecting the node on the pool (scaled to one for now), it shows a certificate reference.
I've manually created a job and a simple task (/bin/bash -c 'ls -la $AZ_BATCH_CERTIFICATES_DIR/') to list the contents, and everything comes back empty.
This seems to be the case for all self-signed certificates I've used to try this.
Can somebody please point out what I'm doing wrong?
(I've tried all combinations of Task-NonAdmin, Task-Admin, Pool-NonAdmin, Pool-Admin together with LocalMachine and CurrentUser.)
Thanks all!
Well, this thing happened:
Issue with Windows LocalMachine certificates:
If you are adding certificate references on your pool which install into the Windows LocalMachine certificate store, and are running tasks without admin access which need access to the certificate's private key, your tasks will work on the old agent but not work in the new agent.
Only pfx files where your non-admin task needs access to the private key should be moved to "My" in CurrentUser
https://github.com/Azure/Batch/issues/1
If I upload the certs to CurrentUser\My, the tasks do get the certs.
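For reference, a rough sketch of the pool certificate reference that ends up working (the thumbprint is a placeholder; exact property casing can vary between the portal, REST API, and SDKs):

"certificateReferences": [
  {
    "thumbprint": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
    "thumbprintAlgorithm": "sha1",
    "storeLocation": "CurrentUser",
    "storeName": "My",
    "visibility": [ "Task" ]
  }
]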

How can I issue a certificate after I've moved to a new cluster?

I set up a prototype cluster in Azure Kubernetes Service to test the ability to configure HTTPS ingress with cert-manager. I was able to make everything work, and now I'm ready to set up my production environment.
The problem is I used the subdomain name I needed (sub.domain.com) on the prototype, and now I can't seem to make Let's Encrypt issue a certificate to the production cluster.
I'm still very new to Kubernetes and I can't seem to find a way to export or move the certificate from one to the other.
Update:
It appears that the solution provided below would have worked, but it came down to needing to suspend/turn off the prototype's virtual machine. Within a couple of minutes the production environment picked up the certificate.
You can just do something like:
kubectl get secret -o yaml
and copy/paste your certificate secret into the new cluster, or use something like Heptio Ark to do backup/restore.
P.S. I don't know why it wouldn't let you create a new cert; at worst you would need to wait 7 days for your rate limit to refresh.
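A sketch of that copy, assuming the cert-manager secret is called sub-domain-com-tls (a placeholder) and you have kubectl contexts named proto and prod for the two clusters:

# export the TLS secret from the prototype cluster
kubectl --context proto get secret sub-domain-com-tls -o yaml > tls-secret.yaml
# edit tls-secret.yaml to drop cluster-specific metadata (uid, resourceVersion, creationTimestamp),
# then create it in the production cluster
kubectl --context prod apply -f tls-secret.yaml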

DC/OS private registry with authentication fails

I have a running DC/OS cluster on Azure, and I'm trying to configure it to use private registry credentials.
I'm running an Azure container registry with the admin account enabled. I can log in and use the images.
I followed the guide provided by DC/OS, but it recommends saving the credentials on the nodes themselves. I want to use Azure File Storage instead.
I saved the config.json file used to authenticate to the login server in a blob and provide the URI in the deployment configuration.
config.json:
{
  "auths": {
    "stageon.azurecr.io": {
      "auth": "..."
    }
  }
}
Now the deployment just keeps running without any output, so I assume it's hanging on pulling the image.
I am providing the direct URL to the file, and when I access it through a web browser it returns the JSON.
Did anyone do something similar before? I found this thread for Amazon, but I can't seem to get it working.
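For context, the DC/OS guide approach this is based on usually fetches an archive of the whole .docker directory (not the bare config.json) into the task sandbox; roughly (the storage URL is a placeholder):

# on a machine where "docker login stageon.azurecr.io" has been done:
cd ~ && tar czf docker.tar.gz .docker
# upload docker.tar.gz to the Azure File/Blob storage location, then reference it
# in the Marathon app definition so the fetcher extracts it into the sandbox:
#   "uris": [ "https://<account>.blob.core.windows.net/<container>/docker.tar.gz" ]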
I've used a customization to acs-engine a few times to push registry credentials to the agent nodes.
This approach makes sure that the credentials will be present even when you add nodes later on.
The code is here: https://github.com/xtophs/acs-engine-1/tree/xtoph-registry. Example cluster API model is at: https://github.com/xtophs/acs-engine-1/blob/xtoph-registry/examples/privateregistry/dcos1.8.4.json

Azure webapp "Certificate with thumbprint 'XXXXXX...' not found" error

I'm having the following issue with one of my web apps in Azure. It was working fine, and suddenly I'm getting the following error, even though no changes were made to the web app:
"Certificate with thumbprint 'XXXX' not found at
APP.Security.CertificateEncryptionServiceProvider.FindCertificate(X509Store
certStore, String thumbprint) at
APP.Security.CertificateEncryptionServiceProvider.Decrypt(String
thumbprint, String encryptedSetting) at
APP.Configuration.CloudConfiguration.GetSetting"
The other web apps that use the same certificates are working perfectly fine.
When I navigate to the Kudu PowerShell console (https://MyAppThatHasProblems.scm.azurewebsites.net/DebugConsole/?shell=powershell) and go to the certificate store with
cd cert:/currentuser/my
I cannot see any certificates. If I do the same for any other working web app, I can see my certificates listed. I tried removing and re-adding the certificates, but no luck.
Anyone had a similar issue before?
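One hedged aside worth checking from that same Kudu PowerShell console: on App Service, uploaded certificates are only loaded into the worker's CurrentUser\My store when the WEBSITE_LOAD_CERTIFICATES app setting is present, so you can inspect both in one go:

# what the app actually sees in the store it is searching
Get-ChildItem cert:\currentuser\my
# the app setting (a thumbprint list or *) that controls which uploaded certs get loaded
$env:WEBSITE_LOAD_CERTIFICATES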
I managed to solve the issue with the help of Azure Support. After a series of investigations on the App Service, they told me to scale up from S2 Standard to S3 Standard and then back down to S2. Apparently this moves the App Service to a different virtual machine.
Problem fixed!
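If you'd rather do that scale hop from the CLI than the portal, a sketch (the plan and resource group names are placeholders):

az appservice plan update --name my-plan --resource-group my-rg --sku S3
az appservice plan update --name my-plan --resource-group my-rg --sku S2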
