Do we need kubernetes instance to connect Eclipse Hono with Eclipse Ditto both running in sandbox environment? - eclipse-hono

I would like to know whether we need a Kubernetes cluster, as required by the cloud2edge package mentioned in the link (https://www.eclipse.org/ditto/2018-05-02-connecting-ditto-hono.html), even if both Hono and Ditto are running in the sandbox environments provided for evaluation purposes?

When you use the sandboxes, you don't need your own Kubernetes cluster, no.
However, the sandboxes come with restrictions: you are not the "admin" of the two installations, so you can't, for example, create new tenants in Hono or new connections in Ditto.

Related

inject secrets (API keys etc) into node js project

I'm migrating a Node.js project from GCP to DigitalOcean.
I'm running this Node.js code on a Kubernetes cluster in DigitalOcean. I'm using GitHub Actions to automatically build a Docker image and deploy it to my Kubernetes cluster. Everything works as expected, but I have a question.
On GCP, I used Secret Manager to inject secrets (database credentials, API keys, ...) into my Node.js project. I'm looking for a similar solution on DigitalOcean. I found SecretHub, which looks interesting, but I'm unable to sign up.
I also found this from 1Password Connect, but it looks like I'd have to set up a server?
Does anyone know an interesting tool or trick to securely inject secrets into my Node.js code?
Yes, you can check out HashiCorp Vault, which is widely used with Kubernetes as a secrets solution to inject configuration and variables into Kubernetes deployments.
It's easy to set up and integrate with Kubernetes.
HashiCorp Vault: https://www.hashicorp.com/products/vault
The Enterprise version is paid, but the open-source version, which comes with a UI and login panel, will cover all your needs; it's safe, secure, easy to integrate, and fine for production use.
You can run it as a single Pod (Deployment) on the Kubernetes cluster.
Here you can follow a demo with a Minikube setup: https://learn.hashicorp.com/tutorials/vault/kubernetes-minikube?in=vault/kubernetes
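As a rough sketch of what Vault's Kubernetes sidecar injector looks like in practice: the injector watches for pod annotations and adds a Vault agent sidecar that renders secrets into the pod's filesystem. All names, roles, and secret paths below are placeholders, not anything from the question.

```yaml
# Sketch only: a Deployment pod template using the standard vault-k8s
# injector annotations. "my-node-app" and the secret path are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-node-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-node-app
  template:
    metadata:
      labels:
        app: my-node-app
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "my-node-app"
        vault.hashicorp.com/agent-inject-secret-db-creds: "secret/data/my-node-app/db"
    spec:
      containers:
      - name: app
        image: node:18
        # The injected agent sidecar renders the secret to
        # /vault/secrets/db-creds, which the Node.js process
        # can read at startup instead of using hardcoded values.
```

With this pattern the application code never talks to Vault directly; it just reads a file (or environment variables) that the sidecar keeps up to date.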

Host HashiCorp Vault in Azure App Services

Is it possible to host/deploy HashiCorp Vault on MS Azure App Services so that I can create, read, update and delete Vault secrets from my apps deployed on Azure App Services?
I can't find any documentation. I only know that I can host it on Windows virtual machine on-prem.
That seems doable. I can think of a few options (#1 is specifically for App Services, as you asked):
HashiCorp Vault -> Docker -> App Service: I'm assuming you are familiar with Docker, which is required for this step. You can build a container locally and deploy it to App Service.
To do this, create a Dockerfile and, as part of the build, use brew to download Vault.
You will need a multi-stage Dockerfile that installs Node and brew first.
Once that's done, the next build step is to get HashiCorp Vault via brew (https://www.vaultproject.io/downloads).
Alternatively, you could download the packages on your machine using brew and then package your container.
You can run your container locally, make any configuration changes you prefer, and create the image once you are ready.
Once your image is in your preferred registry, you can follow the Microsoft guide to deploy it: https://learn.microsoft.com/en-us/learn/modules/deploy-run-container-app-service/
HashiCorp integration with Azure: Vault can be integrated with Azure (https://www.hashicorp.com/integrations/microsoft) and is ready to be used at scale.
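For option #1, a minimal Dockerfile sketch might look like the following. It pulls the Vault binary from HashiCorp's release archive rather than brew (brew works too, but the archive keeps the image small); the Vault version pinned here is an assumption, so adjust it to whatever you need.

```dockerfile
# Sketch only: package Vault in a container for App Service.
# The version below is an assumption -- pin your own.
FROM alpine:3.18
ARG VAULT_VERSION=1.13.3
RUN apk add --no-cache curl unzip \
 && curl -fsSL "https://releases.hashicorp.com/vault/${VAULT_VERSION}/vault_${VAULT_VERSION}_linux_amd64.zip" \
      -o vault.zip \
 && unzip vault.zip -d /usr/local/bin \
 && rm vault.zip
EXPOSE 8200
# Dev mode is for local testing only; use a real server config
# file (storage backend, TLS) before going anywhere near production.
CMD ["vault", "server", "-dev", "-dev-listen-address=0.0.0.0:8200"]
```

Build and run it locally (`docker build -t my-vault . && docker run -p 8200:8200 my-vault`) before pushing the image to your registry.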
I think a better option would be to run Vault in an Azure Container Instance. You can find the official Vault container here: https://github.com/hashicorp/docker-vault
The App Service platform's execution environment differs from a local one mainly because of multi-tenancy: a single physical machine in the data center can concurrently execute apps and services belonging to a large number of different customers, so resources are more constrained than for an app running on a single machine. The sandbox mechanism mitigates the risk of service disruption due to resource contention and depletion in two ways: it (1) ensures that each app receives a minimum guarantee of resources and quality of service, and conversely (2) enforces limits so that an app cannot disrupt other concurrently executing apps on the same machine.
More Details on Azure App Service Sandbox: https://github.com/projectkudu/kudu/wiki/Azure-Web-App-sandbox

Restricting allowed kubernetes types to be deployed with ArgoCD

We'd like to allow our developers to automatically deploy their changes to a Kubernetes cluster by merging their code and k8s resources into a Git repo watched by ArgoCD. The release management team would be responsible for managing the ArgoCD config and setting up new apps, as well as for creating namespaces, roles, and role bindings on the cluster, while the devs should be able to deploy their applications through GitOps without needing to interact with the cluster directly. Devs might have read access on the cluster for debugging purposes.
Now the question: in theory, a dev could create a new YAML file specifying a RoleBinding resource that binds their account to a cluster-admin role. Since ArgoCD has cluster-admin rights, this would be a way for the dev (or an attacker impersonating a developer) to escalate privileges.
Is there a way to restrict which k8s resources are allowed to be created through ArgoCD?
EDIT:
According to the docs, this is possible per project using clusterResourceWhitelist.
Is it possible to do that globally?
You are right about the Argo CD project. The AppProject CRD supports allowing/denying K8s resources via the clusterResourceWhitelist, clusterResourceBlacklist, and related fields. A sample project definition is also available in the Argo CD documentation.
To restrict the list of managed resources globally, you can specify the resource.exclusions/resource.inclusions fields in the argocd-cm ConfigMap. An example is available here.
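For illustration, a sketch of both mechanisms, aimed at the RBAC-escalation concern from the question. The project name and the group/kind choices are placeholders; adapt them to your own policy.

```yaml
# Per-project restriction via the AppProject CRD: deny all cluster-scoped
# resources and block RoleBindings even at namespace scope.
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: dev-apps            # placeholder project name
  namespace: argocd
spec:
  sourceRepos:
  - "*"
  destinations:
  - namespace: "*"
    server: "*"
  clusterResourceWhitelist: []   # empty list = no cluster-scoped resources
  namespaceResourceBlacklist:
  - group: rbac.authorization.k8s.io
    kind: RoleBinding            # closes the privilege-escalation path
---
# Global restriction via the argocd-cm ConfigMap, applying to all projects:
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  resource.exclusions: |
    - apiGroups:
      - rbac.authorization.k8s.io
      kinds:
      - ClusterRole
      - ClusterRoleBinding
      clusters:
      - "*"
```

Excluded resources are simply ignored by Argo CD: they are neither synced nor shown as out-of-sync, so devs cannot smuggle them in through the Git repo.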

Pre-define custom scripts to run on Azure Container Service cluster VMs

I would like to provide some pre-defined scripts that need to be run on Azure Container Service Swarm cluster VMs. Is there a way I can provide these pre-defined scripts to ACS so that, when the cluster is deployed, they automatically get executed on all the VMs of that ACS cluster?
This is not a supported feature of ACS, though you can run arbitrary scripts on the VMs once they are created. Another option is to use ACS Engine, which allows much more control over the configuration of the cluster, as it outputs templates that you can configure. See http://github.com/Azure/acs-engine.
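Since ACS Engine outputs ARM templates, one way to run a bootstrap script on each VM is to add a CustomScript extension resource to the generated template. A sketch of such a resource follows; the script URL, extension name, and parameter names are placeholders for your own values.

```json
{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(parameters('vmName'), '/setup-script')]",
  "apiVersion": "2019-07-01",
  "location": "[resourceGroup().location]",
  "properties": {
    "publisher": "Microsoft.Azure.Extensions",
    "type": "CustomScript",
    "typeHandlerVersion": "2.0",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "fileUris": [ "https://example.com/bootstrap.sh" ],
      "commandToExecute": "bash bootstrap.sh"
    }
  }
}
```

The extension runs once per VM at deployment time, which gets you the "execute on every node when the cluster comes up" behavior the question asks about.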

Troubleshooting Azure Service Fabric: "The ServiceType was not registered within the configured timeout."

I have deployed a Web API written with .NET Core to a local dev Azure Service Fabric cluster on my machine. I have plenty of disk space, memory, etc., and the app gets deployed there. However, success is intermittent. Sometimes it doesn't deploy to all the nodes, and now I have it deployed to all the nodes, but within the Azure Service Fabric manager page, I see each application on each node has an Error status with the message: "The ServiceType was not registered within the configured timeout." I don't THINK I should have to remove and redeploy everything. Is there some way I can force it to 're-register' the installed service type? Microsoft docs are really thin on troubleshooting these clusters.
Is there some way I can force it to 're-register' the installed service type?
On your local machine, you can set the deployment to always remove the application when you're done debugging. However, if it's not completing in the first place, I'm not sure this workflow would still work.
Since we're on the topic: in the cloud, I think you'd just have to use PowerShell scripts to first compare the existing app types and versions and remove them before "updating". Since orchestrating this is complicated, I like to use tools to manage it.
In VSTS, for instance, there is an "Overwrite SameAppTypeAndVersion" option.
And finally, if you're just tired of using the Service Fabric UI to remove the application over and over while you troubleshoot, it might be faster to use the icon in the system tray to reset the local cluster to a fresh state.
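For the manual PowerShell route mentioned above, the sequence is roughly: remove the application instance, then unregister its type so the next deployment registers it cleanly. A sketch, where the endpoint, application, and type names are placeholders for your own:

```powershell
# Sketch only -- application/type names below are placeholders.
Connect-ServiceFabricCluster -ConnectionEndpoint localhost:19000

# Remove the running application instance first...
Remove-ServiceFabricApplication -ApplicationName fabric:/MyWebApi -Force

# ...then unregister the application type and version, so a fresh
# deployment re-registers the ServiceType from scratch.
Unregister-ServiceFabricApplicationType -ApplicationTypeName MyWebApiType `
    -ApplicationTypeVersion 1.0.0 -Force
```

Running these against the local cluster is effectively what the "reset cluster" tray option does in bulk, but scoped to a single application.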
