I have written my chaincode in Go and now I want to deploy it on a cloud (AWS).
Is the Go file enough to deploy it, or do I need to package additional files for deployment?
Just the Go file. The deployment will also ask for the dependencies, but you can fetch them on the machine where you deploy the chaincode.
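If it helps, here is a minimal sketch of one common way to bundle the dependencies with the chaincode source before deploying (this assumes Go modules and the Fabric 2.x lifecycle; the mycc names are illustrative):

# Vendor the Go dependencies next to the chaincode source
cd mychaincode/
go mod vendor
# Package the source (vendor/ included) so peers can build it without network access
peer lifecycle chaincode package mycc.tar.gz --path . --lang golang --label mycc_1.0

The vendor/ directory is what lets the target machine build the chaincode without fetching dependencies over the network.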
I'm migrating a Node.js project from GCP to DigitalOcean.
I'm running this Node.js code on a Kubernetes cluster in DigitalOcean. I'm using GitHub Actions to automatically build a Docker image and deploy it to my Kubernetes cluster. Everything works as expected, but I have a question.
On GCP, I used Secret Manager to inject secrets (database credentials, API keys, ...) into my Node.js project. I am looking for a similar solution on DigitalOcean. I found SecretHub, which looks interesting, but I'm unable to sign up.
I also found this from 1Password Connect, but it looks like I have to set up a server?
Does anyone know an interesting tool or trick to securely inject secrets into my Node.js code?
Yes, you can check out HashiCorp Vault, which is commonly used with Kubernetes as a secrets solution to inject configuration and variables into Kubernetes deployments.
It's easy to set up and integrate with Kubernetes.
HashiCorp Vault: https://www.hashicorp.com/products/vault
The Enterprise version is paid, but the open-source version covers all of these needs, including a UI and login panel. It's safe and secure enough to use in production and easy to integrate.
You can run it as a single pod (deployment) on the Kubernetes cluster.
Here you can follow a demo with a Minikube setup: https://learn.hashicorp.com/tutorials/vault/kubernetes-minikube?in=vault/kubernetes
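As a minimal sketch of what the setup looks like (the chart value and secret paths are illustrative, not a definitive recipe):

# Run Vault itself as a pod on the cluster via the official Helm chart,
# with the sidecar injector enabled
helm repo add hashicorp https://helm.releases.hashicorp.com
helm install vault hashicorp/vault --set "injector.enabled=true"
# Store the app's credentials in Vault's key/value engine
vault kv put secret/myapp db_user=admin db_password=s3cr3t

From there, the injector can mount secrets into your Node.js pods via annotations on the deployment, so credentials never live in the container image or in plain Kubernetes manifests.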
I'm looking into using Terraform to automate setting up an environment for demos.
It works for a VM instance and can be fully automated, but management prefers to use Cloud Run with Docker containers.
When I read this article, it starts with manually having to build and register a Docker container. I don't get that step; why can't that be automated with Terraform as well?
Terraform is a deployment tool. More or less, it invokes APIs to create, update, or delete things. So, what do you want to do here? Take a container and deploy it on Cloud Run. Building sources, uploading files, and performing a git clone aren't actions designed for Terraform.
It's not surprising to have a CI pipeline that builds things and, at the end, a CD tool called for the deployment.
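A minimal sketch of that split might look like this (the project ID, image name, and variable name are illustrative):

# CI stage: build and push the container image (not Terraform's job)
gcloud builds submit --tag gcr.io/my-project/demo-app:v1
# CD stage: Terraform creates/updates the Cloud Run service from that image,
# e.g. via a google_cloud_run_service resource that takes the tag as a variable
terraform apply -var "image=gcr.io/my-project/demo-app:v1"

The build step can itself be automated (Cloud Build triggers, GitHub Actions, etc.); it just lives outside Terraform.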
I'm hosting a Node.js web app with Firebase, and I need to run a PowerShell script. I have installed the node module "node-powershell", which works perfectly locally; however, when deployed, it tells me that I need to install PowerShell (install it on the Firebase 'computer'). Is there any way to do this?
Firebase Hosting is a so-called static hosting service. This means it serves the content as is, it does not interpret/execute that content in any way.
So most likely you're using the Cloud Functions integration with Firebase Hosting to run those Node scripts. That turns this into a question of whether Cloud Functions can run PowerShell scripts.
I don't immediately see an answer there, although you could potentially upload the binary yourself if it is available for the platform Cloud Functions runs on (Debian). For an example of this, see Can you call out to FFMPEG in a Firebase Cloud Function
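If you try that bundled-binary route, a rough sketch could look like this (the release URL and version are illustrative; check the PowerShell releases page for a current Linux x64 build, and note this is untested on Cloud Functions):

# Download a self-contained Linux build of PowerShell into the functions folder
curl -L -o pwsh.tar.gz https://github.com/PowerShell/PowerShell/releases/download/v7.2.0/powershell-7.2.0-linux-x64.tar.gz
mkdir -p functions/pwsh && tar -xzf pwsh.tar.gz -C functions/pwsh
# Deploy the binary with the function code, then spawn ./pwsh/pwsh from Node
# via child_process instead of relying on a system-wide install
firebase deploy --only functions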
It is possible to update app.yaml or dispatch.yaml for services running in Google Cloud Platform by running the following in the terminal:
gcloud app deploy dispatch.yaml
However, when I replace dispatch.yaml with server.js, I get the following message:
ERROR: (gcloud.app.deploy) [path to the file] could not be identified as a valid source directory or file.
Is completely redeploying the application the only way?
gcloud app deploy takes YAML configuration files as input to determine which aspects of your application's configuration will be updated. If you specify gcloud app deploy app.yaml, the tool will deploy a new version of your app. If you want to overwrite an existing version, use gcloud app deploy app.yaml --version=NAMEOFCURRENTVERSION
If you need to upload changed files, you need to redeploy the app. It's tempting to think of App Engine as a standard web hosting environment, but the application code is containerized and possibly running in multiple instances. You don't have direct access to the files for things like direct editing or replacement.
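Putting that together, the workflow might look like this (the version ID is illustrative):

# See which versions exist and which one is serving traffic
gcloud app versions list
# Redeploy the code, overwriting an existing version instead of creating a new one
gcloud app deploy app.yaml --version=20210101t120000
# Routing-only changes can still be pushed without redeploying the code
gcloud app deploy dispatch.yaml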
Is it possible to set up Continuous Integration on VSTS without using an external VM as the build agent (https://azure.microsoft.com/en-us/documentation/articles/service-fabric-set-up-continuous-integration/)?
What I would like to achieve is to have one Service Fabric solution with 2 stateful/stateless services (serviceA and serviceB). I want to build and deploy them separately as different build jobs on VSTS, but deploy them to the same Service Fabric cluster on Azure (fabric:/App/ServiceA, fabric:/App/ServiceB).
As of the Service Fabric SDK 2.1.150 and Runtime 5.1.150 release, it is possible to deploy a Service Fabric application using VSTS's hosted build agent, as the dependencies can be added via a NuGet package; refer to the following video for details: http://www.dotjson.uk/azure-service-fabric-continous-integration-and-deployment-in-15-minutes/
In your specific case, just create 2 build definitions (1 for each service) and 2 release definitions (1 for each service), and hook them up to the same hosted Service Fabric cluster.
Unfortunately, deploying applications relies on the Service Fabric SDK being installed, so you'll need to set up an agent as the instructions suggest. If you don't want to pay for the Azure VM, you might want to consider running the agent service locally, e.g. on your dev box.
Note that with Service Fabric you deploy applications, not services. You can however update services independently.
It sounds like you need to have the Service Fabric SDK installed on the build machine, and I'm guessing the hosted agent doesn't have that. If that's the case, then yes, you need to create your own build server VM.