I'm trying to follow tutorials on using MBrace with F# (one of them is a YouTube video). The problem is that all the videos I've seen either use Azure or run some form of pre-built local cluster on the machine.
Since I won't be using Azure for now, how do I set up a local cluster that I can use to test MBrace locally without having to go online?
If you want to test MBrace with a local cluster on your machine, you can
git clone https://github.com/mbraceproject/MBrace.Core and, for a sample, check https://github.com/mbraceproject/MBrace.Core/blob/master/samples/wordcount.fsx
One important note: we are currently working towards MBrace 1.0, so you may find some API differences between MBrace.Core and MBrace.StarterKit (https://github.com/mbraceproject/MBrace.StarterKit).
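For a purely local, offline cluster, the MBrace.Thespian runtime is the usual route. A minimal sketch (assuming the MBrace.Thespian package is referenced, e.g. via the StarterKit's ThespianCluster.fsx load script — exact bootstrap may differ between versions):

```fsharp
// Sketch only: assumes MBrace.Thespian (the standalone, non-Azure runtime)
// is referenced in your script.
open MBrace.Core
open MBrace.Thespian

// Spawn a cluster of worker processes on this machine — no Azure involved.
let cluster = ThespianCluster.InitOnCurrentMachine(workerCount = 4)

// Run a trivial cloud computation on the local cluster.
let result = cloud { return 6 * 7 } |> cluster.Run
printfn "result = %d" result
```

Once this works, the wordcount.fsx sample linked above can be run against the same `cluster` value.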
I would like to use Terraform for my project's infrastructure and am starting to look into it for local development. I am brand new to it, so apologies if this is a basic thing to know. From my research so far, it appears to be most useful when deploying to a cloud provider. Is it possible to use it for local development, such as managing local Docker containers, or am I better off waiting and using it when I start to set things up in the cloud?
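Terraform can manage local Docker containers via the Docker provider, so local development is a supported use case. A hedged sketch (the `kreuzwerker/docker` provider is the community-maintained one; names and ports here are illustrative):

```hcl
terraform {
  required_providers {
    docker = {
      source = "kreuzwerker/docker"
    }
  }
}

# Talks to the local Docker daemon by default.
provider "docker" {}

resource "docker_image" "nginx" {
  name = "nginx:latest"
}

# A hypothetical local container managed entirely by Terraform.
resource "docker_container" "web" {
  name  = "tf-local-demo"
  image = docker_image.nginx.image_id
  ports {
    internal = 80
    external = 8080
  }
}
```

`terraform apply` against this configuration would pull the image and start the container locally, with no cloud account involved.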
I am working on an app that needs to be self-hosted on a Windows 10 PC where all the clients are inside a company network. I am using docker-compose for the various microservices and I am considering JHipster for an API Gateway and Registry service.
As I am reading the documentation, there is a line in the JHipster docs (https://www.jhipster.tech/jhipster-registry/) that says "You can run a JHipster Registry instance in the cloud. This is mandatory in production, but this can also be useful in development". I am not a cloud expert, so I'm not sure what is different about the environments that would cause the Registry app issues when running on a local PC. Or perhaps there is a way to give the local Windows PC a 'cloud' environment.
Thanks for any insight you have for me.
I think that this sentence does not make sense.
Of course, you must run a registry if you generated apps with this option but it does not mean that you must run it in the cloud. The same doc shows you many alternative ways to run it.
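Since you are already using docker-compose, running the registry as another local container is one of those alternatives. An illustrative fragment (image tag, port, and profiles are assumptions — check the JHipster docs for your version):

```yaml
# docker-compose.yml fragment (illustrative)
services:
  jhipster-registry:
    image: jhipster/jhipster-registry:latest
    ports:
      - "8761:8761"
    environment:
      # 'native' serves config from the local filesystem instead of a git repo
      - SPRING_PROFILES_ACTIVE=dev,native
```

Your gateway and microservices would then point their Eureka client configuration at `http://jhipster-registry:8761` on the compose network.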
Running all your microservices on a single PC is a bit strange; it defeats the purpose of a microservice architecture because you get a single point of failure that can't scale. Basically, you are paying for extra complexity without getting the benefits, where a monolith would be much simpler and more productive.
How many developers are working on your app?
I am trying to figure out the best way for us to use a local IDE such as Microsoft Visual Studio Code with distributed computing power. Currently we are bringing data down locally, but that doesn't seem like a sustainable solution, for reasons such as future data growth, cloud data security, etc. One workaround we thought of is to tunnel into EC2 instances, but I would like to hear the best way to solve this in a machine learning/data science environment (we are using Databricks and AWS services).
I'm not sure why you would bring the data down to the local machine rather than connect the IDE to the remote compute. I have used VS Code for running scripts against an HDInsight cluster: before I fire my scripts, I configure the cluster they are going to run against. The same is true for Databricks.
I will shortly be in the process of rewriting a Node app with the intention of implementing Continuous Integration and TDD.
I also want to design and set up a deployment pipeline for development, staging, and production.
Currently I'm using Shipit to push changes to different instances that have pre-configured environments. I've heard about deploying Docker containers with the needed environments, and I'd like to learn more about that.
I'm looking at TravisCI for automated testing/builds, and from my understanding, one can push the Docker image to a registry after the build succeeds.
I'm also learning about scaling, and looking at a design for production that incorporates Google Cloud servers/services serving 3 clustered instances of the Node app, a Redis cluster, and 2 PostgreSQL nodes, with each service behind a load balancer.
I've heard of Kubernetes being used to manage and deploy containerized applications, but I'm curious on how it all fits together.
In my head I think that it would seem like the process would be as follows:
commit changes on dev machine - push to repository.
TravisCI builds and runs tests (what about migrations and pushing changes to the PostgreSQL service?), then pushes to the Google Cloud Container Registry.
Log into the Google Container Engine and run the app with Kubernetes.
What about the Redis Cluster? The PostgreSQL nodes?
I apologize in advance if this question is lacking in clarity and knowledge, but I'm trying to learn more before I move along.
Have you considered Google Cloud Container Builder? It's very easy to set up a trigger from your GitHub repository, which would start a new build on changes (branch or tag).
As part of the build, you can push the new image to GCR.
And you could also deploy to Kubernetes as part of the same build.
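The build-push-deploy sequence above can be sketched as a `cloudbuild.yaml`. The builder images and `$PROJECT_ID`/`$COMMIT_SHA` substitutions are standard Container Builder features; the app name, deployment name, zone, and cluster name are assumptions you would replace:

```yaml
# cloudbuild.yaml sketch (my-node-app, my-cluster, and the zone are placeholders)
steps:
  # Build the image from the repository's Dockerfile
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-node-app:$COMMIT_SHA', '.']
  # Push it to the Container Registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/my-node-app:$COMMIT_SHA']
  # Roll the new image out to an existing Kubernetes deployment
  - name: 'gcr.io/cloud-builders/kubectl'
    args: ['set', 'image', 'deployment/my-node-app',
           'my-node-app=gcr.io/$PROJECT_ID/my-node-app:$COMMIT_SHA']
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
      - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster'
images:
  - 'gcr.io/$PROJECT_ID/my-node-app:$COMMIT_SHA'
```

Stateful services like the Redis cluster and PostgreSQL nodes are usually provisioned separately (managed services or their own manifests) rather than redeployed on every app build.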
The Car-Lease-Demo seems to be a perfect demo to understand Hyperledger Fabric. However, it seems to be configured to run in IBM Cloud, is anyone successful in running it locally?
I presume that you are referring to this demo. I have not tried this, but it should be possible to run all of this on your laptop. First, follow the directions for running one or more peer instances (and a CA) here. Then, you should be able to run the demo server after a few tweaks.
Looking at the code, you'd have to set some environment variables (VCAP_APP_HOST and VCAP_APP_PORT) to run the node app locally, as these will not be provided unless running in a Cloud Foundry environment.
Further, you'll need to change Server_Side/configurations/configuration.js to provide appropriate values for config.api_* as those values are also specific to the IBM Blockchain service running in Bluemix.
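Putting the environment-variable part together, a local run might look like the following (the host/port values and the entry-point file are assumptions — check the repo's package.json for the actual start script):

```shell
# Supply the values Cloud Foundry would normally inject.
export VCAP_APP_HOST=localhost
export VCAP_APP_PORT=3000

# Then start the demo's node server from the repo root, e.g.:
# node Server_Side/app.js   # entry point is an assumption
```

With those set (and configuration.js pointing at your local peers/CA instead of the Bluemix service), the node app should bind locally.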