Deploying a docker-compose application on Azure Serverless

I have the following application running on a Cloud VM, which is proving not to be economical.
As you can see, it is a collection of different images. Is there a way to run this application on Azure serverless? I understand AKS (Kubernetes) might be one way to go, but the YAML config part might end up becoming too daunting for me. Kindly suggest if there are other approaches.

Related

Is Terraform possible to use for local development?

I would like to use Terraform for the infrastructure of my project and am starting to look into it for local development. I am brand new to it, so apologies if this is a basic thing to know. So far, my research suggests it is more useful when deploying to a cloud provider. Is it possible to use it for local development, such as connecting to local Docker containers, or am I better off waiting until I start setting things up in the cloud?
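It is possible: Terraform has a community Docker provider (kreuzwerker/docker) that manages images and containers against the local Docker daemon. A minimal, hedged sketch (resource names and the exposed port are illustrative):

```hcl
terraform {
  required_providers {
    docker = {
      source = "kreuzwerker/docker"
    }
  }
}

# With no extra configuration, the provider talks to the local Docker daemon.
provider "docker" {}

resource "docker_image" "nginx" {
  name = "nginx:latest"
}

resource "docker_container" "web" {
  name  = "local-web"
  image = docker_image.nginx.image_id
  ports {
    internal = 80
    external = 8080
  }
}
```

`terraform apply` then pulls the image and starts the container locally, and `terraform destroy` cleans it up, so you get the same plan/apply workflow you would later use in the cloud.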

JHipster Registry - why is it mandatory to run in the cloud for production?

I am working on an app that needs to be self-hosted on a Windows 10 PC where all the clients are inside a company network. I am using docker-compose for the various microservices and I am considering JHipster for an API Gateway and Registry service.
As I am reading the documentation, there is a line in the JHipster docs (https://www.jhipster.tech/jhipster-registry/) that says "You can run a JHipster Registry instance in the cloud. This is mandatory in production, but this can also be useful in development". I am not a cloud expert, so I am not sure what is different in the environments that would cause the Registry app issues when running on a local PC. Or perhaps there is a way to give the local Windows PC a 'cloud' environment.
Thanks for any insight you have for me.
I think that this sentence does not make sense.
Of course, you must run a registry if you generated apps with this option but it does not mean that you must run it in the cloud. The same doc shows you many alternative ways to run it.
Running all your microservices on a single PC is a bit strange; it defeats the purpose of a microservice architecture because you have a single point of failure that can't scale. Basically, you are paying for extra complexity without getting the benefits, where a monolith would be much simpler and more productive.
How many developers are working on your app?

Correct container technology on Azure for a long-running service

I want to run a docker container which hosts a server that is going to be long running (e.g. 24x7).
Initially I looked at Azure Container Instances (ACI), and whilst it seems to fit the bill perfectly, I've been advised it is not designed for long-running containers; it can also prove quite expensive to run all the time compared to a basic VM.
So I've been looking at what else I should run this as:
AKS - seems overkill for just one docker container
App Service for Containers - my container doesn't have an HTTP endpoint, so I believe I will have issues with things like health checks
VM - this all seems a bit manual, as I'd really rather not deal with VM maintenance, and I'm also unsure I can use CI/CD techniques to build / spin up and down / do releases on a VM image (we're using Terraform to deploy infra).
Are there any best practice guides on this? I've tried searching but I'm not finding anything relevant, so I'm assuming I'm missing some key term to get going with this!
TIA
ACI is not designed for long-running (uninterrupted) processes; have a look here.
The recommendation is to use AKS, where you can fully manage the lifecycle of your machines, or just use VMs.

ACI deployment issue

I’m unable to deploy machine learning models using ACI.
from azureml.core.model import Model

# ws, service_name, word2vec_model, inf_config and aci_config are
# assumed to be defined earlier in the script
service = Model.deploy(workspace=ws,
                       name=service_name,
                       models=[word2vec_model],
                       inference_config=inf_config,
                       deployment_config=aci_config)
service.wait_for_deployment(show_output=True)
Can you please suggest how I can debug the problem?
Deployments like this typically run into a variety of issues, usually when uploading or downloading. Here are details on deploying to a single-node AKS cluster:
https://learn.microsoft.com/en-us/azure/machine-learning/how-to-deploy-azure-kubernetes-service#create-a-new-aks-cluster
The common failure categories are "User Errors" for image build failures (for example, broken or mismatched dependencies) and "Timeouts", where the ACI container cannot be reached, which could very well be because of the size of the model.
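As a concrete first step (a hedged sketch, where `service` is the `Webservice` object returned by `Model.deploy` above), pulling the container logs with `get_logs()` from the azureml-core SDK usually surfaces the scoring-script or dependency error:

```python
def error_lines(log_text):
    """Pick out the lines that look like errors from a noisy container log."""
    return [line for line in log_text.splitlines()
            if "error" in line.lower() or "traceback" in line.lower()]


def dump_service_logs(service):
    # service is the azureml.core.webservice.Webservice returned by
    # Model.deploy(); get_logs() fetches the container's stdout/stderr.
    logs = service.get_logs()
    print(logs)
    print("--- suspect lines ---")
    for line in error_lines(logs):
        print(line)
```

If the deployment failed before the container ever started (an image build "User Error"), the logs will be empty and the build log link in the portal is the next place to look.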

How is it possible to debug an AWS Lambda function from remote?

We are taking over a whole application from another company. They built the whole deployment pipeline, but we still don't have access to it. What we know is that there is a Lambda function triggered by certain SNS messages, all the code is in Node.js, and development is done in VS Code. We also have issues debugging it locally, but the bigger problem is that we need to debug it remotely.
Since I am new in AWS services, I'd really appreciate if somebody could help me in this.
Is it necessary to open a port? How is it possible to connect to a Lambda? Do we need to set up Serverless? Many unresolved questions.
I don't think there is a way to debug a Lambda function remotely. Your best bet is to download the code to your local machine, set up the same environment variables you have configured on your Lambda function, and take it from there.
Remember, at the end of the day a Lambda is just a container running your code, and AWS doesn't allow SSH or any other connection to those containers. In your case you should be able to debug it locally as long as you have the same environment variables. There are other Lambda-specific things as well, but since you have the running code, you should be able to find the issue.
Hope it makes sense.
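One middle ground, if you are allowed to invoke the function with test events: the Lambda Invoke API can return the tail of the execution log in the same call (`LogType="Tail"`), which is often enough to see where it fails without any pipeline access. A hedged sketch using boto3 (the function name and payload are placeholders):

```python
import base64
import json


def decode_tail_log(log_result_b64):
    """The Invoke API returns the last ~4 KB of execution logs
    base64-encoded in the LogResult field; decode it to readable text."""
    return base64.b64decode(log_result_b64).decode("utf-8")


def invoke_with_logs(function_name, payload):
    # Requires boto3 and AWS credentials with lambda:InvokeFunction.
    import boto3
    client = boto3.client("lambda")
    resp = client.invoke(
        FunctionName=function_name,
        LogType="Tail",  # ask Lambda to return the tail of the log
        Payload=json.dumps(payload).encode("utf-8"),
    )
    print(decode_tail_log(resp["LogResult"]))
    return json.load(resp["Payload"])
```

For an SNS-triggered function you would pass a hand-built SNS-shaped event as the payload, which also doubles as a way to reproduce the trigger on demand.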
Thundra (https://www.thundra.io/aws-lambda-debugger) has live/remote debugging support for AWS Lambda through its native IDE plugins (VSCode and IntelliJ IDEA).
The way AWS has you 'remote' debug is to execute the Lambda locally through Docker while it proxies requests to the cloud for you, using the AWS Toolkit. You have a Lambda running on your local computer via Docker that can access resources in the cloud, such as databases, APIs, etc. You can step through and debug it using editors like VS Code.
I use SAM with a template.yaml. This way, I can pass event data to the handler, reference dependency layers (shared code libraries) and have a deployment manifest to create a CloudFormation stack (release instance with history and resource management).
Debugging can be a bit slow as it compiles, deploys to Docker and invokes, but it allows step-through debugging and variable inspection.
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-using-debugging.html
While far from ideal, any console-printing actions would likely get logged to CloudWatch, which you could then access to go through printed data.
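Since Lambda writes console output to a CloudWatch log group with a fixed naming convention, those prints can also be pulled programmatically. A hedged sketch with boto3 (assumes credentials with permission to read the log group):

```python
def log_group_for(function_name):
    """Lambda writes console output to a log group named after the function."""
    return "/aws/lambda/" + function_name


def recent_lambda_logs(function_name, minutes=15):
    # Requires boto3 and AWS credentials; pulls messages printed in the
    # last `minutes` minutes from the function's CloudWatch log group.
    import time
    import boto3
    logs = boto3.client("logs")
    start = int((time.time() - minutes * 60) * 1000)  # epoch milliseconds
    resp = logs.filter_log_events(
        logGroupName=log_group_for(function_name),
        startTime=start,
    )
    return [e["message"] for e in resp.get("events", [])]
```

This is the quickest way to get a feedback loop going when you can trigger the function (e.g. by publishing an SNS message) but cannot touch the deployment pipeline.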
For local debugging, there are many GitHub projects with Dockerfiles with which you can build a docker container locally, just like AWS does when your Lambda is invoked.