I am running a 3-tier application in Azure: a frontend (React), a backend (Spring Boot), and a managed Azure database.
The backend and frontend each run on Azure App Service using containers.
It all worked fine until we restricted the backend to be accessible only via a private endpoint in a vNet.
The frontend is, of course, connected to the very same vNet via a SWIFT connection (also known as vNet integration).
So far it is all good.
The issue arises when there is a problem with the backend that prevents it from connecting to the database, e.g. because I messed up the connection string. I fixed that issue and restarted the backend with a new version containing the corrected connection string.
But here is the catch: since the backend previously crashed, it is no longer running, and the way to bring up the new version is simply to call the App Service URL (curl https://my-backend.azurewebsites.net). The problem is that this call is not possible, because the backend is behind the private endpoint.
A workaround is to start a VM inside the very same vNet and call the backend like this:
admin#debug:~$ curl -k https://10.0.20.4 -I -H "Host: my-backend.azurewebsites.net"
and this works.
But this is very cumbersome and, in fact, not a real solution at all.
Does anyone have an idea how to make this work?
I am using Terraform, and I have also noticed that when I completely un-deploy the App Service and deploy it again, it boots up again.
Thx
Related
I'm facing an issue with my multi-project solution in .NET Core Web API. I have a gateway API which internally makes calls to different microservices over HTTP.
The gateway API URI is exposed to the outside world with the Azure app name as its domain, but the internal calls from the gateway to the microservices are configured as http://localhost:5001/{apiEndPoint}. This works fine on my local machine, but after deploying it to Azure App Service I'm getting the error below:
PostToServer call URL:'http://localhost:5001/api/authservice/authenticate' with Exception message An attempt was made to access a socket in a way forbidden by its access permissions. (localhost:5001).
Can someone please help me with this? I'm new to Azure and learning on my own, but I could not find any solution for this yet.
PS: After going through some YouTube videos and blogs I got the impression that we have to use AKS, but I'm not confident with it.
Would really appreciate any help on this issue.
The gateway API you deployed to Azure App Service cannot use custom port 5001. Azure App Service only exposes ports 80 and 443 (HTTP|HTTPS).
If you must use multiple ports in your actual project, it is worth checking whether Azure Cloud Services meets your needs, but it is not the best choice.
The Best Practice:
Microservices architecture design
In short, create an Azure gateway service; your other microservices can be deployed anywhere (Azure App Service, a VM, or AKS).
You just need to make sure you can reach your microservices from your internal or public network environment.
If you're just learning, or the app isn't actually used by a lot of users, you can try the following suggestions:
Use SignalR (not Azure SignalR) to replace the WebSocket usage in your current project.
Since you already have an Azure App Service, you can deploy your gateway API application to the App Service, and your other microservices can be deployed as virtual applications within that same App Service.
So I'm trying to deploy my Node.js REST API on Cloud Run, and for the most part it deploys successfully, except that a couple of endpoints fail with either a 404 or 500 error. However, when I run the container locally using docker run -p 8080:8080 <image>, all the endpoints work. The common factor among the failing endpoints seems to be that they access the remote database using the credentials stored in the .env file.
EDIT: I think it is because the database is on a private internal IP, so I'm trying to figure out what I would need to do to handle that.
As mentioned by #guillaume, if your database is on a private internal network, then a Serverless VPC Access connector is necessary. You can use the Connecting from Cloud Run to Cloud SQL documentation as a guide, whether you stick to the private IP only or decide to use a public IP, which does not require a VPC connector.
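For what it's worth, a minimal sketch of how that connection setup might look in Node, assuming a Postgres database and the pg client (the variable names below are placeholders, not from the original post): locally the values can come from a .env file via dotenv, while on Cloud Run the same values are typically supplied as environment variables on the service, with the host pointing at the private IP reached through the VPC connector.

// Sketch only (assumed setup): read DB settings from the environment.
import 'dotenv/config';            // loads a local .env if present; harmless in production
import { Pool } from 'pg';

const pool = new Pool({
  host: process.env.DB_HOST,       // e.g. the database's private IP (placeholder)
  port: Number(process.env.DB_PORT ?? '5432'),
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_NAME,
});

// Example endpoint logic that would fail if the host is unreachable from Cloud Run.
export async function getUsers() {
  const result = await pool.query('SELECT * FROM users');   // hypothetical table
  return result.rows;
}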
I am in a Kubernetes cluster with two services running. One of the services exposes an endpoint like /customer/servcie-endpoint, and the other service is a Node.js application which is trying to access data from the first service. Axios doesn't work as-is, since it needs a host to work with.
If I open a shell with kubectl exec and run curl /customer/servcie-endpoint, I receive all the data.
I am not sure how to get this data in a Node.js application. Sorry for the naive question!
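For illustration, a sketch of what giving axios a host can look like from inside the cluster, assuming the first service is exposed by a Kubernetes Service named customer-service in the default namespace (the Service name and port are placeholders, not from the original post):

import axios from 'axios';

// Inside the cluster, the other workload is reachable via its Service DNS name,
// so axios gets a full URL instead of a bare path. "customer-service" and port 80
// are assumptions; use the actual name/port shown by `kubectl get svc`.
const client = axios.create({
  baseURL: 'http://customer-service.default.svc.cluster.local:80',
});

async function fetchCustomerData() {
  const response = await client.get('/customer/servcie-endpoint');  // path from the post
  return response.data;
}

fetchCustomerData()
  .then((data) => console.log(data))
  .catch((err) => console.error('request failed:', err.message));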
I'm trying to build an API gateway for an app in development using AWS. I followed the steps in the doc https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-create-api-step-by-step.html.
However, when I set my endpoint to 127.0.0.1:3000/users, for example, it returns a 500 {"message": "Internal server error"}.
The app is a simple Node.js endpoint run in Docker. I'm just trying to learn about API gateways.
I'm guessing the error is that the endpoint I provide is not valid because I'm using it locally. So is there a way to use AWS API Gateway locally, and is it the best option for an API gateway?
The doc you mention doesn't describe any method for deploying and running an API gateway locally, nor am I aware of any way to do this; I'm only aware of running API Gateway in AWS. The problem may be that you are trying to hit an API Gateway endpoint on your local machine, which is not possible.
Perhaps I misunderstand and you're trying to integrate an API Gateway in AWS with a service running locally. If that is the case, API Gateway will not be able to use an integration endpoint on your local machine unless you expose your machine to the public internet AND provide API Gateway with a public internet address for it. 127.0.0.1 is not a public internet address.
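To make the point concrete, here is a sketch of the kind of local Node endpoint described in the question (Express and the handler body are assumptions; the /users route and port 3000 come from the post). API Gateway can only use it as an HTTP integration target if the server is reachable at a public address, not at 127.0.0.1:

import express from 'express';

const app = express();

// The /users route and port 3000 mirror the question; Express itself is an assumption.
app.get('/users', (_req, res) => {
  res.json([{ id: 1, name: 'example user' }]);   // placeholder data
});

// Binding to 0.0.0.0 makes the server reachable from outside the container,
// but API Gateway still needs a public hostname or IP in the integration URI;
// 127.0.0.1 is a loopback address and never points at your machine from AWS's side.
app.listen(3000, '0.0.0.0', () => {
  console.log('listening on port 3000');
});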
I'm taking my first foray into Azure Service Fabric using a cluster hosted in Azure. I've successfully deployed my cluster via ARM template, which includes the cluster manager resource, VMs for hosting Service Fabric, a load balancer, an IP address and several storage accounts. I've successfully configured the certificate for the management interface, and I've written and deployed an application to my cluster. However, when I try to connect to my API via Postman (or even via a browser, e.g. Chrome), the connection invariably times out and does not get a response. I've double-checked all of my settings for the load balancer, and traffic should be getting through, since my load-balancing rules use the same front-end and back-end port, which is the port my API uses in Service Fabric. Can anyone provide me with some tips for how to troubleshoot this situation and find out where exactly the connection problem lies?
To clarify, I've examined the documentation here, here and here
Have you tried logging in to one of your Service Fabric nodes via Remote Desktop and calling your API directly from the VM? I have found that if I can confirm it's working directly on a node, the issue likely lies with the LB or potentially an NSG.