An attempt was made to access a socket in a way forbidden by its access permissions. (localhost:5001) - azure

I'm facing an issue with my multi-project solution in .NET Core Web API. I have a gateway API which internally makes calls to different microservices over HTTP.
The gateway API URI is exposed to the outside world, with the Azure app name as its domain, but the internal calls from the gateway to the microservices are configured as http://localhost:5001/{apiEndPoint}. This works fine on my local machine, but after deploying to Azure App Service I'm getting the error below:
PostToServer call URL: 'http://localhost:5001/api/authservice/authenticate' with exception message: An attempt was made to access a socket in a way forbidden by its access permissions. (localhost:5001).
Can someone please help me with this? I'm new to Azure and learning on my own, but I could not find any solution for this yet.
PS: After going through some YouTube videos and blogs I got the impression that AKS is needed for this, but I'm not confident with it.
Would really appreciate any help on this issue.

The Gateway API you deployed to Azure App Service cannot call a custom port such as 5001. Azure App Service only supports ports 80 and 443 (HTTP/HTTPS).
If you must use multiple ports in your actual project, it is worth checking whether Azure Cloud Services meets your needs, but it is not the best choice.
The best practice:
Microservices architecture design
In short, create an Azure gateway service; your other microservices can be deployed anywhere (Azure App Service, VMs, or AKS).
You just need to make sure you can reach your microservices from your internal or public network environment.
If you're just learning, or the app isn't actually used by a lot of users, you can try the following suggestions:
Use SignalR (not Azure SignalR) to replace the WebSockets in your current project.
Since you already have an Azure App Service, you can deploy your Gateway API application to it, and your other microservices can be deployed as Virtual Applications within that App Service.
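Whichever layout you choose, the key change is to stop hard-coding http://localhost:5001 in the gateway and read each microservice's base address from configuration, so the value can differ between local development and Azure. Below is a minimal sketch only, not the poster's actual code: the configuration key Services:Auth, the route, and the URLs are illustrative placeholders.

```csharp
// Program.cs (gateway) — minimal sketch.
// "Services:Auth" is a hypothetical configuration key: set it to
// http://localhost:5001 for local runs and to the deployed microservice's
// reachable URL (e.g. https://myapp-auth.azurewebsites.net) in Azure.
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddHttpClient("auth", client =>
{
    client.BaseAddress = new Uri(builder.Configuration["Services:Auth"]!);
});

var app = builder.Build();

app.MapPost("/api/authservice/authenticate", async (IHttpClientFactory factory, HttpRequest request) =>
{
    var client = factory.CreateClient("auth");
    // Forward the incoming request body to the auth microservice.
    var response = await client.PostAsync("api/authservice/authenticate",
        new StreamContent(request.Body));
    return Results.StatusCode((int)response.StatusCode);
});

app.Run();
```

In App Service you can then override the value per environment under Configuration > Application settings (using __ as the separator for nested keys, e.g. Services__Auth).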

Related

Google cloud run give me 403 since other app of the same project

I have the frontend and backend on Cloud Run, each with its own service, but when I set "internal traffic" on the backend API it doesn't work: it returns 403 to the frontend, even though the frontend is another service of the same project, and the documentation says that internal means "only for the same project", so...
two services from the same project are not internal traffic?
I think that is because I use a custom domain and not the exact URL of the service, but I am not sure, because it says here that custom domains are allowed too.
So what do I have to do to authenticate my frontend service on Cloud Run?
I tried with JWT auth, but is there a better option?
Cloud Run services set to internal only accept traffic coming from the VPC network. In order to connect to a Cloud Run service that's serving internal traffic, the connecting service must be attached to a VPC connector. In this case, you need to set up a Serverless VPC Access connector as mentioned in this note:
For requests from other Cloud Run services or from Cloud Functions in the same project, connect the service or function to a VPC network and route all egress through the connector, as described in Connecting to a VPC network. Note that the IAM invoker permission is still enforced.
For authenticating service-to-service calls, you can simply fetch an ID token from the Compute metadata server. You can do that on any GCP compute environment (Cloud Run, App Engine, Compute Engine, etc.). You can follow the steps provided in this documentation.
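As an illustration only (not from the answer above), fetching the token from the metadata server and calling the internal backend could look roughly like this; the backend URL and path are placeholders for your own service:

```csharp
// Minimal sketch (C# top-level statements): get an ID token from the metadata
// server, then call the backend with it. URLs below are placeholders.
using System;
using System.Net.Http;
using System.Net.Http.Headers;

const string backendUrl = "https://backend-xyz-uc.a.run.app"; // placeholder audience

using var http = new HttpClient();

// 1. Ask the metadata server for an ID token whose audience is the backend URL.
var tokenRequest = new HttpRequestMessage(HttpMethod.Get,
    "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity"
    + "?audience=" + backendUrl);
tokenRequest.Headers.Add("Metadata-Flavor", "Google");
var tokenResponse = await http.SendAsync(tokenRequest);
var idToken = await tokenResponse.Content.ReadAsStringAsync();

// 2. Call the backend with the token as a Bearer credential.
var apiRequest = new HttpRequestMessage(HttpMethod.Get, backendUrl + "/api/data"); // placeholder path
apiRequest.Headers.Authorization = new AuthenticationHeaderValue("Bearer", idToken);
var apiResponse = await http.SendAsync(apiRequest);
Console.WriteLine(apiResponse.StatusCode);
```

The same flow works from any GCP compute environment, as noted above; the calling service account still needs the Cloud Run Invoker role on the backend.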
two services from the same project are not internal traffic?
Two services in the same project should be considered as internal traffic.
I believe what you need to do is follow the authentication steps with token as recommended here (service to service authentication):
https://cloud.google.com/run/docs/authenticating/service-to-service
https://cloud.google.com/run/docs/securing/service-identity#per-service-identity
Please note that even though you've set the ingress traffic to internal, the Cloud Run Invoker IAM role is still needed for the calling service account.

How to deploy a C# REST API to an Azure Virtual Machine

I have a REST API developed with Entity Framework Core 3.1 in C#. I need to deploy the application to a virtual machine in Azure, but it does not work. Most of the tutorials I have followed cover how to create the virtual machine and publish a simple web application. Any guide, help, or tutorial?
Generally the error is a 500 (Internal Server Error), along with problems in the web.config.
You need to make sure that external requests can land and be processed by the web server (typically IIS) running inside the VM. For that you need to open firewall ports to allow inbound traffic within the VM, as well as through the network interface (found on the Networking tab) of the VM in the portal.
An API is technically deployed as part of a web application. Hence the following links would help.
Link 1
Link 2 (Note: Video has no voice)
That being said, deploying your API as an App Service in Azure (PaaS) is a much better approach than using VMs (unless your API has specific requirements that mean it must be deployed on a VM). App Service also makes setting up other associated services, e.g. logging and monitoring, authentication, etc., much easier.

Azure - Connecting multiple app service containers with custom domain and ssl

I am getting to the point of my project where I am ready to deploy it online with my custom domain via Azure once I make the upgrade from my Free Subscription.
So a little context: I have 1 web app service and 4 API services, and each one is hosted in a separate App Service, such as:
www.sitename.azurewebsites.net
www.sitename-api1.azurewebsites.net
www.sitename-api2.azurewebsites.net
www.sitename-api3.azurewebsites.net
www.sitename-api4.azurewebsites.net
And the above web app communicates with all 4 APIs, and some APIs may or may not talk to one another. (I would have loved an application gateway, so hopefully I'll be changing this architecture later down the road.)
So as I get ready to associate my domain with the services, the web container seems pretty straightforward to me, as it just becomes www.sitename.com, but I am a little confused about the API services. The way I am thinking about this is that each API service will be in its own subdomain, such as:
www.api1.sitename.net
www.api2.sitename.net
www.api3.sitename.net
www.api4.sitename.net
where I believe I can register my SSL certificate and domain with each App Service somehow, but this leaves me with a few questions.
Do I host each API in a subdomain using the same domain as the web app, or is there a preferred alternative, such as hosting them all on the same domain with a different exposed port per API (with the web app listening on 80/443), or maybe just using the IP address of each API app service and allowing www.sitename.com as the origin for CORS?
I am assuming that since I am associating my SSL cert with the web service, I will need to do the same on the API services?
Would it be better (and still affordable) if I just had a VNet associated with the app services and the domain only registered with the web app?
Any insight on how I can establish communication between my app services with my custom domain and SSL would be greatly appreciated, as I am fairly new to this part of the stack but excited about learning!
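If the web app calls the APIs directly from the browser on a different hostname (the subdomain layout above), each API also needs CORS configured for the web app's domain. A minimal sketch, assuming the APIs are ASP.NET Core apps; the policy name is arbitrary and the origin is this question's example domain:

```csharp
// Program.cs on one of the API App Services — minimal sketch.
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();
builder.Services.AddCors(options =>
{
    options.AddPolicy("WebApp", policy =>
        policy.WithOrigins("https://www.sitename.com") // the web front end's custom domain
              .AllowAnyHeader()
              .AllowAnyMethod());
});

var app = builder.Build();

app.UseCors("WebApp");
app.MapControllers();
app.Run();
```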
As far as I know, on Azure there are two services that can help manage APIs deployed across multiple App Service containers: API Management and Application Gateway.
The Premium tier of API Management supports multiple custom domain names; please see the official document Feature-based comparison of the Azure API Management tiers.
You can refer to the quickstart tutorial Create a new Azure API Management service instance and other related documents to learn how.
"Azure Application Gateway is a web traffic load balancer that enables you to manage traffic to your web applications," as said in the introduction What is Azure Application Gateway?. As described there: "With Application Gateway, you can make routing decisions based on additional attributes of an HTTP request, such as URI path or host headers. For example, you can route traffic based on the incoming URL. So if /images is in the incoming URL, you can route traffic to a specific set of servers (known as a pool) configured for images. If /video is in the URL, that traffic is routed to another pool that's optimized for videos."
I recommend Azure Application Gateway, which is a good choice for managing multiple App Services and exposing unified API URLs.

Getting a BlueMix app through a firewall

I am currently working on a Java back-end server that I am deploying to Bluemix. It is a standard web app, built with Maven, that hits a mounted database. Standard stuff.
The issue is that two of the endpoints I am using hit services that sit on networks with their own firewalls.
Now, if I deploy the application to a server that is punched through those firewalls, all is well. However, on Bluemix, where I am not punched through these firewalls, I often get 401 errors.
Does anyone know of a way to pass these credentials when doing a POST or GET so that I can authenticate through the firewall, and then authenticate to the service?
Thank you all.
You can use one of the following Bluemix services to connect your application running on Bluemix to your on-premises application/database behind the firewall:
Secure Gateway
Cloud Integration
With Secure Gateway you can create a secure tunnel between your Bluemix application and your on-premises application. The official documentation is available here, but there is also an excellent article at the link below to get started with this service:
https://developer.ibm.com/bluemix/2015/03/27/bluemix-secure-gateway-yes-can-get/
Alternatively the Cloud Integration service documentation is available here.

Azure Service Bus Relay and node.js

We've been writing services to access our on-premises databases through Azure Service Bus Relay for a while now. That means we've had to deploy them as WCF services. Our website development is moving to Node.js, and I would like to begin deploying our API services on Node as well. However, while the Azure npm package has good support for queues/topics on Azure Service Bus, I can find no mention of the relay capabilities. I've had a look at the code for the Azure SDK on GitHub, but again, relay seems to be conspicuously absent.
Is it possible to use Azure Service Bus Relay with a node.js backend?
Azure now supports Node.js. You can find the information here. This link points to the samples for Node.js.
Right now, Relay only supports WCF services. You can try Clemens Vasters' post on Port Bridge to get your scenario working. In his post, he describes creating a WCF client/service pair that forwards requests to a specific port.
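For reference, the WCF side of a relay listener looks roughly like the sketch below, using the Microsoft.ServiceBus (WindowsAzure.ServiceBus) package; the namespace, path, key name, and key are placeholders. A Node.js process would then have to reach it indirectly, e.g. via the Port Bridge approach mentioned above, since the Node SDK at the time had no relay support.

```csharp
// Minimal sketch of a WCF relay listener. All names/keys are placeholders.
using System;
using System.ServiceModel;
using Microsoft.ServiceBus;

[ServiceContract]
public interface IEchoService
{
    [OperationContract]
    string Echo(string text);
}

public class EchoService : IEchoService
{
    public string Echo(string text) => text;
}

public static class Program
{
    public static void Main()
    {
        // Relay address: sb://<your-namespace>.servicebus.windows.net/echo
        var address = ServiceBusEnvironment.CreateServiceUri("sb", "your-namespace", "echo");

        var host = new ServiceHost(typeof(EchoService), address);
        var endpoint = host.AddServiceEndpoint(typeof(IEchoService), new NetTcpRelayBinding(), address);

        // Authenticate the listener against the relay with a SAS token.
        endpoint.Behaviors.Add(new TransportClientEndpointBehavior
        {
            TokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider("keyName", "key")
        });

        host.Open();
        Console.WriteLine("Listening on the relay. Press Enter to exit.");
        Console.ReadLine();
        host.Close();
    }
}
```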

Resources