Get the IP address of a website so it can be provided to an AWS security group - linux

I have developed an application which is hosted on an iPaaS provider. This application has to make a REST API call to a service which is running inside an AWS EC2 instance.
Please note that the application is not deployed to AWS. To make the REST call succeed, I have to grant my cloud provider access on the AWS side. In other words, my application (hosted on another cloud) needs AWS (where the target service is hosted) to allow its requests through the security group. But AWS doesn't provide an option to enter a URL there, only IP-based rules. How can I make this work?

You should look up the documentation of your cloud provider.
They have almost certainly published the public IP of the machine made available to you.
Another way of solving your problem is the ping command: if you ping the URL of your cloud provider, it will show you an IP address.
One issue you may run into here is that, depending on the size of your cloud provider, more than one machine may be serving your application,
so the IP returned by ping and the IP of the machine where your app is actually running may differ, and your purpose may not be fulfilled.
In that case you can use network masks (the same CIDR notation you use when granting access to an IP in AWS security groups)
and try granting access to a supernet that covers the provider's range.
You can also explore standard tools like tracert, traceroute and nslookup.
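As a rough sketch of that idea: the snippet below resolves a host name to its address(es) and widens one of them into the CIDR/supernet notation that security group rules accept. The host name and the /24 mask are placeholders, not values from your provider.

```python
import ipaddress
import socket

# Placeholder: replace with the host name your iPaaS provider gives you
host = "myapp.example-ipaas.com"

# Resolve the name; a provider may return several addresses
_, _, addresses = socket.gethostbyname_ex(host)
print("Resolved addresses:", addresses)

# Security group rules take CIDR ranges, so you can allow a whole supernet
# (here the /24 around the first address - adjust to what the provider documents)
supernet = ipaddress.ip_network(addresses[0] + "/24", strict=False)
print("Candidate security-group source range:", supernet)
```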

Related

Google Cloud Run gives me a 403 from another app in the same project

I have the frontend and backend on Cloud Run, each with its own service, but when I set the backend API's ingress to "internal traffic", it stops working: the frontend gets a 403, even though it is another service in the same project, and the documentation says internal means "only from the same project", so...
Are two services in the same project not internal traffic?
I think it might be because I use a custom domain and not the exact URL of the service, but I'm not sure, because the docs say custom domains are allowed too.
So what do I have to do to authenticate my frontend service on Cloud Run?
I tried JWT auth, but is there a better option?
Cloud Run services set to internal only accept traffic coming from the VPC network. In order to connect to a Cloud Run service that is serving internal traffic, the connecting service must be attached to a VPC connector. In this case, you need to set up a Serverless VPC Access connector, as mentioned in this note:
For requests from other Cloud Run services or from Cloud Functions in the same project, connect the service or function to a VPC network and route all egress through the connector, as described in Connecting to a VPC network. Note that the IAM invoker permission is still enforced.
For service-to-service authentication, you can simply fetch an ID token from the Compute metadata server. You can do that on any GCP compute environment (Cloud Run, App Engine, Compute Engine, etc.). You can follow the steps provided in this documentation.
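A minimal sketch of that metadata-server approach, assuming the requests library is available; the audience URL is a placeholder and must be the URL of the Cloud Run service you are calling:

```python
import requests

METADATA_URL = ("http://metadata.google.internal/computeMetadata/v1/"
                "instance/service-accounts/default/identity")
# Placeholder: the URL of the internal Cloud Run service being called
AUDIENCE = "https://backend-abcdef-uc.a.run.app"

def fetch_id_token(audience: str) -> str:
    # The metadata server mints an ID token for the calling service's identity;
    # the Metadata-Flavor header is required.
    resp = requests.get(
        METADATA_URL,
        params={"audience": audience},
        headers={"Metadata-Flavor": "Google"},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.text

# Call the internal service with the token in the Authorization header
token = fetch_id_token(AUDIENCE)
response = requests.get(AUDIENCE + "/api/items",
                        headers={"Authorization": f"Bearer {token}"})
print(response.status_code)
```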
Are two services in the same project not internal traffic?
Two services in the same project should be considered internal traffic.
I believe what you need to do is follow the token-based authentication steps recommended here (service-to-service authentication):
https://cloud.google.com/run/docs/authenticating/service-to-service
https://cloud.google.com/run/docs/securing/service-identity#per-service-identity
Please note that even though you've set the ingress traffic to internal, the Cloud Run Invoker IAM role is still needed for the calling service account.

Schedule a Python code execution from a static IP (Azure architecture)

I have Python code that runs on a virtual machine with a reserved static IP and receives data from a web service every minute.
Since we are working with an Azure architecture, I built the virtual machine with this IP and installed Python to schedule and execute the code. After a month, I realized that this method is not very reliable: I can't monitor the errors that appear, sometimes the machine restarts, etc.
Is there another way to build a similar solution, keeping in mind that I can only use this static IP to receive the data from the web service?
I tried to check Azure Functions and Azure Automation runbooks, but there is no way to run them from the cloud behind the reserved IP.
I'm curious why it needs to be a static IP that you send data to and not a host name (e.g. myfunction.azurewebsites.net). You may well be right, but it seems like a tough requirement. Another option would be spinning up Azure API Management in front of an Azure Function. API Management will give you a static IP and can then proxy the request to the function, but adding API Management comes at a cost.
The other option is likely AKS, as Ami mentioned, which should do a little better at orchestrating and keeping the container available while giving you a static IP, but it may be a bit overkill depending on what the code is doing.
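For illustration, a rough sketch of what the scheduled piece could look like if it moved into a timer-triggered Azure Function (Python v2 programming model); the endpoint and function name are placeholders, not values from your setup:

```python
import logging

import azure.functions as func
import requests

app = func.FunctionApp()

# NCRONTAB expression: run at second 0 of every minute
@app.timer_trigger(schedule="0 * * * * *", arg_name="timer", run_on_startup=False)
def pull_data(timer: func.TimerRequest) -> None:
    # Placeholder endpoint - the web service the original VM polled
    resp = requests.get("https://example.com/data-feed", timeout=30)
    resp.raise_for_status()
    # Errors surface in the Functions logs / Application Insights
    # instead of failing silently on a VM.
    logging.info("Fetched %d bytes", len(resp.content))
```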

Azure API Management resolving internal URL

We've implemented a setup as follows:
An App Service Environment with different app services exposing different APIs. This instance is configured as an internal instance, so there is no public access.
We've configured an internal (private) DNS zone. This zone is used to create internal URLs for the APIs.
An API Management instance which is exposed to the outside. Here the APIs need to be registered using the Swagger files exposed by the APIs themselves.
Everything is contained within the same VNet.
Now what we see is two things:
From a VM inside the VNet, I can browse the URL of the API without any issue and download the Swagger file.
When we try to register the API within API management, it throws an error stating the file could not be downloaded. When we register manually and then try to call the API, we get a DNS resolve error.
So it seems as if the API Management instance is not able to resolve our custom DNS zone as set up in Azure. I could not find any information that tells me whether this scenario is supported or not. Any pointers that might help find the problem are very welcome indeed.
Update: when we register the API by uploading a file and then try to call one of the API methods, the following error appears:
The remote name could not be resolved
The same address resolves just fine from a VM within the exact same VNet.
I had the same issue; when I looked online, the solution below looked promising. It is fairly self-explanatory: a DNS forwarder needs to be enabled between the VNets. More information is here:
https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server

Azure management API without certificate

I have a Java application running on Azure inside a VM. Let's say I have 10 VMs like this one.
I'd like to get details about the other virtual machines running within the same cloud service.
I managed to do that using certificates, but I don't like this solution, as each VM stores a certificate locally which could be stolen if an attacker gains access to the machine.
I would love not to have to provide such a certificate, or any credentials at all, given that my VM is running inside the Azure platform.
I can do exactly the same with Google Compute Engine. If you want to access management APIs on GCE, you don't need to provide anything. I guess that as soon as you run a GET request to http://metadata/computeMetadata/v1/instance/service-accounts/, the metadata service knows exactly "who" is making the request, based on the IP address or something similar.
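For reference, the GCE call I'm describing is just an unauthenticated GET against the instance metadata endpoint; the only thing it needs is the Metadata-Flavor header:

```python
import requests

# Works from inside a GCE instance; no credentials needed,
# only the Metadata-Flavor header is required.
url = "http://metadata/computeMetadata/v1/instance/service-accounts/"
resp = requests.get(url, headers={"Metadata-Flavor": "Google"})
print(resp.text)
```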
Is it doable on Azure?
Thanks!

Cloud Foundry - Discovering a backend application without a public route

I'd like to implement a microservice architecture on CF (run.pivotal.io) and I'm having problems creating my private backend services.
As I see it, I have two options at deployment: with a route and without a route.
With a route my services become public, which is fine for my public site and my public REST API, but I don't want it for my backend services.
Without a route I don't see how I should do service discovery.
What I have found already:
Use the VCAP_APPLICATION env variable and create my own service discovery (or use something like Eureka) based on that. Does this always give me a valid IP:PORT? No matter which DEA my app is running on, is it reachable on this IP:PORT by other apps on other DEAs?
Register my backend app as a service and bind it, then use VCAP_SERVICES. I'd like to do this, but I've only found documentation about registering services outside CF. Is there a simple way to bind my own app as a service?
So what would be really nice is to be able to mark an app as private but still assign a host and domain to it, so that (only) my other apps could call it through the CF load balancers while it stays protected from the public.
Answers inline...
As I see it, I have two options at deployment: with a route and without a route.
This depends on the Cloud Foundry installation and how it's configured. On PWS, you cannot talk directly between application instances. It's a security restriction. You have to go through the router.
With a route my services become public, which is fine for my public site and my public REST API, but I don't want it for my backend services.
The best you can do here is to add application-level (or container-level, if you prefer) security to prevent unauthorized access.
If you don't want to do password-based authentication, you could do IP-based filtering. On PWS, we just added a Statica service. You can use that to send your outbound traffic through a proxy which will assign a static IP to that traffic. You could then restrict access to your app to only the Statica IPs.
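A minimal sketch of that kind of IP filter at the application level (Flask here; the allowlisted addresses are placeholders for whatever static IPs the Statica proxy assigns you):

```python
from flask import Flask, abort, request

app = Flask(__name__)

# Placeholders: the static egress IPs your Statica proxy gives you
ALLOWED_IPS = {"203.0.113.10", "203.0.113.11"}

@app.before_request
def restrict_to_allowed_ips():
    # The CF router appends the connecting peer's address as the last
    # X-Forwarded-For entry, so check that one against the allowlist.
    forwarded = request.headers.get("X-Forwarded-For", "")
    client_ip = forwarded.split(",")[-1].strip() if forwarded else (request.remote_addr or "")
    if client_ip not in ALLOWED_IPS:
        abort(403)

@app.route("/internal/status")
def status():
    return {"status": "ok"}
```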
Without a route I don't see how I should do service discovery.
If you remove the route, you can't send traffic to the app.
Use the VCAP_APPLICATION env variable and create my own service discovery (or use something like Eureka) based on that. Does this always give me a valid IP:PORT? No matter which DEA my app is running on, is it reachable on this IP:PORT by other apps on other DEAs?
You'd probably need to use this enhancement. It was added to support this type of deployment. However, this will only work on Cloud Foundry installations where the networking restrictions between application instances have been relaxed. Normally you cannot talk directly between instances.
Register my backend app as a service and bind it, then use VCAP_SERVICES. I'd like to do this, but I've only found documentation about registering services outside CF. Is there a simple way to bind my own app as a service?
You can create a "user-provided" service. Look at the cf cups command. It lets you create a service with an arbitrary set of parameters and data. This could contain the URLs for your services. Once you create the service, you can bind it to any number of apps.
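A rough sketch of reading that from a bound app, assuming a hypothetical user-provided service named backend-api created with cf cups:

```python
import json
import os

# Cloud Foundry injects VCAP_SERVICES into every bound app's environment.
# "backend-api" is a hypothetical user-provided service created with, e.g.:
#   cf cups backend-api -p '{"url": "https://backend.example.com"}'
vcap = json.loads(os.environ["VCAP_SERVICES"])
backend = next(s for s in vcap.get("user-provided", [])
               if s["name"] == "backend-api")
backend_url = backend["credentials"]["url"]
print("Backend URL:", backend_url)
```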
