Why does AWS API Gateway not support VPCs?

I have just read the following article.
And I really don't get why AWS API Gateway doesn't support VPCs out of the box, so that we have to proxy all requests through a Lambda function.
Does anyone have an idea why that is?

Though I never found any official AWS statement on this matter, I strongly believe that accessing private resources (VPCs, subnets) from an always-public entity (as API Gateway is) would require much more effort (and testing) with regard to product security.
I don't believe their plan is to keep it like this forever, though. From this same article you linked, they state (my emphasis):
Today, Amazon API Gateway cannot directly integrate with endpoints
that live within a VPC without internet access.
My guess is that "tomorrow" API Gateway access to private resources will exist and, yes, our lives will be easier (and cheaper, btw).
At the end of the day, and assuming my guess is right, I believe it was the right decision: launch a useful (but more limited) version first and learn from it.
EDIT: Since November 2017, API Gateway integrates with private VPCs. https://aws.amazon.com/pt/about-aws/whats-new/2017/11/amazon-api-gateway-supports-endpoint-integrations-with-private-vpcs/
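For a concrete picture of what that integration looks like against the API, here is a rough sketch in Go (AWS SDK for Go v1): you create a VPC Link targeting a Network Load Balancer inside the VPC and then point an API method at it through that link. Every ID, name, ARN and URI below is a placeholder, not a value from the announcement, and in practice you also have to wait for the VPC Link to become AVAILABLE before using it.

package main

import (
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/apigateway"
)

func main() {
    sess := session.Must(session.NewSession())
    apigw := apigateway.New(sess)

    // 1. Create a VPC Link that targets a Network Load Balancer inside the VPC.
    link, err := apigw.CreateVpcLink(&apigateway.CreateVpcLinkInput{
        Name:       aws.String("my-private-vpc-link"),
        TargetArns: aws.StringSlice([]string{"arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/net/my-nlb/abcdef"}),
    })
    if err != nil {
        log.Fatal(err)
    }

    // 2. Point an existing API method at the private endpoint through that link.
    _, err = apigw.PutIntegration(&apigateway.PutIntegrationInput{
        RestApiId:             aws.String("abc123"),
        ResourceId:            aws.String("res456"),
        HttpMethod:            aws.String("ANY"),
        Type:                  aws.String("HTTP_PROXY"),
        IntegrationHttpMethod: aws.String("ANY"),
        Uri:                   aws.String("http://internal-service.example.local/"),
        ConnectionType:        aws.String("VPC_LINK"),
        ConnectionId:          link.Id,
    })
    if err != nil {
        log.Fatal(err)
    }
}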

Related

Isolate AWS lambda function

In a hobby side project I am creating an online game where users can play a card game by implementing strategies. A user can submit their code and it will play against other users' strategies. Once the user has submitted their code, the code needs to be run on the server side.
I decided that I want to isolate execution of the user-submitted code in an AWS Lambda function. I want to prevent the code from stealing my AWS credentials, mining cryptocurrency, and doing other harmful activity.
My plan is to do the following:
Limit code execution time
Prevent any communication to the internet & internal services (except through the return value).
Have a review process in place which prevents execution of user-submitted code before it is considered harmless
Now I need your advice on how to achieve the best isolation:
How do I configure my function, so that it has no internet access?
How do I configure my function, so that it has no access to my internal services?
Do you see any other possible attack vector?
How do I configure my function, so that it has no internet access?
Launch the function into an isolated private subnet within a VPC.
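As a rough sketch of what that looks like against the API (in Go, AWS SDK for Go v1), assuming the function, subnet and security group already exist; the function name and IDs are placeholders:

package main

import (
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/lambda"
)

func main() {
    sess := session.Must(session.NewSession())
    svc := lambda.New(sess)

    // Attach the function to an isolated private subnet (no route to an internet
    // gateway or NAT gateway), so the code has no path to the internet.
    _, err := svc.UpdateFunctionConfiguration(&lambda.UpdateFunctionConfigurationInput{
        FunctionName: aws.String("user-strategy-runner"), // placeholder
        VpcConfig: &lambda.VpcConfig{
            SubnetIds:        aws.StringSlice([]string{"subnet-0123456789abcdef0"}),
            SecurityGroupIds: aws.StringSlice([]string{"sg-0123456789abcdef0"}),
        },
    })
    if err != nil {
        log.Fatal(err)
    }
}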
How do I configure my function, so that it has no access to my internal services?
By launching the function inside the isolated private subnet, you can control which services it can reach via security groups, the route table attached to that subnet, and AWS network ACLs.
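To make the security-group part concrete, here is a hedged sketch (Go, AWS SDK for Go v1) that creates a dedicated group and removes its default allow-all egress rule, so nothing is reachable until you explicitly add rules; the VPC ID and names are placeholders:

package main

import (
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
    sess := session.Must(session.NewSession())
    svc := ec2.New(sess)

    // A dedicated security group for the sandboxed Lambda function.
    sg, err := svc.CreateSecurityGroup(&ec2.CreateSecurityGroupInput{
        GroupName:   aws.String("lambda-sandbox-sg"),
        Description: aws.String("No egress by default; add rules only for allowed services"),
        VpcId:       aws.String("vpc-0123456789abcdef0"), // placeholder
    })
    if err != nil {
        log.Fatal(err)
    }

    // New security groups allow all outbound traffic by default; revoke that rule
    // so the function cannot reach anything until a rule explicitly allows it.
    _, err = svc.RevokeSecurityGroupEgress(&ec2.RevokeSecurityGroupEgressInput{
        GroupId: sg.GroupId,
        IpPermissions: []*ec2.IpPermission{{
            IpProtocol: aws.String("-1"),
            IpRanges:   []*ec2.IpRange{{CidrIp: aws.String("0.0.0.0/0")}},
        }},
    })
    if err != nil {
        log.Fatal(err)
    }
}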
Do you see any other possible attack vector?
There could be multiple attack vectors here:
I would try to answer from the perspective of securing the AWS services themselves. The most important thing is to set up AWS billing alerts (see the sketch just after this list), so that if there is some trouble you at least get notified and can take the necessary action; I am also assuming you already have MFA set up for your logins.
Make sure you configure your Lambda function with a least-privilege IAM role.
Create a completely separate subnet dedicated to launching the Lambda function.
Create a dedicated security group for the Lambda function and use it to control the function's access to the other services in your solution.
Have a separate route table for that subnet in which you allow only the selected services, and be very specific with the corresponding IP addresses as well.
Use network ACLs to restrict the outgoing traffic from the subnet as an additional layer of defence.
Enable VPC Flow Logs, have the necessary Athena queries and analysis in place, and add alerts using Amazon CloudWatch.
The list can be very long when you want to secure this deployment fully in AWS; I have added just a few items.
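As an illustration of the billing-alert item above, here is a sketch in Go (AWS SDK for Go v1) of a CloudWatch alarm on the EstimatedCharges metric. The alarm name, threshold and SNS topic ARN are placeholders; note that billing metrics have to be enabled in the account and are only published in us-east-1.

package main

import (
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/cloudwatch"
)

func main() {
    // Billing metrics live in us-east-1 regardless of where your resources run.
    sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
    cw := cloudwatch.New(sess)

    _, err := cw.PutMetricAlarm(&cloudwatch.PutMetricAlarmInput{
        AlarmName:          aws.String("monthly-bill-above-20-usd"), // placeholder
        Namespace:          aws.String("AWS/Billing"),
        MetricName:         aws.String("EstimatedCharges"),
        Dimensions:         []*cloudwatch.Dimension{{Name: aws.String("Currency"), Value: aws.String("USD")}},
        Statistic:          aws.String("Maximum"),
        Period:             aws.Int64(21600), // evaluate the estimated charges every 6 hours
        EvaluationPeriods:  aws.Int64(1),
        Threshold:          aws.Float64(20),
        ComparisonOperator: aws.String("GreaterThanThreshold"),
        // Placeholder SNS topic that notifies you (e.g. by email).
        AlarmActions: []*string{aws.String("arn:aws:sns:us-east-1:123456789012:billing-alerts")},
    })
    if err != nil {
        log.Fatal(err)
    }
}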
I'd start by saying this is very risky, and allowing people to run their own code in your infrastructure can be very dangerous. That said, here are a few things:
Limiting Code Execution Time
This is already built into Lambda. Functions have a configurable execution timeout, which you can set easily through IaC, the AWS Console, or the CLI.
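For example, setting the timeout through the API could look like this sketch (Go, AWS SDK for Go v1; the function name and the 10-second value are placeholders):

package main

import (
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/lambda"
)

func main() {
    sess := session.Must(session.NewSession())
    svc := lambda.New(sess)

    // Cap execution time at 10 seconds; Lambda kills the invocation once it is exceeded.
    _, err := svc.UpdateFunctionConfiguration(&lambda.UpdateFunctionConfigurationInput{
        FunctionName: aws.String("user-strategy-runner"), // placeholder
        Timeout:      aws.Int64(10),
    })
    if err != nil {
        log.Fatal(err)
    }
}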
Restricting Internet Access
By default, Lambda functions can be thought of as existing outside the constraints of a VPC for most applications, and they therefore have internet access. You could put your Lambda function inside a private subnet in a VPC and then configure the networking so that no outbound connections are allowed except to the locations you want.
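A "private" subnet here simply means its route table contains no 0.0.0.0/0 route to an internet or NAT gateway. A hedged sketch of creating and attaching such a route table (Go, AWS SDK for Go v1; the VPC and subnet IDs are placeholders):

package main

import (
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
    sess := session.Must(session.NewSession())
    svc := ec2.New(sess)

    // A fresh route table only contains the implicit "local" route for the VPC CIDR.
    // By not adding a 0.0.0.0/0 route, anything in the associated subnet has no internet path.
    rt, err := svc.CreateRouteTable(&ec2.CreateRouteTableInput{
        VpcId: aws.String("vpc-0123456789abcdef0"), // placeholder
    })
    if err != nil {
        log.Fatal(err)
    }

    _, err = svc.AssociateRouteTable(&ec2.AssociateRouteTableInput{
        RouteTableId: rt.RouteTable.RouteTableId,
        SubnetId:     aws.String("subnet-0123456789abcdef0"), // placeholder: the Lambda subnet
    })
    if err != nil {
        log.Fatal(err)
    }
}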
Restricting Access to Other Services
Assuming you are referring to AWS services here, Lambda functions are bound by their IAM role in terms of which other AWS services they can access. As long as you don't grant the Lambda function access to something in its IAM role, it won't be able to reach those services, unless a potentially malicious user provides credentials by some other means, such as putting them in plain text in the code where an AWS SDK implementation could pick them up.
If you are referring to other internal services such as EC2 instances or ECS services, then you can restrict access with the correct network configuration and by putting your function in a VPC.
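To illustrate the least-privilege point above, here is a sketch (Go, AWS SDK for Go v1) of a role the function could run under that can only write to its own log group. The role name, account ID and log-group ARN are placeholders; if the function is attached to a VPC, it additionally needs the ENI permissions from AWSLambdaVPCAccessExecutionRole.

package main

import (
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/iam"
)

// Trust policy: only the Lambda service may assume this role.
const trustPolicy = `{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"Service": "lambda.amazonaws.com"},
    "Action": "sts:AssumeRole"
  }]
}`

// Permission policy: the function may only write to its own log group (placeholder ARN).
const logOnlyPolicy = `{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
    "Resource": "arn:aws:logs:*:123456789012:log-group:/aws/lambda/user-strategy-runner:*"
  }]
}`

func main() {
    sess := session.Must(session.NewSession())
    svc := iam.New(sess)

    _, err := svc.CreateRole(&iam.CreateRoleInput{
        RoleName:                 aws.String("user-strategy-runner-role"), // placeholder
        AssumeRolePolicyDocument: aws.String(trustPolicy),
    })
    if err != nil {
        log.Fatal(err)
    }

    _, err = svc.PutRolePolicy(&iam.PutRolePolicyInput{
        RoleName:       aws.String("user-strategy-runner-role"),
        PolicyName:     aws.String("logs-only"),
        PolicyDocument: aws.String(logOnlyPolicy),
    })
    if err != nil {
        log.Fatal(err)
    }
}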
Do you see any other possible attack vector?
It's hard to say for sure. I'd really advise against this completely without taking some professional (and likely paid and insured) advice. There are new attack vectors that can open up or be discovered daily and therefore any advice now may completely change tomorrow if a new vulnerability is discovered.
I think your best bets are:
Restrict the function timeout to be as low as physically possible (allowing for cold starts).
Minimise the IAM policy for the function as far as humanly possible. Be careful with logging: I assume you'll want some logs, but you don't want to allow someone to pump GBs of data into your CloudWatch Logs.
Restrict the language used so you are using one language that you're very confident in and that you can audit easily.
Run the Lambda in a private subnet in a VPC. You'll likely want a separate routing table, and you will need to audit your security groups and network ACLs closely.
Add alerts and VPC Flow Logs (see the sketch after this list) so that a) if something does happen that shouldn't, it's logged and traceable, and b) you are automatically alerted to the problem and can rectify it as soon as possible.
Consider who will be reviewing the code. Are they experienced and trained to spot attack vectors?
Seek paid, professional advice so you don't end up with security problems or very large bills from AWS.
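For the flow-log point in the list above, a hedged sketch of enabling VPC Flow Logs to CloudWatch Logs (Go, AWS SDK for Go v1; the VPC ID, log group name and IAM role ARN are placeholders):

package main

import (
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
    sess := session.Must(session.NewSession())
    svc := ec2.New(sess)

    // Capture ALL traffic (accepted and rejected) for the sandbox VPC into CloudWatch Logs,
    // so unexpected connection attempts from the Lambda subnet become visible and alertable.
    _, err := svc.CreateFlowLogs(&ec2.CreateFlowLogsInput{
        ResourceIds:              aws.StringSlice([]string{"vpc-0123456789abcdef0"}), // placeholder
        ResourceType:             aws.String("VPC"),
        TrafficType:              aws.String("ALL"),
        LogGroupName:             aws.String("/vpc/lambda-sandbox-flow-logs"),                  // placeholder
        DeliverLogsPermissionArn: aws.String("arn:aws:iam::123456789012:role/flow-logs-role"), // placeholder
    })
    if err != nil {
        log.Fatal(err)
    }
}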

How to ping sap s4 service endpoint from cloud sdk?

We are building a ping API for an S/4 OData service. From an SCP application we want to call the service endpoint at a repeated interval. How do we call the S/4 service endpoint from the Cloud SDK? The generated VDM only gives us the operation endpoints.
Please ask if you need more info.
Thanks
Swastik
You can leverage the HttpClientAccessor to obtain a client for your target system and then perform a simple HEAD request against the service:
// Obtain an HTTP client for the configured destination
final HttpClient httpClient = HttpClientAccessor.getHttpClient(
        DestinationAccessor.getDestination("MyDestination").asHttp());
// Send a lightweight HEAD request to the service root to check reachability
final HttpResponse response = httpClient.execute(
        new HttpHead(BusinessPartnerService.DEFAULT_SERVICE_PATH));
// AssertJ-style check (e.g. in a test); in application code, inspect the status code directly
assertThat(response.getStatusLine().getStatusCode()).isEqualTo(HttpStatus.SC_OK);
Here I took the BusinessPartnerService of S/4HANA Cloud as an example.
Is it like a check for server availability, or what do you mean by a ping API?
You can use any operation like getAll() or getByKey() to implement a "ping". It very much depends on your knowledge of the service to identify which exact operation to use to make sure the service behaves as you'd expect.
Do you have a certain OData protocol feature in mind that would help to solve your problem, by the way?
You can find more details on the OData client capabilities here. Also, take a look at connectivity options.
If you explain more about what you call a ping, we might be able to suggest a few more ideas. Overall it seems like the task is beyond what the SDK should do and is more of an implementation detail of your particular service infrastructure.

What changes need to be made in the Go SDK's API calls to make it work for Azure Gov Cloud

I've used the Go SDK for deploying applications in Azure. I'm now planning to do the same for Azure Gov Cloud. Are there any changes that have to be made in the calls from the Compute, Storage, Network, or Subscription client APIs to make them work for Azure Gov Cloud? For example, for getting the list of locations, should there be any change in the API call arguments to make it work in Gov Cloud?
Based on the documentation you should just be able to set the AZURE_ENVIRONMENT environment variable prior to calling the NewAuthorizerFromEnvironment() method. Set the value of this environment variable to: AZUREUSGOVERNMENTCLOUD.
When interacting with sovereign clouds, which are often updated at a slower cadence than the public cloud, you'll need to make sure you choose API versions which are available in your cloud. The Azure SDK for Go has been specifically designed to facilitate this by putting the API version in the import path.
In addition, as @Steve Michelotti pointed out, you'll want to inject the Government environment so that the correct base URLs are used by the SDK.
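Putting both answers together, a hedged sketch of what this can look like in Go. The exact import path and API version are assumptions (pick one actually available in your cloud), and the usual AZURE_CLIENT_ID / AZURE_CLIENT_SECRET / AZURE_TENANT_ID variables still need to be set for NewAuthorizerFromEnvironment:

package main

import (
    "context"
    "fmt"
    "log"
    "os"

    // NOTE: the API-version segment of this import path is an assumption;
    // choose a version that exists in your target cloud.
    "github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-06-01/subscriptions"
    "github.com/Azure/go-autorest/autorest/azure"
    "github.com/Azure/go-autorest/autorest/azure/auth"
)

func main() {
    // Point the environment-based authorizer at the Government cloud endpoints.
    os.Setenv("AZURE_ENVIRONMENT", "AZUREUSGOVERNMENTCLOUD")
    authorizer, err := auth.NewAuthorizerFromEnvironment()
    if err != nil {
        log.Fatal(err)
    }

    // Resolve the sovereign-cloud endpoints so clients use the Gov Cloud ARM base URL.
    env, err := azure.EnvironmentFromName("AZUREUSGOVERNMENTCLOUD")
    if err != nil {
        log.Fatal(err)
    }

    // Use the ...WithBaseURI constructor instead of the default (public cloud) one.
    client := subscriptions.NewClientWithBaseURI(env.ResourceManagerEndpoint)
    client.Authorizer = authorizer

    // The call itself is the same as in the public cloud, e.g. listing locations.
    locations, err := client.ListLocations(context.Background(), os.Getenv("AZURE_SUBSCRIPTION_ID"))
    if err != nil {
        log.Fatal(err)
    }
    for _, loc := range *locations.Value {
        fmt.Println(*loc.Name)
    }
}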

Direct connect to SQL Azure vs connection via API service layer?

Currently our DB lives in the customer's local network, and we have a C# client app that consumes the data. Due to some business needs, we have been told to start moving everything to Azure. The DB will be moving to Azure SQL.
We had discussion about accessing DB. There are two points:
One guy said that we have to add one more layer between our app (which will be running outside Azure on end-user PCs) and SQL Azure. In other words, he suggested adding an API service that will translate all requests to the DB, i.e. app (on-premises) -> API service (on Azure) -> SQL Azure. This approach looks more reliable and secure, since we are hiding SQL Azure behind the facade of the API service and the app talks to our API service only. It works more like a reverse proxy. Obviously, behind this API we can build a more sophisticated structure of DBs.
Another guy suggested connecting directly to the DB, i.e. app (on-premises) -> SQL Azure. So far we don't have any plans to change the structure of the DB or even increase the number of DBs. He claims it is simpler and that we can secure the connection just as well; having an additional service that just relays our queries to the DB and back looks like a waste of time. In the future, if needed, we could add this API.
What would you select and recommend, and why?
A few notes:
We are going to use Azure AD to authenticate users.
Our application will be moving to Azure too, but later (in 1-2 years); we have plans to create a REST API and move to a thin client instead of the fat client we have right now.
Good performance is our goal and we don't want to add extra layers that could decrease it, but security is just as important to us.
Certainly an intermediate layer is one way to go. There isn't enough detail to be sure, but I wonder why you don't try the second option. Usually some redevelopment is normal. But if you can get away without it, and you get sufficient performance then that's even better.
I hope this helps.
Thank you.
Guy
If your application is not just a prototype (it sounds like it is not), then I advise you to build the intermediate API. The primary reasons for this are:
Flexibility
Rolling out a new version of an API is simple: You have either only one deployment or you have something like Octopus Deploy that deploys to a few instances at the same time for you. Deploying client applications is usually much more involved: Creating installers, distributing them, making sure users install them, etc.
If you build the API, you will be able to make changes to the DB and hide these changes from the client applications by just modifying the API implementation, but keeping the API interfaces the same. Moving forward, this will simplify the tasks for your team considerably.
Security
As soon as you have different roles/permissions in your system, you will need to implement them with DB security features if you connect to the DB directly. This may work for simple cases, but even there it is a pain to manage.
With an API, you can implement authorization in the API using C#. Like this, you can build whatever you need and you're not restricted by the security features the DB offers.
Also, if you don't take extra care about this, you may end up exposing the DB credentials to the client app, which will be a major security flaw.
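The apps in question are C#, but just to make the shape of such a facade concrete, here is a minimal sketch in Go: the connection string never leaves the server, the API performs an authorization check, and the DB only ever sees parameterized queries. The endpoint, table, token check and environment variable name are all placeholders.

package main

import (
    "database/sql"
    "encoding/json"
    "log"
    "net/http"
    "os"

    _ "github.com/denisenkom/go-mssqldb" // SQL Server / Azure SQL driver
)

func main() {
    // The connection string stays on the server; clients only ever see the HTTP API.
    db, err := sql.Open("sqlserver", os.Getenv("AZURE_SQL_CONNECTION_STRING"))
    if err != nil {
        log.Fatal(err)
    }

    http.HandleFunc("/api/orders", func(w http.ResponseWriter, r *http.Request) {
        // Placeholder authorization check; in reality, validate the Azure AD token
        // and map it to application roles here.
        if r.Header.Get("Authorization") == "" {
            http.Error(w, "unauthorized", http.StatusUnauthorized)
            return
        }

        // Parameterized query: the client controls only the value, never the SQL text.
        rows, err := db.Query("SELECT Id, Title FROM Orders WHERE CustomerId = @p1", r.URL.Query().Get("customerId"))
        if err != nil {
            http.Error(w, "query failed", http.StatusInternalServerError)
            return
        }
        defer rows.Close()

        type order struct {
            ID    int
            Title string
        }
        var result []order
        for rows.Next() {
            var o order
            if err := rows.Scan(&o.ID, &o.Title); err != nil {
                http.Error(w, "scan failed", http.StatusInternalServerError)
                return
            }
            result = append(result, o)
        }
        json.NewEncoder(w).Encode(result)
    })

    log.Fatal(http.ListenAndServe(":8080", nil))
}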
Conclusion
Build the intermediate API, unless you have strong reasons not to. As always with architecture considerations, I'm sure there are cases where the above points don't apply. Just make sure you understand all the implications if you decide to go the direct route.

Adding a new DB service to CloudFoundry

I would like to add Cassandra to CloudFoundry. How can that be achieved? I was looking at the information posted here: CouchDB in CloudFoundry? but that is using the included CouchDB.
I also have been combing through this wiki https://github.com/cloudfoundry/oss-docs/tree/master/vcap/adding_a_system_service, but that doesn't give me enough information on how to point to my externally hosted Cassandra service.
Any help would be appreciated.
Although there's not much information on it, the Service Broker tool will let you expose an external service to a VCAP deployment (so that the service is displayed when running vmc services).
https://github.com/cloudfoundry/vcap-services/tree/master/service_broker
There isn't a how-to or other docs to speak of, so your best bet is to read the source and post questions on the vcap-dev google group. Here's an existing thread on Service Broker:
https://groups.google.com/a/cloudfoundry.org/d/topic/vcap-dev/sXF9rWzMMHc/discussion
If you want to connect directly to your existing services in your private cloud, then I see two solutions:
Do nothing special and have your code connect to those services, assuming they are visible from the network and no firewall sits between them. Of course, you'll want to make their address configurable, but other than that, it is as if you were hitting a third party.
Create some kind of "gateway" service whose role would be to proxy the connection to your private service.
Of course, a third solution would be to have a real "CloudFoundry" oriented Cassandra service, and migrate your existing data to it (but then it would not be accessible from the rest of your IS, unless you create a bridge the other way around)
I would start with option 1) and depending on your processes and usage, research solution 2) afterwards.
