Need help hiding AWS Lambda inline code - security

For privacy and security reasons, I would like to hide my AWS Lambda function code.
I want to disable the following abilities:
The ability to edit the Lambda code inline.
The ability to download the Lambda deployment package.
My code is less than 5 MB (so the 10 MB limit is ruled out).
I can't create any IAM roles, as most of the admins have higher privileges than I do.
Adding more details:
Security threat - anyone who has access to the account can modify my code.
Privacy - I don't want others peeking into my code unnecessarily.
TIA

If there are other users in the same AWS Account that have Admin capabilities, then it is not possible to "protect" your code. They would be able to access and modify the code. (Hopefully you are also keeping your code in a repository for safety and maintenance purposes, so that would need to be protected, too.)
An alternative would be to put the AWS Lambda function into a different AWS Account, then provide cross-account permissions that allow the function to be invoked but not otherwise accessed.
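As a rough sketch of that cross-account setup (account IDs, region, and function name below are placeholders, using the AWS SDK for JavaScript v2): the owning account adds a resource-based policy that permits only invocation, and the calling account invokes the function by ARN without ever being able to read or update its code.

// Sketch only - account IDs, region, and function name are placeholders.
const AWS = require('aws-sdk');

// In the account that OWNS the function: allow account 222222222222 to invoke it.
// Note that this grants lambda:InvokeFunction only, not lambda:GetFunction
// (the action that returns the code download URL) or lambda:UpdateFunctionCode.
const ownerLambda = new AWS.Lambda({ region: 'us-east-1' });
ownerLambda.addPermission({
  FunctionName: 'my-protected-function',
  StatementId: 'allow-cross-account-invoke',
  Action: 'lambda:InvokeFunction',
  Principal: '222222222222'
}).promise();

// In the CALLING account: invoke by full ARN; the code itself stays out of reach.
const callerLambda = new AWS.Lambda({ region: 'us-east-1' });
callerLambda.invoke({
  FunctionName: 'arn:aws:lambda:us-east-1:111111111111:function:my-protected-function',
  Payload: JSON.stringify({ hello: 'world' })
}).promise().then(res => console.log(res.Payload));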

Related

Is it safe to put secrets inside Google Apps Script code?

I'm creating a Google Workspace Add-On and need to make some requests using OAuth. They provide a guide here explaining how to do so. In the sample code, it's suggested that the OAuth client secret be inline:
function getOAuthService() {
  return OAuth2.createService('SERVICE_NAME')
      .setAuthorizationBaseUrl('SERVICE_AUTH_URL')
      .setTokenUrl('SERVICE_AUTH_TOKEN_URL')
      .setClientId('CLIENT_ID')
      .setClientSecret('CLIENT_SECRET')
      .setScope('SERVICE_SCOPE_REQUESTS')
      .setCallbackFunction('authCallback')
      .setCache(CacheService.getUserCache())
      .setPropertyStore(PropertiesService.getUserProperties());
}
Is this safe for me to do?
I don't know how Google Apps Script is architected, so I don't have details on where and how the code is run.
Most likely it is safe, since the script is only accessible to the script owner and, if it is for Google Workspace, to Workspace admins (which may or may not be an issue).
You can add some security/safety by using a container-bound script, which is bound to a Google Spreadsheet, Google Doc, or another file that allows user interaction, or a standalone script that connects to a UI for interaction in some other way. Refer to this link for a more detailed explanation: What is the appropriate way to manage API secrets within a Google Apps script?
Otherwise, the only other option I see is to store the keys and secrets in User Properties. Here's how you can do it: Storing API Keys and secrets in Google AppScript user property
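As a rough illustration of the User Properties approach (property names here are just examples, and it reuses the same OAuth2 library as the sample above): store the secret once outside the source, then read it back when building the service.

// Run once (e.g. from the script editor) to store the secrets outside the source.
function storeSecrets() {
  PropertiesService.getUserProperties().setProperties({
    OAUTH_CLIENT_ID: 'CLIENT_ID',
    OAUTH_CLIENT_SECRET: 'CLIENT_SECRET'
  });
}

// Build the OAuth2 service from the stored values instead of inline literals.
function getOAuthServiceFromProperties() {
  const props = PropertiesService.getUserProperties();
  return OAuth2.createService('SERVICE_NAME')
      .setAuthorizationBaseUrl('SERVICE_AUTH_URL')
      .setTokenUrl('SERVICE_AUTH_TOKEN_URL')
      .setClientId(props.getProperty('OAUTH_CLIENT_ID'))
      .setClientSecret(props.getProperty('OAUTH_CLIENT_SECRET'))
      .setScope('SERVICE_SCOPE_REQUESTS')
      .setCallbackFunction('authCallback')
      .setCache(CacheService.getUserCache())
      .setPropertyStore(PropertiesService.getUserProperties());
}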
You can also refer to the link below for more general information on how to manage secrets and add some security: https://softwareengineering.stackexchange.com/questions/205606/strategy-for-keeping-secret-info-such-as-api-keys-out-of-source-control

How could I secure this API?

I'm building a full-stack application with Next-JS. I'm building an API that works with Firebase. I was wondering if there is a way to make this API secure.
Let me elaborate. There is an option on your account called Premium. This variable is stored in Firestore and determines whether you have purchased a Premium membership, which in turn determines whether or not you have access to certain features. I will use an API to change this variable.
I had the following in mind:
Have a button on the page to upgrade account.
When the button is pressed, call the API with the following params: email and the status to upgrade to. The same function can also be used to downgrade an account, for example when the user doesn't pay for the upgrade.
That API function changes the variable in the Firestore. It returns a status and a message.
I want to make step 2 more secure, because otherwise it would allow anyone to change the premium variable. That is obviously not what I want. Is there anything I can do about that? For example, a token system; I have been thinking about that, but I don't really know how to implement it or how exactly it would work.
For anyone wondering why I am using an API: I will also be creating an app, probably with react-native. The user will also be able to change their account status and interact with the API to do other stuff in that app.
Thanks for reading and responding! I hope this is at least a bit clear. If you have any questions, please comment them.
I do similar things in my app. I use Cloud Functions (which run in a trusted server environment) both to save settings in Security-Rules-protected collections and to set Custom Claims on the user's Auth profile. All authorizations are then verified in the Cloud Functions before any changes are made. You may need to "seed" some values in a protected collection/document from the Console to get the process started.
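To make that concrete, here is a minimal sketch of a callable Cloud Function (function, field, and helper names are my own examples, and the payment check is a deliberate placeholder) that verifies the caller server-side before touching the premium flag:

// Minimal sketch - verifyPaymentSomehow is a hypothetical placeholder you would
// replace with real verification (e.g. a Stripe webhook or receipt validation).
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.setPremium = functions.https.onCall(async (data, context) => {
  // The client cannot fake this: the platform verifies the Auth token.
  if (!context.auth) {
    throw new functions.https.HttpsError('unauthenticated', 'Sign in first.');
  }
  const uid = context.auth.uid;

  // Confirm the purchase server-side before granting anything.
  const paid = await verifyPaymentSomehow(uid, data.paymentRef);
  if (!paid) {
    throw new functions.https.HttpsError('permission-denied', 'Payment not verified.');
  }

  // Store the flag where only trusted code can write it.
  await admin.firestore().collection('users').doc(uid).set({ premium: true }, { merge: true });
  await admin.auth().setCustomUserClaims(uid, { premium: true });
  return { status: 'ok', message: 'Account upgraded.' };
});

// Placeholder only - denies everything until real payment verification is wired in.
async function verifyPaymentSomehow(uid, paymentRef) {
  return false;
}

The Next.js site and the react-native app would then call this through the Firebase SDK (httpsCallable), and your Firestore security rules can simply deny direct client writes to the premium field.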

How to restrict a serverless Lambda function

I implemented a Lambda function and it's working fine. Can I restrict a user's access to the Lambda function by using an IAM role? How?
You brought up a good point. Here are the things you need to understand about permissions on AWS with Lambda.
What can the Lambda function do?
This comes under resource-level permissions: create policies or a role and assign them to the Lambda function, and the function will have access to those resources.
What can a user do to the Lambda function?
This comes under user-level permissions. As of this writing, IAM exposes a set of Lambda actions you can allow or deny (for example lambda:InvokeFunction, lambda:GetFunction, and lambda:UpdateFunctionCode). Create a policy or role with only the actions the user needs and attach it to the user, and IAM will enforce the restrictions accordingly.
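As a rough sketch of the user-level side (user name, account ID, region, and function name are placeholders, using the AWS SDK for JavaScript v2), an inline user policy could allow invoking one function while explicitly denying the actions that expose or change its code:

// Sketch only - user name, account ID, region, and function name are placeholders.
const AWS = require('aws-sdk');
const iam = new AWS.IAM();

const functionArn = 'arn:aws:lambda:us-east-1:111111111111:function:my-function';

// Allow invoking the function, but explicitly deny reading or modifying its code
// (an explicit Deny wins over any Allow granted elsewhere).
const policy = {
  Version: '2012-10-17',
  Statement: [
    { Effect: 'Allow', Action: ['lambda:InvokeFunction'], Resource: functionArn },
    { Effect: 'Deny', Action: ['lambda:GetFunction', 'lambda:UpdateFunctionCode'], Resource: functionArn }
  ]
};

iam.putUserPolicy({
  UserName: 'restricted-user',
  PolicyName: 'limited-lambda-access',
  PolicyDocument: JSON.stringify(policy)
}).promise();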
Hope it helps.

Is there a way to create groups for an Amazon Mechanical Turk Requester task?

I am a part of a group trying to create an Amazon Mechanical Turk Requester task. We'd like to either have a group account or have multiple accounts with access to the same project. I've been looking around and cannot find a way to do this. Is it possible to make this happen without sharing a single account?
This may not be perfect, but if you're an MTurk API customer, you can use Identity and Access Management (IAM) to have a single account (with a credit card on file) but provide multiple sets of API credentials (AWS Access Keys and Secret Keys) to each person/group that wants to use the account. This isn't a perfect solution because:
It is only applicable to the MTurk Application Programming Interface (API)
There aren't quotas or controls to limit spending on one person vs. another
Everyone can still access each other's HITs (these aren't separate accounts)
You can learn more about IAM support in MTurk here: https://blog.mturk.com/introducing-mechanical-turk-api-support-for-iam-credentials-8f2de8cd6afb
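As a rough sketch of how that could be scripted (user and policy names are placeholders, the managed policy ARN is an assumption you should verify, and this uses the AWS SDK for JavaScript v2): create one IAM user per person under the shared account and hand each their own keys.

// Sketch only - names are placeholders; verify the MTurk managed policy ARN first.
const AWS = require('aws-sdk');
const iam = new AWS.IAM();

async function addRequesterTeamMember(userName) {
  // One IAM user per person; everything still bills to the shared account.
  await iam.createUser({ UserName: userName }).promise();

  // Assumed managed policy granting MTurk Requester API access.
  await iam.attachUserPolicy({
    UserName: userName,
    PolicyArn: 'arn:aws:iam::aws:policy/AmazonMechanicalTurkFullAccess'
  }).promise();

  // Hand these keys to that person for their own API calls.
  const { AccessKey } = await iam.createAccessKey({ UserName: userName }).promise();
  return { accessKeyId: AccessKey.AccessKeyId, secretAccessKey: AccessKey.SecretAccessKey };
}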
There is not currently a way to do something similar in the Requester Website (requester.mturk.com).
Hope that helps a little.

Azure blob storage: Shared access signature for multiple containers?

I'm creating an application that will be hosted in Azure. In this application, users will be able to upload their own content. They will also be able to configure a list of other trusted app users who will be able to read their files. I'm trying to figure out how to architect the storage.
I think that I'll create a storage container named after each user's application ID, and they will be able to upload files there. My question relates to how to grant read access to all files to which a user should have access. I've been reading about shared access signatures and they seem like they could be a great fit for what I'm trying to achieve. But, I'm evaluating the most efficient way to grant access to users. I think that Stored access policies might be useful. But specifically:
Can I use one shared access signature (or stored access policy) to grant a user access to multiple containers? I've found one piece of information which I think is very relevant:
http://msdn.microsoft.com/en-us/library/windowsazure/ee393341.aspx
"A container, queue, or table can include up to 5 stored access policies. Each policy can be used by any number of shared access signatures."
But I'm not sure if I'm understanding that correctly. If a user is connected to 20 other people, can I grant him or her access to twenty specific containers? Of course, I could generate twenty individual stored access policies, but that doesn't seem very efficient, and when they first log in, I plan to show a summary of content from all of their other trusted app users, which would equate to demanding 20 signatures at once (if I understand correctly).
Thanks for any suggestions...
-Ben
Since you are going to have a container per user (for now I'll equate a user with what you called a user application ID), that means you'll have a storage account that can contain many different containers for many users. If you want the application to be able to upload to only one specific container while reading from many, two options come to mind.
First: Create an API that lives somewhere and handles all the requests. Behind the API your code will have full access to the entire storage account, so your business logic determines what they do and do not have access to. The upside of this is that you don't have to create Shared Access Signatures (SAS) at all; your app only knows how to talk to the API. You can even combine the data they see in that summary of content by making parallel calls to the various containers from a single call from the application. The downside is that you are now hosting this API service, which has to broker ALL of these calls. If you go the SAS route instead, you'd still need the API service, but only to generate the SAS; the client applications would then make the calls directly, with the Windows Azure storage service bearing the load, which reduces the resources you actually need.
Second: Go the SAS route and generate SAS as needed, but this will get a bit tricky.
You can only create up to five stored access policies on each container. One of these five would be a policy for the "owner" of the container, giving them read and write permissions. Now, since you are allowing folks to give read permissions to other folks, you'll run into the policy count limit unless you reuse the same policy for Read, but then you won't be able to revoke it when the user removes someone from their "trusted" list of readers. For example, if I gave permissions to both Bob and James to my container and they are both handed a copy of the Read SAS, then to remove Bob I'd have to cancel the Read policy they shared and reissue a new Read SAS to James. That's not really that bad of an issue, though, as the app can detect when it no longer has permissions and ask for a renewed SAS.
In any case you still want the policies to be short lived. If I removed Bob from my trusted readers I'd pretty much want him cut off immediately. This means you'll be going back to get a renewed SAS quite a bit and recreating the shared access signature, which reduces the usefulness of the stored access policies. This really depends on your tolerance for how long the policy lives and how quickly you'd want someone cut off if they were "untrusted".
Now, a better option could be to create ad-hoc signatures. You can have as many ad-hoc signatures as you want, but they can't be revoked and can last at most one hour. Since you'd make them short lived anyway, the one-hour limit and the lack of revocation shouldn't be an issue. Going that route means the application has to come back to get them as needed, but given what I mentioned above about wanting the SAS to run out when someone is removed, this may not be a big deal. As you pointed out, this does increase the complexity of things because you're generating a lot of SASs; however, with these being ad-hoc you don't really need to track them.
If you were going to go the SAS route I'd suggest having your API generate the ad-hoc ones as needed. They shouldn't last more than a few minutes, since people can have their permissions to a container removed, and all you are trying to do is reduce the load on the hosted service for the actual uploads and downloads. Again, all the logic for deciding which containers someone can see stays in your API service, and the applications just get signatures they can use for short periods of time.
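As a minimal sketch of that ad-hoc route (account name, key, container names, and the 15-minute lifetime are placeholder assumptions, using the current @azure/storage-blob package rather than the SDK that existed when this was written): the API generates a short-lived, read-only container SAS for each container the caller is trusted to read.

// Sketch only - account name/key and container names are placeholders.
const {
  StorageSharedKeyCredential,
  generateBlobSASQueryParameters,
  ContainerSASPermissions
} = require('@azure/storage-blob');

const credential = new StorageSharedKeyCredential('myaccount', 'ACCOUNT_KEY');

// Called by the API service for each container the signed-in user may read.
function createReadOnlyContainerSas(containerName) {
  const sas = generateBlobSASQueryParameters({
    containerName,
    permissions: ContainerSASPermissions.parse('r'), // read only
    startsOn: new Date(),
    expiresOn: new Date(Date.now() + 15 * 60 * 1000) // short-lived: 15 minutes
  }, credential);
  return 'https://myaccount.blob.core.windows.net/' + containerName + '?' + sas.toString();
}

// e.g. one URL per trusted friend's container, returned to the client for the summary view
const urls = ['friend1-container', 'friend2-container'].map(createReadOnlyContainerSas);

Because the tokens expire quickly, removing someone from a trusted list cuts them off as soon as their current token lapses, without any revocation bookkeeping.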
