I pushed an API key to a public repository on GitHub - security

Yes, I did it.
What amazes me is that bots scan GitHub looking for free API keys. I can understand that, but here is what is weird: they were able to activate a different API (Compute Engine), host 3 virtual machines, and use them to mine crypto.
My question is: isn't it a vulnerability that they can host virtual machines and use a different API?
I had to shut down the whole project.

Depending on the role assigned to the compromised service account, an attacker can do everything or nothing.
There are some basic "best practices" regarding keys and service accounts that should be useful to you.
Generally, use (if possible) a dedicated service account to manage VMs, rotate keys weekly or twice a week (just like the Google-managed ones), and avoid putting any API keys into repositories that can/will be synchronised with public ones :)
Yes, it sounds silly, but slip-ups happen, and this will make unauthorised access far less likely or impossible.
Also, following the "least privilege" rule is worth going for: compromised credentials will not be of much use then.
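The rotation step can be scripted. As a minimal sketch (the service-account email, key ID, and file name below are placeholder assumptions), these are the gcloud invocations a weekly rotation job would run:

```python
def rotate_key_commands(sa_email, old_key_id, new_key_file):
    """Build the gcloud commands for one rotation: mint a new key for the
    service account, then delete the old (possibly compromised) one."""
    return [
        ["gcloud", "iam", "service-accounts", "keys", "create", new_key_file,
         "--iam-account", sa_email],
        ["gcloud", "iam", "service-accounts", "keys", "delete", old_key_id,
         "--iam-account", sa_email, "--quiet"],
    ]
```

A cron job could then execute each command with subprocess.run(cmd, check=True) once or twice a week.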

Note that, since Aug. 2021:
Secret scanning org-level REST API
GitHub Advanced Security customers can now retrieve private repository secret scanning results at the organization level via the GitHub REST API.
This new endpoint, in beta, supplements the existing repository-level endpoint.
The API is: GET /organizations/:organization_id/secret-scanning/alerts.
See "About secret scanning for private repositories"
In your case, querying that new (still beta) API endpoint can be a good practice, to be alerted before an attacker has time to do much damage.
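A minimal sketch of calling that endpoint with Python's standard library (the organization name and token are placeholders; the path is the one quoted above from the Aug. 2021 changelog):

```python
import json
import urllib.request

def secret_scanning_alerts_request(org, token):
    """Build the GET request for the org-level secret scanning alerts
    endpoint (beta path as announced in Aug. 2021)."""
    url = f"https://api.github.com/organizations/{org}/secret-scanning/alerts"
    return urllib.request.Request(url, headers={
        "Accept": "application/vnd.github+json",
        "Authorization": f"token {token}",
    })

# With a valid token (network access required):
# alerts = json.load(urllib.request.urlopen(
#     secret_scanning_alerts_request("my-org", "ghp_...")))
```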
And this is also for public repositories!
Secret scanning is now available for free on public repositories (Dec. 2022)
Previously, only organizations with GitHub Advanced Security could enable secret scanning's user experience on their repositories.
Now, any admin of a public repository on GitHub.com can detect leaked secrets in their repositories with GitHub secret scanning.
The new secret scanning user experience complements the secret scanning partner program, which alerts over 100 service providers if their tokens are exposed in public repositories.
You can read more about this change and how secret scanning can protect your contributions in our blog post.

Related

Adding a personal access token in GitLab: What are the different token scope use cases?

I am connecting my GitLab account to PyCharm and while creating the token access in GitLab, I was uncertain what are the practical uses.
I am very new to this so if someone can dumb this down, that would be appreciated.
The idea is to:
limit the actions you can do with one PAT
have several PATs for different usages
easily revoke one PAT if compromised/not needed, without invalidating the others
As illustrated here, if you intend to use a PAT as your GitLab password, you would need the "api" scope.
If not, "read_repository" or (if you don't need to clone) "read_user" is enough.
"read_registry" is only needed if your GitLab host docker images as a docker registry.
What I don't understand is that it allows me to select all at once
That is because each scope matches a distinct use case, possibly covered by another scope.
By selecting them all, you cover all the use cases:
api covers everything (which is too much, as illustrated by issue 20440. GitLab 12.10, Apr. 2020, should fix that with merge request 28944 and a read_api scope)
write_repository (since GitLab 11.11, May 2019): "Repository read-write scope for personal access tokens".
Many personal access tokens rely on api level scoping for programmatic changes, but full API access may be too permissive for some users or organizations.
Thanks to a community contribution, personal access tokens can now be scoped to only read and write to project repositories – preventing deeper API access to sensitive areas of GitLab like settings and membership.
read_user and read_registry each address a distinct scenario
As mentioned in gitlab-ce/merge_request 5951:
I would want us to (eventually) have separate scopes for read_user and write_user, for example.
I've looked at the OpenID Connect Core spec - it defines the profile, email, address, and phone scopes, which need to be accompanied by the openid scope.
Since we can have multiple allowable scopes for a given resource, my preference would be to leave the read_user scope here, and add in the openid and profile scopes whenever we're implementing OpenID compliance.
The presence of other scopes (like read_user and write_user) shouldn't affect the OpenID flow.

GoogleDrive API V3 in Browser: exposed Client ID and API Key. Any security issue?

I'm trying to develop a Webapp that would allow users authenticate with their Google Account and store some information and files on their Google Drive.
It would be a static html/js page with no backend.
There is a quickstart example here:
https://developers.google.com/drive/api/v3/quickstart/js
It works fine, but I am wondering whether exposing my Client ID and API Key to everybody, as in this example, could be a security problem. Anyone could use this ID and key in their own app.
What do you think?
As per Changes to the Google APIs Terms of Service, you are asked to keep your private keys private.
Asking developers to make reasonable efforts to keep their private keys private and not embed them in open source projects.
The author of that post was contacted and the exchange was made available on DaImTo's blog. This is a part of the reply:
Yes, you are not making your personal data available to them. You are,
however, allowing them to “impersonate” you in Google’s eyes. If our
abuse systems detect abuse (say, should someone try to DoS one of our
services using your key), you run the risk that they would terminate
your account because of it (and please note — they wouldn’t just cut
access to the key, they would shut down your console account).

Custom Users when using Jenkins Google Login Plugin

I am attempting to move our company's Jenkins from the Jenkins user database + matrix-based security to the Google Login Plugin and the Role-based Strategy Plugin, to give us better control of our user accounts.
With this new setup, I am wondering how I could go about creating a designated user to be used by scripts which trigger Jenkins jobs remotely. I would like to do this without having to add a user to our company's GSuite account, as this costs a few $ per month. Before the switch to Google Login, I could just create a user manually in the Jenkins user database and take the API token from there, but since switching to Google Login there is no option to add a user (which makes sense, given that the users are managed by Google now). At the moment it seems like I have to choose from:
Use the old approach and forget about authenticating through Google. This is not a great result, as we want to minimize the number of user accounts we have to set up for new people joining the company, to reduce the overhead of onboarding.
Use the Google Login Plugin and create a new dedicated "Jenkins" user in GSuite for these scripting requirements. This costs money.
Use an existing user's API token to avoid the cost of a new Google user in our GSuite account. This seems like bad practice which I'll regret at some point.
Is there a workaround which doesn't require a designated GSuite user or repurposing an existing Google users credentials just for this purpose?
I did similar research a while ago, and it seems like there is no way to do this right now.
However, I'm using the SAML plugin with GSuite instead of the Google Login Plugin; from the Jenkins security perspective, I assume they work in the same way.
When you're using such plugin, Jenkins creates a securityRealm in its config. In my case it is:
<securityRealm class="org.jenkinsci.plugins.saml.SamlSecurityRealm" plugin="saml#1.0.7">
Therefore, to have SAML and Jenkins security matrix work simultaneously, you have to have several security realms.
Here is a ticket, which describes this issue, but it's still open
Regards!
I was also looking at how to trigger builds remotely when using the Google Login Plugin.
I ended up using the "Build Token Root Plugin" which solved this problem, without any need to create a dedicated user for this.
This plugin offers an alternate URI pattern which is not subject to the usual overall or job read permissions. Just issue an Http GET or POST to buildByToken/build?job=NAME&token=SECRET. This URI is accessible to anonymous users regardless of security setup, so you only need the right token.
https://wiki.jenkins.io/display/JENKINS/Build+Token+Root+Plugin
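The trigger URL from the plugin's pattern can be assembled like this (the Jenkins root, job name, and token are placeholders):

```python
from urllib.parse import urlencode

def build_by_token_url(jenkins_root, job, token, **params):
    """Assemble the Build Token Root Plugin trigger URL; an HTTP GET or
    POST to it starts the job, no logged-in user required."""
    query = urlencode({"job": job, "token": token, **params})
    return f"{jenkins_root.rstrip('/')}/buildByToken/build?{query}"

# e.g. urllib.request.urlopen(build_by_token_url(
#          "https://jenkins.example.com", "deploy", "s3cret"))
```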

Multiple API Keys for multiple websites for web designers

I am a beginner web designer and I am struggling to find relevant information online as to how I should go about managing my API keys for clients! I would really appreciate any tips or insights on how I should go about this!
I hold my own Google account and already have my own API key (JavaScript API) for my own website. However, when creating websites for clients, is it okay to use the same API key? Or should I create a new API key for each client in my own account (creating new "projects")? Or should I be creating a Google account for each client and then creating each client an API key through their own account?
I also know that there are usage limits on API keys, so I want to ensure I don't exceed these if using one key for multiple sites. How can I monitor this?
Looking for any advice on the best and most efficient way to go about this. I do not know too much on how API Keys work!
Much appreciated :)
I will be using the Google API as an example. Yes, you should always create a new project for each client. There are a multitude of reasons why, and you already mentioned some of them:
API query usage limit.
Separated client billing & usage breakdown for each project.
Security and revocation of compromised API keys.
Restricted security profiles: domain whitelisting, IP addresses, device usage, etc.
Access management and role management.
Traffic and analytical reasons.
Creating credentials
Depending on your organisation's needs and project scale: for us, we create credentials (API key / OAuth ID / service account key) for every platform the key will be used on. For example, if we are developing an e-commerce website that comes with an app, we would issue 3 keys (1 for the web, 1 for the Android app, 1 for the iOS app). This allows us to fine-tune the access permissions and lets us track usage.
What works for you?
If you are a freelancer or work in a small enterprise, the least you should do is separate every client by projects. There is no need to create a new Google account for each project. (You can always transfer ownership of projects to another account if your client requests at a later time)
For each project we are contracted for (it could be the same client), we create a separate project entry in our account.

Where do you store your db password and get it in your J2EE app?

How do you make sure it is secure when there are some devs who can access the machine?
Barring the whole discussion about not storing passwords in files, use the machine's own ACLs to prevent them from accessing it.
Make the file readable only by the admin account, or some other account used to run your software. Then you don't give the developers the admin account / process account credentials.
The bigger question is: if you are concerned about them accessing the file on your machine, why do they have access to said machine? Any developer who is able to replace the code on the server without checks will be able to access your database.
Let's give a nice real-world example of why you would want to do something like this.
You hire developers to create the Bank of Stackoverflow website. For whatever reason, you store all your clients' account information, including SSNs, in a single database that needs to be accessed by the Bank of Stackoverflow website.
Do not give developers permission to put code directly onto a live machine
Do not give developers the access information to the database.
All code has to go onto a stage machine to be verified. For the most part it is easy enough to allow developers to use stage databases consisting of fake client information.
It is the responsibility of vetted engineers to move products from the staging machine to the production machine.
I did not completely understand your problem, but I think the following article is for you:
Data Storage Security in J2EE
