Enabling IAM Authentication for an existing RDS instance. What happens to current users?

I understand we can enable IAM DB Authentication for RDS at any time. If I were to do so, what happens to existing users who connect to RDS using traditional usernames/passwords? Do they get impacted?

There will be absolutely no impact, since to connect to RDS via IAM you NEED to use a locally created user/role in the database.
That role is simply granted the already existing IAM-related role (like rds_iam for PostgreSQL, for example) plus whatever other privileges you grant to it.
I have done this with RDS Aurora PostgreSQL from EKS pods and it works perfectly.
You can find more information in the several AWS articles that are available. For example:
https://aws.amazon.com/premiumsupport/knowledge-center/users-connect-rds-iam
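As a minimal sketch (PostgreSQL on RDS; the user and role names here are placeholders), the IAM-enabled user is created alongside the existing password users, which keep authenticating exactly as before:
postgres=> CREATE USER app_iam_user WITH LOGIN;
postgres=> GRANT rds_iam TO app_iam_user;
app_iam_user then logs in with a short-lived token (e.g. from aws rds generate-db-auth-token) instead of a password; nothing changes for users created with a password.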

There will be no impact.
These kinds of users use different plugins for authentication:
select distinct plugin from mysql.user;
+-------------------------+
| plugin                  |
+-------------------------+
| mysql_native_password   |
| AWSAuthenticationPlugin |
+-------------------------+
2 rows in set (0.00 sec)


connect AAD to existing AKS that has rbac enabled

Working with Azure, we started with AKS last year. On creation of the AKS clusters we use, we checked what needed to be done up front to enable RBAC at a later moment, and we thought that setting 'rbac' to 'enabled' was the only thing we needed.
Now we're trying to implement RBAC integration of AKS with AAD, but I read some seemingly conflicting prerequisites. Some say that in order to integrate AAD and AKS, you need RBAC enabled at cluster creation, and I believe we have set that correctly.
But then the Azure docs mention that you need to create a cluster and add some AAD-integration keys for the client and server applications.
My question is actually two-fold:
When people say you need RBAC enabled in your AKS cluster during creation, do they actually mean you should select the 'rbac:enabled' box AND make sure you create the AAD-related applications up front and also configure these during cluster creation?
Is there a way to set up the AKS-AAD RBAC connection on a cluster that has rbac:enabled but is missing the aadProfile configuration?
I believe we indeed need to re-create all our clusters, but I want to know for sure by asking here, as it's not 100% clear to me from what I've read online (also here at Stack Exchange) and it's going to be an awful lot of work.
For all of your requirements, you only need to make sure RBAC is enabled for your AKS cluster, and that can only be done at creation time. Then you can update the credentials of the existing AKS AAD profile like this:
Before the update, the cluster has no aadProfile configured.
CLI update command:
az aks update-credentials -g yourResourceGroup -n yourAKSCluster --reset-aad --aad-server-app-id appId --aad-server-app-secret appSecret --aad-client-app-id clientId --aad-tenant-id tenantId
After the update, the aadProfile reflects the AAD server and client applications.
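You can inspect the profile with the az CLI (a sketch; the resource group and cluster names are placeholders):
az aks show -g yourResourceGroup -n yourAKSCluster --query aadProfile
An empty aadProfile means the cluster still has no AAD integration configured.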
Yes, that is correct.
No, there is no way of doing that; you need to recreate the cluster.

AWS RDS PostgreSQL access to pg_catalog.pg_authid forbidden in all contexts?

PostgreSQL said: permission denied for relation pg_authid
Is pg_authid just unavailable on AWS RDS in all contexts because RDS locks down the super role? My role created the table, so pg_catalog should be visible by default (and not need to be added to the search path) if I'm reading the psql docs right. I just need SELECT, not create ability.
I haven't been able to find a definitive AWS RDS documentation page that says pg_catalog.pg_authid is not allowed in any context, but I've inherited a documentation project that relies on being able to form queries and joins on the pg_authid table in the DB I just created. I always get the above permission-denied error.
I tried adding a postgres role and granting it to myself, and also explicitly adding the schema to my search path, to no avail.
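For what it's worth, the failure is trivially reproducible (yourdb is a placeholder; the role owns tables in this database):
yourdb=> SELECT rolname, rolpassword FROM pg_catalog.pg_authid;
ERROR: permission denied for relation pg_authid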
The catalog pg_authid contains information about database authorization identifiers (roles). As you might be aware, due to the managed nature of RDS as a service, it is unfortunately not possible to have the full superuser role in RDS.
Since the above is a limitation of RDS, if access to pg_authid is absolutely necessary for your business, I would suggest you look at EC2-hosted Postgres (community Postgres) as an option. The workaround for viewing the contents of pg_authid is to use pg_roles, but there the passwords are masked, so it would not tell you whether a password is encrypted or not.
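For instance, the masked variant looks like this (a sketch; any role can run it):
yourdb=> SELECT rolname, rolsuper, rolpassword FROM pg_roles;
Every row shows rolpassword as ********; only pg_authid carries the real password hashes.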
Kindly note that not all catalogs are restricted from being read on RDS; below is a SQL query which shows the privileges rds_superuser (the master user) has on each catalog.
SELECT relname,
       has_table_privilege('rds_superuser', relname, 'SELECT')   AS select,
       has_table_privilege('rds_superuser', relname, 'UPDATE')   AS update,
       has_table_privilege('rds_superuser', relname, 'INSERT')   AS insert,
       has_table_privilege('rds_superuser', relname, 'TRUNCATE') AS truncate
FROM pg_class c, pg_namespace n
WHERE n.oid = c.relnamespace
  AND n.nspname = 'pg_catalog'
  AND relkind = 'r';

Enable binary logging on Azure MySQL server

I have created an Azure MySQL server (PaaS/DBaaS). Now I have to enable binary logging. I have checked the Azure MySQL server parameters and couldn't find this specific setting. How can I enable this log? Currently log_bin = OFF.
It is not possible on the Azure MySQL service (PaaS/DBaaS). You have to set up MySQL on an Azure VM instead.
In case you are using Azure China, it's possible to enable it:
If your MySQL server is only a master node, you need to ask Azure customer service to enable it; create a ticket with them.
If your MySQL server has a replica node, the binlog parameter should be enabled by default.
Check whether it works:
mysql> show variables like '%log_bin%';
+---------------------------------+------------------------------+
| Variable_name                   | Value                        |
+---------------------------------+------------------------------+
| log_bin                         | ON                           |
| log_bin_basename                | c:\work\binlogs\mysql-bin    |
| log_bin_index                   | c:\mysqldata\mysql-bin.index |
| log_bin_trust_function_creators | OFF                          |
| log_bin_use_v1_row_events       | OFF                          |
| sql_log_bin                     | ON                           |
+---------------------------------+------------------------------+
As I just checked, you can create a temporary replica server in the Azure portal and then delete it.
It seems that Azure changes the master server configuration and enables the binary log for replication purposes, and this setting seems to be permanent, even when you delete all replica servers.
So it is possible to get log_bin=ON: just create a replica of your Azure MySQL server. You can delete the replica once it has been created; log_bin will remain ON.
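The same replica trick can be scripted with the az CLI (a sketch; the server and resource-group names are placeholders, and this assumes the az mysql server command group for the single-server service):
az mysql server replica create -n mymysql-replica -g myResourceGroup -s mymysql
az mysql server delete -n mymysql-replica -g myResourceGroup --yes
Per the observation above, log_bin on the master remains ON even after the replica is deleted.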

Handling run time and build time secrets in AWS CodePipeline

We are dealing with the problem of providing build time and run time secrets to our applications built using AWS CodePipeline and being deployed to ECS.
Ultimately, our vision is to create a generic pipeline for each of our applications that achieves the following goals:
Complete separation of access
The services in the app-a-pipeline CANNOT access any of the credentials or use any of the keys used in the app-b-pipeline, and vice versa
Secret management by assigned developers
Only developers responsible for app-a may read and write secrets for app-a
Here are the issues at hand:
Some of our applications require access to private repositories for dependency resolution at build time
For example, our java applications require access to a private maven repository to successfully build
Some of our applications require database access credentials at runtime
For example, the servlet container running our app requires an .xml configuration file containing credentials to find and access databases
Along with some caveats:
Our codebase resides in a public repository. We do not want to expose secrets by putting either the plaintext or the ciphertext of the secret in our repository
We do not want to bake runtime secrets into our Docker images created in CodeBuild even if ECR access is restricted
The Cloudformation template for the ECS resources and its associated parameter file reside in the public repository in plaintext. This eliminates the possibility of passing runtime secrets to the ECS Cloudformation template through parameters (As far as I understand)
We have considered using tools like credstash to help with managing credentials. This solution requires that both CodeBuild and ECS task instances be able to use the AWS CLI. To avoid shuffling around more credentials, we decided it might be best to assign privileged roles to the instances that require the AWS CLI; that way, the CLI can infer credentials from the role in the instance metadata.
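For reference, the day-to-day credstash flow would look something like this (a sketch; the table and secret names are placeholders):
credstash -t app-a-credentials put maven.password 'S3cr3t'
credstash -t app-a-credentials get maven.password
put encrypts the value with the KMS key and writes it to the DynamoDB table; get needs decrypt permission on the key plus read access to the table, which is exactly the split the roles below are meant to enforce.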
We have tried to devise a way to manage our secrets given these restrictions. For each app, we create a pipeline. Using a Cloudformation template, we create:
4 resources:
DynamoDB credential table
KMS credential key
ECR repo
CodePipeline (Build, deploy, etc)
3 roles:
CodeBuildRole
Read access to DynamoDB credential table
Decrypt permission with KMS key
Write to ECR repo
ECSTaskRole
Read access to DynamoDB credential table
Decrypt permission with KMS key
Read from ECR repo
DeveloperRole
Read and write access to DynamoDB credential table
Encrypt and decrypt permission with KMS key
The CodeBuild step of the CodePipeline assumes the CodeBuildRole, which allows it to read build-time secrets from the credential table. CodeBuild then builds the project and generates a Docker image, which it pushes to ECR. Eventually, the deploy step creates an ECS service using the Cloudformation template and the accompanying parameter file present in the project's public repository. The ECS task definition references the ECSTaskRole, which allows the tasks to read runtime secrets from the credential table and to pull the required image from ECR.
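The task-role wiring itself is a single field on the task definition; registering one might look like this (a sketch; the account ID, family, and image are placeholders):
aws ecs register-task-definition \
  --family app-a \
  --task-role-arn arn:aws:iam::123456789012:role/ECSTaskRole \
  --container-definitions '[{"name":"app-a","image":"123456789012.dkr.ecr.us-east-1.amazonaws.com/app-a:latest","memory":512,"essential":true}]'
Containers launched from this definition pick up ECSTaskRole credentials from the container metadata endpoint.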
Here is a simple diagram of the AWS resources and their relationships as stated above.
Our current proposed solution has the following issues:
Role heavy
Creating roles is a privileged action in our organization. Not all developers who try to create the above pipeline will have permission to create the necessary roles
Manual assumption of DeveloperRole:
As it stands, developers would need to manually assume the DeveloperRole. We toyed with the idea of passing in a list of developer user ARNs as a parameter to the pipeline Cloudformation template. Does Cloudformation have a mechanism to assign a role or policy to a specified user?
Is there a more well established way to pass secrets around in CodePipeline that we might be overlooking, or is this the best we can get?
Three thoughts:
AWS Secret Manager
AWS Parameter Store
IAM roles for Amazon ECS tasks
AWS Secrets Manager helps you protect the secrets needed to access applications, services, and IT resources. With it you can rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle.
AWS Parameter Store can protect access keys with granular access control. This access can be based on ServiceRoles.
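Fetching a SecureString parameter during a build is a one-liner (a sketch; the parameter name is a placeholder):
aws ssm get-parameter --name /app-a/build/maven-password --with-decryption --query Parameter.Value --output text
--with-decryption only succeeds if the calling role is allowed to use the KMS key behind the parameter.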
ECS provides access to the ServiceRole via this pattern:
build:
  commands:
    - curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI | jq 'to_entries | [ .[] | select(.key | (contains("Expiration") or contains("RoleArn")) | not) ] | map(if .key == "AccessKeyId" then . + {"key":"AWS_ACCESS_KEY_ID"} else . end) | map(if .key == "SecretAccessKey" then . + {"key":"AWS_SECRET_ACCESS_KEY"} else . end) | map(if .key == "Token" then . + {"key":"AWS_SESSION_TOKEN"} else . end) | map("export \(.key)=\(.value)") | .[]' -r > /tmp/aws_cred_export.txt
    - chmod +x /tmp/aws_cred_export.txt
    - . /tmp/aws_cred_export.txt && YOUR COMMAND HERE
If the ServiceRole provided to the CodeBuild task has access to use the Parameter Store key, you should be good to go.
Happy hunting, and hope this helps.
At a high level, you can either isolate applications in a single AWS account with granular permissions (this sounds like what you're trying to do) or use multiple AWS accounts. Neither is right or wrong per se, but I tend to favor separate AWS accounts over managing granular permissions, because your starting point then is complete isolation.

Jenkins Slave 403 although Anonymous Slave connect has been enabled

We are using a Jenkins Master and Slave setup (both Linux). We recently upgraded to an LTS version, and for some reason slaves connect to the master only when Anonymous is given Admin privileges.
I have read the posts about granting Anonymous the slave-connect privilege, but I receive a 403 Request Forbidden error when I try that.
The only way around this is to give Anonymous Admin privileges (which is risky), save, then go back to Manage Jenkins > Configure Security, remove Anonymous Admin, and add the slave-connect privilege.
The problem with this workaround is that I get the same 403 error whenever a slave restarts, until I give Anonymous Admin privileges again.
I have tried laying down a new slave.jar; that didn't help.
We are using an LDAP bind account. Is there an easy fix for this 403 issue without having to enter the bind password again (which we recently did after the Jenkins upgrade)?
Nothing like an answer 1.5 years later but I just ran across this!
The way I handled this is with the Role-Based Strategy plugin.
Summary
The basics are:
Add and enable the Role-Based Strategy plugin
Create a global group swarmclient
Grant the swarmclient group the slave privileges only
I currently allow the Anonymous group to be in the swarmclient group.
In the future I will probably deny swarmclient privileges for the Anonymous group and will instead create accounts in the swarmclient group.
Details
In Manage Jenkins > Configure Global Security > Authorization, enable Role-Based strategy.
In Manage Jenkins > Manage Roles > Manage and Define Roles, I added "swarmclient" to the global roles. Give this group Create permission in the slave section of the global settings.
In newer versions of Jenkins the term "Slave" is replaced by "Agents"
Then, in Manage Jenkins > Manage Roles > Assign Roles, add the Anonymous group to the swarmclient group.
And finally, as mentioned above, if you want some restrictions on the machines that can connect as a swarm client, just:
create user(s) for the swarm
add them to the swarmclient group
remove swarmclient permissions (on the Assign Roles page) from the Anonymous group.
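Once that is set up, a node can join with the Swarm plugin client roughly like this (a sketch; the URL, credentials, and node name are placeholders, and exact flags vary by swarm-client version):
java -jar swarm-client.jar -master https://jenkins.example.com -username swarmuser -password s3cret -name build-node-1
While Anonymous still holds the swarmclient role, the -username/-password options can be left off.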
