I'm new to AWS and I'm having trouble connecting to an Aurora (Postgres compatible) database that I've created.
I can connect to it via the AWS CLI using the following command:
aws rds-data execute-statement --resource-arn "<my rds cluster ARN>" --database "<My database>" --secret-arn "<My secret in the secret manager>" --sql "select count(*) from information_schema.tables" --profile <my profile>
and this returns a result of 175, which is correct (the same result I get if I run this in the query editor tool in the AWS console).
I then put together a little C# console app and referenced the Npgsql.EntityFrameworkCore.PostgreSQL NuGet package as suggested in the AWS docs. When I run it on my PC I get a connection timeout exception.
using Npgsql;

var connectionString = "Server=<My db cluster>; Database=<database name>; User ID=<my user id>; Password=<my password>; Port=5432";
using (var connection = new NpgsqlConnection(connectionString))
{
    connection.Open(); // the timeout is thrown here, before any SQL runs
    var sql = "select count(*) from information_schema.tables";
    using (var command = new NpgsqlCommand(sql, connection))
    {
        return command.ExecuteScalar().ToString();
    }
}
When trying to troubleshoot the problem, lots of AWS docs & videos suggest setting the Public accessibility property to true by selecting the DB in the RDS console and choosing Modify.
The only problem is, when I choose to modify the DB, that panel is not visible. The AWS docs & videos do go on to talk about changing inbound rules and whatnot in the VPC and subnets, but not before they've changed the Public accessibility property.
Can anyone advise please?
In case this helps a future reader, and as I stated in my comment: the DB in question was created with a capacity type of "Serverless" instead of "Provisioned". Serverless Aurora DBs have all sorts of different characteristics: docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/… One of those characteristics is that they can't be set to publicly accessible.
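If the cluster needs to stay serverless (and therefore private), one option is to query it from .NET through the RDS Data API, which is the same API the working aws rds-data CLI call above uses. This is not from the original answer, just a rough sketch (inside an async method) assuming the AWSSDK.RDSDataService NuGet package and the same ARNs as the CLI command:

using System;
using Amazon.RDSDataService;
using Amazon.RDSDataService.Model;

// Goes over the Data API (HTTPS), so the cluster does not need to be publicly accessible
var client = new AmazonRDSDataServiceClient();
var response = await client.ExecuteStatementAsync(new ExecuteStatementRequest
{
    ResourceArn = "<my rds cluster ARN>",
    SecretArn = "<My secret in the secret manager>",
    Database = "<My database>",
    Sql = "select count(*) from information_schema.tables"
});
Console.WriteLine(response.Records[0][0].LongValue); // 175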
According to this doc:
You can require that all user connections to your Aurora MySQL DB cluster use SSL/TLS by using the require_secure_transport DB cluster parameter.
I've been looking through the Terraform docs and samples and can't see whether this setting is available.
Does Terraform have a way to set arbitrary values if they aren't supported as module parameters?
@ethrbunny If my guess is right, you are trying to enforce SSL/TLS connections for a MySQL DB cluster in AWS Aurora using Terraform? For this you first need to create a custom DB cluster parameter group in Aurora and specify that group name in your Terraform module.
Reference:
Terraform Registry (look for the 'db_cluster_parameter_group_name' argument)
Terraform AWS Aurora GitHub link
As per the AWS documentation, we can set this parameter in a custom DB cluster parameter group. The parameter isn't available in DB instance parameter groups.
Reference: AWS Aurora documentation, see the 'Notes' section
The require_secure_transport parameter is only available for Aurora MySQL version 5.7. You can set this parameter in a custom DB cluster parameter group. The parameter isn't available in DB instance parameter groups.
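If it helps, here is a minimal Terraform sketch of that approach; the resource names and the aurora-mysql5.7 family are assumptions to adapt to your own cluster:

resource "aws_rds_cluster_parameter_group" "require_ssl" {
  name   = "aurora-require-ssl"   # hypothetical name
  family = "aurora-mysql5.7"      # must match your cluster's engine and version

  # Force all client connections to use SSL/TLS
  parameter {
    name  = "require_secure_transport"
    value = "ON"
  }
}

resource "aws_rds_cluster" "example" {
  # ... your existing cluster configuration ...
  db_cluster_parameter_group_name = aws_rds_cluster_parameter_group.require_ssl.name
}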
I'd like to restore an Aurora PostgreSQL cluster from a snapshot. The cluster was created using the AWS CDK rds.DatabaseCluster construct.
The CDK version is 1.139.0 using TypeScript, and the DB cluster is PostgreSQL 11.9.
For an existing cluster created with rds.DatabaseCluster there doesn't appear to be a way to specify a snapshot should you wish to trigger a snapshot restore of the cluster through CDK.
In the past I have restored clusters that were deployed using CloudFormation (CF) by adding the snapshotIdentifier to the AWS::RDS::DBCluster resource in the CF template. This property can be seen in the CDK CfnDBCluster & CfnDBInstance resources.
I'm aware of the rds.DatabaseClusterFromSnapshot construct, which offers the ability to create the database (and restore?) by specifying a snapshot name. However, as mentioned, the database cluster that I'd like to restore has already been created and is associated in CDK with the rds.DatabaseCluster construct.
I'd rather not restore the database cluster outside of CDK (using the console/CLI), as the new cluster this results in would not be associated with the CDK stack.
Is it possible to perform a snapshot restore of an RDS Aurora PostgreSQL cluster in any way purely from within the existing CDK stack/code? Specifically, if the cluster was created using the rds.DatabaseCluster construct?
Thank you
You can access the underlying CloudFormation resource as an L1 construct.
const cluster = new rds.DatabaseCluster(this, 'Database', {
  engine: rds.DatabaseClusterEngine.AURORA,
  instanceProps: { vpc },
});

// Reach down to the underlying CloudFormation (L1) resource behind the L2 construct
const cfnCluster = cluster.node.defaultChild as rds.CfnDBCluster;
cfnCluster.snapshotIdentifier = "arn:snapshot";
// A cluster restored from a snapshot must not specify master credentials
cfnCluster.masterUsername = undefined;
cfnCluster.masterUserPassword = undefined;
Updating this value would terminate your cluster and create a new one to replace it.
Parameter documentation: https://docs.aws.amazon.com/cdk/api/v1/docs/#aws-cdk_aws-rds.CfnDBCluster.html#snapshotidentifier
Edit: Updated to set masterUsername and masterUserPassword to undefined
I developed a cron-triggered Azure Function that needs to look up some data in my database.
Locally I can connect to SQL Server, so I changed the connection string in local.settings.json to point to Azure SQL and published the function, but the function can't connect to the database.
Do I need to do something more than configure local.settings.json?
The local.settings.json file is only used for local testing. It isn't even published to Azure.
You need to create a connection string in your application settings.
In Azure Functions - click Platform features and then Configuration.
Set the connection string
A function app hosts the execution of your functions in Azure. As a best security practice, store connection strings and other secrets in your function app settings. Using application settings prevents accidental disclosure of the connection string with your code. You can access app settings for your function app right from Visual Studio.
You must have previously published your app to Azure. If you haven't already done so, Publish your function app to Azure.
In Solution Explorer, right-click the function app project and choose Publish > Manage application settings.... Select Add setting, in New app setting name, type sqldb_connection, and select OK.
(Screenshot: application settings for the function app.)
In the new sqldb_connection setting, paste the connection string you copied in the previous section into the Local field and replace {your_username} and {your_password} placeholders with real values. Select Insert value from local to copy the updated value into the Remote field, and then select OK.
(Screenshot: add SQL connection string setting.)
The connection strings are stored encrypted in Azure (Remote). To prevent leaking secrets, the local.settings.json project file (Local) should be excluded from source control, such as by using a .gitignore file.
https://learn.microsoft.com/en-us/azure/azure-functions/functions-scenario-database-table-cleanup
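Inside the function itself you then read that app setting at runtime instead of relying on local.settings.json. A minimal sketch, using the sqldb_connection setting name from the tutorial above:

using System;
using System.Data.SqlClient;

// App settings are exposed to the function as environment variables;
// locally this falls back to the values in local.settings.json
var connectionString = Environment.GetEnvironmentVariable("sqldb_connection");
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    // ... run your query ...
}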
If you are using Entity Framework Core to make the connection, another way of connecting to SQL is by using dependency injection from the .NET Core library.
You can keep the connection string in Azure Key Vault or in a config file and read it from there in an Azure Functions startup class, which needs the below code setup in your function app.
using System.Diagnostics.Contracts;
using System.IO;
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

[assembly: FunctionsStartup(typeof(TEST.Startup))]
namespace TEST
{
    internal class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            Contract.Requires(builder != null);
            builder.Services.AddHttpClient();

            // Build configuration from local.settings.json (local runs) and Key Vault
            var configBuilder = new ConfigurationBuilder()
                .SetBasePath(Directory.GetCurrentDirectory())
                .AddJsonFile("local.settings.json", optional: true, reloadOnChange: true)
                .AddAzureKeyVault($"https://XYZkv.vault.azure.net/");
            var configuration = configBuilder.Build();

            // Read the connection string and register the DbContext with it
            var conn = configuration["connectionString"];
            builder.Services.AddDbContext<yourDBContext>(
                options => options.UseSqlServer(conn));
        }
    }
}
After that, wherever you inject this DbContext, you can use the context object to do all CRUD operations by following Microsoft's Entity Framework Core documentation.
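For example, a function class can take the DbContext through constructor injection. A rough sketch; CleanupFunction, the timer schedule, and the SomeEntities DbSet are placeholders to adapt to your own yourDBContext:

using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Logging;

public class CleanupFunction
{
    private readonly yourDBContext _context;

    // yourDBContext is resolved from the container configured in Startup above
    public CleanupFunction(yourDBContext context)
    {
        _context = context;
    }

    [FunctionName("CleanupFunction")]
    public async Task Run([TimerTrigger("0 0 * * * *")] TimerInfo timer, ILogger log)
    {
        // Placeholder query against a DbSet on your context
        var rows = await _context.SomeEntities.CountAsync();
        log.LogInformation($"Row count: {rows}");
    }
}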
Having just dealt with this beast (using a custom handler on Linux), I believe the simple way is to upgrade your app to a Premium plan, allowing you to access the "Networking" page from "App Service plans". This should allow you to put both the SQL server and the app in the same virtual network, which probably makes it easier. (But what do I know?)
Instead, if you don't have the extra cash lying around, you can try what I did: set up a private endpoint and use the proxy connection setting for your database:
Create a virtual network
I used Address space: 10.1.0.0/16 (default I think)
Add subnet 10.1.0.0/24 with any name (adding a subnet is required)
Go to "Private link center" and create a private endpoint.
any name, resource-group you fancy
use resource type "Microsoft.Sql/Server" and you should be able to select your sql-server (which I assume you have created already) and also set target sub-resource to "sqlServer" (the only option)
In the next step your virtual network and subnet should be auto-selected
set Private DNS integration to yes (or suffer later).
Update your firewall by going to Sql Databases, select your database and click "Set Server Firewall" from the overview tab.
Set Connection Policy to proxy. (You either do this, or upgrade to premium!)
Add existing virtual network (rule with any name)
Whitelist IPs
There probably is some other way, but the azure-cli makes it easy to get all possible IPs your app might use: az functionapp show --resource-group <group_name> --name <app_name> --query possibleOutboundIpAddresses
https://learn.microsoft.com/en-us/azure/app-service/overview-inbound-outbound-ips
whitelist them all! (copy-paste exercise; a sample az command is shown after these steps)
Find your FQDN from Private link center > Private Endpoints > DNS Configuration. It's probably something like yourdb.privatelink.database.windows.net
Update your app to use this url. You just update your sql server connection string and replace the domain, for example as ADO string: Server=tcp:yourdb.privatelink.database.windows.net,1433;Initial Catalog=somedbname;Persist Security Info=False;User ID=someuser;Password=abc123;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;
Also note that at some point during all of this I switched to TrustServerCertificate=True, and now I can't be bothered to figure out whether it makes a difference or not. So I leave it as an exercise for the reader to find out.
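For the whitelisting step above, the Azure CLI can create the firewall rules; something like this, run once per outbound IP (the rule name and placeholder values are mine):

az sql server firewall-rule create --resource-group <group_name> --server <sql_server_name> --name allow-functionapp-ip1 --start-ip-address <outbound_ip> --end-ip-address <outbound_ip>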
So what have we done here?
We have forced your function app to go outside the "Azure sphere" by connecting through the private endpoint. I think that if you bounce between Azure services directly, you'll need some sort of authentication (like logging in to your DB using AD), and in my case, using a custom handler and a Linux base for my app, I think that means you need some trust negotiation (Kerberos, perhaps?). I couldn't figure that out, so I came up with this instead.
PostgreSQL said: permission denied for relation pg_authid
Is pg_authid just unavailable on AWS RDS in all contexts because of RDS locking down the superuser role? My role created the table, so pg_catalog should come by default (and not need to be added to the search path) if I'm reading the psql docs right. I just need SELECT, not create ability.
I haven't been able to find a definitive AWS RDS documentation page that says pg_catalog.pg_authid is not allowed in any context, but I've inherited a documentation project that relies on being able to form queries and joins on the pg_authid table in the DB I just created. I always get the permission denied error above.
I tried adding a postgres role and granting it to myself, and also explicitly adding the db to my search path, to no avail.
The catalog pg_authid contains information about database authorization identifiers (roles). As you might be aware, due to the managed nature of RDS as a service, it is unfortunately not possible to have the full superuser role in RDS.
Unfortunately, as the above is a limitation of RDS, if access to 'pg_authid' is absolutely necessary for your business, I would suggest looking at EC2-hosted Postgres (community Postgres) as an option. The workaround for viewing the contents of 'pg_authid' is to use 'pg_roles', but the passwords are masked and it won't tell you whether they are encrypted or not.
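For example, a query like the one below returns the same role attributes from pg_roles, with rolpassword always masked:

SELECT rolname, rolsuper, rolcreaterole, rolcreatedb, rolcanlogin, rolpassword
FROM pg_roles;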
Kindly note that not all catalogs are restricted from being read on RDS; below is a SQL query which shows the privileges the rds_superuser/master user has on each catalog.
SELECT relname,
       has_table_privilege('rds_superuser', relname, 'SELECT')   AS select,
       has_table_privilege('rds_superuser', relname, 'UPDATE')   AS update,
       has_table_privilege('rds_superuser', relname, 'INSERT')   AS insert,
       has_table_privilege('rds_superuser', relname, 'TRUNCATE') AS truncate
FROM pg_class c, pg_namespace n
WHERE n.oid = c.relnamespace
  AND n.nspname IN ('pg_catalog')
  AND relkind = 'r';
I'm trying to create a table in a local instance of DynamoDB using PowerShell cmdlets. In the VS AWS Explorer I created a DDB instance and bound it to port 10000. Right after that, a new DB file was created whose name is KEYID_us-east-1.db.
In the PS script, I set up the AWS context so the table would be created in the eu-central-1 region. Despite this, the new table is created in the us-east-1 DB, so the PS cmdlet ignored my region setting and used the default one.
Meanwhile, when I specify a different region in NodeJS, but the same endpoint that I use in the PS script, after accessing the DB a new DB file appears with the region that I specified.
Why does this happen?
Please refer to the "Notes" section at the link below. It looks like the local DynamoDB instance uses the region only to name the database file. The local instance does not use the region in the same way as the remote AWS DynamoDB service.
"Bullet point 2 - The values that you supply for the AWS access key and the Region are only used to name the database file."
https://aws.amazon.com/blogs/aws/dynamodb-local-for-desktop-development/
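For reference, the NodeJS behaviour described in the question comes from passing both region and endpoint to the client; with DynamoDB Local, the region (and access key) only determine the name of the .db file. A sketch using the aws-sdk v2 package (the port matches the local instance above):

const AWS = require('aws-sdk');

// Same local endpoint as the PowerShell script, but an explicit region:
// DynamoDB Local will keep the tables in a file named <ACCESSKEY>_eu-central-1.db
const dynamodb = new AWS.DynamoDB({
  region: 'eu-central-1',
  endpoint: 'http://localhost:10000',
});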