When I create an RDS secret in the AWS console, it does not seem to work for connecting to the database.
The new credentials do not appear to work (e.g. in pgAdmin).
Additionally, there is no entry in any of these system catalogs:
SELECT * FROM pg_catalog.pg_user
SELECT * FROM pg_catalog.pg_roles
SELECT * FROM pg_catalog.pg_auth_members
What else needs to be done?
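For context, creating a database secret in Secrets Manager only stores a username and password; it does not create the matching PostgreSQL role, which is why nothing shows up in pg_roles. Below is a minimal sketch of closing that gap, assuming the secret holds the usual username/password JSON; the endpoint, secret name and master credentials are placeholders (boto3 and psycopg2):

# Sketch: a Secrets Manager secret only stores credentials; the PostgreSQL
# role it describes still has to be created (here with the master user).
import json

import boto3
import psycopg2
from psycopg2 import sql

sm = boto3.client("secretsmanager", region_name="us-east-1")
secret = json.loads(sm.get_secret_value(SecretId="my-rds-secret")["SecretString"])

conn = psycopg2.connect(
    host="mydb.xxxxxxxx.us-east-1.rds.amazonaws.com",  # placeholder endpoint
    dbname="postgres",
    user="postgres",
    password="<master password>",
)
conn.autocommit = True
with conn.cursor() as cur:
    # Create the role the secret describes so it appears in pg_roles.
    cur.execute(
        sql.SQL("CREATE ROLE {} LOGIN PASSWORD {}").format(
            sql.Identifier(secret["username"]),
            sql.Literal(secret["password"]),
        )
    )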
I am deploying queries to an Azure Synapse serverless SQL pool using sqlcmd.
The environment contains a SQL database that my deploying account has access to.
I first create the credential to access a Cosmos DB account with:
CREATE DATABASE SCOPED CREDENTIAL [mycosmos] WITH IDENTITY = 'SHARED ACCESS SIGNATURE', SECRET = '<accessKeyToCosmosAccount>'
Then I use OPENROWSET with the created credential to retrieve records from the aforementioned Cosmos DB:
SELECT TOP (100) *
FROM OPENROWSET(
    PROVIDER = 'CosmosDB',
    CONNECTION = 'Account=mycosmos;Database=reporting;',
    OBJECT = 'data',
    CREDENTIAL = 'mycosmos'
) AS o;
However, executing the latter gives the following error:
Resolving CosmosDB path has failed with error 'Secret is not base64 encoded.'.
Does anyone have tips or ideas on how to get more information or to understand the issue at hand?
The credentials are indeed created; I checked that with:
SELECT * FROM SYS.database_scoped_credentials
I also tried to base64-encode the secret accessKeyToCosmosAccount using
echo $mysecret | tr -d '\n\r' | base64 -w 0
to no avail (I still get the same error).
Thanks.
I tried to reproduce a similar scenario in my environment and faced the same error.
After researching, I found that the cause of the error is an incorrect secret.
To resolve this, check that your secret/primary key is correct.
To get the key, go to your Cosmos DB account >> Settings >> Keys >> Primary Key.
My code:
CREATE CREDENTIAL MyCosmosDbAccountCredential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE', SECRET = 'primary key from cosmos db';

SELECT TOP 10 *
FROM OPENROWSET(
    PROVIDER = 'CosmosDB',
    CONNECTION = 'Account=cosmosdb_name;Database=databasename',
    OBJECT = 'container2',
    SERVER_CREDENTIAL = 'MyCosmosDbAccountCredential'
) WITH ( id varchar(10), name varchar(10) ) AS rows
Hi, thanks for your quick answer. It turns out it was indeed the secret, which we retrieve with az cosmosdb keys list --name <name> -g <resource-group> --query 'primaryReadonlyMasterKey' -o tsv.
Something goes wrong when we use sqlcmd and pass the secret with -v.
It turns out that passing variables containing an = sign results in sqlcmd trimming the rest (see Escape variable in sqlcmd / Invoke-SqlCmd); a quick way to catch this is sketched below.
I think this issue is less about Synapse than about sqlcmd. I will continue the conversation there.
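Since Cosmos DB account keys are already base64 encoded (and typically end in '=' padding, which may be exactly where sqlcmd was trimming), one quick sanity check is whether the value that finally reaches CREATE DATABASE SCOPED CREDENTIAL still decodes cleanly. A small sketch; the environment variable is just a placeholder for however the value reaches your deployment script:

# Sketch: verify that a Cosmos DB key survived variable substitution intact.
# The key is already base64 encoded, so no re-encoding is needed, and a
# truncated value will usually fail to decode.
import base64
import os

key = os.environ["COSMOS_KEY"]  # placeholder for the value you pass to SECRET
try:
    decoded = base64.b64decode(key, validate=True)
    print(f"OK: {len(key)} characters, decodes to {len(decoded)} bytes")
except Exception as exc:
    print(f"Not valid base64 ({exc}); the value was probably truncated or altered")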
I'm pretty new to Azure (and SQL for that matter). I've been trying to configure Elastic Jobs Agent with a few specific jobs that would run queries against some of my databases on the server.
For right now I am targeting a test database where I want to execute a simple select query. However, I can't create the job step because of the "can't reference the credential" error.
I'm not sure why the error is popping up. I have followed the Use T-SQL to create and manage Elastic Database Jobs article and I created all of the credentials and logins as described there.
The one exception is that the master key already exists, so I didn't create that, and I also did not create a separate server for my agent host DB as suggested in some of the tutorials. My agent host DB sits on the same server as my target databases, but I would not think that would be an issue.
I have successfully created a target group and a target group member which is the specific database on this server that I want to query. I have also created the job I want to use.
The problem happens when I try to run this:
DECLARE @step_id1 INT, @job_version1 INT;
EXEC jobs.sp_add_jobstep
    @job_name = N'Job1',
    @step_id = @step_id1 OUTPUT,
    @step_name = N'Step1',
    @command = N'select * from table',
    @credential_name = N'agentjobuser',
    @target_group_name = N'TestTarget';
I am at a loss here; I have no idea why it says the credential doesn't exist. I am using the SQL Server admin login, so I should definitely have the permissions for it.
I tried to repro this and got the same error.
When the SQL Server username is given as the credential name parameter in sp_add_jobstep, the same error is reproduced:
Cannot reference the credential 'user', because it does not exist or you do not have permission.
When the name of the database scoped credential created for the SQL Server user is given as the value of the @credential_name parameter instead, it executes successfully without error.
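To make the distinction concrete, here is a rough sketch (the connection details, the 'jobcred' credential name and the passwords are placeholders, not taken from the post): in the job database you create a database scoped credential that wraps the SQL login, and it is that credential's name, not the login, that goes into @credential_name.

# Sketch: create a database scoped credential in the job database and
# reference the *credential's* name in jobs.sp_add_jobstep.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=jobagentdb;"
    "UID=sqladmin;PWD=<password>",
    autocommit=True,
)
cur = conn.cursor()

# The credential wraps the SQL login 'agentjobuser' but is itself named 'jobcred'.
cur.execute(
    "CREATE DATABASE SCOPED CREDENTIAL jobcred "
    "WITH IDENTITY = 'agentjobuser', SECRET = '<agentjobuser password>';"
)

# The job step references the credential name, not the login name.
cur.execute(
    "EXEC jobs.sp_add_jobstep "
    "@job_name = N'Job1', @step_name = N'Step1', "
    "@command = N'select * from table', "
    "@credential_name = N'jobcred', "
    "@target_group_name = N'TestTarget';"
)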
I have a PostgreSQL DB at GCP. Right now I can log in using a username and password, e.g.:
import pandas as pd
import pyodbc
conn_str = (
    "DRIVER={PostgreSQL Unicode};"
    "DATABASE=test;"
    "UID=user;"
    "PWD=a_very_strong_password;"
    "SERVER=34.76.yy.xxxx;"
    "PORT=5432;"
)
with pyodbc.connect(conn_str) as con:
    print(pd.read_sql("SELECT * from entries", con=con))
Is there a way to use the .json credentials file that is downloaded when I create my IAM user, instead of hard-coding the credentials like above? I reckon I could use the file to connect to GCP storage, save my DB credentials there, and then write a script which loads the username, password, etc. from storage, but that feels like a rather clunky workaround.
From the guide here it seems like you can create IAM roles for this, but it only grants access for an hour at a time, and you need to create a token pair each time.
Short answer: yes, you can connect to a Cloud SQL instance using SA keys (the JSON file), but only with PostgreSQL, and you need to refresh the token every hour.
Long answer: the purpose of the JSON key is more for operations on the instance at the resource level, or for use with the Cloud SQL proxy.
For example, when you use the Cloud SQL proxy with a service account you create a "magical bridge" to the instance, but in the end you still authenticate the way you're doing right now, just with SERVER = 127.0.0.1. This is the recommended method in most cases.
You also mentioned that IAM authentication can work; this approach only works for one hour at a time because you depend on token refresh. If you're okay with this, just keep in mind you need to keep refreshing the token (a sketch follows below).
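If the hourly token is acceptable, the Cloud SQL Python Connector can hide most of that plumbing. A minimal sketch, assuming IAM database authentication is enabled on the instance, the service account has the required Cloud SQL IAM roles and a matching database user, and the library is installed with pip install "cloud-sql-python-connector[pg8000]"; the instance name, database and user are placeholders:

# Sketch: IAM database authentication with the Cloud SQL Python Connector.
# GOOGLE_APPLICATION_CREDENTIALS should point at the service account key file;
# the connector fetches and refreshes the short-lived tokens for you.
from google.cloud.sql.connector import Connector

connector = Connector()
conn = connector.connect(
    "my-project:europe-west1:my-instance",  # instance connection name (placeholder)
    "pg8000",
    user="my-sa@my-project.iam",            # SA email without .gserviceaccount.com
    db="test",
    enable_iam_auth=True,
)
cur = conn.cursor()
cur.execute("SELECT * FROM entries")
print(cur.fetchall())
conn.close()
connector.close()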
Another approach I can think of for now is to use Secret Manager. The steps can be as follows:
Create a service account and a key for that.
Create a secret which contains your password.
Grant access to this particular secret to the SA created in step 1:
Go to Secret Manager.
Select the secret and click on Show info panel
Click on Add member and type or paste the email of the SA
Grant Secret Manager Secret Accessor
Click on Save
Now in your code you can get the secret content (which is the password) with sample code like this:
import pandas as pd
import pyodbc
from google.cloud import secretmanager
from google.oauth2 import service_account
credentials = service_account.Credentials.from_service_account_file('/path/to/key.json')
client = secretmanager.SecretManagerServiceClient(credentials=credentials)
# Fill in your own project, secret and version identifiers.
project_id, secret_id, version_id = "my-project", "db-password", "latest"
name = f"projects/{project_id}/secrets/{secret_id}/versions/{version_id}"
response = client.access_secret_version(request={"name": name})
secret_password = response.payload.data.decode("UTF-8")
conn_str = (
    "DRIVER={PostgreSQL Unicode};"
    "DATABASE=test;"
    "UID=user;"
    "PWD=" + secret_password + ";"
    "SERVER=34.76.yy.xxxx;"
    "PORT=5432;"
)
with pyodbc.connect(conn_str) as con:
    print(pd.read_sql("SELECT * from entries", con=con))
BTW, you can install the lib using pip install google-cloud-secret-manager.
Finally, you can also use this approach to keep the instance IP, user, DB name, etc., by creating more secrets if you prefer.
I am building a Node.js server to run queries against BigQuery. For security reasons, I want this server to be read only. For example, if I write a query with DROP, INSERT, ALTER, etc. statement my query should get rejected. However, something like SELECT * FROM DATASET.TABLE LIMIT 10 should be allowed.
To solve this problem, I decided to use a service account with "jobUser" level access. According to BQ documentation, that should allow me to run queries, but I shouldn't be able to "modify/delete tables".
So I created such a service account using the Google Cloud Console UI and I pass that file to the BigQuery Client Library (for Node.js) as the keyFilename parameter in the code below.
// Get service account key from .env file
require( 'dotenv' ).config()
const BigQuery = require( '@google-cloud/bigquery' );
// Query goes here
const query = `
SELECT *
FROM \`dataset.table0\`
LIMIT 10
`
// Creates a client
const bigquery = new BigQuery({
    projectId: process.env.BQ_PROJECT,
    keyFilename: process.env.BQ_SERVICE_ACCOUNT
});
// Use standard sql
const query_options = {
    query : query,
    useLegacySql : false,
    useQueryCache : false
}
// Run query and log results
bigquery
    .query( query_options )
    .then( console.log )
    .catch( console.log )
I then ran the above code against my test dataset/table in BigQuery. However, running this code results in the following error message (FYI: exemplary-city-194015 is my project ID for my test account):
{ ApiError: Access Denied: Project exemplary-city-194015: The user test-bq-jobuser#exemplary-city-194015.iam.gserviceaccount.com does not have bigquery.jobs.create permission in project exemplary-city-194015.
What is strange is that my service account (test-bq-jobuser#exemplary-city-194015.iam.gserviceaccount.com) has the 'Job User' role and the Job User role does contain the bigquery.jobs.create permission. So that error message doesn't make sense.
In fact, I tested out all possible access control levels (dataViewer, dataEditor, ..., admin) and I get error messages for every role except the "admin" role. So either my service account isn't correctly configured or @google-cloud/bigquery has some bug. I don't want to use a service account with 'admin' level access because that allows me to run DROP TABLE-esque queries.
Solution:
I created a service account and assigned it a custom role with bigquery.jobs.create and bigquery.tables.getData permissions. And that seemed to work. I can run basic SELECT queries but DROP TABLE and other write operations fail, which is what I want.
As the error message shows, your service account doesn't have permission to create BigQuery jobs.
You need to grant it roles/bigquery.user or roles/bigquery.jobUser access; see BigQuery Access Control Roles. As that reference shows, dataViewer and dataEditor cannot create jobs/queries, while admin can, but you don't need that.
To grant the required roles, you can follow the instructions in Granting Access to a Service Account for a Resource.
From command line using gcloud, run
gcloud projects add-iam-policy-binding $BQ_PROJECT \
    --member serviceAccount:$SERVICE_ACCOUNT_EMAIL \
    --role roles/bigquery.user
Where BQ_PROJECT is your project ID and SERVICE_ACCOUNT_EMAIL is your service account email/ID.
Or, from the Google Cloud Platform console, search for or add your service account email/ID and give it the required role.
I solved my own problem. To run queries you need both the bigquery.jobs.create and bigquery.tables.getData permissions. The Job User role has the former but not the latter. I created a custom role with both permissions, assigned my service account to that custom role, and now it works. I did this using the Google Cloud Console UI ( IAM -> Roles -> +Add ) and then ( IAM -> IAM -> set the service account to the custom role ).
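For anyone verifying the same setup from Python instead of Node.js, a rough equivalent using the google-cloud-bigquery package (the key file path, dataset and table names are placeholders):

# Sketch: a read-only query with a key for a service account bound to a custom
# role that has only bigquery.jobs.create and bigquery.tables.getData.
from google.cloud import bigquery

client = bigquery.Client.from_service_account_json("bq-readonly-key.json")

# SELECT works with those two permissions; DDL/DML such as DROP TABLE or
# INSERT should fail with a permission error.
rows = client.query("SELECT * FROM `dataset.table0` LIMIT 10").result()
for row in rows:
    print(dict(row))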
I have set up an Azure Mobile Service (AMS) that's associated with an Azure SQL database, as usual. However, when I try to use a custom api to query another table (NOT a mobile services table) with the custom API mssql object, I get a permissions error:
Error: [Microsoft][SQL Server Native Client 10.0][SQL Server]Login failed for user 'zwxABOesblahblahHYzLogin'.
Some things to note:
I had to drop the database and then re-create it with the same name after the mobile service was created.
The mobile service name is 'abc', and the table I'm trying to access is owned by an 'abc' schema, not dbo. The other table was created from SQL Server Management Studio via a standard T-SQL script.
The AMS api script is very basic:
exports.get = function(request, response) {
    var mssql = request.service.mssql;
    var sql = "select * from abc.TestTable";
    mssql.query(sql, {
        success : function(results) {
            console.log("Results from SQL Query to TestTable:\n"+results);
            response.send(statusCodes.OK, results);
        },
        error: function(err) {
            console.log("Error in SQL Query to TestTable:\n"+err);
            response.send(statusCodes.Error,err.message);
        }
    });
};
So to my question(s): what credentials are used by AMS to access the SQL database? How can I change permissions so that the script above just works (as implied by all the docs I've seen)? Or am I stuck with having to pass a connection string, as suggested by this question?
Thanks!
When you create a Mobile Service it generates the SQL Database backend, or connects to an existing SQL database. When it does this it creates a SQL login with a random name. In your case the user was 'zwxABOesblahblahHYzLogin'. When you dropped and recreated your database, you lost this user's access to the database (which I think you already knew).
To determine the permissions that were assigned to the created user, I created a new Mobile Service and then used SQL Management Studio to script the entire database (I modified the scripting options to ensure the permissions would be included in the script). I then trimmed it down to just what pertained to the user and the schema. If you have already recreated your schema, you can skip that part.
CREATE USER [zwxABOesblahblahHYzLogin] FOR LOGIN [zwxABOesblahblahHYzLogin] WITH DEFAULT_SCHEMA=[abc]
GO
GRANT CONNECT TO [zwxABOesblahblahHYzLogin] AS [dbo]
GRANT CREATE TABLE TO [zwxABOesblahblahHYzLogin] AS [dbo]
CREATE SCHEMA [abc]
GRANT CONTROL ON SCHEMA::[abc] TO [zwxABOesblahblahHYzLogin] AS [dbo]
From this it looks like the AMS user is granted a login in the database, CONNECT permission, CREATE TABLE permission, and control of the schema, granted as dbo.
I tested this by dropping a mobile service and then recreating it, which I think puts us in the same scenario.
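Following on from that: after the database is dropped and re-created, the server-level login typically survives but the database user and its grants are gone, so re-running a script like the one above restores access. A rough pyodbc sketch for checking what is missing first (the server, database, admin login and the 'zwx' prefix are placeholders for your own values):

# Sketch: check whether the generated AMS login still has a user in the
# recreated database; if not, re-run the CREATE USER / GRANT script above.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=abc_db;"
    "UID=dbadmin;PWD=<password>"
)
cur = conn.cursor()
cur.execute("SELECT name, type_desc FROM sys.database_principals WHERE name LIKE 'zwx%'")
for name, type_desc in cur.fetchall():
    print(name, type_desc)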