I've set up an AWS Glue connection to an RDS database (in the same account and region). When running the test connection I get the following error:
rds-prod-snapshot test connection failed. For more information see the logs
Following the link to CloudWatch I get the error
There was an error getting log events.
The specified log stream does not exist.
The role has IAM permissions for CloudWatch logs
I followed the troubleshooting doc to get this far: https://aws.amazon.com/premiumsupport/knowledge-center/glue-test-connection-failed/ (in fact I had already resolved most of those items when setting up the connection in the first place).
It turns out the AWSGlueServiceRole managed policy was not attached to the role; attaching it fixed the test connection.
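For reference, attaching that managed policy with the AWS CLI looks like this (my-glue-role is a placeholder for your actual Glue role name):
aws iam attach-role-policy --role-name my-glue-role --policy-arn arn:aws:iam::aws:policy/service-role/AWSGlueServiceRole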
I'm trying to create an Azure Synapse Link for Azure SQL Database, using the steps from here:
https://learn.microsoft.com/en-us/azure/synapse-analytics/synapse-link/connect-synapse-link-sql-database
After I create the link connection and try to start it, I receive the following error:
The connection to the sink database is failed. Detailed error message is: Login failed for user ''.
(screenshots: ConnectionToAzureDB, LinkConnection)
Also, I have configured the Azure SQL database to use Azure AD auth. The connection to the Azure database seems to be working.
My user (used to create the Synapse workspace) is Subscription Owner. The user is also the owner of the storage account.
I added the SQL Managed Identity as Storage Blob Data Contributor
Did anyone else get this error and manage to fix it?
There are certain limitations when connecting SQL Database to Synapse Link, as per the documentation:
When setting up your workspace, you must select "Disable Managed Virtual Network" and "Allow connections from all IP addresses."
Azure Synapse Link for SQL cannot enable a link connection if the database owner does not have a mapped login; this causes an error. The ALTER AUTHORIZATION command can be used to work around the problem by changing the database owner to a user with a mapped login (see the sketch after this list).
Azure Synapse Link for SQL is not supported on the Free, Basic, or Standard tiers with fewer than 100 DTUs.
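A minimal sketch of the ALTER AUTHORIZATION workaround mentioned above; [YourDatabase] is a placeholder, and [your_login] stands for a login that actually exists on the server:
ALTER AUTHORIZATION ON DATABASE::[YourDatabase] TO [your_login];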
With these limitations addressed, I tried to connect SQL Database to Synapse Link and was able to connect without error.
I was trying to create a Synapse Link service with an on-premises SQL Server and got the following error:
Failed to enable Synapse Link on the source due to 'Failed to enable the source database: Some internal error happened due to 'Calling internal service failed: Failed to execute non query on change publisher with status code 400 and error Fail to non-query change publisher with error: 'sqlErrorCode - 22301; exceptionCode - TransferServiceUnknowError; error - A database operation failed with the following error: 'Could not update the metadata. The failure occurred when executing the command '(null)'. The error/state returned was 15517/1: 'Cannot execute as the database principal because the principal "dbo" does not exist, this type of principal cannot be impersonated, or you do not have permission.'. Use the action and error to determine the cause of the failure and resubmit the request.'; detailedError - A database operation failed with the following error: 'Could not update the metadata. The failure occurred when executing the command '(null)'. The error/state returned was 15517/1: 'Cannot execute as the database principal because the principal "dbo" does not exist, this type of principal cannot be impersonated, or you do not have permission.'. Use the action and error to determine the cause of the failure and resubmit the request.'
I resolved it by changing the corresponding database owner to 'sa', and it works:
USE [YourCorrespondingDatabase]
EXEC sp_changedbowner 'sa'
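Note that sp_changedbowner is deprecated; the equivalent with current T-SQL syntax (same placeholder database name) would be:
ALTER AUTHORIZATION ON DATABASE::[YourCorrespondingDatabase] TO [sa];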
Currently attempting to connect to Neptune from a Node.js Lambda.
The code works up to the point of getUrlAndHeaders in both libraries: I get a response back and a connection is created. However, on attempting an insert/select, I get a 403.
There is a policy attached to the execution role, with either "neptune-db:*" or "neptune-db:connect", but neither works.
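For reference, a Neptune IAM auth policy of the shape described in the IAM Auth Policy doc linked below; the region, account ID, and cluster resource ID are placeholders:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "neptune-db:*",
      "Resource": "arn:aws:neptune-db:us-east-1:123456789012:cluster-ABCDEFGHIJKLMNOP/*"
    }
  ]
}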
All the same subnets are being used as a temporary measure
The docs mention Neptune lives in EC2 instances, but I am not seeing any reference to them.
Confirmed that there are policies attached to said execution role for ec2:CreateNetworkInterface, ec2:DescribeNetworkInterfaces, and ec2:DeleteNetworkInterface.
What am I missing? I am working on testing other things in the process, but not gaining any traction.
Documentation:
AWS Neptune - IAM Auth Policy
AWS Neptune - Temp Credentials
Code being used/modeled after:
AWS Lambda Examples
gremlin-aws-sigv4
In Progress:
AWSLambdaVPCAccessExecutionRole - SF
@aws-sdk/client-neptune - NPM
We have an Elasticsearch domain created in one of our AWS accounts.
We are trying to use the AWS CLI to "describe" this domain:
aws es describe-elasticsearch-domain --domain-name <domain-name>
But we receive the error:
An error occurred (ResourceNotFoundException) when calling the
DescribeElasticsearchDomain operation: Domain not found:
We then used the list-domain-names command:
aws es list-domain-names
But received an empty response:
{
    "DomainNames": []
}
We double-checked the account info and credentials in the .aws folder: we are pointing to the correct AWS account, and we are able to view other resources in that account, except Elasticsearch.
Any help is appreciated.
It's not a permissions issue. It could be a profile issue (the command running against another account), but most likely your Elasticsearch cluster is in a different region than the one set in aws configure.
All you need is to pass the region to the aws command:
aws es list-domain-names --region DOMAIN_REGION
or
aws es list-domain-names --region us-west-1
The exception clearly says the resource was not found in the default region specified via aws configure.
aws es describe-elasticsearch-domain --domain-name your_domain --region your_domain_region
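To confirm which account and default region the CLI is actually using, two standard commands help:
aws sts get-caller-identity
aws configure get region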
I'm trying to report Node.js errors to Google Error Reporting from one of our Kubernetes deployments running on a GCP/GKE cluster with RBAC (i.e., permissions defined in a service account associated with the cluster):
const googleCloud = require('@google-cloud/error-reporting');
const googleCloudErrorReporting = new googleCloud.ErrorReporting();
googleCloudErrorReporting.report('[test] dummy error message');
This works only in certain environments:
it works when run on my laptop, using a service account that has the "Errors Writer" role
it works when running in my cluster as a K8S job, after having added the "Errors Writer" role to that cluster's service account
it causes the following error when called from my Node.js application running in one of my K8S deployments:
ERROR:@google-cloud/error-reporting: Encountered an error while attempting to transmit an error to the Stackdriver Error Reporting API.
Error: Request had insufficient authentication scopes.
It feels like the job did pick up the permission changes of the cluster's service account, whereas my deployment did not.
I did try to re-create the deployment to make it refresh its auth token, but the error is still happening...
Any ideas?
UPDATE: I ended up following Jérémie Girault's suggestion: create a service account and bind it to my deployment. It works!
The error message has to do with the access scopes set on the cluster when using the default service account. You must enable access to the appropriate API.
As you mentioned, creating a separate service account, providing it the appropriate IAM permissions and linking it to your cluster or workload will bypass this error as well.
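A sketch of that setup with gcloud; my-project and error-reporter are placeholder names:
gcloud iam service-accounts create error-reporter --project my-project
gcloud projects add-iam-policy-binding my-project --member="serviceAccount:error-reporter@my-project.iam.gserviceaccount.com" --role="roles/errorreporting.writer"
The service account's credentials (via a key or a Workload Identity binding) then need to be made available to the deployment's pods so the client library can pick them up.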
I have an Azure WebJob and when it picks up a message from the queue I get the error
"Function had errors. See Azure WebJobs SDK dashboard for details".
The dashboard shows no errors; it just insists my connection strings are not set, which they must be, or else it wouldn't pick up the message.
When debugging locally how can I find out what the actual error is?
You can debug locally by explicitly setting the Dashboard connection string to null in the JobHostConfiguration object. That will make the host show all errors in the console rather than the dashboard.
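A minimal sketch, assuming the WebJobs SDK 2.x JobHostConfiguration API the answer refers to:
var config = new JobHostConfiguration();
// a null dashboard connection string makes the host log errors to the console
config.DashboardConnectionString = null;
var host = new JobHost(config);
host.RunAndBlock();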
What connection strings do you set (their names)?