Unable to copy RDS parameter group across regions - aws-cli

I am using the RDS command line tool from here and am having trouble copying a parameter group to a different region. Running rds-copy-db-parameter-group fails with the following error:
rds-copy-db-parameter-group: Could not find the resource you requested: DB ParameterGroup not found, not allowed to do cross region copy.
The command I am using is:
rds-copy-db-parameter-group arn:aws:rds:ap-southeast-1:myAccntId:pg:myParamGroup-utf8mb4 -t copyOfMyParam -td testcopy
I'm pretty sure the ARN is correct and the parameter group does exist. Is this a problem with the tool or with AWS? Is anyone else encountering a similar issue?

I ran into this same issue recently and opened a support ticket with AWS. The response I got was that the RDS team had added this feature to the documentation but had not yet built the actual support for it.

This bothered me a lot and ate up a couple of hours, so I put this simple script together. There's loads of room for improvement, so please share if you improve upon it or find issues!
https://gist.github.com/phill-tornroth/f0ef50f9402c7c94cbafd8c94bbec9c9
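In case the gist link goes stale: the core of the workaround is just to read the parameters out of the source region and re-apply them in the target region. Here is a minimal boto3 sketch of that idea (it is not the gist itself; the regions and group names are placeholders based on the question):

# Sketch: copy an RDS DB parameter group across regions by reading the
# user-modified parameters in the source region and re-applying them in the
# target region. Regions and group names below are placeholders.
import boto3

SOURCE_REGION = "ap-southeast-1"
TARGET_REGION = "us-east-1"
SOURCE_GROUP = "myParamGroup-utf8mb4"
TARGET_GROUP = "copyOfMyParam"

src = boto3.client("rds", region_name=SOURCE_REGION)
dst = boto3.client("rds", region_name=TARGET_REGION)

# Reuse the source group's parameter group family for the copy.
family = src.describe_db_parameter_groups(
    DBParameterGroupName=SOURCE_GROUP
)["DBParameterGroups"][0]["DBParameterGroupFamily"]

dst.create_db_parameter_group(
    DBParameterGroupName=TARGET_GROUP,
    DBParameterGroupFamily=family,
    Description="testcopy",
)

# Collect the explicitly set (user-sourced), modifiable parameters.
params = []
paginator = src.get_paginator("describe_db_parameters")
for page in paginator.paginate(DBParameterGroupName=SOURCE_GROUP, Source="user"):
    for p in page["Parameters"]:
        if p.get("IsModifiable") and p.get("ParameterValue") is not None:
            params.append({
                "ParameterName": p["ParameterName"],
                "ParameterValue": p["ParameterValue"],
                "ApplyMethod": "pending-reboot",
            })

# modify_db_parameter_group accepts at most 20 parameters per call.
for i in range(0, len(params), 20):
    dst.modify_db_parameter_group(
        DBParameterGroupName=TARGET_GROUP,
        Parameters=params[i:i + 20],
    )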

Related

Why doesn't Compute VM COS metadata get carried over to "Equivalent command line"?

I'm deploying a container to a Container-Optimized OS (COS) VM on Google Compute Engine.
I want to specify Logging and Monitoring for the VM. There are two ways to do this: specify metadata flags, or mark the checkboxes in the console.
But when I then click on "Equivalent command line", there's no indication of these options.
Am I just misinterpreting something here or am I not allowed to specify these flags in the command?
I tried with a non-COS VM instance and the expected metadata flag showed up in the equivalent command. But it does not show up in the COS command:
gcloud compute instances create instance-1 \
...
--metadata=MY_TEST_FLAG=test_value
Yes, this issue occurs when creating a VM with a Container-Optimized OS image, but it affects only the command-line output; the REST equivalent is generated properly. As a workaround, you can add the metadata flag to the generated command as shown below.
--metadata=google-logging-enabled=true,google-monitoring-enabled=true
I have raised a request about this issue. Please monitor the Google Public Issue Tracker for further updates on the fix.
If you find similar issues in the future, you can report them to Google using "Report issues and request features with issue trackers".
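For illustration, with the workaround applied the generated command ends up looking something like this (the instance name and container image are placeholders, and I'm assuming the console produced a create-with-container command):
gcloud compute instances create-with-container instance-1 \
    --container-image=gcr.io/my-project/my-image \
    --metadata=google-logging-enabled=true,google-monitoring-enabled=true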

How to copy local MLflow run to remote tracking server?

I am currently tracking my MLflow runs to a local file path URI. I would also like to set up a remote tracking server to share with my collaborators. One thing I would like to avoid is logging everything to the server, as it might soon be flooded with failed runs.
Ideally, I'd like to keep my local tracker, and then be able to send only the promising runs to the server.
What is the recommended way of copying a run from a local tracker to a remote server?
To publish your trained model to a remote MLflow server, you should use the 'register_model' API. For example, if you are using the spaCy flavor of MLflow, you can do the following, where 'nlp' is the trained model:
mlflow.spacy.log_model(spacy_model=nlp, artifact_path='mlflow_sample')
model_uri = "runs:/{run_id}/{artifact_path}".format(
run_id=mlflow.active_run().info.run_id, artifact_path='mlflow_sample'
)
mlflow.register_model(model_uri=model_uri, name='mlflow_sample')
Make sure that the following environment variables are set. In the example below, S3 storage is used:
SET MLFLOW_TRACKING_URI=https://YOUR-REMOTE-MLFLOW-HOST
SET MLFLOW_S3_BUCKET=s3://YOUR-BUCKET-NAME
SET AWS_ACCESS_KEY_ID=YOUR-ACCESS-KEY
SET AWS_SECRET_ACCESS_KEY=YOUR-SECRET-KEY
I have been interested in a related capability, copying runs from one experiment to another, for a similar reason: keep one area for arbitrary runs and another into which the promising runs we move forward with are copied. Your scenario with a separate tracking server is just a generalization of mine. Either way, there apparently is no built-in feature for this in MLflow currently. However, the mlflow-export-import Python-based tool looks like it may cover both our use cases; it cites usage on both Databricks and the open-source version of MLflow, and it appears current as of this writing. I have not tried the tool myself yet, though. If/when I try it, I'm happy to post a follow-up here saying whether it worked well for this purpose, and anyone else could do the same. Thanks and cheers!
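For what it's worth, a manual copy with the plain MlflowClient API is also possible if you only need to move a handful of runs. A rough sketch (the tracking URIs, run ID, and experiment name are placeholders, and only the latest value of each metric is copied, not its full history):

# Sketch: re-log a finished local run against a remote tracking server.
# Tracking URIs, the run ID, and the experiment name are placeholders.
from mlflow.tracking import MlflowClient

local = MlflowClient(tracking_uri="file:./mlruns")
remote = MlflowClient(tracking_uri="https://YOUR-REMOTE-MLFLOW-HOST")

source_run = local.get_run("LOCAL_RUN_ID")

# Create (or reuse) the destination experiment on the remote server.
experiment = remote.get_experiment_by_name("shared-experiment")
experiment_id = (experiment.experiment_id if experiment
                 else remote.create_experiment("shared-experiment"))

dest_run_id = remote.create_run(experiment_id).info.run_id

# Copy params, tags, and the latest metric values.
for key, value in source_run.data.params.items():
    remote.log_param(dest_run_id, key, value)
for key, value in source_run.data.tags.items():
    remote.set_tag(dest_run_id, key, value)
for key, value in source_run.data.metrics.items():
    remote.log_metric(dest_run_id, key, value)

# Download the local artifacts and upload them to the remote run.
artifact_dir = local.download_artifacts(source_run.info.run_id, "")
remote.log_artifacts(dest_run_id, artifact_dir)

remote.set_terminated(dest_run_id)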

How can I connect to my redshift cluster using Node.js?

I am trying to connect to one of my Redshift clusters so that I can fetch data from one of the tables there. I am using Node.js for it.
I used the createCluster() method and created a cluster, but I cannot seem to find a method to read from/connect to it. The AWS docs are rather confusing for me as I am new to the AWS environment.
How can I connect to an existing cluster and get some data out of a table in it?
Thanks :)
You can connect using the node-redshift npm package:
npm i node-redshift
https://www.npmjs.com/package/node-redshift
Or, regarding AWS, I think this will help you:
Trying to Connect to Redshift Over AWS Lambda
I was trying to make the 'node-redshift' module work before I asked this question here. I found out what I was missing: I had to have a security group associated with my cluster. There was no option to create a security group in my region (Asia Pacific - Mumbai). I changed the region, was able to create the group, set the appropriate port and IP, and it worked.
For those using node-redshift: it was working fine with Node 12.13.1 but was not responding on 14.15.0 LTS.
You may want to check your Node version.
I was able to connect AWS Lambda to Redshift by importing the 'node-redshift' module into the Lambda function. The most important thing when creating the Lambda layer is to build the layer for the 'node-redshift' package against nodejs10.x. I tried adding the layer for nodejs14.x and struggled to make a connection to Redshift; after changing the version from nodejs14.x to nodejs10.x, it worked. Thanks also to Vijender R, whose answer pointed me toward changing the runtime version for working with the 'node-redshift' package.
Had the same issue; the reason was that I had updated Node.js. After returning it to version 11.1.0, everything works OK.
The longer-term solution is to find something to use instead of node-redshift (since it was last updated 4 years ago).
It may be worth trying to use https://www.npmjs.com/package/@aws-sdk/client-redshift-data, which seems to be maintained by AWS itself.

About container options of Azure Batch

I am having trouble with the container options of Azure Batch.
To change the hostname of the container to be started, I set --hostname="test" in the containerRunOptions of the Task.
However, it results in an error:
ContainerSettings: --hostname="test" Message: create_container() got an unexpected keyword argument 'hostname '
Even -h test results in a similar error.
Other options work fine (--volume, etc.).
Pool information:
Publisher: microsoft-azure-batch
OS: centos-container
SKU: 7-4
Image: centos:latest (Docker Hub)
Is this a bug in Azure Batch?
Or is the way I'm specifying the option wrong?
Updated answer (2018-08-23):
The fix for this issue has been rolled out.
Previous answer:
This was identified as a service defect and will be addressed in a future version. You can track the Azure Batch Node Agent release notes for when the fix is released.
If you are using Batch to execute tasks without performing deeper integration, e.g., you are using the Azure CLI or similar tooling, you can use Batch Shipyard in "non-native mode" to work around this problem in the meantime. (Disclaimer: I'm a contributor to this project.)
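For reference, the run options are passed through the task's container settings, so once the fix is out the original approach should just work. A minimal sketch using the azure-batch Python SDK (the job ID, task ID, image, and command line are placeholders):

# Sketch: set container run options, including --hostname, on an Azure Batch task.
# Job ID, task ID, image name, and command line below are placeholders.
from azure.batch import models as batchmodels

container_settings = batchmodels.TaskContainerSettings(
    image_name="centos:latest",
    container_run_options='--hostname="test"',
)

task = batchmodels.TaskAddParameter(
    id="sample-task",
    command_line="/bin/sh -c hostname",
    container_settings=container_settings,
)

# batch_client is an authenticated azure.batch.BatchServiceClient:
# batch_client.task.add(job_id="sample-job", task=task)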

Jenkins error trying to raise an on-demand Linux EC2 slave

Whenever I try to trigger a job that depends on that EC2 slave, it just sits in the queue. I looked at the logs and saw this exception:
com.amazonaws.services.ec2.model.AmazonEC2Exception: Network interfaces and an instance-level security groups may not be specified on the same request
Whenever I click on the build executor status on the left, there is a button that says "provision via ". I click on it and see the correct Amazon Linux image name that I entered under Cloud in Jenkins' System Configuration, but when I click on that, I see the same exception as well... I just don't know how to fix this and cannot find any helpful information on it.
Any help would be much appreciated.
OK, I'm not exactly sure what was causing the error, since I don't really know how the Jenkins plugin interfaces with the AWS API. But after a good amount of trial and error, I was able to provision the on-demand worker by adding more details/parameters in Configuration, under Cloud.
Adding a subnet ID for the VPC and an IAM instance profile did the trick (I already had everything else, including security groups, availability zone, instance type, etc.). So it seems like you either leave out security groups or go all in and fill in pretty much everything.
As an FYI, if you see this with Jenkins EC2 Plugin v1.46, it looks like a genuine bug:
https://issues.jenkins-ci.org/browse/JENKINS-59543
The solution is to use 1.45 until it's fixed (see link above for more details).
