I'm attempting to bring a copy of our production database over to our staging database; they are on different RDS instances. I tried using the native SQL Server backup/restore procedures, but I keep getting an error.
Code I ran on production:
exec msdb.dbo.rds_backup_database @source_db_name='DBName', @s3_arn_to_backup_to=N'arn:aws:s3:::path/to/backup/DBName2019-09-17.bak'
It worked just fine. When I attempt to restore using this command (I ran this on the staging RDS instance):
exec msdb.dbo.rds_restore_database @restore_db_name='DBName', @s3_arn_to_restore_from='arn:aws:s3:::path/to/backup/DBName2019-09-17.bak'
I receive an error:
Msg 50000, Level 16, State 1, Procedure msdb.dbo.rds_restore_database, Line 91
Database 'DBName' already exists. Two databases that differ only by case or accent are not allowed. Choose a different database name.
I've been googling for a bit and can't seem to find a definitive answer. I have several databases on the staging RDS instance; I don't relish the idea of having to create a new RDS instance every time I want to bring a copy of production into staging...
How do I restore a copy of production into staging without having to create a new RDS instance?
AFAIK, there is no overwrite option in RDS when restoring a database to an existing name.
Execute the DROP DATABASE command first and then do the restore:
USE [master];
GO
IF EXISTS (SELECT 1 FROM sys.databases WHERE name = 'database_name')
BEGIN
    ALTER DATABASE [database_name] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    DROP DATABASE [database_name];
END;
GO
EXEC msdb.dbo.rds_restore_database @restore_db_name='database_name',
    @s3_arn_to_restore_from='arn:aws:s3:::bucket_name/file_name_and_extension';
GO
EXEC msdb.dbo.rds_task_status;
You can use the last command, rds_task_status, to check the status of the backup or restore task.
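If you want to narrow that output down to a single database, rds_task_status also takes filter parameters; for example (the database name is a placeholder):
EXEC msdb.dbo.rds_task_status @db_name='database_name';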
I have deployed my MERN stack app on AWS EC2 and set up clustering, but my RDS instance has 2 vCPUs and 8 GB of RAM. With the increase in traffic, my DB instance now gives a maximum-connections error. How can I increase the connection limit, or upgrade my RDS instance?
Do I have to reconfigure the RDS settings? My website is in production, so I don't want it to go down. Kindly guide me.
You haven't specified which DB engine you are using, so it's difficult to give a firm answer, but from the documentation:
The maximum number of simultaneous database connections varies by the DB engine type and the memory allocation for the DB instance class. The maximum number of connections is generally set in the parameter group associated with the DB instance. The exception is Microsoft SQL Server, where it is set in the server properties for the DB instance in SQL Server Management Studio (SSMS).
Assuming that you are not using MSSQL, you have a few different options:
1. Create a new parameter group for your RDS instance, specifying a new value for max_connections (or whatever the appropriate parameter is called).
2. Use a different instance class with more memory, as this will have a higher default max_connections value.
3. Add a read replica.
4. Make code changes to avoid opening so many connections.
Options 1 and 2 require a change to your database in a maintenance window, so there would normally be downtime. Since it sounds like you have a single RDS instance, you can still upgrade without downtime: back up the DB, restore it to a new instance, upgrade the restored instance, then change the application to use the restored instance (you will need to manage any writes that happen between the backup and the switchover yourself).
Option 3 only helps if the problem connections are running SELECT queries; if so, you would also need to update connection strings to use the read replica.
Option 4 is a huge scope, but it's probably where I would start (e.g., could you use connection pooling, or cache data to reduce the number of connections?).
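For option 1, a rough AWS CLI sketch of the parameter-group route (the group name, engine family, instance identifier and value are all placeholders, and it assumes MySQL):
aws rds create-db-parameter-group --db-parameter-group-name my-custom-params --db-parameter-group-family mysql8.0 --description "Custom max_connections"
aws rds modify-db-parameter-group --db-parameter-group-name my-custom-params --parameters "ParameterName=max_connections,ParameterValue=500,ApplyMethod=pending-reboot"
aws rds modify-db-instance --db-instance-identifier my-db-instance --db-parameter-group-name my-custom-params
With ApplyMethod=pending-reboot, the new value only takes effect after the instance is rebooted (or during its next maintenance window), which is the downtime mentioned above.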
Using terraform and AWS I've created a Postgres RDS DB within a VPC. During creation, the superuser is created, but it seems like a bad idea to use this user inside my app to run queries.
I'd like to create an additional access-limited DB user during the terraform apply phase after the DB has been created. The only solutions I've found expect the DB to be accessible outside the VPC. For security, it also seems like a bad idea to make the DB accessible outside the VPC.
The first thing that comes to mind is to have Terraform launch an EC2 instance that creates the user as part of the userdata script and then promptly terminates. This seems like a pretty heavy solution, and unless the instance is left running rather than terminated at the end of the script, Terraform would recreate it every time terraform apply is run.
Another option is to pass the superuser and limited user credentials to the server that runs migrations and have that server create the user as necessary. This, however, would mean that the server would have access to the superuser and could do some nefarious things if someone got access to it.
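(For what it's worth, the user creation itself is only a handful of statements; roughly something like the following, run as the superuser, with app_user and app_db as placeholder names:)
-- create an access-limited application user
CREATE ROLE app_user WITH LOGIN PASSWORD 'change_me';
GRANT CONNECT ON DATABASE app_db TO app_user;
GRANT USAGE ON SCHEMA public TO app_user;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO app_user;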
Are there other common patterns for solving this? Do people just use the superuser for everything or open the DB to the outside world?
I have created a Cassandra database in DataStax Astra and am trying to load a CSV file using DSBulk in Windows. However, when I run the dsbulk load command, the operation never completes or fails. I receive no error message at all, and I have to manually terminate the operation after several minutes. I have tried to wait it out, and have let the operation run for 30 minutes or more with no success.
I know that a free tier of Astra might run slower, but wouldn't I see at least some indication that it is attempting to load data, even if slowly?
When I run the command, this is the output that is displayed and nothing further:
C:\Users\JT\Desktop\dsbulk-1.8.0\bin>dsbulk load -url test1.csv -k my_keyspace -t test_table -b "secure-connect-path.zip" -u my_user -p my_password -header true
Username and password provided but auth provider not specified, inferring PlainTextAuthProvider
A cloud secure connect bundle was provided: ignoring all explicit contact points.
A cloud secure connect bundle was provided and selected operation performs writes: changing default consistency level to LOCAL_QUORUM.
Operation directory: C:\Users\JT\Desktop\dsbulk-1.8.0\bin\logs\LOAD_20210407-143635-875000
I know that DataStax recently changed Astra so that you need credentials from a generated Token to connect DSBulk, but I have a classic DB instance that won't accept those token credentials when entered in the dsbulk load command. So, I use my regular user/password.
When I check the DSBulk logs, the only text is the same output displayed in the console, which I have shown in the code block above.
If it means anything, I have the exact same issue when trying to run the dsbulk count operation.
I have the most recent JDK and have set both the JAVA_HOME and PATH variables.
I have also tried adding dsbulk/bin directory to my PATH variable and had no success with that either.
Do I need to adjust any settings in my Astra instance?
Lastly, is it possible that my basic laptop simply isn't powerful enough for this operation, or that it's just running the operation extremely slowly?
Any ideas or help is much appreciated!
I'm trying to stay sane while configuring Bacula Server on my virtual CentOS Linux release 7.3.1611 to do a basic local backup job.
I prepared all the configurations I found necessary in the conf-files and prepared the mysql database accordingly.
When I want to start a job (local backup for now) I enter the following commands in bconsole:
*Connecting to Director 127.0.0.1:9101
1000 OK: bacula-dir Version: 5.2.13 (19 February 2013)
Enter a period to cancel a command.
*label
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
Automatically selected Storage: File
Enter new Volume name: MyVolume
Defined Pools:
1: Default
2: File
3: Scratch
Select the Pool (1-3): 2
This returns
Connecting to Storage daemon File at 127.0.0.1:9101 ...
Failed to connect to Storage daemon.
Do not forget to mount the drive!!!
You have messages.
where the message is:
12-Sep 12:05 bacula-dir JobId 0: Fatal error: authenticate.c:120 Director unable to authenticate with Storage daemon at "127.0.0.1:9101". Possible causes:
Passwords or names not the same or
Maximum Concurrent Jobs exceeded on the SD or
SD networking messed up (restart daemon).
Please see http://www.bacula.org/en/rel-manual/Bacula_Freque_Asked_Questi.html#SECTION00260000000000000000 for help.
I double- and triple-checked all the conf files for integrity, names and passwords. I don't know where else to look for the error.
I will gladly post any parts of the conf files but don't want to blow up this question right away if it might not be necessary. Thank you for any hints.
It might help someone who makes the same mistake I did:
After looking through manual page after manual page, I found it was my own mistake. For a reason I don't precisely recall (I guess to troubleshoot another issue earlier), I had set all ports to 9101: for the director, the file daemon and the storage daemon.
So I assume the Bacula components were blocking each other's communication on port 9101. After resetting the daemons to their default ports (9102 for the file daemon, 9103 for the storage daemon) according to the manual, it worked and I can now back up locally.
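For anyone checking their own setup, these are the port directives involved and their default values (the daemon names here are just the stock ones; yours may differ):
# /etc/bacula/bacula-dir.conf
Director {
  Name = bacula-dir
  DIRport = 9101
  # other directives unchanged
}
# /etc/bacula/bacula-fd.conf
FileDaemon {
  Name = bacula-fd
  FDport = 9102
  # other directives unchanged
}
# /etc/bacula/bacula-sd.conf
Storage {
  Name = bacula-sd
  SDPort = 9103
  # other directives unchanged
}
The Storage resource in bacula-dir.conf must point at the storage daemon's address and SDPort, not at the director's own port.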
You also have to add the Director's name from the backup server: edit /etc/bacula/bacula-fd.conf on the remote client, in the section "List Directors who are permitted to contact this File daemon":
Director {
  Name = BackupServerName-dir
  Password = "use *-dir password from the same file"
}
While importing a database into my Amazon RDS instance, I got the following error:
ERROR 2006 (HY000) at line 667: MySQL server has gone away
I went ahead and tried changing the interactive_timeout setting to a larger number. However, it will only let me set that for a session, and Amazon doesn't allow it to be set globally.
How do I import a large database into my Amazon RDS instance?
The documentation gives instructions on how to import large datasets. Typically, the best method is to create flat files and import them into your RDS instance.
I recently completed a migration of a database over 120 GB in size from a physical server to RDS. I dumped each table into a flat CSV file, then split the larger files into multiple 1 GB parts. I then imported each table into RDS.
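Loading each CSV part back in then looks roughly like this (the table and file names are placeholders, and it assumes LOCAL INFILE is enabled on the client, since the files live there rather than on the RDS host):
LOAD DATA LOCAL INFILE 'my_table_part01.csv'
INTO TABLE my_table
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';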
You can simply change your RDS DB settings by using a parameter group; most MySQL settings are in there, although changing them will require a restart of the instance. The setting you want is max_allowed_packet, and you need to set it not only on the client but on the server itself.
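For example (the endpoint, user, database, file and parameter group names are placeholders; 536870912 bytes = 512 MB):
# client side: raise the packet size for the importing session
mysql -u admin -p -h my-rds-endpoint --max-allowed-packet=512M my_database < dump.sql
# server side: raise it in the instance's custom parameter group
aws rds modify-db-parameter-group --db-parameter-group-name my-custom-params --parameters "ParameterName=max_allowed_packet,ParameterValue=536870912,ApplyMethod=pending-reboot"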
Here's how I did it; mind you, my databases weren't very large (the largest was 1.5 GB).
First dump your existing database(s):
mysqldump -u root -p --master-data=2 --single-transaction --order-by-primary [database_name] | gzip > /mnt/dumps/[database_name].sql.gz
You can then transfer this file to an Amazon EC2 instance that has permission to access your RDS instance using something like scp. Once the file is located on your Amazon EC2 instance you should extract it using:
gzip -d [database_name].sql.gz
# you should now have a file named [database_name].sql in your directory
# connect to your RDS instance, then run the import from the mysql prompt:
mysql -u root -p -h [rds-instance]
source [database_name].sql
It should then start importing. This information is located in their documentation.