We recently moved our database to Amazon RDS for SQL Server. We are having some difficulties with date times (time zones). By default, RDS provides UTC dates. Is there any way to override or change the local time zone at the database level in SQL Server? Please help me with this.
Thanks in advance,
SqlLover
https://forums.aws.amazon.com/thread.jspa?messageID=161339
The time zone is currently not modifiable. There is indeed a parameter called "default_time_zone" returned by rds-describe-db-parameters, but it is marked as not modifiable.
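With the current AWS CLI (the old rds-describe-db-parameters tool has since been replaced), the same check looks roughly like this; the parameter-group name is a placeholder, so substitute the group attached to your instance:

```shell
# List the default_time_zone parameter and whether it is modifiable.
aws rds describe-db-parameters \
    --db-parameter-group-name default.sqlserver-se-15.0 \
    --query "Parameters[?ParameterName=='default_time_zone'].[ParameterName,IsModifiable]"
```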
Related
I have deployed my MERN stack app on AWS EC2 and have done clustering, but my RDS instance has 2 CPUs and 8 GB of RAM. Now, with the increase in traffic, my DB instance gives a maximum-connections error. How can I increase the connection limit or upgrade my RDS instance?
Do I have to reconfigure the RDS settings? My website is in production, so I don't want it to go down. Kindly guide me.
You haven't specified which DB engine you are using, so it's difficult to give a firm answer, but from the documentation:
The maximum number of simultaneous database connections varies by the DB engine type and the memory allocation for the DB instance class. The maximum number of connections is generally set in the parameter group associated with the DB instance. The exception is Microsoft SQL Server, where it is set in the server properties for the DB instance in SQL Server Management Studio (SSMS).
Assuming that you are not using MSSQL, you have a few different options:
1. Create a new parameter group for your RDS instance, specifying a new value for max_connections (or whatever the appropriate parameter is called).
2. Use a different instance class with more memory, as this will have a higher default max_connections value.
3. Add a read replica.
4. Make code changes to avoid opening so many connections.
Options 1 and 2 require a change to the running database, normally applied in a maintenance window, so there would be downtime. Since it sounds like you have a single RDS instance, it is also possible to upgrade without downtime: back up the DB, restore it to a new instance, upgrade the restored instance, then point the application at the restored instance (you will need to handle any writes made between the backup and the switchover yourself).
Option 3 is only relevant if the problem connections are making SELECT queries. If that is the case, you would need to update connection strings to use the read replica.
Option 4 is a huge topic, but it's probably where I would start (e.g. could you use connection pooling, or cache data to reduce the number of connections?).
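For option 1, the parameter-group change can be sketched with the AWS CLI. The group name, family, and value below are placeholders assuming a MySQL-family engine; adjust them for your actual engine and workload:

```shell
# Create a custom parameter group (the default group is read-only).
aws rds create-db-parameter-group \
    --db-parameter-group-name my-custom-params \
    --db-parameter-group-family mysql8.0 \
    --description "Custom max_connections"

# Raise max_connections; pending-reboot means it takes effect on restart.
aws rds modify-db-parameter-group \
    --db-parameter-group-name my-custom-params \
    --parameters "ParameterName=max_connections,ParameterValue=500,ApplyMethod=pending-reboot"

# Attach the group to the instance, then reboot in a maintenance window.
aws rds modify-db-instance \
    --db-instance-identifier my-db-instance \
    --db-parameter-group-name my-custom-params
```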
I need to build a microservice that scrapes a message once a day and persists it somewhere. The message does not need to be accessible after 24 hours (it can be deleted). It doesn't really matter where or how, but I need to access it from an Express.js endpoint and return the message. We currently use Redis and MongoDB for data persistence. It feels wrong to create a whole collection for one tiny service, and I'm not sure how Redis would fit this task. What's my best option? Open to any suggestions, thank you!
You can use YugabyteDB and set a table-level TTL of 24 hours (default_time_to_live in its Cassandra-compatible YCQL API); the data is then deleted automatically once it expires.
Redis provides an expiration mechanism out of the box. You can associate a timeout with a key, and it will be automatically deleted after the timeout has expired. Some official documentation here.
Redis also provides logical databases, if you want to keep these expiring keys separated from the rest of your application. So you do not need to spin up another machine. Some official documentation here.
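A minimal redis-cli sketch of both ideas (the key name and database index are made up):

```shell
# Store the day's message in logical database 1, expiring after 24 hours.
redis-cli -n 1 SET daily:message "scraped text" EX 86400

# Check how many seconds the key has left to live.
redis-cli -n 1 TTL daily:message

# After expiry, GET returns nil with no cleanup job required.
redis-cli -n 1 GET daily:message
```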
The ASP.NET MVC application runs on EC2 and interacts with RDS (SQL Server). The application sends bulk GET requests (API calls) to RDS via NHibernate to fetch items. Application performance is very slow, as it sometimes makes around 500 GET API calls to fetch 500 items from the DB (note: fetching items from the DB has its own stored procedure/logic).
I was referring to these to understand scaling RDS: https://aws.amazon.com/blogs/database/scaling-your-amazon-rds-instance-vertically-and-horizontally/ and https://aws.amazon.com/premiumsupport/knowledge-center/requests-rds-read-replicas/
However, I didn't get much of a clue there that supports my business scenario.
My questions are (considering the above scenario):
Is there any way to distribute my GET requests to RDS (SQL Server) so that it returns the 500 items quickly?
Is it possible to achieve this without any code or architecture changes (on both the .NET and SQL ends)?
What are the different ways I should try out to make this performance better?
What are the pricing details for Read replica?
Note: the application does both reads and writes, and I'm more concerned about these particular GET API calls.
Thanks.
Is there any way to distribute my GET requests to RDS (SQL Server) so that it returns the 500 items quickly?
You will need a router in your application that routes these requests to the read replicas (there can be many).
You can provision a read replica with a different instance type that has enhanced capacity for that use case.
You can also try a memory cache; it can reduce response times and offload read workload from the database and its replicas.
Is it possible to achieve this without any code or architecture changes (on both the .NET and SQL ends)?
Based on the documentation, "applications can connect to a read replica just as they would to any DB instance," which means your application requires additional modification to direct its read traffic to the replica's endpoint.
What are the different ways I should try out to make this performance better?
A memory cache and an instance type with enhanced read capacity (the same suggestions as above).
What are the pricing details for a read replica?
It depends on the instance type that you provision; a read replica is billed at the same rate as a standalone instance of that type.
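For the read-replica suggestions above, provisioning a replica and finding its endpoint can be sketched with the AWS CLI (the instance identifiers and class below are placeholders):

```shell
# Create a read replica of the source instance.
aws rds create-db-instance-read-replica \
    --db-instance-identifier my-db-replica \
    --source-db-instance-identifier my-db-instance \
    --db-instance-class db.m5.xlarge

# The replica gets its own endpoint; read-only connection strings point at it.
aws rds describe-db-instances \
    --db-instance-identifier my-db-replica \
    --query "DBInstances[0].Endpoint.Address"
```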
I created a managed instance in Azure with the UTC time zone at creation time. Now I want to change the time zone to GMT. Is there any way to change the time zone of a SQL Server managed instance?
This can't be changed once the managed instance is created. You need to redeploy the managed instance with the correct time zone and use cross-instance point-in-time restore (PITR) to move the databases.
The date/time is derived from the operating system of the computer on which the instance of SQL Server is running.
I searched a lot, and in my experience we cannot change the time zone once the SQL Server instance is created.
The only thing we can do is convert from the UTC time zone to GMT. Many people have posted similar problems on Stack Overflow, such as:
Date time conversion from timezone to timezone in sql server
SQL Server Timezone Change
Azure provides the built-in function AT TIME ZONE (Transact-SQL), which applies to SQL Server 2016 and later. The AT TIME ZONE implementation relies on a Windows mechanism to convert datetime values across time zones.
inputdate AT TIME ZONE timezone
For example, to convert values between different time zones:
USE AdventureWorks2016;
GO
SELECT SalesOrderID, OrderDate,
OrderDate AT TIME ZONE 'Pacific Standard Time' AS OrderDate_TimeZonePST,
OrderDate AT TIME ZONE 'Central European Standard Time' AS OrderDate_TimeZoneCET
FROM Sales.SalesOrderHeader;
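For the asker's UTC-to-GMT case specifically, a stored UTC value can first be tagged as UTC and then converted; 'GMT Standard Time' is the Windows time-zone name covering UK time (the column alias below is made up):

```sql
SELECT SYSUTCDATETIME()
       AT TIME ZONE 'UTC'                -- attach the UTC offset
       AT TIME ZONE 'GMT Standard Time'  -- convert to UK local time
       AS OrderDate_TimeZoneGMT;
```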
I don't have an Azure SQL Managed Instance, so I couldn't test this for you.
Hope this helps.
While importing a database into my Amazon RDS instance, I ran into the following error:
ERROR 2006 (HY000) at line 667: MySQL server has gone away
I went ahead and tried changing the interactive_timeout setting to a larger number. However, it will only let me set that for a session, and Amazon doesn't allow it to be set globally.
How do I import a larger database into my Amazon RDS instance?
The documentation gives instructions on how to import large datasets. Typically, the best method is to create flat files and import them into your RDS instance.
I recently completed a migration of a database over 120 GB in size from a physical server to RDS. I dumped each table into a flat CSV file, then split the larger files into multiple 1 GB parts. I then imported each table into RDS.
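The dump-and-split step can be sketched like this; the database, table, and host names are placeholders, and mysql's --batch output is tab-separated, which matches LOAD DATA's default format:

```shell
# Dump one table to a tab-separated flat file (skip the header row).
mysql -u root -p -N --batch -e "SELECT * FROM mydb.mytable" > mytable.tsv

# Split large files into 1 GB parts.
split -b 1G mytable.tsv mytable.part_

# Load each part into the RDS instance.
mysql -u root -p -h my-rds-endpoint --local-infile=1 \
      -e "LOAD DATA LOCAL INFILE 'mytable.part_aa' INTO TABLE mydb.mytable"
```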
You can change your RDS DB settings by using a parameter group; most MySQL settings are in there. It will require a restart of the instance, however. The setting you want is max_allowed_packet, and you need to set it not only on the client but on the server itself.
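Assuming a MySQL-family engine and placeholder names, the server- and client-side changes look roughly like:

```shell
# Server side: raise max_allowed_packet in the instance's custom parameter
# group (1073741824 bytes = 1 GB, MySQL's maximum); takes effect on reboot.
aws rds modify-db-parameter-group \
    --db-parameter-group-name my-custom-params \
    --parameters "ParameterName=max_allowed_packet,ParameterValue=1073741824,ApplyMethod=pending-reboot"

# Client side: raise the same limit for the importing session.
mysql --max-allowed-packet=1073741824 -u root -p -h my-rds-endpoint < dump.sql
```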
Here's how I did it, mind you my databases weren't very large (largest one was 1.5G).
First dump your existing database(s):
mysqldump [database_name] --master-data=2 --single-transaction --order-by-primary -uroot -p | gzip > /mnt/dumps/[database_name].sql.gz
You can then transfer this file to an Amazon EC2 instance that has permission to access your RDS instance using something like scp. Once the file is located on your Amazon EC2 instance you should extract it using:
gzip -d [database_name].sql.gz
#you should now have a file named [database_name].sql in your directory.
mysql -u root -p -h [rds-instance]
source [database_name].sql
It should then start importing. This information is located in their documentation.