I'm trying to start Redis on Linux with a dump.rdb file, but I get a core error. However, when I start it on Windows with the same file, it runs perfectly. Also, if I try to start it on the same Linux machine with a smaller file, it seems to start fine.
Could it be a memory problem?
Thanks!
Amparo
I had a similar problem before, and the reason was that the "redis" user did not have write permission on the folder containing dump.rdb. BGSAVE is the default way Redis backs up data to disk, so I ran "config set stop-writes-on-bgsave-error no" in redis-cli and the problem was fixed. What's more, you can also change the folder used for dump.rdb in redis.conf.
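For example, assuming the common default data directory /var/lib/redis (the paths are only an example, adjust them to your install), something along these lines checks and fixes the permissions, or at least silences the bgsave error:

redis-cli CONFIG GET dir                              # where Redis tries to write dump.rdb
sudo chown -R redis:redis /var/lib/redis              # give the redis user write access there
redis-cli CONFIG SET stop-writes-on-bgsave-error no   # or just stop refusing writes on bgsave errors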
I am using rc.local to start my node script on startup with:
node . > "/log_file_$(date +"%H:%M:%S_%m_%d_%Y").txt"
It works fine, but as the log grows in size I need to create a new log file on the server every 12/24 hours, without restarting the server.
Is there any simple way to change the node app output destination?
I would prefer not to use any library for that, because I need to log all messages, including errors and warnings, not only console.log output.
Thanks for your help.
There are a number of options, I'll offer two:
1. Stackdriver
Stream your logs to Stackdriver, which is part of Google Cloud, and don't store them on your server at all. In your node.js application, you can set up Winston and use the Winston transport for Stackdriver. You can then analyze and query the logs there, and don't need to worry about storage running out.
2. logrotate
If you want to deal with this manually, you can configure logrotate. It will gzip older logs so that they consume less disk space. This is a sort of older, "pre-cloud" way of doing things.
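For example, a minimal logrotate rule might look like this (a sketch only: the rule name node-app and the rotation settings are assumptions, and the glob matches the log path from the question). copytruncate matters here because the shell redirection keeps the same file descriptor open in the node process, so the log has to be truncated in place rather than moved:

sudo tee /etc/logrotate.d/node-app <<'EOF'
/log_file_*.txt {
    daily
    rotate 14
    compress
    missingok
    notifempty
    copytruncate
}
EOF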
I have a NodeJS application that runs on a production server with forever.
That application uses a third-party SQLite database, which is updated every morning by a cron-triggered script that downloads the db from an external FTP server.
I spent some time before realising that I need to restart my server every time the file is fetched, otherwise there is no change in the data used by my application (I guess it's cached in memory at startup).
# sync_db.sh
wget -l 0 ftp://$REMOTE_DB_PATH --ftp-user=$USER --ftp-password=$PASSWORD \
    --directory-prefix=$DIRECTORY -nH
forever restart 0   # <- Here I hope something nicer...
What can I do to refresh the database without restarting the app?
You must not overwrite a database file that might have some open connection to it (see How To Corrupt An SQLite Database File).
The correct way to overwrite a database is to download to a temporary file, and then copy it to the actual database with the backup API, which takes care of proper transactional behaviour. The simplest way to do this is with the sqlite3 command-line shell:
sqlite3 $DIRECTORY/real.db ".restore $DOWNLOADDIRECTORY/temp.db"
(If your application manually caches data, you still have to tell it to reload it.)
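A possible revision of sync_db.sh along those lines (just a sketch: the temp directory handling and the file names real.db for the live database and for the downloaded copy are assumptions, adjust them to your layout):

TMP_DIR=$(mktemp -d)
# download the fresh copy somewhere the app never touches
wget -l 0 ftp://$REMOTE_DB_PATH --ftp-user=$USER --ftp-password=$PASSWORD \
    --directory-prefix=$TMP_DIR -nH
# copy it into the live database through the backup API, which handles the
# transactional behaviour even while the app holds open connections
sqlite3 $DIRECTORY/real.db ".restore $TMP_DIR/real.db"
rm -rf $TMP_DIR
# no "forever restart 0" needed anymore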
I am using the ioredis library for Node.js - I am wondering how to send Redis a signal to force persistence. I am having a hard time finding out how to do this. The SAVE command seems to do this, but I can't verify that. Can anyone tell me for sure if the SAVE command will tell Redis to write everything in memory to disk on command?
This article hints at it: https://community.nodebb.org/topic/932/redis-useful-info and so does this one: http://redis.io/commands/save
The answer is yes, SAVE will do the job for you, but it is synchronous, meaning it blocks until the save is done and doesn't let other clients retrieve data in the meantime. As shown in the docs:
You almost never want to call SAVE in production environments where it
will block all the other clients
The better solution is described in BGSAVE: you can call BGSAVE and then check the LASTSAVE command, which returns the timestamp of the latest snapshot taken by the instance: http://redis.io/commands/lastsave
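A quick sketch of that pattern with redis-cli (ioredis exposes the same commands as methods, e.g. redis.bgsave() and redis.lastsave()):

redis-cli LASTSAVE    # note the current snapshot timestamp
redis-cli BGSAVE      # fork and write the RDB file in the background, without blocking clients
redis-cli LASTSAVE    # poll until the timestamp changes: the snapshot is then on disk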
I have configured Redis to use the RDB persistence method, saving data every second if there is at least a single write (save 1 1), but still after a restart I see the key value as nil.
Well, I found that I was starting Redis with the command: redis-server &
Started that way, without pointing at the config file, the server came up with a fresh database every time, so the data stored in the snapshot and AOF files was ignored.
I changed the configuration so the Redis server uses the correct path to the database files and started the server with the following command, and it's working fine now: /etc/init.d/redis_port start
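For reference, you can also start the server against an explicit config file and check where it expects its data files (the config path below is just an example, not necessarily yours):

redis-server /etc/redis/6379.conf      # start with an explicit config instead of built-in defaults
redis-cli CONFIG GET dir               # directory the RDB/AOF files are read from and written to
redis-cli CONFIG GET dbfilename        # name of the RDB snapshot file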
I am currently working on a node.js api deployed on aws with elastic beanstalk.
The api accepts a url with query parameters, saves the parameters on a db (in my case aws rds), and redirects to a new url without waiting for the db response.
The main priority by far for the api is the redirection speed and the ability to handle a lot of requests. The aim of this question is to get your suggestions on how to do that.
I ran the api through a service called blitz.io to see what load it could handle and this is the report I got from them: https://www.dropbox.com/s/15wsa8ksj3lz99e/Blitz.pdf?dl=0
The instance and the database are running on t2.micro and db.t2.micro respectively.
The api can handle the load if no write is performed on the db, but it crashes above a certain load when it writes to the db (the report I shared is for the latter case), even though it doesn't wait for the db responses.
I checked the logs and found the following error in /var/log/nginx/error.log:
*1254 socket() failed (24: Too many open files) while connecting to upstream
I am not familiar with how nginx works but I imagine that every db connection is seen as an open file. Hence, the error implies that we reach the limit for open files before being able to close the connections. Is that a correct interpretation? Why am I getting the error?
I increased the limit in the way suggested here: https://forums.aws.amazon.com/thread.jspa?messageID=613983#613983 but it did not solve the problem.
At this point I am not sure what to do. Can I close the connections before getting a response from the db? Is it a hardware limitation? The writes to the db are tiny.
Thank you in advance for your help! :)
If you just modified ulimit, it might not be enough. You should also look at fs.file-max for the system-wide number of file descriptors:
sysctl -w fs.file-max=100000
as explained here:
http://www.cyberciti.biz/faq/linux-increase-the-maximum-number-of-open-files/
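To make the change survive a reboot and to check what the box is actually using, something like this (the value 100000 is just the figure from the command above, not a recommendation):

echo "fs.file-max = 100000" | sudo tee -a /etc/sysctl.conf   # persist across reboots
sudo sysctl -p                                               # reload sysctl settings
cat /proc/sys/fs/file-max                                    # system-wide limit now in effect
ulimit -n                                                    # per-process limit of the current shell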