I would like to know if it's possible, using libtorrent-rasterbar, to download a torrent and save it to a remote location (a remote server for instance) instead of saving it to local disk.
Yes, the features list mentions that the storage API is customizable.
The storage interface makes no reference to anything disk-specific, so nothing rules out backing it with remote storage.
I'm building an application that saves a lot of images using the C# Web API of ASP.NET.
The best way seems to be to save the images in the file system and save their paths in the database.
However, I am concerned about load balancing. Since every server will put the image in its own file system, how can another server behind the same load balancer retrieve the image?
If you have the resources for it, I would say that
"the best way is to save them in the file system and save the image path into the database"
is not true at all.
Instead, I'd say using an existing file storage service is probably going to produce the best results, if you are willing to pay for the service.
For dotnet, the 'go to' would be Azure Blob Storage, which is ideal for non-streamed data like images:
https://learn.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-dotnet
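That quickstart is for .NET, but the flow is the same in every SDK. Purely as an illustration, here is a minimal sketch using the TypeScript SDK (@azure/storage-blob); the container name, content type, and connection-string variable are all assumptions:

```typescript
// Sketch: upload an image to Azure Blob Storage and get back a URL
// to store in the database. Assumes AZURE_STORAGE_CONNECTION_STRING
// is set and that a container named "images" is acceptable.
import { BlobServiceClient } from "@azure/storage-blob";

async function saveImage(name: string, data: Buffer): Promise<string> {
  const service = BlobServiceClient.fromConnectionString(
    process.env.AZURE_STORAGE_CONNECTION_STRING!
  );
  const container = service.getContainerClient("images");
  await container.createIfNotExists(); // idempotent

  const blob = container.getBlockBlobClient(name);
  await blob.uploadData(data, {
    blobHTTPHeaders: { blobContentType: "image/jpeg" }, // assumed type
  });

  // Store this URL (or just the blob name) in the database; every
  // server behind the load balancer resolves it the same way.
  return blob.url;
}
```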
Otherwise, you can try to create your own file storage service from scratch. In that case you will effectively be creating a separate API apart from the main cluster that handles your actual app; this secondary API just handles file storage and runs on its own dedicated server.
You then simply create an association between Id <-> File Data on the File Server, and your App Servers can push and request files from the File Server via those Ids.
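For illustration only, a minimal sketch of that Id <-> File Data association (Express writing to local disk; every name, path, and limit here is made up, and a real file server would also need auth, replication, and backups):

```typescript
// Sketch: a tiny file-storage API. POST a file body, get back an Id;
// GET /files/:id returns the bytes. Names and paths are hypothetical.
import express from "express";
import { randomUUID } from "crypto";
import * as fs from "fs";
import * as path from "path";

const STORAGE_DIR = "/var/fileserver/data"; // assumed mount point
const app = express();

// Accept any content type as a raw body, up to 25 MB.
app.post("/files", express.raw({ type: "*/*", limit: "25mb" }), (req, res) => {
  const id = randomUUID();
  fs.writeFileSync(path.join(STORAGE_DIR, id), req.body);
  res.status(201).json({ id }); // app servers persist this Id in their DB
});

app.get("/files/:id", (req, res) => {
  // Only accept UUID-shaped ids so the path can't be escaped.
  if (!/^[0-9a-f-]{36}$/.test(req.params.id)) return res.sendStatus(400);
  const file = path.join(STORAGE_DIR, req.params.id);
  if (!fs.existsSync(file)) return res.sendStatus(404);
  res.sendFile(file);
});

app.listen(8080);
```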
It's possible, but a File Server is for sure one of those projects that seems straightforward at first; very quickly you realize it's a very, very difficult task, and it may have been easier to just pay for an existing service.
There might be existing self-hosted file server options out there as well!
First of all, I am a beginner with Node.js.
In Node.js, when I use functions such as fs.writeFile(), the file is created and is visible in my repository. But when the same process runs on a cloud such as Heroku, no file is visible in the repository (cloned via git). I know the file is being created, because I am able to read it, but I cannot view it. Why is this, and how can I view the file?
I had the same issue, and found out that Heroku and other cloud services generally prefer that you don't write to their file system; everything you write/save is stored on an "ephemeral filesystem" that is thrown away when the dyno restarts. It's like a ghost file system, really.
Usually you would want to use Amazon S3 (or Redis for JSON files and the like), including for bigger files such as MP3s.
Alternatively, renting a remote server, like ECS, with a Linux system and mounted storage space might also work.
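For the S3 route, a minimal sketch from Node (assuming the @aws-sdk/client-s3 v3 package, an already-existing bucket I've called my-app-files, and AWS credentials in the environment):

```typescript
// Sketch: persist a file to S3 instead of the dyno's ephemeral disk.
// Bucket name and keys are hypothetical; credentials come from env vars.
import {
  S3Client,
  PutObjectCommand,
  GetObjectCommand,
} from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });

export async function save(key: string, body: Buffer): Promise<void> {
  await s3.send(
    new PutObjectCommand({ Bucket: "my-app-files", Key: key, Body: body })
  );
}

export async function load(key: string): Promise<Buffer> {
  const res = await s3.send(
    new GetObjectCommand({ Bucket: "my-app-files", Key: key })
  );
  // In the v3 SDK the body is a stream; buffer it for small files.
  return Buffer.from(await res.Body!.transformToByteArray());
}
```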
Please forgive me if this has been asked but I could not find an answer.
How can one move a website from one server to another without having to download the files (zipped or otherwise) and then uploading to the new server?
I suspect this is possible via FTP, but am not sure how to do so.
Thank you
The FTP protocol standard does include this function (direct server-to-server transfer, known as FXP), but in my experience it is hard to configure and get working, and not many servers support it.
Simpler is to use SSH. If you have SSH access to either of these servers, you can log in to a shell on one server and transfer files to or from the other directly, for example with scp or rsync (or a command-line FTP client).
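If you want to script it rather than type the commands interactively, here is a sketch using the Node ssh2 package to run rsync on the old server. All hosts, users, and paths are hypothetical, and it assumes the old server can itself reach the new one over SSH:

```typescript
// Sketch: log in to the old server over SSH and push the site straight
// to the new server with rsync, so nothing passes through your machine.
import { Client } from "ssh2";
import { readFileSync } from "fs";

const conn = new Client();
conn
  .on("ready", () => {
    // Runs on old-server; rsync streams the files directly to new-server.
    conn.exec(
      "rsync -az /var/www/site/ deploy@new-server:/var/www/site/",
      (err, stream) => {
        if (err) throw err;
        stream
          .on("close", (code: number) => {
            console.log(`rsync exited with code ${code}`);
            conn.end();
          })
          .pipe(process.stdout);
      }
    );
  })
  .connect({
    host: "old-server",
    username: "deploy",
    privateKey: readFileSync("/home/me/.ssh/id_rsa"),
  });
```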
I'm evaluating the features of a full-fledged backup server for my NAS (Synology). I need:
FTP access (backup remote sites)
SSH/SCP access (backup remote server)
web interface (in order to monitor each backup job)
automatic mail alerting if jobs fail
lightweight software (no MySQL; SQLite is OK)
optional: S3/Glacier support (as target)
optional: automatic long-term storage after a given time (i.e. local disk for 3 months, Glacier after that)
It seems like the biggest players are Amanda, Bacula, and duplicity (and the like).
Any suggestions?
thanks a lot
Before jumping into full server backups, please clarify these questions:
Backup software comes in agent-based and agentless varieties; which one do you want to use?
Do you want to go with open-source or proprietary software?
Determine whether your source and destination are on the same LAN or connected over the Internet, and try to get a picture of the bandwidth between them and the volume of data being backed up.
Also, if relevant, work out your GUI requirements and which other OS platforms the backup software needs to support.
Importantly, find out how mail notification is configured.
I am presently setting one up for my project and so far have installed Bacula v7.0.5 with Webmin as the GUI. I am trying the same configuration in the Amazon cloud, using S3 as storage by mounting it with s3fs inside the EC2 instance.
My Bacula is the free community version; I haven't explored mail notification yet.
I am planning to use the Cloud Foundry PaaS (from VMware) to host my Node.js application. I have seen that it has support for Mongo and Redis in the service layer, and for Node.js as a framework. So far so good.
Now I need to store my media files (images uploaded by users) on a filesystem. I have the metadata stored in Mongo.
I have been searching the Internet, but have not yet found good information.
You cannot do that for the following reasons:
There are multiple host machines running your application. They each have their own filesystems. Each running process in your application would see a different set of files.
The host machines on which your particular application is running can change moment-to-moment. Indeed, they will change every time you re-deploy your application. Every time a process is started on a new host machine, it will see an empty set of files. Every time a process is stopped on an old host machine, all the files would be permanently deleted.
You absolutely must solve this problem in another way.
Store the media files in MongoDB GridFS (see the sketch below).
Store the media files in an object store such as Amazon S3 or Rackspace Cloud Files.
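Since the metadata already lives in Mongo, GridFS is the smallest step. A minimal sketch with the official Node.js driver (the connection string, database, bucket, and file names are all assumptions):

```typescript
// Sketch: store and retrieve media files in MongoDB GridFS.
import { MongoClient, GridFSBucket, ObjectId } from "mongodb";
import * as fs from "fs";

async function main() {
  const client = await MongoClient.connect("mongodb://localhost:27017");
  const bucket = new GridFSBucket(client.db("app"), { bucketName: "media" });

  // Upload: pipe the image into GridFS; the stream's id is what you
  // reference from your existing metadata documents.
  const upload = bucket.openUploadStream("photo.jpg", {
    contentType: "image/jpeg",
  });
  fs.createReadStream("./photo.jpg").pipe(upload);
  const id: ObjectId = await new Promise((resolve, reject) =>
    upload.on("finish", () => resolve(upload.id)).on("error", reject)
  );

  // Download: stream it back out by id.
  bucket.openDownloadStream(id).pipe(fs.createWriteStream("./copy.jpg"));
}

main().catch(console.error);
```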
The filesystem in most cloud solutions is "ephemeral", so you cannot use the FS. You will have to use a solution like S3 or a DB for this purpose.