Cloud Foundry: how to use the filesystem - node.js

I am planning to use the Cloud Foundry PaaS service (from VMware) to host my node.js application. I have seen that it supports mongo and redis in the service layer and node.js as a framework. So far so good.
Now I need to store my media files (images uploaded by users) on a filesystem. I have the metadata stored in Mongo.
I have been searching the internet but have not yet found good information.

You cannot do that for the following reasons:
There are multiple host machines running your application. They each have their own filesystems. Each running process in your application would see a different set of files.
The host machines on which your particular application is running can change moment-to-moment. Indeed, they will change every time you re-deploy your application. Every time a process is started on a new host machine, it will see an empty set of files. Every time a process is stopped on an old host machine, all the files would be permanently deleted.
You absolutely must solve this problem in another way.
Store the media files in MongoDB GridFS.
Store the media files in an object store such as Amazon S3 or Rackspace Cloud Files.
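If you go the GridFS route, a minimal sketch with the official mongodb driver for Node.js could look like the following; the connection string, database name, and bucket name are placeholders you'd replace with your own:

    const { MongoClient, GridFSBucket } = require('mongodb');
    const fs = require('fs');

    // Stream an uploaded image into GridFS so every instance of the app
    // can read it back from MongoDB rather than from a local disk.
    async function storeImage(localPath, filename) {
      const client = await MongoClient.connect('mongodb://localhost:27017'); // placeholder URI
      const bucket = new GridFSBucket(client.db('mediadb'), { bucketName: 'images' }); // placeholder names
      await new Promise((resolve, reject) => {
        fs.createReadStream(localPath)
          .pipe(bucket.openUploadStream(filename))
          .on('finish', resolve)
          .on('error', reject);
      });
      await client.close();
    }

Reading a file back is the mirror image, via bucket.openDownloadStreamByName(filename).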

The filesystem in most cloud solutions is "ephemeral", so you cannot use the FS. You will have to use a solution like S3 or a DB for this purpose.

Related

How to see the files that were written to disk from node.js app hosted on heroku?

I have a node.js app on Heroku, and I sometimes need to write files to disk there. Do you know how to see those files? Should I delete them after I am finished using them? I do not want to waste space for no reason.
Heroku (like other container-based platforms) is different from the traditional servers you might be used to. It's worth bearing in mind that the Heroku filesystem is ephemeral: any changes to the filesystem whilst the dyno is running only last until that dyno is shut down or restarted. Each dyno boots with a clean copy of the filesystem from the most recent deploy.
If you really need to check a file on a running dyno (say, to debug an issue with a file upload), it is possible to log in using Heroku Exec: https://devcenter.heroku.com/articles/exec
That said, you really shouldn't be using the filesystem for anything other than temporary files. Instead, aim to use external services for persistent storage, as described here: https://12factor.net/
For example, if you are handling file uploads, you could store them on a service like Amazon S3.
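A minimal sketch of that with the AWS SDK for JavaScript (v3), writing an upload straight to S3 instead of the dyno's disk; the bucket name and region are placeholders, and credentials are assumed to come from the usual AWS environment variables:

    const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');

    const s3 = new S3Client({ region: 'us-east-1' }); // placeholder region

    // body can be a Buffer or stream handed over by your upload middleware.
    async function saveUpload(key, body) {
      await s3.send(new PutObjectCommand({
        Bucket: 'my-app-uploads', // placeholder bucket name
        Key: key,
        Body: body,
      }));
    }

This way the file survives dyno restarts, and every dyno sees the same set of uploads.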

Why does Heroku reset my file "data.json" every day?

I made a discord.js bot which saves data in a file.
Everything is hosted on Heroku and it all works fine.
But every day, Heroku resets my file.
Why can't I keep my files from one day to the next?
Here's the full explanation from Heroku docs:
The Heroku filesystem is ephemeral - that means that any changes to the filesystem whilst the dyno is running only last until that dyno is shut down or restarted. Each dyno boots with a clean copy of the filesystem from the most recent deploy. This is similar to how many container-based systems, such as Docker, operate.
In addition, under normal operations dynos will restart every day in a process known as "Cycling".
These two facts mean that the filesystem on Heroku is not suitable for persistent storage of data. In cases where you need to store data we recommend using a database addon such as Postgres (for data) or a dedicated file storage service such as AWS S3 (for static files). If you don't want to set up an account with AWS to create an S3 bucket we also have addons here that handle storage and processing of static assets: https://elements.heroku.com/addons
Source: https://help.heroku.com/K1PPS2WM/why-are-my-file-uploads-missing-deleted
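As a concrete illustration (my sketch, not from the Heroku docs), here is one way to replace a data.json file with a single JSONB row in Heroku Postgres using the pg module; the table name is made up, and DATABASE_URL is the config var the Postgres addon sets for you:

    const { Pool } = require('pg');

    // Heroku Postgres provides DATABASE_URL automatically.
    // Depending on your plan you may also need ssl: { rejectUnauthorized: false }.
    const pool = new Pool({ connectionString: process.env.DATABASE_URL });

    // Assumes: CREATE TABLE bot_state (id int PRIMARY KEY, data jsonb);
    async function saveData(data) {
      await pool.query(
        `INSERT INTO bot_state (id, data) VALUES (1, $1)
         ON CONFLICT (id) DO UPDATE SET data = $1`,
        [data] // pg serializes plain objects to JSON for jsonb columns
      );
    }

    async function loadData() {
      const res = await pool.query('SELECT data FROM bot_state WHERE id = 1');
      return res.rows.length ? res.rows[0].data : null;
    }

Unlike data.json, this survives both deploys and daily dyno cycling.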

File server in container

I can see that there are web and DB servers implemented to run as containers, and it occurred to me: why not implement a file server the same way, backed by centralized storage (e.g. a SAN)?
Has anyone tried this before? Any recommendations?
My basic idea is to use 2-3 docker images to create the file servers (mostly Windows servers), all mounting the same storage. For the front end, I may use DFS Namespaces to normalize the UNC path.
Windows-based images have the Server service disabled out of the box. It's impossible to start it, either, since the required drivers are removed as well. This will not be possible in Windows containers.

What does 'scaling node to multiple instances' mean?

I've been building a web app in node (and have built others in asp.net-mvc), and recently I've been trying to understand the concept of scaling an app to many users. Assuming a web app gets thousands (or millions) of concurrent users, I understand that the load should be split across more than one instance of node.
My question is: do all these instances run on the same server (virtual machine)? If so, do they (should they) access the same database? And if so, does this mean (if I use nginx) that nginx would be responsible for routing the different requests to the different node instances?
And assuming uploaded files are saved on the file system, do the different instances access the same directories? If not, and a person uploads images to the file system, then connects later and is routed to a different instance of node, how does he access the images he uploaded earlier? Is there some sort of sync process between the file systems?
Any resources/articles regarding this would be very helpful!
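To make the single-machine case concrete: Node's built-in cluster module is one common way to run several instances of the same app behind one port (a minimal sketch; the port is arbitrary). Across separate machines, a reverse proxy such as nginx does the routing, and anything the instances must share - the database, uploaded files - has to live in a shared service rather than a local directory, for the reasons given in the answers above.

    const cluster = require('cluster');
    const http = require('http');
    const os = require('os');

    if (cluster.isPrimary) { // Node 16+; use cluster.isMaster on older versions
      // Fork one worker per CPU core; the primary process distributes
      // incoming connections among them.
      os.cpus().forEach(() => cluster.fork());
    } else {
      // Each worker is a separate Node process, but they all share port 3000.
      http.createServer((req, res) => {
        res.end('handled by worker ' + process.pid + '\n');
      }).listen(3000);
    }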

Are Amazon Machine Images (AMIs) static, or can their code be modified and rebuilt?

I have a customer who wants me to do some customisations of the ERP system opentaps, which they use via the opentaps Amazon Elastic Compute Cloud (EC2) images. I've only worked with it on a normal server and don't know anything about images in the cloud. When I ssh in with the details the client gave me, there is no sign of the ERP installation directory I'd expect to see. I did originally expect that the image wouldn't be accessible, but the client assured me it was. I suppose they could be confused.
Would one have to create a new image and swap it out, or is there a way to alter the source and rebuild, like on a normal server?
Something is not quite clear to me here. First of all, EC2 images running in the cloud are just like normal virtual servers, so if you have access to the running instance there is no difference between an instance in the cloud and an instance on another PC in your home, for example.
You have to find out how opentaps is installed on the provided AMIs, then make your modifications, create an image from the modified instance, and save it to S3 for backup if necessary.
If you want to start with a fresh instance, you can start up any Linux/Windows distro on EC2, install opentaps yourself your way, and you are done.
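If you'd rather script that image-creation step than click through the console, a minimal sketch with the AWS SDK for JavaScript (v3) might look like this; the region, instance ID, and image name are placeholders:

    const { EC2Client, CreateImageCommand } = require('@aws-sdk/client-ec2');

    const ec2 = new EC2Client({ region: 'us-east-1' }); // placeholder region

    // Bake the current state of a modified instance into a new AMI.
    async function bakeImage() {
      const { ImageId } = await ec2.send(new CreateImageCommand({
        InstanceId: 'i-0123456789abcdef0',         // placeholder instance ID
        Name: 'opentaps-customized-' + Date.now(), // placeholder image name
      }));
      console.log('new AMI:', ImageId);
    }

You can then launch replacement instances from the new AMI, which is the cloud equivalent of "rebuilding" the server.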
