I am trying to avoid using a DB in my simple RESTful app.
I created a "posts.txt" file which has all the posts in it, the app reads from this file and creates the posts array (JSON.parse).
The problem is, when I "git push heroku master", the "posts.txt" on Heroku gets overridden and thus I lose all the posts created by guests.
I tried to .gitignore this file, but it seems I'm just doing it wrong (or that I didn't understand the idea of "untracking" a file).
What can I do in order to prevent the overriding (I just don't want to push a new "posts.txt" to Heroku every time)?
Because your Heroku app may be run on multiple servers over time, there is no way to ensure that your posts.txt file remains consistent. Also, as you make changes (and as you have noted), it can easily get overwritten.
Heroku can terminate your application and start it on another server as needed, almost like a serverless-style setup.
That means there is no real way to ensure stable data persistence on Heroku without some kind of database layer.
A great point from the comments that I forgot to mention: the file will also be deleted when the dyno cycles, because the filesystem is ephemeral. You can find more information about uploaded files going missing or being deleted on Heroku's site here.
One other thing: even if you were using a VPS or something similar, you'd still run into the problem of how to sync the posts down to your local machine during development and keep everything in sync. A database layer is for sure the way to go here.
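To make that concrete, here is a minimal sketch of what the database layer could look like with Heroku Postgres and the pg client; the posts table and its columns are my assumptions, not something from your app:

const { Pool } = require('pg'); // npm install pg

// Heroku Postgres injects the connection string as DATABASE_URL.
const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  ssl: { rejectUnauthorized: false },
});

// Hypothetical schema: CREATE TABLE posts (id SERIAL PRIMARY KEY, body TEXT);
async function addPost(body) {
  await pool.query('INSERT INTO posts (body) VALUES ($1)', [body]);
}

async function listPosts() {
  const { rows } = await pool.query('SELECT id, body FROM posts ORDER BY id');
  return rows;
}

Posts stored this way survive dyno restarts and deploys, and your local machine can point at the same database during development.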
I've googled this question a lot, but I haven't found the right answer.
I've built an example app on NodeJS without database connectivity. While developing, I stored data in a separate dir, "fakeDB", which contains "tests", "questions", "users" dirs, and so on. In "tests" there are JSON files representing test data (a set of questions and answers).
When I deployed the app on Heroku, tests were stored correctly. When a new test is created, it is saved in the "tests" dir, and I have access to it later.
But when I push a new commit to the GitHub repo, the tests that were created on Heroku get deleted.
How can I get a copy of my Heroku repo onto my local machine?
NOTE: I've run heroku run bash, and ls printed the list of my local files, not the ones from the remote. Also, I've done git pull heroku into a separate dir, but that also contained only my previous files, without the ones created on Heroku.
Heroku's filesystem is ephemeral:
Each dyno gets its own ephemeral filesystem, with a fresh copy of the most recently deployed code. During the dyno’s lifetime its running processes can use the filesystem as a temporary scratchpad, but no files that are written are visible to processes in any other dyno and any files written will be discarded the moment the dyno is stopped or restarted. For example, this occurs any time a dyno is replaced due to application deployment and approximately once a day as part of normal dyno management.
This means that you can't reliably create files and store them on the local filesystem. They aren't shared across dynos, and they periodically disappear. Furthermore, Heroku doesn't provide a mechanism for easily retrieving generated files.
The official recommendation is to use something like Amazon S3 for storing uploads, generated files, etc. Of course, depending on what's in your files, a database might be a better fit.
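For the S3 route, a rough sketch with the aws-sdk client is below; the bucket name and key layout are assumptions:

const AWS = require('aws-sdk'); // npm install aws-sdk

// Credentials are read from AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY env vars.
const s3 = new AWS.S3();

// Hypothetical bucket and key layout for the generated test files.
async function saveTest(testId, testData) {
  await s3.putObject({
    Bucket: 'my-app-tests',
    Key: `tests/${testId}.json`,
    Body: JSON.stringify(testData),
    ContentType: 'application/json',
  }).promise();
}

async function loadTest(testId) {
  const obj = await s3.getObject({
    Bucket: 'my-app-tests',
    Key: `tests/${testId}.json`,
  }).promise();
  return JSON.parse(obj.Body.toString('utf8'));
}

Files written this way are shared by all dynos and survive restarts and deploys.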
What I'm trying to accomplish:
Update the logging level of a Node microservice, without stopping the running service, by detecting when a change to the config file is saved.
The reason: work policies demand different levels of approval based on what gets changed.
Updating a config file is a "standard change" (considered "safe", requiring low ceremony to accomplish).
Changing the config file and restarting the service is a "normal change" (considered "not safe", requiring VP approval).
This capability will go a long way towards allowing us to improve the logging in our services.
The technical challenge:
Both node-config and bunyan appear to require a restart in order to accept changes.
Results of attempting to do proper research prior to submitting a question:
Live updating Node.js server
Can node-config reload configurations without restarting Node?
(This last post has a solution that worked for someone, but I couldn't get it to work.)
In theory, I should be able to delete both the app-level objects using the lines:
delete require.cache[require.resolve('config')];
delete require.cache[require.resolve('logging')];
and then recreate both objects with the new configurations read from the changed config file.
Deleting the config and logging objects may work, but I'm still a Node newbie, so every way I attempted to use these magical lines failed for me.
(Caveat: I was able to delete the objects; I just couldn't get them recreated in such a way that my code would use the newly created objects.)
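For concreteness, here is a minimal sketch of the pattern I've been attempting, assuming node-config's require-based loading and bunyan's log.level() setter, which changes the level of an existing logger without recreating it (the config file path and key name are my own):

const fs = require('fs');
const bunyan = require('bunyan');

let config = require('config');

// Create the logger once; bunyan lets us change its level later
// without recreating the logger object.
const log = bunyan.createLogger({
  name: 'uService',
  level: config.get('logging.level'), // assumed key in config/default.json
});

// Re-read the config file whenever it changes and apply the new level.
fs.watchFile('./config/default.json', () => {
  // Evict node-config from the require cache so it re-reads the file.
  delete require.cache[require.resolve('config')];
  config = require('config');

  // log.level() sets the level on all of this logger's streams.
  log.level(config.get('logging.level'));
  log.info('Log level updated to %s', config.get('logging.level'));
});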
My horrifically ugly attempt at this can be found on my GitHub, in the "spike_LiveUpdate" directory of:
https://github.com/MalcolmAnderson/uService_stub
The only thing we are attempting to change is the log level of the application so that, if we need to, we can:
bounce the logging level "to 11" for a small handful of minutes,
have the user demonstrate their error,
and then put it back to the normal logging level.
(It would also be a useful tool to have in our back pocket so when the dealers of FUD say, "But if you get the logging level wrong, we will have to use an emergency change to fix it. Sorry, we appreciate your desire to get a better picture of what's happening in the application, but we just can't justify the risk you're proposing.")
Research prior to writing question:
Attempt to solve the problem.
Search for "node live update", "node live configuration update", "node no restart configuration change"
Search for the above, but replacing "live update" with "rolling update" (this got a lot of Kubernetes hits, which may be the way I have to go with this).
Bottom line: attempting to change both node-config and Bunyan logging without a restart.
[For all I know, there's a security reason for not allowing live updates.]
If I understand correctly, you need to change the log level of your application without restarting it.
Given that, there are a couple of approaches I can suggest.
You can create a second server that includes the new deployment of your changed code, route new traffic to the new server, and stop routing new traffic to the old one. When the old server has drained the requests it is currently handling, you can shut it down and go with the new one, or redeploy your application to return to normal.
You can create a second application layer on the same server, on a different port (which I do not recommend), and do the same thing as in the first bullet.
If you're not running on Kubernetes, you need to implement your own rolling-update strategy (k8s does it automatically).
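If you have to stay on a single box, another option (my own sketch, using only Node's built-in cluster module, not something from the question) is to recycle workers in place: on a signal, fork replacements that read the new config, then gracefully disconnect the old workers, so in-flight requests are never dropped:

const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) {
  // Start one worker per CPU.
  os.cpus().forEach(() => cluster.fork());

  // On SIGHUP (e.g. after editing the config), recycle workers one by one:
  // fork a replacement first, then gracefully disconnect the old worker.
  process.on('SIGHUP', () => {
    for (const id of Object.keys(cluster.workers)) {
      const oldWorker = cluster.workers[id];
      const newWorker = cluster.fork();
      newWorker.once('listening', () => oldWorker.disconnect());
    }
  });
} else {
  // Each worker reads its config at startup, so replacement workers
  // pick up the new settings without dropping existing connections.
  http.createServer((req, res) => res.end('ok\n'))
      .listen(process.env.PORT || 3000);
}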
I have a file which is around 1.2 GB in size, and I want to load a single instance of it when computing results for my website. Is it possible to share the same instance across all users of the website? As I understand it, platforms like Heroku create separate instances of the website for every user, so is there any way to make this happen? I apologize in advance if the question is naive!
Bummer, Heroku only allows you to have 500 MB of files in your deploys.
If not for that, you could just commit that big file to your repo and use it on your site.
Heroku doesn't create an instance of the app for every user but for every deploy. Once you deploy your application, your entire old server is thrown away and a new one is created.
Let's say you need a temporary file: you could put it at /tmp/something and use it for a while. Once you make a deploy, since the server is discarded and a new one is spawned, that file wouldn't be there and you'd have to recreate it for the first request.
Anyway, that doesn't look good. Even if Heroku let you store the file there, you'd also have to parse and dig through it in order to display the results on your site, which is likely to make you run out of memory too.
I would recommend you review your approach to this problem: maybe break that file into small pieces, or store it in a database so you can perform smaller calculations.
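If the file is line-oriented, you could at least process it as a stream, so only one chunk is in memory at a time; a rough sketch (the aggregation is a placeholder):

const fs = require('fs');
const readline = require('readline');

// Stream the big file line by line instead of loading all 1.2 GB at once.
async function summarize(path) {
  const rl = readline.createInterface({
    input: fs.createReadStream(path),
    crlfDelay: Infinity,
  });

  let lines = 0;
  for await (const line of rl) {
    lines += 1; // placeholder: aggregate whatever your results need
  }
  return { lines };
}

summarize('./bigfile.txt').then(console.log);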
If there is absolutely no way around it, you could have a server somewhere else, like DigitalOcean, and build a small API to perform the calculations there and have your site call it.
If you want, I can give you a hand switching the approach; just post a different question, comment the link to it here, and I'll give it a shot.
I have a server deployed on Heroku through Heroku's automatic deployment from GitHub. On that server I have a file named subscription.json, which stores user data whenever a user is registered. I want to see that file.
How can I access that file?
If that file is in your repository, you should be able to read it like any regular file.
However, this isn't going to work on Heroku:
which stores user data whenever a user is registered
Heroku's filesystem is dyno-local and ephemeral. Any changes you make to it will be lost the next time your dyno restarts. This happens frequently (at least once per day).
For saving things like user uploads, Heroku recommends using a third-party service like Amazon S3. But in this case I think a database would be a much more appropriate solution.
Heroku has its own PostgreSQL service that is very well supported and available out of the box. If you prefer to use another database there are plenty of options.
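As a rough illustration (the subscriptions table and its shape are my assumptions), registrations could go into Postgres instead of subscription.json:

const { Pool } = require('pg'); // npm install pg
const pool = new Pool({
  connectionString: process.env.DATABASE_URL, // set by the Heroku Postgres add-on
  ssl: { rejectUnauthorized: false },
});

// Hypothetical table: CREATE TABLE subscriptions (id SERIAL PRIMARY KEY, data JSONB);
async function recordSubscription(user) {
  // Store the whole registration payload as JSONB instead of appending to a file.
  await pool.query('INSERT INTO subscriptions (data) VALUES ($1)', [JSON.stringify(user)]);
}

The data then survives dyno restarts, and you can inspect it at any time with heroku pg:psql and a plain SELECT.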