Heroku: retain files/folders while re-deploying a new version of a Node.js application

My folder structure for images looks like below
./public/img/**
Under the img folder I have the following folders: categoryImages, languageImages, socialShareImages and userImages.
I want to retain userImages, as it contains images uploaded by users, but every time I deploy a new version of my app to Heroku with "git push heroku master" it overwrites the userImages folder.
I tried leaving the userImages folder out of my git repository, but even that doesn't help. It looks like every time you upload a new version of the app, every folder and file is rewritten. So the question is: how can I retain this userImages folder?

Heroku has an ephemeral filesystem and, as far as I know (and as much as I wish it weren't so), you can't get any files to persist.
However, Heroku offers a free Postgres database per app, and anything in your database will persist. You'll have to use the Node.js pg package to access Postgres.
Edit: I believe it's also possible to use S3 (which might work better for something like images). You'll have to look that up though.
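For example, here's a minimal sketch of the pg approach, assuming a user_images table you'd create yourself (the table and column names are made up for illustration):

// Assumes: CREATE TABLE user_images (id SERIAL PRIMARY KEY, name TEXT, data BYTEA);
// DATABASE_URL is set automatically by the Heroku Postgres add-on.
const { Pool } = require('pg');
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Save raw image bytes (e.g. req.file.buffer from multer's memory storage)
async function saveImage(name, buffer) {
  const result = await pool.query(
    'INSERT INTO user_images (name, data) VALUES ($1, $2) RETURNING id',
    [name, buffer]
  );
  return result.rows[0].id;
}

// Load the bytes back to serve the image later
async function loadImage(id) {
  const result = await pool.query('SELECT data FROM user_images WHERE id = $1', [id]);
  return result.rows.length ? result.rows[0].data : null;
}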

Related

How to exclude a folder from deletion during a Heroku deploy?

I deployed an app on Heroku which can save files along with database records. The problem is that when I deploy some changes again, the folder with files is recreated without the files that were there before the deploy (the rows in the database remain, of course). How can I resolve this? I want the "files" folder in the app root to stay the same across all future deploys on Heroku.
App: Node.js + Express + React + PostgreSQL
This is not possible: Heroku provides an ephemeral filesystem which is wiped after each deployment.
You need to persist data in a database (MongoDB Atlas, the Heroku Postgres add-on) or use external file storage (AWS S3, Dropbox).
Check out Files on Heroku to see some options and examples.
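As a rough sketch of the S3 route (assuming the aws-sdk and multer packages; the bucket env var, route and field names are illustrative):

const express = require('express');
const multer = require('multer');
const AWS = require('aws-sdk');

const app = express();
const upload = multer({ storage: multer.memoryStorage() }); // keep uploads in memory, not on disk
const s3 = new AWS.S3(); // reads AWS credentials from env vars

app.post('/upload', upload.single('file'), async (req, res) => {
  const result = await s3.upload({
    Bucket: process.env.S3_BUCKET,
    Key: 'files/' + req.file.originalname,
    Body: req.file.buffer,
    ContentType: req.file.mimetype,
  }).promise();
  // store result.Location (the permanent URL) in your Postgres row
  res.json({ url: result.Location });
});

app.listen(process.env.PORT || 3000);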

How to dynamically change API URLs in a React app running in a Docker container without rebuilding?

What is the best way to manage API URLs in an application (created with create-react-app) run in a Docker container?
Actually, I want to build a Docker image and be able to run it on different environments (production and staging, for example) without building a new one.
My current solution is to start the container with an environment variable, like "docker run -e ENV=dev".
Add logic to read the env from query params; if no query param is passed, use the default. That way you can easily switch between envs on the fly. If you want to remember the user's choice, store it in storage and read it from there when the query param is not passed.
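A rough sketch of that idea in plain browser JavaScript (the env names and URLs are made up for illustration):

const API_URLS = {
  production: 'https://api.example.com',
  staging: 'https://staging-api.example.com',
  dev: 'http://localhost:8080',
};

function getApiUrl() {
  const params = new URLSearchParams(window.location.search);
  const fromQuery = params.get('env');
  if (fromQuery && API_URLS[fromQuery]) {
    localStorage.setItem('env', fromQuery); // remember the user's choice
    return API_URLS[fromQuery];
  }
  const stored = localStorage.getItem('env'); // fall back to the remembered choice
  return API_URLS[stored] || API_URLS.production; // then to the default
}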
I don't consider myself a React dev, but I have come across and solved the same problem with Vue and Docker.
After a little research specifically for React, it looks like you can use the public folder to share a mounted file/directory with the running container. Vue has a similar folder. The files in that directory are accessible from the app's root URL (/some-file.blah).
Your app's directory structure might look like:
./app
./app/src
./app/src/public/
./app/src/public/config.json
./app/src/... (the rest of your app)
I assume that config.json would then be available at /config.json after a build. You could then either include that file in your HTML template's script tags or load it on demand using AJAX, depending on what stage of the page lifecycle it's needed in.
Having very little experience with React myself, I assume someone more familiar can provide clarification (or better edits) to help out.
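To illustrate the on-demand option, something like this could run before the app renders (the shape of the config object is an assumption):

// Fetch the mounted config.json at startup; it is served from the app root.
fetch('/config.json')
  .then((res) => res.json())
  .then((config) => {
    window.APP_CONFIG = config; // e.g. { "apiUrl": "https://staging-api.example.com" }
    // ...then mount the React app and read window.APP_CONFIG where needed
  });

The container could then be started with a bind mount that overrides config.json per environment, so the same image serves staging and production.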

Uploaded Images Disappear When the Heroku Server Restarts

I am using Node.js to upload files to my Heroku server. Everything works fine, but when the Heroku server restarts or goes down, all the uploaded files disappear and hitting their URLs returns 'Not Found'.
I experienced this months ago. You need to host the images somewhere else, as Heroku does not support storing files on its servers. I ended up using Cloudinary to store files, and later on moved to a VPS.
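For reference, a minimal sketch of the Cloudinary approach (assuming the cloudinary npm package and the CLOUDINARY_URL env var that the Heroku add-on sets; the folder name is illustrative):

const cloudinary = require('cloudinary').v2; // reads CLOUDINARY_URL automatically

// Upload a user image from a temp path (e.g. req.file.path from multer)
// and get back a permanent CDN URL to store in your database.
async function uploadUserImage(localPath) {
  const result = await cloudinary.uploader.upload(localPath, { folder: 'userImages' });
  return result.secure_url;
}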
Media files and other content uploaded by users should not be stored on Heroku. Heroku was built to run application code, and it only cares about the files of your application that are in your repository.
Heroku discards your previous environment on every deploy and launches a new one based on your code repository.
So only application code should live there; everything else should be delegated to other services. In this case, for file storage, you should use something like S3 or a similar service.
Heroku has something called an ephemeral filesystem.
From the documentation:
Each dyno gets its own ephemeral filesystem, with a fresh copy of the most recently deployed code. During the dyno's lifetime its running processes can use the filesystem as a temporary scratchpad, but no files that are written are visible to processes in any other dyno, and any files written will be discarded the moment the dyno is stopped or restarted.

Duplicating a MODX Revo install

I'd like to make some changes to a MODX Revo install through a staging subdomain, with a separate database. What's the easiest way of doing this? I've been battling with it for two days.
I'm trying a new install now and replacing content, components and database content.
I end up moving/duplicating MODX sites between live and staging subdomains several times per week. Here's how I do it.
MySQL
Create a new blank staging database
Make sure your MySQL user can access the new database
Export/Backup your live database
Import the backup to your new/staging database
Files
Download the matching version of MODX from http://modx.com/download/previous-releases/ because you'll need the /setup/ directory (hopefully you didn't leave that on your server previously).
Copy the entire content of the 'public_html' or 'www' folder over to the staging subdomain folder. Don't forget the .htaccess file, which is sometimes hidden.
Upload the setup folder to your staging location on your server just like it would be found in a clean MODX install.
Update the three config.core.php files in the top directory, /connectors/, and /manager/, setting "MODX_CORE_PATH" to the correct directory for staging.
Update the 'core/config/config.inc.php' file. You'll need to update the database details and every instance of your directory structure to match the new staging location.
Run Setup
Run by going to staging.domain.com/setup
If you get any errors during setup, it probably means that you missed something that needed updating in one of the config files.
It's actually very similar to moving the site from one server to another, except you're duplicating to a subdomain on the same server instead. MODX has instructions for moving to a new server at http://rtfm.modx.com/revolution/2.x/administering-your-site/moving-your-site-to-a-new-server
There is another method to solve this problem.
Create a new database & user for your sub-site.
There is a nice GitHub repo where you can find a MODX install script that runs via the CLI. You'll end up with a freshly installed version of MODX.
Install the Vapor package from the official repo on your old site, then run the vapor script via the CLI. It creates a new package with a dump of your whole site (you should check the dependencies for xPDO objects in this script; for example, you can copy everything except users, or whatever else you choose).
Finally, copy the new package to core/packages on the new site and install it.
The dump is ready :)

Deployed version of NodeJS site not loading on AWS

I am doing my first deployment on AWS (using Elastic Beanstalk), and I am completely new to this.
I built a personal website using Node.js / Express, and on my local machine it loads just fine. Once I was ready to deploy a v1, I created an AWS account and set up a new Elastic Beanstalk application environment for Node. I set the static files to load from /public, set my Node version, and set the launch command to node app.js, but those were the only options I changed.
I zipped up my site (using Ctrl + Click -> Compress on a selection of all the site files) and uploaded that zip, and after some time it came up all green. Clicking the link to load my site, though, I get a half-finished version. Looking at my console, I see that four files return 404s, and because of that, four failures from RequireJS.
These four files are Backbone views, and they are contained in a folder with four other JS files that all load just fine (I can open them in the Chrome dev tools Sources tab on the deployed version). I am confused how just these four files could go missing.
Is there some way to FTP into wherever my files are contained, to confirm the files are in fact not present? And barring that, what steps are available to figure out what is occurring here? Like I said, it looks and loads just fine locally, and I am at a loss as to where to even start debugging something like this. The AWS docs I have read so far only tell me to do exactly what I have been doing.
Repo for the project is here: https://github.com/RyanMG/trustycode
And the deployment is here: http://trustycode.elasticbeanstalk.com/
The files it is having trouble with are under public/javascript/views/ (CodeView, AboutView, PhotoView, DesignView)
Any ideas / advice?
Is there some way to FTP into where ever my files are contained, to confirm the files are in fact not present?
You can ssh into the EC2 instance of the Elastic Beanstalk app using your pem file.
Check the files in /var/app/current.
I don't have the reputation to comment, but this is one of those common gotchas I ran into myself when switching from GNU/Linux to OS X at work. The default OS X filesystem is case-insensitive; the Linux world is case-sensitive.
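A hypothetical RequireJS example of the gotcha:

// On a case-insensitive OS X filesystem both of these find CodeView.js;
// on the case-sensitive Linux box serving the deployed app, only the
// exact-case path resolves and the other 404s.
require(['views/CodeView'], function (CodeView) { /* works everywhere */ });
require(['views/codeview'], function (CodeView) { /* 404 on Linux */ });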
