I've been trying to make my user-uploaded data on OpenShift publicly accessible, but I can't seem to make it work in any way.
I'm using Node.js to upload the files to process.env.OPENSHIFT_DATA_DIR via Express 4 and fs.
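Roughly, the upload side looks like this (simplified sketch; multer here stands in for whatever multipart middleware is actually in place, and the route and field names are just examples):

var express = require('express');
var multer = require('multer'); // stand-in for the actual upload middleware
var app = express();
var upload = multer({ dest: process.env.OPENSHIFT_DATA_DIR });

// uploaded files end up inside OPENSHIFT_DATA_DIR
app.post('/upload', upload.single('file'), function (req, res) {
  res.send(req.file.filename);
});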
The files upload just fine. However, I've read plenty of messages saying that I should link the folders together using "ln -sf ../route/to/app-root/data/folder linked_folder", which I've done, but I still cannot access them publicly.
I honestly don't know what else I should do. Do the files sync automatically? That doesn't seem to be the case. Or should I be uploading to my repo folder and letting OpenShift automatically link it to the data dir folder?
My exact current setup when running "ln" is:
cd app-root/repo/public/
ln -sf ../../data/user-files user-files
I'm doing this to link the user-files folder in repo/public with the OpenShift data/user-files folder.
The thing is, I can't access the files in the front end via "ln" at all. No clue where to go from here.
All you need is:
1. Store all your files in the OPENSHIFT_DATA_DIR directory.
2. Write a script that runs before server.js or app.js; its job is to copy all data from OPENSHIFT_DATA_DIR to your desired directory inside your repo, such as the public directory or whatever you want.
SAMPLE: initDataBeforeRun.js
var fs = require('fs'), src = process.env.OPENSHIFT_DATA_DIR;
// copy each file from the data dir into ./public (assumes the data dir holds only regular files)
fs.readdirSync(src).forEach(function (name) {
  fs.writeFileSync('./public/' + name, fs.readFileSync(src + '/' + name));
});
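One way to guarantee the script runs before the server is to chain it in the start script in package.json (assuming the app is launched via npm start; the file names are just examples):

"scripts": {
  "start": "node initDataBeforeRun.js && node server.js"
}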
I am currently using Node.js deployed on AWS Elastic Beanstalk. I have a function that writes a PDF and then emails it off, but it says the file path can't be found. I've verified that the project directory seems to be /var/app/current/, but changing the file path reference doesn't seem to remove the error. Any idea how to go about fixing this?
The /var/app/current/ directory does not exist initially. It's only created at the very last stage of your deployment.
The deployment happens in the /var/app/staging/ folder, and at the very end, once everything finishes, /var/app/staging/ is moved to /var/app/current/.
Thus, I would not recommend using absolute paths in your project or config files. It's better to use relative paths, or container_commands for config scripts:
The specified commands run as the root user, and are processed in alphabetical order by name. Container commands are run from the staging directory, where your source code is extracted prior to being deployed to the application server.
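For example, rather than hard-coding /var/app/current, the PDF path can be resolved relative to the module itself (a sketch; the folder and file names are hypothetical):

// build the output path relative to this file instead of an absolute /var/app/... path
var path = require('path');
var pdfPath = path.join(__dirname, 'output', 'report.pdf'); // hypothetical names
// write the PDF to pdfPath, then attach it to the outgoing email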
I have a Node-RED app running in a Docker container, with the aim of periodically reading the contents of a directory where .csv files are constantly updated and new .csv files are sometimes added. The point is to read new entries periodically, parse the data, and send them onward.
I have not utilized the numerous 'contrib' nodes, as I have enabled the Node.js 'fs' module and played with it. Additionally, the built-in 'file' and 'file in' Node-RED nodes are useful for reading the .csv files' contents, so that is not an issue.
The problem comes with new .csv files being added to the directory where all the .csv files are. I want to be able to read all the file names and subsequently read all the .csv files.
I have mounted the .csv file directory into the Docker container, and when testing whether I'm able to read the file names, weird things happen. Even though the files are visible in the container (viewed using docker exec -it CONTAINER /bin/bash), a piece of code containing fs.readdir does not list the files. When I try fs.readdir to see the contents of the /data directory, which is mounted into the container, it lists the contents maybe 10% of the time (I inject a timestamp into the node to run it).
As you can see from the image, the contents of the directory in question are not listed on every execution of the node. The contents of the mounted directory containing the .csv files are never listed when running this node with the correct path as parameter.
The operating system is CentOS 7, where I am not a sudoer. I have managed to make it so that none of the mounted files or directories are owned by root; they are owned by the node-red user within the container. I managed to pull this directory file listing through on my Ubuntu machine, where I am a sudoer, but as none of the stuff is root-owned there either, I am not sure if that is the problem. I have a feeling this might be an operating-system-related thing.
Notes:
All relevant files and directories have permissions rwxr-xr-x
I have tried mounting the directory containing the .csv files under the /data directory, and also as its own directory directly under root as /files
I am able to read the file contents with the Node-RED file nodes, just not the directories. Reading static file names is not enough as the directory contents keep changing
I have enabled NodeJS 'fs' module from the settings.js file which is mounted into the container
The Node-RED node (in image) does not output any errors (I tried this by adding an error return to the function in the image)
I have tried to run the Node-RED container as root user and without defining the user
I am running the Node-RED container using docker-compose
I hope this was not too much text or too unclear; I just wanted to make sure that at least most of the stuff I have tried is written down here. If someone has some insight into the workings of Node-RED under Docker and the Node.js fs module, it would be most appreciated :)
The core Watch node should do all of this for you; there is no need to write function nodes.
If you want to walk subdirectories, make sure you tick the right box in the config.
From the Sidebar docs for the watch node:
The full filename of the file that actually changed is put into msg.payload and msg.filename, while a stringified version of the watch list is returned in msg.topic.
msg.file contains just the short filename of the file that changed.
msg.type has the type of thing changed, usually file or directory, while msg.size holds the file size in bytes.
To answer my own question about why Node-RED was unable to read the directory contents most of the time: it was because of using the asynchronous fs.readdir call. When I switched to the synchronous version, fs.readdirSync, Node-RED was able to read the directory contents without problems.
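For reference, the working function node ended up roughly like this (a sketch; it assumes fs is exposed to function nodes via functionGlobalContext in settings.js, and that the mount point is /data):

// Node-RED function node: list the mounted directory synchronously
var fs = global.get('fs');
msg.payload = fs.readdirSync('/data'); // or '/files', depending on where the directory is mounted
return msg;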
I am having an issue trying to use Capistrano to deploy an application that requires having several Amazon EFS bind mounts inside of the deployment (current) folder.
I have a directory in the root of the web server called /webroot; inside it is where all of our code currently lives, along with about 7 folders (bind mounts) that are shared across three nodes.
Inside my deploy.rb I have the line set :deploy_to, "/webroot/testingCap", and Capistrano deploys the code into the symlinked current folder. This is great, but when it gets to the step of symlinking the bind-mount directories, for example /webroot/uploads, it throws an error:
rm -rf /webroot/uploads
rm: cannot remove '/webroot/uploads': Device or resource busy
I am not sure why it is trying to forcefully remove that directory. I thought it was supposed to just symlink to the directory.
My linked_dirs part looks like this inside of deploy.rb:
append :linked_dirs, "/webroot/uploads"
What am I doing wrong?
:linked_dirs only works with relative paths and always uses Capistrano's shared directory.
When you add e.g. "foo" to :linked_dirs, Capistrano will create a symlink within your deployed app. If anything already exists there, it will delete it first (that is why you are seeing the rm -rf).
The destination of that link will always be to the same name in Capistrano's shared directory. So the chain of events will be like this:
rm -rf /webroot/testingCap/current/foo
ln -s /webroot/testingCap/shared/foo /webroot/testingCap/current/foo
Thus if you look inside current, you will see a link that points
foo -> /webroot/testingCap/shared/foo
Notice that the path relative to current is identical to the path relative to shared. This is how :linked_dirs works and you can't change it.
For example, if your app expects to store uploads in public/uploads, you will need the exact same relative path to exist inside shared in order for the link to be established. In other words, the link will point like this:
/webroot/testingCap/current/public/uploads -> /webroot/testingCap/shared/public/uploads
In your case, I suspect you can get this to work, but you'll need to make sure that your mount points are located exactly where Capistrano expects them to be.
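Concretely, the setup might look something like this (a sketch, assuming the app reads uploads from public/uploads; adjust the relative path to your layout):

# deploy.rb: linked_dirs takes paths relative to the release, not absolute ones
append :linked_dirs, "public/uploads"
# the /webroot/uploads EFS mount would then need to be available at /webroot/testingCap/shared/public/uploads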
I'm having a weird issue when pushing my app to Heroku.
It's an AngularJS front-end app with a basic Node.js server so that it can run on Heroku.
I'm pushing a deployment branch with the whole app already "compiled" by Grunt into a /dist folder.
My problem is in the /dist/public directory: I have 4 folders, js, css, img and fonts; but after a push, checking on the dyno with heroku run bash, only the img one is in /dist/public, and the other 3 aren't there.
I tried a new push after renaming the public folder to another name (i.e. shared), and this time all 4 folders are there, so it seems Heroku is doing something with folders named public, but I can't figure out why, or how to avoid this suppression/ignoring behaviour.
Has any of you encountered the same issue, and how can I resolve it without having to rename my public folder?
EDIT:
Adding my .gitignore file for those of you wondering about that:
/.vagrant/machines
/node_modules
/app/bower_components
/.sass-cache
/test
/app/src/lib/config.js
/dist
Do a git add -f dist/public/js dist/public/css dist/public/fonts from within your repo.
You have a .gitignore rule for /dist, which will ignore any files within /dist and its subdirectories unless they are already being tracked. My guess is that the files you newly generated were not being tracked earlier, and hence they were silently ignored.
The -f flag in the git add above will add those forcefully (overriding the ignore rule), and so you will be able to make commits.
If there are only a few files, and you want to avoid adding the whole folders, I would suggest adding each of the individual files forcefully (i.e., with the -f flag).
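As a side note, if you want to confirm exactly which ignore rule is catching a given file, git check-ignore can show the matching pattern (the path below is just an example):

git check-ignore -v dist/public/js/app.js
# prints the .gitignore file, line number and pattern that match this path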
I'm fairly new to Git and I'm trying to figure out if I can do the following.
I'm developing an app which has a front-end and a back-end.
Let's say the front-end contains 10 files located in this path.
E.g.:
/home/front_end/file1, /home/front_end/file2, ..., /home/front_end/file10
While, the back-end contains 100 files located in a different path.
E.g.:
/home/app/code/file1, /home/app/code/file2, ..., /home/app/code/file99
How can I create a repo which has two different locations?
You can't really.
What you can do is:
setting up a repo wherever you want, with 'front_end' and 'app_code' folders in it
symlink /home/front_end to yourRepo/front_end
(as in ln -s /path/to/yourRepo/front_end /home/front_end)
symlink /home/app/code to yourRepo/app_code
(as in ln -s /path/to/yourRepo/app_code /home/app/code)
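Note that if /home/front_end and /home/app/code already exist as real directories, you would move their contents into the repo first and only then create the links, roughly like this (paths taken from the question):

mv /home/front_end /path/to/yourRepo/front_end
ln -s /path/to/yourRepo/front_end /home/front_end
mv /home/app/code /path/to/yourRepo/app_code
ln -s /path/to/yourRepo/app_code /home/app/code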