I'm fairly new to Git and I'm trying to figure out if I can do the following.
I'm developing an app which has a front-end and a back-end.
Let's say the front-end contains 10 files located under one path, e.g.:
/home/front_end/file1, /home/front_end/file2, ..., /home/front_end/file10
Meanwhile, the back-end contains 100 files located under a different path, e.g.:
/home/app/code/file1, /home/app/code/file2, ..., /home/app/code/file99
How can I create a repo which has two different locations?
You can't really.
What you can do is:
set up a repo wherever you want, with front_end and app_code folders in it (a consolidated command sketch follows below)
symlink /home/front_end to yourRepo/front_end
(as in ln -s /path/to/yourRepo/front_end /home/front_end)
symlink /home/app/code to yourRepo/app_code
(as in ln -s /path/to/yourRepo/app_code /home/app/code)
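Putting those steps together, the whole setup might look something like this (a sketch: the repo path is hypothetical, and it assumes you move the existing files into the repo before creating the links):
# Create the repo with the two folders
mkdir -p /path/to/yourRepo/front_end /path/to/yourRepo/app_code
cd /path/to/yourRepo && git init
# Move the existing files in, then point the old locations at the repo
mv /home/front_end/* /path/to/yourRepo/front_end/ && rmdir /home/front_end
mv /home/app/code/* /path/to/yourRepo/app_code/ && rmdir /home/app/code
ln -s /path/to/yourRepo/front_end /home/front_end
ln -s /path/to/yourRepo/app_code /home/app/code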
Related
I am building a web application for an online "build your own" card game. In the application, I have a cards.json file that holds custom card data. This file is changed with fs whenever a user creates a card. Whenever I push local changes, the cards.json file gets overwritten on deploy. That means all the remote data gets lost on every deploy. How can I include a cards.json file remotely but not change the file whenever I push changes using git push heroku master?
EDIT: I guess for clarification reasons, I have tried using a .gitignore as well as removing the file from the staging area. I'm not entirely sure, but I think the issue is that when the application is deployed the file is overwritten there.
So I just found out that the data created during runtime will always be deleted/reset.
https://devcenter.heroku.com/articles/dynos#ephemeral-filesystem
I guess the best fixes for anyone else who has this same issue are:
a) Look into Databases and Heroku Add-ons, or
b) This is very much a workaround, and there might be better ways to do it, but (a fuller end-to-end sketch also appears below):
# Go into a new directory, and use:
$ heroku ps:copy <FILENAME> --app <APPNAME>
# Then copy and paste the data from this file into your main repo.
# Each time you do this, you need to delete that file from the extra directory
# you created, as ps:copy only works when the file doesn't exist locally.
I think git fetch doesn't work in this instance, as it only pulls that unchanged file, rather than the changed one from the dyno.
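Spelled out, option (b) might look something like this (a sketch; the app name, scratch directory, and repo path are placeholders):
# Run from an empty scratch directory, since ps:copy won't overwrite a local file
mkdir -p /tmp/dyno-copy && cd /tmp/dyno-copy
heroku ps:copy cards.json --app your-app-name
# Merge the live data back into the repo's copy, then remove the scratch file
cp cards.json /path/to/your/repo/cards.json
rm cards.json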
Look up the .gitignore file in Git; that seems to me to be exactly what you're looking for.
If it doesn't recognize .gitignore properly at first:
git add [uncommitted changes you want to keep] && git commit
git rm -r --cached .
git add .
git commit -m "fixed untracked files"
In .gitignore, add cards.json along with its path,
e.g. src/test/resources/testdata/cards.json
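A more targeted variant, if cards.json is the only file you need to untrack (using the example path above), might be:
# Stop tracking the runtime-generated file without deleting the local copy
git rm --cached src/test/resources/testdata/cards.json
# Ignore it from now on
echo "src/test/resources/testdata/cards.json" >> .gitignore
git add .gitignore
git commit -m "Stop tracking cards.json"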
I am having an issue trying to use Capistrano to deploy an application that requires having several Amazon EFS bind mounts inside of the deployment (current) folder.
I have a directory on the webserver in the root called /webroot inside of it is where all of our code currently is along with about 7 folders (bind mounts) that are shared across three nodes.
Inside my deploy.rb I have the line set :deploy_to, "/webroot/testingCap", and Capistrano deploys the code into the symlinked folder current. This is great, but when it gets to the step of symlinking the bind-mount directories, for example /webroot/uploads, it throws an error:
rm -rf /webroot/uploads
rm: cannot remove '/webroot/uploads': Device or resource busy
I am not sure why it is trying to forcefully remove that directory. I thought it was supposed to just symlink to it.
My linked_dirs part looks like this inside of deploy.rb:
append :linked_dirs, "/webroot/uploads"
What am I doing wrong?
:linked_dirs only works with relative paths and always uses Capistrano's shared directory.
When you add e.g. "foo" to :linked_dirs, Capistrano will create a symlink within your deployed app. If anything already exists there, it will delete it first (that is why you are seeing the rm -rf).
The destination of that link will always be to the same name in Capistrano's shared directory. So the chain of events will be like this:
rm -rf /webroot/testingCap/current/foo
ln -s /webroot/testingCap/shared/foo /webroot/testingCap/current/foo
Thus if you look inside current, you will see a link that points like this:
foo -> /webroot/testingCap/shared/foo
Notice that the path relative to current is identical to the path relative to shared. This is how :linked_dirs works and you can't change it.
For example, if your app expects to store uploads in public/uploads, you will need the exact same relative path to exist inside shared in order for the link to be established. In other words, the link will point like this:
/webroot/testingCap/current/public/uploads -> /webroot/testingCap/shared/public/uploads
In your case, I suspect you can get this to work, but you'll need to make sure that your mount points are located exactly where Capistrano expects them to be.
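For example (a sketch assuming the app stores uploads in public/uploads inside each release; adjust the relative path to whatever your app actually uses), you could bind the existing EFS mount at the matching path under shared/ and point :linked_dirs at the relative path:
# Bind the existing EFS mount at the path Capistrano's shared directory expects
sudo mkdir -p /webroot/testingCap/shared/public/uploads
sudo mount --bind /webroot/uploads /webroot/testingCap/shared/public/uploads
# deploy.rb then declares only the relative path:
#   append :linked_dirs, "public/uploads"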
I've been trying to make my user-uploaded data on OpenShift accessible publicly. However, I run into the issue that I can't seem to make it work in any way.
I'm using NodeJS to upload the files to process.env.OPENSHIFT_DATA_DIR via express4 and fs.
The files upload just fine. However, I've read plenty of messages saying that I should link the folders together using "ln -sf ../route/to/app-root/data/folder linked_folder". Which I've done, but I cannot still access them publicly.
I honestly don't know what else I should do. Do the files sync automatically? Because that doesn't seem to be the case. Or should I be uploading to my repo folder, with OpenShift automatically linking it to the data dir folder?
My current exact setup when doing "ln" is:
cd app-root/repo/public/
ln -sf ../../data/user-files user-files
I'm doing this to link the user-files folder in repo/public with the OpenShift data/user-files folder.
So the thing is that I can't access the files in the front end by doing "ln" at all. No clue where to go from here.
All you need is:
1. Store all your files in the OPENSHIFT_DATA_DIR directory.
2. Write a script that runs before server.js or app.js; its job is to copy all data from OPENSHIFT_DATA_DIR into your desired directory inside your repo, like the public directory or wherever you want (see the sample below and the note on wiring it into the start command).
SAMPLE: initDataBeforeRun.js
var fs = require('fs');
// Recursively copy the persistent data dir into ./public before the app starts.
// (fs.cpSync needs Node >= 16.7; on older runtimes use a manual recursive copy.)
fs.cpSync(process.env.OPENSHIFT_DATA_DIR, './public', { recursive: true });
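To run it before the server (step 2 above), one option is to chain it into the start command, e.g. in package.json's "start" script (a sketch using the file names from above):
node initDataBeforeRun.js && node server.js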
I'm having a weird issue when pushing my app to Heroku.
It's an AngularJS front-end app with a basic Node.js server to be able to run it on Heroku.
I'm pushing a deployment branch with the app already "compiled" by Grunt into a /dist folder.
My problem is in the /dist/public directory. I have 4 folders: js, css, img and fonts; but after a push and checking on the dyno with heroku run bash, only the img one is in /dist/public, the 3 others aren't there.
I tried a new push, renaming the public folder to another name (i.e. shared), and this time all 4 folders are there, so it seems Heroku is doing something with folders named public, but I can't figure out why, or how to avoid this suppression/ignoring behaviour.
Has any of you encountered the same issue, and how can I resolve it without having to rename my public folder?
EDIT:
Adding my .gitignore file for those of you wondering about that:
/.vagrant/machines
/node_modules
/app/bower_components
/.sass-cache
/test
/app/src/lib/config.js
/dist
Do a git add -f dist/public/js dist/public/css dist/public/fonts from within your repo.
You have a .gitignore rule for /dist, which will ignore any files within /dist and its subdirectories unless they are already being tracked. My guess is that the files you have newly generated were not being tracked earlier, and hence they were silently ignored.
The -f flag in the git add above will add those forcefully (overriding the ignore rule), and so you will be able to make commits.
If there are only a few files, and you want to avoid adding the whole folders, I would suggest adding each of the individual files forcefully (i.e., with the -f flag).
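For example (these particular file names are hypothetical):
git add -f dist/public/js/app.min.js dist/public/css/style.min.css dist/public/fonts/icons.woff
git commit -m "Add compiled assets needed on Heroku"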
I am running my NodeJS project on DotCloud. Sadly, DotCloud's deployment is "project-intrusive": it requires a supervisord.conf file to reside in the app root. My deployment setup looks like this (using git repos).
project-deploy.git/prod/dotcloud.yml
project-deploy.git/prod/project -> project.git
(/prod/project uses project.git as a submodule to access the code)
Now, my thought is that I would eventually end up having different environments like this, e.g. dev, test and stage. The dev environment wouldn't even have a dotcloud.yml file, since it is expected to run everything locally.
Well, this works pretty well. But the problem is the supervisord.conf file, which is only needed for deployment to DotCloud: it now resides in the project.git repo, even though it doesn't belong there, since it is purely a deployment artifact.
Are there any modules or NodeJS scripts that let you put deployment configuration files elsewhere, and maybe even specify what the target environment is, e.g. node deploy.js --production, or something like that?
There is a way to get rid of supervisord.conf. Assuming that you want to run e.g. node app.js, you can put the following in dotcloud.yml:
www:
  type: nodejs
  process: node app.js
Now, of course, it doesn't solve the problem of the dotcloud.yml file itself; but at least it reduces clutter a little bit by removing supervisord.conf from the app root.