Heroku with Django and node build - node.js

I am pushing everything from Bitbucket to Heroku using Pipelines. The problem is static files, because I am using django-gulp, which compiles all Sass files (using Node.js and gulp) when I call collectstatic.
The pipeline first pushes the code to Heroku and runs it, but as it turns out all the other scripts (pip install, npm install, ...) execute on the Bitbucket side, not on Heroku. My Procfile has this inside:
web: gunicorn magistrska_web.wsgi --log-file -
Website is running, but there are no static files.
I have to run with DISABLE_COLLECTSTATIC=1, otherwise I get the following:
remote: -----> $ python manage.py collectstatic --noinput
remote: /bin/sh: 1: gulp: not found
What I need is for Heroku to run npm install before collectstatic, or it won't work, but I am having a hard time finding any documentation on this.
heroku local web works fine, because I ran collectstatic locally beforehand.
bitbucket-pipelines.yml configuration:
image: nikolaik/python-nodejs
pipelines:
  default:
    - step:
        script:
          - git push https://heroku:$HEROKU_API_KEY@git.heroku.com/$HEROKU_APP_NAME.git HEAD
          - pip install -r requirements.txt
          - npm install
          - npm install -g gulp
          - python manage.py collectstatic --noinput

The solution was way too easy!
I needed to add a second, Node.js buildpack on Heroku and it worked! Example below.
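For reference, a minimal sketch of what that looks like with the Heroku CLI (assuming heroku/python is already the app's buildpack; the answer to the node.exe question further down shows the same commands in more detail):

heroku buildpacks:add --index 1 heroku/nodejs
heroku buildpacks   # heroku/nodejs should now run before heroku/python

With the Node.js buildpack first, npm install runs on Heroku before the Python buildpack calls collectstatic, so gulp is available there, provided gulp is listed in package.json.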

Related

Running node.exe from Django on Heroku - [Errno 2] No such file or directory

I'm deploying a Django project to Heroku. I need to run node.exe, so I copied node.exe into my project folder and deployed it all to Heroku.
Here is the code that uses node in Django:
import io
import subprocess

def extract_eval_unpacked(text):
    # write the packed JavaScript to a temp file and run it with node.exe
    with io.open('temp.js', 'w', encoding='utf-8') as f:
        f.write(text)
    cmd = 'node.exe temp.js'
    process = subprocess.check_output(cmd, shell=False)
    if process:
        return process.decode()
    return None
This works locally, but when run on Heroku I get
[Errno 2] No such file or directory: 'node.exe temp.js': 'node.exe temp.js'
I checked on Heroku and I see node.exe:
$ heroku run ls
Running ls on ⬢ extractmedia... up, run.4541 (Free)
1 client_id.txt manage.py Procfile.windows runtime.txt test1.py
1.py gettingstarted node.exe README.md staticfiles
app.json hello Procfile requirements.txt temp.js
Why isn't this working, and how can I fix it?
I need to run node.exe, so I copied node.exe to my folder and deploy all them to Heroku
Don't do this.
Heroku doesn't run Windows, so it won't be able to execute a Windows binary like node.exe. There's a much better way to add Node.js to your application: use multiple buildpacks.
Set your main buildpack:
heroku buildpacks:set heroku/python
Add a second buildpack for Node.js:
heroku buildpacks:add --index 1 heroku/nodejs
Check your buildpacks and make sure that Python comes last:
heroku buildpacks
Add a package.json file to the root of your repository, e.g. by running npm init or yarn init (a minimal example follows these steps).
If you depend on any specific Node.js libraries, include them as dependencies, e.g. via yarn add or npm install --save. Commit this file.
Update your Python code to call node instead of node.exe (see the sketch after these steps). Commit that change.
Deploy.
You should see Node.js and any JavaScript dependencies get installed, followed by Python and all of your Python dependencies. node should be available at runtime.
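For illustration, a minimal package.json (all names here are placeholders) could be as small as:

{
  "name": "my-app",
  "version": "1.0.0",
  "private": true,
  "dependencies": {}
}

And a sketch of the helper from the question updated to call node from the PATH instead of the bundled node.exe, passing the command as a list so no shell is involved:

import io
import subprocess

def extract_eval_unpacked(text):
    # write the packed JavaScript to a temporary file
    with io.open('temp.js', 'w', encoding='utf-8') as f:
        f.write(text)
    # `node` is provided on the PATH by the heroku/nodejs buildpack
    output = subprocess.check_output(['node', 'temp.js'])
    return output.decode() if output else None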

Docker Compose w/ Gulp - Local gulp not found

I am attempting to use gulp inside a Docker container.
I have the following Dockerfile
FROM golang:alpine
RUN apk --update add --no-cache git nodejs
RUN npm install --global gulp
ENV GOPATH=/go PATH=$PATH:/go/bin
VOLUME ["/go/src/github.com/me/sandbox", "/go/pkg","/go/bin"]
WORKDIR /go/src/github.com/me/sandbox
CMD ["gulp"]
and I have the following docker-compose.yml
version: '2'
services:
  service:
    build: ./service
    volumes:
      - ./service/src/:/go/src/github.com/me/sandbox
docker-compose build builds successfully, but when I run docker-compose up, I get the following error message
Recreating sandbox_service_1
Attaching to sandbox_service_1
service_1 | [22:03:40] Local gulp not found in /go/src/github.com/me/sandbox
service_1 | [22:03:40] Try running: npm install gulp
I have tried several different things to try to fix it.
Tried also installing gulp-cli globally and locally
Tried installing gulp locally with npm install gulp
Tried moving the npm install --global gulp after the WORKDIR
Tried different paths for volumes.
My guess is that it has something to do with the volumes, because when I get rid of anything having to do with a volume, it doesn't complain.
My project structure is shown in the screenshot below.
This is what worked for me.
RUN npm install -g gulp
RUN npm link gulp
You need a local version of gulp as well as a global one.
Adding this line should fix your issue:
RUN npm i gulp
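Putting the suggestions together, a sketch of how the tail of the Dockerfile from the question could look (the earlier lines stay as they are):

WORKDIR /go/src/github.com/me/sandbox
# make the globally installed gulp resolvable as a local module
# (alternatively: RUN npm install gulp)
RUN npm link gulp
CMD ["gulp"]

Keep in mind that the bind mount from docker-compose.yml replaces the contents of that directory at run time, so a node_modules created there at build time will be hidden; installing or linking gulp in a parent directory (Node resolves modules by walking up the tree) is one way around that.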

CI using Gitlab and Heroku

I'm using react-starter-kit for developing my web application, and Gitlab as my remote git repository.
I want to configure a continuous deployment such that on every push to the master, the npm run deploy script will be executed.
From my local PC, executing npm run deploy builds the node application and pushes it to the remote Heroku git repository. It uses the local credentials on my PC.
I have configured the gitlab runner (in the .yml file) to execute the same npm run deploy, but it fails with Error: fatal: could not read Username for 'https://git.heroku.com': No such device or address.
I need to find a way to authenticate the gitlab runner to heroku. I have tried to set env variable HEROKU_API_KEY, but it also didn't work.
How can I push from my gitlab runner to my heroku git repo?
You should use dpl in your .yml. Try something like this in the .gitlab-ci.yml:
before_script:
  - apt-get -qq update
  - npm set progress=false
  - npm install --silent

deploy:
  script:
    - npm run deploy
    - apt-get install -yqq ruby ruby-dev --silent
    - gem install dpl
    - dpl --provider=heroku --app=your-app-name --api-key=$HEROKU_API_KEY
  only:
    - master
You preferably want to add the environment variable $HEROKU_API_KEY in GitLab's CI settings, not here directly.
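If you would rather avoid installing dpl, another option (the same approach as the Bitbucket pipeline at the top of this page) is to push to Heroku's git endpoint with the API key embedded in the URL, for example (the app name is a placeholder):

git push https://heroku:$HEROKU_API_KEY@git.heroku.com/your-app-name.git HEAD:master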

Node.js Deployment through Elastic Beanstalk using Docker

I am trying to deploy a node.js react based isomorphic application using a Dockerfile linked up to Elastic Beanstalk.
When I run my docker build locally I am able to do so successfully. I have noticed however that the npm install command is taking a fair amount of time to complete.
When I try to deploy the application using the eb deploy command, it pretty much crashes the Amazon service, or I get an error like this:
ERROR: Timed out while waiting for command to Complete
My guess is that this is down to my node_modules folder being 300 MB. I have also tried adding an artifact declaration to the config.yml file and deploying that way, but I get the same error.
Is there a best-practice way of deploying a Node application to AWS Elastic Beanstalk, or is the best way to manually set up an EC2 instance and rely on CodeCommit git hooks?
My Dockerfile is below:
FROM node:argon
ADD package.json /tmp/package.json
RUN npm config set registry https://registry.npmjs.org/
RUN npm set progress=false
RUN cd /tmp && npm install --silent
RUN mkdir -p /usr/src/app && cp -a /tmp/node_modules /usr/src/app
WORKDIR /usr/src/app
ADD . /usr/src/app
EXPOSE 8000
CMD npm run build && npm run start
...and this is my config.yml file:
branch-defaults:
  develop:
    environment: staging
  master:
    environment: production
global:
  application_name: website-2016
  default_ec2_keyname: key-pair
  default_platform: 64bit Amazon Linux 2015.09 v2.0.6 running Docker 1.7.1
  default_region: eu-west-1
  profile: eb-cli
  sc: git
You should change your platform to a more current one (I'm using Docker 1.9.1, and there might be newer versions).
I'm using an image from Docker Hub to deploy my apps into Beanstalk. I build the images on our CI servers and then run a deploy command that pulls the image from Docker Hub. This can save you a lot of build errors (and build time) and is more in touch with the Docker philosophy of immutable infrastructure.
300 MB for node_modules is not small, but it should present no problem. We regularly deploy dependencies and code of this size.
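If you go the prebuilt-image route, Elastic Beanstalk can pull the image at deploy time from a single-container Dockerrun.aws.json instead of building from your Dockerfile. A minimal sketch (the image name is a placeholder; the port matches the EXPOSE 8000 above):

{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "your-dockerhub-user/website-2016:latest",
    "Update": "true"
  },
  "Ports": [
    { "ContainerPort": "8000" }
  ]
}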

How can you get Grunt livereload to work inside Docker?

I'm trying to use Docker as a dev environment in Windows.
The app I'm developing uses Node, npm and Bower for setting up the dev tools, and Grunt for its task running, and includes a live reload so the app updates when the code changes. Pretty standard. It works fine outside of Docker, but no matter how I try to do it inside Docker I keep running into the Grunt error Fatal error: Unable to find local grunt.
My latest effort involves installing all the npm and bower dependencies to an app directory in the image at build time, as well as copying the app's Gruntfile.js to that directory.
Then in Docker-Compose I create a Volume that is linked to the host app, and ask Grunt to watch that volume using Grunt's --base option. It still won't work. I still get the fatal error.
Here are the Docker files in question:
Dockerfile:
# Pull base image.
FROM node:5.1
# Setup environment
ENV NODE_ENV development
# Setup build folder
RUN mkdir /app
WORKDIR /app
# Build apps
#globals
RUN npm install -g bower
RUN echo '{ "allow_root": true }' > /root/.bowerrc
RUN npm install -g grunt
RUN npm install -g grunt-cli
RUN apt-get update
RUN apt-get install ruby-compass -y
#locals
ADD package.json /app/
ADD Gruntfile.js /app/
RUN npm install
ADD bower.json /app/
RUN bower install
docker-compose.yml:
angular:
  build: .
  command: sh /host_app/startup.sh
  volumes:
    - .:/host_app
  net: "host"
startup.sh:
#!/bin/bash
grunt --base /host_app serve
The only way I can actually get the app to run at all in Docker is to copy all the files over to the image at build time, create the dev dependencies there and then, and run Grunt against the copied files. But then I have to run a new build every time I change anything in my app.
There must be a way. My Django app is able to do a live reload in Docker with no problems, as per Docker's own Django quickstart instructions. So I know live reload can work with Docker.
PS: I have tried leaving the Gruntfile on the Volume and using Grunt's --gruntfile option but it still crashes. I have also tried creating the dependencies at Docker-Compose time, in the shared Volume, but I run into npm errors to do with unpacking tars. I get the impression that the VM can't cope with the amount of data running over the shared file system and chokes, or maybe that the Windows file system can't store the Linux files properly. Or something.
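For completeness, here is a rough sketch of the build-time-copy workaround described above, following the package choices from the Dockerfile in the question; everything is baked into the image, so any code change requires a rebuild:

FROM node:5.1
ENV NODE_ENV development
RUN npm install -g bower grunt grunt-cli
RUN echo '{ "allow_root": true }' > /root/.bowerrc
RUN apt-get update && apt-get install -y ruby-compass
WORKDIR /app
ADD package.json bower.json Gruntfile.js /app/
RUN npm install && bower install
# copy the whole app into the image instead of mounting it as a volume
ADD . /app
CMD ["grunt", "serve"]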
