Proxy for ajax calls using rake-pipeline-web-filters - rake-pipeline

I'm setting up my development environment for an Ember.js app using rake-pipeline as described here.
During development, my HTML and JavaScript are served by WEBrick (rake-pipeline magic that I don't quite understand) on http://0.0.0.0:9292, and I have a REST service written in PHP served by Apache on http://somename.local.
My AJAX calls from the client app are being blocked by the browser's same-origin policy. How do I work around this issue?

You can't configure the proxy directly in your Assetfile. You'll have to create a config.ru file and use the rackup command to launch the server.
Here's an example Assetfile:
input "app"
output "public"
And config.ru:
require 'rake-pipeline'
require 'rake-pipeline/middleware'
require 'rack/streaming_proxy' # Don't forget to install the rack-streaming-proxy gem.

use Rack::StreamingProxy do |request|
  # Insert your own routing logic here
  if request.path.start_with?("/api")
    "http://localhost#{request.path.sub("/api", "")}"
  end
end

use Rake::Pipeline::Middleware, 'Assetfile' # This is the path to your Assetfile
run Rack::Directory.new('public')           # This should match your Assetfile's output directory
You'll have to install the rack and rack-streaming-proxy gems, then launch the server with rackup config.ru (rackup serves on port 9292 by default).

Alternatively, you can use Rack::Proxy and forward only the requests that need proxying:
if request.path.start_with?("/api")
  URI.parse("http://localhost:80#{request.path}")
end

Related

NodeJS Google Vision is unable to detect a Project Id in the current environment

Under Ubuntu, NodeJS Google Vision complains:
Error: Unable to detect a Project Id in the current environment.
even though I have already pointed it at the JSON credential via:
$ export GOOGLE_APPLICATION_CREDENTIALS="/var/credential_google.json"
Please help.
As a quick hack, you can try this:
$ GOOGLE_APPLICATION_CREDENTIALS="/var/credential_google.json" node app.js
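If you'd rather not prefix every command, the same thing can be done from inside the app itself; a minimal sketch, using the path from the question:

// Must run before any Google client library reads the environment
process.env.GOOGLE_APPLICATION_CREDENTIALS = '/var/credential_google.json';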
Using a .json key file locally isn't recommended. I've seen these leak onto production servers and cause whole platforms to be deleted, and they introduce environment-switching and security issues.
Instead, set up the Google Cloud CLI and authenticate through it (for example, with gcloud auth application-default login).
Now the server will 'look' at the local environment and use that.
If you get the error "Unable to detect a Project Id in the current environment.", it means the auth library cannot find a default project ID.
You need a base project set in Google Cloud, regardless of which environment variables and project you're running with.
Run
gcloud config set project [some-project-id]
Now if you run (Node example):
"dev": "NODE_ENV=dev GCP_PROJECT=some-project-id nodemon index.ts",
it will load the project environment. This also makes deploying easier, with:
"deploy:dev": "y | gcloud app deploy --project some-dev-project app.yaml",
"deploy:prod": "y | gcloud app deploy --project some-prod-project app.yaml"
App Engine sets up security automatically in the standard environment. With flex you can use one of the managed images Google provides.
If you are usually a Windows user trying out Ubuntu (like me), the problem likely comes from assuming that export makes the variable available to all terminal sessions, and that you need to open a new terminal for it to take effect (as you would expect of an environment variable in a Windows terminal).
The export command doesn't export the variable to other terminal sessions, so you have to use it in the same terminal where you exported it.
If you would like to set it permanently, try the solution listed here.
You can pass the path to the JSON credentials directly when instantiating the client.
For example:
const client = new speech.SpeechClient({ keyFilename: "credential_google.json" });
Also, for me setting it in the terminal didn't work.
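The same option works for the Vision client from the original question. A minimal sketch, assuming the @google-cloud/vision package is installed; the project ID is a placeholder to replace with your own:

// Pass credentials and project explicitly instead of relying on env vars
const vision = require('@google-cloud/vision');

const client = new vision.ImageAnnotatorClient({
  keyFilename: '/var/credential_google.json', // path from the question
  projectId: 'some-project-id',               // placeholder; replace with your project ID
});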

How can I make node application see system variables on Google Cloud?

I have this variable set in my .bashrc file:
whoami@cloudshell:~/source/NodePrototype (x-alcove-9999999)$ echo $APP_ENVIRONMENT
LIVE
Yet my Node.js application, from:
const app_environment_config = require('./APP_ENVIRONMENT/' + process.env.APP_ENVIRONMENT)
produces:
2019-02-21 14:18:16 default[20190221t141628] Error: Cannot find module './APP_ENVIRONMENT/undefined'
even though when I enter the node shell:
whoami@cloudshell:~/source/NodePrototype (x-alcove-9999999)$ node
> process.env.APP_ENVIRONMENT
'LIVE'
The same part works locally.
It depends on how your Node app is being launched; it looks like it's not running in an environment where that variable exists. To make sure, print all your current env vars from inside the app: console.log(process.env).
Also, a good practice when you need something like this is to use .env files with the dotenv module (https://www.npmjs.com/package/dotenv) to pass configuration to your Node apps.
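A minimal sketch of the dotenv approach, assuming a .env file next to your entry point (and kept out of version control):

// .env contains, e.g.: APP_ENVIRONMENT=LIVE
require('dotenv').config(); // must run before anything reads process.env

const app_environment_config = require('./APP_ENVIRONMENT/' + process.env.APP_ENVIRONMENT);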

Rscript and Nodejs integration on Ubuntu Server

I am trying to build a Node.js app that calls an R script to do some statistical computation and return an array of 8 elements, which is then passed back to Node.js so the elements can be displayed on EJS pages.
I can do this successfully on localhost: everything works, and the R script runs and returns its output. But when we try the same thing on the Ubuntu server, console.log(out) prints null instead of the output (out is the variable that receives the output from the R script).
We call the script the same way on localhost and on the server, as shown:
console.log(data);
var rscript = require("r-script"); // the npm "r-script" package
var out = rscript("abc.R")         // script path, relative to the working directory
  .data(data.xyz, data.abc)
  .callSync();
console.log(out);
In the above code, the data variable holds JSON, and it logs correctly both locally and on the server.
I have installed all the needed libraries, such as r-script for Node.js via npm, and have installed R and RStudio on my Ubuntu server along with all the R libraries the script needs.
The R script is placed in the same folder as my index.js; the EJS pages are stored in another folder that the Node app can access and display.
You will have to deploy your R script somewhere else and then call it with API requests from your Node server file.
One service you can use to expose an R script as an API for Node is Algorithmia. You just need to follow their instructions and wrap all your code inside a function; it will appear there as a sample once you create an R project.
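For illustration, the Node side of such a call might look like the sketch below; the endpoint URL is hypothetical, and most hosting services provide their own client library instead:

const https = require('https');

// Hypothetical endpoint that wraps the R script; replace with the URL your service gives you
function callRScript(payload) {
  return new Promise((resolve, reject) => {
    const req = https.request('https://example.com/run/abc-r-script', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
    }, (res) => {
      let body = '';
      res.on('data', (chunk) => (body += chunk));
      res.on('end', () => resolve(JSON.parse(body)));
    });
    req.on('error', reject);
    req.end(JSON.stringify(payload));
  });
}

// Usage: callRScript({ xyz: data.xyz, abc: data.abc }).then((out) => console.log(out));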

process.env.PATH undefined in Passenger node app (production mode)

I recently deployed a node application with Phusion Passenger for nginx, and encountered a pretty quirky error in the process:
My code threw an error from trying to spawn a child_process. I did a bit of debugging and eventually concluded that the problem arose from the $PATH environment variable being undefined in node, and I could solve the problem with a passenger_env_var directive like this (showing an extract of my nginx config):
server {
    listen 80;
    server_name blargh.com;
    root /home/user/blargh.com/build;
    passenger_enabled on;
    # For some reason $PATH isn't loaded into node, and we can't spawn child processes without it
    passenger_env_var PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games;
}
I still haven't figured out what caused this problem though: setting passenger_load_shell_envvars on; didn't help, and the www-data user did have a $PATH envvar defined in its shell. Moreover, other environment variables (like $SHELL) seem to have been loaded by node, adding to the mystery of why $PATH was excluded.
Does anybody know what could cause this problem?
tl;dr
Specify global envvars that you expect to be defined at system boot (like PATH) in /etc/default/nginx. Use something like dotenv properly, and write environment-specific config for your app in a text file that's not checked in. Environment variables are pretty evil in general.
I felt this one deserved a fairly lengthy answer, since environment variables has caused recurring problems for me during the last couple of months.
Storing your config as environment variables is one of the rules that the twelve-factor app methodology lays out for writing scalable web applications. They're good because they let you separate your config from your code in a flexible manner. However, a problem with them is that as we normally encounter them, when we export MYVAR=myvalue or set them in our ~/.pam_environment or ~/.bashrc, their scope is the current terminal session.
This causes issues as we start to use solutions like Phusion Passenger to start our apps at system boot - their startup scripts don't care about user shell environments. They also don't care about the global /etc/environment apparently, which is what caused my problems with PATH being undefined.
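An easy way to confirm what a Passenger-launched process actually received is to log the suspect variables at startup; a quick sketch:

// e.g. at the very top of the app's entry point
console.log('PATH =', process.env.PATH);   // was undefined under Passenger before the fix
console.log('SHELL =', process.env.SHELL); // was set, oddly enough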
Phusion Passenger actually has some documentation on making global environment variables persist:
If you installed Nginx through the Debian or Ubuntu packages, then you can define environment variables in /etc/default/nginx. This is a shell script so you must use the export FOO=bar syntax.
So by setting the PATH envvar in /etc/default/nginx (e.g. export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin), I could solve that issue. But I was still having trouble with the other environment variables: I had to set them in my nginx config to have them passed on to my node app. It was clear to me that this wasn't the right way to do it.
At this point I was already using dotenv, but I had misunderstood its purpose slightly. I had checked in the .env file and thought of it as a way to provide default values for envvars that would be overridden by the environment as needed. This isn't how the authors themselves envisioned this module to be used:
We strongly recommend against committing your .env file to version control. It should only include environment-specific values such as database passwords or API keys.
It started becoming clear to me that people often don't define the envvars for their apps in the actual environment. I found an article by Peter Lyons that suggests storing config in a text file instead of in envvars, and that's when it clicked for me.
My final solution was to uncommit my .env file, and write a specific one for each environment. I left a .env.template in my repo as a reference to what configuration my app expected to be defined at run-time.
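A minimal sketch of that final layout, with hypothetical variable names standing in for my real config:

// .env.template (checked in; documents what the app expects, values left blank)
//   DATABASE_URL=
//   SESSION_SECRET=
// .env (written per environment; never checked in)

require('dotenv').config(); // loads .env into process.env at startup

const { DATABASE_URL, SESSION_SECRET } = process.env; // hypothetical names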

How to debug django-piston application?

My Piston application works correctly when I run it locally with the python manage.py runserver command, but it returns
urllib2.HTTPError: HTTP Error 403: FORBIDDEN
under Apache. How can I debug a django-piston application?
I usually debug Piston apps by:
- Setting my handlers to use Basic Authentication, even if I'm normally using something else.
- Using curl to make requests.
- Using pdb (or ipdb) to set a breakpoint in my handler if desired.
You can conditionally change to BasicAuthentication like this:
auth = {'authentication': WhateverYouAreUsingForAuthentication(realm="YourSite")}
if getattr(settings, "API_DEBUG", None):
    from piston.authentication import HttpBasicAuthentication
    auth = {'authentication': HttpBasicAuthentication(realm="Spling")}
some_handler = Resource(SomeHandler, **auth)
To pass a username and password using curl, use the -u option:
curl -u username:password http://localhost:8000/api/some/endpoint/
So in your local settings module, just set API_DEBUG=True whenever you want to use basic auth.
