I am trying to port a Perl script, which successfully reads and consumes Shibboleth session attributes, into Node.js. The Perl code looks, for example, like this:
die "Must be protected behind shibboleth authentication" unless $ENV{'AUTH_TYPE'} eq 'shibboleth';
die "Requires eppn" unless $ENV{'eppn'} ne "";
my $user = $ENV{'eppn'};
my $shib_session_id = $ENV{'Shib-Session-ID'};
It appears as though the Shibboleth attributes are available to Perl as environment variables. As far as I can tell (I don't know Perl), there is nothing within the script that is fetching or altering these values.
So I checked process.env in my Node.js app, and none of these values exist. Nor do they, as far as I have searched, exist in the request object created by Express.js.
The Perl script is on an Apache server, but nothing in httpd.conf looks like it's passing anything special to the Perl script. My Node.js app is reverse proxied on the same Apache server.
Is it possible to get the Shibboleth attributes in Node.js, or does it rely on some Perl/Apache/Shibboleth magic?
Thanks to @mpapec's comment, I solved this by passing the Apache environment variables upstream as request headers:
RequestHeader set X-Auth-Type %{AUTH_TYPE}e
RequestHeader set X-EPPN %{eppn}e
RequestHeader set X-Shib-Session-ID %{Shib-Session-ID}e
These now appear in req.headers in my Node.js app; although X-Auth-Type is mysteriously set to (null)... I can work around that, for the time being, but any ideas why this is the case?
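For reference, here is a minimal sketch of how I'm reading the forwarded values in Express in the meantime (the /whoami route is just an example; note that Node lowercases incoming header names in req.headers):

// Minimal sketch: header names arrive lowercased in req.headers
app.get('/whoami', function (req, res) {
  var user = req.headers['x-eppn'];                    // forwarded eppn attribute
  var shibSession = req.headers['x-shib-session-id'];  // forwarded Shib-Session-ID
  if (!user) {
    return res.status(403).send('Must be protected behind Shibboleth authentication');
  }
  res.json({ user: user, session: shibSession });
});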
Related
I'm fairly new to node and express...
I have a constant I set in my Node/Express application for "Domain". I'm constantly switching between localhost and the actual URL when I switch between development and production.
I'm trying to figure out how to automatically detect which environment the code is running in (i.e. the actual domain vs. localhost) so that the proper domain URL gets set.
Here is my code:
function define(name, value) {
  Object.defineProperty(exports, name, {
    value: value,
    enumerable: true
  });
}
define("DOMAIN", "http://www.actual_domain.com");
// define("DOMAIN", "http://localhost:3000");
Anyone have a simple solution for this?
Thanks!!
There are many solutions for this, but usually it is done with environment variables. The details depend on your platform, but on Linux systems you do the following in your shell:
export ENV_URL="http://www.example.com"
and to make sure it's exported successfully:
echo $ENV_URL
and in your code you do
const base_url = process.env.ENV_URL || "http://www.default.com";
On your local machine you set ENV_URL to localhost or whatever you prefer, and on your server you set it to your actual URL.
Or you could simply have several configuration files and determine the appropriate one via an environment variable, like:
export ENV=PROD
and in your code you can load the prod configuration file that contains your environment configurations.
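As a rough sketch of that approach (the file names and layout here are just assumptions), loading the right file could look like this:

// Hypothetical layout: ./config/dev.json and ./config/prod.json
var env = (process.env.ENV || 'DEV').toLowerCase();
var config = require('./config/' + env + '.json');   // loads config/prod.json when ENV=PROD

// 'domain' is just an example key inside those files
console.log('Loaded ' + env + ' configuration, domain is ' + config.domain);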
The de facto way to do this sort of thing is through environment variables. Have a look at 12factor.net for a load of best practices, but in general you want to put differences in the environment into environment variables, then read those variables at runtime.
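Applied to the define() helper from the question, a minimal sketch would be to read the value from an environment variable (DOMAIN is just a suggested name) and fall back to localhost when it isn't set:

// Production sets DOMAIN in its environment; development falls back to localhost
define("DOMAIN", process.env.DOMAIN || "http://localhost:3000");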
I'm trying to run a number of api calls using dredd and api blueprint to test a site. I would like to run the tests on circleCI, as there are Selenium tests running in the same place. Each transaction needs to be accompanied by two tokens, which are set as cookies in the headers. Ideally, these would be set in the dredd.yml file. When running on a local machine, if I replace ACCESS_TOKEN and REFRESH_TOKEN with the actual values, the test runs as expected.
circle.yml:
test:
  override:
    - dredd
dredd.yml headers
header: ['Cookie: access_token=ACCESS_TOKEN; refresh_token=REFRESH_TOKEN']
Where ACCESS_TOKEN and REFRESH_TOKEN get replaced by the actual values set in CircleCI's environment variables. I have also tried: access_token=$[ACCESS_TOKEN], access_token=$["ACCESS_TOKEN"], and access_token=$ACCESS_TOKEN. None of these gets replaced in the headers for the first API call.
The header looks like: {"Content-Type":"application/json; charset=utf-8","User-Agent":"Dredd/1.4.0 (Darwin 14.5.0; x64)","Cookie":" access_token=$ACCESS_TOKEN; refresh_token=$REFRESH_TOKEN"}
I am new to yaml files, so I'm probably missing something basic, but I did search around for a while. The hooks file is written with node.js, so I don't think the ruby/rails help will be useful here. If I am missing anything in the question don't hesitate to let me know.
YAML is a data representation language, not a template language (or template processor, for that matter). While an individual program might support loading environment variables or additional parameters named in the configuration, the YAML parser (probably, unless it's a custom module) isn't what's injecting them. Skimming the Dredd docs, I don't see any references to environment variables or parameters, so it may be worth creating an issue on the project and starting a discussion with the developers to see whether this is supported.
I can think of a number of ways to solve your specific problem, but they all involve additional tools to render the YAML with your variables injected. Perhaps the easiest solution for your case is to set environment variables in the CircleCI web configuration (NOT the version-controlled circle.yml). Then set up a pre-build step in which the YAML configuration is generated. To do this, wrap the YAML in a Bash script, with the YAML document contained inside it as a here-doc.
#!/bin/bash
# ACCESS_TOKEN and REFRESH_TOKEN are injected by CircleCI
cat <<EOF > config.yml
---
header: ['Cookie: access_token=${ACCESS_TOKEN}; refresh_token=${REFRESH_TOKEN}']
EOF
Then run the rest of your job normally, perhaps deleting the configuration file or restoring it from version control before any artifacts are created to avoid the leakage of your credentials.
A better way to work with headers is to use hook files, setting the headers before each request. As you are using Node.js, try reading the tokens from Node environment variables:
var hooks = require('hooks');

hooks.beforeEach(function (transaction) {
  // ACCESS_TOKEN and REFRESH_TOKEN come from the environment (e.g. CircleCI project settings)
  transaction.request.headers.Cookie =
    'access_token=' + process.env.ACCESS_TOKEN +
    '; refresh_token=' + process.env.REFRESH_TOKEN;
});
I'm behind a firewall and lazybones can't reach its repository without a proxy.
I've searched the source and can't seem to find any reference to a proxy that seems to be relevant.
Support was officially added in version 0.8.1 of Lazybones, albeit via a general mechanism to add arbitrary system properties to the application in its configuration file, ~/.lazybones/config.groovy.
You can read about the details in the project README, but in essence, simply add the following to your config.groovy file:
systemProp {
    http {
        proxyHost = "localhost"
        proxyPort = 8181
    }
    https {
        proxyHost = "localhost"
        proxyPort = 8181
    }
}
You can use the systemProp. prefix to add any system properties to Lazybones, similar to the way it works in Gradle.
Is that what you're looking for? Basically, you need to add some properties to the gradle.properties file.
I am using Cygwin on Windows and I have modified the last line of
~/.gvm/lazybones/current/bin/lazybones
to say
exec "$JAVACMD" "${JVM_OPTS[#]}" -classpath "$CLASSPATH" "-Dhttp.proxyHost=127.0.0.1" "-Dhttp.proxyPort=8888" "-Dhttp.nonProxyHosts=localhost|127.0.0.1" uk.co.cacoethes.lazybones.LazybonesMain "$#"
Please note the quotes around the options. It works very well with my local Fiddler installation.
I have found no better way to enable proxy support due to the way the script is using eval. Maybe a more experienced shell script programmer can come up with a more elegant solution.
I was able to get out through the proxy by setting the JAVA_TOOL_OPTIONS environment variable, which Java picks up automatically:
Picked up JAVA_TOOL_OPTIONS: -Dhttp.proxyHost=127.0.0.1 -Dhttp.proxyPort=8080
-Dhttp.nonProxyHosts="lmig.com" -Dhttps.proxyHost=127.0.0.1 -Dhttps.proxyPort=8080
Unfortunately my environment requires proxy authentication, so I couldn't supply the complete proxy configuration this way. I first ran "OWASP Zed Attack Proxy (ZAP)", which allowed me to run a proxy on my own machine (on port 8080) that then handled the complete authentication required.
With that in place I was able to run the complete "lazybones list" command, which retrieved the contents of the repositories.
Unfortunately I was not able to create an application from those templates because Bintray required a login (though an anonymous login would do), and I couldn't seem to get that additional level of authentication working (I received "Unauthorized" from Bintray).
I'm trying to use the node-config module to change some parameters of my configuration (basically logging level) during runtime.
In the official documentation says:
Environment variables can be used to override file configurations. Any environment variable that starts with $CONFIG_ is set into the CONFIG object.
I've checked that this is true when the server starts, but it does not seem to work once it's up. (The handler of the watch function is never called when an environment variable is changed, unlike a change to the runtime.json file or directly changing a config variable.)
I'm currently watching the whole CONFIG object like this:
var CONFIG = require('config');
CONFIG.watch(CONFIG, null, function (object, propertyName, priorValue, newValue) {
  console.log("Configuration change detected");
});
Does anyone know if this is possible?
The environment is only read when a process starts.
Once the process is running, you can no longer change the environment it was started in from the outside.
The only option is to restart the process, or to use some other mechanism to communicate with it.
For example, you could have a REST or TCP listener inside the process through which you can pass the new value.
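As an illustration only (the port, the endpoint path, and the logLevel property are made up), a tiny HTTP listener inside the process could update the CONFIG object directly, which, as you observed, does trigger the watch handler:

var http = require('http');
var CONFIG = require('config');

// Hypothetical admin endpoint: POST the new level to /log-level as the raw request body
http.createServer(function (req, res) {
  if (req.method === 'POST' && req.url === '/log-level') {
    var body = '';
    req.on('data', function (chunk) { body += chunk; });
    req.on('end', function () {
      CONFIG.logLevel = body.trim();   // a direct change to CONFIG fires the watch callback
      res.end('log level set to ' + CONFIG.logLevel + '\n');
    });
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(9999);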
Best regards
Robert
As you may know, React produces a single-page application: when it is compiled, it becomes a static app, meaning all the files of the React application are compiled into vanilla JS and CSS files bundled into a tarball. That tarball is then deployed on a web server, which could be Apache, nginx, or anything else you use. The important point is that the static app runs in someone else's browser: when someone accesses the website, the CSS and JS are downloaded and executed in that browser's runtime environment. So technically you cannot have runtime environment variables in someone else's browser, but there may be a way to access such values at runtime.
SOLUTION
I have achieved this goal with the package called runtime-cra.
Follow the steps in this official documentation: https://blog.risingstack.com/create-react-app-runtime-env-cra/
I am using the Java API for uploading files to Rackspace Cloud. I am trying to figure out how to set the header "Access-Control-Allow-Origin" on the files that I am uploading. I found another link that talks about setting this header using Python Binding here:
Setting Access-Control-Allow-Origin (CORS) in the Rackspace Cloud Files Python API
Is there a similar API with Java Binding as well? I can't seem to find it.
Thanks!
I'm not much of a Java guy, but per this it looks like metadata needs to be set on your containers, with a key of X-Container-Meta-Access-Control-Allow-Origin, and a value of a space separated list of allowed origins.
Thus you need to use whatever function is used to set container metadata for the jclouds API.
It appears that this could be done on creation like so (based on adaptation of this code):
CreateContainerOptions options = CreateContainerOptions.Builder
        .withMetadata(ImmutableMap.of("Access-Control-Allow-Origin", "*"));
swift.getApi().createContainer(Constants.CONTAINER, options);
Looking through the docs, I found the following function in org.jclouds.openstack.swift.CommonSwiftClient:
boolean setContainerMetadata(String container,
                             Map<String, String> containerMetadata)
It therefore looks like you should be able to do what you're looking for with something like the following:
swift.getApi().setContainerMetadata(container, ImmutableMap.of("Access-Control-Allow-Origin", "*"));