I am new to OpenShift and have tried hard to modify my environment upon git push so that I don't need to run rhc env set ENV_VAR=value -a appname every time I push. According to the documentation, I can export in one of the action hooks, but whenever I did so, the environment variable did not register.
What is the best way to register those variables automatically, rather than having to execute the rhc command or SSH into the machine and export them?
The documentation seems to be outdated, as the method of exporting in action_hooks no longer works:
https://developers.openshift.com/en/managing-environment-variables.html
I see that you have your answer already, but in case others come here with the same question, I'd like to mention that the rhc env set command actually sets a variable persistently, so it "survives" the code push, build, and gear restart.
The documentation linked in the question says that export can be used to view environment variables during the build; it does not recommend setting environment variables using hooks.
Listing the variables from the build hook should work just fine (it worked for me at the time of writing this).
If export in the build action hook seems not to work (i.e. does not list the variables), that is typically caused by the hook file not being marked executable (or by a syntax error within the file).
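For reference, the build hook is just an executable script committed with your app; a minimal sketch (assuming the standard .openshift/action_hooks/build location) looks like this:
#!/bin/bash
# .openshift/action_hooks/build
# With no arguments, `export` prints each exported variable as a
# "declare -x NAME=value" line; this is the "viewing" the docs describe.
export
Remember to chmod +x .openshift/action_hooks/build before committing, or the hook will not run.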
Yes, the action hook way is indeed broken: even when you export through the hook, you can see that no declare -x statements are printed anymore, as the documentation claims they should be.
One other method is to have the action hook write files into this directory:
$HOME/.env/user_vars
For example, if you want to set RAILS_ENV=development, write a script that produces this file:
$HOME/.env/user_vars/RAILS_ENV
with this content:
development
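A minimal sketch of a build hook that does this (assuming the standard .openshift/action_hooks/build location and the RAILS_ENV example above):
#!/bin/bash
# .openshift/action_hooks/build
# Persist RAILS_ENV by writing it as a user_vars file;
# OpenShift reads one file per variable from this directory.
mkdir -p $HOME/.env/user_vars
echo -n "development" > $HOME/.env/user_vars/RAILS_ENV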
I spent an awful lot of time looking for alternative ways too, but this guy nailed it. Copied here in case the link becomes broken in the future:
If you need to set some environment variables in your gear, you can use an action hook.
The pre-start action hook will serve you well, but if you need to restore those variables after a gear restart, the pre-start action hook won't work.
The post-restart action hook, on the other hand, will execute its actions, but I haven't managed to get the environment variables working; after its execution, all the environment variables that should have had a value were empty.
What I did was modify the pre-start action hook to create the environment variables as files under $HOME/.env/user_vars:
# Actual script
export OPENSHIFT_POSTGRESQL_DB_HOST="xxx.xxx.xxx.xxx"
export OPENSHIFT_POSTGRESQL_DB_PORT="***"
export OPENSHIFT_POSTGRESQL_DB_NAME="***"
export OPENSHIFT_POSTGRESQL_DB_USERNAME="***"
# Added script for post-restart variables: one file per variable under $HOME/.env/user_vars
echo "xxx.xxx.xxx.xxx" > $HOME/.env/user_vars/OPENSHIFT_POSTGRESQL_DB_HOST
echo "***" > $HOME/.env/user_vars/OPENSHIFT_POSTGRESQL_DB_PORT
echo "***" > $HOME/.env/user_vars/OPENSHIFT_POSTGRESQL_DB_USERNAME
echo "***" > $HOME/.env/user_vars/OPENSHIFT_POSTGRESQL_DB_PASSWORD
After this, if you execute gear restart, the environment variables will exist and will be accessible from your application.
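To verify (from an SSH session into the gear; the grep pattern is just an example):
gear restart
# reconnect, since a fresh session is what picks up the user_vars files, then:
env | grep OPENSHIFT_POSTGRESQL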
Reference:
https://guilleml.wordpress.com/2015/02/17/setting-environment-variables-in-openshift/
I am trying to create some new env variables on a RHEL machine using Chef.
The block executes successfully, but on trying to echo the value, I am getting a blank result.
Script-1:
execute 'JAVA_HOME' do
  command 'export JAVA_HOME=' + node['java']['home']
end
Script-2:
bash 'env_test' do
  code <<-EOF
    echo $chef
  EOF
  environment('chef' => 'chef')
end
I also gave this a shot, as it was mentioned in the documentation:
ENV['LIBRARY_PATH'] = node['my']['lib']
Please let me know where I am going wrong here.
So the thing you need to know about environment variables is that they only flow in one direction (parent process to children), so an export in a subcommand does nothing after that execute resource finishes. The second and third examples both work, though: the second sets the variable for just that bash resource, and the third for the Chef process and everything it spawns. Remember that you need to run with -l debug to see the output from subcommands Chef runs.
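A quick shell illustration of that one-direction rule (nothing Chef-specific, and the path is made up):
# The child process exports the variable and exits; the parent never sees it.
bash -c 'export JAVA_HOME=/opt/java; echo "child sees: $JAVA_HOME"'
echo "parent sees: ${JAVA_HOME:-unset}"   # prints "unset"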
The above explanation is pretty helpful. I am updating the /etc/environment file using Chef to make sure the env variables are present from the next session onward, and also using the third approach to make them available in the current session.
So I have this really nasty problem.
I once set up a Tomcat server on my Raspberry Pi. Its version was 8.0.24. I created a bash script which sets the variable CATALINA_HOME=/home/pi/apache-tomcat-8.0.24 on each start.
Meanwhile the directory is /home/pi/tomcat; I removed the useless information from the name.
I've changed the export in /etc/init.d/tomcat as well, but it didn't help.
After every restart, CATALINA_HOME is set back to /home/pi/apache-tomcat-8.0.24 again.
Is there a way to see which script sets the environment variable?
Somewhere I told Linux to change the path at startup to /home/pi/apache.. , but I can't find where.
You can add a line in a few of the startup scripts to print the value of $CATALINA_HOME. Try adding:
echo "In $0, \$CATALINA_HOME is $CATALINA_HOME"
to your .bashrc, before and after the call to /etc/bashrc.
There's also a script called setenv.sh in Tomcat's bin directory that sets these types of variables. Take a look in there too.
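A faster way to track down the culprit is to grep the usual startup locations for the variable (a sketch; add whatever other init scripts your setup uses):
grep -rn "CATALINA_HOME" /etc/profile /etc/profile.d /etc/environment \
    /etc/init.d /home/pi/.bashrc /home/pi/.profile 2>/dev/null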
I have a node process that uses an environment variable in the form SECRET_KEY=1234.5678.910112.
This works fine if the variable is set using export in my bash_profile and the process is run directly in the shell.
But when running it under supervisor, the script only picks up the part before the first period. This happens whether the env vars are set in bash_profile or via environment= in the conf file.
Turns out all I needed to do was add single quotes around my variable. I had done this before, but hadn't run supervisorctl reread to pick up the new config.
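In other words, something like this in the program's conf section (the file path and program name here are made up), followed by a config reload:
# Hypothetical fragment of /etc/supervisor/conf.d/myapp.conf:
#   environment=SECRET_KEY='1234.5678.910112'
# Then tell supervisor to re-read and apply the changed config:
supervisorctl reread
supervisorctl update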
I have a Node.js application which depends on env variables. I made some changes in the code, and now one of these vars has to change after deploy. I don't want to do this manually. What is the best practice for doing this automatically?
I guess that running some script after deploy could be a solution, but I want to run this script only once (with this one particular change).
My only idea is a script that checks (after each deploy) some directory for another script to run, then runs it and removes it. But how can I achieve that?
The best way to approach this is to use the Heroku Toolbelt to set your environment variables:
heroku config:set GITHUB_USERNAME=joesmith
You can then refer to these variables in your Node.js application by using the following syntax:
var dbUsername = process.env.DB_USERNAME;
Assuming you set a DB_USERNAME variable like this:
heroku config:set DB_USERNAME=myAppUserName
I like to ensure there's a fallback if the environment variable is not set; you can achieve that like this:
var dbUsername = process.env.DB_USERNAME || 'fallbackUsername';
// The string after || will be used if the process.env.DB_USERNAME variable is undefined (not set)
Description
We are on a project based on MVC4/Umbraco, using Azure Websites to host it.
We use SCM_BUILD_ARGS to switch between different build setups depending on which Azure site we deploy to (Test and Prod).
This is done by defining an app setting in the UI:
SCM_BUILD_ARGS = /p:Environment=Test
Earlier we used Bitbucket integration to deploy, and there this setting worked like a champ.
We have now switched to Git deployment, pushing the changes from our build server when tests have passed.
But when we do this, we get a lovely error:
"MSB1008: Only one project can be specified."
Trying to redeploy the same failed deployment from the UI on Azure works, though.
After some trial and error I ended up going into deploy.cmd and outputting the %SCM_BUILD_ARGS% value from within the script.
It looks like the / gets dropped from SCM_BUILD_ARGS, but only when using Git deploy, not Bitbucket integration or a redeploy from the UI.
Workaround
As a workaround I have for now added a / in front of %SCM_BUILD_ARGS% in the deploy.cmd script, but this of course breaks redeploy, since we then end up with //p:Environment=Test in the MSBuild command once the value of %SCM_BUILD_ARGS% has been inserted.
:: 2. Build to the temporary path
IF /I "%IN_PLACE_DEPLOYMENT%" NEQ "1" (
  :: Added / in front of SCM_BUILD_ARGS
  %MSBUILD_PATH% "%DEPLOYMENT_SOURCE%\www\www.csproj" [....] /%SCM_BUILD_ARGS%
) ELSE (
  %MSBUILD_PATH% "%DEPLOYMENT_SOURCE%\www\www.csproj" [....] /%SCM_BUILD_ARGS%
)
Question
Does anyone know of a better solution to this problem, or is it possibly a bug in Kudu?
We would love to have both deploy from Git and redeploy working.
Could you try changing the "/" to a "-"? For instance, change the app setting from /p:Environment=Test to -p:Environment=Test and see if that helps.
-p:Environment=Test did not work for me; the setting which worked for me at the time of this writing (September 2015) was
-p:Configuration=Test
There is clearly a Kudu bug in there, and you should open an issue on https://github.com/projectkudu/kudu. But for now, I can give you a workaround.
Instead of using an App Setting, include a .deployment file at the root of your repo, containing:
[config]
SCM_BUILD_ARGS = /p:Environment=Test
I think this will work in all cases. I suspect the bug has to do with bash messing up the environment in post-receive hook scenarios, which only apply to a direct git push and not to the Bitbucket and Redeploy scenarios.
UPDATE: In fact, it's easy to see such weird bash behavior. Try this:
Open cmd.exe
Run: set foo=/abc to set a variable
Run bash
From bash, run cmd to launch a new cmd on top of bash (so cmd -> bash -> cmd)
Run set foo to get the value of foo
Result:
FOO=C:/Program Files (x86)/git/abc
So the value gets completely mangled (this looks like msys bash's POSIX-to-Windows path translation kicking in on any value that starts with /). The key also gets upper-cased, though that's mostly harmless. Strange stuff...