Passing Main script variables into Perl Modules - linux

I am writing a Perl script that is run by a user and makes use of the current Linux environment variables, along with other variables. The environment settings may change and differ from what they were originally.
However, I'm trying to use self-contained Perl modules and need to be able to access these variables from them. What is the best practice for doing this? I could just pass along 10 variables when I create an object from the module, but that seems excessive...
Thanks

The environment variables are accessible from anywhere in the global %ENV hash:
print $ENV{HOME};
If you are creating objects, they probably have some attributes (whether the objects are hashes, arrays, or even inside-out objects...). Just store the relevant values in the attributes, e.g.
my $obj = Some::Package->new(
    name    => 'Homer',
    surname => 'Simpson',
    city    => 'Springfield',
    # ... 7 more
);
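If most of those values come straight from the environment, the constructor itself can capture them so the caller doesn't have to pass them all. A minimal sketch, assuming a plain hash-based class (Some::Package and the attribute names here are made up):

package Some::Package;
use strict;
use warnings;

# Hypothetical constructor: capture the needed environment values at
# creation time, letting the caller override any of them.
sub new {
    my ($class, %args) = @_;
    my $self = {
        home => $args{home} // $ENV{HOME},
        user => $args{user} // $ENV{USER},
    };
    return bless $self, $class;
}

# Read-only accessor
sub home { return $_[0]->{home} }

1;

Because the values are copied into the object when new is called, later changes to the environment will not affect objects that were already built.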

Related

Terraform data dynamically using variables

I was wondering if it's possible to grab different data dynamically based on variables, like so:
data.terraform_remote_state.vm.outputs.vm_***var.vmname***
Or something similar? I don't currently have the option to redesign the outputs, and this would greatly lower the chance of failure when creating new Terraform deployments.
Thanks!
There are Input Variables available in Terraform. These let you define inputs that are expected at the time of terraform apply. The values may be entered at an interactive prompt, passed on the command line, or provided in a .tfvars file.
variable "vmname" {
type = string
description = "The name of the virtual machine."
}
Then you can build the reference dynamically by indexing the outputs map:
data.terraform_remote_state.vm.outputs["vm_${var.vmname}"]
For additional reference, see https://www.terraform.io/docs/language/values/variables.html
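To supply the value, you can pass it on the command line or put it in a .tfvars file; vmname and web01 below are just placeholder values:

terraform apply -var="vmname=web01"

# or, in terraform.tfvars:
vmname = "web01"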

Is there a way to input variable values from outside to terraform main file?

Is there a way I can input variable values into the Terraform main file from outside? It could be an Excel sheet or a SQL DB. Is it possible to do so?
What you can't currently do is point your command line at a DB (i.e. to replace a tfvars file), but what you can set up in Terraform is a number of different key/value stores:
Consul
https://www.terraform.io/intro/examples/consul.html
AWS Parameter Store (using a resource or data source)
https://www.terraform.io/docs/providers/aws/d/ssm_parameter.html
There are quite a number of other key/value stores to choose from, but there's no zero-code solution and you will end up with lots of statements like this:
# Set up a key in Consul to provide inputs
data "consul_keys" "input" {
  key {
    name    = "size"
    path    = "tf_test/size"
    default = "m1.small"
  }
}
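The values read this way are then exposed through the data source's var map; a small sketch (the aws_instance resource and AMI are placeholders):

resource "aws_instance" "example" {
  ami           = "ami-12345678"                  # placeholder
  instance_type = data.consul_keys.input.var.size # Consul value or the default
}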
There are many ways to do that:
You can use a tfvars file with all your inputs, and you can use one file per customer, user, or environment
You can pass the variables to the terraform executable on the command line
You can define environment variables prefixed with TF_VAR_[variable] (see the sketch after this list)
You can use https://www.terraform.io/docs/providers/aws/d/ssm_parameter.html as suggested above
You can even store variables in DynamoDB or any other database
You can use Consul + Vault as well
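For instance, the TF_VAR_ approach from the list needs no extra code at all; a minimal sketch with a placeholder variable name:

# Terraform automatically picks up environment variables named TF_VAR_<name>
export TF_VAR_vmname=web01
terraform plan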

How can I create an environment parameter in bash or from a script

I want to create an env parameter whose key is a:b or a#b.
I need to do it from a bash script or from the terminal, and it should work on Linux or Windows.
When I tried export a:b=c
I got the error
not a valid identifier
When I tried
export tempKey=a:b it worked, but then I didn't know how to use the value a:b to create it as a key.
Could you please advise?
None of the commonly used Unix shells will let you create a variable whose name includes characters that are not legal in an identifier (typically letters, digits, and underscore). The simplest workaround is to use the env command, since it doesn't impose any restrictions on the strings it puts in the environment. For example, env a:b=c a_cmd, where a_cmd is whatever command needs that environment string. If you want it to be part of the shell's environment, do exec env a:b=c $SHELL. Obviously the new shell won't be able to use that variable, since $a:b is not a valid variable reference even if you enclose the name in braces.
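A quick way to confirm that the variable really is in the environment, even though the shell cannot expand it (python3 is used here only as a convenient way to call getenv):

env 'a:b=c' python3 -c 'import os; print(os.environ["a:b"])'
# prints: c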

How to use machine-generated variables in cookiecutter

Is there a way to machine-generate some values after the user has supplied their values for the variables in cookiecutter.json?
The reason I ask is that:
one of the values I need to prompt for is rather hard for users to work out
but it's really easy for me to write some Python code to generate the right value
So I'd really like to be able to remove the user prompt, and calculate the value instead.
Things I've tried:
Searched online for an example pre_gen_project.py file to show how to do it
Read the cookiecutter Advanced Usage page
I'm using cookiecutter on the command line:
cookiecutter path_to_template
Am I missing any tricks?
I needed this exact capability just a few days ago. The solution I came up with was to write a wrapper script for cookiecutter, similar to what is mentioned in:
http://cookiecutter.readthedocs.io/en/latest/advanced_usage.html#calling-cookiecutter-functions-from-python
My script generates a random string for use in a Django project. I called my script cut-cut:
#!/usr/bin/env python3
from cookiecutter.main import cookiecutter
import os

# Keep only ASCII alphanumeric characters from a pool of random bytes.
# (On Python 3, iterating over bytes yields ints, hence chr(c).)
rstring = ''.join(chr(c) for c in os.urandom(1024)
                  if c < 128 and chr(c).isalnum())[:64]

cookiecutter(
    'django-template',  # path/url to cookiecutter template
    extra_context={'secret': rstring},
)
So now I simply run cut-cut and step through the process as normal. The only difference is that the entry named secret in my cookiecutter.json file is prepopulated with the generated value rstring from the script, supplied via the extra_context argument.
You could modify the script to accept the template via the command line, but in my usage I always use the same template, so I simply pass the hard-coded value 'django-template' as shown in the code above.
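If you did want to take the template from the command line, a minimal variant of the same script (an untested sketch) could read it from sys.argv:

#!/usr/bin/env python3
import os
import sys
from cookiecutter.main import cookiecutter

rstring = ''.join(chr(c) for c in os.urandom(1024)
                  if c < 128 and chr(c).isalnum())[:64]

# The template path/url now comes from the first command-line argument
cookiecutter(sys.argv[1], extra_context={'secret': rstring})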

Organize code in unix bash scripting

I am used to object-oriented programming. Now I have just started learning Unix bash scripting on Linux.
I have a Unix script with me. I want to break it down into "modules", or preferably programs similar to more, ls, etc., and then use pipes to link my programs together, e.g. echo "some input" | myProg1 | myProg2 | myProg3.
I want to organize my code and make it look neater, instead of having everything in one script. It would also make testing and development easier.
Is it possible to do this, especially as a newbie?
There are a few things you could take a look at, for example the use of aliases in bash, stored either in .bashrc or in a separate file sourced by .bashrc; that will make running commands easier.
You can also look into using functions in your code, which are much more flexible than simple aliases.
It is also worth looking at how to pipe tail output into another script.
The thing with bash is its flexibility: if something starts to get too messy for bash, you could always write a program in Perl, Java, or any other language, call it from within your bash script, capture its output, and do something else with it.
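For example, a couple of lines you might keep in .bashrc or a file sourced from it (the alias and function below are made up):

# Simple alias: a fixed shortcut for a longer command
alias ll='ls -alF'

# Functions are more flexible than aliases - they take arguments
watch_errors() {
    tail -f "$1" | grep --line-buffered 'ERROR'
}
# usage: watch_errors /var/log/syslog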
I'm not sure why you need all the pipes anyway; here is something that may be of help:
./example.sh 20
function one starts with 20
In function 2 20 + 10 = 30
Function three returns 10 + 30 = 40
------------------------------------------------
------------------------------------------------
Local function variables global:
Result2: 30 - Result3: 40 - value2: 10 - value1: 20
The script:
example.sh
#!/bin/bash
input=$1
source ./shared.sh
one
echo "------------------------------------------------"
echo "------------------------------------------------"
echo "Local function variables global:"
echo "Result2: $result2 - Result3: $result3 - value2: $value2 - value1: $value1"
shared.sh
function one() {
    value1=$input
    echo "function one starts with $value1"
    two
}
function two() {
    value2=10
    result2=$(expr $value1 + $value2)
    echo "In function 2 $value1 + $value2 = $result2"
    three
}
function three() {
    result3=$(expr $value2 + $result2)
    echo "Function three returns $value2 + $result2 = $result3"
}
I think the pipes you mean can actually be functions, with each function calling the next; you give the script a value and it passes through the functions.
bash is pretty flexible about passing values around: as long as an earlier function has set a variable, the next function it calls can reuse it, and it can also be read from the main program.
I also split the functions out into a file that can be sourced by another script to reuse the same functions.
Edit to add: thanks for the upvote, I have also decided to include this link:
http://tldp.org/LDP/abs/html/sample-bashrc.html
There is an awesome .bashrc there to be reused. It has a lot of functions and gives some insight into how to simplify daily repetitive commands, including ones that require piping: an alias can be written to do all of them for you.
You can do one thing.
Just as a C program can be divided into a header file and a source file to reduce complexity, you can divide your bash script into two scripts - a header and a main script - with some differences.
Header file - this will contain all the common variables and functions used by your main script.
Your script - this will only contain function calls and other logic. Use source <header-file-path> at the start of your script to make all the functions and variables declared in the header available to it.
Shell scripts have standard input and output like any other program on Unix, so you can use them in pipes. Splitting your scripts is a good solution because you can later use them in pipes with other commands.
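Each of your "modules" can simply read standard input and write standard output; a minimal sketch (myProg2 is a made-up name):

#!/bin/bash
# myProg2: transform each line from stdin and pass it on via stdout
while IFS= read -r line; do
    echo "processed: $line"
done
# usage: echo "some input" | ./myProg1 | ./myProg2 | ./myProg3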
I organize my Bash projects in the following way:
Each command is put in its own file
Reusable functions are kept in a library file which is just a classic script with only functions
All files are in the same directory, so commands can find the library with $(dirname $0)/library
Configuration is stored in another file as environment variables
To keep things clear, you should not use global variables to communicate between functions and the main program.
I prepare a template for scripts with the following parts:
Header with name and copyright
Read configuration with source
Load library with source
Check parameters
Function to display help, which is called if asked for or if parameters are wrong
My best advice is: always write the help function, as the next person who will need it is... yourself!
To install your project, you simply copy all the files and explain what to configure in the configuration file.
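Put together, a command script following this layout might look like the sketch below (config, library, and process_input are placeholder names):

#!/bin/bash
# mycommand - header with name and copyright goes here

dir=$(dirname "$0")
source "$dir/config"    # configuration stored as environment variables
source "$dir/library"   # reusable functions

usage() {
    echo "Usage: $(basename "$0") <input>" >&2
    exit 1
}

# Check parameters; display help if they are wrong or help is requested
if [ $# -ne 1 ] || [ "$1" = "-h" ]; then
    usage
fi

process_input "$1"      # function defined in the library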
