Platform-independent way of locating personal R library/libraries - linux

Actual question
How can I query the default location of the personal package library/libraries as described in R Installation and Administration, even after environment variables like R_LIBS_USER or .libPaths() etc. might already have been changed by the user?
I'd just like to understand how exactly R determines the default settings in a platform-independent way.
Naively, I was hoping for something equivalent to R.home("library"), e.g. R.user("library").
Due diligence
I checked this post and the answers sort of contain the information/paths I'd like to retrieve. Unfortunately, I only really know my way around on Windows, not on OS X or Linux, so I'm not sure if/how much of this is correct in a generic sense (home directory, separation of user vs. system-wide stuff, etc.):
OS X
/Library/Frameworks/R.framework/Resources/library
Linux
/usr/local/lib/R/site-library
/usr/lib/R/site-library
/usr/lib/R/library
I also looked into the manual, but that only gave me a basic idea of how R handles this sort of thing (maybe I just looked in the wrong corner; any pointers greatly appreciated).
Background
I sometimes create a temporary, fresh package library to serve as a "sandbox" for systematic testing (e.g. when planning to upgrade certain package dependencies).
When I'm done, I'd like to delete that library again while making absolutely sure that I don't accidentally delete one of the standard libraries (personal library/libraries and system-wide library).
I'm starting to put together a little package called libr for these purposes. Function deleteLibrary contains my current approach (lines 76 ff.):
## Personal libs //
r_vsn <- paste(R.version$major, gsub("\\..*", "", R.version$minor), sep = ".")
if (.Platform$pkgType == "win.binary") {
  lib_p <- file.path(Sys.getenv("HOME"), "R/library", r_vsn)
} else if (.Platform$pkgType == "mac.binary") {
  lib_p <- file.path(Sys.getenv("HOME"), "lib/R", r_vsn)
  ## Taken from https://stackoverflow.com/questions/2615128/where-does-r-store-packages
  ## --> Hopefully results in something like: '/Users/{username}/lib/R/{version}'
} else if (.Platform$pkgType == "source" && .Platform$OS.type == "unix") {
  lib_p <- file.path(Sys.getenv("HOME"),
    c(
      "local/lib/R/site-library",
      "lib/R/site-library",
      "lib/R/library"
    ), r_vsn)
  ## Taken from https://stackoverflow.com/questions/2615128/where-does-r-store-packages
  ## --> Hopefully results in something like:
  ## '/usr/local/lib/R/site-library/{version}'
  ## '/usr/lib/R/site-library/{version}'
  ## '/usr/lib/R/library/{version}'
} else {
  stop("Don't know what to do for this OS type")
}
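For querying these locations rather than hard-coding them, a minimal sketch that asks the running R session itself may help. Caveat: Sys.getenv("R_LIBS_USER") reflects the current environment, so if the user has already overridden R_LIBS_USER you get the override, not the factory default:

```r
## Ask R itself where the user and system libraries live.
## Caveat: this reflects the *current* environment, not untouched defaults.
user_libs <- unlist(strsplit(Sys.getenv("R_LIBS_USER"), .Platform$path.sep))
user_libs <- path.expand(user_libs)       # e.g. '~/R/x86_64-pc-linux-gnu-library/3.2'
sys_libs  <- c(.Library.site, .Library)   # site-wide and default system libraries
all_libs  <- .libPaths()                  # everything currently on the search path
```

Anything on .libPaths() that is in neither user_libs nor sys_libs would then be a candidate "sandbox" library that is safe to delete.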


Free tool to generate all paths from a diagram

Good afternoon everyone,
Despite a lot of research on the web, I haven't found a solution that meets my need.
I am looking for a free tool to model processes (e.g. BPMN or UML activity diagrams) and generate all possible paths/combinations from the diagram.
Do you have any idea what tool can help me do that? Thank you a lot.
Update 1
I am not sure that such an off-the-shelf tool exists. My advice would be to choose one modelling tool which
supports your notation (BPMN, activity diagrams, etc.), and
can be extended with a language you are comfortable with (Python, Java, C#, etc.).
In this case, you will find several tools for sure.
For fun, I picked Modelio (https://www.modelio.org/),
made a small activity example,
and a Jython script for it.
## return the first initial node in the selected activity
def getInitialPoint(act):
    for node in act.getOwnedNode():
        if isinstance(node, InitialNode):
            return node

## walk the activity nodes, accumulating the path taken so far
def getPaths(currentPath, currentNode):
    for outgoing in currentNode.getOutgoing():
        node = outgoing.getTarget()
        if isinstance(node, ActivityFinalNode):
            paths.append(currentPath)
            return
        elif isinstance(node, DecisionMergeNode):
            getPaths(currentPath, node)
        else:
            getPaths(currentPath + " - " + node.getName(), node)

## Init
init = getInitialPoint(elt)
currentPath = init.getName()
global paths
paths = []
getPaths(currentPath, init)

## Print the paths found
for p in paths:
    print p
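Outside Modelio, the core of the script — enumerating every path from the start node to a final node — can be sketched in plain Python over an adjacency list (the graph and node names below are made up for illustration):

```python
# Sketch: enumerate all start-to-end paths in a process graph,
# represented as an adjacency list. Node names are illustrative.
def all_paths(graph, start, end, path=None):
    path = (path or []) + [start]
    if start == end:
        return [path]
    paths = []
    for nxt in graph.get(start, []):
        if nxt not in path:          # guard against cycles
            paths.extend(all_paths(graph, nxt, end, path))
    return paths

graph = {
    "start": ["check", "skip"],
    "check": ["approve", "reject"],
    "approve": ["end"],
    "reject": ["end"],
    "skip": ["end"],
}
print(all_paths(graph, "start", "end"))
# -> [['start', 'check', 'approve', 'end'],
#     ['start', 'check', 'reject', 'end'],
#     ['start', 'skip', 'end']]
```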
Hoping it helps,
EBR

CC 1.75 MC 1.7.10: Creating a program which has an exception "program run something"

So I have not tried anything yet, but I know code like the scripts below won't work.
My thought was to create a program which has some functions, e.g. rm, delete, mkdir and edit. All of these programs have something in common: they all take an "exception", like a file name. So I wondered how the programs actually handle that. At first I thought it would require another language, but now I think it can be done in Lua. What I have (which does not work) is this:
Run in the shell: MyProgram run DNS_SERVER
MyProgram
local MyProgramexception = read(MyProgram {$0}:{$1}:{$2})
But I guess it is not that simple; what I need is something that runs if statements, for example like:
public $0 = {exception}
public $1 = {exception}
local run = ({$0}, run)
local del = ({$0}, rm)
local program = ({$1}, dns_server || web_server || other_things..)
This is of course some NON-WORKING code; I just tried to make it look as real as possible.
So I wondered if there is someone out there who actually knows how this works?
I also posted this on Arqade, but it was considered off-topic there.
It is a bit of a vague question to me (and some vague non-Lua code), but I think you mean program arguments, like this:
rename <argument1> <argument2>
To accomplish this, you can store all arguments in a table like this:
local arg = {...}
the ... does the magic. Now, you can access argument #1 by doing this:
arg[1]
I hope I have understood your question well.
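Building on that, a dispatcher for the original "MyProgram run DNS_SERVER" example might look roughly like this. This is a sketch that assumes the ComputerCraft shell and fs APIs are available, and the action names are made up:

```lua
-- Sketch of an argument dispatcher for ComputerCraft Lua.
-- Assumes the ComputerCraft 'shell' and 'fs' APIs; only runs in-game.
local args = { ... }              -- "MyProgram run dns_server" -> {"run", "dns_server"}
local action, target = args[1], args[2]

if action == "run" then
  shell.run(target)               -- run another program by name
elseif action == "del" then
  fs.delete(target)               -- delete a file
else
  print("Usage: MyProgram <run|del> <name>")
end
```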

Why does this snippet for fallback-sourcing a mixed list of Puppet templates/files not work?

This seems to sit between the domains of Stack Overflow and Server Fault, but it seems ultimately a development issue, so I am posting it here.
In order to make my Puppet tree a bit more DRY (while avoiding enormous if-else ladders and hacky "/dev/null" fallbacks in my manifests), I want to be able to just call a function with a mixed list of templates/files to look up in order, and use (and expand if necessary) the first one found. I wrote the code below heavily based on some ancient code (see the commented line for the link to that). I can't see why it doesn't yet work, and would appreciate it if anyone can find what I've missed (or misunderstood).
I am ensuring:
Apache (and Passenger sub-process) are restarted on the master (to pick up the new function) before running puppet on the client
the client is run with --pluginsync
and I have already unsuccessfully tried:
using wrapper.file = filepath instead of wrapper.file = filename
and many other things, which I won't list as they seemed to have been red herrings. I found one email-thread which hinted that such code would only run on the client, whereas it of course needs to run on the master, but I can see the original code from the pastie link has worked successfully for others, and my changes shouldn't make any difference to the client/master aspect of that.
I am running puppet v3.7 on the client and v2.7 on the master (long story short: I can't avoid this for now; see EDIT 3 below), but I think the important point is that they are both >= 2.6, regarding the environment.to_s change.
I call the function with:
$sshd_config_content = multi_source_mixed("template|ssh/sshd_config.${::fqdn}.erb", "file|ssh/sshd_config.${::fqdn}", "template|ssh/sshd_config.${::domain}.erb", "file|ssh/sshd_config.${::domain}", "template|ssh/sshd_config.${::hostname}.erb", "file|ssh/sshd_config.${::hostname}", 'template|ssh/sshd_config.erb', 'file|ssh/sshd_config')
and always get the error:
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: multi_source_mixed: No match found for files/templates: template|ssh/sshd_config.XXX.erb, file|ssh/sshd_config.XXX, template|ssh/sshd_config.XX.erb, file|ssh/sshd_config.XX, template|ssh/sshd_config.X.erb, file|ssh/sshd_config.X, template|ssh/sshd_config.erb, file|ssh/sshd_config at /etc/puppet/environments/YYYY/modules/ZZZZ/manifests/init.pp:337 on node XXX
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
My code (so far) is:
# adapted from template-only version at http://pastie.org/666728
# this doesn't yet work (for me)
module Puppet::Parser::Functions
  require 'erb'

  newfunction(:multi_source_mixed, :type => :rvalue) do |args|
    this_module = 'THIS_MODULE_NAME'
    contents = nil
    environment = compiler.environment.to_s
    sources = args
    sources.each do |file|
      if file.index('|')
        filetype, filename = file.split('|', 2)
      else
        filetype = 'file'
        filename = file
      end
      Puppet.debug("Looking for #{filetype} #{filename} in #{environment}")
      if filetype == 'template'
        if filepath = Puppet::Parser::Files.find_template(filename, environment)
          wrapper = Puppet::Parser::TemplateWrapper.new(self)
          wrapper.file = filename
          begin
            contents = wrapper.result
          rescue => detail
            raise Puppet::ParseError, "Failed to parse template %s: %s" % [filename, detail]
          end
          break
        end
      else
        filepath = Puppet::Module.find(this_module, environment).to_s + '/files/' + filename
        if File.file?(filepath)
          begin
            contents = File.open(filepath, "rb").read
          rescue => detail
            raise Puppet::ParseError, "Failed to get contents from file %s: %s" % [filename, detail]
          end
        end
        break
      end
    end
    if contents == nil
      raise Puppet::ParseError, "multi_source_mixed: No match found for files/templates: #{sources.join(', ')}"
    end
    contents
  end
end
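One detail worth flagging when reading the loop above: in the file branch, the break sits outside the File.file? test, so the loop stops at the first file-type source whether or not it exists — later sources are never tried. A stripped-down sketch of that branch in plain Ruby (not the Puppet API), with the break moved inside the existence check, which is my assumption about the intended behaviour:

```ruby
# Sketch: fall-back file lookup where the loop only stops once a
# source actually exists. Plain Ruby stand-in for the Puppet function.
def first_existing(sources, base)
  contents = nil
  sources.each do |filename|
    filepath = File.join(base, filename)
    if File.file?(filepath)
      contents = File.open(filepath, "rb").read
      break                      # only stop once a source matched
    end
  end
  contents
end
```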
EDIT: BTW - I know the files being sourced already exist in the right place, because this code replaces existing code which has been working but is hardcoded to search for files only, using the following:
source => ["puppet:///modules/ZZZZ/ssh/sshd_config.${::fqdn}", "puppet:///modules/ZZZZ/ssh/sshd_config.${::domain}", "puppet:///modules/ZZZZ/ssh/sshd_config.${::hostname}", 'puppet:///modules/ZZZZ/ssh/sshd_config']
EDIT 2: I proceeded with testing this with client=3.7 and master=2.7 because it is the only setup I can use for now, but I have since found several threads stating that it is painful (and usually impossible) to get an older puppetmaster to cooperate with a newer puppet client, and at least one of the errors I've been ignoring so far relates to that. I suspect some of this specific problem may be a consequence of that too, but unfortunately I have no easy way to test further until the master is upgraded (probably not for at least a week or two) (see below)
EDIT 3: I am now testing it with client and master at v2.7, and the problem remains...

Protect user credentials when connecting R with databases using JDBC/ODBC drivers

Usually I connect to a database with R using JDBC/ODBC driver. A typical code would look like
library(RJDBC)
vDriver = JDBC(driverClass="com.vertica.jdbc.Driver", classPath="/home/Drivers/vertica-jdbc-7.0.1-0.jar")
vertica = dbConnect(vDriver, "jdbc:vertica://servername:5433/db", "username", "password")
I would like others to access the db using my credentials, but I want to protect my username and password. So I plan to save the above script as a "Connections.r" file and ask users to source this file.
source("/opt/mount1/Connections.r")
If I give execute-only permission to Connections.r, others cannot source the file:
chmod 710 Connections.r
R only lets users source the file if I also give read permission, but with read permission my credentials are exposed. Is there any way to solve this while protecting the user credentials?
Unless you deeply obfuscate your credentials by making an Rcpp function or package that does the initial JDBC connection (which won't be trivial), one of your only lighter obfuscation mechanisms is to store your credentials in a file, have your sourced R script read them from the file, use them in the call, and then rm them from the environment right after that call. That will still expose them, but not directly.
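A minimal sketch of that read-use-rm pattern (the file path and the one-value-per-line format are assumptions for illustration; the credentials file itself should be readable only by the owning account):

```r
## Sketch of the "store in a file, read, use, rm" approach described above.
## The path and one-value-per-line format are assumptions for illustration.
read_creds <- function(path) {
  vals <- readLines(path)                # line 1 = username, line 2 = password
  list(user = vals[1], password = vals[2])
}

## In Connections.r:
##   creds   <- read_creds("/opt/mount1/.db_creds")
##   vertica <- dbConnect(vDriver, "jdbc:vertica://servername:5433/db",
##                        creds$user, creds$password)
##   rm(creds)   # drop the credentials from the environment right away
```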
One other way, since the users have their own logins to RStudio Server, is to use Hadley's new secure package (a few of us security folks are running it through its paces): add the user keys and have your credentials stored encrypted, but have your sourced R script auto-decrypt them. You'll still need to rm any variables you use, since they'll be part of the environment if you don't.
A final way, since you're giving them access to the data anyway, is to use a separate set of credentials (the way you phrased the question it seems you're using your credentials for this) that only work in read-only mode to the databases & tables required for these analyses. That way, it doesn't matter if the creds leak since there's nothing "bad" that can be done with them.
Ultimately, I'm confused as to why you can't just set up the users with read-only permissions on the database side. That's what role-based access controls are for. It's administrative work, but it's absolutely the right thing to do.
Do you want to give someone access, but not have them be able to see your credentials? That's not possible in this case. If my code can read a file, I can see everything in the file.
Make more accounts on the SQL server. Or make one guest account. But you're trying to solve the problem that account management solves.
Have the credentials sent as command arguments? Here's an example of how one would do that:
suppressPackageStartupMessages(library("argparse"))

# create parser object
parser <- ArgumentParser()

# specify our desired options
# by default ArgumentParser will add a help option
parser$add_argument("-v", "--verbose", action="store_true", default=TRUE,
                    help="Print extra output [default]")
parser$add_argument("-q", "--quietly", action="store_false",
                    dest="verbose", help="Print little output")
parser$add_argument("-c", "--count", type="integer", default=5,
                    help="Number of random normals to generate [default %(default)s]",
                    metavar="number")
parser$add_argument("--generator", default="rnorm",
                    help="Function to generate random deviates [default \"%(default)s\"]")
parser$add_argument("--mean", default=0, type="double",
                    help="Mean if generator == \"rnorm\" [default %(default)s]")
parser$add_argument("--sd", default=1, type="double",
                    metavar="standard deviation",
                    help="Standard deviation if generator == \"rnorm\" [default %(default)s]")

# get command line options; if the help option is encountered, print help and exit;
# otherwise, if options are not found on the command line, set defaults
args <- parser$parse_args()

# print some progress messages to stderr if "quietly" wasn't requested
if (args$verbose) {
    write("writing some verbose output to standard error...\n", stderr())
}

# do some operations based on user input
if (args$generator == "rnorm") {
    cat(paste(rnorm(args$count, mean=args$mean, sd=args$sd), collapse="\n"))
} else {
    cat(paste(do.call(args$generator, list(args$count)), collapse="\n"))
}
cat("\n")
Sample run (no parameters):
usage: example.R [-h] [-v] [-q] [-c number] [--generator GENERATOR]
                 [--mean MEAN] [--sd standard deviation]

optional arguments:
  -h, --help            show this help message and exit
  -v, --verbose         Print extra output [default]
  -q, --quietly         Print little output
  -c number, --count number
                        Number of random normals to generate [default 5]
  --generator GENERATOR
                        Function to generate random deviates [default "rnorm"]
  --mean MEAN           Mean if generator == "rnorm" [default 0]
  --sd standard deviation
                        Standard deviation if generator == "rnorm" [default 1]
The package was apparently inspired by the python package of the same name, so looking there may also be useful.
Looking at your code, I'd probably rewrite it as follows:
library(RJDBC)
library(argparse)

parser <- ArgumentParser()
parser$add_argument('--driver', dest='driver', default="com.vertica.jdbc.Driver")
parser$add_argument('--classPath', dest='classPath', default="/home/Drivers/vertica-jdbc-7.0.1-0.jar")
parser$add_argument('--url', dest='url', default="jdbc:vertica://servername:5433/db")
parser$add_argument('--user', dest='user', default='username')
parser$add_argument('--password', dest='password', default='password')
args <- parser$parse_args()

vDriver <- JDBC(driverClass=args$driver, classPath=args$classPath)
vertica <- dbConnect(vDriver, args$url, args$user, args$password)
# continue here
Jana, it seems odd that you are willing to let the users connect via R but not in any other way. How does that obscure anything from them?
I don't understand why you would not be satisfied with a guest account that has specific SELECT-only access to certain tables (or even views)?

Puppet: Can $hostname be checked against a master file before running manifest head?

I've seen someone check whether an agent's MAC address matches a specific regular expression before running the stuff specified below. The example is something like this:
if $is_virtual == "true" and $kernel == "Linux" and $macaddress =~ /^02:00:0A/ {
  include nmonitor
  include rootsh
  include checkmk-agent
  include backuppcacc
  include onecontext
  include sysstatpkg
  include ensurekvmsudo
  include cronntpdate
}
That's just it in that particular manifest file. Similarly, another manifest example, this time matching nodes via a regular expression:
node /^mi-cloud-(dev|stg|prd)-host/ {
  if $is_virtual == 'false' {
    include etchosts
    include checkmk-agent
    include nmonitor
    include rootsh
    include sysstatpkg
    include cronntpdate
    include fstab-ds-dev
  }
}
I've been asked whether a similar concept can be applied to checking the agent's hostname against a master file of hostnames that are allowed to run, or otherwise.
I am not sure whether it can be done, but the rough idea goes something like:
file { 'hostmasterfile.ini':
  ensure  => present,
  source  => 'puppet:///test/hostmaster.ini',
  content => $hostname,
}
$coname = content

# Usually the start / head of the manifest
if $hostname == $coname {
  include <a>
  include <b>
}
Note: $fqdn is out of the question.
To my knowledge, I have not seen any sample manifest that matches this request. What's more, it goes against the standard practice of keeping things easy to manage and not putting all your eggs in one basket.
An ex-colleague of mine claims the idea above amounts to self-provisioning. However, that concept does not exist in Puppet (he posed that question at a workshop a few months back). I am not sure how true that is, though.
If the above can be done, any suggestions on how? Or is it best to go back to the standard one-manifest-per-node approach for easy maintenance?
Thanks very much.
M
Well, you can replace your node blocks with if constructs.
if $hostname == 'host1' {
  # manifest for host1 here
}
You can combine this with some sort of ini file (e.g., using the generate() function). If the <a> and <b> for the include statements are then fetched from your ini file as well, you have constructed a crude ENC.
Note that this has security implications - any agent can claim to have any host name. It's even very simple to do:
FACTER_hostname=kerberos01 puppet agent --test
Any node can receive the catalog for kerberos01 this way. (node blocks rely on $certname instead, which cannot be forged.)
I could not decipher your precise intent from your question, but I suspect that you really want an ENC or a Hiera based approach.
Edit after feedback from your first comment:
To make the master read contents from local files, you should
get rid of the file { 'hostmasterfile.ini': } - it only allows you to set contents, not retrieve them
initialize the variable content using the file function (this will make all nodes fail if the file is not readable)
The code could look like this (assuming that there can be multiple host names in the ini file).
$ini_data = file('/etc/puppet/files/test/hostmaster.ini')
Next step would be a regex lookup like this:
if $ini_data =~ /name=$hostname/ {
Unfortunately, this does not work! Puppet will not expand variable values in regular expressions, apparently.
You can use this (kind of silly) workaround:
$ini_lookup = regsubst($ini_data, "name=$hostname", '__FOUND__')
if $ini_lookup =~ /__FOUND__/ {
  ...
}
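Putting the pieces together, the whole lookup could look roughly like this. This is a sketch only; the ini path/format and the included class names are placeholders taken from the examples above:

```puppet
# Rough combined sketch of the file() + regsubst() workaround.
# Path, ini format and class names are placeholders from the examples above.
$ini_data   = file('/etc/puppet/files/test/hostmaster.ini')
$ini_lookup = regsubst($ini_data, "name=$hostname", '__FOUND__')
if $ini_lookup =~ /__FOUND__/ {
  include a
  include b
}
```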
Final remark about security: If your team is adamant about not using $certname for this lookup (although it should be easy to map host names to cert names), you should consider adding the host name to your trusted facts.
