I would like to read a value from Hiera for my custom fact. Is it possible?
I want to populate a fact based on a Hiera value. Please see my code below for reference.
require 'facter'

# Default for non-Linux nodes
Facter.add(:jboss_base_algorithm) do
  setcode do
    nil
  end
end

# Linux
Facter.add(:jboss_base_algorithm) do
  confine :kernel => :linux
  setcode do
    Facter::Util::Resolution.exec("/usr/bin/echo '{hiera_value}' | /usr/bin/base64")
  end
end
Any help is much appreciated.
Custom facts cannot rely on Hiera in a master / agent configuration, because facts are evaluated by the agent whereas the Hiera data live on the master.
Custom facts probably shouldn't rely on Hiera data even for local manifest application, in part because that inhibits switching to master / agent, and in part because the data are already accessible directly from Hiera. You don't need a custom fact to access the data.
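For comparison, here is a minimal manifest-side sketch of reading the value directly from Hiera instead of going through a fact. The key name jboss_base_algorithm is an assumption, and base64() comes from the puppetlabs-stdlib module:

# Hypothetical lookup; 'jboss_base_algorithm' is an assumed Hiera key,
# and base64() is provided by puppetlabs-stdlib.
$algorithm = lookup('jboss_base_algorithm')
$encoded   = base64('encode', $algorithm)

notify { "jboss_base_algorithm (base64): ${encoded}": }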
Related
I am planning to rewrite my sensor's driver in order to try to get my module into the Linux kernel. I was wondering whether there are requirements regarding the organization of the source code. Is it mandatory to keep all the code in a single source file, or is it possible to split it across several?
I would prefer a modular approach for my implementation, with one file containing the API and all the structures required for the Kernel registration, and another file with the low level operations to exchange data with the sensor (i.e. mysensor.c & mysensor_core.c).
What are the requirements from this point of view?
Is there a limitation in terms of lines of codes for each file?
Note:
I tried to have a look at the official GitHub repo, and it seems to me that the code is always limited to a single source file.
https://github.com/torvalds/linux/tree/master/drivers/misc
Here is an extract from "linux/drivers/iio/gyro/Makefile" as an example:
# Currently this is rolled into one module, split it if
# we ever create a separate SPI interface for MPU-3050
obj-$(CONFIG_MPU3050) += mpu3050.o
mpu3050-objs := mpu3050-core.o mpu3050-i2c.o
The "mpu3050.o" file used to build the "mpu3050.ko" module is built by linking two object files "mpu3050-core.o" and "mpu3050-i2c.o", each of which is built by compiling a correspondingly named source file.
Note that if the module is built from several source files as above, the base name of the final module ("mpu3050") must be different from the base name of each of the source files ("mpu3050-core" and "mpu3050-i2c"). So in your case, if you want the final module to be called "mysensor.ko", you will need to rename the "mysensor.c" file.
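Applied to your layout, a hypothetical Makefile fragment might look like this (the CONFIG_MYSENSOR symbol and the mysensor-main.c rename are illustrative assumptions, not fixed conventions):

# Build mysensor.ko by linking two objects; the module's base name
# ("mysensor") must not match any of its source file base names,
# hence mysensor.c is assumed renamed to mysensor-main.c here.
obj-$(CONFIG_MYSENSOR) += mysensor.o
mysensor-objs := mysensor-main.o mysensor-core.o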
I am trying to edit my torrc and make all of the nodes funnel through one country.
So far I am able to force the entry and exit nodes but don't know how to change the middle node... any ideas?
I have already tried "MiddleNodes" and "RelayNodes"
EntryNodes {us},{ca}
ExitNodes {us},{ca}
StrictNodes 1
It's possible to restrict middle nodes using the MiddleNodes option; see the Tor manual: https://2019.www.torproject.org/docs/tor-manual.html.en
MiddleNodes node,node,…
A list of identity fingerprints and country
codes of nodes to use for "middle" hops in your normal circuits.
Normal circuits include all circuits except for direct connections to
directory servers. Middle hops are all hops other than exit and entry.
This is an experimental feature that is meant to be used by
researchers and developers to test new features in the Tor network
safely. Using it without care will strongly influence your anonymity.
This feature might get removed in the future. The HSLayer2Node and
HSLayer3Node options override this option for onion service circuits,
if they are set. The vanguards addon will read this option, and if
set, it will set HSLayer2Nodes and HSLayer3Nodes to nodes from this
set. The ExcludeNodes option overrides this option: any node listed in
both MiddleNodes and ExcludeNodes is treated as excluded. See the
ExcludeNodes option for more information on how to specify nodes.
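For example, to match the entry and exit restrictions already in your torrc, something like this should work (a sketch; untested):

EntryNodes {us},{ca}
MiddleNodes {us},{ca}
ExitNodes {us},{ca}
StrictNodes 1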
Edit: see the newer answer by user1652110 describing the MiddleNodes option, which was added in January 2019.
There is no option to do so. The closest option you can try is ExcludeNodes, with as large a list of country codes as you can come up with that doesn't include the countries you do want to use.
Also note, at the time of writing, limiting your circuits' entry and exit points to relays in the US and Canada might severely limit your performance, anonymity, and reliability since there just aren't that many high-bandwidth exits and guards in these two countries.
I am a new mainframer and I have been given access to/control of a test system to play around in and learn. We have been trying to get IMS set up on the system but when I try to log into IMS 14 I get the error
"INIT SELF FAILED WITH SENSE 08570002".
I have found that the error code means, "The SSCP-PLU session is inactive."
I am thinking that the issue is with the VTAM configuration but I am not sure what exactly needs to be fixed or where in z/OS to look for it.
I have asked around and dug through documentation with no luck so any help would be very much appreciated.
The message indicates that an attempt was made to establish a session between the SSCP (VTAM) and a primary LU (an application), and the application was not available. This is done on behalf of an SLU (secondary logical unit), which is generally a terminal or printer.
This could be the result of several situations, but here are some common ones:
An attempt was made to log on to something like TSO, CICS, IMS, ... before the VTAM ACB was actually opened. You can attempt the request again later when the service is up.
To determine whether the PLU (application) is available, use the VTAM command D NET,ID=vtamappl, where vtamappl is the application ID you are trying to connect to. This command is entered on the console directly or through a secondary means like SDSF.
There may be a LOGAPPL= statement coded on the LU definition that tells VTAM to attempt to initiate a session when starting the LU. In your case this would appear to be happening before the PLU (application) is up. The LU definition (or generic definition) is in the VTAMLST concatenation; a sketch follows.
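For illustration, such an LU definition in VTAMLST might look roughly like this (the LU name, LOCADDR value, and application name are all hypothetical):

* Hypothetical VTAMLST LU definition; TERM01, LOCADDR=2 and IMS14A are
* made-up values. LOGAPPL asks VTAM to initiate a session with IMS14A
* as soon as TERM01 becomes active.
TERM01   LU    LOCADDR=2,LOGAPPL=IMS14A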
This manual describes the sense code in more detail.
I'm new to Puppet and I've set up two Puppet master instances which are in the same IP range.
E.g.: assume there are two Puppet master nodes, A = 192.168.4.23 and B = 192.168.4.66. There are Puppet agents configured to pull from each respective node; let's say C pulls from A and D pulls from B.
My configuration where C pulls from A is fine; that's what I expect. But when D pulls from B it doesn't work. However, when I replace the file that D pulls from B with the corresponding file from A (placed on B), the agent script runs properly.
I'd appreciate any ideas on what might be going on.
This is a thoroughly confusing question...
A.fileA <-> C Works.
B.fileB <-> D Does not work.
B.fileA <-> D Works.
There's just not enough information to go off of here.
What is fileA?
What is fileB?
What is the script?
What is the relationship between A and B where fileA is given to B?
Without this information, I can only offer a recommendation. Assuming you're using some source control management for your manifests and modules: if you're not already using r10k/Code Manager with dynamic directory environments, you should look into that in order to improve the integrity of the data available to the agents on each of the masters. This way, you can be relatively certain that what you expect to be on each of the masters truly is; a minimal configuration sketch follows.
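For instance, a minimal r10k configuration might look like this (the control-repo URL and paths are placeholders, not taken from your setup):

# /etc/puppetlabs/r10k/r10k.yaml -- a hypothetical minimal config; the
# remote URL is a placeholder for your own control repository.
:cachedir: '/var/cache/r10k'
:sources:
  :main:
    remote: 'https://git.example.com/puppet/control-repo.git'
    basedir: '/etc/puppetlabs/code/environments'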
In Puppet, I would like to create entries in the hosts files of all servers in a large group:
256.344.987.776 6.fqn.mycompany.info my-hosts-hostname6
256.344.987.777 7.fqn.mycompany.info my-hosts-hostname7
256.344.987.778 8.fqn.mycompany.info my-hosts-hostname8
256.344.987.779 9.fqn.mycompany.info my-hosts-hostname9
256.344.987.780 10.fqn.mycompany.info my-hosts-hostname10
where the IP is taken from the eth2 fact, the FQDN is formed by concatenating the hostname fact with the domain fact, and the short name is simply the hostname fact.
I'm not sure how to best approach this.
It sounds like you want to glean the information from all of your hosts, collate it, and provide it to all the hosts. This is one of the classic use cases for exported resources. And of course, Puppet provides a built-in Host resource type for managing the individual entries. A minimal class that handles such a job might look like this:
class site::hosts {
  # Export *this* host's entry for all machines to pick up
  @@host { "${hostname}.${domain}":
    ensure       => 'present',
    ip           => $ipaddress_eth2,
    host_aliases => $hostname,
  }

  # Apply *all* machines' hosts entries to this machine
  Host <<| |>>
}
You will need to have exported resources enabled on your master (storeconfigs, typically backed by PuppetDB) for this to work; a minimal sketch follows. After you first put it into place, it may take a couple of cycles to stabilize, since on any given run each host will receive only the entries provided by machines that have already received catalogs containing this class.
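For reference, enabling exported resources on the master might look like this in puppet.conf, assuming PuppetDB is already installed and connected:

# puppet.conf on the master -- a minimal sketch; assumes PuppetDB is
# already installed and reachable.
[master]
  storeconfigs = true
  storeconfigs_backend = puppetdb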