Chef: passing variables - linux

Every time we create a new server, I have a bash script that asks the end user a set of questions to help Chef configure the custom server. Their answers need to be injected into Chef so that I can use them within my Chef recipes (to set the server hostname to "server1.stack.com", for instance). I've read about a JSON attribute option for chef-client that may be helpful, but I'm not sure how it would work in our environment.
Note: We run chef-client on all of our systems every 15 minutes via cronjob to get updates.
Pseudocode:
echo -n "What is the server name?"
read hostname
chef-client -j {'hostname' => ENV['$hostname']}

Two issues: first, -j takes a filename, not raw JSON; second, using -j will entirely override the node data coming from the server, which also includes the run list and environment. If this is being done at system provisioning time you can definitely do stuff like this; see my AMI bootstrap script for an example. If it is done after initial provisioning, you are probably best off writing those responses to a file and then reading that in from your Chef recipe code.
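The "write those responses to a file" approach can be sketched like this; the file path and the fallback hostname are illustrative assumptions, not Chef conventions:

```shell
#!/bin/sh
# Sketch of the "write responses to a file" approach; /tmp/setup_answers.json
# and the fallback hostname are assumptions for illustration.
printf 'What is the server name? '
read answer_hostname || answer_hostname="server1.stack.com"  # fallback when run non-interactively

# Persist the answer where a Chef recipe can later read and parse it.
echo "{\"hostname\": \"$answer_hostname\"}" > /tmp/setup_answers.json
```

A recipe could then read the file back with something like JSON.parse(File.read('/tmp/setup_answers.json')) and set node attributes from the result.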

Passing raw JSON into chef-client is possible, but requires a little creativity. You simply do something like this:
echo "{\"hostname\": \"$hostname\"}" | chef-client -j /dev/stdin
Note the double quotes: with single quotes, the shell would pass the literal string $hostname instead of its value.
The values in your JSON will be deep-merged with the "normal" attributes stored on the Chef server. You can also include a run_list in your JSON, which will replace (not merge with) the run_list on the Chef server.
You can see the run_list replacing the server run list here:
https://github.com/opscode/chef/blob/cbb9ae97e2d3d90b28764fbb23cb8eab4dda4ec8/lib/chef/node.rb#L327-L338
And you can see the deep merge of attributes here:
https://github.com/opscode/chef/blob/cbb9ae97e2d3d90b28764fbb23cb8eab4dda4ec8/lib/chef/node.rb#L305-L311
Also, any attributes you declare in your json will override the attributes already stored on the chef-server.
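A corrected, self-contained version of the pseudocode from the question, using the /dev/stdin trick: the hostname is hardcoded here in place of the interactive read, and the chef-client call is guarded so the sketch is safe to run on a machine without Chef installed.

```shell
#!/bin/sh
# Sketch: feed a prompted value to chef-client via /dev/stdin.
# Double quotes (not single quotes) let the shell expand $hostname.
hostname="server1.stack.com"   # would normally come from: read hostname
json="{\"hostname\": \"$hostname\"}"
echo "$json"

# Guarded so the sketch runs safely where chef-client is absent.
if command -v chef-client >/dev/null 2>&1; then
    echo "$json" | chef-client -j /dev/stdin
fi
```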

Why might DSBulk Load stop operation without any errors?

I have created a Cassandra database in DataStax Astra and am trying to load a CSV file using DSBulk on Windows. However, when I run the dsbulk load command, the operation neither completes nor fails. I receive no error message at all, and I have to manually terminate the operation after several minutes. I have tried to wait it out and have let the operation run for 30 minutes or more with no success.
I know that a free tier of Astra might run slower, but wouldn't I see at least some indication that it is attempting to load data, even if slowly?
When I run the command, this is the output that is displayed and nothing further:
C:\Users\JT\Desktop\dsbulk-1.8.0\bin>dsbulk load -url test1.csv -k my_keyspace -t test_table -b "secure-connect-path.zip" -u my_user -p my_password -header true
Username and password provided but auth provider not specified, inferring PlainTextAuthProvider
A cloud secure connect bundle was provided: ignoring all explicit contact points.
A cloud secure connect bundle was provided and selected operation performs writes: changing default consistency level to LOCAL_QUORUM.
Operation directory: C:\Users\JT\Desktop\dsbulk-1.8.0\bin\logs\LOAD_20210407-143635-875000
I know that DataStax recently changed Astra so that you need credentials from a generated Token to connect DSBulk, but I have a classic DB instance that won't accept those token credentials when entered in the dsbulk load command. So, I use my regular user/password.
When I check the DSBulk logs, the only text is the same output displayed in the console, which I have shown in the code block above.
If it means anything, I have the exact same issue when trying to run the dsbulk count operation.
I have the most recent JDK and have set both the JAVA_HOME and PATH variables.
I have also tried adding dsbulk/bin directory to my PATH variable and had no success with that either.
Do I need to adjust any settings in my Astra instance?
Lastly, is it possible that my basic laptop is simply not powerful enough for this operation or just running the operation crazy slow?
Any ideas or help is much appreciated!

snmp proxy in python

I need a kind of SNMP v2c proxy (in Python) which:
- reacts to an snmpset command:
  - reads the value from the command and writes it to a YAML file
  - runs a custom action (preferably in a different thread, while still replying success to the snmpset command), for example:
    - run another snmpset against a different machine, or
    - ssh to user@host and run some command, or
    - run some local tool
- and reacts to an snmpget command:
  - checks the value for the requested OID in the YAML file
  - returns that value
I'm aware of pysnmp, but the documentation just confuses me. I can imagine I need some kind of command responder (I need SNMP v2c) and some object to store configuration/values from the YAML file. But I'm completely lost.
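For reference, the client-side commands such a responder must answer look like this in Net-SNMP syntax; the agent address, community string, and OID below are placeholders, not values from the question:

```shell
#!/bin/sh
# Net-SNMP client commands the v2c responder would have to handle.
# Agent address, community string, and OID are placeholders.
agent="127.0.0.1:1161"
community="private"
oid="1.3.6.1.4.1.99999.1.0"

# Guarded so the sketch is safe to run where Net-SNMP is not installed.
if command -v snmpset >/dev/null 2>&1; then
    snmpset -v2c -c "$community" "$agent" "$oid" s "new-value"   # exercises the SET branch
    snmpget -v2c -c "$community" "$agent" "$oid"                 # exercises the GET branch
fi
```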
I think you should be able to implement all that with pysnmp. Technically, that would not be SNMP proxy but SNMP command responder.
You can probably take this script as a prototype and implement your (single) cbFun which (like the one in the prototype) receives the PDU and branches on its type (GET or SET SNMP command in your case). Then you can implement the value read from the .yaml file in the GetRequestPDU branch, and the .yaml file write, along with sending an SNMP SET command elsewhere, in the SetRequestPDU branch.
The pysnmp API we are talking about here is the low-level one. With it you can't ask pysnmp to route messages between callback functions -- it always calls the same callback for all message types.
However, you can also base your tool on the higher-level SNMP API which was introduced along with the SNMPv3 model. With it you can register your own SNMP applications (effectively, callbacks) based on the PDU type they support. But given that you only need SNMPv2c support, I am not sure the higher-level API would pay off in the end.
Keep in mind that SNMP is generally time-sensitive. If running a local command or SSH-ing elsewhere is going to take more than a couple of seconds, a standard SNMP manager might start retrying and may eventually time out. If you look at how Net-SNMP's snmpd works, it runs external commands and caches the result for tens of seconds. That lets the otherwise timing-out SNMP manager eventually get a slightly outdated response.
Alternatively, you may consider writing a custom variation plugin for SNMP simulator which largely can do what you have described.

Starting a process using systemd on Linux: different behaviour using "su root -c"

We have a SignalR push server using Mono/Owin on a Linux Debian server.
We performed a load test and we get different behaviour depending on how the push server is started by systemd.
Working:
ExecStart=/bin/su root -c '/usr/bin/mono --server mydaemon.exe -l:/var/run/mydaemon.pid'
Hanging after around 1k connections:
ExecStart=/usr/bin/mono --server mydaemon.exe -l:/var/run/mydaemon.pid
We can reproduce the different behaviour at any time: in the second case, the test client stays in the SignalR negotiate call without receiving any answer.
We also activated the export of the Mono "max threads" environment variables in both cases.
So the question is: what could be the difference in system resource usage/availability between these two cases?
In the systemd service definition, you can specify the limit for the number of open files, so if you add a line:
LimitNOFILE=65536
in the [Service] section of the service definition file, it should set the limit to that value, rather than the default which comes through systemd as 1024.
The systemd-system.conf file defines the parameters for defaults for the limits (e.g. DefaultLimitNOFILE), and the systemd.exec manual page defines the parameters that can be used to set overrides on the various limits.
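Put together, the [Service] section of a unit file with the raised limit might look like this (the ExecStart line is the question's own; 65536 is the value suggested above):

```ini
[Service]
ExecStart=/usr/bin/mono --server mydaemon.exe -l:/var/run/mydaemon.pid
LimitNOFILE=65536
```

This explains the difference between the two start-up methods: going through `su` picks up the PAM-configured limits (e.g. from /etc/security/limits.conf), while a direct ExecStart inherits systemd's default of 1024 open files, which a push server holding thousands of connections quickly exhausts.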

Why can't I access Jenkins URLs from within a Jenkins groovy script?

I'm using the Jenkins Dynamic Parameter Plugin and the Jenkins SSH credentials plugin, and would like to use them together so that I can have groovy code that auto-populates a choice parameter with the ssh systems I have configured (or, more likely, a subset) and lets me pick which system I want to run a deployment to. I've found a URL that provides the list of ssh hosts and have this basic code already written:
def myURL="http://myJenkins/job/myJob/descriptorByName/org.jvnet.hudson.plugins.SSHBuilder/fillSiteNameItems"
def allText=new URL(myURL).getText()
I've verified the URL does return JSON with the list of connections when I hit it from anywhere outside of Jenkins (REST client, wget, and even groovysh), but when I try to call it inside the dynamic parameter groovy code, I keep getting:
java.net.ConnectException: Connection refused
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:369)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:230)
...
So I'm wondering if this is some sort of threading issue (the thread that is running this is maybe the same thread that would respond to the http request), but my knowledge of Jenkins at a programming level is somewhat limited. If anyone can point me to how to get what I'm after (maybe even in a simpler way), I'd really appreciate it.

In Oracle, how do you change the 'default' database?

I really should know this, but would someone tell me how to change the default database on Linux?
For example:
I have a database test1 on server1 with ORACLE_SID=test1. So, to connect to test1 I can use:
sqlplus myuser/password
Connects to the default database, test1
I would now like the default sqlplus connection to go to database test2 on server server2.
So, I've updated tnsnames so that the old test1 entry now points to test2@server2. I've also added a separate entry for test2 that points to the same place. However, the default connection still seems to go to test1@server1.
The following both work fine and go to database test2 on server2:
sqlplus myuser/password@test1
sqlplus myuser/password@test2
But the default connection, sqlplus myuser/password, goes to test1@server1.
Any ideas?
Thanks.
To expand on kerchingo's answer: Oracle has multiple ways to identify a database.
The best way -- the one that you should always use -- is USER/PASSWORD@SERVER. This will use the Oracle naming lookup (tnsnames.ora) to find the actual server, which might be on a different physical host every time you connect to it. You can also specify an Oracle connection string as SERVER, but pretend you can't.
There are also two ways to specify a default server via environment variables. The first is TWO_TASK, which uses the naming lookup, and the second is ORACLE_SID, which assumes that the server is running on the current machine. ORACLE_SID takes precedence over TWO_TASK.
The reason that you should always use an explicit connect string is that you have no idea whether the user has set TWO_TASK, ORACLE_SID, both, or neither; nor do you know what they might be set to. Setting both to different values is a particularly painful problem to diagnose, particularly over the phone with a person who doesn't really understand how Oracle works (been there, done that).
Assuming you're logged into server1, you'll need to connect to test2 using
sqlplus myuser/password@test2
because you have to go through a listener to get to server2. The string test2 identifies an entry in your tnsnames.ora file that specifies how to connect to test2. You won't be able to connect to a different server using the first form of your sqlplus command.
If both instances (test1, test2) were on server1, then you could, as @kerchingo states, set the ORACLE_SID environment variable to point at another instance.
Define an environment variable LOCAL with the TNS alias of your database.
> set LOCAL=test1
> sqlplus myuser/password
> ... connected to test1
> set LOCAL=test2
> sqlplus myuser/password
> ... connected to test2
This works on a Windows client; not sure about other OSes.
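On Linux, the counterpart of LOCAL is the TWO_TASK variable mentioned earlier. A sketch, assuming test2 is a valid alias in tnsnames.ora, and clearing ORACLE_SID first since it takes precedence over TWO_TASK:

```shell
#!/bin/sh
# Linux counterpart of the Windows LOCAL variable shown above.
# Assumes test2 is a valid alias in tnsnames.ora.
unset ORACLE_SID      # ORACLE_SID would take precedence over TWO_TASK
export TWO_TASK=test2

# Guarded so the sketch is safe to run where sqlplus is not installed;
# with TWO_TASK set, a bare "sqlplus myuser/password" connects to test2.
if command -v sqlplus >/dev/null 2>&1; then
    sqlplus myuser/password </dev/null
fi
```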
The correct question is 'How do I change the default service?'. The Oracle DBMS offers two types of connection request: explicit and implicit. In an explicit request, you supply three operands, as in sqlplus username/password@service. In an implicit request, you omit the third operand.
An implicit connection applies only when the client host and the server host are the same; consequently, the listener is on the same host.
The listener is the component that initially responds to a connection request. When handling an implicit-connection request from the same host,
it checks whether the instance name has been set, i.e. the value of the shell variable ORACLE_SID.
If it is set, the listener can handle the implicit-connection request. Otherwise, it cannot, and you must make an explicit-connection request, supplying the third operand.
The listener config file, listener.ora, associates instances with services.
To change the default service you connect to, change the default instance:
that is, change the default value of the shell variable ORACLE_SID. You do this in an OS user config file such as .profile or similar.
Hope this helps.
I think it is set in your environment, can you echo $ORACLE_SID?
