Puppet read file content and generate a hash

I have a file called fstab.txt which contains:
UUID=86861354-d783-4b9e-a871-e9fbbfc35c22 /mnt/d1 ext4 defaults 1 2
UUID=ffa788ba-0802-4305-ab59-2a34dda3a706 /mnt/d2 ext4 defaults 1 2
UUID=993eec37-9c6d-4ba6-9ed3-77f2d7652256 /mnt/d3 ext4 defaults 1 2
UUID=36817374-0d46-4d5b-ac9b-2229268b0978 /mnt/d4 ext4 defaults 1 2
I want to read the file and generate a hash as below:
hash = {
"UUID=86861354-d783-4b9e-a871-e9fbbfc35c22" => "/mnt/d1",
"UUID=ffa788ba-0802-4305-ab59-2a34dda3a706" => "/mnt/d2",
"UUID=993eec37-9c6d-4ba6-9ed3-77f2d7652256" => "/mnt/d3",
"UUID=36817374-0d46-4d5b-ac9b-2229268b0978" => "/mnt/d4",
}
Currently I am thinking of doing it this way:
$output = generate("/bin/cat fstab.txt")
and then splitting $output apart.
Could someone guide me to a better way to do this?
Thanks in advance.
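
One possible alternative (not from the thread): a minimal sketch assuming Puppet 4+ syntax and that the file is readable on the node compiling the catalog; the path below is a placeholder.
# Build the UUID => mountpoint hash at catalog compile time.
# '/tmp/fstab.txt' is a placeholder path.
$lines = file('/tmp/fstab.txt').split("\n").filter |$l| { $l =~ /\S/ }
$fstab_hash = $lines.reduce({}) |$memo, $line| {
  $fields = $line.split(/\s+/)
  $memo + { $fields[0] => $fields[1] }
}
notice($fstab_hash)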

Related

logstash - output single event into multiple line output file

I have a jdbc input with a select statement. Each row in the result set has 3 columns: c1, c2, c3. The event emitted has the following structure:
{"c1":"v1", "c2":"v2", "c3":"v3", "file_name":"tmp.csv"}
I want to output the values in a file in the following manner:
output file:
v1
v2
v3
This is the output configuration:
file {
  path => "/tmp/%{file_name}"
  codec => plain { format => "%{c1}\n%{c2}\n%{c3}" }
  write_behavior => "overwrite"
  flush_interval => 0
}
But what is generated is:
output file:
v1\nv2\nv3
Is the plain codec plugin not the one I need? Is there another codec plugin for the file output plugin that I can use, or is my only option to write my own plugin?
Thanks!
A bit late to the party, but maybe this helps others. Although it looks funky, you should be able to get away with simply hitting Enter within the format string (using the line codec).
file {
  path => "/tmp/%{file_name}"
  codec => line {
    format => "%{c1}
%{c2}
%{c3}"
  }
  write_behavior => "overwrite"
  flush_interval => 0
}
Not the prettiest approach, but it works. Not sure if there is a better way.
What you are looking for is the line codec plugin: https://www.elastic.co/guide/en/logstash/current/plugins-codecs-line.html

Get mounted disk space on Temp in Linux machine

I am new to the Perl world. I have written a Perl script for calculating free disk space, but the output it generates gives me a different number than what the df -h command actually shows.
My requirement is to show the free disk space for a specific mount point. E.g. I want to show only the "Use%" figure for /boot, and it should match the df -h figure.
Please find my script for reference by clicking the link named Actual Script below.
Actual Script
The df function from the Filesys::Df module returns a reference to a hash (perldoc perlreftut) with filesystem info fields.
Example:
$VAR1 = {
  user_bavail => '170614.21875',
  user_blocks => '179796.8203125',
  user_fused => 408762,
  used => '9182.6015625',
  fused => 408762,
  bavail => '170614.21875',
  user_used => '9182.6015625',
  su_bavail => '180077.20703125',
  ffree => 11863876,
  fper => 3,
  user_favail => 11863876,
  favail => 11863876,
  user_files => 12272638,
  blocks => '189259.80859375',
  su_favail => 11863876,
  files => 12272638,
  per => 5,
  su_blocks => '189259.80859375',
  bfree => '180077.20703125',
  su_files => 12272638
};
So your free space is:
my $ref = df($dir, 1);
print $ref->{bavail} . " bytes\n";
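
A minimal sketch of the asker's actual requirement (matching the df -h "Use%" figure for /boot), assuming the Filesys::Df module is installed; per the dump above, the per field holds that percentage:
#!/usr/bin/perl
use strict;
use warnings;
use Filesys::Df;

# Print the Use% figure for /boot, as `df -h` reports it.
my $dir = '/boot';
my $ref = df($dir) or die "df() failed for $dir\n";
printf "%s Use%%: %d%%\n", $dir, $ref->{per};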

MySQL Seconds_Behind_Master very high

Hi, we have MySQL master-slave replication; the master is MySQL 5.6 and the slave is MySQL 5.7. Seconds_Behind_Master is 245000. How can I make it catch up faster? Right now it is taking more than 6 hours to catch up 100,000 seconds.
My slave's RAM is 128 GB. Below is my my.cnf:
[mysqld]
# Remove leading # and set to the amount of RAM for the most important data
# cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
innodb_buffer_pool_size = 110G
# Remove leading # to turn on a very important data integrity option: logging
# changes to the binary log between backups.
# log_bin
# These are commonly set, remove the # and set as required.
basedir = /usr/local/mysql
datadir = /disk1/mysqldata
port = 3306
#server_id = 3
socket = /var/run/mysqld/mysqld.sock
user=mysql
log_error = /var/log/mysql/error.log
# Remove leading # to set options mainly useful for reporting servers.
# The server defaults are faster for transactions and fast SELECTs.
# Adjust sizes as needed, experiment to find the optimal values.
join_buffer_size = 256M
sort_buffer_size = 128M
read_rnd_buffer_size = 2M
#copied from old config
#key_buffer = 16M
max_allowed_packet = 256M
thread_stack = 192K
thread_cache_size = 8
query_cache_limit = 1M
#disabling query_cache_size and type, for replication purpose, need to enable it when going live
query_cache_size = 0
#query_cache_size = 64M
#query_cache_type = 1
query_cache_type = OFF
#GroupBy
sql_mode=STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
#sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
enforce-gtid-consistency
gtid-mode = ON
log_slave_updates=0
slave_transaction_retries = 100
#replication related changes
server-id = 2
relay-log = /disk1/mysqllog/mysql-relay-bin.log
log_bin = /disk1/mysqllog/binlog/mysql-bin.log
binlog_do_db = brandmanagement
#replicate_wild_do_table=brandmanagement.%
replicate-wild-ignore-table=brandmanagement.t\_gnip\_data\_recent
replicate-wild-ignore-table=brandmanagement.t\_gnip\_data
replicate-wild-ignore-table=brandmanagement.t\_fb\_rt\_data
replicate-wild-ignore-table=brandmanagement.t\_keyword\_tweets
replicate-wild-ignore-table=brandmanagement.t\_gnip\_data\_old
replicate-wild-ignore-table=brandmanagement.t\_gnip\_data\_new
binlog_format=row
report-host=10.125.133.220
report-port=3306
#sync-master-info=1
read-only=1
net_read_timeout = 7200
net_write_timeout = 7200
innodb_flush_log_at_trx_commit = 2
sync_binlog=0
sync_relay_log_info=0
max_relay_log_size=268435456
Lots of possible solutions, but I'll go with the simplest one: have you got enough network bandwidth to send all changes over the network? You're using "row" binlog format, which may be good in the case of random, unindexed updates. But if you're changing a lot of data using indexes only, then "mixed" binlog format may be better.
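
A minimal sketch of the binlog format check and switch that suggestion implies, to be run on the master; this assumes MIXED is acceptable for your workload, and note that sessions opened before the change keep the old format:
-- Check the current binary log format on the master.
SHOW VARIABLES LIKE 'binlog_format';

-- Have new sessions log in MIXED format; add binlog_format=MIXED to my.cnf
-- so the change survives a restart.
SET GLOBAL binlog_format = 'MIXED';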

puppet augeas resource only supports available lenses

I am automating the rhnplugin config file in Puppet; below is my manifest:
augeas { 'config':
  lens    => 'Simplevars.lns',
  incl    => '/etc/yum/pluginconf.d/rhnplugin.conf',
  changes => 'set /etc/yum/pluginconf.d/rhnplugin.conf/test " " ',
}
I am getting the error below:
Warning: Augeas[config](provider=augeas): Loading failed for one or more files, see debug for /augeas//error output
I even tried with the Simplelines lens and did not get any output. I used Simplelines and Simplevars since I could not find a lens for rhnplugin.
I tried it in augtool and it worked:
augtool> set /files/etc/yum/pluginconf.d/rhnplugin.conf/test
augtool> save
Saved 1 file(s)
augtool> set /files/etc/yum/pluginconf.d/rhnplugin.conf/test/enabled 1
augtool> save
Saved 1 file(s)
augtool> print /files/etc/yum/pluginconf.d/rhnplugin.conf/test
/files/etc/yum/pluginconf.d/rhnplugin.conf/test
/files/etc/yum/pluginconf.d/rhnplugin.conf/test/enabled = "1"
My doubt is: can't we convert it into an augeas resource if the lenses are not available?
rhnplugin.conf is not in a simplevars (i.e. key=value) format. It is an inifile. My recommendation would be to use puppet labs' inifile module to modify it.
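
A minimal sketch of that recommendation, assuming the puppetlabs-inifile module is installed; the section and setting mirror the augtool example above:
# Manage [test] enabled=1 in rhnplugin.conf via the inifile module.
ini_setting { 'rhnplugin test enabled':
  ensure  => present,
  path    => '/etc/yum/pluginconf.d/rhnplugin.conf',
  section => 'test',
  setting => 'enabled',
  value   => '1',
}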

Force lshosts command to return megabytes for "maxmem" and "maxswp" parameters

When I type "lshosts" I am given:
HOST_NAME type model cpuf ncpus maxmem maxswp server RESOURCES
server1 X86_64 Intel_EM 60.0 12 191.9G 159.7G Yes ()
server2 X86_64 Intel_EM 60.0 12 191.9G 191.2G Yes ()
server3 X86_64 Intel_EM 60.0 12 191.9G 191.2G Yes ()
I am trying to return maxmem and maxswp as megabytes, not gigabytes, when lshosts is called. I am trying to send Xilinx ISE jobs to my LSF cluster; however, the software expects integer, megabyte values for maxmem and maxswp. From debugging, it appears that the software grabs these parameters using the lshosts command.
I have already checked in my lsf.conf file that:
LSF_UNIT_FOR_LIMITS=MB
I have tried searching the IBM Knowledge Base, but to no avail.
Do you use a specific command to specify maxmem and maxswp units within the lsf.conf, lsf.shared, or other config files?
Or does LSF force return the most practical unit?
Any way to override this?
LSF_UNIT_FOR_LIMITS should work, if you completely drained the cluster of all running, pending, and finished jobs. According to the docs, MB is the default, so I'm surprised.
That said, you can use something like this to transform the results:
$ cat to_mb.awk
function to_mb(s) {
    # Unit letter of the value: K=1, M=2, G=3 in the lookup string.
    e = index("KMG", substr(s, length(s)))
    # Numeric part (awk substrings start at 1, not 0).
    m = substr(s, 1, length(s) - 1)
    # Scale relative to megabytes using decimal factors of 1000.
    return m * 10^((e-2) * 3)
}
{ print $1 " " to_mb($6) " " to_mb($7) }
$ lshosts | tail -n +2 | awk -f to_mb.awk
server1 191900 159700
server2 191900 191200
server3 191900 191200
The to_mb function should also handle 'K' or 'M' units, should those pop up.
If LSF_UNIT_FOR_LIMITS is defined in lsf.conf, lshosts will always print the output as a floating point number, and in some versions of LSF the parameter is defined as 'KB' in lsf.conf upon installation.
Try searching for any definitions of the parameter in lsf.conf and commenting them all out so that the parameter is left undefined; I think in that case it defaults to printing an integer number of megabytes.
(Don't ask me why it works this way)
