rabbitmq - custom config file - disk_free_limit not set properly - linux

I've properly installed (RPM-based) a RabbitMQ cluster (with the clusterer plugin) on RHEL 7 and created the "custom" configuration files:
/etc/rabbitmq/rabbitmq-env.config => environment variables
/etc/rabbitmq/rabbitmq.config => RabbitMQ properties
The RabbitMQ cluster works fine except that my parameters are ignored. Any idea why?
Thanks in advance for your help
kr,
O.
NB: if I set the parameters myself with a command like:
rabbitmqctl set_disk_free_limit "1g"
for the disk limit, for example, it works, but I want them to survive a "reboot" :/
Here are my configuration files:
# /etc/rabbitmq/rabbitmq-env.config
(..)
NODE_PORT=5672
NODENAME=rabbit@node1
RABBITMQ_CONFIG_FILE=/etc/rabbitmq/rabbitmq.config
(..)
cat << EOF > /etc/rabbitmq/rabbitmq.config
[
{kernel, [
]},
{rabbit, [
{cluster_nodes, ["rabbit@node1", "rabbit@node2", "rabbit@node3"], disc}
{tcp_listeners, [5672]},
{disk_free_limit, "1GB"},
{collect_statistics_interval, 10000},
{heartbeat, 30},
{cluster_partition_handling, autoheal},
{default_user, <<"guest">>},
{default_pass, <<"guest">>}
]},
{rabbitmq_clusterer, [
{config, [ {version,1}, {nodes,["rabbit@node1", "rabbit@node2", "rabbit@node3"]} ]}
]}
]
EOF

A little update on this topic: I had misconfigured my RabbitMQ files. To get a working configuration, make the following modifications.
kr,
O.
For the environment file: get rid of the '.config' part in the RABBITMQ_CONFIG_FILE value, as RabbitMQ appends it anyway.
In my log file, I had an error with "... /etc/rabbitmq/rabbitmq.config.config ..."
So keep the file itself with the .config extension (/etc/rabbitmq/rabbitmq.config), but set the env variable without the .config:
(..)
RABBITMQ_CONFIG_FILE=/etc/rabbitmq/rabbitmq
(..)
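One way to confirm the variable is honored is to check the node's startup log, which reports the config file path it actually read. A quick check, assuming the default log location on RHEL (adjust the node name and path to yours):
grep -i "config file" /var/log/rabbitmq/rabbit@node1.log
# expect a line like: config file(s) : /etc/rabbitmq/rabbitmq.config
# a path ending in ".config.config" means the env variable still carries the extension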
For the rabbitmq.config file: since I use the clusterer plugin, the cluster_nodes line can be removed.
Your file will look like this one:
cat << EOF > /etc/rabbitmq/rabbitmq.config
[
{kernel, [
]},
{rabbit, [
{tcp_listeners, [5672]},
{disk_free_limit, "1GB"},
{collect_statistics_interval, 10000},
{heartbeat, 30},
{cluster_partition_handling, autoheal}
]},
{rabbitmq_management, [
{http_log_dir,"/myapps/myproject/rabbitmq/logs"},
{listener, [{port, 15672 }]}
]},
{rabbitmq_clusterer, [
{config, [ {version,1}, {nodes,["rabbit@node01", "rabbit@node02", "rabbit@node03"]} ]}
]}
].
EOF
To verify your current config for the clusterer plugin you can use:
rabbitmqctl eval 'rabbit_clusterer:status().'
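To also double-check that values such as disk_free_limit were picked up from the file rather than left at the built-in defaults, you can query the running node's application environment. A minimal check, assuming the node is up:
rabbitmqctl eval 'application:get_env(rabbit, disk_free_limit).'
# should print {ok,"1GB"} once the config file is actually being read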

Related

How to Diagnose Weatherreport from CouchDB

After building CouchDB from GitHub, I ran weatherreport as recommended in the documentation and got the following error. How do you diagnose exactly what's going wrong? This seems like a bunch of random numbers.
17:38:27 WARN: 'escriptize' command does not apply to directory /home/test/workspace/CouchDB-ant_rhel/couchdb
17:38:27 [ * ] Setup environment ... ok
17:38:27 [ * ] Ensure CouchDB is built ... ok
17:38:27 [ * ] Ensure Erlang boot script exists ... ok
17:38:27 [ * ] Prepare configuration files ... ok
17:38:27 [ * ] Start node node1 ... ok
17:38:28 [ * ] Check node at http://127.0.0.1:15984/ ... ok
17:38:28 [ * ] Running cluster setup ... ok
17:38:30 [ * ] Exec command bin/weatherreport --etc dev/lib/node1/etc --level error ...
['node1_diag35200@127.0.0.1'] [crit] Bad rpc call executing check weatherreport_check_memory_use:
{'EXIT',{badarg,[
  {erlang,list_to_float,[[101]],[{error_info,#{module => erl_erts_errors}}]},
  {weatherreport_util,binary_to_float,1,[{file,[115,114,99,47,119,101,97,116,104,101,114,114,101,112,111,114,116,95,117,116,105,108,46,101,114,108]},{line,80}]},
  {weatherreport_check_memory_use,check,1,[{file,[115,114,99,47,119,101,97,116,104,101,114,114,101,112,111,114,116,95,99,104,101,99,107,95,109,101,109,111,114,121,95,117,115,101,46,101,114,108]},{line,56}]},
  {weatherreport_check,check,2,[{file,[115,114,99,47,119,101,97,116,104,101,114,114,101,112,111,114,116,95,99,104,101,99,107,46,101,114,108]},{line,81}]},
  {weatherreport_runner,'-run/2-fun-0-',2,[{file,[115,114,99,47,119,101,97,116,104,101,114,114,101,112,111,114,116,95,114,117,110,110,101,114,46,101,114,108]},{line,54}]},
  {erlang,apply,2,[]}]}}
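Those runs of numbers are not random: Erlang prints strings as lists of character codes, so each [115,114,99,...] block is a file name from the stack trace, and the [[101]] argument means list_to_float was called with the one-character string "e". You can decode any of them with a one-liner, assuming erl is on your PATH:
erl -noshell -eval 'io:format("~s~n", [[115,114,99,47,119,101,97,116,104,101,114,114,101,112,111,114,116,95,117,116,105,108,46,101,114,108]]), halt().'
# prints: src/weatherreport_util.erl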

ESLint override rule by nested directory

I want to disable a rule for all files inside a nested directory. I found examples only for an exact path or by file extension, but that is not what I want.
We use it for a shared config and don't know where this directory will be. We have many of them.
I'm trying a config like this:
{
overrides: [
{
files: [
'**/test/**/*',
],
rules: {
"import/no-extraneous-dependencies": "off"
}
},
],
}
But globs like **/test/**/* and many others didn't work.
Can someone help me reach this goal?
The above code should work.
How were you testing this? If it's an extension like VSCode, you may need to refresh things to see the latest definitions loaded.
If you are using an eslint service like esprint, you will also need to restart it to grab the latest definitions.
Caching
Make sure that eslint is not configured to cache results, to avoid having to cache-bust when debugging things (see the eslint docs).
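A minimal way to rule caching out while debugging, assuming the default cache file location (.eslintcache in the project root):
# remove any stale cache, then lint without the --cache flag
rm -f .eslintcache
npx eslint .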
Here's an example for a react-native app with multiple overrides
module.exports = {
...baseConfig,
overrides: [
typescriptOverrides,
e2eOverrides,
themeOverrides,
{
files: ['**/*.style.js'],
rules: {
'sort-keys': [
'error',
'asc',
{
caseSensitive: true,
natural: true,
},
],
},
},
{
files: ['**/*.test.js'],
rules: {
'arrow-body-style': 'off',
},
},
],
};
Debugging the glob matcher
Run eslint in debug mode to see all the files being run, for example: DEBUG=eslint:cli-engine npx eslint src/**/*.test.js
You can also test your glob patterns by running an ls command: ls ./src/**/*.test.js will either return all the files or 'no matches found', as shown below.
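Note that in bash the recursive ** glob only recurses when globstar is enabled (zsh enables this by default), so 'no matches found' may just be your shell:
# enable recursive globbing in bash before testing the pattern
shopt -s globstar
ls ./src/**/*.test.js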

Jest tests not found

I have the following output in a gitlab job:
yarn run v1.15.2
$ jest --verbose
No tests found
In /path/to/my/project/
47 files checked.
testMatch: - 47 matches
testPathIgnorePatterns: /node_modules/,/build,/lib/ - 0 matches
testRegex: (/__tests__/.*|\.(test|spec))\.(tsx?|jsx?)$ - 1 match
Pattern: - 0 matches
Tests are not being executed; what am I doing wrong here? I've been using the same gitlab-ci.yml config in other projects.
Any help would be appreciated!
Yes, the mistake was in package.json: I was missing <rootDir> in the testPathIgnorePatterns and modulePathIgnorePatterns paths under the jest options.
"testPathIgnorePatterns": [
"<rootDir>/node_modules/",
"<rootDir>/build",
"<rootDir>/lib/"
],
"modulePathIgnorePatterns": [
"<rootDir>/dist/",
"<rootDir>/build/"
]
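To confirm the patterns now resolve the way you expect, you can ask Jest to print the test files it would pick up without executing them:
# lists every file Jest resolves as a test after applying the ignore patterns
npx jest --listTests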
The mistake is in your path. First open your cmd and navigate to the directory where your package.json resides, then make sure whatever path you have provided in package.json is actually reachable from there.
You can also try to hard-code the path. Once you are able to run it, then go for the regex.
package.json
"name": "test",
"jest": {
"transform": {},
"verbose": true,
"bail": true,
"testMatch": ["path"]
},
For more details: testPathIgnorePatterns, modulePathIgnorePatterns
"testPathIgnorePatterns": [
"<rootDir>/build"
],
"modulePathIgnorePatterns": [
"<rootDir>/build/"
]

How do you automatically download external (C++) libraries when using native Node addons?

I'd like to include libpng in my native Node addon. How can I include it, so that when my library is installed, it will automatically download a specified version of libpng? Is it possible to use npm's package.json for this? If this is not possible, what is the accepted way of including an external library's source code in your repository?
I recommend that you create a gyp file to build the dependency library and add a script to your package.json to download it for you.
My own native addon module, node-dvbtee, demonstrates this approach.
You will notice the following inside package.json:
"scripts": {
"preinstall": "npm install mkdirp && scripts/prepare-build.sh && node scripts/configure-build.js",
"install": "node-gyp rebuild -j 8",
"test": "mocha"
},
What matters here is the preinstall section of the scripts section. It calls scripts/prepare-build.sh, which contains the following:
#!/bin/sh
cd "$(dirname "$0")"/..
if [ -e libdvbtee ]; then
echo libdvbtee sources present
else
git clone git://github.com/mkrufky/libdvbtee.git
fi
cd libdvbtee
if [ -e libdvbpsi/bootstrap ]; then
echo libdvbpsi sources present
else
rm -rf libdvbpsi
git clone git://github.com/mkrufky/libdvbpsi.git
cd libdvbpsi
touch .dont_del
cd ..
fi
As you can see, the above script checks whether the libdvbtee directory is present and, if not, clones it from GitHub. After that, it checks whether the full libdvbpsi sources are present and, if not, clones them from GitHub.
Now, for the gyp files:
My project has the gyp files stored in the deps directory.
libdvbpsi.gyp looks like this:
{
'target_defaults': {
'default_configuration': 'Debug',
'configurations': {
'Debug': {
'defines': [ 'DEBUG', '_DEBUG' ],
'msvs_settings': {
'VCCLCompilerTool': {
'RuntimeLibrary': 1, # static debug
},
},
},
'Release': {
'defines': [ 'NDEBUG' ],
'msvs_settings': {
'VCCLCompilerTool': {
'RuntimeLibrary': 0, # static release
},
},
}
},
'msvs_settings': {
'VCLinkerTool': {
'GenerateDebugInformation': 'true',
},
},
'include_dirs': [
'../libdvbtee/libdvbpsi/src',
'../libdvbtee/libdvbpsi/src/tables',
'../libdvbtee/libdvbpsi/src/descriptors',
'../libdvbtee/libdvbpsi'
],
'defines': [
'PIC',
'HAVE_CONFIG_H'
],
},
'targets': [
# libdvbpsi
{
'target_name': 'dvbpsi',
'product_prefix': 'lib',
'type': 'static_library',
'sources': [
'../libdvbtee/libdvbpsi/src/dvbpsi.c',
'../libdvbtee/libdvbpsi/src/psi.c',
'../libdvbtee/libdvbpsi/src/demux.c',
'../libdvbtee/libdvbpsi/src/descriptor.c',
'../libdvbtee/libdvbpsi/src/tables/pat.c',
'../libdvbtee/libdvbpsi/src/tables/pmt.c',
'../libdvbtee/libdvbpsi/src/tables/sdt.c',
'../libdvbtee/libdvbpsi/src/tables/eit.c',
# '../libdvbtee/libdvbpsi/src/tables/cat.c',
'../libdvbtee/libdvbpsi/src/tables/nit.c',
'../libdvbtee/libdvbpsi/src/tables/tot.c',
# '../libdvbtee/libdvbpsi/src/tables/sis.c',
# '../libdvbtee/libdvbpsi/src/tables/bat.c',
# '../libdvbtee/libdvbpsi/src/tables/rst.c',
'../libdvbtee/libdvbpsi/src/tables/atsc_vct.c',
'../libdvbtee/libdvbpsi/src/tables/atsc_stt.c',
'../libdvbtee/libdvbpsi/src/tables/atsc_eit.c',
'../libdvbtee/libdvbpsi/src/tables/atsc_ett.c',
'../libdvbtee/libdvbpsi/src/tables/atsc_mgt.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_02.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_03.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_04.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_05.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_06.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_07.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_08.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_09.c',
'../libdvbtee/libdvbpsi/src/descriptors/dr_0a.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_0b.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_0c.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_0d.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_0e.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_0f.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_10.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_11.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_12.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_13.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_14.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_1b.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_1c.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_24.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_40.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_41.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_42.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_43.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_44.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_45.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_47.c',
'../libdvbtee/libdvbpsi/src/descriptors/dr_48.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_49.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_4a.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_4b.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_4c.c',
'../libdvbtee/libdvbpsi/src/descriptors/dr_4d.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_4e.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_4f.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_50.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_52.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_53.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_54.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_55.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_56.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_58.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_59.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_5a.c',
'../libdvbtee/libdvbpsi/src/descriptors/dr_62.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_66.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_69.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_73.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_76.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_7c.c',
'../libdvbtee/libdvbpsi/src/descriptors/dr_81.c',
'../libdvbtee/libdvbpsi/src/descriptors/dr_83.c',
'../libdvbtee/libdvbpsi/src/descriptors/dr_86.c',
# '../libdvbtee/libdvbpsi/src/descriptors/dr_8a.c',
'../libdvbtee/libdvbpsi/src/descriptors/dr_a0.c',
'../libdvbtee/libdvbpsi/src/descriptors/dr_a1.c',
],
'conditions': [
['OS=="mac"',
{
'xcode_settings': {
'WARNING_CFLAGS': [
'-Wno-deprecated-declarations'
]
}
}
]
],
'cflags!': ['-Wdeprecated-declarations','-Wimplicit-function-declaration'],
'cflags+': ['-Wno-deprecated-declarations','-Wno-implicit-function-declaration','-std=c99'],
},
]
}
Of course, there are many details in this gyp file that are specific to libdvbpsi and my use case. You will notice that quite a few of the source files in the library are not actually needed for the version we're going to build for my node.js addon module. The source files that we are not going to build are commented out by preceding each line with a hash # character.
We link this library to the node module we're currently building by including it in the dependencies section of the node.js addon module's bindings.gyp. Here is the one used in my addon module:
{
"targets": [
{
"target_name": "dvbtee",
"sources": [
"src/node-dvbtee.cc",
"src/dvbtee-parser.cc"
],
"dependencies": [
'deps/libdvbtee.gyp:dvbtee_parser'
],
"include_dirs": [
"libdvbtee/usr/include",
"libdvbtee/libdvbtee",
"libdvbtee/libdvbtee/decode",
"libdvbtee/libdvbtee/decode/table",
"libdvbtee/libdvbtee/decode/descriptor",
"<!(node -e \"require('nan')\")"
],
'cflags': [ '-DDEBUG_CONSOLE=1' ],
'cflags_cc': [ '-DDEBUG_CONSOLE=1', '-Wno-deprecated-declarations' ],
'cflags!': [ '-fno-exceptions' ],
'cflags_cc!': [ '-fno-exceptions', '-Wdeprecated-declarations' ],
'conditions': [
['OS=="mac"', {
'xcode_settings': {
'WARNING_CFLAGS': [
'-Wno-deprecated-declarations'
],
'GCC_ENABLE_CPP_EXCEPTIONS': 'YES'
}
}]
]
}
]
}
As you can see, deps/libdvbtee.gyp:dvbtee_parser is listed in the dependencies section, above. deps/libdvbtee.gyp:dvbtee_parser itself contains its own dependencies section:
"dependencies": [
'libdvbpsi.gyp:dvbpsi'
],
So, when npm install is executed, npm will run the preinstall script to fetch the sources; then it will build the custom libdvbpsi library based on libdvbpsi.gyp, then build the custom libdvbtee based on libdvbtee.gyp, which depends on that custom libdvbpsi library; and finally it will build and link the node.js addon module that depends on the libdvbtee library build.
In my specific case, the libraries need to be configured before we attempt to build them. This step is required to write the config.h header file that these libraries depend on. I handle that step within the scripts/configure-build.js script, which is run after downloading the sources. In most cases, you will want to simply run ./configure for each library, but that depends on the libraries that you're including.
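Condensed, the lifecycle for this setup looks roughly like this (standard npm script ordering; the comments just summarize the steps described above):
npm install
# -> preinstall: scripts/prepare-build.sh clones libdvbtee and libdvbpsi,
#    then scripts/configure-build.js writes the config.h each library needs
# -> install: node-gyp rebuild builds libdvbpsi.gyp:dvbpsi, then
#    libdvbtee.gyp:dvbtee_parser, and finally links the dvbtee target in bindings.gyp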
This is a cross-platform solution, provided that the libraries you're building are themselves cross-platform.
You can add it in the scripts section of package.json, but you have to be careful about which devices your application will be executed on, such as ARM, 32-bit Intel, or 64-bit Intel. You have options; I am just adding some hints here, and you can put your code in accordingly. The script will get executed during the npm install command.
1. In the script, you have to check the machine type and download the library accordingly.
//package.json
{
"scripts": {
"preinstall": ""
,"install": ""
,"test" : ""
}
}
In the script, check the machine type and download the library accordingly, for example by fetching the right .so with wget in the install section. You have to do some scripting to pick the right lib for the machine and put it in the right place; a sketch follows.
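Here is a minimal sketch of such a script, assuming hypothetical library file names (abc-*.so) and a placeholder download URL; adapt both to your actual library:
#!/bin/sh
# hypothetical preinstall sketch: pick a prebuilt binary per machine type
ARCH="$(uname -m)"
case "$ARCH" in
  x86_64)         LIB=abc-x64.so  ;;
  i386|i686)      LIB=abc-ia32.so ;;
  armv7l|aarch64) LIB=abc-arm.so  ;;
  *) echo "unsupported arch: $ARCH" >&2; exit 1 ;;
esac
mkdir -p deps
wget -O deps/libabc.so "https://example.com/prebuilt/$LIB"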
2. Alternatively, you can add a build script that downloads the source code and builds it on the system on the fly:
git clone git://xyz/abc.git
cd abc
./configure
make
make install
You can also look at the Babel CLI for source compilation: https://babeljs.io/docs/usage/cli/
All of these go within the scripts section, under preinstall, install, or test.
In your case, you would probably prefer the first approach.

AWS Cloudformation cloud-init on Amazon Linux AMI fails in util.py

I'm trying to have a userdata script in the NAT AMI provided by Amazon. The userdata script never starts execution, and when I looked at the logs, I saw that there's a failure at
Mar 10 21:34:13 cloud-init[2290]: util.py[DEBUG]: Failed running /var/lib/cloud/instance/scripts/part-001 [1]
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/cloudinit/util.py", line 638, in runparts
subp([exe_path], capture=False, shell=True)
File "/usr/lib/python2.6/site-packages/cloudinit/util.py", line 1529, in subp
cmd=args)
ProcessExecutionError: Unexpected error while running command.
Command: ['/var/lib/cloud/instance/scripts/part-001']
Exit code: 1
Reason: -
Stdout: ''
Stderr: ''
I found in this related Stack Overflow question that one user was able to get around it by having #!/bin/sh instead of #!/bin/sh -e -v in the UserData portion of their template, but the issue does not seem to have a clear solution.
I have tried using just #!/bin/bash, #!/bin/bash -xe and completely removing this line altogether. I still continue to hit that error.
Has anyone encountered this issue with the Amazon provided NAT AMI before and if so, how do I get around this issue?
My UserData looks like this:
"UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
"#!/bin/bash\n",
"sudo yum update -y aws-cfn-bootstrap\n",
"# Install the files and packages and run the commands from the metadata\n",
"sudo /opt/aws/bin/cfn-init -v --access-key ", { "Ref" : "IAMUserAccessKey" }, " --secret-key ", { "Ref" : "SecretAccessKey" },
" --stack ", { "Ref" : "AWS::StackName" },
" --resource NAT2 ",
" --configsets config ",
" --region ", { "Ref" : "AWS::Region" }, "\n"
]]}}
I was able to get around this issue. As Max had suggested, I removed sudo from the cfn-init command, but that alone did not have the desired effect.
Then I realized that the EC2_HOME environment variable would not be set up under sudo. I am doing a bunch of stuff in my configset which uses the AWS CLI, and for these to work, EC2_HOME needs to be set. So I went in and removed sudo everywhere in my configset and UserData. Now my UserData looks like:
"UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
"#!/bin/bash -xe\n",
"yum update -y aws-cfn-bootstrap\n",
"# Install the files and packages and run the commands from the metadata\n",
"/opt/aws/bin/cfn-init -v --access-key ", { "Ref" : "IAMUserAccessKey" }, " --secret-key ", { "Ref" : "SecretAccessKey" },
" --stack ", { "Ref" : "AWS::StackName" },
" --resource NAT2 ",
" --configsets config ",
" --region ", { "Ref" : "AWS::Region" }, "\n"
]]}}
For me, I was trying to assign the Apache user to a file, but we aren't using Apache on this setup, so the user didn't exist. This threw a CFN error. It was hard to track down, but for anyone with this issue, I suggest you view the logs here:
/var/log/cfn-init.log
and look for a traceback with an error message to help you debug, as in the sketch below.
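A quick way to find it, assuming the standard Amazon Linux log locations:
# show the first tracebacks with surrounding context from the cfn-init and cloud-init logs
sudo grep -B 2 -A 10 Traceback /var/log/cfn-init.log /var/log/cloud-init-output.log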
