Is it possible to have Cassandra load an additional config file with other properties, along with the default one (/etc/cassandra/cassandra.yaml)?
Thanks
No, you can change it to use a different file (by setting -Dcassandra.config=file://your/path), but only a single file. If you really want to, though, you can write a custom config loader (implementing ConfigurationLoader, like the default YamlConfigurationLoader does) and set it with -Dcassandra.config.loader=your.Loader.
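For example, a minimal sketch of wiring the first option up through cassandra-env.sh (the yaml file name and path below are placeholders; adjust to your install):

# in cassandra-env.sh: point Cassandra at an alternative yaml via a file:// URL
JVM_OPTS="$JVM_OPTS -Dcassandra.config=file:///etc/cassandra/my-cassandra.yaml"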
I have read the online tutorial about Optional Configuration:
https://help.sap.com/viewer/b490bb4e85bc42a7aa09d513d0bcb18e/2011/en-US/8beb75da86691014a0229cf991cb67e4.html
I know that after I set HYBRIS_OPT_CONFIG_DIR and put [2-digit number]-local.properties files into this directory, Hybris will load these files automatically.
What I want to do is set up several Hybris nodes as a cluster, where every node needs a different configuration because each node's role is different: one is a frontend, one is a backend, and one is a data processor.
However, for certain reasons I have to set the same HYBRIS_OPT_CONFIG_DIR value for every node, so I want to know if there is any way to let Hybris load only some of the properties files in this directory.
For example, if I put 10-local.properties and 20-local.properties there, one node should load only 10-local.properties while another node loads both of them.
I know that I can implement a configuration switch by setting a different value of HYBRIS_OPT_CONFIG_DIR per node, but I hope I can find another way.
Thanks
I know the question is a bit old but maybe it will help someone in the future.
I see 3 potential solutions here (a short shell sketch of all three follows the list):
Keep a common config/local.properties with node-type-agnostic properties and store additional node-type-specific properties files in separate directories. Then set HYBRIS_OPT_CONFIG_DIR to a different directory per node to let Hybris load the additional props. E.g.
/config/local.properties <- common props
/config/frontend/10-local.properties <- additional frontend props
/config/backend/10-local.properties <- additional backend props
Now setting the HYBRIS_OPT_CONFIG_DIR=/config/frontend environment variable will make Hybris load /config/local.properties AND /config/frontend/10-local.properties.
Use the HYBRIS_RUNTIME_PROPERTIES environment variable to point to an additional properties file. E.g. you could have
/config/local.properties <- common props
/config/frontend.properties <- additional frontend props
/config/backend.properties <- additional backend props
Now setting the HYBRIS_RUNTIME_PROPERTIES=/config/frontend.properties environment variable will make Hybris load /config/local.properties AND /config/frontend.properties.
Use the useconfig system property to make Hybris load a different properties file. In this case you need to set up the props like this:
/config/localfrontend.properties <- frontend props
/config/localbackend.properties <- backend props
Setting the useconfig system property (e.g. by using -Duseconfig=frontend) will make Hybris load ONLY the /config/localfrontend.properties properties file.
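For illustration, a rough per-node sketch of the three options (a sketch only; the paths are the example ones above, and how you pass the values depends on how you actually start your nodes):

# Option 1: role-specific optional config dir
export HYBRIS_OPT_CONFIG_DIR=/config/frontend

# Option 2: one extra role-specific properties file on top of local.properties
export HYBRIS_RUNTIME_PROPERTIES=/config/frontend.properties

# Option 3: use localfrontend.properties instead of local.properties,
# by passing -Duseconfig=frontend as a JVM system property when starting the node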
In my system, I use Logstash, Filebeat and Elasticsearch.
Filebeat reads the logs, the required fields in the logs are filtered with Logstash, and the result is saved in Elasticsearch.
I have a customer requirement to switch saving some fields in the log on and off through a single config change made by the customer.
My planned approach was to keep the switch as an environment variable in "/etc/default/logstash" and let the customer change the value with a file operation.
But I have found out that the Logstash config is not reloaded when that file changes, even if "config.reload.automatic: true" is set, so I cannot continue with my planned approach.
Letting the customer edit the Logstash ".conf" files directly is not a good approach either, because the config is quite complex.
Please advise on this issue.
Thanks,
I have found that it is not possible to reload the value of an environment variable without restarting Logstash, so I have used a file-read solution instead. The config block is below.
ruby {
    # read the switch value from the first line of the file on every event
    code => "event.set('variable1', IO.readlines('/etc/logstash/input.txt')[0])"
}
This has fixed my problem, but I would like to know whether there is a performance impact in executing a file operation on each event.
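There is some: IO.readlines opens, reads and closes the file for every single event, so on a high-throughput pipeline that adds a syscall-heavy hot path. A possible mitigation (a sketch only, not tested against your pipeline; the 30-second interval and the variable names are arbitrary choices) is to cache the value inside the filter and only re-read the file periodically:

ruby {
    # keep the last value and the time it was read; instance variables persist across events
    init => "@cached_value = nil; @last_read = Time.at(0)"
    code => "
        if Time.now - @last_read > 30
            @cached_value = IO.readlines('/etc/logstash/input.txt')[0].to_s.strip
            @last_read = Time.now
        end
        event.set('variable1', @cached_value)
    "
}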
I am developing an application where I read a file from Hadoop, process it, and store the data back to Hadoop.
I am confused about what the proper HDFS file path format should be. When reading an HDFS file from the Spark shell like
val file = sc.textFile("hdfs:///datastore/events.txt")
it works fine and I am able to read it.
But when I submit a jar containing the same code to YARN, it gives the error:
org.apache.hadoop.HadoopIllegalArgumentException: Uri without authority: hdfs:/datastore/events.txt
When I add the name node address, as in hdfs://namenodeserver/datastore/events.txt, everything works.
I am a bit confused about this behaviour and would appreciate some guidance.
Note: I am using an AWS EMR setup and all the configurations are default.
If you want to use sc.textFile("hdfs://..."), you need to give the full path (absolute path); in your example that would be "nn1home:8020/.."
If you want to make it simple, then just use sc.textFile("hdfs:/input/war-and-peace.txt"). Note that's only one /. I think it will work.
Problem solved. As I debugged further, the fs.defaultFS property from core-site.xml was not used when I just passed the path as hdfs:///path/to/file, even though all the Hadoop config properties were loaded (as I saw when logging the sparkContext.hadoopConfiguration object).
As a workaround, I manually read the property with sparkContext.hadoopConfiguration().get("fs.defaultFS") and prepended it to the path.
I don't know whether this is the correct way of doing it.
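For reference, the workaround in Scala could look like this (a sketch; the path is the example one from the question):

// resolve the configured default filesystem, e.g. hdfs://namenodeserver:8020
val defaultFs = sc.hadoopConfiguration.get("fs.defaultFS")
// prepend it so the URI always carries an authority
val file = sc.textFile(s"$defaultFs/datastore/events.txt")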
Is there a way to find out the default values of the parameters which can be set in the /etc/systemd/system.conf file?
The manual page of systemd-system.conf just says:
When run as system instance systemd reads the configuration file
system.conf, otherwise user.conf. These configuration files contain a
few settings controlling basic manager operations.
The variables are all commented out (in both user.conf and system.conf), and the file /etc/security/limits.conf is ignored by systemd.
So, what are the default values? Are they all set to unlimited?
The answer is:
systemctl show
:)
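You can also ask for a single value, e.g. to check the default open-file limit (the printed value below is only an example; actual defaults vary by distribution and systemd version):

systemctl show --property=DefaultLimitNOFILE
# prints e.g. DefaultLimitNOFILE=1048576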
With MSTest.exe, you can specify a total timeout for a test run by setting the /TestSettings/Execution/Timeouts/@runTimeout attribute in a .testsettings file.
With VSTest.Console.exe, the .testsettings has been deprecated in favor of .runsettings, which apparently has a completely different schema (with, ahem, sparse documentation). I know that I can configure the .runsettings file to use legacy MSTest mode (thereby allowing me to use a .testsettings file), but I would prefer to avoid that if possible.
Is there a way to set a run timeout in the .runsettings file? Or is there a different way to get the same effect?
Yes, there is. Please see the RFC here: 0011-Test-Session-Timeout.md
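Based on that RFC, a minimal .runsettings sketch would look like this (the timeout value is an example; it is given in milliseconds):

<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
  <RunConfiguration>
    <!-- abort the whole test session after 10 minutes -->
    <TestSessionTimeout>600000</TestSessionTimeout>
  </RunConfiguration>
</RunSettings>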