We have a huge Hadoop cluster with one Presto coordinator node
and 850 Presto worker nodes. Now we want to change the values in the file config.properties, but this needs to be done on all the workers!
So under
/opt/DBtasks/presto/presto-server-0.216/etc
the file looks like this:
[root@worker01 etc]# more config.properties
#
coordinator=false
http-server.http.port=8008
query.max-memory=50GB
query.max-memory-per-node=1GB
query.max-total-memory-per-node=2GB
discovery.uri=http://master01.sys76.com:8008
and we want to change it to
coordinator=false
http-server.http.port=8008
query.max-memory=500GB
query.max-memory-per-node=5GB
query.max-total-memory-per-node=20GB
discovery.uri=http://master01.sys76.com:8008
But this was done only on the first node, worker01; we still need to do it on all the other workers. We could copy this file by scp to all the other workers, but not when root is restricted. What I want to know is whether Presto already offers a more elegant approach that syncs the configuration across all worker nodes. As we all know, after setting new values we also need to restart the Presto launcher script.
Does Presto have a solution for this?
I should mention that root is restricted on my cluster, so we can't copy the files via SSH.
Presto does not have the ability to sync the configurations. This is something you would need to manage outside of Presto, e.g. using a tool like Ansible. There is also a command-line tool, presto-admin (https://github.com/prestosql/presto-admin), that can assist with deploying the configs across the cluster.
Additionally, if you are using a public cloud such as AWS, there are commercial solutions from Starburst (https://www.starburstdata.com/) that can assist with managing the configurations as well.
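For illustration, if a non-root user with sudo rights exists on the workers, an out-of-band sync can be scripted by hand. This is a minimal sketch, not a Presto feature: the "deploy" user, the workers.txt host list, and the restart step are all assumptions.

#!/usr/bin/env bash
# Hypothetical sketch: push the new config.properties to every worker,
# then restart the Presto launcher. Assumes a "deploy" user with
# passwordless sudo and a workers.txt file with one hostname per line.
set -euo pipefail

ETC_DIR=/opt/DBtasks/presto/presto-server-0.216/etc

while read -r host; do
  scp config.properties "deploy@${host}:/tmp/config.properties"
  ssh "deploy@${host}" \
    "sudo mv /tmp/config.properties ${ETC_DIR}/config.properties && \
     sudo /opt/DBtasks/presto/presto-server-0.216/bin/launcher restart"
done < workers.txt

Tools like Ansible or presto-admin do the same job with inventory management, parallelism, and error handling built in.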
Related
Anybody supporting a Cassandra application in production? Curious to know how you handle the cassandra.yaml file. Also, do you think a "seed node" gets the status of a master node (partially)?
Anybody supporting a Cassandra application in production?
Yes, my team supports several applications which use Cassandra in production.
Curious to know how you handle the cassandra.yaml file.
By "handle" the cassandra.yaml file, I assume you mean deploying it with different values via automation at large scale. We use an open-source tool called Rundeck for that.
Rundeck allows you to build options into your jobs, which is useful for properties like cluster_name, seeds, etc. Then you inject those options into your deploy scripts, using a regex replace (sed) to set them on specific properties in the yaml. Example:
sed -i "s/cluster_name: 'Test Cluster'/cluster_name: '#cluster_name#'/" cassandra.yaml
Also, do you think a "seed node" gets the status of a master node (partially)?
No, a seed node is not any kind of "master" node.
A seed node is no different from any other node.
In theory, every node in your cluster could be a seed node for another node. It is simply a way for a new node to discover the network topology of the cluster. Think of it as an entry point to the cluster.
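For context, seeds are just a list of addresses in cassandra.yaml. A hedged excerpt with example addresses:

seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "10.0.0.11,10.0.0.12"

A brand-new node contacts those addresses to learn the ring topology; once it has joined, it is indistinguishable from any other node.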
Well, I have two Docker containers, one for the Presto coordinator and one for a Presto worker. It works fine, but I need to specify the same catalogs.properties files in both the coordinator and the worker.
My Presto worker shouldn't need to know about my catalogs.properties files, since it could fetch the details from the master itself.
But if I don't specify them, my Presto launcher fails to start.
Is there any way to avoid duplicating the catalogs.properties files on both master and slave?
No, currently you need to configure your catalogs on each machine.
Note: in a typical production setup there will be some automation doing this for you, so the manual work is not multiplied.
You may look at this; it includes Presto configuration that automatically makes sure all workers have the same catalog files. Note, this is just one of its features:
https://github.com/niths4u/prestodb-cluster/
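In a Docker setup specifically, one common workaround is to keep a single catalog directory on the host and mount it read-only into both containers, so the files live in one place but are visible to coordinator and worker alike. A sketch, where the image name and the etc/catalog path inside the container are illustrative and depend on your image:

mkdir -p ./catalog   # put your catalog .properties files here, e.g. hive.properties

docker run -d --name presto-coordinator \
  -v "$PWD/catalog:/opt/presto-server/etc/catalog:ro" \
  prestodb/presto

docker run -d --name presto-worker \
  -v "$PWD/catalog:/opt/presto-server/etc/catalog:ro" \
  prestodb/presto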
I am trying to deploy Cassandra on a Linux-based HPC cluster and I need some guidelines if possible. Specifically, what is the difference between running Cassandra locally and in a cluster?
When managing it locally (in which case it runs smoothly), we duplicate the original files for every node inside our Cassandra directory and apply the appropriate changes for IP address, rpc, JMX, etc. However, when managing a network, which files do we need to install on each node? The whole package with all the files, or just some of the required ones,
like bin/cassandra.in.sh, conf/cassandra.yaml, bin/cassandra?
I am a little confused about what to store on each node so I can start working with the cluster.
You need to install Cassandra on each node (VM), i.e. the whole package, and then update the config files as necessary. As described here, to configure a cluster in a single data center you need to (see the sketch after this list):
Install Cassandra on each node
Configure cluster name
Configure seeds
Configure snitch, if needed
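As an illustrative shell sketch of those per-node edits (the cluster name, addresses, and snitch are example values, and the sed patterns assume the stock cassandra.yaml defaults):

NODE_IP=10.0.0.21                  # this node's own address
SEEDS="10.0.0.11,10.0.0.12"        # identical seed list on every node

sed -i "s/^cluster_name:.*/cluster_name: 'HPC Cluster'/" conf/cassandra.yaml
sed -i "s/- seeds: .*/- seeds: \"${SEEDS}\"/" conf/cassandra.yaml
sed -i "s/^listen_address:.*/listen_address: ${NODE_IP}/" conf/cassandra.yaml
sed -i "s/^rpc_address:.*/rpc_address: ${NODE_IP}/" conf/cassandra.yaml
sed -i "s/^endpoint_snitch:.*/endpoint_snitch: GossipingPropertyFileSnitch/" conf/cassandra.yaml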
I have a Spark app which uses many workers. I'd like to be able to pass simple configuration information to them easily (without having to recompile), e.g. USE_ALGO_A. If this were a local app, I'd just set the info in environment variables and read it. I've tried doing something similar using spark-env.sh, but the variables don't seem to propagate properly.
How can I do simple runtime configuration of my code in the workers?
(PS I'm running a spark-ec2 type cluster)
You need to take care of configuring each worker.
From the Spark docs:
You can edit /root/spark/conf/spark-env.sh on each machine to set Spark configuration options, such as JVM options. This file needs to be copied to every machine to reflect the change.
If you use an Amazon EC2 cluster, there is a script that rsyncs a directory between the master and all workers.
The easiest way to do this is to use a script we provide called copy-dir. First edit your spark-env.sh file on the master, then run ~/spark-ec2/copy-dir /root/spark/conf to RSYNC it to all the workers.
see https://spark.apache.org/docs/latest/ec2-scripts.html
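Putting the pieces together, a hedged end-to-end sketch on a spark-ec2 cluster (USE_ALGO_A is the variable from the question; whether a plain environment variable reaches your executor code depends on the deploy mode, so the spark.executorEnv.* form is shown as a more direct alternative):

# On the master: set the flag, then sync the conf directory to all workers.
echo 'export USE_ALGO_A=1' >> /root/spark/conf/spark-env.sh
~/spark-ec2/copy-dir /root/spark/conf

# Or pass it per job, without editing spark-env.sh at all:
spark-submit --conf spark.executorEnv.USE_ALGO_A=1 my_app.py

Inside worker-side code, read it with os.environ.get("USE_ALGO_A") in PySpark or sys.env.get("USE_ALGO_A") in Scala.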
I have 4 Windows machines, and I have installed Hadoop on 3 of the 4.
One machine has the Hortonworks Sandbox (say, the 4th machine). Now I need to make the 4th machine my server (NameNode)
and the rest of the machines slaves.
Will it work if I update the configuration files on the other 3 machines?
Or is there any other way to do this?
Any other suggestions ?
Thanks
Finally I found this, but I could not get any help from it:
Hadoop cluster configuration with Ubuntu Master and Windows slave
A non-secure cluster will work (non-secure in the sense that you do not enable Hadoop's Kerberos-based auth and security, i.e. hadoop.security.authentication is left as simple). You need to update every node's config to point to the new 4th node as the master for the various services you plan to host on it. You mention the namenode, but I assume you really mean to make the 4th node the 'head' node, meaning it will probably also run the resourcemanager and historyserver (or the jobtracker for old-style Hadoop). And that is only the core, without considering higher-level components like Hive, Pig, and Oozie, and not even mentioning Ambari or Hue.
Doing a post-install configuration of existing Windows (or Linux, it makes no difference) nodes is possible, editing the various xx-site.xml files. You'll have to know what to change, and it is not trivial. It would probably be much easier to just redeploy the Windows machines with an appropriate clusterproperties.txt config file. See Option III - Manual Install One Node At A Time.
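For a sense of what those edits involve, here is a hedged excerpt of core-site.xml pointing HDFS at the new head node; the hostname and port are examples, and yarn-site.xml and mapred-site.xml need analogous updates for the resourcemanager and history server:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://machine4.example.com:8020</value>
  </property>
</configuration>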