My leaves are currently running on EC2 machines with 30 GB of RAM. Can I upgrade the same machines to 60 GB of RAM and ensure that MemSQL leaf memory increases accordingly?
Yes, you certainly can.
If you are adding more memory to the same machines, you just need to do the following (an example sequence is sketched after these steps):
Stop MemSQL: memsql-ops memsql-stop
Provision the new RAM on the machine
Start MemSQL: memsql-ops memsql-start
Configure the new memory limit: memsql-ops memsql-update-config --set-global --key maximum_memory --value <value_in_mb> (see https://help.memsql.com/hc/en-us/articles/115002247706-How-do-I-change-MemSQL-s-memory-limits-after-changing-system-memory-capacity-)
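For example, after resizing to a 60 GB instance the sequence might look like the sketch below; the 55000 MB value is purely illustrative, since the right limit depends on how much headroom you leave for the OS (see the linked article).
# Illustrative values only; choose maximum_memory per the linked article.
memsql-ops memsql-stop
# ...resize the instance to 60 GB RAM and boot it back up...
memsql-ops memsql-start
memsql-ops memsql-update-config --set-global --key maximum_memory --value 55000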
If you are switching to new machines instead of provisioning more memory on the same machines, then you can (see the sketch after this list):
Deploy the new machines, install MemSQL on them, and add them to your cluster: https://docs.memsql.com/quickstarts/v5.8/quick-start-on-premises/#5-add-more-host-machines-and-memsql-nodes
Run memsql-ops cluster-manual-control --enable
Run REMOVE LEAF 'host':port for all the old machines that you now want to remove. This will move the data to the new nodes.
Run memsql-ops memsql-delete on each of the old leaf nodes that you just ran REMOVE LEAF on. This will delete the nodes which are now empty of data after the last step.
Run memsql-ops cluster-manual-control --disable
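A sketch of that second path, with a hypothetical old leaf old-leaf-1:3306; the exact identifier memsql-delete expects is whatever Ops reports for that node, so treat the placeholder below as an assumption.
memsql-ops cluster-manual-control --enable
# In a SQL client connected to the master aggregator, repeat per old leaf:
#   REMOVE LEAF 'old-leaf-1':3306;
memsql-ops memsql-delete MEMSQL_ID_OF_OLD_LEAF_1   # ID as reported by memsql-ops for that node
memsql-ops cluster-manual-control --disable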
Related
We are running 8 Docker containers on an Ubuntu Amazon EC2 instance with 30 GB of memory and 16 cores, and the instance gets stuck 3-4 days after a reboot, with memory errors in the logs. Monitoring with docker stats shows every container using less than 2 GB (about 16 GB in total), while free -g shows around 16-17 GB used, the rest sitting in buff/cache, and 0 free.
Please let us know whether there is an actual issue or whether we need to change some configuration. I tried dropping the cache, but it filled up again within 10 minutes.
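For reference, these are the commands we used to inspect and drop the cache (the drop_caches value of 3 is simply what we tried):
docker stats --no-stream        # per-container usage, < 2 GB each
free -g                         # ~16-17 GB used, rest in buff/cache, 0 free
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches   # cache refills within ~10 minutes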
Versions:
Ubuntu: 16.04 (xenial)
Docker: 17.09.0-ce
Please let me know if more details are required for troubleshooting.
Thanks,
I would like to limit allocatable memory per node (VM) on Kubernetes.
Right now it seems that certain pods can grow past the memory limit of the VM, making it unresponsive, instead of the pods being killed before that happens.
See Reserve Compute Resources for System Daemons.
With systemd, we can configure the kubelet's Node Allocatable feature like this:
$ cat > /etc/systemd/system/kubelet.service.d/20-node-eviction.conf <<EOF
Environment="KUBELET_EXTRA_ARGS=--eviction-hard=memory.available<500Mi --system-reserved=memory=1Gi"
EOF
$ systemctl daemon-reload
$ systemctl restart kubelet
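To check that the reservation is applied after the kubelet restart, the node's Allocatable memory should come out roughly 1 Gi lower than its Capacity; the node name below is a placeholder.
$ kubectl describe node <node-name> | grep -A 6 -E "Capacity|Allocatable"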
Currently the commitlog directory is pointing to Directory1. I want to change it to a different directory, D2. How should the migration be done?
This is how we did it. We have a load-balanced client that talks to Cassandra 1.1.2, and an instance of the client lives on each Cassandra node.
Drain your service.
Wait for the load balancer to remove the node.
Stop your service on the local node to halt direct Client-Cassandra writes: systemctl stop <your service name>
At this point there should be no more writes and greatly reduced disk activity. You can verify with:
iostat 2 (disk activity should be near zero)
nodetool gossipinfo
Disable Cassandra gossip protocol to mark the node dead and halt Cassandra-Cassandra writes: nodetool disablegossip
Flush all contents of the commit log into SSTables: nodetool flush
Drain the node; this command is more important than nodetool flush (and might include all the behaviour of nodetool flush): nodetool drain
Stop the cassandra process: systemctl stop cassandra
Modify Cassandra config file(s), e.g. vi /etc/cassandra/default.conf/cassandra.yaml
Start Cassandra: systemctl start cassandra
Wait 10-20 minutes. Tail Cassandra logs to follow along, e.g. tail -F /var/log/cassandra/system.log
Confirm ring is healthy before moving on to next node: nodetool ring
Re-start client service: systemctl start <your service here>
Note that there was no need for us to do manual copying of the commitlog files themselves. Flushing and draining took care of that. The files then slowly reappeared in the new commitlog_dir location.
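For the config-modification step above, the actual change is a single key in cassandra.yaml; the paths below are only placeholders standing in for Directory1 (old) and D2 (new).
# Point commitlog_directory at the new location (placeholder paths; adjust to yours).
sudo sed -i 's|^commitlog_directory:.*|commitlog_directory: /data/D2/commitlog|' \
    /etc/cassandra/default.conf/cassandra.yaml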
You can change the commit log directory in cassandra.yaml (key: "commitlog_directory") and copy all logs to the new destination (see docs):
commitlog_directory
The directory where the commit log is stored. Default locations:
Package installations: /var/lib/cassandra/commitlog
Tarball installations: install_location/data/commitlog
For optimal write performance, place the commit log on a separate disk partition, or (ideally) a separate physical device from the data file directories. Because the commit log is append only, an HDD is acceptable for this purpose.
If you are using bitnami/cassandra containers, this should be done using this env var (see docs):
CASSANDRA_COMMITLOG_DIR: Directory where the commit logs will be stored. Default: /bitnami/cassandra/data/commitlog
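A minimal sketch, assuming the bitnami/cassandra image; the target path is only an example.
docker run -d --name cassandra \
  -e CASSANDRA_COMMITLOG_DIR=/bitnami/cassandra/commitlog \
  bitnami/cassandra:latest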
I'm trying to start DSE 5.0.1 Cassandra (single node) locally.
Getting below error:
CassandraDaemon.java:698 - Cannot start node if snitch's data center
(Cassandra) differs from previous data center (Graph). Please fix the
snitch configuration, decommission and rebootstrap this node or use
the flag -Dcassandra.ignore_dc=true
If you are using GossipingPropertyFileSnitch, start Cassandra with the option
-Dcassandra.ignore_dc=true
If it starts successfully, execute:
nodetool repair
nodetool cleanup
Afterwards, Cassandra should be able to start normally without the ignore option.
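One way to pass the flag on a DSE package install is to append it to the JVM options and restart; the cassandra-env.sh path below is an assumption based on the /etc/dse/cassandra/ layout mentioned elsewhere in this thread, so adjust for your install.
# Assumed path for a DSE package install; remove the line again once the node is fixed.
echo 'JVM_OPTS="$JVM_OPTS -Dcassandra.ignore_dc=true"' | \
  sudo tee -a /etc/dse/cassandra/cassandra-env.sh
sudo service dse restart
nodetool repair
nodetool cleanup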
This occurs when the node starts and sees stored information indicating that it was previously part of a different datacenter, i.e. the datacenter name was different on a prior boot and has since been changed.
In your case you are most likely using DseSimpleSnitch, which names the datacenter based on the workload of that node. Previously the node was started with Graph enabled, which set the name to Graph. Now, starting it without Graph enabled makes it name the datacenter Cassandra, which is the default.
Using the -Dcassandra.ignore_dc=true flag will allow you to proceed but a better solution would be to switch to GossipingPropertyFileSnitch and give this machine a dedicated datacenter name.
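A minimal sketch of that switch, assuming a DSE package layout; the datacenter and rack names are just examples.
# Point the snitch at GossipingPropertyFileSnitch and name this node's DC/rack.
sudo sed -i 's/^endpoint_snitch:.*/endpoint_snitch: GossipingPropertyFileSnitch/' \
    /etc/dse/cassandra/cassandra.yaml
cat <<EOF | sudo tee /etc/dse/cassandra/cassandra-rackdc.properties
dc=DC1
rack=rack1
EOF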
Another option (if you are just testing) is to wipe out the data directory as this will clear out the information previously labeling the datacenter for the node. This will most likely be sudo rm -R /var/lib/cassandra/
This issue will happen when you change the datacenter name in /etc/dse/cassandra/cassandra-rackdc.properties.
To resolve it, follow the three steps below.
Clear the directories mentioned below (note: if you have data, please take a backup with the cp command first):
cd /var/lib/cassandra/commitlog
sudo rm -rf *
cd /var/lib/cassandra/data
sudo rm -rf *
Now start the DSE service with the command below:
service dse start
Check the status of the node(s) with:
nodetool -h ::FFFF:127.0.0.1 status
I have a single-node MemSQL cluster:
RAM: 16 GB
Cores: 4
Ubuntu 14.04
I have Spark deployed on this MemSQL cluster for ETL purposes.
I am unable to configure Spark on MemSQL.
How do I set a rotation policy for the Spark work directory /var/lib/memsql-ops/data/spark/install/work/?
How can I change that path?
How large should spark.executor.memory be to avoid OutOfMemoryExceptions?
How do I set different configuration settings for Spark when it has been deployed on a MemSQL cluster?
Hopefully the following will fix your issue:
See spark.worker.cleanup.enabled and related configuration options: https://spark.apache.org/docs/1.5.1/spark-standalone.html
The config can be changed in /var/lib/memsql-ops/data/spark/install/conf/spark_{master,worker}.conf. Once the configuration is changed, you must restart the Spark cluster with memsql-ops spark-component-stop --all and then memsql-ops spark-component-start --all.
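For example, to enable periodic cleanup of the work directory, something like the following might work; the property names are standard Spark standalone settings, but whether spark_worker.conf accepts them in exactly this key-value form is an assumption.
# Append standard Spark standalone cleanup settings (example values:
# check every 30 minutes, delete app data older than 7 days).
cat <<EOF | sudo tee -a /var/lib/memsql-ops/data/spark/install/conf/spark_worker.conf
spark.worker.cleanup.enabled true
spark.worker.cleanup.interval 1800
spark.worker.cleanup.appDataTtl 604800
EOF
memsql-ops spark-component-stop --all
memsql-ops spark-component-start --all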