I had a question: is it possible to store RMS data on a memory card? We store a large amount of data in RMS. Is there any alternative way?
I also want to know how RMS memory is calculated.
The API for this is RecordStore#getSizeAvailable(); see the method's javadoc:
...Returns the amount of additional room (in bytes) available for this record store to grow. Note that this is not necessarily the amount of extra MIDlet-level data which can be stored, as implementations may store additional data structures with each record to support integration with native applications, synchronization, etc.
To get the total memory size:
Runtime.getRuntime().totalMemory()
To get the free memory size:
Runtime.getRuntime().freeMemory()
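For example, here is a minimal sketch of calling getSizeAvailable() from inside your MIDlet (the store name "MyStore" is just a placeholder):

import javax.microedition.rms.RecordStore;
import javax.microedition.rms.RecordStoreException;

// Returns the extra room (in bytes) this record store can still grow into,
// or -1 if the store could not be opened or queried.
private int rmsRoomLeft() {
    try {
        // "MyStore" is a placeholder name; true = create the store if it does not exist
        RecordStore rs = RecordStore.openRecordStore("MyStore", true);
        int bytesAvailable = rs.getSizeAvailable();
        rs.closeRecordStore();
        return bytesAvailable;
    } catch (RecordStoreException e) {
        return -1;
    }
}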
With regard to the Key/Value model of ArangoDB, does anyone know the maximum size per Value? I have spent hours searching the Internet for this information, but to no avail; you would think this is classified information. Thanks in advance.
The answer depends on different things, like the storage engine and whether you mean the theoretical or the practical limit.
In the case of MMFiles, the maximum document size is determined by the startup option wal.logfile-size if wal.allow-oversize-entries is turned off. If it's on, then there's no immediate limit.
In the case of RocksDB, it might be limited by some of the server startup options, such as rocksdb.intermediate-commit-size, rocksdb.write-buffer-size, rocksdb.total-write-buffer-size or rocksdb.max-transaction-size.
When using arangoimport to import a 1GB JSON document, you will run into the default batch-size limit. You can increase it, but it appears to max out at 805306368 bytes (0.75GB). The HTTP API seems to have the same limitation (/_api/cursor with bindVars).
What you should keep in mind: mutating a document is potentially a slow operation because of the append-only nature of the storage layer. In other words, a new copy of the document with a new revision number is persisted, and the old revision will be compacted away some time later (I'm not familiar with all the technical details, but I think this is fair to say). For a 500MB document it seems to take a few seconds to update or copy it using RocksDB on a rather strong system. It's much better to have many small documents.
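As a rough illustration of the "many small documents" advice, here is a plain Java sketch (no ArangoDB driver involved; the class and method names are made up) that groups JSON documents into batches staying under a chosen byte limit, e.g. comfortably below the 805306368-byte batch-size ceiling, so that each batch can be imported separately:

import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class ImportBatcher {
    // Group JSON documents into batches whose combined size stays under maxBatchBytes.
    public static List<List<String>> batchByByteSize(List<String> jsonDocs,
                                                     long maxBatchBytes) {
        List<List<String>> batches = new ArrayList<List<String>>();
        List<String> current = new ArrayList<String>();
        long currentBytes = 0;
        for (String doc : jsonDocs) {
            long docBytes = doc.getBytes(StandardCharsets.UTF_8).length;
            if (!current.isEmpty() && currentBytes + docBytes > maxBatchBytes) {
                batches.add(current);              // current batch is full, start a new one
                current = new ArrayList<String>();
                currentBytes = 0;
            }
            current.add(doc);
            currentBytes += docBytes;
        }
        if (!current.isEmpty()) {
            batches.add(current);
        }
        return batches; // each batch can then be sent as a separate import request
    }
}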
What is the maximum value that one can set for transaction_buffer inside the memsql.cnf? I assume there is a correlation with the RAM allocated on the server. My leaves have 32GB each, and at the moment we have transaction_buffer set to 0. We are past the design phase of our cluster and we would like to do some performance tuning, and one parameter that needs to be set accordingly is this one.
The transaction_buffer size is an amount of memory reserved per database partition - i.e. each leaf node will need (transaction_buffer size * partitions per leaf * number of databases) of memory. The default is 128 MB, and this should generally be sufficient.
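As a back-of-the-envelope example of that calculation (the partition and database counts below are made-up values; 128 MB is the default mentioned above):

public class TransactionBufferEstimate {
    public static void main(String[] args) {
        // memory reserved per leaf = transaction_buffer * partitions per leaf * number of databases
        long transactionBufferBytes = 128L * 1024 * 1024; // default transaction_buffer of 128 MB
        int partitionsPerLeaf = 8;                         // made-up example value
        int databases = 4;                                 // made-up example value

        long reservedPerLeaf = transactionBufferBytes * partitionsPerLeaf * databases;
        System.out.println("Reserved per leaf: " + (reservedPerLeaf / (1024 * 1024)) + " MB");
        // With these numbers: 128 MB * 8 * 4 = 4096 MB out of a 32GB leaf.
    }
}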
Basically, it's a balancing act - data in transaction_buffer will exist in memory before being written to disk. A transaction_buffer of 0 may save you some memory, but it's not taking full advantage of the speed of being in memory. If you have a lot of databases that are updated infrequently a low transaction_buffer may be the right balance as it is a per database cost (keeping in mind that each partition is a database itself).
Transaction_buffer may also be valuable for you as a "get out of jail free" card - since if your workload becomes more and more memory intensive it's possible to get into a situation where your OS is killing MemSQL too frequently to reduce memory consumption. Once you get stuck in a vicious cycle like that, restarting with a reduced transaction buffer can reduce memory overhead enough to keep the system from being OOM-killed long enough to troubleshoot and correct the issue on your end.
Eventually, it might become adaptive, and you'll be left without that easy way to get some wiggle room - which is why it is essential to make sure maximum_memory is low enough that your system doesn't begin to OOM-kill processes. https://docs.memsql.com/docs/memory-management
There is an In-Memory option introduced in Cassandra by DataStax Enterprise 4.0:
http://www.datastax.com/documentation/datastax_enterprise/4.0/datastax_enterprise/inMemory.html
But the size of an in-memory table is limited to 1GB.
Does anyone know the reasoning behind limiting it to 1GB? And is it possible to extend an in-memory table to a larger size, such as 64GB?
To answer your question: today it's not possible to bypass this limitation.
In-Memory tables are stored within the JVM heap, and regardless of the amount of memory available on a single node, allocating more than 8GB to the JVM heap is not recommended.
The main reason for this limitation is that the Java garbage collector slows down when dealing with huge amounts of memory.
However, if you consider Cassandra as a distributed system, 1GB is not the real limitation:
(nodes*allocated_memory)/ReplicationFactor
allocated_memory is at most 1GB -- so your table may contain many GB in memory, allocated across different nodes.
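For instance, a quick worked example of that formula (the node count and replication factor are just illustrative):

public class InMemoryCapacity {
    public static void main(String[] args) {
        int nodes = 12;                 // illustrative cluster size
        double allocatedMemoryGB = 1.0; // per-node in-memory table limit (max 1GB)
        int replicationFactor = 3;      // illustrative replication factor

        double effectiveCapacityGB = (nodes * allocatedMemoryGB) / replicationFactor;
        System.out.println("Effective in-memory table capacity: " + effectiveCapacityGB + " GB");
        // (12 * 1GB) / 3 = 4GB of unique data spread across the cluster
    }
}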
I think that in the future something will improve, but dealing with 64GB in memory could be a real problem when you need to flush data to disk. One more consideration that adds a limitation: avoid TTL when working with in-memory tables. TTL creates tombstones, and a tombstone is not deallocated until the GCGraceSeconds period passes -- so, considering the default value of 10 days, each tombstone will keep its portion of memory busy and unavailable, possibly for a long time.
HTH,
Carlo
I want to use RMS for storing a large amount of data. I have tested it up to a certain limit. I just want to be sure whether RMS is capable of storing it.
I have stored around 135,000 characters in RMS and I can also fetch them from RMS. How much data can I store using RMS?
I want to implement this in a live application.
There is no fixed limit on RMS storage capacity. It all depends on how much free memory is available on the device.
Try the following code snippet in your application to find the memory status on the device.
// Check the VM's total and free heap memory.
Runtime rt = Runtime.getRuntime();
long totalMemory = rt.totalMemory();
long freeMemory = rt.freeMemory();

// Open (or create) your app's record store and check how much additional
// room (in bytes) it has available to grow.
// Note: openRecordStore() and getSizeAvailable() can throw RecordStoreException.
RecordStore rs = RecordStore.openRecordStore( "App_Db_Name_Here", true );
int sizeAvailable = rs.getSizeAvailable();
Compare the above three values and proceed accordingly in your application.
But RMS I/O operations become slower as the store grows, which is why such a local database is not normally used for storing large amounts of data. Again, this varies from device to device. You should take this into account when deciding on an implementation while porting your app across devices.
Refer to the RecordStore and getSizeAvailable() documentation for complete notes.
I'm completely new to using Cassandra. Is there a maximum key size, and would that ever impact performance?
Thanks!
The key (and column names) must be under 64K bytes.
Routing is O(N) of the key size and querying and updating are O(N log N). In practice these factors are usually dwarfed by other overhead, but some users with very large "natural" keys use their hashes instead to cut down the size.
http://en.wikipedia.org/wiki/Apache_Cassandra claims (apparently incorrectly!) that:
The row key in a table is a string with no size restrictions, although typically 16 to 36 bytes long
See also:
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/value-size-is-there-a-suggested-limit-td4959690.html which suggests that there is some limit.
Clearly, very large keys could have some network performance impact if they need to be sent over the Thrift RPC interface - and they would cost storage. I'd suggest you try a quick benchmark to see what impact it has for your data.
One way to deal with this might be to pre-hash your keys and just use the hash value as the key, though this won't suit all use cases.
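For example, a minimal sketch of that pre-hashing idea using the standard java.security.MessageDigest API (the choice of SHA-256 and hex encoding is just one option):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class KeyHasher {
    // Turn an arbitrarily large "natural" key into a short, fixed-size row key.
    public static String hashKey(String naturalKey) throws NoSuchAlgorithmException {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        byte[] hash = digest.digest(naturalKey.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (int i = 0; i < hash.length; i++) {
            hex.append(String.format("%02x", hash[i]));
        }
        return hex.toString(); // always 64 hex characters, regardless of the input size
    }
}

Keep in mind that the natural key is no longer recoverable from the hash, so store it separately (e.g. as a regular column value) if you still need it.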