I use BigCouch in my project.
I run 3 nodes (the default).
Everything was fine until one node suddenly went down (one server crashed).
Why does the write process get stuck when one node is down?
I read the documentation.
I tried setting N = 1 (the replication constant), R = 1 (the read quorum constant), and W = 1 (the write quorum constant).
I think this config means that if 1 write and 1 replica succeed on a server, that's enough to return a 201 status.
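For context, the values I was changing live in the [cluster] section of the config file; mine looked roughly like this (I am reproducing it from memory, so treat the section and key names as an assumption rather than exact):

[cluster]
; n = replicas, r = read quorum, w = write quorum
n = 1
r = 1
w = 1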
I then opened an issue on the BigCouch GitHub.
The answer I got was that I must set the settings back to the defaults.
I have already set the settings back to the defaults, but BigCouch still gets stuck when one of the three nodes is down.
These are the 3 nodes I entered in the "nodes" database:
bigcouch#bigserver1.server1
bigcouch#bigserver2.server2
bigcouch#bigserver3.server3
And this is the error I get when I create a database via Futon while one node is down:
{timeout,[{{shard,undefined,'bigcouch#bigserver1.server1',undefined,undefined,#Ref}, ok},
          {{shard,undefined,'bigcouch#bigserver2.server2',undefined,undefined,#Ref}, ok},
          {{shard,undefined,'bigcouch#bigserver3.server3',undefined,undefined,#Ref}, nil}]}
It takes about 10 minutes until this error appears.
The same thing happens with my Node.js app, which also gets stuck for 10 minutes.
This is a known limitation of BigCouch 0.3. In 0.4 you will be able to create and delete databases as long as a majority of nodes is online.
The documentation says that cassandra-driver does automatic paging when queries are large enough (with default_fetch_size being 5000 rows) and will return a PagedResult.
I have tested reading data from my local Cassandra, which contains 9999 rows, using a SimpleStatement with my own fetch size, but it returned the full ResultSet (9999 rows) instead of pages (an instance of PagedResult). I also tried changing Session.default_fetch_size, but that didn't work either.
Here's my code.
My first attempt: this is the SimpleStatement code I wrote to change the fetch size.
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

cluster = Cluster()
session = cluster.connect(keyspace_name)  # keyspace_name is defined elsewhere

query = "SELECT * FROM user"
statement = SimpleStatement(query, fetch_size=10)
rows = list(session.execute(statement))
print(len(rows))
It prints 9999 (all rows), not the 10 rows I expected after setting fetch_size.
My second attempt: I tried to change the query fetch size by changing the session's default fetch size, Session.default_fetch_size.
from cassandra.cluster import Cluster

cluster = Cluster()
session = cluster.connect(keyspace_name)  # keyspace_name is defined elsewhere
session.default_fetch_size = 10

query = "SELECT * FROM user"
rows = list(session.execute(query))
print(len(rows))
It also prints 9999 rows instead of 10.
My goal is not to limit the rows returned by the query, as in SELECT * FROM user LIMIT 10. What I want is to fetch the rows page by page to avoid overloading memory.
So what actually happened?
Note: I am using cassandra-driver 3.25 for Python with Python 3.7.
I am sorry if my additional information still doesn't make this a good question. I have never asked a question before, so any suggestions are welcome :)
Your test is invalid because your code is faulty.
When you call list(), you are in fact "materialising" all the result pages. Your code is not iterating over the rows page by page; it is retrieving all of the rows.
The driver automatically fetches the next page in the background until there are no more pages to fetch. It may not seem like it, but each page only contains fetch_size rows.
Retrieving the next page happens transparently, so to you it seems like the results are not getting paged at all, but that automatic behaviour of the driver is working as designed. Cheers!
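If you want to watch the paging happen, inspect the ResultSet instead of materialising it, or step through the pages manually. A rough sketch, assuming the same user table and a placeholder keyspace name:

from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

cluster = Cluster()
session = cluster.connect("my_keyspace")  # placeholder keyspace name

result = session.execute(SimpleStatement("SELECT * FROM user", fetch_size=10))
print(len(result.current_rows))  # rows in the first page only, i.e. 10
print(result.has_more_pages)     # True while further pages remain

# Manual page stepping: process one page, then explicitly fetch the next.
while True:
    for row in result.current_rows:
        pass  # process one page worth of rows
    if not result.has_more_pages:
        break
    result.fetch_next_page()

Plain iteration (for row in result) also pages transparently, holding roughly one page of rows at a time, as long as you don't wrap it in list().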
I want to create a two-node cluster in Cassandra. I have made the following changes in my yaml files:
Example:
Node 1
cluster_name: 'MyCassandraCluster'
num_tokens: 256
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "10.168.66.41,10.176.170.59"
listen_address: 10.168.66.41
rpc_address: 10.168.66.41
endpoint_snitch: GossipingPropertyFileSnitch
auto_bootstrap: false
Node 2
cluster_name: 'MyCassandraCluster'
num_tokens: 256
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "10.168.66.41"
listen_address: 10.176.170.59
rpc_address: 10.176.170.59
endpoint_snitch: GossipingPropertyFileSnitch
auto_bootstrap: false
But I am still not able to create a two-node cluster. Why am I facing this issue?
Well, it's hard to know without seeing an actual error message from your system.log, but I'll take a guess. It looks like you might have a chicken-before-the-egg problem, based on your seed nodes.
10.176.170.59 won't be able to start without 10.168.66.41 already running. And while .41 has itself specified as a seed node, it also has .59 specified, which might throw things off.
My recommendation is to change your seed list to be the same on all (both) nodes. Just set it to this on both:
seeds: "10.168.66.41"
Then, start .41, which should come up. Then start .59.
If that doesn't do it, look for exceptions in your system.log.
auto_bootstrap should be set to true when a new node is added to a cluster.
So set auto_bootstrap to true and use a single node as your seed, e.g. in your case 10.168.66.41 (or 10.176.170.59).
Start your seed node first.
From your secondary node, telnet to the seed node on the storage port (7000 by default); if you cannot connect, check your firewall settings.
Then start your secondary node.
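Once both nodes are up, you can confirm they have joined the same ring with nodetool from either box; both 10.168.66.41 and 10.176.170.59 should be listed with status UN (Up/Normal):

nodetool status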
I'm trying to fix a problem in some legacy code which generates nodes of the custom content type "show", but only if a node of the same type with the same title doesn't already exist. The code looks like:
$program = node_load(array('title' => $xml_node->program_title, 'type' => 'show'));
if (!$program) {
$program = new stdClass();
$program->type = 'show';
...
node_submit($program);
node_save($program);
}
So the script first tries to load a 'show' node with the specific title, and if that fails it creates one.
The problem is that when this is called multiple times in a short period of time (inside a loop), it creates duplicate nodes: 2 shows with the same title created in the same second?!
What could be the problem there?
I was looking at examples of how to save a node in Drupal 6. Some of them don't even call node_submit(). Is that call needed? If so, do I maybe have to pass what node_submit() returns to node_save()? Or does node_load() fail to load the existing node for some reason? Maybe some cache has to be cleared or something?
As far as I know, having used node_save() to create nodes programmatically, there is no need for the node_submit() call.
The reason duplicate nodes are created is that node_load() fires before the node_load() cache has been updated. Try adding:
node_load(FALSE, NULL, TRUE);
after node_save($program).
This will clear the node_load() cache.
see:
https://api.drupal.org/comment/12084#comment-12084
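Putting the suggestion together with your original snippet, the create-if-missing block would look roughly like this (a sketch; the elided field assignments stay as in your code, and node_submit() is dropped as discussed above):

$program = node_load(array('title' => $xml_node->program_title, 'type' => 'show'));
if (!$program) {
  $program = new stdClass();
  $program->type = 'show';
  ...
  node_save($program);
  // Reset the static node_load() cache so the next lookup sees the new node.
  node_load(FALSE, NULL, TRUE);
}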
I get a very strange error when creating a column family with phpcassa. Here is my code:
$sys = new SystemManager("127.0.0.1:9160");
$attr = array("comparator" => "UTF8Type");
$data = $sys->create_column_family("my_key_space", "user_likes", $attr);
I'm not entirely sure this is valid code, though I'm fairly confident it is. This is the error I get:
TTransportException [ 0 ]: TSocket: timed out reading 4 bytes from 127.0.0.1:9160
And I get this error after a really long wait, maybe 30-60 seconds, yet any other code, like retrieving or inserting data, works perfectly. So what could it be?
I believe the attribute name should be "comparator_type" instead of "comparator".
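In phpcassa terms, that is a one-key change to your snippet (sketch):

$sys = new SystemManager("127.0.0.1:9160");
$attr = array("comparator_type" => "UTF8Type");
$data = $sys->create_column_family("my_key_space", "user_likes", $attr);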
As for why the server isn't responding, you'll probably find an Exception or stack trace in your Cassandra logs. If you're using an up-to-date version of Cassandra (like 1.1.5 or 1.1.6), I suggest opening a ticket in the Cassandra JIRA, because it should be returning an error instead of timing out.
I have this simple table (just for testing):
create table table
(
key int not null primary key auto_increment,
name varchar(30)
);
Then I execute the following statements:
insert into table values ( null , 'one');// key=1
insert into table values ( null , 'two');// key=2
At this stage all goes well. Then I close the H2 Console, re-open it, and execute this statement:
insert into table values ( null , 'three');// key=33
Finally, the result: the key of the third row is 33 instead of 3.
I don't know how to solve this problem, or whether it's a real problem at all.
The database uses a cache of 32 entries for sequences, and auto-increment is internally implemented as a sequence. If the system crashes without closing the database, at most this many numbers are lost. This is similar to how sequences work in other databases. Sequence values are not guaranteed to be generated without gaps in such cases.
So, did you really close the database? You should - it's not technically a problem if you don't, but closing the database will ensure such strange things will not occur. I can't reproduce the problem if I normally close the database (stop the H2 Console tool). Closing all connections will close the database, and the database is closed if the application is stopped normally (using a shutdown hook).
By the way, what is your exact database URL? It seems you are using jdbc:h2:tcp://... but I can't see the rest of the URL.
Don't close the terminal. The terminal is the parent process of the H2 TCP server; they are not detached. When you just close the terminal, its process kills all child processes, which means an emergency server shutdown.
This happens when the database "thinks" it was forced to close (by an accident or emergency, for example), and it is related to the "identity cache".
In my case I hit this issue while learning and playing with the H2 database in a Spring Boot application. The solution was to execute the SHUTDOWN; command in the h2-console when I finished playing; after that you can safely stop your Spring Boot application without this huge jump in your auto-generated fields.
Personal note: this is usually not a problem if you create a fresh database on every application start, but when the data you are playing with is persisted in a file-based H2 database (for example with properties like the ones below) and survives restarts, then this happens, so close it safely with the SHUTDOWN command.
spring.datasource.url=jdbc:h2:./src/main/resources/data;DB_CLOSE_ON_EXIT=FALSE;AUTO_RECONNECT=TRUE
spring.jpa.hibernate.ddl-auto=update
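Concretely, the last statement I run in the h2-console before stopping the application is:

SHUTDOWN;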
References:
Solution: https://stackoverflow.com/a/40135657/10195307
Learn about the identity cache: https://www.sqlshack.com/learn-to-avoid-an-identity-jump-issue-identity_cache-with-the-help-of-trace-command-t272/