Did anyone succeed in running 'rippled'? - ripple

I made an attempt to run rippled (both versions 1.1.2 and 1.2.4) on Ubuntu 18.04, and I would not say it does not work at all: it utilizes CPU and disk, it creates a database about 3 GB in size, and I was even able to create a wallet from the command line. But if I run
./rippled account_info r9cZA1mLK5R5Am25ArfXFmqgNwjZgnfk59 true
I always get
Loading: "/home/xrp/.config/ripple/rippled.cfg"
2019-May-29 10:04:10.273909186 HTTPClient:NFO Connecting to 127.0.0.1:5005
{
   "result" : {
      "error" : "lgrNotFound",
      "error_code" : 21,
      "error_message" : "ledgerNotFound",
      "request" : {
         "account" : "r9cZA1mLK5R5Am25ArfXFmqgNwjZgnfk59",
         "command" : "account_info",
         "ledger_index" : 0
      },
      "status" : "error"
   }
}
What could be wrong?
See Building and running rippled on Ubuntu for more information on the steps I took.
EDIT1:
I tried the same account_info command with s1.ripple.com and s2.ripple.com and got the same "lgrNotFound" error:
./rippled -v --rpc_ip 34.213.185.56:51234 account_info r9cZA1mLK5R5Am25ArfXFmqgNwjZgnfk59 true

It looks like you are successfully running rippled. What you haven't done yet is get rippled up to date, or "synchronized", with the rest of the nodes on the network.
There are several possible reasons why, but these are the most common that I've seen:
Inability to connect to other peers. If a firewall or other network issue makes it impossible to connect to other peers, you're never going to sync. Run rippled server_info and look at the number of peers; it should be at least 10 after rippled has been running for a few minutes. (The command rippled peers gives a lot more detail, but usually the count alone is enough to tell whether you're OK.)
Insufficient time. It can take several minutes for a node to sync, because it has to download the state of the ledger at some given time, then catch up to the changes from that time. If you've been waiting over 15 minutes or so, this is probably not your problem.
Insufficient machine resources. Most often this takes the form of a slow network connection, or a slow hard drive. These pages will give more details and current recommendations:
https://developers.ripple.com/system-requirements.html
https://developers.ripple.com/capacity-planning.html
Inability to download the validator list. Less common in general is a problem connecting to the configured [validator_list_sites] (https://vl.ripple.com by default). Run rippled validators. You should get a result that includes a bunch (31 currently) of validator node IDs (50-odd character strings starting with "n") and a JSON object labelled validator_list. That object should indicate an expiration date in the future and a status of "active". Anything else usually indicates a problem. rippled validator_list_sites may give you more of an explanation of what the problem is, if any. A quick way to run all of these checks together is sketched below.
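For convenience, here is a rough sketch of running the checks above in one go, assuming the same ./rippled binary and default config that the question uses:
# Sketch of the sync checks described above; all four are standard rippled commands.
./rippled server_info          # check the peers count and server_state in the result
./rippled peers                # detailed peer list, if the count looks low
./rippled validators           # validator_list should show status "active" and a future expiration
./rippled validator_list_sites # fetch status for each configured validator list site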

Related

"shmop_open(): unable to attach or create shared memory segment 'No error':"?

I get this every time I try to create an account to ask this on Stack Overflow:
Oops! Something Bad Happened!
We apologize for any inconvenience, but an unexpected error occurred while you were browsing our site.
It’s not you, it’s us. This is our fault.
That's the reason I post it here. I literally cannot ask it on Overflow, even after spending hours of my day (on and off) repeating my attempts and solving a million reCAPTCHA puzzles. Can you maybe fix this error soon?
With no meaningful or complete examples, and basically no documentation whatsoever, I've been trying to use the "shmop" part of PHP for many years. Now I need a way to send data between two different CLI PHP scripts running on the same machine, without abusing the database for this; it must work without database support, which is why I'm trying to use shmop. But it doesn't work at all:
$shmopid = shmop_open(1, 'w', 0644, 99999); // I have no idea what the "key" is supposed to be. It says: "System's id for the shared memory block. Can be passed as a decimal or hex.", so I've given it a 1 and also tried with 123. It gave an error when I set the size to 64, so I increased it to 99999. That's when the error changed to the one I now face above.
shmop_write($shmopid, 'meow 123', 0); // Write "meow 123" to the shared variable.
while (1)
{
$shared_string = shmop_read($shmopid, 0, 8); // Read the "meow 123", even though it's the same script right now (since this is an example and minimal test).
var_dump($shared_string);
sleep(1);
}
I get the error for the first line:
shmop_open(): unable to attach or create shared memory segment 'No error':
What does that mean? What am I doing wrong? Why is the manual so insanely cryptic for this? Why isn't this just a built-in "superarray" that can be accessed across the scripts?
About CLI:
It cannot work in standalone CLI processes, as an answer here says:
https://stackoverflow.com/a/34533749
The master process is the one to hold the shared memory block, so you will have to use php-fpm or mod_php or some other web/service-running version, and maybe even start/request/stop it all from a CLI php script.
About shmop usage itself:
Use "c" mode in shmop_open() for creating the segment before it can be used with "a" or "w".
I stumbled on this error in a different scenario where shared memory is completely optional to speed up some repeated operations. So I wanted to try reading first without knowing memory size and then allocate from actual data when needed. In my case I had to call it as #shmop_open() to hide this error output.
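A minimal sketch of that create-then-use pattern; the key (0xABC0) and size (64) are arbitrary values chosen just for illustration:
<?php
// Minimal sketch: create the segment with "c" before writing/reading it.
// The key (0xABC0) and size (64) are arbitrary illustration values.
$key  = 0xABC0;
$size = 64;
$shm = shmop_open($key, 'c', 0644, $size); // "c" creates the segment if it doesn't exist yet
if ($shm === false) {
    die("shmop_open failed\n");
}
shmop_write($shm, 'meow 123', 0);          // write at offset 0
var_dump(shmop_read($shm, 0, 8));          // read the same 8 bytes back
shmop_delete($shm);                        // mark the segment for removal when finished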
About shmop on Windows:
PHP 7 crashed the Apache worker process (causing its restart with status 3221225477) when trying to reallocate a segment with the same predefined (arbitrary) key but a different size, even after shmop_delete(). As a workaround, I took the filemtime() of the source file containing the data to be stored in memory and used that timestamp as the key in shmop_open(). It still was not flawless IIRC, and I don't know whether it would cause memory leaks, but it was enough to test my code, which would mainly run on Linux anyway.
Finally, as of PHP version 8.0.7, shmop seems to work fine with Apache 2.4.46 and mod_php in Windows 10.
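Roughly, that workaround looked like the sketch below; the file name data.json is a stand-in for the real source file:
<?php
// Sketch of the Windows workaround: derive the shmop key from the data file's mtime,
// so a changed file gets a fresh key instead of re-attaching with a different size.
// "data.json" is a hypothetical file name.
$file = 'data.json';
$key  = filemtime($file);
$shm  = shmop_open($key, 'c', 0644, filesize($file));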

npm is very slow on Windows 10

This question is basically a duplicate of this one, except that the accepted answer on that question was, "it's not actually slower, you just weren't running the timing command correctly."
In my case, it actually is slower! :)
I'm on Windows 10. Here's the output from PowerShell's Measure-Command (the TotalMilliseconds line represents wall-clock time):
PS> Measure-Command {npm --version}
Days : 0
Hours : 0
Minutes : 0
Seconds : 1
Milliseconds : 481
Ticks : 14815261
TotalDays : 1.71472928240741E-05
TotalHours : 0.000411535027777778
TotalMinutes : 0.0246921016666667
TotalSeconds : 1.4815261
TotalMilliseconds : 1481.5261
A few other numbers, for comparison:
Measure-Command {.\node_modules\.bin\mocha}: 1300ms
'npm run test' (which just runs mocha): 3300ms
npm help: 1900ms
The node interpreter itself is OK: node -e 0: 180ms
It's not just npm that's slow... mocha reports that my tests only take 42ms, but as you can see above, it takes 1300ms for mocha to run those 42ms of tests!
I've had the same trouble. Do you have Symantec Endpoint Protection? Try disabling Application and Device Control: go to Change Settings > Client Management > General and uncheck Enable Application and Device Control.
(You could disable SEP altogether; for me the command is: "%ProgramFiles(x86)%\Symantec\Symantec Endpoint Protection\smc.exe" -stop.)
If you have some other anti-virus, there's likely a way to disable it as well. Note that closing the app in the Notification area might not stop the virus protection. The problem is likely with any kind of realtime protection that scans a process as it starts. Since node and git are frequently-invoked short-running processes, this delay is much more noticeable.
In PowerShell, I like to measure the performance of git status both before and after that change: Measure-Command { git status }
I ran into this problem long ago; I think it was an extension I had installed. I use Visual Studio Code, and with no extensions and Git Bash as the integrated shell:
// Git Bash configuration
"terminal.integrated.shell.windows": "C:\\Program Files\\Git\\bin\\bash.exe",
it actually flies. I use both OSes, so I can tell the difference. Try using different tools and disabling some extensions.
And if that still doesn't work, check your antivirus; maybe it's slowing down the process?
Been googling this all day, with no luck. Decided to uninstall Java to see what would happen and bingo, solved my problem. I know this is an old thread, but I found myself coming back to it so many times to see if I missed anything.
off topic:
Got to figure out how to get Java working now 🤦
Didn't know about Measure-Command, so I'll be using that in the future!
I had this problem. When I tried to run one of my work applications at home, I realized that on my work laptop the app started in about 2 minutes, but on my personal notebook it took 5 minutes or more.
After trying some possible solutions, I finally found the problem: I had installed Git Bash on my D drive partition, which is an HDD. When I reinstalled it on the C drive, which is an SSD, the app started faster. I also moved Node.js to the C drive to prevent other issues.

Bitcoind reindex taking too long. How do I troubleshoot?

I'm trying to get a fully indexed transaction history in bitcoin on my local machine in order to query specific "foreign" transactions. As instructed, I've set txindex=1 in /home/me/.bitcoin/bitcoin.conf, which now reads:
rpcpassword=mypass
txindex=1
I run "bitcoind -reindex" in the terminal and it processes and processes.... and processes. I can see that it's using some system resources through "ps aux | grep bit" but the process never seems to die. I let it run for over a week and it never seemed to finish.
I've seen other people report that reindexing with txindex on only takes a matter of hours, so I'm at a loss to figure out what is going on. I thought maybe bitcoind -reindex was simply never going to exit since, after all, it's a daemon that's supposed to run all the time. But when I stopped it and restarted it (without the -reindex flag), I still get errors if I run "getrawtransaction XXXX" on old transactions.
I'm running ubuntu linux. Is there a way I can monitor the reindex process to see how long it's going to take? Am I doing something wrong that it should take so much time to reindex? Am I doing something wrong in general?
Appreciate any help.
You can check the status with this command:
bitcoin-cli getblockchaininfo
bitcoin#alfa:~/.bitcoin/blocks$ bitcoin-cli getblockchaininfo
{
  "chain" : "main",
  "blocks" : 156942,
  "headers" : 156942,
  "bestblockhash" : "00000000000005ae04a5657be198c038a87bee8b8cdc51ff079536493c887ba9",
  "difficulty" : 1090715.68005127,
  "verificationprogress" : 0.00897010,
  "chainwork" : "000000000000000000000000000000000000000000000009fd73b127af545deb",
  "pruned" : false,
  "softforks" : [
    {
    [...]
More info about bitcoin-cli can be found at: https://bitcoin.org/en/developer-reference#remote-procedure-calls-rpcs
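If you want to keep an eye on progress over time, one rough approach (assuming the default ~/.bitcoin data directory) is to poll that command and follow the debug log:
# Rough sketch: poll the chain state every minute and follow the log.
watch -n 60 'bitcoin-cli getblockchaininfo | grep -E "(blocks|headers|verificationprogress)"'
tail -f ~/.bitcoin/debug.log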

Give reads priority over writes in Elasticsearch

I have an EC2 server running Elasticsearch 0.9 with an nginx server in front of it for read/write access. My index has about 750k small-to-medium documents. I have a fairly continuous stream of minimal writes (mainly updates) to the content. The speed and consistency I get from search are fine with me, but I have some sporadic timeout issues with multi-get (/_mget).
On some pages in my app, our server will request a multi-get of anywhere from a dozen to a few thousand documents (this usually takes less than 1-2 seconds). The requests that fail do so with a 30,000 millisecond timeout from the nginx server. I am assuming this happens because the index was temporarily locked for writing/optimizing purposes. Does anyone have any ideas on what I can do here?
A temporary solution would be to lower the timeout and return a user-friendly message saying the documents couldn't be retrieved (however, users would still have to wait ~10 seconds to see an error message).
One of my other thoughts was to give reads priority over writes: any time someone is trying to read a part of the index, don't allow any writes/locks to that section. I don't think this would be scalable, and it may not even be possible.
Finally, I was thinking I could have a read-only alias and a write-only alias. I can figure out how to set this up through the documentation, but I am not sure if it will actually work like I expect it to (and I'm not sure how I can reliably test it in a local environment). If I set up aliases like this, would the read-only alias still have moments where the index was locked due to information being written through the write-only alias?
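For concreteness, this is roughly what I mean by the aliases; the index and alias names here are just placeholders:
# Placeholder index/alias names; rough sketch of the aliases I have in mind.
curl -XPOST 'http://127.0.0.1:9200/_aliases' -d '{
  "actions": [
    { "add": { "index": "myindex", "alias": "myindex_read" } },
    { "add": { "index": "myindex", "alias": "myindex_write" } }
  ]
}'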
I'm sure someone else has come across this before; what is the typical solution to make sure a user can always read data from the index, with higher priority than writes? I would consider increasing our server capacity if required. Currently we have two m2.xlarge EC2 instances: one holds the primary and the other the replica, each with 4 shards.
An example dump of cURL info from a failed request (with an error of Operation timed out after 30000 milliseconds with 0 bytes received):
{
  "url":"127.0.0.1:9200\/_mget",
  "content_type":null,
  "http_code":100,
  "header_size":25,
  "request_size":221,
  "filetime":-1,
  "ssl_verify_result":0,
  "redirect_count":0,
  "total_time":30.391506,
  "namelookup_time":7.5e-5,
  "connect_time":0.0593,
  "pretransfer_time":0.059303,
  "size_upload":167002,
  "size_download":0,
  "speed_download":0,
  "speed_upload":5495,
  "download_content_length":-1,
  "upload_content_length":167002,
  "starttransfer_time":0.119166,
  "redirect_time":0,
  "certinfo":[],
  "primary_ip":"127.0.0.1",
  "redirect_url":""
}
After more monitoring using the Paramedic plugin, I noticed that I would get timeouts when my CPU would hit ~80-98% (no obvious spikes in indexing/searching traffic). I finally stumbled across a helpful thread on the Elasticsearch forum. It seems this happens when the index is doing a refresh and large merges are occurring.
Merges can be throttled at the cluster or index level, and I've lowered indices.store.throttle.max_bytes_per_sec from the default 20mb to 5mb. This can be done at runtime with the cluster update settings API.
PUT /_cluster/settings HTTP/1.1
Host: 127.0.0.1:9200
{
  "persistent" : {
    "indices.store.throttle.max_bytes_per_sec" : "5mb"
  }
}
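The same update can be sent from the shell with curl, assuming the node is listening on 127.0.0.1:9200 as above:
curl -XPUT 'http://127.0.0.1:9200/_cluster/settings' -d '{
  "persistent" : {
    "indices.store.throttle.max_bytes_per_sec" : "5mb"
  }
}'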
So far Paramedic is showing a decrease in CPU usage, from an average of ~5-25% down to an average of ~1-5%. Hopefully this will help me avoid the 90%+ spikes that were locking up my queries before; I'll report back by accepting this answer if I don't have any more problems.
As a side note, I guess I could have opted for more balanced EC2 instances (rather than memory-optimized). I think I'm happy with my current choice, but my next purchase will also take more CPU into account.

Oracle query - ORA-01652: unable to extend temp segment, but only in some versions of SQL*Plus

This one has me rather confused. I've written a query which runs fine from my development client but fails on the production client with the error "ORA-01652: unable to extend temp segment by....". In both instances, the database and user are the same. On my development machine (MS Windows) I've got SQL*Plus (Release 9.0.1.4.0) and Toad 9.0 (both using version 9.0.4.0.1 of the oci.dll). Both run the code without errors.
However, when I run the same file against the same database, using the same username/password, from a different machine with client version 10.2.0.4.0 (from the 10.2.0.4-1 Oracle Instant Client), I get the error.
It does occur reproducibly.
Unfortunately I've only got limited access to the dictionary views on the database which is set up as read-only (can't even get an explain plan!).
I've tried working around the problem by tuning the query (I suspect that there is a large interim result set which is subsequently trimmed down) but have not managed to change the behaviour at either client.
It may be possible to deploy a different version of the client on the machine causing the problems - but currently that looks like downgrading to a previous version.
Any ideas?
TIA
Update
Based on Gary's answer below, I had a look at the glogin.sql scripts. The only difference was that 'SET SQLPLUSCOMPATIBILITY 8.1.7' was present on the working client but absent on the failing client; adding it did not resolve the problem.
I also tried
alter session set workarea_size_policy=manual;
alter session set hash_area_size=1048576000;
and
alter session set sort_area_size=1048576000;
to no avail :(
Update 2
I managed to find the same behaviour, this time talking to an Oracle 8i backend. In this case the database was RW. That allowed me to confirm that the different clients were, as I suspected, generating different plans. But why????
Looking at the output of 'SHOW PARAMETERS' both clients reported exactly the same settings!
Years ago I worked on a DR database that was fully READONLY, and even the TEMP tablespace wasn't writable. Any query that tried to spill to temp would fail (even if the temp space to be used was pretty trivial).
If this is the same situation, I wouldn't be surprised if there was a login.sql (or glogin.sql, or a logon trigger) that does an ALTER SESSION to set a larger PGA memory value for the session, and/or changes the optimizer goal to FIRST_ROWS.
If you can, compare the results of the following from both clients:
select * from v$parameter
where ismodified != 'FALSE';
Also from each client for the problem SQL, try EXPLAIN PLAN FOR SELECT...
and SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
See if it is coming up with different query plans.
Not really an answer - but a bit more information....
Our local DBAs were able to confirm that the 16 GB (!) TEMP tablespace was indeed being used and had filled up, but only from the Linux clients (I was able to recreate the error by making an oci8 call from PHP). In the case of the SQL*Plus client, I was using exactly the same file to run the query on both clients (copied via scp without text conversion, so line endings were CRLF - i.e. byte-for-byte the same as what was running on the Windows client).
So the only rational explanation was that the two client stacks were resulting in different execution plans!
Running the query from both clients approximately simultaneously, on a DBMS with very little load, gave the same result - and the two clients also generated different sqlids for the query.
(and also Oracle was ignoring my hints - I hate when it does that).
There is no way Oracle should be doing this - even if it were doing some internal munging of the query before presenting it to the DBMS (which would give rise to the different sqlids) the client stack used should be totally transparent regarding the choice of an execution plan - this should only ever change based on the content of the query and the state of the DBMS.
The problem was complicated by not being able to see any explain plans, but for the query to use up so much temporary tablespace, it had to be doing a very ugly join (at least partially Cartesian) before filtering the result set. Adding hints to override this had no effect. So I resolved the problem by splitting the query into two cursors and doing a nested lookup using PL/SQL. A very ugly solution, but it solved my immediate problem. Fortunately I only need to generate a text file.
For the benefit of anyone finding themselves in a similar pickle:
DECLARE
   -- Outer cursor drives the loop; inner cursor does the per-key aggregate lookup.
   CURSOR query_outer IS
      SELECT some_primary_key, some_other_stuff
      FROM atable
      WHERE ....;
   CURSOR query_details (p_some_pk atable.some_primary_key%TYPE) IS
      SELECT COUNT(*), SUM(avalue)
      FROM btable
      WHERE fk = p_some_pk
      AND ....;
BEGIN
   FOR m IN query_outer
   LOOP
      FOR n IN query_details(m.some_primary_key)
      LOOP
         dbms_output.put_line(....);
      END LOOP;
   END LOOP;
END;
The more I use Oracle, the more I hate it!
A bit more information: I've run into the same problem connecting to a different database, this time an Oracle 8i, and here I can get EXPLAIN plans.
Although in this case the query completed on both clients, it took 50% longer running from Linux SQL*Plus 10.2.0.4.0 than from WinXP SQL*Plus 8.0.6.0.
As I suspected, the plans are different, but both clients are using the default settings and the same optimizer mode. The optimizer seems to think the plan generated from the Linux client has a lower cost than the one from the XP client (although it takes longer to run). How does the optimizer prune the search path for potential plans?
