How to fix bgpdump when processing files - bgp

2022-11-13 03:03:35 [info] logging to syslog
bgpdump: bgpdump_lib.c:676: process_mrtd_table_dump_v2_ipv6_unicast: Assertion `e->peer_index <= entry->dump->table_dump_v2_peer_index_table->peer_count' failed.
Aborted (core dumped)
I checked that my disk space is enough, and I thought the file was too large, so I split it into several small files, but the resulting files were empty.
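That assertion fires when a RIB entry references a peer index that is not in the dump's peer index table, which commonly points to a truncated or corrupted download rather than a disk-space problem. A minimal sketch of how to rule that out, assuming the file came from a public MRT archive (the URL and file name below are placeholders, not taken from the question):

# Re-fetch the dump and sanity-check it before parsing (placeholder URL and file name)
wget -c http://archive.example.org/rib.20221113.0600.bz2
bzip2 -t rib.20221113.0600.bz2      # verify the bzip2 container is intact
md5sum rib.20221113.0600.bz2        # compare with the checksum published by the archive, if one exists
bgpdump -m rib.20221113.0600.bz2 > rib.txt

Also note that cutting an MRT dump into pieces with a generic splitter does not produce valid MRT files, because TABLE_DUMP_V2 RIB entries depend on the peer index table stored at the start of the dump.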

Related

How to reduce I/O and memory usage when installing node.js on shared hosting?

I've tried to install node.js via PuTTY on my shared hosting account with cPanel and CloudLinux, but at some point the I/O and physical memory usage reached their limits and the installation process was stopped. My I/O usage limit is 10 MB and my physical memory limit is 512 MB.
This happens when PuTTY displays the lines:
-c -o /home/vikna211/node/out/Release/obj.target/v8_base/deps/v8/src/api.o
../deps/v8/src/api.cc
After that I see:
make[1]: [/home/vikna211/node/out/Release/obj.target/v8_base/deps/v8/src/api.o] Interrupt
make[1]: Deleting intermediate file `4095d8cbfa2eff613349def330937d91ee5aa9c9.intermediate'
make: [node] Interrupt
Is it possible to reduce the usage of both resources when installing node.js so that the process finishes successfully?
Or maybe it's not a memory problem at all: maybe the process tries to delete that intermediate file, can't do it, and that causes the crash.
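If it is the resource limits, one approach that sometimes helps on constrained shared hosting (a sketch, not a guaranteed fix) is to build with a single job so the peak compiler memory stays lower, or to skip compiling entirely and unpack an official prebuilt binary; the version number below is only an example:

# Option 1: limit the source build to one parallel job
./configure --prefix=$HOME/node
make -j1
make install

# Option 2: avoid compiling and use a prebuilt tarball instead (example version)
wget https://nodejs.org/dist/v0.12.0/node-v0.12.0-linux-x64.tar.gz
mkdir -p $HOME/node
tar -xzf node-v0.12.0-linux-x64.tar.gz -C $HOME/node --strip-components=1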

Yocto: bitbake exit code confusion

I get an error while building an image using Yocto (dizzy):
ERROR: Creation of tar /mnt/workspace/build/tmp/deploy/tar/xev-dbg-1.2.1-r0.tar.gz failed.
and bitbake command fails with the following report:
No currently running tasks (6291 of 6292)
NOTE: Tasks Summary: Attempted 6292 tasks of which 18 didn't need to be rerun and all succeeded.
Summary: There were 13 WARNING messages shown.
Summary: There were 3 ERROR messages shown, returning a non-zero exit code.
If I check the file xev-dbg-1.2.1-r0.tar.gz, I get:
$ file /mnt/workspace/build/tmp/deploy/tar/xev-dbg-1.2.1-r0.tar.gz
/mnt/workspace/build/tmp/deploy/tar/xev-dbg-1.2.1-r0.tar.gz: gzip compressed data, from Unix, last modified: Mon Mar 27 20:19:55 201
and it is the same case for the remaining two errors.
I am confused:
If there was an error, why is bitbake reporting that all tasks succeeded?
If the file was successfully created, why does bitbake exit with a non-zero value?
Bitbake did not return a 0 exit code, which means there were errors in the bitbake process.
There were 3 errors while it was trying to create the tar files, as shown.
The compressed file is there, but it is not complete. It is just like interrupting a download: the partially downloaded file still exists. That is why we usually use md5sum or some other hash to check the completeness of a file.
A better way to read the summary: bitbake attempted to run 6292 tasks, 18 of which did not need to be rerun. Bitbake ran the remaining 6274 (6292 - 18) and succeeded in running them. That does not mean all of them compiled successfully: while running them, 13 warnings and 3 errors appeared. Because of the 3 errors, bitbake returns a non-zero exit code.
No currently running tasks (6291 of 6292)
NOTE: Tasks Summary: Attempted 6292 tasks of which 18 didn't need to be rerun and all succeeded.
Summary: There were 13 WARNING messages shown.
Summary: There were 3 ERROR messages shown, returning a non-zero exit code.
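As a quick completeness check along the lines the answer describes, you can ask gzip or tar themselves to verify the archive instead of relying on the file command (paths taken from the question):

# Test the gzip stream end to end; a truncated archive fails here
gzip -t /mnt/workspace/build/tmp/deploy/tar/xev-dbg-1.2.1-r0.tar.gz

# Or list the tar contents; errors indicate an incomplete archive
tar -tzf /mnt/workspace/build/tmp/deploy/tar/xev-dbg-1.2.1-r0.tar.gz > /dev/null && echo "archive OK"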

How can the Cassandra commitlog be corrupted?

This is the second time my commitlog has been corrupted, and the server refuses to start. What worries me is that I get these errors even though no updates were made to the database.
My config says that commitlogs are synced every 10 seconds, so how can a file be corrupt unless a crash occurs within those 10 seconds?
Is this a Cassandra bug? Or by design, i.e. bad design?
I am using 3.4 on Windows 10, Datastax installer.
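For reference, the 10-second sync mentioned above corresponds to the periodic commit log sync settings in cassandra.yaml; the values below are what the question implies, not necessarily this installation's exact configuration:

# cassandra.yaml (excerpt)
commitlog_sync: periodic
commitlog_sync_period_in_ms: 10000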
In the stdout log, the last part is
INFO 06:17:39 Replaying C:\Program Files\DataStax-DDC\data\commitlog\CommitLog-6-1471353812251.log, C:\Program Files\DataStax-DDC\data\commitlog\CommitLog-6-1471353812252.log, C:\Program Files\DataStax-DDC\data\commitlog\CommitLog-6-1471411951134.log, C:\Program Files\DataStax-DDC\data\commitlog\CommitLog-6-1471454506802.log, C:\Program Files\DataStax-DDC\data\commitlog\CommitLog-6-1471532812678.log
ERROR 06:17:39 Exiting due to error while processing commit log during initialization.
org.apache.cassandra.db.commitlog.CommitLogReplayer$CommitLogReplayException: Could not read commit log descriptor in file C:\Program Files\DataStax-DDC\data\commitlog\CommitLog-6-1471353812252.log
at org.apache.cassandra.db.commitlog.CommitLogReplayer.handleReplayError(CommitLogReplayer.java:611) [apache-cassandra-3.4.0.jar:3.4.0]
at org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:373) [apache-cassandra-3.4.0.jar:3.4.0]
at org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:236) [apache-cassandra-3.4.0.jar:3.4.0]
at org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:192) [apache-cassandra-3.4.0.jar:3.4.0]
at org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:172) [apache-cassandra-3.4.0.jar:3.4.0]
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:283) [apache-cassandra-3.4.0.jar:3.4.0]
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:551) [apache-cassandra-3.4.0.jar:3.4.0]
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:680) [apache-cassandra-3.4.0.jar:3.4.0]
I have seen similar errors. This happens when the Cassandra process crashes, possibly due to an OOM kill. Run dmesg and check whether it was killed due to OOM. In that case there is a possibility that the commit log it was writing to was corrupted or is a 0 KB file (check the size of the file named in the error), and Cassandra throws the above error when it is restarted and replays that file.
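A minimal way to check both suspicions from the answer, i.e. an OOM kill and a zero-length segment (the dmesg command applies to Linux; on the Windows setup from the question, check the Windows Event Log instead):

# On Linux: did the kernel OOM-kill the Cassandra JVM?
dmesg | grep -i -E "killed process|out of memory"

# Is the segment named in the error empty or suspiciously small? (Windows command prompt)
dir "C:\Program Files\DataStax-DDC\data\commitlog\CommitLog-6-1471353812252.log"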

I have just begun to use Android Studio and I can't seem to get Gradle to sync with my application. Here is what it shows:

7:46:20 PM Gradle sync started
7:46:35 PM Gradle sync failed: Unable to start the daemon process.
This problem might be caused by incorrect configuration of the daemon.
For example, an unrecognized jvm option is used.
Please refer to the user guide chapter on the daemon at https://docs.gradle.org/2.10/userguide/gradle_daemon.html
Please read the following process output to find out more:
-----------------------
Error occurred during initialization of VM
Could not reserve enough space for object heap
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
Consult IDE log for more details (Help | Show Log)
The JVM version is 1.7.0_79
and the Android Studio version is 2.1.1.
Error occurred during initialization of VM Could not reserve enough space for object heap Error: Could not create the Java Virtual Machine.
There's no space available in RAM. To fix it, go to /android-studio-dir/bin and edit studio.vmoptions and studio64.vmoptions to increase -Xmx and reserve more memory for Java. Note that the number of active processes may influence this.
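For illustration only, a studio64.vmoptions edited along those lines might look like this; the heap sizes are placeholders to adapt to the machine, not recommended values:

# /android-studio-dir/bin/studio64.vmoptions (example values only)
-Xms256m
-Xmx1024m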
Probably the /tmp location is full.
Found this somewhere:
Use the df command:
df
You should see output with a line like this:
tmpfs 102400 102312 88 100% /tmp
To change the size of the /tmp mount:
sudo mount -o remount,size=2G /tmp
Done! Now it should work.
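If the remount fixes it, the larger size can be made persistent across reboots with an /etc/fstab entry along these lines (a sketch; adjust the size to your system):

# /etc/fstab
tmpfs   /tmp   tmpfs   defaults,size=2G   0   0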

ejabberd 16.01 postinstall.sh fails on centos 7

I've just installed ejabberd 16.01 on my CentOS 7.1.1503 machine. I used the rpm installer downloaded from ProcessOne's web site:
sudo rpm -Uvh ejabberd-16.01-0.x86_64.rpm
The installation went well up until the end, but finally I got an error message (without any description). I took a look into /opt/ejabberd-16.01 and saw that everything was in its place, so I tried running /bin/postinstall.sh manually to see what might go wrong.
When I run the script I get the following output:
-=- ejabberd post installation script -=-
(c) 2005-2015 ProcessOne
* Checking ejabberd installation
usermod: no changes
* Starting ejabberd instance
Failed to create main carrier for ll_alloc
Aborted (core dumped)
Failed to create main carrier for ll_alloc
Aborted (core dumped)
Failed to create main carrier for ll_alloc
Aborted (core dumped)
Failed to create main carrier for ll_alloc
Aborted (core dumped)
Failed to create main carrier for ll_alloc
Aborted (core dumped)
Failed to create main carrier for ll_alloc
Aborted (core dumped)
Failed to create main carrier for ll_alloc
Aborted (core dumped)
Failed to create main carrier for ll_alloc
Aborted (core dumped)
I get a similar experience when I start the ejabberd server using:
ejabberdctl start
and then when I try to use ejabberdctl again (to stop it or to register a user) I get the same result.
From searching around I've found some connection to memory issues (erl with centos Failed to create main carrier for ll_alloc), but this doesn't seem to be the case, since I have 1 GB of RAM and the output of the top command shows 573 MB of free RAM. Shouldn't that be enough?
Any help would be very much appreciated, thank you.
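The "Failed to create main carrier for ll_alloc" message comes from the Erlang VM being unable to allocate its initial memory carrier, so it is worth confirming how much memory a process in that shell session is actually allowed to allocate. A few generic diagnostic commands (nothing ejabberd-specific, and only a starting point):

free -m                              # total vs. free memory, including swap
ulimit -v                            # per-process virtual memory limit; a low value here can make large allocations fail
cat /proc/sys/vm/overcommit_memory   # 2 (strict, no overcommit) can also make big reservations fail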
