Solana error on Metaplex when trying to make an auction - Rust

So I'm working on a Metaplex project, and when I try to launch an auction I'm getting a few errors. When I look for answers, I'm not finding anything that makes much sense to me.
I also searched the Rust documentation and didn't have any luck.
One error is:
#1 Metaplex NFT Auction instruction
Program log: + Processing CreateAuction
Program log: Transfer 141190560 lamports to the new account
Invoking System Program
Program returned success
Program log: Allocate space for the account
Invoking System Program
Program returned success
Account data size realloc limited to 10240 in inner instructions
Program Metaplex NFT Auction consumed 9991 of 200000 compute units
Program returned error: Failed to reallocate account data
Another is:
#1 System Program instruction
Program returned success
#2 Token Program instruction
Program log: Instruction: InitializeAccount
Program Token Program consumed 3295 of 200000 compute units
Program returned success
#3 Metaplex Program instruction
Program log: Instruction: Init Auction Manager V2
Program log: This safety deposit box is not listed as a prize in this auction manager!
Program Metaplex Program consumed 5430 of 200000 compute units
Program returned error: custom program error: 0x1b
You can see my Solscan account info below, too.
I appreciate your help, and thank you in advance.
https://solscan.io/account/6US2oFFpyidDHTGh1rbBm1AvbHU8ex2aRk1rNsZNqNRj

Related

How do you determine which process is using up Linux aio context capacity?

In Linux, you can read the value of /proc/sys/fs/aio-nr, which returns the total number of events allocated across all active AIO contexts in the system. The maximum value is controlled by /proc/sys/fs/aio-max-nr.
Is there a way to tell which process is responsible for allocating these AIO contexts?
There isn't a simple way. At least, not that I've ever found! However, you can see them being consumed and freed using systemtap.
https://blog.pythian.com/troubleshooting-ora-27090-async-io-errors/
Attempting to execute the complete script in that article produced errors on my CentOS 7 system. But if you just take the first part of it, the part that logs allocations, it may give you enough insight:
stap -ve '
global allocated, allocatedctx
probe syscall.io_setup {
    allocatedctx[pid()] += maxevents; allocated[pid()]++;
    printf("%d AIO events requested by PID %d (%s)\n",
           maxevents, pid(), cmdline_str());
}
'
You'll need to coordinate things such that systemtap is running before your workload kicks in.
Install systemtap, then execute the above command. (Note, I've altered this slightly from the linked article to remove the unused freed symbol.) After a few seconds, it'll be running. Then, start your workload.
Pass 1: parsed user script and 469 library scripts using 227564virt/43820res/6460shr/37524data kb, in 260usr/10sys/263real ms.
Pass 2: analyzed script: 5 probes, 14 functions, 101 embeds, 4 globals using 232632virt/51468res/11140shr/40492data kb, in 80usr/150sys/240real ms.
Missing separate debuginfos, use: debuginfo-install kernel-lt-4.4.70-1.el7.elrepo.x86_64
Pass 3: using cached /root/.systemtap/cache/55/stap_5528efa47c2ab60ad2da410ce58a86fc_66261.c
Pass 4: using cached /root/.systemtap/cache/55/stap_5528efa47c2ab60ad2da410ce58a86fc_66261.ko
Pass 5: starting run.
Then, once your workload starts, you'll see the context requests logged:
128 AIO events requested by PID 28716 (/Users/blah/awesomeprog)
128 AIO events requested by PID 28716 (/Users/blah/awesomeprog)
So, not as simple as lsof, but I think it's all we have!
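As a side note, the two /proc counters mentioned in the question can also be watched directly while the systemtap probe runs. Here is a minimal Rust sketch (assuming the standard procfs paths from the question) that just polls them once a second:
// Poll the system-wide AIO counters mentioned in the question.
use std::{fs, thread, time::Duration};

fn read_counter(path: &str) -> u64 {
    // Read a single numeric value from a procfs file, defaulting to 0 on error.
    fs::read_to_string(path)
        .ok()
        .and_then(|s| s.trim().parse().ok())
        .unwrap_or(0)
}

fn main() {
    loop {
        let used = read_counter("/proc/sys/fs/aio-nr");
        let max = read_counter("/proc/sys/fs/aio-max-nr");
        println!("aio events allocated: {} / {}", used, max);
        thread::sleep(Duration::from_secs(1));
    }
}
This only shows the system-wide totals; attributing them to a process still needs something like the systemtap probe above.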

p5.serialserver (i.e. p5serial and p5.serialcontrol) leaking memory

I'm trying to use p5.serial to display my Arduino-like device's USB output on a web page. It generates about ten strings a second continually.
the problem:
When I run p5serial (in a shell window) or p5.serialcontrol (an Electron/GUI app), the node server starts out at ~ 12 MB, but as it runs it bloats quickly to > 1 GB and the output becomes sluggish. The server eventually dies with
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
...
Abort trap: 6
the question:
Is this a known issue (aside from the bug report I just filed)? Or perhaps an error in the way I'm using it?
some details:
When I connect the Arduino-like device via a serial USB terminal, things work just fine (except for the lack of lovely p5.js graphics).
I'm running OS X (10.12.6 / Sierra), node v6.3.0, p5.serialserver#0.0.24
I've posted a gist containing a minimal example (but understand that it assumes you have an Arduino-like device with USB).
This memory leak was fixed; see https://github.com/p5-serial/p5.serialcontrol/issues/12

What happens when you execute an instruction that your CPU does not support?

What happens if a CPU attempts to execute a binary that has been compiled with instructions your CPU doesn't support? I'm specifically wondering about some of the newer AVX instructions running on older processors.
I'm assuming this can be tested for, and a friendly message could in theory be displayed to the user. Presumably most low-level libraries will check this on your behalf. Assuming you didn't make this check, what would you expect to happen? What signal would your process receive?
A new instruction can be designed to be "legacy compatible" or not.
To the former class belong instructions like tzcnt or xacquire, whose encodings produce valid instructions on older architectures: tzcnt is encoded as rep bsf and xacquire is just repne.
The semantics are different, of course.
To the second class belong the majority of new instructions, AVX being one popular example.
When the CPU encounters an invalid or reserved encoding it generates the #UD (for UnDefined) exception - that's interrupt number 6.
The Linux kernel sets the IDT entry for #UD early in entry_64.S:
idtentry invalid_op do_invalid_op has_error_code=0
The entry points to do_invalid_op, which is generated with a macro in traps.c:
DO_ERROR(X86_TRAP_UD, SIGILL, "invalid opcode", invalid_op)
The macro DO_ERROR generates a function that calls do_error_trap in the same file.
do_error_trap uses fill_trap_info (also in the same file) to create a siginfo_t structure containing the Linux signal information:
case X86_TRAP_UD:
    sicode = ILL_ILLOPN;
    siaddr = uprobe_get_trap_addr(regs);
    break;
From there, the following calls happen:
do_trap in traps.c
force_sig_info in signal.c
specific_send_sig_info in signal.c
These ultimately culminate in calling the SIGILL signal handler of the offending process.
The following program is a very simple example that generates a #UD:
BITS 64
GLOBAL _start
SECTION .text
_start:
ud2
We can use strace to check the signal received when running that program:
--- SIGILL {si_signo=SIGILL, si_code=ILL_ILLOPN, si_addr=0x400080} ---
+++ killed by SIGILL +++
As expected.
As Cody Gray commented, libraries don't usually rely on SIGILL; instead, they use a CPU dispatcher or check for the presence of an instruction explicitly.
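As a rough illustration of that explicit-check approach (a sketch, not taken from the answer): in Rust on an x86_64 target you can gate an AVX code path behind a runtime feature check. sum_avx below is a made-up placeholder for whatever AVX-dependent routine you would actually dispatch to.
fn main() {
    let v = [1.0_f32, 2.0, 3.0, 4.0];
    let total = if is_x86_feature_detected!("avx") {
        // We just verified at runtime that the CPU supports AVX.
        unsafe { sum_avx(&v) }
    } else {
        // Portable fallback for CPUs without AVX; no SIGILL either way.
        v.iter().sum()
    };
    println!("sum = {}", total);
}

// Compiled with AVX enabled; must only be called after the runtime check above.
#[target_feature(enable = "avx")]
unsafe fn sum_avx(values: &[f32]) -> f32 {
    values.iter().sum()
}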

Kernel Oops page fault error codes for ARM

What information does the error code after an Oops give about the panic on ARM? For example:
Oops: 17 [#1] PREEMPT SMP
What does the 17 indicate in this case?
On x86 it represents:
bit 0 == 0: no page found 1: protection fault
bit 1 == 0: read access 1: write access
bit 2 == 0: kernel-mode access 1: user-mode access
bit 3 == 1: use of reserved bit detected
bit 4 == 1: fault was an instruction fetch
But I am not able to find any similar information for ARM.
Thanks
Shunty
What you printed above is a description of the page fault error code bits, not of Oops codes in general.
See the Linux oops-tracing documentation for more information on Linux crash analysis.
Below is how your Oops: 17 [#1] PREEMPT SMP line is printed, from arch/arm/kernel/traps.c:
#define S_PREEMPT " PREEMPT"
...
#define S_SMP " SMP"
...
printk(KERN_EMERG "Internal error: %s: %x [#%d]" S_PREEMPT S_SMP S_ISA "\n", str, err, ++die_counter);
Page faults don't necessarily crash the kernel, and not all kernel crashes are page faults. So there is a good chance Oops: 17 is not related to page faults at all. (As a bonus, my wild guess is that it is about scheduling; it just sounds familiar to me.)
It looks like you're asking about the ARM Fault Status Register (FSR) bits. I looked up the kernel code (arch/arm/mm/fault.c) and found that this is what is actually passed as a parameter to the Oops code:
static void
__do_kernel_fault(struct mm_struct *mm, unsigned long addr, unsigned int fsr,
                  struct pt_regs *regs)
{
    [...]
    pr_alert("Unable to handle kernel %s at virtual address %08lx\n",
             (addr < PAGE_SIZE) ? "NULL pointer dereference" :
             "paging request", addr);
    show_pte(mm, addr);
    die("Oops", regs, fsr);
    [...]
}
So, anyway, I traced this to the FSR register on the ARM (v4 and above?) MMU:
Source: http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0438d/BABFFDFD.html
...
[3:0] FS[3:0]
Fault Status bits. This field indicates the type of exception generated. Any encoding not listed is reserved:
b00001 - Alignment fault.
b00100 - Instruction cache maintenance fault.
b01100 - Synchronous external abort on translation table walk, 1st level.
b01110 - Synchronous external abort on translation table walk, 2nd level.
b11100 - Synchronous parity error on translation table walk, 1st level.
b11110 - Synchronous parity error on translation table walk, 2nd level.
b00101 - Translation fault, 1st level.
b00111 - Translation fault, 2nd level.
b00011 - Access flag fault, 1st level.
b00110 - Access flag fault, 2nd level.
b01001 - Domain fault, 1st level.
b01011 - Domain fault, 2nd level.
b01101 - Permission fault, 1st level.
b01111 - Permission fault, 2nd level.
b00010 - Debug event.
b01000 - Synchronous external abort, non-translation.
b11001 - Synchronous parity error on memory access.
b10110 - Asynchronous external abort.
b11000 - Asynchronous parity error on memory access.
...
Disclaimer: I don't know whether this info is still relevant; the doc states it's for the ARM Cortex A15 and the page is marked as Superseded.
You could also see this page:
ARM926EJ-S Fault address and fault status registers
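If you want to experiment with the table above, here is a rough Rust sketch that decodes the FS field from a raw FSR value. It assumes the short-descriptor FSR format, with FS[3:0] in bits [3:0] and FS[4] in bit [10], and only maps a few of the listed encodings:
// Rough decode of the FS field of an ARM (short-descriptor) FSR value,
// using a subset of the Cortex-A15 table quoted above.
fn fault_status(fsr: u32) -> &'static str {
    // FS[3:0] are bits [3:0]; FS[4] is bit [10] (assumption stated above).
    let fs = (fsr & 0xF) | (((fsr >> 10) & 0x1) << 4);
    match fs {
        0b00001 => "Alignment fault",
        0b00101 => "Translation fault, 1st level",
        0b00111 => "Translation fault, 2nd level",
        0b00011 => "Access flag fault, 1st level",
        0b00110 => "Access flag fault, 2nd level",
        0b01001 => "Domain fault, 1st level",
        0b01011 => "Domain fault, 2nd level",
        0b01101 => "Permission fault, 1st level",
        0b01111 => "Permission fault, 2nd level",
        _ => "other/reserved (see the full table)",
    }
}

fn main() {
    // The quoted printk formats the error code with %x, so "Oops: 17"
    // corresponds to a raw value of 0x17, assuming that value really is the FSR.
    println!("{}", fault_status(0x17));
}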

How much data can be fetched by submit_bio() at a time

Here is my LAN structure
I want to download a 258.6 MB .zip file from the Samba server, and start profiling the router's Linux stack just before the download begins.
When it finished, I stopped the profiling and found this in the profiling report:
samples % image name app name symbol name
...
16 0.0064 vmlinux smbd submit_bio
...
The sampling rate is 100000 and the event is CPU_CYCLES.
Because this is the first download of the file (that is to say, it is not in the page cache), submit_bio() should be pretty busy. Thus, I don't understand why submit_bio() shows up in only such a small portion of the samples. Does that mean that each time submit_bio() is called, we fetch about (258.6/16) MB of data?
Thanks
That's statistical sampling. It means that, of the x times the profiler sampled the system, 16 times it happened to find the CPU running in submit_bio(). It does not mean that submit_bio() was called 16 times.
