I was trying to trace the error in my Rust API code. When I ran it, it showed the following in the terminal:
Server running on localhost:3000
auth
auth err1
...
Notice that auth err1 was printed from inside .ok_or() in my code below, yet StatusCode::BAD_REQUEST was not triggered: I got 200 OK back. Why? What happened?
pub async fn auth<T>(mut request: Request<T>, next: Next<T>) -> Result<Response, StatusCode> {
    println!("auth");
    let token = request
        .headers()
        .typed_get::<Authorization<Bearer>>()
        .ok_or({
            println!("auth err1");
            StatusCode::BAD_REQUEST
        })?
        .token()
        .to_owned();
    //other code to connect to DB and retrieve user data...
}
Since you put println!("auth err1") in a block passed to ok_or, that block is evaluated immediately, whether typed_get returns Some or None: ok_or takes its error value as an eagerly evaluated argument. In your run, typed_get actually returned Some, so the ? never produced an error; only the eagerly executed println! made it look like the error path ran, which is why you still got 200 OK.
You need to use ok_or_else with a closure instead, so the error value (and the println!) is only produced in the None case:
.ok_or_else(|| {
    println!("auth err1");
    StatusCode::BAD_REQUEST
})?
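A stripped-down, std-only sketch of the difference in evaluation order (no axum types here; the eager function is just a stand-in for the println! block):

```rust
// Stand-in for the side-effecting error block: it prints whenever it is
// evaluated, so we can see *when* each combinator runs it.
fn eager(tag: &str) -> &'static str {
    println!("evaluated: {}", tag);
    "error"
}

fn main() {
    let token: Option<i32> = Some(1); // typed_get returned Some

    // ok_or: the argument is computed before ok_or is even called,
    // so "evaluated: ok_or" prints despite the value being Some.
    let a = token.ok_or(eager("ok_or"));
    assert_eq!(a, Ok(1));

    // ok_or_else: the closure is only invoked in the None case,
    // so nothing prints here.
    let b = token.ok_or_else(|| eager("ok_or_else"));
    assert_eq!(b, Ok(1));

    // For a None, both produce the error value.
    let missing: Option<i32> = None;
    assert_eq!(missing.ok_or_else(|| eager("missing")), Err("error"));
}
```

Running it prints evaluated: ok_or and evaluated: missing, but never evaluated: ok_or_else for the Some case; that is exactly why auth err1 appeared while the request still succeeded.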
I've noticed that Metaplex has APIs for platforms like JS, iOS, and Android, and all of them are documented well, except Rust :)
For example, all of the APIs above have something like getNftByMint that outputs my NFT data with all its metadata.
But after checking the rust/solana/anchor crates and docs I couldn't find any way to get the metadata from a metadata account directly in a Rust program, for instance to check some params of some NFTs and act on them.
The only way I found is below, but even when I tried to mitigate this error by adding a match and so on, I still got the same error message; without the from_account_info call the error disappears.
In my Rust program I have:
pub fn get_metadata_by_pubkey(ctx: Context<GetMetadataByPubkey>) -> Result<()> {
    let account = ctx.accounts.metadata_pubkey.to_account_info();
    let metadata: Metadata = Metadata::from_account_info(&account)?;
...
And in ts file:
it("Is get_metadata_by_pubkey", async () => {
  const pk: PublicKey = new PublicKey("<public key of my nft's metadata account>");
  await program.methods.getMetadataByPubkey().accounts({
    metadataPubkey: pk
  }).rpc();
});
And I get this error after running anchor test:
Error: failed to send transaction: Transaction simulation failed:
Error processing Instruction 0: Program failed to complete
from_account_info is the correct way of doing it. Here is an example of a program that uses it: https://github.com/Bonfida/dex-v4/blob/main/program/src/processor/create_market.rs#L154
However, you need to make sure the metadata account is initialized before trying to deserialize it, which can be done with a check like accounts.token_metadata.data_len() != 0
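The shape of that guard, sketched with plain std types (try_parse and the error string are hypothetical stand-ins; real code would check data_len() on the AccountInfo and then run Metadata::from_account_info):

```rust
// Hypothetical stand-in for deserializing a metadata account:
// refuse to parse when the account's data buffer is still empty,
// i.e. the account was never initialized.
fn try_parse(data: &[u8]) -> Result<u8, &'static str> {
    if data.is_empty() {
        return Err("metadata account not initialized");
    }
    // Stand-in for the real deserialization step.
    Ok(data[0])
}

fn main() {
    // Uninitialized account: empty data buffer, so we bail out early
    // instead of letting deserialization abort the whole instruction.
    assert!(try_parse(&[]).is_err());

    // Initialized account: parsing proceeds.
    assert_eq!(try_parse(&[7, 1, 2]), Ok(7));
}
```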
A minimal HTTP handler like
pub async fn new(mut payload: web::Payload) -> Result<impl Responder> {
    return Ok("ok");
}
will raise errors in the log:
[2022-06-03T01:39:58Z DEBUG actix_http::h1::dispatcher] cannot read request payload
[2022-06-03T01:39:58Z DEBUG actix_http::h1::dispatcher] handler dropped payload early; attempt to clean connection
[2022-06-03T01:39:58Z ERROR actix_http::h1::dispatcher] handler did not read whole payload and dispatcher could not drain read buf; return 500 and close connection
[2022-06-03T01:39:58Z ERROR actix_http::h1::dispatcher] stream error: Handler dropped payload before reading EOF
It seems this is caused by the handler not consuming the payload.
Is there any way to fix this problem?
What I really want to do is protect a handler like this:
pub async fn new(user: User, mut payload: web::Payload) -> Result<impl Responder> {
    /*
    Do something with payload.
    */
}
where User implements the FromRequest trait; its from_request function returns a User or an Unauthorized error.
So if an unauthorized user calls the handler, it returns ErrorUnauthorized early.
But this causes the
stream error: Handler dropped payload before reading EOF.
This sounds similar to:
https://github.com/actix/actix-web/issues/2695
You likely need to drain the payload before returning your response/error. You can do so as:
payload.for_each(|_| ready(())).await;
or
while let Ok(Some(_)) = payload.try_next().await {}
I ran into a similar issue with processing multipart file uploads. Draining the payload as outlined above in option one worked in most cases. However, in some instances, such as when the user hits the 'stop' or 'refresh' button in their browser and the interruption occurs while looping on the stream, reading chunks of the payload would error with 'Incomplete', and attempting to turn the stream into a future and run for_each() on it would hang indefinitely. Option two worked consistently regardless.
Assuming your issue is related, it should hopefully be resolved in v4.2, although you may still have to do some housekeeping with the payload when processing it like you are.
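To illustrate what draining buys you, here is a plain-std analogy (a Cursor standing in for the request body; this is not the actix-web API): a partially read stream must be consumed to EOF before the underlying connection can be reused.

```rust
use std::io::{copy, sink, Cursor, Read};

fn main() {
    // Stand-in for a request body the handler abandoned part-way through.
    let mut body = Cursor::new(b"leftover request body".to_vec());

    // The handler read only the first 4 bytes before returning early.
    let mut head = [0u8; 4];
    body.read_exact(&mut head).unwrap();

    // Draining: consume everything that remains so the stream reaches EOF,
    // mirroring the for_each / try_next loops above.
    let drained = copy(&mut body, &mut sink()).unwrap();
    assert_eq!(drained, 17); // the 21-byte body minus the 4 bytes already read
}
```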
As for the authorization comment, you can do this in a number of ways. One option is in middleware:
Rust Actix - early return from middleware
I have a Rust program which exits silently without any trace of the reason in the logs. This happens after several successful calls to the same method. The last log I see is one after which an FFI call is made; I never see a log after the FFI call returns.
use cpython;
use cpython::ObjectProtocol;
use cpython::PyResult;
fn is_complete(&self, check: bool) -> Result<bool, PyModuleError> {
    let gil = cpython::Python::acquire_gil();
    let py = gil.python();
    debug!("Calling complete"); // This is the last log
    let res = self
        .py_complete
        .call_method(py, "complete", (check,), None)
        .expect("No method complete on python module")
        .extract::<bool>(py)
        .unwrap();
    if res {
        debug!("Returning true"); // This does not appear
        Ok(true)
    } else {
        debug!("Returning false"); // This does not appear
        Ok(false)
    }
}
The Python module does return a value; I had debug logs there as well to confirm it.
I have tried running with RUST_BACKTRACE=1, but in vain.
I'm writing a chat server over TCP as a learning project. I've been tinkering with the ws crate today, but I've come across an issue. This is the code I wrote, modifying their server example.
extern crate ws;
extern crate env_logger;

use ws::listen;

fn main() {
    // Setup logging
    env_logger::init().unwrap();

    // Listen on an address and call the closure for each connection
    if let Err(error) = listen("127.0.0.1:3012", |out| {
        let mut message: String;
        // The handler needs to take ownership of out, so we use move
        move |message| {
            message = message.trim();
            // Handle messages received on this connection
            println!("Server got message '{}'. ", message);
            // Use the out channel to send messages back
            out.send(message)
        }
    }) {
        // Inform the user of failure
        println!("Failed to create WebSocket due to {:?}", error);
    }
}
When I try compiling it I get an error:
error: the type of this value must be known in this context
--> src/main.rs:15:23
|
15 | message = message.trim();
| ^^^^^^^^^^^^^^
Why is this happening? How may I fix this?
move |message| shadows the message variable you've declared outside the closure. So within the closure, message should be a ws::Message... except you've done this:
message = message.trim();
The compiler goes "oh no! trim()? That doesn't exist for ws::Message" and so it no longer knows what type to infer for the closure's parameter.
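A std-only illustration of the shadowing at play (String and &str standing in for ws::Message): the closure parameter hides the outer binding, and its type is inferred purely from how it is used inside the closure.

```rust
fn main() {
    // Outer binding, like `let mut message: String;` in the question.
    let message = String::from("outer");

    // The parameter shadows the outer `message` entirely; the trim()
    // call pins its inferred type to &str here.
    let handler = |message: &str| message.trim().to_owned();

    assert_eq!(handler("  hello  "), "hello");
    // The outer binding is untouched by calls to the closure.
    assert_eq!(message, "outer");
}
```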
Option 1
The first fix involves delegating the trim() call to the client who sends the message.
The fix is to not make any assumptions about what the message is inside this closure. If you keep this:
move |message|
but remove the trim() call, the compiler happily infers its type as ws::Message and the code builds:
if let Err(error) = listen("127.0.0.1:3012", |out| {
    // The handler needs to take ownership of out, so we use move
    move |message| {
        // --- REMOVED trim() call ---
        // Handle messages received on this connection
        println!("Server got message '{}'. ", message);
        // Use the out channel to send messages back
        out.send(message)
    }
}) {
    // Inform the user of failure
    println!("Failed to create WebSocket due to {:?}", error);
}
This gives you the option of delegating the trim() call to the client instead.
Option 2
Option 2 involves inspecting the type of message you've received, and making sure you trim it only if it is text:
// The handler needs to take ownership of out, so we use move
move |mut message: ws::Message| {
    // Only do it if the Message is text
    if message.is_text() {
        message = ws::Message::Text(message.as_text().unwrap().trim().into());
    }
    // Handle messages received on this connection
    println!("Server got message '{}'. ", message);
    // Use the out channel to send messages back
    out.send(message)
}
This is perhaps a little more verbose than it needs to be, but hopefully it shows you what the actual issue is with your original snippet of code.
I have a simple Vert.x script in Groovy that should send a request to Redis to get a value back:
def eb = vertx.eventBus
def config = [:]
def address = 'vertx.mod-redis-io'
config.address = address
config.host = 'localhost'
config.port = 6379

container.deployModule("io.vertx~mod-redis~1.1.4", config)

eb.send(address, [command: 'get', args: ['mykey']]) { reply ->
    if (reply.body.status.equals('ok')) {
        println 'ok'
        // do something with reply.body.value
    } else {
        println("Error ${reply.body.message}")
    }
}
The value for 'mykey' is definitely stored in my Redis (localhost:6379):
127.0.0.1:6379> get mykey
"Hello"
The script starts correctly, but no value comes back in the reply.
Am I missing something?
The issue is that you call deployModule and eb.send sequentially, even though both calls are asynchronous. When you call deployModule, the module deployment is triggered, but it is not guaranteed to have completed before eb.send runs. So you are sending the right command, but nothing processes it because the module is not there yet.
Try moving your test command into the AsyncHandler of deployModule:
container.deployModule("io.vertx~mod-redis~1.1.4", config) { asyncResult ->
    if (asyncResult.succeeded) {
        eb.send(address, [command: 'get', args: ['mykey']]) { reply ->
            if (reply.body.status.equals('ok')) {
                println 'ok'
                // do something with reply.body.value
            } else {
                println("Error ${reply.body.message}")
            }
        }
    } else {
        println 'Deployment broken!'
    }
}
The example from https://github.com/vert-x/mod-redis is maybe not the best, because it is just a snippet to point you in the right direction.
This works because it only sends the request to the bus once the module is deployed, so something is actually listening for it. I tested it locally on a Vagrant setup with Redis.
Overall, development in Vert.x is almost always asynchronous; that is its key concept. It takes some time to get acquainted with, but it has its benefits :)
Hope this helps.
Best