I need to call the contract's method from my Indexer. Right now I use tokio::process and near-cli, which is written in NodeJS. This does not feel sound, and I would like to do it from Rust instead.
Recommended way
NEAR JSON-RPC Client RS is the recommended way to interact with NEAR Protocol from Rust code.
Example from the README
use near_jsonrpc_client::{methods, JsonRpcClient};
use near_jsonrpc_primitives::types::transactions::TransactionInfo;

let mainnet_client = JsonRpcClient::connect("https://archival-rpc.mainnet.near.org");

let tx_status_request = methods::tx::RpcTransactionStatusRequest {
    transaction_info: TransactionInfo::TransactionId {
        hash: "9FtHUFBQsZ2MG77K3x3MJ9wjX3UT8zE1TczCrhZEcG8U".parse()?,
        account_id: "miraclx.near".parse()?,
    },
};

// call a method on the server via the connected client
let tx_status = mainnet_client.call(tx_status_request).await?;

println!("{:?}", tx_status);
In the examples folder of the repo, you will find different use cases, and hopefully, you'll find yours there as well.
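For instance, since the original question is about calling a contract's method, a view call through the same client might look roughly like this. This is a sketch in the spirit of the query examples in that folder, not copied from them: the contract account, method name, and empty JSON args are placeholders, and the near-primitives and near-jsonrpc-primitives crates are assumed to be dependencies.

use near_jsonrpc_client::{methods, JsonRpcClient};
use near_jsonrpc_primitives::types::query::QueryResponseKind;
use near_primitives::types::{BlockReference, Finality, FunctionArgs};
use near_primitives::views::QueryRequest;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = JsonRpcClient::connect("https://rpc.mainnet.near.org");

    // Placeholder contract and method names; replace with your own.
    let request = methods::query::RpcQueryRequest {
        block_reference: BlockReference::Finality(Finality::Final),
        request: QueryRequest::CallFunction {
            account_id: "some-contract.near".parse()?,
            method_name: "some_view_method".to_string(),
            args: FunctionArgs::from(b"{}".to_vec()),
        },
    };

    let response = client.call(request).await?;

    // A view call comes back as a CallResult with raw bytes and logs.
    if let QueryResponseKind::CallResult(result) = response.kind {
        println!("{}", String::from_utf8(result.result)?);
    }

    Ok(())
}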
near-jsonrpc-client-rs is the best option.
Alternative way
NB! This approach uses undocumented APIs. It is not the recommended way, because using them assumes you will dig into the code and figure out how to use them by yourself.
If you're using the NEAR Indexer Framework you're literally running a nearcore node which includes:
JSON RPC server
ClientActor and ViewClient
Based on the kind of call you need to perform on your contract (a change method or a view method), you can use ClientActor or ViewClient respectively.
ViewClient example
The code below is for understanding the concept; it is not a working example.
let indexer = near_indexer::Indexer::new(indexer_config);
let view_client = indexer.client_actors().0;

let block_response = view_client
    .send(query) // `query` is the view-call message you construct for the ViewClient
    .await
    .context("Failed to deliver response")?
    .context("Invalid request")?;
You can find the usage in NEAR Indexer for Explorer starting from here
ClientActor example
ClientActor is used to send a transaction. I guess here is a good starting point to look for a ClientActor example.
async fn send_tx_async(
    &self,
    request_data: near_jsonrpc_primitives::types::transactions::RpcBroadcastTransactionRequest,
) -> CryptoHash {
    let tx = request_data.signed_transaction;
    let hash = tx.get_hash().clone();
    self.client_addr.do_send(NetworkClientMessages::Transaction {
        transaction: tx,
        is_forwarded: false,
        check_only: false, // if we set true here it will not actually send the transaction
    });
    hash
}
Related
I'm new to Rust and encountered an issue while building an API with warp. I'm trying to pass some requests to another thread with a channel (trying to avoid using Arc/Mutex). Still, I noticed that when I pass a std::sync::mpsc::Sender to a warp handler, I get this error:
"std::sync::mpsc::Sender cannot be shared between threads safely"
and
"the trait `Sync` is not implemented for `std::sync::mpsc::Sender`"
Can someone lead me in the right direction?
use std::sync::mpsc::Sender;

pub async fn init_server(run_tx: Sender<Packet>) {
    let store = Store::new();
    let store_filter = warp::any().map(move || store.clone());
    let run_tx_filter = warp::any().map(move || run_tx.clone());

    let update_item = warp::get()
        .and(warp::path("v1"))
        .and(warp::path("auth"))
        .and(warp::path::end())
        .and(post_json())
        .and(store_filter.clone())
        .and(run_tx_filter.clone()) // where I'm trying to send "Sender"
        .and_then(request_token);

    let routes = update_item;

    println!("HTTP server started on port 8080");
    warp::serve(routes).run(([127, 0, 0, 1], 3030)).await;
}

pub async fn request_token(
    req: TokenRequest,
    store: Store,
    run_tx: Sender<Packet>,
) -> Result<impl warp::Reply, warp::Rejection> {
    let (tmp_tx, tmp_rx) = std::sync::mpsc::channel();
    run_tx
        .send(Packet::IsPlayerLoggedIn(req.address, tmp_tx))
        .unwrap();
    let logged_in = tmp_rx.recv().unwrap();

    if logged_in {
        return Ok(warp::reply::with_status(
            "Already logged in",
            http::StatusCode::BAD_REQUEST,
        ));
    }

    Ok(warp::reply::with_status("some token", http::StatusCode::OK))
}
I've looked through some of the examples for warp, and was also wondering what some good resources are for getting to know the crate. Thank you!
This is because you're using std::sync::mpsc::Sender, which is !Sync, so you won't be able to share it between threads. You don't want to use it anyway, since its operations are blocking.
When you use async functionality in Rust, you need to provide a runtime for it. The good news is that warp runs on the tokio runtime, so you should have access to tokio::sync::mpsc. If you take a look at that link, the Sender for that mpsc implementation implements Sync and Send, so it's safe to share across threads.
TLDR:
Use tokio::sync::mpsc instead of std::sync::mpsc and this won't be an issue.
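For illustration, here is a minimal sketch of how the handler from the question might look with tokio channels. It assumes Packet::IsPlayerLoggedIn is changed to carry a tokio::sync::oneshot::Sender<bool> for the reply; TokenRequest, Store and Packet are otherwise the types from the question.

use tokio::sync::{mpsc, oneshot};

pub async fn request_token(
    req: TokenRequest,
    _store: Store,
    run_tx: mpsc::Sender<Packet>,
) -> Result<impl warp::Reply, warp::Rejection> {
    // A oneshot channel replaces the temporary std channel for the reply.
    let (tmp_tx, tmp_rx) = oneshot::channel();

    run_tx
        .send(Packet::IsPlayerLoggedIn(req.address, tmp_tx))
        .await
        .expect("worker task has shut down");

    // Awaiting the reply does not block the tokio runtime.
    let logged_in = tmp_rx.await.expect("reply channel was dropped");

    if logged_in {
        return Ok(warp::reply::with_status(
            "Already logged in",
            http::StatusCode::BAD_REQUEST,
        ));
    }

    Ok(warp::reply::with_status("some token", http::StatusCode::OK))
}

init_server stays essentially the same apart from the Sender type, since tokio's Sender is also Clone and can be moved into the warp filter exactly as before.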
I'm trying to iterate over logs from a docker container by using the bollard crate.
Here's my code:
use std::default::Default;
use bollard::container::LogsOptions;
use bollard::Docker;

fn main() {
    let docker = Docker::connect_with_http_defaults().unwrap();

    let options = Some(LogsOptions::<String> {
        stdout: true,
        ..Default::default()
    });

    let data = docker.logs("2f6c52410d", options);
    // ...
}
docker.logs() returns impl Stream<Item = Result<LogOutput, Error>>. I'd like to iterate over the results, but I have no idea how to do that. I've managed to find an example that uses try_collect::<Vec<LogOutput>>() from the futures-util crate, but I'd like to iterate over the results in a while loop instead of collecting them into a vector. I know that I can iterate over a vector, but handling each item as it arrives will be better for my use case.
I've tried calling the poll_next() method on the stream, but it requires a mysterious Context object which I don't understand. The poll_next() method was also unavailable until I used the pin_mut!() macro on the stream.
How do I iterate over the stream? What should I read to understand what's going on here? I know that streams are related to Futures, but calling await or next() doesn't work here.
You typically bring in your library of choice's StreamExt trait, and then do something like
while let Some(foo) = stream.next().await {
    // ...
}
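Applied to the bollard code from the question, a rough sketch might look like this. It assumes the tokio runtime and the futures-util crate; the container id is the one from the question, and each LogOutput is simply printed via its Debug impl.

use bollard::container::LogsOptions;
use bollard::Docker;
use futures_util::{pin_mut, StreamExt};

#[tokio::main]
async fn main() {
    let docker = Docker::connect_with_http_defaults().unwrap();

    let options = Some(LogsOptions::<String> {
        stdout: true,
        follow: true,
        ..Default::default()
    });

    // logs() returns an impl Stream; pin it so next() can be called on it.
    let logs = docker.logs("2f6c52410d", options);
    pin_mut!(logs);

    while let Some(entry) = logs.next().await {
        match entry {
            Ok(output) => println!("{:?}", output),
            Err(e) => eprintln!("log stream error: {}", e),
        }
    }
}

pin_mut! is needed here because the stream is not Unpin, which is also why poll_next() was unavailable before pinning.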
When trying to return tiberius::QueryResult I am unable to do so because it references data owned by the current function. How do I return the stream if this is not allowed?
use tiberius::{AuthMethod, Client, Config};
use tokio::net::TcpStream;
use tokio_util::compat::TokioAsyncWriteCompatExt;

pub async fn sql_conn(str_query: &str) -> std::result::Result<tiberius::QueryResult<'_>, tiberius::error::Error> {
    let mut config = Config::new();
    config.host("host");
    config.port(1433);
    config.authentication(AuthMethod::sql_server("usr", "pw"));
    config.trust_cert();

    let tcp = TcpStream::connect(config.get_addr()).await?;
    tcp.set_nodelay(true)?;

    let mut client = Client::connect(config, tcp.compat_write()).await?;
    let stream = client.query(str_query, &[]).await?;

    Ok(stream)
}
Error:
cannot return value referencing local variable `client`
returns a value referencing data owned by the current function
The reason this isn't working is because your query result object references your client and depends on resources that it uses. Most likely, that's because your query result is streaming and the client owns the connection required for that streaming to occur.
Rust won't let you return the query result because it needs the client, and the client, as a local variable, is destroyed when the function returns, since it goes out of scope. If Rust let you return the query result, it would reference the already-destroyed client, and your program would either fail or segfault. This is a common problem in many languages that don't provide garbage collection, and Rust is specifically designed not to let you make this mistake.
There are a couple of options here. First, you can create a function which creates the SQL connection and returns a client, then use the client and the query results it returns in the function where you want the data. That way, both the client and the query results will have the right lifetimes.
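A rough sketch of that first option, reusing the connection setup from the question (host, port, and credentials remain placeholders, and into_first_result is assumed to drain the stream into owned rows):

use tiberius::{AuthMethod, Client, Config};
use tokio::net::TcpStream;
use tokio_util::compat::{Compat, TokioAsyncWriteCompatExt};

// Return the connected client instead of the query result.
pub async fn sql_conn() -> Result<Client<Compat<TcpStream>>, tiberius::error::Error> {
    let mut config = Config::new();
    config.host("host");
    config.port(1433);
    config.authentication(AuthMethod::sql_server("usr", "pw"));
    config.trust_cert();

    let tcp = TcpStream::connect(config.get_addr()).await?;
    tcp.set_nodelay(true)?;

    Client::connect(config, tcp.compat_write()).await
}

// The caller owns the client, so the query result may borrow from it freely.
pub async fn run_query(str_query: &str) -> Result<(), tiberius::error::Error> {
    let mut client = sql_conn().await?;
    let stream = client.query(str_query, &[]).await?;
    let rows = stream.into_first_result().await?;
    println!("fetched {} row(s)", rows.len());
    Ok(())
}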
You could also try to create a struct which instantiates and holds your client and then use that to make the query. For example (untested):
struct Connection {
    // tiberius::Client is generic over its transport, e.g. Compat<TcpStream>.
    client: tiberius::Client<tokio_util::compat::Compat<tokio::net::TcpStream>>,
}

impl Connection {
    async fn query<'a>(
        &'a mut self,
        query: &'a str,
    ) -> Result<tiberius::QueryResult<'a>, tiberius::error::Error> {
        self.client.query(query, &[]).await
    }
}
This is essentially the same as the first situation, just with a different structure.
The third option is to both instantiate the client and totally consume the results in the same function, and then return some structure (like a Vec) with the results. This means that you will have to consume the entirety of the data, which you may not want to do for efficiency reasons, but it does solve the lifetime issue, and depending on your scenario, may be a valid option.
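Reusing the sql_conn sketch above, the third option might look like this (again assuming into_first_result drains the stream into owned rows):

use tiberius::Row;

// Consume the query result inside the function and return owned rows.
pub async fn sql_query_rows(str_query: &str) -> Result<Vec<Row>, tiberius::error::Error> {
    let mut client = sql_conn().await?;

    // into_first_result() drains the stream, so nothing that borrows
    // `client` escapes this function.
    let rows = client.query(str_query, &[]).await?.into_first_result().await?;

    Ok(rows)
}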
I'm trying to implement a project where I can tail the logs of multiple Kubernetes containers simultaneously. Think tmux split panes with two tails in each pane. Anyway, I'm far, far away from my actual project because I'm stuck right at the beginning. If you look at the following code, the commented-out line lp.follow = true would keep the log stream open and stream logs forever, and I'm not sure how to actually use this. I found a function called .into_stream() that I can tack onto the pods.log function, but then I'm not sure how to actually consume the stream. I'm not experienced enough to know if this is a limitation of the kube library, or if I'm just doing something wrong. Anyway, here is the repo if you want to look at anything else: https://github.com/bloveless/kube-logger
I'd be forever grateful for any advice or resources I can look at. Thanks!
use kube::{
    api::Api,
    client::APIClient,
    config,
};
use kube::api::{LogParams, RawApi};
use futures::{FutureExt, Stream, future::IntoStream, StreamExt};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    std::env::set_var("RUST_LOG", "info,kube=trace");

    let config = config::load_kube_config().await?;
    let client = APIClient::new(config);

    // Manage pods
    let pods = Api::v1Pod(client).within("fritzandandre");

    let mut lp = LogParams::default();
    lp.container = Some("php".to_string());
    // lp.follow = true;
    lp.tail_lines = Some(100);

    let log_string = pods.log("fritzandandre-php-0", &lp).await?;
    println!("FnA Log: {}", log_string);

    Ok(())
}
Originally posted here https://www.reddit.com/r/learnrust/comments/eg49tx/help_with_futuresstreams_and_the_kubers_library/
I want to implement a simple server, used by 3 different modules of my project.
These modules will send data to the server, which will save it into a file and merge the information when the modules finish their job.
Each piece of information has a timestamp (a float) and a label (a float or a string).
This is my data structure for saving this information:
pub struct Data {
    file_name: String,
    logs: Vec<(f32, String)>,
    measures: Vec<(f32, f32)>,
    statements: Vec<(f32, String)>,
}
I use sockets to interact with the server.
I also use Arc to wrap the Data struct and make it shareable for each of these modules.
So, when I handle a client, I verify that the message sent by the module is correct, and if it is, I call a function that processes and saves the message in the right field of the data structure (logs, measures or statements).
// Current ip address
let ip_addr: &str = &format!("{}:{}", &ip, port);

// Bind the current IP address
let listener = match TcpListener::bind(ip_addr) {
    Ok(listener) => listener,
    Err(error) => panic!("Cannot bind {}, due to error {}", ip_addr, error),
};

let global_data_struct = Data::new(DEFAULT_FILE.to_string());
let global_data_struct_shared = Arc::new(global_data_struct);

// Get and process streams
for stream in listener.incoming() {
    let mut global_data_struct_shared_clone = global_data_struct_shared.clone();
    thread::spawn(move || {
        // Borrow stream
        let stream = stream;
        match stream {
            // Get the stream value
            Ok(mut stream_v) => {
                let current_ip = stream_v.peer_addr().unwrap().ip();
                let current_port = stream_v.peer_addr().unwrap().port();
                println!("Connected with peer {}:{}", current_ip, current_port);

                // PROBLEM IN handle_client!
                // Arc::get_mut on global_data_struct_shared_clone
                // returns None, not a value, so I can't access
                // the global_data_struct_shared_clone fields :'(
                handle_client(&mut stream_v, &mut global_data_struct_shared_clone);
            },
            Err(_) => error!("Cannot decode stream"),
        }
    });
}

// Stop listening
drop(listener);
I have some problems getting a mutable reference in handle_client to process the fields of global_data_struct_shared_clone, because Arc::get_mut(&mut global_data_struct_shared_clone) returns None, due to the global_data_struct_shared.clone() performed for each incoming request.
Can someone help me manage this structure correctly between these 3 modules, please?
The insight of Rust is that memory safety is achieved by enforcing Aliasing XOR Mutability.
Enforcing this single principle prevents whole classes of bugs: pointer/iterator invalidation (which was the goal) and also data races.
As much as possible, Rust will try to enforce this principle at compile-time; however it can also enforce it at run-time if the user opts in by using dedicated types/methods.
Arc::get_mut is such a method. An Arc (Atomic Reference Counted pointer) is specifically meant to share a reference between multiple owners, which means aliasing, and as a result it disallows mutability by default; Arc::get_mut performs a run-time check: if the pointer is actually not aliased (a count of 1), then it allows mutability.
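A tiny self-contained illustration of that check:

use std::sync::Arc;

fn main() {
    let mut a = Arc::new(5);
    // Sole owner (count of 1): get_mut grants mutable access.
    assert!(Arc::get_mut(&mut a).is_some());

    let b = Arc::clone(&a);
    // Aliased (count of 2): get_mut returns None.
    assert!(Arc::get_mut(&mut a).is_none());
    drop(b);
}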
However, as you realized, this is not suitable in your case since the Arc is aliased at that point in time.
So you need to turn to other types.
The simplest solution is Arc<Mutex<...>>: Arc allows sharing, and Mutex allows controlled mutability; together, you can share with run-time-controlled mutability enforced by the Mutex.
This is coarse-grained, but might very well be sufficient.
More sophisticated approaches can use RwLock (a reader-writer lock), more granular Mutexes, or even atomics; but I would advise starting with a single Mutex and seeing how it goes. You have to walk before you run.
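To make that concrete, here is a minimal, self-contained sketch of the Arc<Mutex<...>> approach. Data and handle_client are simplified stand-ins for the ones in the question, and the bind address is arbitrary; the point is only that each thread clones the Arc and locks the Mutex before mutating.

use std::io::Read;
use std::net::{TcpListener, TcpStream};
use std::sync::{Arc, Mutex};
use std::thread;

// Simplified stand-in for the question's Data struct.
struct Data {
    logs: Vec<(f32, String)>,
}

// Simplified stand-in: now takes &mut Data instead of &mut Arc<Data>.
fn handle_client(stream: &mut TcpStream, data: &mut Data) {
    let mut buf = String::new();
    let _ = stream.read_to_string(&mut buf);
    data.logs.push((0.0, buf));
}

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:7878")?;

    // Arc lets every thread own a handle; Mutex gives run-time checked
    // exclusive access to the shared Data.
    let shared = Arc::new(Mutex::new(Data { logs: Vec::new() }));

    for stream in listener.incoming() {
        let shared = Arc::clone(&shared);
        thread::spawn(move || {
            if let Ok(mut stream_v) = stream {
                // Lock, mutate, and release when the guard goes out of scope.
                let mut data = shared.lock().unwrap();
                handle_client(&mut stream_v, &mut data);
            }
        });
    }

    Ok(())
}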