How to convert a query's result to JSON in Rust with Nickel?

I use nickel.rs:
router.get("/api/movies", middleware! { |request, response|
let mut test_movies =
r#"[
{ "title": "Ironman"},
{ "title": "The Walk"},
{ "title": "Paddington"}
]
"#;
let json = Json::from_str(test_movies);
format!("{}", json.unwrap())
});
I want to produce real JSON output from the database. The imports and the struct definition used for converting to JSON are below:
extern crate rustc_serialize;
use rustc_serialize::json::{Json, Parser};
#[derive(RustcDecodable, RustcEncodable)]
struct Movie {
title: String,
}
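(A quick aside, not from the original post: with the RustcEncodable derive above, rustc_serialize can already turn a Vec<Movie> into a JSON string, so the remaining question is only how to fill that Vec from the query rows.)
// Minimal sketch, assuming the struct and derives above:
let movies = vec![Movie { title: "Ironman".to_string() }];
let encoded = rustc_serialize::json::encode(&movies).unwrap(); // roughly [{"title":"Ironman"}]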
Then I tried to run the SELECT query and build the JSON:
router.get("/api/movies", middleware! { |request, response|
let conn = Connection::connect("postgres://postgres@localhost", SslMode::None).unwrap();
let stmt = match conn.prepare("select title from movie") {
Ok(stmt) => stmt,
Err(e) => {
return response.send(format!("Preparing query failed: {}", e));
}
};
let res = match stmt.execute(&[]) {
Ok(v) => println!("Selecting movie was Success."),
Err(e) => println!("Selecting movie failed. => {:?}", e)
};
// ???
// let movies = Json::from_obj(res);
// let movies = request.json_as::<&[Movie]>().unwrap();
// let movies = request.json_as::Vec<Movie>().unwrap();
format!("{}", movies)
});
However, I have no idea how to convert the result to JSON.
let conn = conn.clone();
produces this error:
error: no method named `clone` found for type `postgres::Connection` in the current scope
I added
use nickel::status::StatusCode;
//use rustc_serialize::json::{Json, Parser};
use rustc_serialize::{json};
json::encode(&movies).unwrap();
This compiled and ran, but only null came back...
Finally, I changed execute to query and collected the rows into a Vec<Movie>:
let mut v: Vec<Movie> = vec![];
let movies = &conn.query("select title from movie", &[]).unwrap();
for row in movies {
let movie = Movie {
title: row.get(0),
};
v.push(movie);
}
let json_obj = json::encode(&v).unwrap();
response.set(MediaType::Json);
response.set(StatusCode::Ok);
return response.send(json_obj);
I also defined the Movie struct as a model:
struct Movie {
// id: i32,
title: String,
}
Hmm, quite troublesome. However, I still can't call conn.clone().

Try the following:
let json = json::encode(&v).unwrap(); // encode the Vec<Movie> collected from the query rows
response.set(MediaType::Json);
response.set(StatusCode::Ok);
return response.send(json);
Also, it's not efficient to create a new connection for each request. You can create one connection in main(), wrap it in an Arc, and clone the Arc inside each request closure (postgres::Connection itself does not implement Clone, which is why conn.clone() fails).
fn main(){
let mut server = Nickel::new();
let mut router = Nickel::router();
let conn = Connection::connect("postgres://postgres@localhost", SslMode::None).unwrap();
let shared_connection = Arc::new(conn);
{
let conn = shared_connection.clone();
router.get("/api/movies", middleware! { |request, mut response|
let mut v: Vec<Movie> = vec![];
let movies = &conn.query("select title from movie", &[]).unwrap();
for row in movies {
let movie = Movie {
title: row.get(0),
};
v.push(movie);
}
let json_obj = json::encode(&v).unwrap();
response.set(MediaType::Json);
response.set(StatusCode::Ok);
return response.send(json_obj);
});
}
{
let conn = shared_connection.clone();
router.post("/api/movies", middleware! { |request, mut response|
//...
});
}
server.utilize(router);
server.listen("127.0.0.1:6767");
}

Related

Parse http response into tuple vector of (chrono::DateTime, f32)

I send an HTTP GET request to a server and receive a response:
let resp = reqwest::blocking::get(req)?.text()?;
resp holds a String like this:
<?xml version=\"1.0\" encoding=\"UTF-8\">\n<Document xmlns=...
<datetime>202207102300</datetime>\n\t\t\t\t\t\t<value>320.08</value>\n\t\t\t\t\t<datetime>202207110000</datetime>\n\t\t\t\t\t\t<value>278.00</value>
...</Document>
What is the best way to get this text parsed into a vector containing tuple elements, as follows:
let mut tuple_items: Vec<(chrono::NaiveDateTime, f32)>
This is the code I created with the quick_xml crate:
use chrono::NaiveDateTime;
use quick_xml::events::Event;
use quick_xml::Reader;
pub struct DatetimeValue {
pub dt: NaiveDateTime,
pub val: f32,
}
pub fn parse_xml_string(&self, xml_string: String) -> Vec<DatetimeValue> {
let mut response_vector: Vec<DatetimeValue> = vec![];
let mut reader = Reader::from_str(&xml_string[..]);
reader.trim_text(true);
let mut val_flag = false;
let mut dt_flag = false;
let mut buf = Vec::new();
let mut count = 0;
let mut actual_dt: Option<NaiveDateTime> = None; // Option so the compiler can verify it is set before use
loop {
match reader.read_event(&mut buf) {
Ok(Event::Start(ref e)) => {
if let b"value" = e.name() { val_flag = true }
else if let b"datetime" = e.name() { dt_flag = true }
}
Ok(Event::Text(e)) => {
if dt_flag {
actual_dt = Some(NaiveDateTime::parse_from_str(
&e.unescape_and_decode(&reader).unwrap(),
"%Y%m%d%H%M").unwrap());
dt_flag = false;
}
else if val_flag {
response_vector.push(DatetimeValue {
dt: actual_dt.expect("<value> without a preceding <datetime>"),
val: e
.unescape_and_decode(&reader)
.unwrap()
.parse::<f32>()
.unwrap(),
});
val_flag = false;
}
}
}
Ok(Event::Eof) => break,
Err(e) => panic!("Error at position {}: {:?}", reader.buffer_position(), e),
_ => (),
}
buf.clear();
}
response_vector
}
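No answer is recorded here, but if the goal is literally a Vec of (datetime, value) tuples, the DatetimeValue structs collected above map over directly. A small sketch (reusing the DatetimeValue struct from the question; the helper name is made up):
use chrono::NaiveDateTime;

// Hypothetical follow-up: flatten the parsed structs into (datetime, value) tuples.
fn to_tuples(rows: Vec<DatetimeValue>) -> Vec<(NaiveDateTime, f32)> {
    rows.into_iter().map(|r| (r.dt, r.val)).collect()
}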

How to query a specific set of attributes from dynamoDB using rust language?

I am new to Rust and this question may come off as silly. I am trying to develop a lambda that reads a single item from DynamoDB given a key. The returned item needs to be sent back as the result to the calling lambda.
I want the response to be in JSON.
Here is what I have:
The Input Struct
#[derive(Deserialize, Clone)]
struct CustomEvent {
#[serde(rename = "user_id")]
user_id: String,
}
The Output Struct
#[derive(Serialize, Clone)]
struct CustomOutput {
user_name: String,
user_email: String,
}
The Main fn
#[tokio::main]
async fn main() -> std::result::Result<(), Error> {
let func = handler_fn(get_user_details);
lambda_runtime::run(func).await?;
Ok(())
}
The logic to query
async fn get_user_details(
e: CustomEvent,
_c: Context,
) -> std::result::Result<CustomOutput, Error> {
if e.user_id == "" {
error!("User Id must be specified as user_id in the request");
}
let region_provider =
RegionProviderChain::first_try(Region::new("ap-south-1")).or_default_provider();
let shared_config = aws_config::from_env().region(region_provider).load().await;
let client: Client = Client::new(&shared_config);
let resp: () = query_user(&client, &e.user_id).await?;
println!("{:?}", resp);
Ok(CustomOutput {
// Does not work
// user_name: resp[0].user_name,
// user_email: resp[0].user_email,
// Works because it is hardcoded
user_name: "hello".to_string(),
user_email: "world@gmail.com".to_string()
})
}
async fn query_user(
client: &Client,
user_id: &str,
) -> Result<(), Error> {
let user_id_av = AttributeValue::S(user_id.to_string());
let resp = client
.query()
.table_name("users")
.key_condition_expression("#key = :value".to_string())
.expression_attribute_names("#key".to_string(), "id".to_string())
.expression_attribute_values(":value".to_string(), user_id_av)
.projection_expression("user_email")
.send()
.await?;
println!("{:?}", resp.items.unwrap_or_default()[0]);
return Ok(resp.items.unwrap_or_default().pop().as_ref());
}
My TOML
[dependencies]
lambda_runtime = "^0.4"
serde = "^1"
serde_json = "^1"
serde_derive = "^1"
http = "0.2.5"
rand = "0.8.3"
tokio-stream = "0.1.8"
structopt = "0.3"
aws-config = "0.12.0"
aws-sdk-dynamodb = "0.12.0"
log = "^0.4"
simple_logger = "^1"
tokio = { version = "1.5.0", features = ["full"] }
I am unable to unwrap and send the response back to the calling lambda. From the query_user function, I want to return something I can use to construct a CustomOutput struct in this
Ok(CustomOutput {
// user_name: resp[0].user_name,
// user_email: resp[0].user_email,
})
block in get_user_details. Any help or references would help a lot. Thank you.
After several attempts, here is what I learnt:
The match keyword can be used instead of collecting the result in a variable.
I did this:
match client
.query()
.table_name("users")
.key_condition_expression("#key = :value".to_string())
.expression_attribute_names("#key".to_string(), "id".to_string())
.expression_attribute_values(":value".to_string(), user_id_av)
.projection_expression("user_email")
.send()
.await
{
Ok(resp) => Ok(resp.items),
Err(e) => Err(e),
}
When a response comes back from the DB, it has an items key-value pair inside it, so this line:
Ok(resp) => Ok(resp.items)
ensures that the items array is returned to the calling function.
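Put together, a query_user with that return type would look roughly like this (a sketch, not the poster's exact code; Error is assumed to be the lambda_runtime error alias, so the SDK error is boxed with .into(), and both projected fields are listed because they are read later):
use std::collections::HashMap;
use aws_sdk_dynamodb::{model::AttributeValue, Client};

async fn query_user(
    client: &Client,
    user_id: &str,
) -> Result<Option<Vec<HashMap<String, AttributeValue>>>, Error> {
    let user_id_av = AttributeValue::S(user_id.to_string());
    match client
        .query()
        .table_name("users")
        .key_condition_expression("#key = :value")
        .expression_attribute_names("#key", "id")
        .expression_attribute_values(":value", user_id_av)
        .projection_expression("user_name, user_email")
        .send()
        .await
    {
        // items is Option<Vec<HashMap<String, AttributeValue>>>
        Ok(resp) => Ok(resp.items),
        Err(e) => Err(e.into()),
    }
}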
Next, to get the values one by one out of the Hashmap and load them into CustomOutput, this is what I did:
let resp: std::option::Option<Vec<HashMap<std::string::String, AttributeValue>>> = query_user(&client, &e.user_id).await?;
Once I have the resp, I can drill down to the first element if I need to, like this:
x[0]
.get("user_name")
.unwrap()
.as_s()
.unwrap()
.to_string()
and for Number types maybe something like "battery_voltage":
x[0]
.get("battery_voltage")
.unwrap()
.as_n()
.unwrap()
.to_string()
.parse::<f32>()
.unwrap(),
Finally, use a match block to determine the Some or None for the data:
match _val {
Some(x) => {
// pattern
if x.len() > 0 {
return Ok(json!(CustomOutput {
user_name: x[0].get("user_name").unwrap().as_s().unwrap().to_string(),
user_email: x[0].get("user_email").unwrap().as_s().unwrap().to_string(),
}));
} else {
return Ok(json!({
"code": "404".to_string(),
"message": "Not found.".to_string(),
}));
}
}
None => {
// other pattern
println!("Got nothing");
return Ok(json!({
"code": "404".to_string(),
"message": "Not found.".to_string(),
}));
}
}
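For those returns to type-check, the handler's signature has to change as well: instead of Result<CustomOutput, Error> it now returns the serde_json::Value built by the match. A sketch of the assumed adjustment (the query setup is unchanged from the question):
use serde_json::{json, Value};

async fn get_user_details(e: CustomEvent, _c: Context) -> Result<Value, Error> {
    let region_provider =
        RegionProviderChain::first_try(Region::new("ap-south-1")).or_default_provider();
    let shared_config = aws_config::from_env().region(region_provider).load().await;
    let client = Client::new(&shared_config);
    // query_user now returns Option<Vec<HashMap<String, AttributeValue>>>
    match query_user(&client, &e.user_id).await? {
        Some(x) if !x.is_empty() => Ok(json!(CustomOutput {
            user_name: x[0].get("user_name").unwrap().as_s().unwrap().to_string(),
            user_email: x[0].get("user_email").unwrap().as_s().unwrap().to_string(),
        })),
        _ => Ok(json!({ "code": "404", "message": "Not found." })),
    }
}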
Hope this helps someone else!

How to create other threads in main function

I am using the Redis stream functionality in actix-web 4, and I want to create the consumer in the main function. This is my current code:
[dependencies]
actix-web = "4"
tokio = { version = "1", features = ["full"] }
redis = { version = "0.21", features = [
# "cluster",
"tokio-comp",
"tokio-native-tls-comp",
] }
#[actix_web::main]
async fn main() -> std::io::Result<()> {
utils::init::init_envfile();
env_logger::init_from_env(env_logger::Env::new());
let redis_pool = utils::init::init_redis_pool();
let mysql_pool = utils::init::init_mysql_pool();
let redist_stream_consumer = web::block(redis_stream_group);
HttpServer::new(move || {
App::new()
.app_data(web::Data::new(redis_pool.clone()))
.app_data(web::Data::new(mysql_pool.clone()))
.service(web::scope("/api").configure(controller::api::config))
})
.bind(("0.0.0.0", 7777))?
.run()
.await?;
redist_stream_consumer.await.unwrap();
Ok(())
}
fn redis_stream_group() {
let client = redis::Client::open("redis://127.0.0.1/").expect("client");
let mut con = client.get_connection().expect("con");
let key = "s.order";
let group_name = "g1";
let consumer_name = "c1";
let _: Result<(), _> = con.xgroup_create_mkstream(key, group_name, "$");
let opts = StreamReadOptions::default()
.group(group_name, consumer_name)
.count(1)
.block(0);
loop {
let read_reply: StreamReadReply =
con.xread_options(&[key], &[">"], &opts).expect("read err");
for StreamKey { key, ids } in read_reply.keys {
for StreamId { id, map } in &ids {
log::info!("id:{} | key:{} | data:{:?}", id, key, map);
}
let id_strs: Vec<&String> = ids.iter().map(|StreamId { id, map: _ }| id).collect();
let _: usize = con.xack(key, group_name, &id_strs).expect("ack err");
}
}
}
When I use cargo r, the program runs normally and receives the sent messages, but when I press Ctrl+C I can't exit the program.
Also, I'm not sure whether using web::block in the main function is correct, or whether there is a better way to run child threads.
UPDATE: Tried using tokio::spawn, seems to work
#[actix_web::main]
async fn main() -> std::io::Result<()> {
let redis_pool = utils::init::init_redis_pool();
let mysql_pool = utils::init::init_mysql_pool();
for consumer_index in 1..=2 {
let c_redis_pool = redis_pool.clone();
tokio::spawn(async move {
let mut con = c_redis_pool.get().await.unwrap();
let key = "s.order";
let group_name = "g1";
let consumer_name = &format!("c{consumer_index}");
let _: Result<(), _> = con.xgroup_create_mkstream(key, group_name, "$").await;
let opts = StreamReadOptions::default()
.group(group_name, consumer_name)
.count(1)
.block(5000);
loop {
let read_reply: StreamReadReply = con
.xread_options(&[key], &[">"], &opts)
.await
.expect("err");
for StreamKey { key, ids } in read_reply.keys {
for StreamId { id, map } in &ids {
log::info!(
"consumer: {} | id:{} | key:{} | data:{:?}",
consumer_name,
id,
key,
map
);
}
let id_strs: Vec<&String> =
ids.iter().map(|StreamId { id, map: _ }| id).collect();
let _: usize = con
.xack(key, group_name, &id_strs)
.await
.expect("ack err");
}
}
});
}
let serve = HttpServer::new(move || {
...
}
This can be done with the standard library by using std::thread: spawn the thread and put whatever you want the other thread to do in a closure.
fn main() {
thread::spawn(|| {
println!("doing things in the thread!");
});
println!("doing things outside the thread.... how boring");
}
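If main should wait for the spawned thread instead of possibly exiting before it runs, keep the JoinHandle that thread::spawn returns; a small variation of the example above:
use std::thread;

fn main() {
    let handle = thread::spawn(|| {
        println!("doing things in the thread!");
    });
    println!("doing things outside the thread.... how boring");
    handle.join().unwrap(); // blocks until the spawned thread finishes
}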
If you want to pass data between them, you can use std::sync::mpsc to transfer data between the threads safely and quickly, creating the channel with let (sender, receiver) = mpsc::channel();, like so:
use std::sync::mpsc;
use std::thread;
fn main() {
let (sender, receiver) = mpsc::channel();
thread::spawn(move || {
let message = String::from("This message is from the thread");
sender.send(message).unwrap();
});
let letter = receiver.recv().unwrap();
println!("{}", letter);
}
Note that the main thread proceeds as normal until it reaches the .recv() call, at which point it either receives the data from the thread immediately or blocks until the other thread sends it.
In your example you could do something like:
use std::sync::mpsc;
use std::thread;
#[actix_web::main]
async fn main() -> std::io::Result<()> {
utils::init::init_envfile();
env_logger::init_from_env(env_logger::Env::new());
let port = get_env!("ACTIX_PORT", u16);
log::info!(
"starting HTTP server at http://{}:{}",
local_ipaddress::get().unwrap_or("localhost".to_string()),
port
);
let redis_pool = utils::init::init_redis_pool();
let mysql_pool = utils::init::init_mysql_pool();
let (consumer_sender,consumer_listener) = mpsc::channel();
thread::spawn(move || {
consumer_sender.send(redis_stream_group()).expect("You probably want to handle this case, but I'm too lazy");
});
let serve = HttpServer::new(move || {
let app_state = utils::init::AppState {
app_name: get_env!("APP_NAME", String),
pwd_secret: get_env!("PWD_SECRET", String),
jwt_secret: get_env!("JWT_SECRET", String),
jwt_exp: get_env!("JWT_EXP", i64),
};
App::new()
.app_data(web::Data::new(awc::Client::default()))
.app_data(web::Data::new(app_state))
.app_data(web::Data::new(redis_pool.clone()))
.app_data(web::Data::new(mysql_pool.clone()))
.wrap(actix_cors::Cors::default().allowed_origin_fn(|_, _| true))
.service(web::scope("/chat").configure(controller::chat::config))
.service(web::scope("/ws").configure(controller::ws::config))
.service(web::scope("/api").configure(controller::api::config))
});
if cfg!(debug_assertions) {
serve.bind(("0.0.0.0", port))?
} else {
let p = format!("/tmp/{}.socket", get_env!("APP_NAME", String));
let r = serve.bind_uds(&p)?;
let mut perms = std::fs::metadata(&p)?.permissions();
perms.set_readonly(false);
std::fs::set_permissions(&p, perms)?;
r
}
.run()
.await?;
let consumer = consumer_listener.recv().unwrap();
//then put things to do with the consumer here, or not idc
Ok(())
}
fn redis_stream_group() {
let client = redis::Client::open("redis://127.0.0.1/").expect("client");
let mut con = client.get_connection().expect("con");
let key = "s.order";
let group_name = "g1";
let consumer_name = "c1";
let _: Result<(), _> = con.xgroup_create_mkstream(key, group_name, "$");
let opts = StreamReadOptions::default()
.group(group_name, consumer_name)
.count(1)
.block(0);
loop {
let read_reply: StreamReadReply =
con.xread_options(&[key], &[">"], &opts).expect("read err");
for StreamKey { key, ids } in read_reply.keys {
for StreamId { id, map } in &ids {
log::info!("id:{} | key:{} | data:{:?}", id, key, map);
}
let id_strs: Vec<&String> = ids.iter().map(|StreamId { id, map: _ }| id).collect();
let _: usize = con.xack(key, group_name, &id_strs).expect("ack err");
}
}
}

Reduce code duplication with dynamic if let matching

I have this code below that is repeated in many places in my application.
The only thing that differs on a per-command basis is the if let Role { whatever } = role line.
#[command]
pub async fn protect(ctx: &Context, msg: &Message, mut args: Args) -> CommandResult {
let target_tag: String = args.single()?;
let (user_id, guild_id) = msg.get_ids();
let (target_id, _) = LupusCtxHelper::parse_tag_to_target_id(ctx, Tag(target_tag))
.await
.ok_or(MyError)?;
let player = {
let dt = ctx.data.read().await;
dt.get_player(&guild_id, &user_id).await
};
if let Some(p) = player {
// this line is the problem
if let LupusRole::BODYGUARD { .. } = *p.current_role() {
LupusCtxHelper::send_lupus_command(ctx, msg, LupusAction::Protect(target_id)).await?
} else {
msg.channel_id
.say(&ctx.http, "check your role dude")
.await?;
}
}
Ok(())
}
How would you guys suggest I go about cleaning up this code/refactoring in an external function? It seems like I can't pattern match dynamically on the left.
PS: be aware that the LupusRole enum has variants with struct fields.
Example of another file:
#[command]
pub async fn frame(ctx: &Context, msg: &Message, mut args: Args) -> CommandResult {
let target_tag: String = args.single()?;
let (user_id, guild_id) = msg.get_ids();
let (target_id, _) = LupusCtxHelper::parse_tag_to_target_id(ctx, Tag(target_tag))
.await
.ok_or(MyError)?;
let player = {
let dt = ctx.data.read().await;
dt.get_player(&guild_id, &user_id).await
};
if let Some(p) = player {
if let LupusRole::GUFO = *p.current_role() {
LupusCtxHelper::send_lupus_command(ctx, msg, LupusAction::Frame(target_id)).await?
} else {
msg.channel_id
.say(&ctx.http, "fra... ruolo sbagliato")
.await?;
}
}
Ok(())
}
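No accepted answer is shown here, but one common way to factor out the duplication (a sketch only; the helper name, the UserId type for the target, and the closure bounds are assumptions, not from the original code) is to pass the role check in as a closure and the action as a constructor, so each command body shrinks to a single call:
// Hypothetical shared helper: only the role predicate, the action constructor,
// and the error message differ between commands.
async fn run_role_command(
    ctx: &Context,
    msg: &Message,
    mut args: Args,
    role_ok: impl Fn(&LupusRole) -> bool,
    action: impl FnOnce(UserId) -> LupusAction,
    wrong_role_msg: &str,
) -> CommandResult {
    let target_tag: String = args.single()?;
    let (user_id, guild_id) = msg.get_ids();
    let (target_id, _) = LupusCtxHelper::parse_tag_to_target_id(ctx, Tag(target_tag))
        .await
        .ok_or(MyError)?;
    let player = {
        let dt = ctx.data.read().await;
        dt.get_player(&guild_id, &user_id).await
    };
    if let Some(p) = player {
        if role_ok(&*p.current_role()) {
            LupusCtxHelper::send_lupus_command(ctx, msg, action(target_id)).await?
        } else {
            msg.channel_id.say(&ctx.http, wrong_role_msg).await?;
        }
    }
    Ok(())
}
protect then becomes roughly run_role_command(ctx, msg, args, |r| matches!(r, LupusRole::BODYGUARD { .. }), LupusAction::Protect, "check your role dude").await, and frame swaps in LupusRole::GUFO and LupusAction::Frame; the matches! macro carries the struct-variant pattern, which cannot be passed around as a value.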

How do I convert arrays to a snips-nlu-rs whitelist or blacklist?

I use snips-nlu and built a C library from it. I want to connect the library to my Node environment through Rust.
JavaScript
var ffi = require('ffi');
const ArrayType = require('ref-array');
var nlu = require('./nlu');
const StringArray = ArrayType('string');
var nlulib = '../cargo/target/x86_64-apple-darwin/release/libp_public_transport_nlu.dylib'
var nlu = ffi.Library(nlulib, {
"load": ['pointer', ['string']],
"execute": ['string', ['pointer', 'string', StringArray]]
});
var ptrToEngine = nlu.load("../snips_public_transport_engine");
var responseNLU = nlu.execute(ptrToEngine, "myQuery", ['bestIntent'], ['worstIntent']);
Rust
#[no_mangle]
pub extern "C" fn execute(engine_pointer: *mut SnipsNluEngine, query: *const c_char, whitelist: &[u8], blacklist: &[u8]) -> CString {
let query_c_str = unsafe { CStr::from_ptr(query) };
let query_string = match query_c_str.to_str().map(|s| s.to_owned()){
Ok(string) => string,
Err(e) => e.to_string()
};
let engine = unsafe {
assert!(!engine_pointer.is_null());
&mut *engine_pointer
};
let result = engine.parse(query_string.trim(), None, None).unwrap();
let result_json = serde_json::to_string_pretty(&result).unwrap();
CString::new(result_json).unwrap()
}
The engine.parse function expects Into<Option<Vec<&'a str>>> as a parameter instead of None, so I need to convert the whitelist and blacklist into this format.
After much struggling with it, I found a solution. I know it won't be the best one that has ever existed, but it's a solution :)
pub extern "C" fn execute(engine_pointer: *mut SnipsNluEngine, query: *const c_char, whitelist: *const *const c_char) -> CString {
let query_c_str = unsafe { CStr::from_ptr(query) };
let query_string = match query_c_str.to_str().map(|s| s.to_owned()){
Ok(string) => string,
Err(e) => e.to_string()
};
let engine = unsafe {
assert!(!engine_pointer.is_null());
&mut *engine_pointer
};
// count length of whitelist
let mut whitelist_count = 0;
let mut wc = whitelist;
unsafe {
while *wc != std::ptr::null() {
whitelist_count += 1;
wc = wc.offset(1);
}
}
// get single elements pointer from pointer
let sliced_whitelist = unsafe { slice::from_raw_parts(whitelist, whitelist_count) };
// get string for every pointer
let mut string_list_whitelist: Vec<String> = vec![];
for i in 0..whitelist_count {
let whitelist_element = sliced_whitelist[i];
let whitelist_value = unsafe { CStr::from_ptr(whitelist_element) };
let string_whitelist: String = match whitelist_value.to_str().map(|s| s.to_owned()){
Ok(string) => string,
Err(e) => e.to_string()
};
string_list_whitelist.insert(0, string_whitelist);
}
let mut snips_whitelist: Vec<&str> = vec![];
for i in 0..whitelist_count {
let whitelist_element_str: &str = &string_list_whitelist[i][..];
snips_whitelist.insert(0, whitelist_element_str);
}
// create an optional for the whitelist
let mut snips_whitelist_optional: Option<Vec<&str>> = None;
if whitelist_count != 0 {
snips_whitelist_optional = Some(snips_whitelist);
}
// parsing
let result = engine.parse(query_string.trim(), snips_whitelist_optional, None).unwrap(); // this version only handles the whitelist, so the blacklist is passed as None
let result_json = serde_json::to_string_pretty(&result).unwrap();
CString::new(result_json).unwrap()
}
Hint: a null pointer (e.g. ref.NULL) has to be sent at the end of the whitelist string array. Alternatively, you can pass the array length as a parameter instead of counting the list length.
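For the length-parameter alternative, a sketch of reading the whitelist with an explicit element count instead of a trailing null pointer (the helper and its name are hypothetical, not part of the original code):
use std::ffi::CStr;
use std::os::raw::c_char;
use std::slice;

// Hypothetical helper: collect the C string array into owned Strings using a
// caller-supplied element count, so no terminating null pointer is required.
unsafe fn whitelist_to_strings(list: *const *const c_char, len: usize) -> Vec<String> {
    if list.is_null() || len == 0 {
        return Vec::new();
    }
    slice::from_raw_parts(list, len)
        .iter()
        .map(|&p| CStr::from_ptr(p).to_string_lossy().into_owned())
        .collect()
}
The Option<Vec<&str>> for engine.parse is then built exactly as above: None when the Vec is empty, otherwise Some of the borrowed &str values.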
