RUST: Better (simpler/prettier) way to use an enum from structs? [closed]

Maybe it's a bit of a weird question, but is there a better (simpler/prettier) way to use an "enum from struct", without Option?
Below are some simple code variants: "struct only", "enum only", and the somewhat clunky "enum from struct".
It would be much easier to pick up a hint if your advice were applied to the code presented here.
https://play.rust-lang.org/
fn main() {
    // ---- STRUCT only:
    #[derive(Debug)]
    struct Ipv4Addr {
        v4: (u8, u8, u8, u8),
    }
    #[derive(Debug)]
    struct Ipv6Addr {
        v6: String,
    }
    let home = Ipv4Addr { v4: (127, 0, 0, 1) };
    let loopback = Ipv6Addr {
        v6: String::from("::1"),
    };
    println!("\n\tStruct:\n");
    println!("home: {:?} , loopback: {:?}", home, loopback);

    // ---- ENUM 1 - enum only:
    #[derive(Debug)]
    enum IpAddr2 {
        V4(u8, u8, u8, u8),
        V6(String),
    }
    let home1 = IpAddr2::V4(127, 0, 0, 1);
    let loopback1 = IpAddr2::V6(String::from("::1"));
    println!("\n\tEnum1:\n");
    println!("home1: {:?}, loopback1: {:?}\n", home1, loopback1);

    // ---- ENUM 2 - enum wrapping the structs:
    #[derive(Debug)]
    enum IpAddr {
        V4(Ipv4Addr),
        V6(Ipv6Addr),
    }
    let home2 = IpAddr::V4(Ipv4Addr { v4: (127, 0, 0, 1) }); // <-- *a little "meh"?
    let loopback2 = IpAddr::V6(Ipv6Addr {
        v6: "::1".to_string(), // <-- *?
    });
    println!("\n\tEnum2:\n");
    println!("home2: {:?}, loopback2: {:?}\n", home2, loopback2);
}

Why can't you use tuple structs for this? Something like
struct Ipv4(u8, u8, u8, u8);
struct Ipv6(String); // isn't IPv6 8 * u16?

enum IpAddr {
    V4(Ipv4),
    V6(Ipv6),
}

fn main() {
    let ip4 = IpAddr::V4(Ipv4(192, 168, 0, 1));
    let ip6 = IpAddr::V6(Ipv6("::1".into()));
}
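If the IpAddr::V4(Ipv4(...)) construction still feels noisy, one further option (a sketch building on the tuple structs above, not something from the original question) is to add From conversions so the enum can be built directly from plain values:

#[derive(Debug)]
struct Ipv4(u8, u8, u8, u8);
#[derive(Debug)]
struct Ipv6(String);

#[derive(Debug)]
enum IpAddr {
    V4(Ipv4),
    V6(Ipv6),
}

// Conversions hide the inner tuple structs from the caller.
impl From<(u8, u8, u8, u8)> for IpAddr {
    fn from(o: (u8, u8, u8, u8)) -> Self {
        IpAddr::V4(Ipv4(o.0, o.1, o.2, o.3))
    }
}

impl From<&str> for IpAddr {
    fn from(s: &str) -> Self {
        IpAddr::V6(Ipv6(s.to_string()))
    }
}

fn main() {
    let home: IpAddr = (127, 0, 0, 1).into();
    let loopback: IpAddr = "::1".into();
    println!("home: {:?}, loopback: {:?}", home, loopback);
}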

Related

What does `:#?` mean in Rust format strings? [duplicate]

This question already has an answer here:
Pretty print struct in Rust
format!("{:#?}", (100, 200)); // => "(
                              //     100,
                              //     200,
                              // )"
Are there any docs that elaborate on this {:#?} pattern?
`?` selects the debug format (the `Debug` trait rather than `Display`), and `#` turns on pretty-printing of that debug output. For example:
#[derive(Debug)]
struct S {
    a: i32,
    b: i32,
}

fn main() {
    let v = S { a: 1, b: 2 };
    println!("{v:?}");
    println!("{v:#?}");
}
Prints (Playground):
S { a: 1, b: 2 }
S {
    a: 1,
    b: 2,
}
See the docs.
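As a side note (this goes slightly beyond the answer above), # is Rust's general "alternate form" flag, so it also changes the output of other specifiers, not just ?:

fn main() {
    println!("{:#?}", (100, 200)); // pretty-printed Debug, as in the question
    println!("{:#x}", 255);        // hex with a 0x prefix: 0xff
    println!("{:#b}", 5);          // binary with a 0b prefix: 0b101
}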

Why does my program get stuck at epoll_wait? [closed]

I have some structs that are used to deserialize requests from GCP alerting. The top-level structs implement axum's FromRequest, and the nested structs all implement serde's Deserialize and Serialize traits.
Here are the structs (the ones that are filled in by serde are omitted for brevity).
#[derive(Debug, Clone, PartialEq)]
struct GcpAlert {
    pub headers: GcpHeaders,
    pub body: GcpBody,
}

#[async_trait]
impl FromRequest<Body> for GcpAlert {
    type Rejection = StatusCode;

    async fn from_request(req: &mut RequestParts<Body>) -> Result<Self, Self::Rejection> {
        let body = GcpBody::from_request(req).await?;
        let headers = GcpHeaders::from_request(req).await?;
        Ok(Self { headers, body })
    }
}

#[derive(Debug, Clone)]
struct GcpHeaders {
    pub host: TypedHeader<Host>,
    pub content_length: TypedHeader<ContentLength>,
    pub content_type: TypedHeader<ContentType>,
    pub user_agent: TypedHeader<UserAgent>,
}

#[async_trait]
impl FromRequest<Body> for GcpHeaders {
    type Rejection = StatusCode;

    async fn from_request(req: &mut RequestParts<Body>) -> Result<Self, Self::Rejection> {
        let bad_req = StatusCode::BAD_REQUEST;
        let host: TypedHeader<Host> =
            TypedHeader::from_request(req).await.map_err(|_| bad_req)?;
        let content_length: TypedHeader<ContentLength> =
            TypedHeader::from_request(req).await.map_err(|_| bad_req)?;
        let content_type: TypedHeader<ContentType> =
            TypedHeader::from_request(req).await.map_err(|_| bad_req)?;
        let user_agent: TypedHeader<UserAgent> =
            TypedHeader::from_request(req).await.map_err(|_| bad_req)?;
        Ok(Self {
            host,
            content_length,
            content_type,
            user_agent,
        })
    }
}

#[derive(Debug, Clone, Deserialize, Serialize, PartialEq)]
struct GcpBody {
    pub incident: GcpIncident,
    pub version: Box<str>,
}

#[async_trait]
impl FromRequest<Body> for GcpBody {
    type Rejection = StatusCode;

    async fn from_request(req: &mut RequestParts<Body>) -> Result<Self, Self::Rejection> {
        let serv_err = StatusCode::INTERNAL_SERVER_ERROR;
        let bad_req = StatusCode::BAD_REQUEST;
        let body = req.body_mut().as_mut().ok_or(serv_err)?;
        let buffer = body::to_bytes(body).await.map_err(|_| serv_err)?;
        Ok(serde_json::from_slice(&buffer).map_err(|_| bad_req)?)
    }
}
To test this, I have created a test that compares a manually instantiated GcpAlert struct and one created via axum. Note that I've omitted details about the manually created struct and request, as I'm fairly certain they are unrelated.
#[tokio::test]
async fn test_request_deserialization() {
    async fn handle_alert(alert: GcpAlert) {
        let expected = GcpAlert {
            headers: GcpHeaders { /* headers */ },
            body: GcpBody { /* body */ },
        };
        assert_eq!(alert, expected);
    }

    let app = Router::new().route("/", post(handle_alert));
    // TestClient is similar to this: https://github.com/tokio-rs/axum/blob/main/axum/src/test_helpers/test_client.rs
    let client = TestClient::new(app);
    client
        .post("/")
        .header("host", "<host>")
        .header("content-length", 1024)
        .header("content-type", ContentType::json().to_string())
        .header("user-agent", "<user-agent>")
        .header("accept-encoding", "gzip, deflate, br")
        .body(/* body */)
        .send()
        .await;
}
My issue is that the program freezes in the following line of GcpBody's FromRequest impl.
let buffer = body::to_bytes(body).await.map_err(|_| serv_err)?;
I've tried to debug the issue a little bit, but I'm not really familiar with assembly/llvm/etc.
It looks like two threads are active for this. I can artificially increase the number of threads by using the multi-threading attribute on the test, but it doesn't change the end result, just a bigger call stack.
Thread1 callstack:
syscall (#syscall:12)
std::sys::unix::futex::futex_wait (#std::sys::unix::futex::futex_wait:64)
std::sys_common::thread_parker::futex::Parker::park_timeout (#std::thread::park_timeout:25)
std::thread::park_timeout (#std::thread::park_timeout:18)
std::sync::mpsc::blocking::WaitToken::wait_max_until (#std::sync::mpsc::blocking::WaitToken::wait_max_until:18)
std::sync::mpsc::shared::Packet<T>::recv (#std::sync::mpsc::shared::Packet<T>::recv:94)
std::sync::mpsc::Receiver<T>::recv_deadline (#test::run_tests:1771)
std::sync::mpsc::Receiver<T>::recv_timeout (#test::run_tests:1696)
test::run_tests (#test::run_tests:1524)
test::console::run_tests_console (#test::console::run_tests_console:290)
test::test_main (#test::test_main:102)
test::test_main_static (#test::test_main_static:34)
gcp_teams_alerts::main (/home/ak_lo/ドキュメント/Rust/gcp-teams-alerts/src/main.rs:1)
core::ops::function::FnOnce::call_once (#core::ops::function::FnOnce::call_once:6)
std::sys_common::backtrace::__rust_begin_short_backtrace (#std::sys_common::backtrace::__rust_begin_short_backtrace:6)
std::rt::lang_start::{{closure}} (#std::rt::lang_start::{{closure}}:7)
core::ops::function::impls::<impl core::ops::function::FnOnce<A> for &F>::call_once (#std::rt::lang_start_internal:242)
std::panicking::try::do_call (#std::rt::lang_start_internal:241)
std::panicking::try (#std::rt::lang_start_internal:241)
std::panic::catch_unwind (#std::rt::lang_start_internal:241)
std::rt::lang_start_internal::{{closure}} (#std::rt::lang_start_internal:241)
std::panicking::try::do_call (#std::rt::lang_start_internal:241)
std::panicking::try (#std::rt::lang_start_internal:241)
std::panic::catch_unwind (#std::rt::lang_start_internal:241)
std::rt::lang_start_internal (#std::rt::lang_start_internal:241)
std::rt::lang_start (#std::rt::lang_start:13)
main (#main:10)
___lldb_unnamed_symbol3139 (#___lldb_unnamed_symbol3139:29)
__libc_start_main (#__libc_start_main:43)
_start (#_start:15)
Thread2 callstack:
epoll_wait (#epoll_wait:27)
mio::sys::unix::selector::epoll::Selector::select (/home/ak_lo/.cargo/registry/src/github.com-1ecc6299db9ec823/mio-0.8.4/src/sys/unix/selector/epoll.rs:68)
mio::poll::Poll::poll (/home/ak_lo/.cargo/registry/src/github.com-1ecc6299db9ec823/mio-0.8.4/src/poll.rs:400)
tokio::runtime::io::Driver::turn (/home/ak_lo/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.0/src/runtime/io/mod.rs:162)
<tokio::runtime::io::Driver as tokio::park::Park>::park (/home/ak_lo/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.0/src/runtime/io/mod.rs:227)
<tokio::park::either::Either<A,B> as tokio::park::Park>::park (/home/ak_lo/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.0/src/park/either.rs:30)
tokio::time::driver::Driver<P>::park_internal (/home/ak_lo/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.0/src/time/driver/mod.rs:238)
<tokio::time::driver::Driver<P> as tokio::park::Park>::park (/home/ak_lo/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.0/src/time/driver/mod.rs:436)
<tokio::park::either::Either<A,B> as tokio::park::Park>::park (/home/ak_lo/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.0/src/park/either.rs:30)
<tokio::runtime::driver::Driver as tokio::park::Park>::park (/home/ak_lo/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.0/src/runtime/driver.rs:198)
tokio::runtime::scheduler::current_thread::Context::park::{{closure}} (/home/ak_lo/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.0/src/runtime/scheduler/current_thread.rs:308)
tokio::runtime::scheduler::current_thread::Context::enter (/home/ak_lo/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.0/src/runtime/scheduler/current_thread.rs:349)
tokio::runtime::scheduler::current_thread::Context::park (/home/ak_lo/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.0/src/runtime/scheduler/current_thread.rs:307)
tokio::runtime::scheduler::current_thread::CoreGuard::block_on::{{closure}} (/home/ak_lo/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.0/src/runtime/scheduler/current_thread.rs:554)
tokio::runtime::scheduler::current_thread::CoreGuard::enter::{{closure}} (/home/ak_lo/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.0/src/runtime/scheduler/current_thread.rs:595)
tokio::macros::scoped_tls::ScopedKey<T>::set (/home/ak_lo/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.0/src/macros/scoped_tls.rs:61)
tokio::runtime::scheduler::current_thread::CoreGuard::enter (/home/ak_lo/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.0/src/runtime/scheduler/current_thread.rs:595)
tokio::runtime::scheduler::current_thread::CoreGuard::block_on (/home/ak_lo/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.0/src/runtime/scheduler/current_thread.rs:515)
tokio::runtime::scheduler::current_thread::CurrentThread::block_on (/home/ak_lo/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.0/src/runtime/scheduler/current_thread.rs:161)
tokio::runtime::Runtime::block_on (/home/ak_lo/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.21.0/src/runtime/mod.rs:490)
While stepping through, I noticed that the Data struct is polled twice, and it seems like the context and state might not match up (it should close with nothing to return on the second poll, I've confirmed all data is returned in the first). Does anyone have any idea why the program continues to wait for new data when it certainly won't come?
Edit: Just to test I changed the line that causes the freeze to the following:
let mut body = BodyStream::from_request(req).await.map_err(|_| serv_err)?.take(1);
let buffer = {
    let mut buf = Vec::new();
    while let Some(chunk) = body.next().await {
        let data = chunk.map_err(|_| serv_err)?;
        buf.extend(data);
    }
    buf
};
The test is successful in this case. But if I increase take to any more than 1, the same issue recurs.
I was wrong in assuming the request contents were unrelated: I got the content length wrong when creating the test, and it looks like axum was waiting forever for the rest of the body as a result.
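For future readers, a minimal sketch of the fix under the assumptions above (the TestClient helper from the question; the payload value here is a stand-in, since the real body is elided): derive Content-Length from the bytes actually sent, or drop the header and let the client fill it in, so the server is never left waiting for bytes that will not arrive.

let payload = r#"{"incident": {}, "version": "1.2"}"#; // hypothetical body, for illustration only
client
    .post("/")
    .header("host", "<host>")
    .header("content-length", payload.len()) // must match the bytes actually sent
    .header("content-type", ContentType::json().to_string())
    .header("user-agent", "<user-agent>")
    .body(payload)
    .send()
    .await;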

Mismatched types with modules

I am making a small program for validating and computing IPs.
I wanted to try out modules, and I ran into an error that I don't know how to solve and can't find anything about on the internet.
Here is my project structure:
src/
    ip.rs
    main.rs
    mask.rs
    show.rs
ip.rs
pub struct Ip {
    pub first: u8,
    pub second: u8,
    pub third: u8,
    pub forth: u8,
    pub bin: [bool; 32],
}

pub fn build_ip(ip: String) -> Ip {
    let split = ip.replace(".", ":");
    let split = split.split(":");
    let split_vec = split.collect::<Vec<&str>>();
    let mut bin: [bool; 32] = [false; 32];
    let mut octets: [u8; 4] = [0; 4];
    if split_vec.len() != 4 {
        panic!("Wrong amount of octets!");
    }
    for i in 0..4 {
        let octet: u8 = match split_vec[i].trim().parse() {
            Ok(num) => num,
            Err(_) => panic!("Something wrong with octet {}", i + 1),
        };
        octets[i] = octet;
        // Zero-pad to 8 bits so each character maps to a fixed bit position.
        let soctet = format!("{:08b}", octet);
        for (j, c) in soctet.chars().enumerate() {
            bin[j + i * 8] = c == '1';
        }
    }
    Ip {
        first: octets[0],
        second: octets[1],
        third: octets[2],
        forth: octets[3],
        bin,
    }
}
show.rs
#[path = "ip.rs"] mod ip;
#[path = "mask.rs"] mod mask;
pub use ip::Ip;
pub fn print(name: String, ip: ip::Ip) {
println!("{}: ", name);
println!("{}.{}.{}.{}", ip.first, ip.second, ip.third, ip.forth);
for c in ip.bin.iter() {
print!("{}", *c as i32);
}
println!("");
println!("");
}
main.rs
pub mod mask;
pub mod ip;
pub mod show;

fn main() {
    let ip = ip::build_ip("255:255:25:8:0".to_string());
    // let mask = mask::build_mask("255.255.255.252".to_string());
    show::print("ip".to_string(), ip)
}
When I try to compile, it throws this at me and I have no idea what to do:
 --> src\main.rs:9:32
  |
9 |     show::print("ip".to_string(), ip)
  |                                   ^^ expected struct `show::ip::Ip`, found struct `ip::Ip`
#[path = "ip.rs"] mod ip;
#[path = "mask.rs"] mod mask;
This declares new submodules, independent of the ones declared in main.rs (the crate root). That they happen to have the same source code is immaterial; as far as type checking and object identity are concerned, they're completely unrelated.
Essentially, you've defined the following structure:
pub mod mask { ... }
pub mod ip {
    pub struct Ip { ... }
    pub fn build_ip(ip: String) -> Ip { ... }
}
pub mod show {
    mod ip {
        pub struct Ip { ... }
        pub fn build_ip(ip: String) -> Ip { ... }
    }
    mod mask { ... }
    pub fn print(name: String, ip: ip::Ip) { ... }
}
If you want to import modules, you should use use. If you need to import sibling modules, you can use the crate:: segment (in order to start resolving from the current crate's root), or super:: (to move up a level from the current module).
So here show should contain either use crate::{ip, mask} or use super::{ip, mask} in order to "see" its siblings.
The pub on use ip::Ip; is also completely unnecessary; you only needed it because you were declaring a new ip module and thus needed its Ip to be public, since you were using it in a pub function.
In show.rs, you declare the modules ip and mask again, but these are already declared in main.rs.
Instead, in show.rs use something like the following:
use crate::ip::Ip;

pub fn print(name: String, ip: Ip) {
    ...
}
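Put together, a minimal sketch of the corrected layout (bodies elided, and the example address is arbitrary):

// main.rs: the crate root declares each module exactly once.
pub mod ip;
pub mod mask;
pub mod show;

fn main() {
    let ip = ip::build_ip("192.168.0.1".to_string());
    show::print("ip".to_string(), ip);
}

// show.rs: no mod or #[path] declarations here; just import the sibling module.
use crate::ip::Ip;

pub fn print(name: String, ip: Ip) {
    /* same body as before */
}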

Is it possible to downcast Arc<dyn Any + Send + Sync> to Arc<Mutex<String>>? [closed]

Rust's Arc has a downcast feature:
use std::any::Any;
use std::sync::Arc;

fn print_if_string(value: Arc<dyn Any + Send + Sync>) {
    if let Ok(string) = value.downcast::<String>() {
        println!("String ({}): {}", string.len(), string);
    }
}

let my_string = "Hello World".to_string();
print_if_string(Arc::new(my_string));
print_if_string(Arc::new(0i8));
However, I use Arc with Mutex. For example:
let my_arc_type = Arc::new(Mutex::new("HelloWorld".to_string()));
Is it possible to downcast value: Arc<dyn Any + Send + Sync> to Arc<Mutex<String>>?
And what's the problem? It worked straight away.
Playground
use std::any::Any;
use std::sync::Arc;
use std::sync::Mutex;

fn print_if_mutex(value: Arc<dyn Any + Send + Sync>) {
    if let Ok(mutex) = value.downcast::<Mutex<String>>() {
        if let Ok(string) = mutex.lock() {
            println!("String ({}): {}", string.len(), string);
        }
    }
}

fn main() {
    let my_string = "Hello World".to_string();
    print_if_mutex(Arc::new(Mutex::new(my_string)));
    print_if_mutex(Arc::new(0i8));
}
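Worth noting (this goes beyond the answer above): Arc::downcast consumes the Arc, and on failure it hands the original Arc<dyn Any + Send + Sync> back in the Err variant, so nothing is lost if the guess was wrong. A minimal sketch (describe is just an illustrative name):

use std::any::Any;
use std::sync::{Arc, Mutex};

fn describe(value: Arc<dyn Any + Send + Sync>) {
    // On failure, `downcast` returns the original Arc, so we can try another type.
    match value.downcast::<Mutex<String>>() {
        Ok(mutex) => println!("Mutex<String>: {}", mutex.lock().unwrap()),
        Err(original) => match original.downcast::<i8>() {
            Ok(num) => println!("i8: {}", num),
            Err(_) => println!("some other type"),
        },
    }
}

fn main() {
    describe(Arc::new(Mutex::new("Hello World".to_string())));
    describe(Arc::new(0i8));
}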

How to return a reference to a global vector or an internal Option?

I'm trying to create a method that can return a reference to Data that is either in a constant global array or inside an Option in an item. The lifetimes are certainly different, but it's safe to assume that the lifetime of the data is at least as long as the lifetime of the item. While doing this, I expected the compiler to warn if I did anything wrong, but it's instead generating wrong instructions and the program is crashing with SIGILL.
Concretely speaking, I have the following code failing in Rust 1.27.2:
#[derive(Debug)]
pub enum Type {
    TYPE1,
    TYPE2,
}

#[derive(Debug)]
pub struct Data {
    pub ctype: Type,
    pub int: i32,
}

#[derive(Debug)]
pub struct Entity {
    pub idata: usize,
    pub modifier: Option<Data>,
}

impl Entity {
    pub fn data(&self) -> &Data {
        if self.modifier.is_none() {
            &DATA[self.idata]
        } else {
            self.modifier.as_ref().unwrap()
        }
    }
}

pub const DATA: [Data; 1] = [Data {
    ctype: Type::TYPE2,
    int: 1,
}];

fn main() {
    let mut itemvec = vec![Entity {
        idata: 0,
        modifier: None,
    }];
    eprintln!("vec[0]: {:p} = {:?}", &itemvec[0], itemvec[0]);
    eprintln!("removed item 0");
    let item = itemvec.remove(0);
    eprintln!("item: {:p} = {:?}", &item, item);
    eprintln!("modifier: {:p} = {:?}", &item.modifier, item.modifier);
    eprintln!("DATA: {:p} = {:?}", &DATA[0], DATA[0]);
    let itemdata = item.data();
    eprintln!("itemdata: {:p} = {:?}", itemdata, itemdata);
}
Complete code
I can't understand what I'm doing wrong. Why isn't the compiler generating a warning? Is it the removal of the (non-copy) item of the vector? Is it the ambiguous lifetimes?
How to return a reference to a global vector or an internal Option?
By using Option::unwrap_or_else:
impl Entity {
    pub fn data(&self) -> &Data {
        self.modifier.as_ref().unwrap_or_else(|| &DATA[self.idata])
    }
}
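For readers who prefer it spelled out, an equivalent match-based version (a sketch; behaviorally the same as the unwrap_or_else form above):

impl Entity {
    pub fn data(&self) -> &Data {
        // Borrow the Option's contents; otherwise fall back to the global table.
        match &self.modifier {
            Some(data) => data,
            None => &DATA[self.idata],
        }
    }
}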
but it's instead generating wrong instructions and the program is crashing with SIGILL
The code in your question does not have this behavior on macOS with Rust 1.27.2 or 1.28.0. On Ubuntu I see an issue when running the program in Valgrind, but the problem goes away in Rust 1.28.0.
See also:
Why should I prefer `Option::ok_or_else` instead of `Option::ok_or`?
What is this unwrap thing: sometimes it's unwrap sometimes it's unwrap_or
