How to make rp2040 DynPin usable for Adc with OneShot read - rust

I am trying to convert from a type erased type to a value type.
I have the following struct that accepts DynPin, which is the type-erased type of an rp2040 gpio pin.
pub struct PhProbe {
    analog_read_pin: DynPin,
}
Now, using the rp2040 analog-to-digital converter with OneShot, I want to use this pin in the read function (ignore the return value). But the ADC's read function only accepts pins that implement Channel<Adc>, which is implemented via a macro only for the specific gpios 26-29.
pub fn read(&mut self, adc: Adc) {
    let reading: u16 = adc
        .read(&mut <DynPin as Into<FloatingInput>>::into(self.analog_read_pin))
        .unwrap();
}
The pin is created using the into_floating_input() function and handed to the PhProbe::new() function after being converted into a DynPin using into().
let ph_pin = pins.gpio26.into_floating_input();
let ph_meter = PhProbe::new(ph_pin.into());
I was trying to make an analog input pin (FloatingInput) out of the DynPin, but this doesn't satisfy the Channel<Adc> bound of OneShot. Is there a way to use any of the gpio26-29 pins with this, or do I need to specify Gpio26 for this?
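For reference, a minimal sketch of one possible direction (not part of the original question): keep the pin type as a generic parameter bounded by Channel<Adc> instead of erasing it to DynPin, so any of the floating-input pins gpio26-29 can be passed in. The module paths and trait bounds are assumptions based on embedded-hal 0.2 and rp2040-hal, not verified against a specific version.
use embedded_hal::adc::{Channel, OneShot};
use rp2040_hal::adc::Adc;

// Sketch only: PhProbe is generic over any pin that implements Channel<Adc>,
// which the gpio26-29 pins in floating-input mode do.
pub struct PhProbe<P: Channel<Adc, ID = u8>> {
    analog_read_pin: P,
}

impl<P: Channel<Adc, ID = u8>> PhProbe<P> {
    pub fn new(analog_read_pin: P) -> Self {
        Self { analog_read_pin }
    }

    pub fn read(&mut self, adc: &mut Adc) -> u16 {
        // OneShot::read returns an nb::Result; unwrap here for brevity.
        adc.read(&mut self.analog_read_pin).unwrap()
    }
}

// Usage sketch: let ph_meter = PhProbe::new(pins.gpio26.into_floating_input());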

Related

Variadic generic trait parameterised on same type in Rust

I would broadly like to write something like this:
pub struct Example<..T> {...}
That is, parametrise Example over any number of types T. This is discussed in an RFC, but that RFC seems to be quite stale.
I know that we can have variadic functions by using c_variadic. Is there any way to expand this to structs?
Edit to include more concrete example:
What I am trying to do is (may not be valid Rust, but just an example):
// `a` and `b` are some structs that communicate via channels
// Label is some label we can match on
// Continuation is a trait
let options = vec![(label1, cont1), (label2, cont2)...];
let a = Offer<...(Label, Continuation)>::new();
let b = Pick<Label>::new();
// offer and pick are just some functions that communicate
// the choice via channels
// The important part being that offer and pick would be parametrised
// over <..(Label, Continuation)>
let picked = a.offer(options);
let picked = b.pick(label1);
Perhaps you want a const size parameter:
struct Offer<T, const U: usize> {
    contents: [T; U],
}
This can represent a list of any fixed size. All items must be of the same type, but in your example that seems to be the case.
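For illustration, a small self-contained sketch of how such a const-generic Offer could be instantiated (Label and Continuation are stand-in types defined here only for the example):
#[derive(Clone, Copy)]
struct Label(u32);

#[derive(Clone, Copy)]
struct Continuation;

struct Offer<T, const U: usize> {
    contents: [T; U],
}

fn main() {
    // The number of (Label, Continuation) pairs is part of the type.
    let offer: Offer<(Label, Continuation), 2> = Offer {
        contents: [(Label(1), Continuation), (Label(2), Continuation)],
    };
    assert_eq!(offer.contents.len(), 2);
}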

How Can I Hash By A Raw Pointer?

I want to create a function that provides a two step write and commit, like so:
// Omitting locking for brevity
struct States {
    committed_state: u64,
    // By reference is just a placeholder - I don't know how to do this
    pending_states: HashSet<i64>,
}

impl States {
    fn read_dirty(&self) -> u64 {
        // Sum committed state and all non-committed states
        self.committed_state +
            self.pending_states.iter().fold(0, sum_all_values)
    }
    fn read_committed(&self) -> u64 {
        self.committed_state
    }
}

let state_container = States::default();

async fn update_state(state_container: States, new_state: i64) -> impl Future {
    // This is just pseudo code missing locking and such
    // I'd like to add a reference to new_state
    state_container.pending_states.insert(new_state);
    async move {
        // I would like to defer the commit
        // I add the state to the committed state
        state_container.committed_state += new_state;
        // Then remove it *by reference* from the pending states
        state_container.pending_states.remove(&new_state)
    }
}
I'd like to be in a situation where I can call it like so
let commit_handler = update_state(state_container, 3).await;
// Do some external transactional stuff
third_party_transactional_service(...)?
// Commit if the above line does not error
commit_handler.await;
The problem I have is that HashMaps and HashSets hash values based on their value and not their actual reference, so I can't remove them by reference.
I appreciate this is a bit of a long question, but I'm just trying to give a bit more context as to what I'm trying to do. I know that in a typical database you'd generally have an atomic counter to generate the transaction ID, but that feels a bit overkill when the pointer reference would be enough.
However, I don't want to get the pointer value using unsafe, because reaching for unsafe for something relatively simple seems a bit off.
Values in Rust don't have an identity like they do in other languages. You need to ascribe an identity to them somehow. You've hit on two ways to do this in your question: an ID contained within the value, or the address of the value as a pointer.
Option 1: An ID contained in the value
It's trivial to have a usize ID with a static AtomicUsize (atomics have interior mutability).
use std::sync::atomic::{AtomicUsize, Ordering};

// No impl of Clone/Copy as we want these IDs to be unique.
#[derive(Debug, Hash, PartialEq, Eq)]
#[repr(transparent)]
pub struct OpaqueIdentifier(usize);

impl OpaqueIdentifier {
    pub fn new() -> Self {
        static COUNTER: AtomicUsize = AtomicUsize::new(0);
        Self(COUNTER.fetch_add(1, Ordering::Relaxed))
    }

    pub fn id(&self) -> usize {
        self.0
    }
}
Now your map key becomes usize, and you're done.
Having this be a separate type that doesn't implement Copy or Clone gives you the concept of an "owned unique ID": every type containing one of these IDs is forced not to be Copy, and a Clone impl would require obtaining a new ID.
(You can use a different integer type than usize. I chose it semi-arbitrarily.)
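To illustrate, a minimal sketch of how the ID could serve as the set key, assuming the OpaqueIdentifier type above is in scope (the PendingEntry wrapper is invented for this example):
use std::collections::HashSet;

struct PendingEntry {
    id: OpaqueIdentifier,
    value: i64,
}

fn main() {
    let entry = PendingEntry { id: OpaqueIdentifier::new(), value: 3 };
    let mut pending_ids: HashSet<usize> = HashSet::new();
    pending_ids.insert(entry.id.id());
    // Later, the same ID removes exactly this entry, independent of `value`.
    assert!(pending_ids.remove(&entry.id.id()));
}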
Option 2: A pointer to the value
This is more challenging in Rust since values in Rust are movable by default. In order for this approach to be viable, you have to remove this capability by pinning.
To make this work, both of the following must be true:
You pin the value you're using to provide identity, and
The pinned value is !Unpin (otherwise pinning still allows moves!), which can be forced by adding a PhantomPinned member to the value's type.
Note that the pin contract is only upheld if the object remains pinned for its entire lifetime. To enforce this, your factory for such objects should only dispense pinned boxes.
This could complicate your API as you cannot obtain a mutable reference to a pinned value without unsafe. The pin documentation has examples of how to do this properly.
Assuming that you have done all of this, you can then use *const T as the key in your map (where T is the pinned type). Note that conversion to a pointer is safe -- it's conversion back to a reference that isn't. So you can just use some_pin_box.as_ref().get_ref() as *const _ to obtain the pointer you'll use for lookup.
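A rough sketch of that lookup, assuming the value type has been made !Unpin with PhantomPinned and is only ever handed out as a pinned box (the PendingEntry type is illustrative):
use std::collections::HashSet;
use std::marker::PhantomPinned;
use std::pin::Pin;

struct PendingEntry {
    value: i64,
    _pin: PhantomPinned, // makes the type !Unpin
}

fn main() {
    let entry: Pin<Box<PendingEntry>> =
        Box::pin(PendingEntry { value: 3, _pin: PhantomPinned });
    let mut pending: HashSet<*const PendingEntry> = HashSet::new();
    // Converting the pinned reference to a raw pointer is safe; only the
    // conversion back to a reference would require unsafe.
    let key = entry.as_ref().get_ref() as *const PendingEntry;
    pending.insert(key);
    assert!(pending.remove(&key));
}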
The pinned box approach comes with pretty significant drawbacks:
All values being used to provide identity have to be allocated on the heap (unless using local pinning, which is unlikely to be ergonomic -- the pin! macro making this simpler is experimental).
The implementation of the type providing identity has to accept self as Pin<&Self> or Pin<&mut Self>, requiring unsafe code to mutate the contents.
In my opinion, it's not even a good semantic fit for the problem. "Location in memory" and "identity" are different things, and it's only kind of by accident that the former can sometimes be used to implement the latter. It's a bit silly that moving a value in memory would change its identity, no?
I'd just go with adding an ID to the value. This is a substantially more obvious pattern, and it has no serious drawbacks.

Why do we use `Box::pin` for !Unpin and `Pin::new` for Unpin?

I got confused when learning about the implementation of Pin in Rust.
I understand that when a type is not safe to move, it can impl !Unpin. Pin is meant to prevent others from getting a &mut T to these types, so that they cannot be moved with std::mem::swap.
If Pin is designed for !Unpin types, why can't we just call Pin::new on those types?
It gives an error like the following. I know I should use Pin::new_unchecked, but why?
struct TestNUnpin {
    b: String,
}

impl !Unpin for TestNUnpin {}

// error: the trait `Unpin` is not implemented for `TestNUnpin`
std::pin::Pin::new(&TestNUnpin { b: "b".to_owned() });
My reasoning is:
Pin is to help !Unpin types
We can pass !Unpin types to Pin::new to make them unmovable.
For Unpin types, they cannot be pinned, so we shouldn't be able to create them with Pin::new.
I think what you're looking for can be found in the Safety section of Pin::new_unchecked. Essentially, Pin should guarantee that the pinned value will never move again (unless it implements Unpin), even after the Pin is dropped. An example of this failing is Pin<&mut T>. You can drop the Pin and the value is no longer borrowed, so you're free to move it again, breaking Pin's core guarantee. Here's an example:
use std::marker::PhantomPinned;
use std::pin::Pin;

fn main() {
    let x = PhantomPinned;
    {
        let _pin = Pin::new(&x); // error[E0277]: `PhantomPinned` cannot be unpinned
    }
    let y = Box::new(x); // the PhantomPinned is moved here!
}
This check simply isn't doable at compile-time without adding a whole lot of extra complexity to the borrow checker, so it's marked as unsafe, essentially saying it's the developer's job to make sure it works. The reason Box::pin exists and is safe is because the developers of Box can guarantee its safety: Box is an owned and unique pointer, so once its Pin is dropped, its value is also dropped, and there's no longer any way to move the value.
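A short sketch contrasting the two constructors on a !Unpin type:
use std::marker::PhantomPinned;
use std::pin::Pin;

fn main() {
    let x = PhantomPinned;
    // Pin::new(&x) would not compile: PhantomPinned is !Unpin.
    // Box::pin works for any type, because the box owns the value and the
    // pinned value is dropped in place when the Pin<Box<_>> is dropped.
    let pinned: Pin<Box<PhantomPinned>> = Box::pin(x);
    let _ = pinned;
}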

Convert struct to byte array and read this via implementing `std::io::Read` trait

I am trying to implement std::io::Read trait for a struct.
My objective is to convert the object to a byte array and read it through the implementation of the Read trait.
Following is the code I have written so far.
use chrono::{DateTime, Utc};
use std::io::Error;
use std::io::Read;
use std::vec::Vec;
use std::str;

use super::{Chain, Transaction};

// The struct I need to convert to byte array and add the Read impl.
#[derive(Debug)]
pub struct Block {
    index: u64,
    timestamp: DateTime<Utc>,
    transactions: Vec<Transaction>,
    proof: i64,
    previous_hash: String,
}

// The Read trait implementation for Block
impl Read for Block {
    fn read(&mut self, buf: &mut [u8]) -> std::result::Result<usize, Error> {
        let bytes: &[u8] = unsafe { any_as_u8_slice(&self) };
        buf.clone_from_slice(bytes);
        Ok(bytes.len())
    }
}

// Function that converts to byte array. (found on stackoverflow)
unsafe fn any_as_u8_slice<T: Sized>(p: &T) -> &[u8] {
    ::std::slice::from_raw_parts((p as *const T) as *const u8, ::std::mem::size_of::<T>())
}
I get an error when I execute the code this way.
let mut buffer: Vec<u8> = Vec::new();
let result = block.read(buffer.as_mut());
ERROR:
thread 'main' panicked at 'destination and source slices have different lengths',
/Users/harsh/.rustup/toolchains/stable-x86_64-apple-darwin/lib/rustlib/src/rust/library/core/src/slice/mod.rs:2554:9
I am new to Rust and trying to learn by porting another program to Rust.
How do I copy a &[u8] into another &mut [u8] that is backed by a Vec (i.e. fix the Read impl for Block)?
And is there a better way to do this: convert the object to a byte array and return it from the Read implementation?
There are a few different problems here:
What you're trying to do here won't be sound in general. Rust structs might include padding bytes or other uninitialized bytes, which means that reading them as a [u8] is undefined behavior. The name for what you're trying to do here is a transmute, and transmutes are famously very difficult to do correctly.
It's not clear to me why specifically you're doing this in terms of the Read trait. Read is generally for i/o devices, like files or stdin, or in-memory buffers that behave like i/o devices. Even if we assume that a direct, inplace transmute to a byte slice is appropriate here, it would make more sense to just have a method on Block resembling fn as_byte_slice(&self) -> &[u8] { ... }.
Even if we set aside the above issues, it's still not clear to me that you're going to get the outcome you expect. Transmuting a struct to a byte array will convert only the raw bytes in the struct, which works fine for primitive types like u64, but for types like Vec<T> and String it will only capture the underlying pointer to the allocated storage, not the data it points to.
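A tiny illustration of that last point: the in-struct bytes of a Vec are just its pointer, capacity and length, so they don't grow with the heap contents.
fn main() {
    let small: Vec<u8> = vec![1];
    let large: Vec<u8> = vec![0; 1_000_000];
    // Both occupy the same number of in-struct bytes.
    assert_eq!(std::mem::size_of_val(&small), std::mem::size_of_val(&large));
    println!("Vec<u8> struct size: {} bytes", std::mem::size_of::<Vec<u8>>());
}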
I'm guessing that what you actually want to have happen here is that all of the contents of the Block, including the list of transactions and the previous_hash, be converted into the byte slice. This is called serialization, and the de facto way to do it in Rust is with serde. Serde is an abstract library that connects types (like Vec and your own Block) to data formats like JSON and bincode.
In your question, you've said you want to "convert the [object] to a byte array". This is a bit nonspecific; it's likely that there is actually a specific data format into which you're supposed to convert this Block. Your specific application will likely describe which data format is in use, and you can then look into whether there already exists a serde Serializer for that data format, or whether you'll need to write your own.
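As a hedged sketch (not from the question) of what the serde route could look like, assuming Transaction also derives Serialize and chrono's "serde" feature is enabled; serde_json stands in here for whatever format your application actually requires:
use chrono::{DateTime, Utc};
use serde::Serialize;

#[derive(Debug, Serialize)]
pub struct Transaction {
    amount: i64, // placeholder field; the real Transaction lives in `super`
}

#[derive(Debug, Serialize)]
pub struct Block {
    index: u64,
    timestamp: DateTime<Utc>,
    transactions: Vec<Transaction>,
    proof: i64,
    previous_hash: String,
}

fn main() -> Result<(), serde_json::Error> {
    let block = Block {
        index: 0,
        timestamp: Utc::now(),
        transactions: vec![Transaction { amount: 10 }],
        proof: 42,
        previous_hash: String::from("0000"),
    };
    // The whole Block, including the Vec and String contents, becomes bytes.
    let bytes: Vec<u8> = serde_json::to_vec(&block)?;
    println!("serialized {} bytes", bytes.len());
    Ok(())
}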

How to handle generic types with different concrete types in Rust efficiently?

The main goal is to implement a computation graph that handles nodes with values and nodes with operators (think of simple arithmetic operators like add, subtract, multiply, etc.). An operator node can take up to two value nodes and "produces" a resulting value node.
Up to now, I'm using an enum to differentiate between a value and operator node:
pub enum Node<'a, T> where T: Copy + Clone {
    Value(ValueNode<'a, T>),
    Operator(OperatorNode),
}

pub struct ValueNode<'a, T> {
    id: usize,
    value_object: &'a dyn ValueType<T>,
}
Update: Node::Value contains a struct, which itself contains a reference to a trait object ValueType, which is being implemented by a variety of concrete types.
But here comes the problem. During compilation, the generic types will be replaced by the actual concrete types. The generic type T is also being propagated throughout the computation graph (obviously):
pub struct ComputationGraph<T> where T: Copy + Clone {
    nodes: Vec<Node<T>>,
}
This actually restricts the usage of ComputeGraph to one specific ValueType.
Furthermore, the generic type T cannot be Sized, since a value node can be an opaque type handled by a different backend not available through Rust (think of C opaque types made available through FFI).
One possible solution to this problem would be to introduce an additional enum type that "mirrors" the concrete implementations of the ValueType trait mentioned above. This approach would be similar to what enum_dispatch does.
Is there anything I haven't thought of to use multiple implementations of ValueType?
Update:
What I want to achieve is code like the following:
pub struct Scalar<T> where T: Copy + Clone {
    data: T,
}

fn main() {
    let cg = ComputeGraph::new();
    // a new scalar type. doesn't have to be a tuple struct
    let a = Scalar::new::<f32>(1.0);
    let b_size = 32;
    let b = Container::new::<opaque_type>(b_size);
    let op = OperatorAdd::new();
    // cg.insert_operator_node constructs four nodes internally:
    // three value nodes and one operator node.
    let result = cg.insert_operator_node::<Container>(&op, &a, &b);
}
Update:
ValueType<T> looks like this:
pub trait ValueType<T> {
    fn get_size(&self) -> usize;
    fn get_value(&self) -> T;
}
Update:
To further clarify my question, think of a small BLAS library backed by OpenCL. The memory management and device interaction shall be transparent to the user. A Matrix type allocates space on an OpenCL device as a buffer of a primitive type, and the appropriate call returns a pointer to that specific region of memory. Think of an operation that scales the matrix by a scalar, represented by a primitive value. Both the (pointer to the) buffer and the scalar can be passed to a kernel function. Going back to the ComputeGraph, it seems obvious that all BLAS operations form some type of computation graph, which can be reduced to a linear list of instructions (setting kernel arguments, allocating buffers, enqueueing the kernel, storing the result, etc.). Having said all that, a computation graph needs to be able to store value nodes with a variety of types.
As always, the answer to the problem posed in the question is obvious in hindsight. The graph expects one generic type (with trait bounds). Using an enum to "cluster" the various subtypes was the solution, as already sketched out in the question.
An example to illustrate the solution: consider the following "subtypes":
struct Buffer<T> {
    // fields
}

struct Scalar<T> {
    // fields
}

struct Kernel {
    // fields
}
The value-containing types can be packed into an enum:
enum MemType {
    Buffer(Buffer<f32>),
    Scalar(Scalar<f32>),
    // more enum variants ...
}
MemType and Kernel can now be packed into an enum as well:
enum Node {
    Value(MemType),
    Operator(Kernel),
}
Node can now be used as the main type for nodes/vertices inside the graph. The solution might not be very elegant, but it does the trick for now. Maybe some code restructuring might be done in the future.
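A brief, self-contained sketch of how the enum-based nodes might then be stored together (the field choices are illustrative, not from the answer):
struct Buffer<T> { data: Vec<T> }
struct Scalar<T> { value: T }
struct Kernel { name: String }

enum MemType {
    Buffer(Buffer<f32>),
    Scalar(Scalar<f32>),
}

enum Node {
    Value(MemType),
    Operator(Kernel),
}

fn main() {
    // A single Vec<Node> now mixes buffers, scalars and kernels.
    let _nodes: Vec<Node> = vec![
        Node::Value(MemType::Scalar(Scalar { value: 2.0 })),
        Node::Value(MemType::Buffer(Buffer { data: vec![1.0, 2.0, 3.0] })),
        Node::Operator(Kernel { name: String::from("scale") }),
    ];
}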
