RGB to YCbCr conversion using SIMD vectors loses color data

I'm writing a JPEG decoder/encoder in Rust and I have a problem with the RGB ↔ YCbCr conversion.
My code:
use std::simd::f32x4;

fn clamp<T>(val: T, min: T, max: T) -> T
    where T: PartialOrd
{
    if val < min { min }
    else if max < val { max }
    else { val }
}

// in the original code there are 2 methods, one for processors with SSE3 and one for the rest;
// both do the same and give the same results
pub fn sum_f32x4(f32x4(a, b, c, d): f32x4) -> f32 {
    a + b + c + d
}

pub fn rgb_to_ycbcr(r: u8, g: u8, b: u8) -> (u8, u8, u8) {
    let rgb = f32x4(r as f32, g as f32, b as f32, 1.0);
    let y  = sum_f32x4(rgb * f32x4( 0.2990,  0.5870,  0.1140,   0.0));
    let cb = sum_f32x4(rgb * f32x4(-0.1687, -0.3313,  0.5000, 128.0));
    let cr = sum_f32x4(rgb * f32x4( 0.5000, -0.4187, -0.0813, 128.0));
    (y as u8, cb as u8, cr as u8)
}

pub fn ycbcr_to_rgb(y: u8, cb: u8, cr: u8) -> (u8, u8, u8) {
    let ycbcr = f32x4(y as f32, cb as f32 - 128.0f32, cr as f32 - 128.0f32, 0.0);
    let r = sum_f32x4(ycbcr * f32x4(1.0,  0.00000,  1.40200, 0.0));
    let g = sum_f32x4(ycbcr * f32x4(1.0, -0.34414, -0.71414, 0.0));
    let b = sum_f32x4(ycbcr * f32x4(1.0,  1.77200,  0.00000, 0.0));
    (clamp(r, 0., 255.) as u8, clamp(g, 0., 255.) as u8, clamp(b, 0., 255.) as u8)
}

fn main() {
    assert_eq!(rgb_to_ycbcr( 0, 71, 171), ( 61, 189, 84));
    // assert_eq!(rgb_to_ycbcr( 0, 71, 169), ( 61, 189, 84)); // will fail
    // for some reason we always lose data on the blue channel
    assert_eq!(ycbcr_to_rgb( 61, 189, 84), ( 0, 71, 169));
}
For some reason both tests (in the comments) pass. I would rather expect at least one of them to fail. Am I wrong? At the very least it should stabilize at some point, but when I change jpeg::color::utils::rgb_to_ycbcr(0, 71, 171) to jpeg::color::utils::rgb_to_ycbcr(0, 71, 169), the test fails because the YCbCr value has changed, so I will lose my blue channel forever.

@dbaupp put the nail in the coffin with the suggestion to use round:
#![allow(unstable)]

use std::simd::f32x4;
use std::num::Float;

fn clamp(val: f32) -> u8 {
    if val < 0.0 { 0 }
    else if val > 255.0 { 255 }
    else { val.round() as u8 }
}

fn sum_f32x4(v: f32x4) -> f32 {
    v.0 + v.1 + v.2 + v.3
}

pub fn rgb_to_ycbcr((r, g, b): (u8, u8, u8)) -> (u8, u8, u8) {
    let rgb = f32x4(r as f32, g as f32, b as f32, 1.0);
    let y  = sum_f32x4(rgb * f32x4( 0.299000,  0.587000,  0.114000,   0.0));
    let cb = sum_f32x4(rgb * f32x4(-0.168736, -0.331264,  0.500000, 128.0));
    let cr = sum_f32x4(rgb * f32x4( 0.500000, -0.418688, -0.081312, 128.0));
    (clamp(y), clamp(cb), clamp(cr))
}

pub fn ycbcr_to_rgb((y, cb, cr): (u8, u8, u8)) -> (u8, u8, u8) {
    let ycbcr = f32x4(y as f32, cb as f32 - 128.0f32, cr as f32 - 128.0f32, 0.0);
    let r = sum_f32x4(ycbcr * f32x4(1.0,  0.00000,  1.40200, 0.0));
    let g = sum_f32x4(ycbcr * f32x4(1.0, -0.34414, -0.71414, 0.0));
    let b = sum_f32x4(ycbcr * f32x4(1.0,  1.77200,  0.00000, 0.0));
    (clamp(r), clamp(g), clamp(b))
}

fn main() {
    let mut rgb = (0, 71, 16);
    println!("{:?}", rgb);
    for _ in 0..100 {
        let yuv = rgb_to_ycbcr(rgb);
        rgb = ycbcr_to_rgb(yuv);
        println!("{:?}", rgb);
    }
}
Note that I also increased the precision of your values in rgb_to_ycbcr, using the ones from the Wikipedia page. I also clamp in both functions, as well as calling round. Now the output is:
(0u8, 71u8, 16u8)
(1u8, 72u8, 16u8)
(1u8, 72u8, 16u8)
With the last value repeating for the entire loop.
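The root cause is that an as cast from a float to an integer truncates toward zero rather than rounds, so every conversion can shave off almost a whole unit. A minimal demonstration of just that cast behavior:

fn main() {
    // float-to-integer `as` casts truncate toward zero: 84.99 becomes 84
    assert_eq!(84.99_f32 as u8, 84);
    // rounding first keeps the nearest value instead
    assert_eq!(84.99_f32.round() as u8, 85);
}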

Related

How to format to other number bases besides decimal and hexadecimal?

Currently I'm using the following code to return a number as a binary (base 2), octal (base 8), or hexadecimal (base 16) string.
fn convert(inp: u32, out: u32, numb: &String) -> Result<String, String> {
    match isize::from_str_radix(numb, inp) {
        Ok(a) => match out {
            2 => Ok(format!("{:b}", a)),
            8 => Ok(format!("{:o}", a)),
            16 => Ok(format!("{:x}", a)),
            10 => Ok(format!("{}", a)),
            0 | 1 => Err(format!("No base lower than 2!")),
            _ => Err(format!("printing in this base is not supported")),
        },
        Err(e) => Err(format!(
            "Could not convert {} to a number in base {}.\n{:?}\n",
            numb, inp, e
        )),
    }
}
Now I want to replace the inner match statement so I can return the number as a string in an arbitrary base (e.g. base 3). Are there any built-in functions to convert a number into any given radix, similar to JavaScript's Number.toString() method?
For now, you cannot do it using the standard library, but you can:
use my crate radix_fmt
or roll your own implementation:
fn format_radix(mut x: u32, radix: u32) -> String {
    let mut result = vec![];
    loop {
        let m = x % radix;
        x = x / radix;
        // will panic if you use a bad radix (< 2 or > 36).
        result.push(std::char::from_digit(m, radix).unwrap());
        if x == 0 {
            break;
        }
    }
    result.into_iter().rev().collect()
}

fn main() {
    assert_eq!(format_radix(1234, 10), "1234");
    assert_eq!(format_radix(1000, 10), "1000");
    assert_eq!(format_radix(0, 10), "0");
}
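Applied to the arbitrary bases the question asks about, digits above 9 come out as lowercase letters (that is what from_digit produces):

fn main() {
    assert_eq!(format_radix(5, 3), "12");     // 5 = 1*3 + 2
    assert_eq!(format_radix(1234, 36), "ya"); // 1234 = 34*36 + 10
}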
If you wanted to eke out a little more performance, you can create a struct and implement Display or Debug for it. This avoids allocating a String. For maximum over-engineering, you can also have a stack-allocated array instead of the Vec.
Here is Boiethios' answer with these changes applied:
struct Radix {
    x: i32,
    radix: u32,
}

impl Radix {
    fn new(x: i32, radix: u32) -> Result<Self, &'static str> {
        if radix < 2 || radix > 36 {
            Err("Unsupported radix")
        } else {
            Ok(Self { x, radix })
        }
    }
}

use std::fmt;

impl fmt::Display for Radix {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        let mut x = self.x;
        // Good for binary formatting of `u128`s
        let mut result = ['\0'; 128];
        let mut used = 0;
        let negative = x < 0;
        if negative {
            // note: this would overflow for i32::MIN
            x *= -1;
        }
        let mut x = x as u32;
        loop {
            let m = x % self.radix;
            x /= self.radix;
            result[used] = std::char::from_digit(m, self.radix).unwrap();
            used += 1;
            if x == 0 {
                break;
            }
        }
        if negative {
            write!(f, "-")?;
        }
        for c in result[..used].iter().rev() {
            write!(f, "{}", c)?;
        }
        Ok(())
    }
}

fn main() {
    assert_eq!(Radix::new(1234, 10).unwrap().to_string(), "1234");
    assert_eq!(Radix::new(1000, 10).unwrap().to_string(), "1000");
    assert_eq!(Radix::new(0, 10).unwrap().to_string(), "0");
}
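Since the struct stores an i32, negative inputs are handled too; a quick check:

fn main() {
    assert_eq!(Radix::new(-42, 2).unwrap().to_string(), "-101010");
}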
This could still be optimized by:
creating an ASCII array instead of a char array
not zero-initializing the array
Since these avenues require unsafe or an external crate like arrayvec, I have not included them. You can see sample code in the internal implementation details of the standard library.
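For the first of those optimizations there is also a safe middle ground that needs no unsafe and no external crate: keep a zero-initialized byte buffer and emit ASCII digits directly. A sketch (RadixAscii is a hypothetical name, not from the original answer):

use std::fmt;

struct RadixAscii {
    x: u32,
    radix: u32,
}

impl fmt::Display for RadixAscii {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        let mut x = self.x;
        // enough room for u128 binary, as in the char version
        let mut buf = [0u8; 128];
        let mut used = 0;
        loop {
            let m = (x % self.radix) as u8;
            x /= self.radix;
            // digits 0..=9, then lowercase letters, emitted as ASCII bytes
            buf[used] = if m < 10 { b'0' + m } else { b'a' + m - 10 };
            used += 1;
            if x == 0 {
                break;
            }
        }
        buf[..used].reverse();
        // the buffer holds only ASCII digits, so this cannot fail
        f.write_str(std::str::from_utf8(&buf[..used]).unwrap())
    }
}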
Here is an extended solution based on the first comment which does not bind the parameter x to be a u32:
fn format_radix(mut x: u128, radix: u32) -> String {
    let mut result = vec![];
    loop {
        let m = x % radix as u128;
        x = x / radix as u128;
        // will panic if you use a bad radix (< 2 or > 36).
        result.push(std::char::from_digit(m as u32, radix).unwrap());
        if x == 0 {
            break;
        }
    }
    result.into_iter().rev().collect()
}
This is faster than the other answer:
use std::char::from_digit;

fn encode(mut n: u32, r: u32) -> Option<String> {
    let mut s = String::new();
    loop {
        if let Some(c) = from_digit(n % r, r) {
            s.insert(0, c)
        } else {
            return None
        }
        n /= r;
        if n == 0 {
            break
        }
    }
    Some(s)
}
Note I also tried these, but they were slower:
https://doc.rust-lang.org/std/collections/struct.VecDeque.html#method.push_front
https://doc.rust-lang.org/std/string/struct.String.html#method.push
https://doc.rust-lang.org/std/vec/struct.Vec.html#method.insert

Find the minimum number of coins that make a given value: solving it for float coin values instead of integer coin values in Rust

Problem Statement: Write a function that returns the smallest number of coins needed to make change for the target amount using the given coin denominations.
I'm trying to convert the i32 integer-based solution below to an f32- or f64-based solution so it can take decimal input, such as coin denominations of 0.5, 1.5, 2.0, etc.
use std::cmp;

fn min_number_of_change(n: i32, denoms: Vec<u32>) -> i32 {
    let mut ways: Vec<i32> = vec![i32::MAX; n as usize + 1];
    ways[0] = 0;
    for denom in denoms.iter() {
        for current in 0..ways.len() {
            if *denom <= current as u32 {
                ways[current] = cmp::min(ways[current], 1 + ways[current - *denom as usize])
            }
        }
    }
    if ways[n as usize] != i32::MAX {
        ways[n as usize]
    } else {
        -1
    }
}

fn main() {
    let denoms: Vec<u32> = vec![1, 5, 10, 2, 3];
    let n: i32 = 6;
    let result: i32 = min_number_of_change(n, denoms);
    println!("Result: {}", result);
}
Playground for the above code
Very naively, I've tried replacing i32 with f32 and using the min method for the float comparisons. When running it, the compiler complains about mismatched types: cannot subtract f32 from usize, and the type [f32] cannot be indexed by f32. I think I'm missing some very fundamental points.
fn min_number_of_change(n: f32, denoms: Vec<f32>) -> f32 {
    let mut ways: Vec<f32> = vec![f32::INFINITY; n + 1.0];
    ways[0] = 0.0;
    for denom in denoms.iter() {
        for current in 0..ways.len() {
            if *denom <= current {
                ways[current] = (ways[current].min(1 + ways[current - *denom]), 1 + ways[current - *denom])
            }
        }
    }
    if ways[n] != f32::INFINITY {
        ways[n]
    } else {
        -1.0
    }
}

fn main() {
    let denoms: Vec<f32> = vec![2.00, 1.00, 0.50, 0.20, 0.10, 0.05, 0.02, 0.01];
    let n: f32 = 4.55;
    let result: f32 = min_number_of_change(n, denoms);
    println!("Result: {}", result);
}
Playground for the above code
The first error is simple to fix: instead of current - *denom you can write current as f32 - *denom.
But that leaves us with the second error, which is harder.
You can't index a Vec with an f32, because that would mean accessing e.g. the 0.5th element, an element half an element's width away from the first one. This is simply not possible.
Your best bet is probably to use integers and treat them as fixed-point values, i.e. 1 means 0.01 in your currency, etc.
fn main() {
    let denoms: Vec<i32> = [2.00, 1.00, 0.50, 0.20, 0.10, 0.05, 0.02, 0.01]
        .into_iter()
        .map(|v| (v * 100.0).round() as i32) // round: e.g. 4.55 * 100.0 is 454.999… in f64
        .collect();
    let n = (4.55_f64 * 100.0).round() as i32;
    let result = min_number_of_change(n, denoms) as f32 / 100.0;
    println!("Result: {}", result);
}
and use the integer version of min_number_of_change
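Putting that together, a minimal sketch of the fixed-point version (assuming all amounts are in cents, with a guard added so 1 + i32::MAX cannot overflow):

use std::cmp;

fn min_number_of_change(n: i32, denoms: Vec<i32>) -> i32 {
    let mut ways: Vec<i32> = vec![i32::MAX; n as usize + 1];
    ways[0] = 0;
    for denom in denoms.iter() {
        let d = *denom as usize;
        for current in 0..ways.len() {
            // skip unreachable sub-amounts so `1 + i32::MAX` cannot overflow
            if d <= current && ways[current - d] != i32::MAX {
                ways[current] = cmp::min(ways[current], 1 + ways[current - d]);
            }
        }
    }
    if ways[n as usize] != i32::MAX { ways[n as usize] } else { -1 }
}

fn main() {
    let denoms: Vec<i32> = [2.00, 1.00, 0.50, 0.20, 0.10, 0.05, 0.02, 0.01]
        .into_iter()
        .map(|v| (v * 100.0).round() as i32)
        .collect();
    let n = (4.55_f64 * 100.0).round() as i32; // 455 cents
    println!("Result: {}", min_number_of_change(n, denoms)); // 4 (2.00 + 2.00 + 0.50 + 0.05)
}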

How to use rayon to update a personal struct containing an Array in Rust

CONTEXT
General overview
(Here is the github page with the minimal example of my problem, and the page of my whole project)
I'm very new to Rust and I'm trying to simulate the behavior of a fluid. Conceptually this is simple: compute large arrays with some functions at each timestep.
I'd like to parallelize the per-timestep computations using rayon, but the compiler doesn't want me to modify the Array inside a mutable struct, even though I'm sure every modification touches a different place in the array, which (I think?) makes it safe.
So my question is: should I use unsafe rust in here? If so, how?
And: is it possible to make Rust understand that it's safe, or to do it properly without unsafe rust?
I saw this post which kind of resembled my problem, but couldn't find a way to use the solution for my problem.
I also tried to put the unsafe {...} keyword everywhere, but the compiler still complains...
You may only need to read the "Structs" subsection to understand the problem, but I will also include a "Function" subsection in case it matters. If you think it isn't necessary, you can skip ahead to the "Main function" subsection.
Structs
Here are my structs:
I'd like to keep them that way, as they would give me (I think) more flexibility with setters/getters, but I'm open to change the way it's implemented right now.
#[derive(Debug, PartialEq)]
struct vec2D { pub x: f64, pub y: f64 }

#[derive(Debug, PartialEq)]
struct ScalarField2D {
    s: Array2<f64>,
}

#[derive(Debug, PartialEq)]
struct VectorField2D {
    x: ScalarField2D,
    y: ScalarField2D,
}

impl ScalarField2D {
    // also a constructor new() but not shown for simplicity
    fn get_pos(&self, x: usize, y: usize) -> f64 {
        return self.s[[y, x]];
    }

    fn set_pos(&mut self, x: usize, y: usize, f: f64) {
        self.s[[y, x]] = f;
    }
}

impl VectorField2D {
    // also a constructor new() but not shown for simplicity
    fn get_pos(&self, x: usize, y: usize) -> vec2D {
        let vec_at_pos = vec2D {
            x: self.x.get_pos(x, y),
            y: self.y.get_pos(x, y),
        };
        return vec_at_pos;
    }

    fn set_pos(&mut self, x: usize, y: usize, vec: &vec2D) {
        self.x.set_pos(x, y, vec.x);
        self.y.set_pos(x, y, vec.y);
    }
}
Function
Here is my function, which takes a ScalarField2D struct, computes the gradient vector at a particular position of its array, and returns that vector as a vec2D struct.
// computes the gradient of a scalar field at a given position
fn grad_scalar(a: &ScalarField2D,
               x: i32, y: i32,
               x_max: i32, y_max: i32) -> vec2D
{
    // i+1 with Periodic Boundaries
    let ip = ((x + 1) % x_max) as usize;
    // i-1 with Periodic Boundaries
    let im = ((x - 1 + x_max) % x_max) as usize;
    // j+1 with Periodic Boundaries
    let jp = ((y + 1) % y_max) as usize;
    // j-1 with Periodic Boundaries
    let jm = ((y - 1 + y_max) % y_max) as usize;
    let (i, j) = (x as usize, y as usize);

    let grad = vec2D {
        x: (a.get_pos(ip, j) - a.get_pos(im, j)) / (2.),
        y: (a.get_pos(i, jp) - a.get_pos(i, jm)) / (2.),
    };
    return grad;
}
Main function
Here is my problem:
I try to iterate over all possible x and y using (0..x_max).into_par_iter() (and likewise y_max for y), compute the gradient at each position, and then write the value into the VectorField2D struct using the set_pos method. Here is the main function with its imports; the error message I get is shown in the next subsection.
use ndarray::prelude::*;
use rayon::prelude::*;

fn main() {
    let (x_max, y_max) = (2usize, 50usize);
    let (x_maxi32, y_maxi32) = (x_max as i32, y_max as i32);
    let mut GD_grad_rho = VectorField2D::new(x_max, y_max);
    let mut GD_rho = ScalarField2D::new(x_max, y_max);

    let x_iterator = (0..x_max).into_par_iter();
    x_iterator.map(|xi| {
        let y_iterator = (0..y_max).into_par_iter();
        y_iterator.map(|yi| {
            // unsafe here?
            GD_grad_rho
                .set_pos(xi, yi,
                         &grad_scalar(&GD_rho,
                                      xi as i32, yi as i32,
                                      x_maxi32, y_maxi32));
        });
    });
}
Error message
Here is the compilation error I get
error[E0596]: cannot borrow `GD_grad_rho` as mutable, as it is a captured variable in a `Fn` closure
--> src/main.rs:104:13
|
104 | / GD_grad_rho
105 | | .set_pos(xi, yi,
106 | | &grad_scalar(&GD_rho,
107 | | xi as i32, yi as i32,
108 | | x_maxi32, y_maxi32));
| |__________________________________________________________^ cannot borrow as mutable
error[E0596]: cannot borrow `GD_grad_rho` as mutable, as it is a captured variable in a `Fn` closure
--> src/main.rs:101:24
|
101 | y_iterator.map(|yi| {
| ^^^^ cannot borrow as mutable
...
104 | GD_grad_rho
| ----------- mutable borrow occurs due to use of `GD_grad_rho` in closure
For more information about this error, try `rustc --explain E0596`.
error: could not compile `minimal_example_para` due to 2 previous errors
If you want the complete thing, I created a github repo with everything in it.
Tests after susitsm's answer
So I did something like this:
use ndarray::prelude::*;
use rayon::prelude::*;

fn grad_scalar(a: &Array2<f64>,
               i: usize, j: usize) -> (f64, f64)
{
    let array_shape = a.shape();
    let imax = array_shape[0];
    let jmax = array_shape[1];

    // i+1 with Periodic Boundaries
    let ip = (i + 1) % imax;
    // i-1 with Periodic Boundaries
    let im = ((i + imax) - 1) % imax;
    // j+1 with Periodic Boundaries
    let jp = (j + 1) % jmax;
    // j-1 with Periodic Boundaries
    let jm = ((j + jmax) - 1) % jmax;

    let grad_i = (a[[ip, j]] - a[[im, j]]) / 2.;
    let grad_j = (a[[i, jp]] - a[[i, jm]]) / 2.;
    return (grad_i, grad_j);
}

fn main() {
    let dim = 50; // added so the example compiles
    let a = Array::<f64, Ix2>::ones((dim, dim));

    let index_list: Vec<(_, _)> = (0..a.len())
        .into_par_iter()
        .map(|i| (i / a.dim().0, i % a.dim().1))
        .collect();

    let (r1, r2): (Vec<_>, Vec<_>) = (0..a.len())
        .into_par_iter()
        .map(|i| (index_list[i].0, index_list[i].1))
        .map(|(i, j)| grad_scalar(&a, i, j))
        .collect();

    let grad_row = Array2::from_shape_vec(a.dim(), r1).unwrap();
    let grad_col = Array2::from_shape_vec(a.dim(), r2).unwrap();
}
This gives me the result I want, even though I'd rather access the values through my structs. So it's not exactly what I want, but we're getting closer.
I wonder about the efficiency with more arrays, though; I'll post a separate question for that.
You can do something like this:
use ndarray::Array2;
use rayon::prelude::*;

fn main() {
    let a: Vec<u64> = (0..20000).collect();
    let a = Array2::from_shape_vec((100, 200), a).unwrap();

    // stand-in for grad_scalar; returns one value per output field
    let stuff = |_arr: &Array2<u64>, i: usize, j: usize| ((i + j) as f64, (i * j) as f64);

    let (x, y): (Vec<_>, Vec<_>) = (0..a.len())
        .into_par_iter()
        // row-major: the row is i / ncols and the column is i % ncols
        .map(|i| (i / a.dim().1, i % a.dim().1))
        .map(|(i, j)| stuff(&a, i, j))
        .collect();

    let grad_x = Array2::from_shape_vec(a.dim(), x).unwrap();
    let grad_y = Array2::from_shape_vec(a.dim(), y).unwrap();

    let grad_vector_field = VectorField2D {
        x: ScalarField2D { s: grad_x },
        y: ScalarField2D { s: grad_y },
    };
}
Or implement the FromParallelIterator<vec2D>
impl FromParallelIterator<vec2D> for VectorField2D {
    fn from_par_iter<I>(par_iter: I) -> Self
        where I: IntoParallelIterator<Item = vec2D>
    {
        let (x, y): (Vec<_>, Vec<_>) = par_iter
            .into_par_iter()
            .map(|vec_2d| {
                let vec2D { x, y } = vec_2d;
                (x, y)
            })
            .collect();
        // note: `a.dim()` is a placeholder here; the target shape must be
        // known some other way, e.g. stored in the type or passed along
        Self {
            x: ScalarField2D { s: Array2::from_shape_vec(a.dim(), x).unwrap() },
            y: ScalarField2D { s: Array2::from_shape_vec(a.dim(), y).unwrap() },
        }
    }
}
This will enable using collect for your type when using parallel iterators
let vector_field: VectorField2D = (0..a.len())
    .into_par_iter()
    .map(|i| (index_list[i].0, index_list[i].1))
    .map(|(i, j)| grad_scalar(&a, i, j))
    .collect();
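As an aside, if writing in place is the goal, ndarray's own rayon integration (assuming the crate is built with its rayon feature, ndarray 0.15 or later) hands each output element to a closure mutably, which avoids both unsafe and the intermediate Vecs. A minimal sketch for one gradient component:

use ndarray::{Array2, Zip};

fn main() {
    let dim = 50;
    let rho = Array2::<f64>::ones((dim, dim));
    let mut grad_x = Array2::<f64>::zeros((dim, dim));

    // each closure call has exclusive access to one output element,
    // so no two threads ever write to the same place
    Zip::indexed(&mut grad_x).par_for_each(|(i, j), g| {
        let ip = (i + 1) % dim;       // i+1, periodic
        let im = (i + dim - 1) % dim; // i-1, periodic
        *g = (rho[[ip, j]] - rho[[im, j]]) / 2.0;
    });
}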

Why am I getting unexpected colors?

I am trying to create gradient blobs. I supply the gen function with a Vec<ColorPoint>, and each point should go from maximum intensity at its center to no effect at its radius. The main problem is that I am not getting the colors I expect in the output image.
use image::{ImageBuffer, RgbImage, Rgb};
use std::ops::Add;

/// Color with red, green, blue and alpha channels between 0.0 and 1.0
#[derive(Debug)]
struct RgbaColor {
    r: f64,
    g: f64,
    b: f64,
    a: f64,
}

impl RgbaColor {
    fn to_rgb(&self) -> Rgb<u8> {
        let r = (self.r * 255.0) as u8;
        let g = (self.g * 255.0) as u8;
        let b = (self.b * 255.0) as u8;
        Rgb::from([r, g, b])
    }
}

impl Add for RgbaColor {
    type Output = Self;

    fn add(self, fg: Self) -> Self {
        // self is the background and fg is the foreground
        let new_alpha = fg.a + self.a * (1.0 - fg.a);
        Self {
            r: (fg.r * fg.a + self.r * self.a * (1.0 - fg.a)) / new_alpha,
            g: (fg.g * fg.a + self.g * self.a * (1.0 - fg.a)) / new_alpha,
            b: (fg.b * fg.a + self.b * self.a * (1.0 - fg.a)) / new_alpha,
            a: new_alpha,
        }
    }
}

#[derive(Debug)]
struct ColorPoint {
    color: RgbaColor,
    center: (u32, u32),
    radius: u32,
}

impl ColorPoint {
    fn rgba_color_at_point(&self, x: u32, y: u32) -> RgbaColor {
        let x_dist = x as f64 - self.center.0 as f64;
        let y_dist = y as f64 - self.center.1 as f64;
        let dist = (x_dist.powf(2.0) + y_dist.powf(2.0)).sqrt();
        RgbaColor {
            r: self.color.r,
            g: self.color.g,
            b: self.color.b,
            a: self.color.a * dist / self.radius as f64,
        }
    }
}

fn main() {
    let color_points = vec![
        ColorPoint { color: RgbaColor { r: 1.0, g: 1.0, b: 0.0, a: 1.0 }, center: (0, 50), radius: 20 },
        ColorPoint { color: RgbaColor { r: 1.0, g: 0.0, b: 0.0, a: 1.0 }, center: (10, 0), radius: 30 },
    ];
    gen(color_points)
}

fn gen(color_points: Vec<ColorPoint>) {
    let geometry = (50, 100);
    let mut background: RgbImage = ImageBuffer::new(geometry.0 as u32, geometry.1 as u32);

    for (x, y, pixel) in background.enumerate_pixels_mut() {
        let mut curr_color = RgbaColor { r: 0.0, g: 0.0, b: 0.0, a: 1.0 }; // hardcoded background color
        for color_point in color_points.iter() {
            curr_color = curr_color + color_point.rgba_color_at_point(x, y);
        }
        *pixel = curr_color.to_rgb();
    }
    background.save("image.png").unwrap();
}
Output:
This code almost does what I expect, although the positions of the yellow and red blobs seem to have swapped, and when I change the hardcoded background color to white, RgbaColor { r: 1.0, g: 1.0, b: 1.0, a: 1.0 }, I get a magenta background.
I'm not sure whether my color model is wrong or if it's something else, because when adding individual RgbaColors I get the correct colors.
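For what it's worth, the Add impl does match the standard source-over compositing formula, which is easy to sanity-check in isolation (hypothetical values):

fn main() {
    let bg = RgbaColor { r: 0.0, g: 0.0, b: 0.0, a: 1.0 };
    let fg = RgbaColor { r: 1.0, g: 0.0, b: 0.0, a: 0.5 };
    let out = bg + fg;
    // half-transparent red over opaque black: half-intensity red, still opaque
    assert!((out.r - 0.5).abs() < 1e-9);
    assert!((out.a - 1.0).abs() < 1e-9);
}

Note, though, that rgba_color_at_point scales alpha up with dist, so a point's influence peaks at its radius rather than at its center, which may be related to the swapped-looking blobs.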

My test fails at "attempt to subtract with overflow"

use itertools::Itertools;

#[derive(Copy, Clone, PartialEq, Eq, PartialOrd, Ord)]
struct Runner {
    sec: u16,
}

impl Runner {
    fn from(v: (u8, u8, u8)) -> Runner {
        Runner {
            sec: v.0 as u16 * 3600 + v.1 as u16 * 60 + v.2 as u16,
        }
    }
}

fn parse_runner(strg: &str) -> Vec<Runner> {
    strg.split(", ")
        .flat_map(|personal_result| personal_result.split('|'))
        .map(|x| x.parse::<u8>().unwrap())
        .tuples::<(_, _, _)>()
        .map(|x| Runner::from(x))
        .sorted()
        .collect::<Vec<Runner>>()
}

fn parse_to_format(x: u16) -> String {
    let h = x / 3600;
    let m = (x - 3600) / 60;
    let s = x % 60;
    format!("{:02}|{:02}|{:02}", h, m, s)
}

fn return_stats(runners: &[Runner]) -> String {
    let range: u16 = runners.last().unwrap().sec - runners.first().unwrap().sec;
    let average: u16 = runners.iter().map(|&r| r.sec).sum::<u16>() / (runners.len() as u16);
    let median: u16 = if runners.len() % 2 != 0 {
        runners.get(runners.len() / 2).unwrap().sec
    } else {
        runners.get(runners.len() / 2).unwrap().sec / 2 + runners.get((runners.len() / 2) + 1).unwrap().sec / 2
    };
    format!("Range: {} Average: {} Median: {}", parse_to_format(range), parse_to_format(average), parse_to_format(median))
}

fn stati(strg: &str) -> String {
    let run_vec = parse_runner(strg);
    return_stats(&run_vec)
}
I can't find the mistake with the supposed subtraction that keeps my code from passing the test. Basically, I'm trying to start with a &str like "01|15|59, 1|47|6, 01|17|20, 1|32|34, 2|3|17" and end up with another one like "Range: 00|47|18 Average: 01|35|15 Median: 01|32|34".
Sorry in advance if the mistake is really stupid; I've been trying to fix it for quite a while.
https://www.codewars.com/kata/55b3425df71c1201a800009c/train/rust
let m = (x - 3600) / 60;
As Peter mentioned, that will indeed overflow if x is less than 3600; a u16 cannot be negative. What you presumably meant is the remainder: let m = (x % 3600) / 60;.
Using integer arithmetic, here's another way of formatting seconds to hh|mm|ss that does not experience overflow:
fn seconds_to_hhmmss(mut s: u64) -> String {
    let h = s / 3600;
    s -= h * 3600;
    let m = s / 60;
    s -= m * 60;
    format!("{:02}|{:02}|{:02}", h, m, s)
}
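A quick check of that helper:

fn main() {
    assert_eq!(seconds_to_hhmmss(3661), "01|01|01");
    assert_eq!(seconds_to_hhmmss(59), "00|00|59");
}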
