I'm trying to just add a character into the terminal by simply doing
use ncurses::*;

fn main() {
    initscr();
    addch('#');
    endwin();
}
but I'm getting the following error:
error[E0308]: mismatched types
--> src/main.rs:15:11
|
15 | addch('#');
| ^^^ expected `u32`, found `char`
I checked the documentation and it says that it takes a chtype, so I figured that was just a character? I'm not sure how I am supposed to change a character to a u32. What is the difference between a char and a chtype?
Looking at the source code, chtype is defined as an alias for u64 (when the wide_chtype feature is enabled) or u32 (when it is not):
#[cfg(feature="wide_chtype")]
pub type chtype = u64;
#[cfg(not(feature="wide_chtype"))]
pub type chtype = u32;
To solve the error, you can cast '#' to chtype.
use ncurses::*;

fn main() {
    initscr();
    addch('#' as chtype);
    endwin();
}
chtype holds more than a character, as described in the ncurses manual page:
ncurses
the "normal" library, which handles 8-bit characters. The normal (8-bit) library stores characters combined with attributes in chtype data.
Attributes alone (no corresponding character) may be stored in chtype or the equivalent attr_t data. In either case, the data is stored in something like an integer.
For more information, the waddch manual page elaborates:
Video attributes can be combined with a character argument passed to addch or related functions by logical-ORing them into the character. (Thus, text, including attributes, can be copied from one place to another using inch(3x) and addch.) See the curs_attr(3x) page for values of predefined video attribute constants that can be usefully OR'ed into characters.
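As a rough sketch of that (assuming the ncurses crate exposes A_BOLD() with the same chtype-compatible representation as the C library), combining an attribute with a character could look like this:
use ncurses::*;

fn main() {
    initscr();
    // Assumption: A_BOLD() shares the chtype representation, as in the C
    // library, so it can be OR'ed directly into the character.
    addch('#' as chtype | A_BOLD());
    endwin();
}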
Related
I am trying to parse a single char variable into its ASCII value, but I keep getting an error.
Based on hansaplast's answer from this post, Parsing a char to u32, I thought this code should work:
let char_variable = 'a';
let should_be_u32_variable = char_variable.to_digit(10).unwrap();
But this code always panics with this error:
thread 'main' panicked at 'called Option::unwrap() on a None value'
For this code example (the one provided in hansaplast's answer):
let a = "29";
for c in a.chars() {
    println!("{:?}", c.to_digit(10));
}
the .to_digit() method works.
In both cases I am calling .to_digit(10) on variables of type char, but my example throws an error while hansaplast's code works. Can someone explain the difference between these examples and what I am doing wrong? I am quite confused now.
Both examples can be found here: Rust playground example
Would using a cast be OK in this case?
let c = 'a';
let u = c as u32 - 48;
If not, can you tell me what the recommended way of doing this is?
Okay, I think you are confusing type casting and integer parsing.
to_digit is an integer parsing method. It takes the character and, given a radix, determines its value in that base. So '5' in base 10 is 5 and is stored as 00000101, while "11" read in base 15 is 16 (1 × 15 + 1) and is stored in memory as 00010000.
Type casting of primitives in Rust, like 'c' as u32, is probably more what you are after. It's distinct from integer parsing in the sense that you don't care about the "meaning" of the number; what you care about is the value of the bits that represent it in memory. This means that the character 'c' is stored as 1100011 (99).
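A small sketch of the difference between the two:
fn main() {
    // Integer parsing: interpret the character as a digit in a given base.
    assert_eq!('7'.to_digit(10), Some(7));
    assert_eq!('a'.to_digit(10), None);     // not a decimal digit, so unwrap() panics
    assert_eq!('a'.to_digit(16), Some(10)); // but it is a hexadecimal digit

    // Type casting: take the character's code point value as-is.
    assert_eq!('a' as u32, 97);
}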
If you only care about ASCII characters, you should also check char::is_ascii() before doing your conversion. That way you can store your results in a u8 instead of a u32:
fn print_ascii_values_of_characters(string: &str) {
    for c in string.chars() {
        if c.is_ascii() {
            println!("{:b}", c as u8); // :b prints the binary representation
        }
    }
}
Why does this work?
fn main() {
    println!("{:.3}", "this is just a test");
}
prints => thi
While this doesn't?
fn main() {
    println!("{:.3}", format_args!("this is just a test"));
}
prints => this is just a test
Here's a playground.
For a little more context, I’m interested in the reasoning behind it, and a way to do it without any allocations.
I'm developing a terminal game in Rust, where I have a write! which shows some statistics about the rendering and game loop, and that text can be quite long. Now that I read the terminal size and adjust its output accordingly, I need to truncate that output, but without any allocations.
I thought I was super clever when I refactored this:
write!(
    stdout,
    "{} ({} {} {}) {}",
    ...
)
into this:
write!(
    stdout,
    "{:.10}", // simulate only 10 cols in terminal.
    format_args!(
        "{} ({} {} {}) {}",
        ...
    )
)
How unfortunate, it doesn’t work… How to do that without allocating a String?
For one thing, not every type obeys all formatting arguments:
println!("{:.3}", 1024);
1024
Second, format_args! serves as the backbone for all of the std::fmt utilities. From the docs on format_args:
This macro functions by taking a formatting string literal containing {} for each additional argument passed. format_args! prepares the additional parameters to ensure the output can be interpreted as a string and canonicalizes the arguments into a single type. Any value that implements the Display trait can be passed to format_args!, as can any Debug implementation be passed to a {:?} within the formatting string.
This macro produces a value of type fmt::Arguments. This value can be passed to the macros within std::fmt for performing useful redirection. All other formatting macros (format!, write!, println!, etc) are proxied through this one. format_args!, unlike its derived macros, avoids heap allocations.
You can use the fmt::Arguments value that format_args! returns in Debug and Display contexts as seen below. The example also shows that Debug and Display format to the same thing: the interpolated format string in format_args!.
let debug = format!("{:?}", format_args!("{} foo {:?}", 1, 2));
let display = format!("{}", format_args!("{} foo {:?}", 1, 2));
assert_eq!("1 foo 2", display);
assert_eq!(display, debug);
Looking at the source for impl Display for Arguments, it just ignores any formatting parameters. I couldn't find this explicitly documented anywhere, but I can think of a couple reasons for this:
The arguments are already considered formatted. If you really want to format a formatted string, use format! instead.
Since it's used internally for multiple purposes, it's probably better to keep this part simple; it's already doing the formatting heavy-lifting. Attempting to make the thing responsible for formatting arguments itself accept formatting parameters sounds needlessly complicated.
I'd really like to truncate some output without allocating any Strings; would you know how to do it?
You can write to a fixed-size buffer:
use std::io::{Write, ErrorKind, Result};
use std::fmt::Arguments;

fn print_limited(args: Arguments<'_>) -> Result<()> {
    const BUF_SIZE: usize = 3;
    let mut buf = [0u8; BUF_SIZE];
    let mut buf_writer = &mut buf[..];

    let written = match buf_writer.write_fmt(args) {
        // successfully wrote into the buffer, determine amount written
        Ok(_) => BUF_SIZE - buf_writer.len(),
        // a "failed to write whole buffer" error occurred, meaning there was
        // more to write than there was space for; return the entire size.
        Err(error) if error.kind() == ErrorKind::WriteZero => BUF_SIZE,
        // something else went wrong
        Err(error) => return Err(error),
    };

    // Pick a way to print `&buf[..written]`
    println!("{}", std::str::from_utf8(&buf[..written]).unwrap());

    Ok(())
}

fn main() {
    print_limited(format_args!("this is just a test")).unwrap();
    print_limited(format_args!("{}", 123)).unwrap();
    print_limited(format_args!("{}", 'a')).unwrap();
}
thi
123
a
This was actually more involved than I originally thought. There might be a cleaner way to do this.
I found this wording here:
For non-numeric types, this can be considered a "maximum width". If the resulting string is longer than this width, then it is truncated down to this many characters and that truncated value is emitted with proper fill, alignment and width if those parameters are set.
For integral types, this is ignored.
For floating-point types, this indicates how many digits after the decimal point should be printed.
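A short sketch of those three cases:
fn main() {
    println!("{:.3}", "hello");  // hel   (maximum width for strings)
    println!("{:.3}", 1024);     // 1024  (ignored for integers)
    println!("{:.3}", 3.14159);  // 3.142 (digits after the decimal point)
}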
And format_args!'s return type is std::fmt::Arguments, which is not a String, even though it looks like one.
If you want to get the same printed contents, I think this code will work:
// Arguments::as_str() was unstable at the time of writing
println!("{:.3}", format_args!("this is just a test").as_str().unwrap());
println!("{:.3}", format_args!("this is just a test").to_string().as_str());
This extremely simple Rust program:
fn main() {
    let c = "hello";
    println!(c);
}
throws the following compile-time error:
error: expected a literal
--> src/main.rs:3:14
|
3 | println!(c);
| ^
In previous versions of Rust, the error said:
error: format argument must be a string literal.
println!(c);
^
Replacing the program with:
fn main() {
    println!("Hello");
}
Works fine.
The meaning of this error isn't clear to me and a Google search hasn't really shed light on it. Why does passing c to the println! macro cause a compile time error? This seems like quite unusual behaviour.
This should work:
fn main() {
    let c = "hello";
    println!("{}", c);
}
The string "{}" is a template where {} will be replaced by the next argument passed to println!.
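For example (positional and named arguments work as well):
fn main() {
    let name = "world";
    let count = 3;
    println!("hello, {}! ({} times)", name, count); // each {} takes the next argument
    println!("{0}, {0}, {1}", name, count);         // positional arguments
    println!("{who} says hi", who = name);          // named arguments
}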
TL;DR If you don't care why and just want to fix it, see the sibling answer.
The reason that
fn main() {
    let c = "hello";
    println!(c);
}
cannot work is that the println! macro looks at the string at compile time and verifies that the arguments and argument specifiers match in number and type (this is a very good thing!). At this point in time, during macro evaluation, it's not possible to tell whether c came from a literal or a function or what have you.
Here's an example of what the macro expands out to:
let c = "hello";
match (&c,) {
    (__arg0,) => {
        #[inline]
        #[allow(dead_code)]
        static __STATIC_FMTSTR: &'static [&'static str] = &[""];
        ::std::io::stdio::println_args(&::std::fmt::Arguments::new(
            __STATIC_FMTSTR,
            &[::std::fmt::argument(::std::fmt::Show::fmt, __arg0)]
        ))
    }
};
I don't think that it's actually impossible for the compiler to figure this out, but it would probably take a lot of work with potentially little gain. Macros operate on portions of the AST and the AST only has type information. To work in this case, the AST would have to include the source of the identifier and enough information to determine that it's acceptable to be used as a format string. In addition, it might interact poorly with type inference - you'd want to know the type before it's even been picked!
The error message asks for a "string literal". The question What does the word "literal" mean? asks about exactly that and links to the Wikipedia entry:
a literal is a notation for representing a fixed value in source code
"foo" is a string literal, 8 is a numeric literal. let s = "foo" is a statement that assigns the value of a string literal to an identifier (variable). println!(s) is a statement that provides an identifier to the macro.
If you really want to define the first argument of println! in one place, I found a way to do it. You can use a macro:
macro_rules! hello {() => ("hello")};
println!(hello!());
Doesn't look too useful here, but I wanted to use the same formatting in a few places, and in this case the method was very helpful:
macro_rules! cell_format {() => ("{:<10}")}; // Pads with spaces on right
// to fill up 10 characters
println!(cell_format!(), "Foo");
println!(cell_format!(), 456);
The macro saved me from having to duplicate the formatting option in my code.
You could also, obviously, make the macro more fancy and take arguments if necessary to print different things with different arguments.
If your format string will be reused only a moderate number of times, and only some variable data will be changed, then a small function may be a better option than a macro:
fn pr(x: &str) {
    println!("Some stuff that will always repeat, something variable: {}", x);
}

pr("I am the variable data");
Outputs
Some stuff that will always repeat, something variable: I am the variable data
I am making a function that builds an array of n random numbers, but the comparison in my while loop throws an error.
while ar.len() as i32 < size { }
Complains with: expected one of !, (, +, ,, ::, <, or >, found {.
If I remove the as i32 it complains about mismatched types, and if I add as usize to the size variable then it doesn't complain.
When you cast from a smaller-sized type to a larger one, you won't lose any data, but the data will now take up more space.
When you cast from a larger-sized type to a smaller one, you might lose some of your data, but the data will take up less space.
Pretend I have a box of size 1 that can hold the numbers 0 to 9 and another box of size 2 that can hold the numbers 0 to 99.
If I want to store the number 7; both boxes will work, but I will have space left over if I use the larger box. I could move the value from the smaller box to the larger box without any trouble.
If I want to store the number 42; only one box can fit the number: the larger one. If I try to take the number and cram it in the smaller box, something will be lost, usually the upper parts of the number. In this case, my 42 would be transformed into a 2! Oops!
In addition, signedness plays a role; when you cast between signed and unsigned numbers, you might be incorrectly interpreting the value, as a number like -1 becomes 255!
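A small sketch of both effects (the concrete numbers are only illustrative):
fn main() {
    // Truncating to a smaller type keeps only the low bits: 300 % 256 == 44.
    let big: i32 = 300;
    assert_eq!(big as u8, 44);

    // Reinterpreting between signed and unsigned: the bit pattern of -1i8
    // (0xFF) reads as 255 when viewed as a u8.
    let x: i8 = -1;
    assert_eq!(x as u8, 255);
}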
See also:
How do I convert between numeric types safely and idiomatically?
In this particular case, it's a bit more complicated. A usize is defined to be a "pointer-sized integer", which is usually the native size of the machine. On a 64-bit x64 processor, that means a usize is 64 bits, and on a 32-bit x86 processor, it will be 32 bits.
Casting a usize to an i32 will thus operate differently depending on what type of machine you are running on.
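A quick way to check this on whatever machine the code runs on:
fn main() {
    // Prints 8 on a 64-bit target and 4 on a 32-bit target.
    println!("{}", std::mem::size_of::<usize>());
}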
The error message you get is because the code you've tried isn't syntactically correct, and the compiler isn't giving a good error message.
You really want to type
while (ar.len() as i32) < size { }
The parentheses help the precedence be applied properly.
To be on the safe side, I'd cast to the larger value:
while ar.len() < size as usize { }
See also:
How do I convert a usize to a u32 using TryFrom?
How to idiomatically convert between u32 and usize?
Why is type conversion from u64 to usize allowed using `as` but not `From`?
It seems that your size is of type i32. You either need parentheses:
while (ar.len() as i32) < size { }
or cast size to usize:
while ar.len() < size as usize { }
since len() returns a usize and the types on both sides of the comparison need to match. You need the parentheses in the first case so that the comparison isn't read as comparing i32 with size, but rather ar.len() as i32 with size, which is your intention.
I have a string and I need to scan for every occurrence of "foo" and read all the text following it until a second ". Since Rust does not have a contains function for strings, I need to iterate by characters scanning for it. How would I do this?
Edit: Rust's &str has a contains() and find() method.
I need to iterate by characters scanning for it.
The .chars() method returns an iterator over characters in a string. e.g.
for c in my_str.chars() {
    // do something with `c`
}

for (i, c) in my_str.chars().enumerate() {
    // do something with character `c` and index `i`
}
If you are interested in the byte offsets of each char, you can use char_indices.
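For instance (note that the index is a byte offset, not a character count):
fn main() {
    for (byte_offset, c) in "héllo".char_indices() {
        // 'é' takes two bytes in UTF-8, so the offsets are 0, 1, 3, 4, 5.
        println!("{}: {}", byte_offset, c);
    }
}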
Look into .peekable(), and use peek() for looking ahead. It's wrapped like this because it supports UTF-8 codepoints instead of being a simple vector of characters.
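A rough sketch of using peek() to look one character ahead:
fn main() {
    let mut chars = "abc".chars().peekable();
    while let Some(c) = chars.next() {
        // peek() returns a reference to the next character without consuming it.
        match chars.peek() {
            Some(next) => println!("{} is followed by {}", c, next),
            None => println!("{} is the last character", c),
        }
    }
}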
You could also create a vector of chars and work on it from there, but that's more time and space intensive:
let my_chars: Vec<_> = mystr.chars().collect();
The concept of a "character" is very ambiguous and can mean many different things depending on the type of data you are working with. The most obvious answer is the chars method. However, this does not work as advertised. What looks like a single "character" to you may actually be made up of multiple Unicode code points, which can lead to unexpected results:
"a̐".chars() // => ['a', '\u{310}']
For a lot of string processing, you want to work with graphemes. A grapheme consists of one or more unicode code points represented as a string slice. These map better to the human perception of "characters". To create an iterator of graphemes, you can use the unicode-segmentation crate:
use unicode_segmentation::UnicodeSegmentation;

for grapheme in my_str.graphemes(true) {
    // ...
}
If you are working with raw ASCII then none of the above applies to you, and you can simply use the bytes iterator:
for byte in my_str.bytes() {
    // ...
}
Although, if you are working with ASCII then arguably you shouldn't be using String/&str at all and instead use Vec<u8>/&[u8] directly.
fn main() {
    let s = "Rust is a programming language";
    for i in s.chars() {
        print!("{}", i);
    }
}
Output: Rust is a programming language
I use the chars() method to iterate over each element of the string.