Rust ncurses: garbled Chinese characters and setlocale doesn't work

I am trying to print Chinese characters on an ncurses screen:
log::debug!("row str {}", row_str);
ncurses::addstr(&row_str);
The variable row_str displays correctly as a parameter of log::debug!, but gets garbled when printed with ncurses::addstr, like this:
中彖~G潛®弾U // it should be "中文目录"
I've tried to fix it with the following three methods, but none of them works.
// method 1
gettextrs::setlocale(gettextrs::LocaleCategory::LcAll, "");
// method 2
use libc::setlocale;
unsafe {
    setlocale(0, "".as_bytes().as_ptr() as *const i8);
}
//method 3
ncurses::setlocale(ncurses::constants::LcCategory::all, "")
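An aside on method 2: `"".as_bytes().as_ptr()` does not produce a NUL-terminated C string, so libc's setlocale may read past the end of the buffer; std::ffi::CString appends the terminator for you. Also note that ncurses only honors the locale if it is set before initscr() is called. A minimal std-only sketch of building the empty locale string safely (the ncurses calls themselves are left out):

```rust
use std::ffi::CString;

fn main() {
    // CString appends the NUL terminator that a raw "".as_bytes()
    // pointer lacks; this is what a C setlocale(LC_ALL, "") expects.
    let empty_locale = CString::new("").unwrap();
    assert_eq!(empty_locale.as_bytes_with_nul(), b"\0");

    // The pointer that would be passed to C:
    let _ptr: *const std::os::raw::c_char = empty_locale.as_ptr();
}
```

With this, method 2 becomes setlocale(libc::LC_ALL, empty_locale.as_ptr()), called before initscr(), using libc::LC_ALL rather than a hard-coded 0.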

Related

How to handle multiple user inputs into a match?

I'm new to programming and I got stuck while trying to get the user's input in a match expression. I read the documentation but didn't find the answer to my problem.
The idea is simple: the user chooses a game mode (Human Vs Human or Human Vs Machine), then the program asks for each player's name. I don't get any error, but the program only lets me enter the second name (player_2), not the first one (player_1). It's as if the program is not stopping for me to enter the first player's name.
Here is the code I wrote:
use std::io;
use text_io::read;

fn player_input() -> String {
    let mut input = String::new();
    io::stdin()
        .read_line(&mut input)
        .expect("Wrong input");
    return input;
}
fn main() {
    println!("GAME MODE");
    println!("1. Human Vs Human");
    println!("2. Human Vs Machine");
    let game_mode: String = read!();
    match game_mode.as_str() {
        "1" => {
            println!("Player 1 : \n");
            let player_1 = player_input();
            println!("Player 2 : ");
            let player_2 = player_input();
            println!("{} - {}", player_1, player_2);
        },
        "2" => {
            println!("Player 1 : ");
            let player_1 = player_input();
            let player_2 = "Skynet";
            println!("{} - {}", player_1, player_2);
        },
        _ => println!("Wrong input"),
    };
}
And here is the output I get :
GAME MODE
1. Human Vs Human
2. Human Vs Machine
1
Player 1 :
Player 2 :
George
- George
I don't know what I am doing wrong, because if I test the same code without the match expression, the program actually stops and lets me enter the first name (player_1) before asking for the second one (player_2). Maybe match is not the way to do it and I should use if/else instead?
Thank you for your help.
This is definitely not a problem with match. The problem occurs because the read! macro doesn't consume the whitespace that causes it to stop (in this case, \n). The first call to player_input then reads that leftover \n and returns immediately; only the second call actually picks up input. You can fix this in two different ways:
- Don't use text_io and rely on the built-in std::io instead, which does consume the \n that ended its call
- Change the read! call to also consume the \n, like so: read!("{}\n")
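To sketch the first option (the helper name here is made up for illustration): reading every input with std::io's read_line and trimming it means each call consumes its own newline, so nothing is left over for the next read:

```rust
use std::io::{self, BufRead};

// Reads one line from any BufRead source and strips trailing
// whitespace, so a leftover "\n" never leaks into the next read.
fn read_trimmed(reader: &mut impl BufRead) -> String {
    let mut input = String::new();
    reader.read_line(&mut input).expect("Wrong input");
    input.trim_end().to_string()
}

fn main() {
    // Simulate the user typing "1" then "Alice" on separate lines.
    let mut fake_stdin = io::Cursor::new("1\nAlice\n");
    let game_mode = read_trimmed(&mut fake_stdin);
    let player_1 = read_trimmed(&mut fake_stdin);
    assert_eq!(game_mode, "1");
    assert_eq!(player_1, "Alice"); // the "\n" after "1" was consumed
}
```

In the real program you would pass &mut io::stdin().lock() instead of the Cursor; taking the reader as a parameter just makes the function testable.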

How to display a box drawing character using the ncurses crate?

I'm writing an app using Rust and ncurses.
I'm trying to display a box-drawing character ('┃', Unicode 9475 / U+2503), but it's as if it's not even printed to the screen. Only when I select the text can it be seen.
Here's a minimal example:
use ncurses::*;

fn populate_line_numbers() {
    for i in 0..LINES() {
        mvaddstr(i, 0, &i.to_string());
    }
    mvvline(0, 4, 9475, LINES());
    refresh();
}

fn main() {
    setlocale(LcCategory::all, "");
    initscr();
    start_color();
    keypad(stdscr(), true);
    noecho();
    loop {
        let user_input = get_wch();
        match user_input.unwrap() {
            WchResult::Char(ch) => {
                match ch {
                    27 => break,
                    _ => {
                        addstr(&std::char::from_u32(ch).unwrap().to_string());
                        refresh();
                    }
                }
            },
            WchResult::KeyCode(code) => {
                match code {
                    KEY_F5 => {
                        populate_line_numbers();
                    },
                    _ => {}
                }
            }
        }
    }
    endwin();
}
Hit F5 to make the program show line numbers.
How can I make the character display like normal text? I tried OR-ing it together with a bunch of attributes, but nothing worked.
Cargo.toml
[dependencies]
ncurses = { version = "5.99.0", features = ["wide"] }
Author's note: The following is a best-effort analysis based mostly on examining ncurses.h on my system and comparing it to the source code of ncurses-rs. I may be wrong in some particulars.
In ncurses, the chtype type alias (which is the third argument of mvvline) represents a single-byte character plus some appearance metadata. It cannot store a multibyte character. So mvvline simply cannot display ┃.
(Presumably the reason you get an invisible line is because the value 9475 represents a control character with some bogus appearance information. I wasn't able to figure out how to decode it. If you fiddle around with the byte values I'm sure you can get the line to change colors and contain any ASCII character you want.)
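To see the mismatch concretely (assuming the classic layout where a 0xFF mask, A_CHARTEXT, extracts the character from a chtype):

```rust
fn main() {
    let ch: u32 = 9475; // '┃' is U+2503
    // Under a 0xFF character mask, only the low byte counts as the
    // character: 0x03 is ETX, a non-printing control character.
    assert_eq!(ch & 0xFF, 0x03);
    // The remaining bits (0x2500) spill into the attribute field.
    assert_eq!(ch & !0xFF, 0x2500);
}
```

So the cell receives a non-printing ETX character, and the leftover high bits are interpreted as (meaningless) attribute flags.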
To draw a line using a "wide" (multibyte) character, you must link to a version of ncurses with wide character support, and use the wide character version of mvvline, which appears to be called mvvline_set. This function takes a const cchar_t * argument instead of chtype, where cchar_t is a struct containing a multibyte character plus some metadata.
Unfortunately, although the ncurses crate does link to the correct version of the ncurses library, it does not expose any way to call mvvline_set. There are a number of other functions that are also missing from the ncurses-rs API, mostly the ones that use the cchar_t struct. This means you won't be able to use Unicode line-drawing characters in character-oriented ncurses functions, unless you write the bindings yourself.
You should still be able to use multibyte characters in UTF-8 strings and pass them to string-oriented functions such as mvaddstr.
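A quick std-only check of why the string route works: U+2503 is a three-byte UTF-8 sequence, too wide for a one-byte chtype cell but perfectly ordinary inside a &str:

```rust
fn main() {
    let bar = '\u{2503}'; // '┃', BOX DRAWINGS HEAVY VERTICAL
    assert_eq!(bar as u32, 9475);
    // As UTF-8 it occupies three bytes, which cannot fit in the
    // single-byte character field of a chtype, but is fine inside
    // a string passed to mvaddstr.
    assert_eq!(bar.to_string().len(), 3);
    assert_eq!(bar.to_string().as_bytes(), [0xE2, 0x94, 0x83]);
}
```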

Getting hexadecimal character escape while printing string in Xcode

I'm trying to run this C++ code in Xcode 8.1:
std::string str = "g[+g]g[-g]g[−g[+g]g]g[+g]g";
for (auto& c : str) {
    printf("%c", c);
}
and I'm getting this as output:
g[+g]g[-g]g[\342\210\222g[+g]g]g[+g]g
Does anyone know why some characters come out as numeric escape sequences?
I already tried printing via c_str().
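Those escapes are octal, not hexadecimal, and they are most likely the UTF-8 bytes of U+2212 MINUS SIGN: the source string mixes the ASCII hyphen '-' with the Unicode minus '−', and the Xcode console appears to render non-ASCII bytes as octal escapes. A quick byte-level check (written in Rust only because the byte arithmetic is language-independent):

```rust
fn main() {
    // The Unicode minus sign U+2212 encodes to three UTF-8 bytes.
    let s = '\u{2212}'.to_string();
    assert_eq!(s.as_bytes(), [0xE2, 0x88, 0x92]);
    // In octal those bytes are exactly the escapes in the output.
    let octal: Vec<String> =
        s.as_bytes().iter().map(|b| format!("\\{:03o}", b)).collect();
    assert_eq!(octal.join(""), "\\342\\210\\222");
}
```

printf("%c", c) is emitting the bytes faithfully; it is the console that chooses to display bytes outside the ASCII range as octal escapes.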

cgo C struct field access from Go: underscore or no underscore?

I'm running into a disconnect between the online documentation and the behavior I see in my programs when accessing C structs from Go code. go version says I am using:
go version go1.4.2 linux/amd64
According to the Go cgo documentation:
Within the Go file, C's struct field names that are keywords in Go can be
accessed by prefixing them with an underscore: if x points at a C struct with
a field named "type", x._type accesses the field. C struct fields that cannot
be expressed in Go, such as bit fields or misaligned data, are omitted in the
Go struct, replaced by appropriate padding to reach the next field or the end
of the struct.
I had trouble with this, so I made a quick sample program to test it out:
package main

// struct rec
// {
//     int i;
//     double d;
//     char* s;
// };
import "C"

import "fmt"

func main() {
    s := "hello world"
    r := C.struct_rec{}
    r.i = 9
    r.d = 9.876
    r.s = C.CString(s)
    fmt.Printf("\n\tr.i: %d\n\tr.d: %f\n\tr.s: %s\n",
        r.i,
        r.d,
        C.GoString(r.s))
}
When I use underscores as the docs indicate (e.g., substituting r._i for r.i above), I get the following compile error:
r._i undefined (type C.struct_rec has no field or method _i)
When I don't use underscores it works fine. I tried this with both pointers and non-pointers. The only other idea I can think of is that it's because I allocated the instances in Go rather than C; is that the case?
Thanks for any help!
The answer is in the very quote you have in your question:
Within the Go file, C's struct field names that are keywords in Go can be accessed by prefixing them with an underscore(…)
i, d, and s are not keywords in Go.

Converting wchar_t* to char* on iOS

I'm attempting to convert a wchar_t* to a char*. Here's my code:
size_t result = wcstombs(returned, str, length + 1);
if (result == (size_t)-1) {
    int error = errno;
}
It indeed fails, and error is set to 92 (ENOPROTOOPT - Protocol not available).
I've even tried setting the locale:
setlocale(LC_ALL, "C");
And this one too:
setlocale(LC_ALL, "");
I'm tempted to just convert the characters with static casts!
It seems the issue was that the source string used a non-standard encoding (two ASCII characters packed into each wide character), which looked fine in the debugger but was internally invalid. The error code also makes sense once you use the right table: on Darwin, errno 92 is EILSEQ ("Illegal byte sequence"), not ENOPROTOOPT (92 is the Linux value for that), and EILSEQ is exactly what wcstombs reports when it cannot convert the input.
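The failure mode is easy to reproduce in any language: a wide-character buffer that is not valid in the expected encoding simply cannot be converted to a multibyte string. A Rust sketch of the same idea (from_utf16 plays the role of wcstombs here; this is an analogy, not the original C code):

```rust
fn main() {
    // Valid UTF-16 converts cleanly to a multibyte (UTF-8) string.
    let wide: Vec<u16> = "中文".encode_utf16().collect();
    assert_eq!(String::from_utf16(&wide).unwrap(), "中文");

    // An unpaired surrogate is an invalid sequence: the conversion
    // fails, analogous to wcstombs returning (size_t)-1 with EILSEQ.
    let bad = [0xD800u16];
    assert!(String::from_utf16(&bad).is_err());
}
```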