fltk: failed to resolve GlutWindow - rust

I'm trying to follow an example of an fltk application that uses OpenGL, but the build fails:
let app = app::App::default();
let mut win = window::GlutWindow::default().with_size(800, 600);
win.set_mode(enums::Mode::Opengl3);
win.end();
win.show();
unsafe {
    let gl = glow::Context::from_loader_function(|s| {
        win.get_proc_address(s) as *const _
    });
I get: failed to resolve: could not find GlutWindow in window.
I'm using fltk version 1
Thank you
p.s. I'm using Rust

Check your imports; they are missing from that snippet.
Also, if you're not enabling the enable-glwindow feature, you should. Try changing your Cargo.toml to include the missing feature:
[dependencies]
fltk = { version = "1", features = ["enable-glwindow"] }
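For reference, here is a minimal sketch of the kind of program your snippet appears to come from. The glow dependency and the exact imports are assumptions on my part (they aren't shown in your question), so adjust as needed:

use fltk::{app, enums, prelude::*, window};

fn main() {
    let app = app::App::default();
    let mut win = window::GlutWindow::default().with_size(800, 600);
    win.set_mode(enums::Mode::Opengl3);
    win.end();
    win.show();
    unsafe {
        // glow builds its function table from the same loader that
        // GlutWindow exposes via get_proc_address.
        let _gl = glow::Context::from_loader_function(|s| {
            win.get_proc_address(s) as *const _
        });
    }
    app.run().unwrap();
}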


cursive-flexi-logger-view throwing unhandled error

After I start the application I get the error below (and this is how it looks in the app):
mintozzy@laptop:~/tmp/storytel-tui$ cargo run
Finished dev [unoptimized + debuginfo] target(s) in 0.50s
Running `target/debug/storytel-tui`
[flexi_logger][ERRCODE::Time] flexi_logger has to work with UTC rather than with local time, caused by IndeterminateOffset
See https://docs.rs/flexi_logger/latest/flexi_logger/error_info/index.html#time
[flexi_logger][ERRCODE::Write] writing log line failed, caused by Custom { kind: BrokenPipe, error: "cursive callback sink is closed!" }
See https://docs.rs/flexi_logger/latest/flexi_logger/error_info/index.html#write
These are my Cargo.toml dependencies:
[dependencies]
reqwest = { version = "0.11.11", features = ["json", "blocking"] }
serde = { version = "1.0.139", features = ["derive"] }
serde_json = "1.0.82"
mpv = "0.2.3"
openssl = { version = "0.10.41" }
cursive = { version = "0.18" , default-features = false, features = ["crossterm-backend"]}
cursive-flexi-logger-view = "^0"
flexi_logger = "0.22.6"
I used the example code from https://docs.rs/cursive-flexi-logger-view/latest/cursive_flexi_logger_view/#using-the-flexiloggerview. The screen is flickering; it looks like the logger is printing below the UI and breaking it.
What could be the reason?
After cursive-flexi-logger-view failed, I found that cursive has a debug console. I think it is a better solution because there is no need to add another dependency.
To enable the debug log, I added the code below:
use cursive::Cursive;
use log::LevelFilter;

cursive::logger::init();
match std::env::var("RUST_LOG").unwrap_or_else(|_| "info".to_string()).as_ref() {
    "trace" => log::set_max_level(LevelFilter::Trace),
    "debug" => log::set_max_level(LevelFilter::Debug),
    "info" => log::set_max_level(LevelFilter::Info),
    "warn" => log::set_max_level(LevelFilter::Warn),
    "error" => log::set_max_level(LevelFilter::Error),
    _ => log::set_max_level(LevelFilter::Off),
}
siv.add_global_callback('~', Cursive::toggle_debug_console);
With this setup I can enable the debug console in the UI by typing ~, which opens a debug window containing the logged data. Below you can see how it looks.
You can find full working code here
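For context, here is a stripped-down sketch of the same setup as a self-contained program. The TextView contents are just a placeholder, and it assumes the log crate is also listed in Cargo.toml:

use cursive::views::TextView;
use cursive::Cursive;
use log::LevelFilter;

fn main() {
    // Route log records into cursive's built-in debug console.
    cursive::logger::init();
    log::set_max_level(LevelFilter::Info);

    let mut siv = cursive::default();
    siv.add_layer(TextView::new("Press ~ for the debug console, q to quit."));
    siv.add_global_callback('~', Cursive::toggle_debug_console);
    siv.add_global_callback('q', Cursive::quit);

    log::info!("this line shows up in the debug console");
    siv.run();
}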

Hitting upload limit on crates.io on a sys crate

I'm trying to publish a sys crate for libvmaf. Unfortunately I cannot simply link dynamically to libvmaf because it isn't distributed anywhere, so I need to build it from source and include it in my library. Unfortunately, libvmaf is absolutely huge, and my .rlib file ends up at 1.4 megabytes, which is over the upload limit for crates.io. Am I boned here?
Here's my build.rs file:
use meson_next;
use std::env;
use std::fs::canonicalize;
use std::path::PathBuf;

fn main() {
    //env::set_var("RUST_BACKTRACE", "1");
    let build_dir = PathBuf::from(env::var("OUT_DIR").unwrap()).join("build");
    let lib_dir = build_dir.join("src");
    let build_dir_str = build_dir.to_str().unwrap();
    let lib_dir_str = lib_dir.to_str().unwrap();

    meson_next::build("vmaf/libvmaf", build_dir_str);
    println!("cargo:rustc-link-lib=static=vmaf");
    println!("cargo:rustc-link-search=native={lib_dir_str}");

    // Path to vendor header files
    let headers_dir = PathBuf::from("vmaf/libvmaf/include");
    let headers_dir_canonical = canonicalize(headers_dir).unwrap();
    let include_path = headers_dir_canonical.to_str().unwrap();

    // Generate bindings to libvmaf using rust-bindgen
    let bindings = bindgen::Builder::default()
        .header("vmaf/libvmaf/include/libvmaf/libvmaf.h")
        .clang_arg(format!("-I{include_path}"))
        .parse_callbacks(Box::new(bindgen::CargoCallbacks))
        .generate()
        .expect("Unable to generate bindings");

    // Write bindings to build directory
    let out_path = PathBuf::from(env::var("OUT_DIR").unwrap());
    bindings
        .write_to_file(out_path.join("bindings.rs"))
        .expect("Couldn't write bindings!");
}
In general you should not be including compiled libraries in your package. Include the source code, and have your build script perform the build.
This will usually result in a smaller package, and also means that your package works on any target architecture (that is supported by the library).
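As an illustration, the exclude field in Cargo.toml can keep prebuilt artifacts and non-essential parts of the vendored tree out of the published package. The crate name and paths below are hypothetical, not taken from your repository:

[package]
name = "libvmaf-sys"   # hypothetical crate name
version = "0.1.0"
# Keep local build output and large, unneeded parts of the vendored
# source out of the package uploaded to crates.io.
exclude = [
    "*.a",
    "*.so",
    "vmaf/libvmaf/test/*",
    "vmaf/libvmaf/doc/*",
]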

oxipng throwing RuntimeError: unreachable when called

I'm trying to create a small WASM project for image compression.
After some searching on GitHub, I noticed that oxipng 2.2.2 has a target for wasm32-unknown-unknown, which is why I'm using that version.
I'm using wasm-pack to create the wasm file + JS bindings, with target -t web.
This is the code:
extern crate oxipng;
mod utils;

use std::error::Error;
use wasm_bindgen::prelude::*;
use oxipng::*;

#[wasm_bindgen]
extern "C" {
    // Use `js_namespace` here to bind `console.log(..)` instead of just
    // `log(..)`
    #[wasm_bindgen(js_namespace = console)]
    fn log(s: &str);
}

// Next let's define a macro that's like `println!`, only it works for
// `console.log`. Note that `println!` doesn't actually work on the wasm target
// because the standard library currently just eats all output. To get
// `println!`-like behavior in your app you'll likely want a macro like this.
#[macro_export]
macro_rules! console_log {
    // Note that this is using the `log` function imported above during
    // `bare_bones`
    ($($t:tt)*) => (crate::log(&format_args!($($t)*).to_string()))
}

// When the `wee_alloc` feature is enabled, use `wee_alloc` as the global
// allocator.
#[cfg(feature = "wee_alloc")]
#[global_allocator]
static ALLOC: wee_alloc::WeeAlloc = wee_alloc::WeeAlloc::INIT;

#[wasm_bindgen]
pub fn compress(data: &[u8]) -> Vec<u8> {
    console_log!("{}", data.len());
    let opts = Options::from_preset(6);
    console_log!("after options");
    let res = match optimize_from_memory(data, &&opts) {
        Ok(res) => Ok(res),
        Err(err) => Err(err),
    };
    match &res {
        Ok(_) => console_log!("Optimized"),
        Err(err) => console_log!("Error: {}", err),
    }
    return res.unwrap();
}
I never get an error message; the last log I see is "after options".
In a nutshell, I'm using a Flutter web application that gets a PNG file, converts it into a Uint8List, and sends it as an integer list to the JS bindings.
When called, the following error happens:
RuntimeError: unreachable
at http://localhost:3000/pkg/rust_png_module_bg.wasm:wasm-function[1019]:0x5c6be
at http://localhost:3000/pkg/rust_png_module_bg.wasm:wasm-function[414]:0x4cd37
at http://localhost:3000/pkg/rust_png_module_bg.wasm:wasm-function[619]:0x54c96
at http://localhost:3000/pkg/rust_png_module_bg.wasm:wasm-function[915]:0x5b4ba
at http://localhost:3000/pkg/rust_png_module_bg.wasm:wasm-function[986]:0x5c139
at http://localhost:3000/pkg/rust_png_module_bg.wasm:wasm-function[645]:0x55885
at http://localhost:3000/pkg/rust_png_module_bg.wasm:wasm-function[569]:0x5324b
at http://localhost:3000/pkg/rust_png_module_bg.wasm:wasm-function[594]:0x53ff1
at http://localhost:3000/pkg/rust_png_module_bg.wasm:wasm-function[2]:0x554f
at http://localhost:3000/pkg/rust_png_module_bg.wasm:wasm-function[84]:0x2cbf2
at http://localhost:3000/pkg/rust_png_module_bg.wasm:wasm-function[73]:0x2a501
at http://localhost:3000/pkg/rust_png_module_bg.wasm:wasm-function[563]:0x52eaa
at compress (http://localhost:3000/pkg/rust_png_module.js:49:14)
at compressImage (http://localhost:3000/packages/rust_wasm/ui/screens/home/home_page.dart.lib.js:568:72)
at compressImage.next (<anonymous>)
at http://localhost:3000/dart_sdk.js:38640:33
at _RootZone.runUnary (http://localhost:3000/dart_sdk.js:38511:59)
at _FutureListener.thenAwait.handleValue (http://localhost:3000/dart_sdk.js:33713:29)
at handleValueCallback (http://localhost:3000/dart_sdk.js:34265:49)
at Function._propagateToListeners (http://localhost:3000/dart_sdk.js:34303:17)
at _Future.new.[_completeWithValue] (http://localhost:3000/dart_sdk.js:34151:23)
at async._AsyncCallbackEntry.new.callback (http://localhost:3000/dart_sdk.js:34172:35)
at Object._microtaskLoop (http://localhost:3000/dart_sdk.js:38778:13)
at _startMicrotaskLoop (http://localhost:3000/dart_sdk.js:38784:13)
at http://localhost:3000/dart_sdk.js:34519:9
Since this version is old, I don't know if I should revert to an older version of Rust:
$ rustup --version
rustup 1.24.3 (ce5817a94 2021-05-31)
info: This is the version for the rustup toolchain manager, not the rustc compiler.
info: The currently active `rustc` version is `rustc 1.55.0 (c8dfcfe04 2021-09-06)`
Thank you in advance
The problem is that you're using a very old version of oxipng (v2.2.2) that didn't support wasm yet. I believe wasm support was added in v2.3.0 (link to issue that was fixed). Anyway, you should be able to use the latest version just fine with wasm; just make sure you disable the default features when adding the crate to your Cargo.toml:
[dependencies]
oxipng = { version = "5", default-features = false }
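As a side note, RuntimeError: unreachable is what a Rust panic looks like from JavaScript, so the res.unwrap() in your compress function will always surface failures that way. Here is a sketch of the export that reports the error to JS instead, assuming optimize_from_memory and Options::from_preset keep the same shape in oxipng 5:

use oxipng::{optimize_from_memory, Options};
use wasm_bindgen::prelude::*;

#[wasm_bindgen]
pub fn compress(data: &[u8]) -> Result<Vec<u8>, JsValue> {
    let opts = Options::from_preset(6);
    // Returning Err makes wasm-bindgen throw a catchable JS exception
    // instead of trapping with "unreachable".
    optimize_from_memory(data, &opts).map_err(|e| JsValue::from_str(&e.to_string()))
}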

Why do I get the error "there is no reactor running, must be called from the context of Tokio runtime" even though I have #[tokio::main]?

I'm following the mdns Rust documentation and pasted the example code but it throws the following error:
thread 'main' panicked at 'there is no reactor running, must be called from the context of Tokio runtime'
Here's the code that I have:
use futures_util::{pin_mut, stream::StreamExt};
use mdns::{Error, Record, RecordKind};
use std::{net::IpAddr, time::Duration};

const SERVICE_NAME: &'static str = "_googlecast._tcp.local";

#[tokio::main]
async fn main() -> Result<(), Error> {
    // Iterate through responses from each Cast device, asking for new devices every 15s
    let stream = mdns::discover::all(SERVICE_NAME, Duration::from_secs(15))?.listen();
    pin_mut!(stream);

    while let Some(Ok(response)) = stream.next().await {
        let addr = response.records().filter_map(self::to_ip_addr).next();

        if let Some(addr) = addr {
            println!("found cast device at {}", addr);
        } else {
            println!("cast device does not advertise address");
        }
    }

    Ok(())
}

fn to_ip_addr(record: &Record) -> Option<IpAddr> {
    match record.kind {
        RecordKind::A(addr) => Some(addr.into()),
        RecordKind::AAAA(addr) => Some(addr.into()),
        _ => None,
    }
}
Dependencies:
[dependencies]
mdns = "1.1.0"
futures-util = "0.3.8"
tokio = { version = "0.3.3", features = ["full"] }
What am I missing? I tried looking online but haven't found how to create a reactor for this use case.
You are using a newer version of Tokio (0.3 or 1.x), while many packages, including mdns 1.1.0, still rely on an older version of Tokio (0.2).
% cargo tree -d
tokio v0.2.22
└── mdns v1.1.0
└── example_project v0.1.0
tokio v0.3.3
└── example_project v0.1.0
For now, you will need to match versions of the Tokio runtime. The easiest way is to use Tokio 0.2 yourself. The tokio-compat-02 crate may also be useful in some cases.
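For example, pinning your own dependencies to the runtime version mdns 1.1.0 expects would look roughly like this (a sketch; adjust the feature list to what you actually need):

[dependencies]
mdns = "1.1.0"
futures-util = "0.3.8"
tokio = { version = "0.2", features = ["full"] }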
See also:
Why is a trait not implemented for a type that clearly has it implemented?
Various error messages with the same root cause:
there is no reactor running, must be called from the context of a Tokio 1.x runtime
there is no reactor running, must be called from the context of Tokio runtime
not currently running on the Tokio runtime
The fix for me was adding this to Cargo.toml:
[dependencies]
async-std = { version = "1", features = ["attributes", "tokio1"] }
https://github.com/ATiltedTree/ytextract/issues/25
At the time of writing, a fair number of crates already use Tokio v1, but others might still be in an experimental phase. Check your crates for prerelease versions that might have already upgraded their Tokio runtime compatibility.
A relevant example of this is actix-web, which uses runtime 1.0 of Tokio since version 4. Although there had been prereleases of this major increment since 2022-01-07, version 4.0.0 was only released on 2022-02-25.
actix-web = { version = "4.0.0-beta.10" }

How do I set default options for traceur.compile and traceur.require?

Using the official traceur module, is it possible to set the default options for compile and require?
For example, this code works:
var traceur = require('traceur');
console.log(
    traceur.compile('{ let x = 1; }', { experimental: true }).js
);
Now if I remove traceur.compile's 2nd argument (the options object):
console.log(
    traceur.compile('{ let x = 1; }').js
);
Traceur will throw an error as the blockBinding option is not enabled. Is there any way to change the default options, in order to compile files without always passing an options object?
My main concern, apart from applying the DRY principle, is getting the traceur.require function to compile files with customized options -- as far as I can see, traceur.require and traceur.require.makeDefault() do not even take an options argument.
For instance, considering this code sample:
require('traceur').require('./index');
And this piece of code:
require('traceur').require.makeDefault();
require('./index');
Is there any way to compile the required file with the experimental option enabled?
Preferably by altering the default options, as I cannot see any other viable way.
Using Node 0.10.29 and Traceur 0.0.49.
Here's a full example of what I'd like to achieve.
bootstrap.js (entry point):
var traceur = require('traceur');
traceur.options.experimental = true;
traceur.require.makeDefault();
require('./index');
index.js:
import {x} from './lib';
// using a block binding in order to check
// whether this file was compiled with experimental features enabled
{
    let y = x;
    console.log(y);
}
lib.js:
export var x = (() => {
    if (true) {
        // should be compiled with experimental features enabled too
        let x = 1;
        return x;
    }
})();
Expected console output: 1
Setting traceur.options.experimental = true acts as a setter which enables the experimental features in the traceur.options object, but unfortunately traceur.options does not seem to affect traceur.compile or traceur.require as far as I can see.
The Using Traceur with Node.js Wiki page does not mention anything about compiling options. The Options for Compiling page does not mention the Traceur API in Node.js; in fact, I cannot find any documentation about the Traceur API in Node.js.
Fabrício Matté ;-) added support for giving the default options to makeDefault(); see
https://github.com/google/traceur-compiler/blob/master/src/node/require.js#L58
A separate bug with the experimental option was fixed today, 16 Jul 2014.
