Access bevy asset right after loading via AssetServer - rust

Is it possible to access a bevy asset right after it was loaded from the AssetServer?
I've read that the actual loading happens in the background, but I couldn't find, in either the official documentation or the unofficial Bevy Cheat Book, whether it is possible to wait for the loading to finish inside the same function.
An example of what I'm trying to do:
fn f(asset_server: Res<AssetServer>, image_assets: Res<Assets<Image>>) {
    let image: Handle<Image> = asset_server.load("img.png");
    // wait for `image` to be loaded...
    // NOTE: this doesn't work - it just spins in an infinite loop
    while asset_server.get_load_state(&image) != LoadState::Loaded {}
    let image_asset = image_assets.get(&image).unwrap();
}
The reason I need this is to check some of the image's data, for example its size.

You probably shouldn't halt the function while the file loads asynchronously. Instead, you could spawn an entity holding the handle, and have another system query for that entity and consume it once the handle reports the image is loaded.
Looking it up, this appears to be a suggested pattern in the unofficial Bevy Cheat Book:
#[derive(Resource)]
struct AssetsLoading(Vec<HandleUntyped>);

fn setup(server: Res<AssetServer>, mut loading: ResMut<AssetsLoading>) {
    // we can have different asset types
    let font: Handle<Font> = server.load("my_font.ttf");
    let menu_bg: Handle<Image> = server.load("menu.png");
    let scene: Handle<Scene> = server.load("level01.gltf#Scene0");

    // add them all to our collection for tracking
    loading.0.push(font.clone_untyped());
    loading.0.push(menu_bg.clone_untyped());
    loading.0.push(scene.clone_untyped());
}
fn check_assets_ready(
    mut commands: Commands,
    server: Res<AssetServer>,
    loading: Res<AssetsLoading>,
) {
    use bevy::asset::LoadState;

    match server.get_group_load_state(loading.0.iter().map(|h| h.id)) {
        LoadState::Failed => {
            // one of our assets had an error
        }
        LoadState::Loaded => {
            // all assets are now ready
            // this might be a good place to transition into your in-game state
            // remove the resource to drop the tracking handles
            commands.remove_resource::<AssetsLoading>();
            // (note: if you don't have any other handles to the assets
            // elsewhere, they will get unloaded after this)
        }
        _ => {
            // NotLoaded/Loading: not fully ready yet
        }
    }
}
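The reason the original busy-wait can never succeed is that Bevy systems run to completion: while the `while` loop spins, the schedule never advances, so the asset server gets no chance to make progress. The check has to happen once per frame, across frames. Here is a stdlib-only analogy of that per-frame polling pattern (no Bevy APIs; `FakeServer` and all other names are illustrative inventions for this sketch):

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum LoadState {
    Loading,
    Loaded,
}

// Stand-in for the background loader: it only makes progress when the
// "frame" advances, just as Bevy's asset loading progresses between
// system runs, not while one system spins.
struct FakeServer {
    frames_needed: u32,
    frames_seen: u32,
}

impl FakeServer {
    fn tick(&mut self) {
        self.frames_seen += 1;
    }

    fn get_load_state(&self) -> LoadState {
        if self.frames_seen >= self.frames_needed {
            LoadState::Loaded
        } else {
            LoadState::Loading
        }
    }
}

fn main() {
    let mut server = FakeServer { frames_needed: 3, frames_seen: 0 };

    // Spinning without yielding back to the app loops forever:
    // `while server.get_load_state() != LoadState::Loaded {}` never ticks.
    // Checking once per frame terminates as soon as loading finishes.
    let mut frames = 0;
    while server.get_load_state() != LoadState::Loaded {
        server.tick(); // the app runs a frame; the loader makes progress
        frames += 1;
    }
    assert_eq!(frames, 3);
}
```

This is why the Cheat Book pattern above runs `check_assets_ready` as a system every frame instead of blocking inside one function.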


How to configure tower_http TraceLayer in a separate function?

I'm implementing a tokio/axum HTTP server. In the function where I run the server, I configure routing, add shared application services, and add a tracing layer.
My tracing configuration looks like this:
let tracing_layer = TraceLayer::new_for_http()
    .make_span_with(|_request: &Request<Body>| {
        let request_id = Uuid::new_v4().to_string();
        tracing::info_span!("http-request", %request_id)
    })
    .on_request(|request: &Request<Body>, _span: &Span| {
        tracing::info!("request: {} {}", request.method(), request.uri().path())
    })
    .on_response(
        |response: &Response<BoxBody>, latency: Duration, _span: &Span| {
            tracing::info!("response: {} {:?}", response.status(), latency)
        },
    )
    .on_failure(
        |error: ServerErrorsFailureClass, _latency: Duration, _span: &Span| {
            tracing::error!("error: {}", error)
        },
    );

let app = Router::new()
    // routes
    .layer(tracing_layer)
    // other layers
    ...
Trying to organize the code a bit, I moved the tracing layer configuration to a separate function. The trick is to provide a return type for this function that actually compiles.
The first approach was to move the code as is and let an IDE generate the return type:
TraceLayer<SharedClassifier<ServerErrorsAsFailures>, fn(&Request<Body>) -> Span, fn(&Request<Body>, &Span), fn(&Response<BoxBody>, Duration, &Span), DefaultOnBodyChunk, DefaultOnEos, fn(ServerErrorsFailureClass, Duration, &Span)>
Which is completely unreadable, but worse, it does not compile: "expected fn pointer, found closure".
In the second approach I changed fn into impl Fn, which should mean a closure type. Again, I get an error: my closures are not Clone.
Third, I tried to extract the closures into separate functions. But then I get "expected fn pointer, found fn item".
What can I do 1) to make it compile and 2) to make it more readable?
Speaking from experience, breaking up the code like that is very hard due to all the generics. I would instead recommend functions that accept and return axum::Router. That way you bypass all the generics:
fn add_middleware(router: Router) -> Router {
    router.layer(
        TraceLayer::new_for_http().make_span_with(...)
    )
}
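As a side note, the "expected fn pointer, found closure" error from the first attempt can be reproduced with plain std types, independent of tower/axum. Only a non-capturing closure coerces to a `fn` pointer; a capturing one has its own anonymous type, which is why the IDE-generated signature with `fn(...)` parameters cannot compile (`make_adder` below is an illustrative name, not part of either library):

```rust
fn main() {
    // A non-capturing closure coerces to a plain `fn` pointer...
    let plain: fn(i32) -> i32 = |x| x + 1;
    assert_eq!(plain(1), 2);

    // ...but a capturing closure has its own anonymous type.
    // `let broken: fn(i32) -> i32 = move |x| x + offset;` fails with
    // "expected fn pointer, found closure".
    let offset = 10;
    let capturing = move |x: i32| x + offset;
    assert_eq!(capturing(1), 11);

    // Returning `impl Fn` hides the anonymous type, which is why naming
    // the concrete closure types in a signature cannot work.
    let adder = make_adder(5);
    assert_eq!(adder(1), 6);
}

fn make_adder(offset: i32) -> impl Fn(i32) -> i32 {
    move |x| x + offset
}
```

This is the same reason the `Router`-in, `Router`-out function works: the closure-laden `TraceLayer` type never has to be named at all.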

Handling duplicate inserts into database in async rust

Beginner in both rust and async programming here.
I have a function that downloads and stores a bunch of tweets in the database:
pub async fn process_user_timeline(config: &Settings, pool: &PgPool, user_object: &Value) {
    // get timeline
    if let Ok((user_timeline, _)) =
        get_user_timeline(config, user_object["id"].as_str().unwrap()).await
    {
        // store tweets
        if let Some(tweets) = user_timeline["data"].as_array() {
            for tweet in tweets.iter() {
                store_tweet(pool, &tweet, &user_timeline, "normal")
                    .await
                    .unwrap_or_else(|e| {
                        println!(
                            ">>>X>>> failed to store tweet {}: {:?}",
                            tweet["id"].as_str().unwrap(),
                            e
                        )
                    });
            }
        }
    }
}
It's being called in an asynchronous loop by another function:
pub async fn loop_until_hit_rate_limit<'a, T, Fut>(
    object_arr: &'a [T],
    settings: &'a Settings,
    pool: &'a PgPool,
    f: impl Fn(&'a Settings, &'a PgPool, &'a T) -> Fut + Copy,
    rate_limit: usize,
) where
    Fut: Future,
{
    let total = object_arr.len();
    let capped_total = min(total, rate_limit);
    let mut futs = vec![];
    for (i, object) in object_arr[..capped_total].iter().enumerate() {
        futs.push(async move {
            println!(">>> PROCESSING {}/{}", i + 1, total);
            f(settings, pool, object).await;
        });
    }
    futures::future::join_all(futs).await;
}
Sometimes two async tasks will try to insert the same tweet at the same time, producing this error:
failed to store tweet 1398307091442409475: Database(PgDatabaseError { severity: Error, code: "23505", message: "duplicate key value violates unique constraint \"tweets_tweet_id_key\"", detail: Some("Key (tweet_id)=(1398307091442409475) already exists."), hint: None, position: None, where: None, schema: Some("public"), table: Some("tweets"), column: None, data_type: None, constraint: Some("tweets_tweet_id_key"), file: Some("nbtinsert.c"), line: Some(656), routine: Some("_bt_check_unique") })
Note that the code already checks whether a tweet is present before inserting it, so this only happens in the following scenario: READ from task 1 > READ from task 2 > WRITE from task 1 (success) > WRITE from task 2 (error).
To solve this, my best attempt so far has been to place an unwrap_or_else() clause which lets one of the tasks fail without panicking out of the entire execution. I am aware of at least one drawback - sometimes both tasks will bail out and the tweet never gets written. It happens in <1% of cases, but it happens.
Are there other drawbacks to my approach I'm not aware of?
What's the right way to handle this? I hate losing data, and even worse doing so non-deterministically.
PS I'm using actix web and sqlx as my webserver / db libraries.
Generally for anything that may be written by multiple threads/processes, any logic like
if (!exists) {
    writeValue()
}
needs to either be protected by some kind of lock, or the code needs to be changed to write atomically with the possibility the write will fail because something else already wrote to it.
For in-memory data in Rust you'd use a Mutex to ensure that you can read and then write the data back before anything else reads it, or an atomic to modify the data in such a way that if something already wrote it, you can detect that.
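To make the in-memory case concrete, here is a minimal stdlib-only sketch (all names are illustrative) where the existence check and the insert happen under one `Mutex` lock, so the READ/READ/WRITE/WRITE interleaving from the question cannot occur:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;

// Insert only if absent; the check and the write happen under one lock,
// so two threads can never both observe "absent" and then both write.
fn insert_if_absent(store: &Mutex<HashMap<u64, String>>, id: u64, tweet: &str) -> bool {
    let mut guard = store.lock().unwrap();
    if guard.contains_key(&id) {
        false // someone else already wrote it; not an error
    } else {
        guard.insert(id, tweet.to_string());
        true
    }
}

fn main() {
    let store = Arc::new(Mutex::new(HashMap::new()));
    let handles: Vec<_> = (0..8)
        .map(|_| {
            let store = Arc::clone(&store);
            thread::spawn(move || insert_if_absent(&store, 42, "hello"))
        })
        .collect();

    // Exactly one thread wins the insert; the others get `false`
    // instead of an error, and the value is stored exactly once.
    let wins = handles
        .into_iter()
        .map(|h| h.join().unwrap())
        .filter(|&won| won)
        .count();
    assert_eq!(wins, 1);
    assert_eq!(store.lock().unwrap().len(), 1);
}
```

The database's ON CONFLICT clause described below gives you the same "check and write are one atomic step" guarantee, with the database playing the role of the lock.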
In databases, for any query that might conflict with some other query happening around the same time, you'd want to use an ON CONFLICT clause in your query so that the database itself knows what to do when it tries to write data and it already exists.
For your case, since I'm guessing the tweets are immutable, you'd likely want ON CONFLICT (tweet_id) DO NOTHING (or whatever your ID column is), in which case the INSERT will skip inserting if there is already a tweet with the ID you are inserting, and it won't raise an error.

Calling getBBox for a SVG text element in a Seed Rust application

I just made my first steps with WASM and Seed, which has been a very smooth experience so far. I was able to create SVG using svg!, circle!, text!, ... and similar macros. To generate my SVG in the proper way, I have to measure text. My idea is to generate SVG text nodes and call getBBox on the node. I figured out that Seed is using web_sys and that getBBox is implemented there.
My problem is how to get from the Node created by text! to the SvgTextElement. I tried to access the node_ws field, but it seems to be "empty". It might not have been created yet, but I don't know enough about the Seed internals.
So how do I create a SVG text node so that I can call getBBox on it before generating the "main" SVG nodes?
You can use el_ref to get a reference to the DOM element. Something like this ought to work:
struct Model {
    element: ElRef<web_sys::SvgTextElement>,
}

fn view(model: &Model) -> Node<Msg> {
    svg![
        text![
            el_ref(&model.element),
            // ...
        ],
        // ...
    ]
}

fn init(orders: &mut impl Orders<Msg>) -> Model {
    orders.after_next_render(|_| Msg::Rendered);
    // ...
}

fn update(msg: Msg, model: &mut Model, orders: &mut impl Orders<Msg>) {
    match msg {
        Msg::Rendered => {
            let element = model.element.get().expect("no svg text element");
            let b_box = element.get_b_box();
            // ...
        }
        // ...
    }
}

How can I transfer some values into a Rust generator at each step?

I use generators as long-lived asynchronous threads (see
How to implement a lightweight long-lived thread based on a generator or asynchronous function in Rust?) in a user interaction scenario. I need to pass user input into the generator at each step. I think I can do it with a RefCell, but it is not clear how to pass a reference to the RefCell into the generator when creating its instance.
fn user_scenario() -> impl Generator<Yield = String, Return = String> {
    || {
        yield format!("what is your name?");
        yield format!("{}, how are you feeling?", "anon");
        return format!("{}, bye!", "anon");
    }
}
The UserData structure contains user input, the second structure contains a user session consisting of UserData and the generator instance. Sessions are collected in a HashMap.
struct UserData {
    sid: String,
    msg_in: String,
    msg_out: String,
}

struct UserSession {
    udata_cell: RefCell<UserData>,
    scenario: Pin<Box<dyn Generator<Yield = String, Return = String>>>,
}

type UserSessions = HashMap<String, UserSession>;

let mut sessions: UserSessions = HashMap::new();
UserData is created at the moment user input is received - at that moment I need to pass a reference to the UserData into the generator, wrapped in a RefCell, but I don't know how to do it, since the generator has a 'static lifetime and the RefCell lives for less than that!
let mut udata: UserData = read_udata(&mut stream);
let mut session: UserSession;

if udata.sid == "" { // new session
    let sid = rnd.gen::<u64>().to_string();
    udata.sid = sid.clone();
    sessions.insert(
        sid.clone(),
        UserSession {
            udata_cell: RefCell::new(udata),
            scenario: Box::pin(user_scenario()),
        },
    );
    session = sessions.get_mut(&sid).unwrap();
}
The full code is here, but the generator here does not see user input.
Disclaimer: resumption arguments are a planned extension for generators, so at some point in the future it will be possible to resume the generator with a &UserData argument.
For now, I will recommend sharing ownership. The cost is fairly minor (one memory allocation, one indirection) and will save you a lot of troubles:
struct UserSession {
    user_data: Rc<RefCell<UserData>>,
    scenario: ..,
}
Which is built with:
let user_data = Rc::new(RefCell::new(udata));

UserSession {
    user_data: user_data.clone(),
    scenario: Box::pin(user_scenario(user_data)),
}
Then, both the session and the generator have access to the UserData each on their turn, and everything is fine.
There is one little wrinkle: be careful of scopes. If you keep a .borrow() alive across a yield point, which is possible, then you will get a run-time error when trying to write to it outside the generator.
A more involved solution would be a queue of messages, which would also involve memory allocation, etc. I would consider your UserData structure a degenerate form of a pair of queues: two queues, each with capacity for one message. You could make it more explicit with a regular queue, but that would not buy you much.
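The shared-ownership scheme can be sketched on stable Rust without the unstable generator feature, with a stateful closure standing in for the generator body (all names are illustrative; `user_scenario` here takes the shared handle, mirroring the answer's `user_scenario(user_data)`):

```rust
use std::cell::RefCell;
use std::rc::Rc;

struct UserData {
    msg_in: String,
}

// Stand-in for the generator: a step function that re-reads the shared
// user data on every resumption.
fn user_scenario(user_data: Rc<RefCell<UserData>>) -> impl FnMut() -> String {
    let mut step = 0;
    move || {
        step += 1;
        // Borrow only for the duration of this statement, never across
        // steps, so the session side can borrow_mut() between calls
        // (this is the "wrinkle" about scopes mentioned above).
        let name = user_data.borrow().msg_in.clone();
        match step {
            1 => "what is your name?".to_string(),
            2 => format!("{}, how are you feeling?", name),
            _ => format!("{}, bye!", name),
        }
    }
}

fn main() {
    let user_data = Rc::new(RefCell::new(UserData { msg_in: String::new() }));
    let mut scenario = user_scenario(Rc::clone(&user_data)); // generator side

    assert_eq!(scenario(), "what is your name?");
    user_data.borrow_mut().msg_in = "anon".to_string(); // session writes input
    assert_eq!(scenario(), "anon, how are you feeling?");
    assert_eq!(scenario(), "anon, bye!");
}
```

The same shape carries over to the real generator: both the session and the generator own an Rc to the same RefCell, and input written between resumptions is visible at the next yield.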

passing around NSManagedObjects

I get strange errors when trying to pass an NSManagedObject through several functions (all in the same VC).
Here are the two functions in question:
func syncLocal(item: NSManagedObject, completionHandler: (NSManagedObject!, SyncResponse) -> Void) {
    let savedValues = item.dictionaryWithValuesForKeys([
        "score",
        "progress",
        "player"])
    doUpload(savedValues) { // do a POST request using params with Alamofire
        (success) in
        if success {
            completionHandler(item, .Success)
        } else {
            completionHandler(item, .Failure)
        }
    }
}
func getSavedScores() {
    do {
        debugPrint("TRYING TO FETCH LOCAL SCORES")
        try frc.performFetch()
        if let results = frc.sections?[0].objects as? [NSManagedObject] {
            if results.count > 0 {
                print("TOTAL SCORE COUNT: \(results.count)")
                let incomplete = results.filter({ $0.valueForKey("success") as! Bool == false })
                print("INCOMPLETE COUNT: \(incomplete.count)")
                let complete = results.filter({ $0.valueForKey("success") as! Bool == true })
                print("COMPLETE COUNT: \(complete.count)")
                if incomplete.count > 0 {
                    for pendingItem in incomplete {
                        self.syncScoring(pendingItem) {
                            (returnItem, response) in
                            let footest = returnItem.valueForKey("player") // only works if stripping syncScoring blank
                            switch response { // response is an enum
                            case .Success:
                                print("SUCCESS")
                            case .Duplicate:
                                print("DUPLICATE")
                            case .Failure:
                                print("FAIL")
                            }
                        }
                    } // sorry for this pyramid of doom
                }
            }
        }
    } catch {
        print("ERROR FETCHING RESULTS")
    }
}
What I am trying to achieve:
1. Look for locally saved scores that could not be submitted to the server.
2. If there are unsubmitted scores, start the POST call to the server.
3. If the POST gets 200 OK, set the item's "success" key to true.
For some odd reason I cannot access returnItem at all in the code editor - only if I completely delete the code in syncLocal so it looks like this:
func syncLocal(item:NSManagedObject,completionHandler:(NSManagedObject!,SyncResponse)->Void) {
completionHandler(item,.Success)
}
If I do that, I can access the properties via dot syntax in the returning block down in the for loop.
Weirdly, if I paste the stuff back into syncLocal, the completion block keeps working: the app compiles and it is executed properly.
Is this some kind of strange Xcode 7 bug? Intended NSManagedObject behaviour?
(Screenshot caption: line 1 was written with syncLocal stripped; line 2 after pasting the rest of the call back in.)
Core Data managed object contexts are thread-confined. That means you can use a particular managed object and its context only on one and the same thread.
In your code, you seem to be using controller-wide variables, such as item. I am assuming item is an NSManagedObject or a subclass thereof, and that its context is the single context you are using in your app. The FRC context must be the main thread context (an NSManagedObjectContext with concurrency type NSMainQueueConcurrencyType).
Obviously, the callback from the server request will run on a background thread, so you cannot use your managed objects there.
You have two solutions. Either you create a child context, do the updates you need to do, save, and then save the main context. This is a bit more involved and you can look for numerous examples and tutorials out there to get started. This is the standard and most robust solution.
Alternatively, inside your background callback, you simply make sure the context updates occur on the main thread.
dispatch_async(dispatch_get_main_queue()) {
    // update your managed objects & save
}
