I'm trying to follow along with the "hello compute" example from wgpu on Windows 10 (with some minor modifications, mainly gutting the shader so it does basically no actual computing), but when I read the buffer at the end, it's always zeroed out.
This is the shader I'm trying to run; it compiles fine and I think it's correct:
[[block]]
struct Numbers
{
data: [[stride(4)]] array<u32>;
};
[[group(0), binding(0)]]
var<storage, read_write> numbers: Numbers;
[[stage(compute), workgroup_size(1)]]
fn main()
{
numbers.data[0] = numbers.data[0] + u32(1);
numbers.data[1] = numbers.data[1] + u32(1);
numbers.data[2] = numbers.data[2] + u32(1);
}
As for the wgpu code, it follows the tutorial quite closely:
I get the instance, device, and queue:
let instance = Instance::new(Backends::PRIMARY);
let adapter = block_on(instance
.request_adapter(&RequestAdapterOptions
{
power_preference: PowerPreference::default(),
compatible_surface: None,
}))
.unwrap();
let (device, queue) = block_on(adapter
.request_device(&Default::default(), None))
.unwrap();
Compile the shader and make a pipeline:
let shader = device.create_shader_module(&ShaderModuleDescriptor
{
label: Some("shader"),
source: ShaderSource::Wgsl(shader_src.into()),
});
let pipeline = device.create_compute_pipeline(&ComputePipelineDescriptor
{
label: None,
layout: None,
module: &shader,
entry_point: "main",
});
Make the staging and storage buffers. The dbg!(size) prints 12, which should be correct for a 3-element array of 4-byte u32s.
let buffer = [1u32, 2, 3];
let size = std::mem::size_of_val(&buffer) as u64;
dbg!(size);
let staging_buffer = device.create_buffer(&BufferDescriptor
{
label: None,
size: size,
usage: BufferUsages::MAP_READ | BufferUsages::COPY_DST,
mapped_at_creation: false,
});
let storage_buffer = device.create_buffer_init(&BufferInitDescriptor
{
label: Some("storage buffer"),
contents: cast_slice(&buffer),
usage: BufferUsages::STORAGE
| BufferUsages::COPY_DST
| BufferUsages::COPY_SRC,
});
Set up the bind group:
let bg_layout = pipeline.get_bind_group_layout(0);
let bind_group = device.create_bind_group(&BindGroupDescriptor
{
label: None,
layout: &bg_layout,
entries: &[BindGroupEntry
{
binding: 0,
resource: storage_buffer.as_entire_binding(),
}]
});
Get the encoder and create the compute pass. The copy_buffer_to_buffer should copy the storage buffer to the staging buffer so I can read it at the end.
let mut encoder = device.create_command_encoder(&CommandEncoderDescriptor
{
label: None,
});
{
let mut cpass = encoder.begin_compute_pass(&ComputePassDescriptor
{
label: None
});
cpass.set_pipeline(&pipeline);
cpass.set_bind_group(0, &bind_group, &[]);
cpass.dispatch(1, 1, 1);
}
encoder.copy_buffer_to_buffer(
&storage_buffer, 0,
&staging_buffer, 0,
size);
queue.submit(Some(encoder.finish()));
Then map the staging buffer and block for the result:
let buf_slice = staging_buffer.slice(..);
let buf_future = buf_slice.map_async(MapMode::Read);
device.poll(Maintain::Wait);
if let Ok(()) = block_on(buf_future)
{
let data = buf_slice.get_mapped_range();
let result = cast_slice::<u8, u32>(&data).to_vec();
drop(data);
staging_buffer.unmap();
println!("{:?}", result);
}
else
{
println!("error");
}
The error case isn't reached and the program terminates without errors, but the result is always printed as [0, 0, 0], when it should be [2, 3, 4].
What am I doing wrong?
The program works fine when I run it on my discrete graphics card; wgpu is bugged on my integrated Intel HD Graphics 630, which is why the program appeared not to work.
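For anyone hitting the same thing, here is a minimal sketch (assuming the same wgpu version as the code above) of one way to steer adapter selection toward the discrete GPU instead of the integrated one; it doesn't fix the Intel driver issue, it just asks wgpu for the high-performance adapter:
// Sketch: only the power_preference field differs from the question's code.
let adapter = block_on(instance
.request_adapter(&RequestAdapterOptions
{
// Prefer the discrete card over the integrated Intel GPU.
power_preference: PowerPreference::HighPerformance,
compatible_surface: None,
}))
.unwrap();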
I want the contents of an imgui window to always look the same size, but when I resize the OS window, the imgui window and its contents scale too. The contents actually look 'wrong' when resized; what I see is a scaled-down version of the 'hello world' example from the imgui-rs repository.
I think I should be able to achieve this with glViewport, but it looks like I have no access to that function or anything equivalent, and ContextWrapper::resize has no effect.
Event::NewEvents(_) => {
let now = Instant::now();
imgui.io_mut().update_delta_time(now - last_frame);
last_frame = now;
}
Event::MainEventsCleared => {
let gl_window = display.gl_window();
platform
.prepare_frame(imgui.io_mut(), gl_window.window())
.expect("Failed to prepare frame");
gl_window.window().request_redraw();
}
Event::RedrawRequested(_) => {
let ui = imgui.new_frame();
let mut run = true;
run_ui(&mut run, ui);
if !run {
*control_flow = ControlFlow::Exit;
}
let gl_window = display.gl_window();
let mut target = display.draw();
target.clear_color_srgb(1.0, 1.0, 1.0, 1.0);
platform.prepare_render(ui, gl_window.window());
let draw_data = imgui.render();
renderer
.render(&mut target, draw_data)
.expect("Rendering failed");
target.finish().expect("Failed to swap buffers");
}
Event::WindowEvent {
event: WindowEvent::Resized(size),
..
} => {
display.gl_window().resize(size);
}
What should I do to keep the contents' size constant and render things correctly?
I'm using imgui with glium and winit.
I'm evaluating/learning flatbuffers and I've written a schema and some basic code. The schema contains two root tables, but when I try to decode a buffer as the wrong root type, it doesn't fail. Is this expected behavior?
schema.fbs:
table Weapon {
name:string;
damage:short;
two_handed:bool;
}
root_type Weapon;
table Shield {
name:string;
damage:short;
}
root_type Shield;
main.rs:
use flatbuffers;
// import the generated code
#[allow(dead_code, unused_imports)]
#[path = "./schema_generated.rs"]
mod schema;
fn main() {
let mut sword_builder = flatbuffers::FlatBufferBuilder::new();
let sword_name = sword_builder.create_string("Sword");
let sword = schema::Weapon::create(
&mut sword_builder,
&schema::WeaponArgs {
name: Some(sword_name),
damage: 10,
two_handed: false,
},
);
sword_builder.finish(sword, None);
let sword_buffer = sword_builder.finished_data();
let mut shield_builder = flatbuffers::FlatBufferBuilder::new();
let shield_name = shield_builder.create_string("Shield");
let shield = schema::Weapon::create(
&mut shield_builder,
&schema::WeaponArgs {
name: Some(shield_name),
damage: 2,
two_handed: true,
},
);
shield_builder.finish(shield, None);
let shield_buffer = shield_builder.finished_data();
// Let's decode our buffers
let sword_decoded = flatbuffers::root::<schema::Weapon>(&sword_buffer).unwrap();
println!("{:#?}", sword_decoded);
let shield_decoded = flatbuffers::root::<schema::Shield>(&shield_buffer).unwrap();
println!("{:#?}", shield_decoded);
// This should fail:
let sword_decoded_failure = flatbuffers::root::<schema::Weapon>(&shield_buffer).unwrap();
println!("{:#?}", sword_decoded_failure);
}
output:
Weapon {
name: Some(
"Sword",
),
damage: 10,
two_handed: false,
}
Shield {
name: Some(
"Shield",
),
damage: 2,
}
Weapon {
name: Some(
"Shield",
),
damage: 2,
two_handed: true,
}
github link: https://github.com/ic3man5/fb_test
Documentation for root:
Gets the root of the Flatbuffer, verifying it first with default options. Note that verification is an experimental feature and may not be maximally performant or catch every error (though that is the goal). See the _unchecked variants for previous behavior.
I would have expected it to be able to catch a basic error like this. If this is expected behavior, I can see two workarounds: prepending a header in front of the bytes to identify the table, or using a flatbuffers union (which I don't want to do).
FlatBuffers only allows one root type per schema and one instance of it per buffer, so your schema needs to change to reflect this.
As for there being no error: a verifier takes a binary buffer of bytes and checks that it can be safely accessed according to the current schema. There is no type information in the binary bytes, so if the bytes happen to be safely accessible under another schema, verification may succeed. To force it to fail, you could add a file_identifier to your schema; if the Rust verifier checks it, decoding with the wrong schema would then be rejected.
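As a rough sketch of that idea ("WEPN" is a made-up identifier here): add file_identifier "WEPN"; next to root_type Weapon; in the schema, pass it when finishing the buffer, and check it before decoding. In a non-size-prefixed flatbuffer the identifier sits in bytes 4..8, right after the 4-byte root offset:
// Sketch only: "WEPN" is a hypothetical 4-character identifier; the schema
// would also need `file_identifier "WEPN";` so generated code knows about it.
sword_builder.finish(sword, Some("WEPN")); // instead of finish(sword, None)
let sword_buffer = sword_builder.finished_data();
// Reject buffers built for a different schema before calling flatbuffers::root.
if &sword_buffer[4..8] != &b"WEPN"[..] {
panic!("not a Weapon buffer");
}
let sword_decoded = flatbuffers::root::<schema::Weapon>(&sword_buffer).unwrap();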
I've attempted to create a pipeline for receiving RTP video/audio streams via GStreamer using the gstreamer-rs crate, but I am not having much luck. Here is a quick distillation of my approach:
let udp_src = ElementFactory::make("udpsrc", Some("udp_src"))?;
udp_src.set_property("port", &5004);
udp_src.set_property("caps", &"application/x-rtp".to_string());
let rtpbin = ElementFactory::make("rtpbin", Some("rtp_bin"))?;
let pipeline = Pipeline::new(Some("RTP Pipeline"));
pipeline.add_many(&[&udp_src, &rtpbin]);
udp_src.link(&rtpbin)?;
rtpbin.connect_pad_added(move |src, src_pad| {
println!("Received new pad {} from {}", src_pad.get_name(), src.get_name());
let new_pad_caps = src_pad.get_current_caps().expect("Failed to get caps of new pad");
let new_pad_struct = new_pad_caps.get_structure(0).expect("Failed to get first structure of caps");
let new_pad_type = new_pad_struct.get_name();
println!("{}", new_pad_type);
});
But I'm not getting anything from the connect_pad_added when I run the code and send UDP signals to the specified port. I'm not even sure if this is the right approach, and would appreciate any directions on where to find examples or pointers on how to use rtpbin with udpsrc.
You're not specifying to which pad of the rtpbin you're linking your udpsrc, so it probably selects the wrong one here (maybe the one for a sender-rtpbin).
Try with udp_src.link_pads(Some("src"), &rtpbin, Some("recv_rtp_sink_%u")) instead. Then you should get a pad-added signal with a "recv_rtp_src_%u_%u_%u" pad name. The first number will be 0 (the session), and the other two will be the SSRC and the payload type.
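For instance, a minimal sketch against the question's code (same gstreamer-rs version assumed), replacing the plain udp_src.link(&rtpbin)? call:
// Link the udpsrc explicitly to the rtpbin's RTP receive sink pad instead of
// letting link() pick a pad on its own.
udp_src.link_pads(Some("src"), &rtpbin, Some("recv_rtp_sink_%u"))?;
// The existing connect_pad_added handler should then report pads named
// "recv_rtp_src_<session>_<ssrc>_<pt>".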
You can link a udp source to an rtpbin via the following:
fn get_static_pad(element: &Element, pad_name: &'static str) -> Result<Pad, Error> {
let pad = element.get_static_pad(pad_name);
pad.ok_or(Error::msg(format!("Failed to get pad: {}", pad_name)))
}
fn get_request_pad(element: &Element, pad_name: &'static str) -> Result<Pad, Error> {
match element.get_request_pad(pad_name) {
Some(pad) => Ok(pad),
None => {
let element_name = element.get_name();
Err(Error::msg(format!("Failed to get request pad {} on element {}", pad_name, element_name)))
}
}
}
let rtp_udp_src = ElementFactory::make("udpsrc", Some("rtp_udp_src"))?;
let caps = Caps::new_simple("application/x-rtp", &[("clock-rate", &90000i32)]);
rtp_udp_src.set_property("port", &5006);
rtp_udp_src.set_property("caps", &caps);
let rtpbin = ElementFactory::make("rtpbin", Some("rtp_bin"))?;
rtpbin.connect("request-pt-map", false, |values| {
let pt = values[2].get::<u32>().expect("Invalid argument");
match pt {
Some(100) => Some(
Caps::new_simple(
"application/x-rtp",
&[
("media", &"video"),
("clock-rate", &90000i32),
],
)
.to_value(),
),
Some(96) => Some(
Caps::new_simple(
"application/x-rtp",
&[
("media", &"video"),
("clock-rate", &90000i32),
("encoding-name", &"VP8"),
],
)
.to_value(),
),
_ => None,
}
})?;
let rtp_udp_src_pad = get_static_pad(&rtp_udp_src, "src")?;
let rtp_recv_sink_pad = get_request_pad(&rtpbin, "recv_rtp_sink_0")?;
rtp_udp_src_pad.link(&rtp_recv_sink_pad).expect("Failed to link rtp_udp_src_pad and rtpbin");
This will correctly fire off the connect_pad_added sequence and let you use it with other elements.
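From there, a rough sketch (not part of the original answer) of what the pad-added handler could do once the receive pad appears, assuming a hypothetical depayloader element depay (e.g. created with ElementFactory::make("rtpvp8depay", None)?) has already been added to the pipeline:
// Sketch: when rtpbin exposes a "recv_rtp_src_..." pad, link it to the
// depayloader's sink pad. `depay` is a hypothetical element created and
// added to the pipeline beforehand.
let depay_sink_pad = get_static_pad(&depay, "sink")?;
rtpbin.connect_pad_added(move |_rtpbin, src_pad| {
if src_pad.get_name().starts_with("recv_rtp_src_") {
src_pad
.link(&depay_sink_pad)
.expect("Failed to link rtpbin src pad to depayloader");
}
});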
I'm using Yew to program a theme switcher that cycles through different themes on click.
This is my update function. It gets the current theme, which is stored in shared state, and depending on what comes next inside theme_cycle, the theme value inside the shared state gets set accordingly.
fn update(&mut self, msg: Self::Message) -> ShouldRender {
match msg {
Msg::ChangeTheme => {
let theme_cycle: [&str; 3] = ["light", "dark", "rust"];
let current_theme = self.props.handle.state().theme.clone();
// eval next theme
let next_theme = match theme_cycle.iter().position(|x| x == &current_theme) {
None => theme_cycle[0].to_string(),
Some(i) => {
if i >= (current_theme.len() - 1) {
theme_cycle[0].to_string()
} else {
theme_cycle[i + 1].to_string()
}
},
};
// set next theme
self.props.handle.reduce(move |state| state.theme = next_theme.clone());
// store it inside localstorage
},
Msg::ToggleLangDropdown => self.show_dropdown = !self.show_dropdown,
};
true
}
But if the theme value inside the shared state is "rust" and I click the button that calls Msg::ChangeTheme again, the theme should be set to "light", but instead my code panics and I get an "Uncaught Error: undefined" in the browser console.
I've found a workaround: instead of using an array and indexing into it, I do the same task with iterators, and I made sure that the update function takes no ownership of any variable outside the function itself (I don't really know if that's actually necessary, though...).
fn update(&mut self, msg: Self::Message) -> ShouldRender {
match msg {
Msg::ChangeTheme => {
let theme_cycle = ["light".to_string(), "dark".to_string(), "rust".to_string()];
let current_theme = self.props.handle.state().theme.clone();
let indexof_current_theme = match theme_cycle.iter().position(|x| x.to_string() == current_theme) {
None => 0,
Some(x) => x.clone(),
};
let next_theme = match theme_cycle.iter().nth(indexof_current_theme + 1) {
None => theme_cycle.iter().nth(0).unwrap().clone(),
Some(x) => x.clone(),
};
self.props.handle.reduce(move |state| state.theme = next_theme.to_string());
},
Msg::ToggleLangDropdown => self.show_lang_dropdown = !self.show_lang_dropdown,
Msg::ToggleThemeDropdown => self.show_theme_dropdown = !self.show_theme_dropdown,
};
true
}
It would still be cool if anyone knows what I did wrong in my first attempt.
I'm storing the XP in a JSON file for practice. How can I display the top 10 users? I would have wanted to sort it first, but I don't know which way would be more efficient.
let xpAdd = Math.floor(Math.random() * 9) + 8;
if(!xp[message.author.id]) {
xp[message.author.id] = {
xp:0,
level:1
};
}
let curxp = xp[message.author.id].xp;
let curlvl = xp[message.author.id].level;
let nxtLevel = xp[message.author.id].level * 300;
xp[message.author.id].xp = curxp + xpAdd;
fs.writeFile("./xp.json", JSON.stringify(xp), (error) => {
if(error) console.log(error);
});
This is the code I use to store the XP.
And with this I show the XP level:
if(!xp[message.author.id]) {
xp[message.author.id] = {
xp: 0,
level:1
};
}
let curxp = xp[message.author.id].xp;
let curlvl = xp[message.author.id].level;
let nxtLevelXp = curlvl * 300;
let difference = nxtLevelXp - curxp;
I'd suggest converting your object to an array so that you can format it and sort it however you prefer.
// sample object
// I'm going to show the top three in this example in the interest of space
const xp = {
"ID #1": {
level: 3,
xp: 300,
},
"ID #2": {
level: 4,
xp: 400,
},
"ID #3": {
level: 2,
xp: 200,
},
"ID #4": {
level: 1,
xp: 100,
},
};
// sort entries in order of exp (descending), then single out the top three
let entries = Object.entries(xp).sort(([, a], [, b]) => b.xp > a.xp ? 1 : -1).slice(0, 3);
// map entries to the preferred format using the data in their objects
// (in your actual command, you should use `client.users.cache.get(id).tag`,
// but since I don't have access to the client object here, I'm just writing `id`)
entries = entries.map(([id, { level, xp }], idx) => `${idx + 1} - ${id} (level ${level}; ${xp} xp)`);
// join each user by a line break
console.log(entries.join('\n'));
Working With Objects
Object.entries()
Array.prototype.sort()
Array.prototype.map()
Array.prototype.join()