Is it possible to compile a Vulkano shader at runtime? - rust

I've been using Vulkano to get some simple 3D graphics going. Generally, I like to write my GLSL shaders in text and restart my program, or even change shaders while the program is running. The examples given in Vulkano appear to use a macro to convert the GLSL to some form of SPIR-V based shader with Rust functions attached, but the GLSL is actually compiled into the binary (even when using a path to a file).
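For reference, this is roughly what that macro approach looks like (a sketch assuming the vulkano_shaders crate; the GLSL at path is compiled by the proc macro when the crate builds, not at runtime):
// Compile-time approach from the vulkano examples: the shader source is
// baked into the binary at build time, so editing vert.glsl at runtime
// has no effect until the program is recompiled.
mod vs {
    vulkano_shaders::shader! {
        ty: "vertex",
        path: "src/grafx/vert.glsl",
    }
}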
I've managed to get the crate shaderc to build my SPIR-V on the fly:
let mut f = File::open("src/grafx/vert.glsl")
.expect("Can't find file src/bin/runtime-shader/vert.glsl
This example needs to be run from the root of the example crate.");
let mut source = String::new();
f.read_to_string(&mut source);
//let source = "#version 310 es\n void EP() {}";
let mut compiler = shaderc::Compiler::new().unwrap();
let mut options = shaderc::CompileOptions::new().unwrap();
options.add_macro_definition("EP", Some("main"));
let binary_result = compiler.compile_into_spirv(
&source, shaderc::ShaderKind::Vertex,
"shader.glsl", "main", Some(&options)).unwrap();
assert_eq!(Some(&0x07230203), binary_result.as_binary().first());
let text_result = compiler.compile_into_spirv_assembly(
&source, shaderc::ShaderKind::Vertex,
"shader.glsl", "main", Some(&options)).unwrap();
assert!(text_result.as_text().starts_with("; SPIR-V\n"));
//println!("Compiled Vertex Shader: {}", text_result.as_text());
let vert_spirv = {
unsafe { ShaderModule::new(device.clone(), binary_result.as_binary_u8()) }.unwrap()
};
vert_spirv
So far, so good: we have a ShaderModule, which seems to be the first step. However, what we actually need is a GraphicsEntryPoint, which we can then put into our GraphicsPipeline. Apparently, GraphicsPipeline is where we string together our shaders, triangles, depth maps and all that lovely stuff.
Trouble is, I've no idea what is going on with the code that performs this feat:
pub fn shade_vertex<'a, S>(vert_spirv: &'a Arc<ShaderModule>)
    -> GraphicsEntryPoint<'a, S, VertInput, VertOutput, VertLayout>
{
    let tn = unsafe {
        vert_spirv.graphics_entry_point(
            CStr::from_bytes_with_nul_unchecked(b"main\0"),
            VertInput,
            VertOutput,
            VertLayout(ShaderStages { vertex: true, ..ShaderStages::none() }),
            GraphicsShaderType::Vertex,
        )
    };
    tn
}
Specifically, what is VertInput and VertOutput? I've copied them from the example.
This is the closest example I could find that deals with loading shaders on the fly. It looks like Input and Output describe the entry points into the SPIR-V or something, but I've no idea what to do with that. I'm hoping there is a function somewhere in the existing macro that will just take care of this for me. I've gotten this far, but I seem a little stuck.
Has anyone else tried loading shaders at runtime?

I'm using wgpu. I've made my device and render_pipeline shareable across threads like this:
let rx = Arc::new(Mutex::new(rx));
let window = Arc::new(Mutex::new(window));
let fs = Arc::new(Mutex::new(fs));
let fs_module = Arc::new(Mutex::new(fs_module));
let render_pipeline = Arc::new(Mutex::new(render_pipeline));
let device = Arc::new(Mutex::new(device));
Then I used notify to listen for change events:
notify = "4.0.15"
use notify::{RecommendedWatcher, Watcher, RecursiveMode};
// in main
let (tx, rx) = mpsc::channel();
let mut watcher: RecommendedWatcher =
Watcher::new(tx, Duration::from_millis(500)).unwrap();
log::info!("Starting watcher on {:?}", *FRAG_SHADER_PATH);
watcher.watch((*FRAG_SHADER_PATH).clone(), RecursiveMode::NonRecursive).unwrap();
Then I spawn a thread that listens for changes:
thread::spawn(move || {
    log::info!("Shader watcher thread spawned");
    loop {
        if let Ok(notify::DebouncedEvent::Write(..)) = rx.lock().unwrap().recv() {
            log::info!("Write event in fragment shader");
            window.lock().unwrap().set_title("Loading shader.frag...");
            *fs.lock().unwrap() = load_fs().unwrap();
            *fs_module.lock().unwrap() =
                load_fs_module(Arc::clone(&device), &Arc::clone(&fs).lock().unwrap());
            *render_pipeline.lock().unwrap() =
                create_render_pipeline_multithreaded(Arc::clone(&device), Arc::clone(&fs_module));
            render.lock().unwrap().deref_mut()();
            window.lock().unwrap().set_title(TITLE);
        }
    }
});
where load_fs is a closure that uses glsl_to_spirv:
let load_fs = move || -> Result<Vec<u32>, std::io::Error> {
    log::info!("Loading fragment shader");
    let mut buffer = String::new();
    let mut f = File::open(&*FRAG_SHADER_PATH)?;
    f.read_to_string(&mut buffer)?;
    // Load fragment shader
    wgpu::read_spirv(
        glsl_to_spirv::compile(
            &buffer,
            glsl_to_spirv::ShaderType::Fragment,
        )
        .expect("Compilation failed"),
    )
};
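load_fs_module isn't shown above; as a rough sketch (assuming a pre-0.6 wgpu, where Device::create_shader_module takes the SPIR-V words directly, and reusing the Arc<Mutex<...>> wrappers from above), it can be as small as:
// Hypothetical helper matching the call in the watcher thread; the exact
// create_shader_module signature differs in newer wgpu versions.
fn load_fs_module(device: Arc<Mutex<wgpu::Device>>, spirv: &[u32]) -> wgpu::ShaderModule {
    device.lock().unwrap().create_shader_module(spirv)
}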

There is an updated example for this in the vulkano repository.
I followed that and the example for shaderc-rs to get to this:
fn compile_to_spirv(src: &str, kind: shaderc::ShaderKind, entry_point_name: &str) -> Vec<u32> {
    let mut f = File::open(src).unwrap_or_else(|_| panic!("Could not open file {}", src));
    let mut glsl = String::new();
    f.read_to_string(&mut glsl)
        .unwrap_or_else(|_| panic!("Could not read file {} to string", src));
    let compiler = shaderc::Compiler::new().unwrap();
    let mut options = shaderc::CompileOptions::new().unwrap();
    options.add_macro_definition("EP", Some(entry_point_name));
    compiler
        .compile_into_spirv(&glsl, kind, src, entry_point_name, Some(&options))
        .expect("Could not compile GLSL shader to SPIR-V")
        .as_binary()
        .to_vec()
}
let vs = {
    unsafe {
        ShaderModule::from_words(
            device.clone(),
            &compile_to_spirv(
                "shaders/triangle/vs.glsl",
                shaderc::ShaderKind::Vertex,
                "main",
            ),
        )
    }
    .unwrap()
};
After this, vs can be used as in the example.
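With a recent vulkano you no longer need the GraphicsEntryPoint plumbing from the question; the module exposes its entry points directly. Roughly (a sketch only, since the builder methods vary between vulkano versions):
// Sketch: plug the runtime-compiled module into a pipeline via its entry point.
let pipeline = GraphicsPipeline::start()
    .vertex_shader(vs.entry_point("main").unwrap(), ())
    // ... vertex input state, fragment shader, render pass, etc.
    .build(device.clone())
    .unwrap();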

Related

How do I modify a mesh after it has been created in bevy rust?

I am working on a procedural generation system and want to be able to modify a mesh frame by frame in Bevy (Rust).
I have tried using assets.get_mut(), however this results in an error: help: trait `DerefMut` is required to modify through a dereference, but it is not implemented for `bevy::prelude::Res<'_, bevy::prelude::Assets<bevy::prelude::Mesh>>`
Any help would be greatly appreciated.
Here is what my current code looks like roughly:
// Function which is executed at the very start
fn setup(
    mut commands: Commands,
    mut meshes: ResMut<Assets<Mesh>>,
    mut materials: ResMut<Assets<StandardMaterial>>,
    asset_server: Res<AssetServer>
) {
    let mesh = Mesh::from(bevy::prelude::shape::Icosphere { radius: 0.5, subdivisions: 10 });
    commands.spawn()
        .insert_bundle(PbrBundle {
            mesh: meshes.add(mesh),
            material: materials.add(colour.into()),
            ..Default::default()
        })
        .insert(Transform::from_xyz(0.0, 0.0, 0.0));
}
// Function which is executed every frame
fn update_planets(
    mut query: Query<(&Transform, &Handle<Mesh>)>,
    assets: Res<Assets<Mesh>>
) {
    let (transform, handle) = query.get_single_mut().expect("");
    let mesh = assets.get_mut(handle.id); // Error caused here
    if let Some(mesh) = mesh {
        let positions = mesh.attribute(Mesh::ATTRIBUTE_POSITION).unwrap();
        if let VertexAttributeValues::Float32x3(values) = positions {
            let mut temporary = Vec::new();
            for i in values {
                let temp = Vec3::new(i[0], i[1], i[2]);
                // ... Modify temp here
                temporary.push(temp);
            }
            mesh.insert_attribute(Mesh::ATTRIBUTE_POSITION, temporary);
        }
    }
}
Fairly new to Bevy, but if you want to mutate an asset, I believe you should be using ResMut rather than Res.
i.e. assets: Res<Assets<Mesh>> should actually be mut assets: ResMut<Assets<Mesh>>.
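Applied to the code above, that would look something like this (a sketch assuming a Bevy version where Assets::get_mut accepts a &Handle<Mesh>):
fn update_planets(
    mut query: Query<(&Transform, &Handle<Mesh>)>,
    mut assets: ResMut<Assets<Mesh>>, // ResMut instead of Res
) {
    let (_transform, handle) = query.get_single_mut().expect("exactly one mesh entity");
    // get_mut now yields Option<&mut Mesh>, so the attributes can be modified in place
    if let Some(mesh) = assets.get_mut(handle) {
        // ... rebuild and insert the position attribute here
    }
}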

Tui-rs: flickering when drawing multiple widgets

Good evening!
I'm trying to write a very simple terminal application that draws two textboxes on screen, accepting input on one and showing output on the other, using Rust and tui-rs. The first part works perfectly, but my problems arose when I tried to draw two blocks at the same time: for some reason, it only shows the second block (in order of drawing), and if I move my mouse, it flickers between the two in a weird way. My best guess is that this is due to my drawing implementation, which somehow "clears" the screen whenever it needs to draw something, but if that's the case, I couldn't find any doc on it, and I wouldn't know how to work around it. I've provided some code that should be enough to replicate the issue on a smaller scale.
#![allow(unused_imports)]
#![allow(unused_variables)]
use crossterm::{
event::{self, DisableMouseCapture, EnableMouseCapture, Event},
execute,
terminal::{disable_raw_mode, enable_raw_mode, EnterAlternateScreen, LeaveAlternateScreen},
};
use std::io;
use tui::{
backend::CrosstermBackend,
layout::Rect,
widgets::{Block, Borders},
Terminal,
};
struct FirstStruct {}
impl FirstStruct {
pub fn draw(&self, term: &mut Terminal<CrosstermBackend<io::Stdout>>) -> io::Result<()> {
term.draw(|f| {
let size = f.size();
let (w, h) = (size.width / 2, size.height);
let (x, y) = (size.x, size.y);
let rect = Rect::new(x, y, w, h);
let block = Block::default()
.title("One")
.borders(Borders::ALL);
f.render_widget(block, rect)
})?;
Ok(())
}
}
struct SecondStruct { }
impl SecondStruct {
pub fn draw(&self, term: &mut Terminal<CrosstermBackend<io::Stdout>>) -> io::Result<()> {
term.draw(|f| {
let size = f.size();
let (w, h) = (size.width / 2, size.height);
let (x, y) = (size.x + w, size.y);
let rect = Rect::new(x, y, w, h);
let block = Block::default()
.title("Two")
.borders(Borders::ALL);
f.render_widget(block, rect)
})?;
Ok(())
}
}
fn main() -> io::Result<()>{
enable_raw_mode()?;
let mut stdout = io::stdout();
execute!(stdout, EnterAlternateScreen, EnableMouseCapture)?;
let backend = CrosstermBackend::new(stdout);
let mut terminal = Terminal::new(backend)?;
let first = FirstStruct {};
let second = SecondStruct {};
let mut running = true;
while running {
if let Event::Key(key) = event::read()? {
running = false;
}
second.draw(&mut terminal)?;
first.draw(&mut terminal)?;
}
disable_raw_mode()?;
execute!(
terminal.backend_mut(),
LeaveAlternateScreen,
DisableMouseCapture
)?;
terminal.show_cursor()?;
Ok(())
}
Does anybody know how I can fix this issue? Thanks in advance!
Every time you call Terminal::draw(), you must draw everything that you want to be visible at once. Instead of passing Terminal to your own draw functions, pass the Frame that you get from Terminal::draw(). That is, replace
second.draw(&mut terminal)?;
first.draw(&mut terminal)?;
with
terminal.draw(|f| {
    first.draw(f);
    second.draw(f);
})?;
and change the signatures of FirstStruct::draw and SecondStruct::draw to match.
Also, it would be more usual to, instead of computing the rectangle for each widget in the individual functions, decide at the top level (using Layout, perhaps) and pass Rects down to the drawing functions. That way, they can be positioned differently in different situations. What you have will work, but it's not as easy to change.
Layout code from the documentation's example, adjusted to your situation:
terminal.draw(|f| {
    let chunks = Layout::default()
        .direction(Direction::Horizontal)
        .constraints(
            [
                Constraint::Percentage(50),
                Constraint::Percentage(50),
            ].as_ref()
        )
        .split(f.size());
    first.draw(f, chunks[0]);
    second.draw(f, chunks[1]);
})?;
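For completeness, here is a sketch of what the adjusted draw methods could look like after this change (render_widget is infallible, so the io::Result return type is no longer needed):
use std::io;
use tui::{backend::CrosstermBackend, layout::Rect, widgets::{Block, Borders}, Frame};

impl FirstStruct {
    // Takes the Frame from Terminal::draw() plus a Rect chosen by the caller
    pub fn draw(&self, f: &mut Frame<CrosstermBackend<io::Stdout>>, area: Rect) {
        let block = Block::default().title("One").borders(Borders::ALL);
        f.render_widget(block, area);
    }
}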

Peripheral Initialisation of GPIO Output with stm32f1xx_hal on bluepill development board

I would like to initialize a basic output GPIO pin on my blue pill board. I am using Rust and the stm32f1xx_hal crate. I want to create a struct Peripherals which holds the handle to the output in the following way:
use cortex_m_rt;
use stm32f1xx_hal::{
pac,
prelude::*,
gpio,
afio,
serial::{Serial, Config},
};
use crate::pac::{USART1};
type GpioOutput = gpio::gpioc::PC13<gpio::Output<gpio::PushPull>>;
pub struct Peripherals{
led: Option<GpioOutput>
}
impl Peripherals {
fn init() -> Peripherals {
let dp = pac::Peripherals::take().unwrap();
let cp = cortex_m::Peripherals::take().unwrap();
// set clock frequency to internal 8mhz oscillator
let mut rcc = dp.RCC.constrain();
let mut flash = dp.FLASH.constrain();
let clocks = rcc.cfgr.sysclk(8.mhz()).freeze(&mut flash.acr);
// access GPIOC registers
let mut gpioc = dp.GPIOC.split(&mut rcc.apb2);
return Peripherals{
led: Peripherals::init_led(&mut gpioc)
}
}
fn init_led(gpioc: &mut gpio::gpioc::Parts) -> Option<GpioOutput> {
let led = &gpioc.pc13.into_push_pull_output(&mut gpioc.crh);
return Some(led);
}
}
This code does not work, since init_led returns Option<&GpioOutput>. Now I am wondering if it makes sense to use a lifetime parameter in the Peripherals struct and store a reference to the GpioOutput within the struct. Or is it more sensible to store the unreferenced value - and how would I implement either of these options?
The only solution which seems to work is moving the init_led code to the scope of the init function:
return Peripherals{
led: Some(gpioc.pc13.into_push_pull_output(&mut gpioc.crh))
}
But I would like to separate that code into its own function. How can I do that?
OK, I figured out a way, in case someone else is having the same problem:
pub fn init() -> Peripherals {
let dp = pac::Peripherals::take().unwrap();
let cp = cortex_m::Peripherals::take().unwrap();
// set clock frequency to internal 8mhz oscillator
let rcc = dp.RCC.constrain();
let mut flash = dp.FLASH.constrain();
// access GPIOC and GPIOB registers and prepare the alternate function I/O registers
let mut apb2 = rcc.apb2;
let gpioc = dp.GPIOC.split(&mut apb2);
let clocks = rcc.cfgr.sysclk(8.mhz()).freeze(&mut flash.acr);
return Peripherals{
led: Peripherals::init_led(gpioc)
}
}
fn init_led(mut gpioc: stm32f1xx_hal::gpio::gpioc::Parts) -> Option<GpioOutput> {
let led = gpioc.pc13.into_push_pull_output(&mut gpioc.crh);
return Some(led);
}
I am just wondering if this is the correct way to do it, or will it create extra overhead because I am passing gpioc by value instead of by reference in the init_led function?

How can I figure out why a call to LLVMTargetMachineEmitToFile fails when called using llvm-sys?

extern crate llvm_sys;
use llvm_sys::*;
use llvm_sys::prelude::*;
use llvm_sys::core::*;
pub fn emit(module: LLVMModuleRef) {
unsafe {
use llvm_sys::target::*;
use llvm_sys::target_machine::*;
let triple = LLVMGetDefaultTargetTriple();
LLVM_InitializeNativeTarget();
let target = LLVMGetFirstTarget();
let cpu = "x86-64\0".as_ptr() as *const i8;
let feature = "\0".as_ptr() as *const i8;
let opt_level = LLVMCodeGenOptLevel::LLVMCodeGenLevelNone;
let reloc_mode = LLVMRelocMode::LLVMRelocDefault;
let code_model = LLVMCodeModel::LLVMCodeModelDefault;
let target_machine = LLVMCreateTargetMachine(target, triple, cpu, feature, opt_level, reloc_mode, code_model);
let file_type = LLVMCodeGenFileType::LLVMObjectFile;
LLVMTargetMachineEmitToFile(target_machine, module, "/Users/andyshiue/Desktop/main.o\0".as_ptr() as *mut i8, file_type, ["Cannot generate file.\0".as_ptr()].as_mut_ptr() as *mut *mut i8);
}
}
I'm writing a toy compiler and I want to generate object files, but the file LLVM outputs is empty.
I found that LLVMTargetMachineEmitToFile returns 1, which means something I'm doing is wrong, but what am I doing wrong?
Better still, how can I find out what went wrong? Is there any way I can get some error message? I don't have any experience in C/C++.
As commenters have already said, to do what you want to do (write a compiler using LLVM), you are going to need to be able to read (and probably write) at the very least C and maybe C++.
Even though you are compiling code with the Rust compiler, you aren't really writing any Rust yet. Your entire program is wrapped in unsafe blocks because you are calling the C functions exposed by LLVM (which is written in C++). This may be why some commenters are asking if you have gotten your code to work in C first.
As in your other question, you are still calling the LLVM methods incorrectly. In this case, review the documentation for LLVMTargetMachineEmitToFile:
LLVMBool LLVMTargetMachineEmitToFile(LLVMTargetMachineRef T,
LLVMModuleRef M,
char *Filename,
LLVMCodeGenFileType codegen,
char **ErrorMessage)
Returns any error in ErrorMessage. Use LLVMDisposeMessage to dispose the message.
The method itself will tell you what is wrong, but you have to give it a place to store the error message. You should not provide an error string to it. I'm pretty sure that the current code is likely to generate some exciting memory errors when it tries to write to the string literal.
If I rewrite your code to use the error message:
extern crate llvm_sys;
use llvm_sys::*;
use llvm_sys::prelude::*;
use llvm_sys::core::*;
use std::ptr;
use std::ffi::{CStr, CString};
pub fn emit(module: LLVMModuleRef) {
let cpu = CString::new("x86-64").expect("invalid cpu");
let feature = CString::new("").expect("invalid feature");
let output_file = CString::new("/tmp/output.o").expect("invalid file");
unsafe {
use llvm_sys::target::*;
use llvm_sys::target_machine::*;
let triple = LLVMGetDefaultTargetTriple();
LLVM_InitializeNativeTarget();
let target = LLVMGetFirstTarget();
let opt_level = LLVMCodeGenOptLevel::LLVMCodeGenLevelNone;
let reloc_mode = LLVMRelocMode::LLVMRelocDefault;
let code_model = LLVMCodeModel::LLVMCodeModelDefault;
let target_machine = LLVMCreateTargetMachine(target, triple, cpu.as_ptr(), feature.as_ptr(), opt_level, reloc_mode, code_model);
let file_type = LLVMCodeGenFileType::LLVMObjectFile;
let mut error_str = ptr::null_mut();
let res = LLVMTargetMachineEmitToFile(target_machine, module, output_file.as_ptr() as *mut i8, file_type, &mut error_str);
if res == 1 {
let x = CStr::from_ptr(error_str);
panic!("It failed! {:?}", x);
// TODO: Use LLVMDisposeMessage here
}
}
}
fn main() {
unsafe {
let module = LLVMModuleCreateWithName("Main\0".as_ptr() as *const i8);
emit(module);
}
}
Running this reveals the actual error message:
TargetMachine can't emit a file of this type
So that's your problem.
Rust-wise, you may want to wrap up the work needed to handle the silly LLVMBool so you can reuse it. One way would be:
fn llvm_bool<F>(f: F) -> Result<(), String>
where F: FnOnce(&mut *mut i8) -> i32
{
let mut error_str = ptr::null_mut();
let res = f(&mut error_str);
if res == 1 {
let err = unsafe { CStr::from_ptr(error_str) };
Err(err.to_string_lossy().into_owned())
//LLVMDisposeMessage(error_str);
} else {
Ok(())
}
}
// later
llvm_bool(|error_str| LLVMTargetMachineEmitToFile(target_machine, module, output_file.as_ptr() as *mut i8, file_type, error_str)).expect("Couldn't output");

How should I read the contents of a file respecting endianess?

I can see that in Rust I can read a file to a byte array with:
File::open(&Path::new("fid")).read_to_end();
I can also read just one u32 in either big endian or little endian format with:
File::open(&Path::new("fid")).read_be_u32();
File::open(&Path::new("fid")).read_le_u32();
but as far as I can see, I'm going to have to do something like this (simplified):
let path = Path::new("fid");
let mut file = File::open(&path);
let mut v = vec![];
for n in range(1u64, path.stat().unwrap().size/4u64){
v.push(if big {
file.read_be_u32()
} else {
file.read_le_u32()
});
}
But that's ugly as hell and I'm just wondering if there's a nicer way to do this.
OK, so the if in the loop was a big part of what was ugly, so I hoisted that as suggested; the new version is as follows:
let path = Path::new("fid");
let mut file = File::open(&path);
let mut v = vec![];
let fun = if big {
||->IoResult<u32>{file.read_be_u32()}
} else {
||->IoResult<u32>{file.read_le_u32()}
};
for n in range(1u64, path.stat().unwrap().size/4u64){
v.push(fun());
}
I learned about range_step and using _ as the loop variable, so now I'm left with:
let path = Path::new("fid");
let mut file = File::open(&path);
let mut v = vec![];
let fun = if big {
||->IoResult<u32>{file.read_be_u32()}
} else {
||->IoResult<u32>{file.read_le_u32()}
};
for _ in range_step(0u64, path.stat().unwrap().size,4u64){
v.push(fun().unwrap());
}
Any more advice? This is already looking much better.
This solution reads the whole file into a buffer, then creates a view of the buffer as words, then maps those words into a vector, converting endianness. collect() avoids all the reallocations of growing a mutable vector. You could also mmap the file rather than reading it into a buffer.
use std::io::File;
use std::num::{Int, Num};
fn from_bytes<'a, T: Num>(buf: &'a [u8]) -> &'a [T] {
unsafe {
std::mem::transmute(std::raw::Slice {
data: buf.as_ptr(),
len: buf.len() / std::mem::size_of::<T>()
})
}
}
fn main() {
let buf = File::open(&Path::new("fid")).read_to_end().unwrap();
let words: &[u32] = from_bytes(buf.as_slice());
let big = true;
let v: Vec<u32> = words.iter().map(if big {
|&n| { Int::from_be(n) }
} else {
|&n| { Int::from_le(n) }
}).collect();
println!("{}", v);
}
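For readers on modern Rust: the same idea can be written safely without transmute, using chunks_exact and the from_be_bytes/from_le_bytes constructors. A sketch (not the 2014-era API used above):
use std::fs;
use std::io;

fn read_words(path: &str, big: bool) -> io::Result<Vec<u32>> {
    let buf = fs::read(path)?;
    // Any trailing bytes that don't form a full word are ignored.
    Ok(buf
        .chunks_exact(4)
        .map(|b| {
            let bytes = [b[0], b[1], b[2], b[3]];
            if big { u32::from_be_bytes(bytes) } else { u32::from_le_bytes(bytes) }
        })
        .collect())
}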
