I'm trying to wrap my head around how to do some simple plotting in egui. I have a data member in my MyApp struct that is a Vec<Value>.
Is there a way to pass that into Points::new(Values::from_values(data.to_vec())) without creating a copy of the values?
The examples generally generate the data on the fly, but it feels a bit excessive to read from disk and parse the text data for each frame.
struct MyApp {
    data: Vec<Value>,
}

myplot.show(ui, |plot_ui| {
    let points = Points::new(Values::from_values(data.to_vec()));
    plot_ui.points(points);
});
but it feels a bit excessive to read from disk and parse the text data for each frame.
You don't have to (and should not) do this every frame. Parse it once and store the results in a persistent structure, and copy from there during your show closure.
That said, it does look like you will need to create a new Points object every frame, as plot_ui.points takes the points object by value. The way you are doing it now - storing a Vec<Value> and cloning it each frame - is probably the best you are going to get.
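For concreteness, here is a minimal sketch of that pattern, assuming the older egui::plot API (Plot, Points, Value, Values) that the question's snippet uses; the disk read/parse step is stubbed out with generated data, so treat it as an illustration rather than a drop-in implementation:

use egui::plot::{Plot, Points, Value, Values};

struct MyApp {
    // Parsed once at startup and kept for the lifetime of the app.
    data: Vec<Value>,
}

impl MyApp {
    fn new() -> Self {
        // Stand-in for the real "read from disk and parse" step; the point is
        // that it runs once, not every frame.
        let data = (0..100)
            .map(|i| Value::new(i as f64, (i as f64 * 0.1).sin()))
            .collect();
        Self { data }
    }

    fn plot(&self, ui: &mut egui::Ui) {
        Plot::new("my_plot").show(ui, |plot_ui| {
            // The only per-frame cost is cloning the already-parsed Vec<Value>.
            let points = Points::new(Values::from_values(self.data.clone()));
            plot_ui.points(points);
        });
    }
}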
Related
At the beginning of my program, I read data from a file:
let file = std::fs::File::open("data/games.json").unwrap();
let data: Games = serde_json::from_reader(file).unwrap();
I would like to know how it would be possible to do this at compile time for the following reasons:
Performance: no need to deserialize at runtime
Portability: the program can be run on any machine without the need to have the json file containing the data with it.
It might also be useful to mention that the data is read-only, which means the solution can store it as static.
This is straightforward, but leads to some potential issues. First, we need to settle one thing: do we want to build the tree of objects at compile time, or embed the file and parse it once at runtime?
99% of the time, parsing on boot into a static ref is enough for people, so I'm going to give you that solution; I will point you to the "other" version at the end, but that requires a lot more work and is domain-specific.
The macro (because it has to be a macro) you are looking for to be able to include a file at compile-time is in the standard library: std::include_str!. As the name suggests, it takes your file at compile-time and generates a &'static str from it for you to use. You are then free to do whatever you like with it (such as parsing it).
From there, it is a simple matter to then use lazy_static! to generate a static ref to our JSON Value (or whatever it may be that you decide to go for) for every part of the program to use. In your case, for instance, it could look like this:
use lazy_static::lazy_static;
use serde::{Deserialize, Serialize};

// Embedded into the binary at compile time as a &'static str.
const GAME_JSON: &str = include_str!("my/file.json");

#[derive(Serialize, Deserialize, Debug)]
struct Game {
    name: String,
}

lazy_static! {
    // Parsed once, on first access, and shared (immutably) afterwards.
    static ref GAMES: Vec<Game> = serde_json::from_str(GAME_JSON).unwrap();
}
You need to be aware of two things when doing this:
This will massively bloat your binary size, as the &str isn't compressed in any way. Consider gzipping the file and decompressing it at startup, as sketched below.
You'll need to think about the usual concerns around multiple, threaded access to the same static ref, but since it isn't mutable, only a small part of those concerns actually applies.
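If the binary-size bloat matters, here is a hedged sketch of the gzip idea; it assumes the flate2 crate and a pre-compressed data/games.json.gz next to your sources, neither of which is part of the answer above:

use flate2::read::GzDecoder;
use lazy_static::lazy_static;
use serde::Deserialize;

#[derive(Deserialize, Debug)]
struct Game {
    name: String,
}

// Embedding the compressed bytes keeps the binary smaller than embedding
// the raw JSON text.
const GAME_JSON_GZ: &[u8] = include_bytes!("data/games.json.gz");

lazy_static! {
    static ref GAMES: Vec<Game> = {
        // Decompress once, on first access, then parse as before.
        let decoder = GzDecoder::new(GAME_JSON_GZ);
        serde_json::from_reader(decoder).unwrap()
    };
}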
The other way requires dynamically generating your objects at compile time using a procedural macro. As stated, I wouldn't recommend it unless you have a really expensive startup cost when parsing that JSON; most people will not, and the last time I needed this was when dealing with deeply nested, multi-GB JSON files.
The crates you want to look out for are proc_macro2 and syn for the code generation; the rest is very similar to how you would write a normal method.
When you are deserializing something at runtime, you're essentially building some representation in program memory from another representation on disk. But at compile time there's no notion of "program memory" yet - where would this data deserialize to?
However, what you're trying to achieve is, in fact, possible. The main idea is the following: to create something in program memory, you must write some code which will create the data. What if you were able to generate that code automatically, based on the serialized data? That's what the uneval crate does (disclaimer: I'm the author, so you're encouraged to look through the source to see if you can do better).
To use this approach, you'll have to create build.rs with approximately the following content:
// somehow include the Games struct with its Serialize and Deserialize implementations

fn main() {
    let games: Games = serde_json::from_str(include_str!("data/games.json")).unwrap();
    uneval::to_out_dir(games, "games.rs");
}
And in your initialization code you'll have the following:
let data: Games = include!(concat!(env!("OUT_DIR"), "/games.rs"));
Note, however, that this might be fairly hard to do in an ergonomic way, since the necessary struct definitions now must be shared between build.rs and the crate itself, as I mentioned in the comment. It might be a little easier if you split your crate in two, keeping the struct definitions (and only them) in one crate and the logic which uses them in another. There are other ways - include! trickery, or using the fact that the build script is an ordinary Rust binary and can include other modules as well (a sketch of that follows) - but they complicate things even more.
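Here is a rough sketch of the "build script includes the same module" variant mentioned above; the src/model.rs path and the Game/Games names are made up for illustration, and the uneval call mirrors the snippet earlier in this answer:

// src/model.rs -- struct definitions shared by build.rs and the crate
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Debug)]
pub struct Game {
    pub name: String,
}

pub type Games = Vec<Game>;

// build.rs -- pull in the same definitions via #[path], so both sides agree
#[path = "src/model.rs"]
mod model;
use model::Games;

fn main() {
    // Re-run the build script whenever the data changes.
    println!("cargo:rerun-if-changed=data/games.json");
    let games: Games = serde_json::from_str(include_str!("data/games.json")).unwrap();
    uneval::to_out_dir(games, "games.rs");
}

The crate itself would then declare mod model; as usual and keep the include!(concat!(env!("OUT_DIR"), "/games.rs")) line shown above.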
from numpy import *
arr1=array([1,2,3])
arr2=arr1 #aliasing
arr3=arr1.view() #shallow copy
arr4=arr1.copy() #deep copy
id(arr1) #120638624
id(arr2) #120638624
id(arr3) #120639004
id(arr4) #123894390
I know about shallow copy and deep copy as in C/C++, but what exactly is happening in Python?
Look at the C++ code below. Is the same thing happening?
int main()
{
    int arr[] = {1, 2, 3};
    int (&a)[3] = arr;  // aliasing
    int* b = arr;       // shallow copy
    int c[3];           // deep copy
    int i;
    for (i = 0; i < 3; i++)
        c[i] = arr[i];
}
You have aliasing and deep copy right (though copying array values in a for-loop is not usually considered a good way to do it).
On the other hand, a NumPy view is not a pointer. It's a much heavier-duty thing, and a proper object instance in its own right. Conceptually, it's the closest thing to an actual pointer-to-array that exists in Python (though the semantics are of course different), and it can fulfill some of the same roles in your code. A view will never be as performant as a raw pointer, since the view needs to carry around a set of data, such as shape and strides, that may differ from that of its "parent" array.
On the other-other hand, both NumPy arrays and views wrap the __array_interface__, which in turn wraps a pointer to the underlying buffer that holds the actual data. So when you make a new view of an array, you do end up making a proper shallow copy of the underlying data, since you copy the pointer to that data (albeit through several layers of wrapping and indirection).
In C, I can define many structures and structures of structures.
Given a buffer, I can just set a structure pointer to the beginning of that buffer to say the buffer represents this structure.
Of course, I do not want to copy anything, just map it; otherwise I lose the speed benefit.
Is this possible in Node.js? How can I do it? How can I be sure it's a mapping and not creating a new object and copying the information into it?
Example:
struct House = {
uint8 door,
uint16BE kitchen,
etc...
}
var mybuff = Buffer.alloc(10, 0)
var MyHouse = new House(mybuff) // same as `House* MyHouse = (House*) mybuff`
console.log(MyHouse.door) // will display the value of door
console.log(MyHouse.kitchen) // will display the value of kitchen with the BE function.
This is wrong but explains well what I am looking for.
All of this without copying anything.
And if I do MyHouse.door = 56, mybuff then contains the 56. I consider mybuff to be a pointer.
Edit after question update below
As opposed to C/C++, JavaScript uses pointers by default, so you don't have to do anything. It's actually the other way around: you have to put in some effort if you want a copy of the current object.
In C, a struct is nothing more than a compile-time description of how the data in the struct is laid out. So:
struct X {
    int foo;
    int bar;
};
is nothing more than saying: if you want bar from a variable of type X, just add the offset of bar (here, the size of foo, an int) to the base pointer.
In JavaScript, we do not even have such a type. We can just say:
var x = {
foo: 1,
bar: 2
}
The lookup of bar will automatically be a pointer lookup (we call them references in JavaScript). Because JavaScript does not have static types, you can view an object as a map/dictionary with pointers to mixed types.
If you, for any reason, want to create a copy of a data structure, you have to iterate through the entire structure (recursively) and create the copy manually. The basic types are not pointer-based. These include number (JavaScript automatically differentiates between int and float under the hood), string and boolean.
Edit after question update
Although I am not an expert in this area, I do not think it is possible. The problem is that the underlying data representation (as in how the data is represented as bytes in memory) is different, because JavaScript does not have compile-time information about data structures. As I said before, JavaScript doesn't have classes/structs, just objects with fields, which basically behave as (and may be implemented as) maps/dictionaries.
There are, however, some third party libraries to cope with these problems. There are two general approaches:
Unpack everything to javascript objects. The data will be copied, but you can work with it as normal javascript objects. You should use this if you read/write the data intensively, because the performance increase you get when working with normal javascript objects outweighs the advantage of not having to unpack the data. Link to example library
Leave all data in the buffer. When you need some of the data, compute the location of the data in the buffer at runtime, and read/write at this location accordingly. Because the struct data location computations are done in runtime, you should use this only when you have loads of data and only a few reads/writes to it. In this case the performance decrease of unpacking all data outweighs the few runtime computations that have to be done. Link to example library
As a side note, if the amount of data you have to process isn't that large, I'd recommend just unpacking the data. It saves you the headache of having to use a library as the interface to your data; computers are fast enough nowadays to copy and process moderate amounts of data in memory. Also, these third-party libraries are just examples. I recommend you do a little more research to decide which one suits your needs.
I'm using Glium to do rendering for an emulator I'm writing. I've pieced together something that works (based on this example) but I suspect that it's pretty inefficient. Here's the relevant function:
fn update_screen(display: &Display, screen: &Rc<RefCell<NesScreen>>) {
    let target = display.draw();

    // Write screen buffer
    let borrowed_scr = screen.borrow();
    let mut buf = vec![0_u8; 256 * 240 * 3];
    buf.clone_from_slice(&borrowed_scr.screen_buffer[..]);
    let screen = RawImage2d::from_raw_rgb_reversed(buf, SCREEN_DIMENSIONS);

    glium::Texture2d::new(display, screen)
        .unwrap()
        .as_surface()
        .fill(&target, MagnifySamplerFilter::Nearest);

    target.finish().unwrap();
}
At a high level, this is what I'm doing:
Borrow NesScreen which contains the screen buffer, which is an array.
Clone the screen buffer into a vector
Create a texture from the vector data and render it
My suspicion is that cloning the entire screen buffer via clone_from_slice is really inefficient. The RawImage2d::from_raw_rgb_reversed function takes ownership of the vector passed into it, so I'm not sure how to do this in a way that avoids the clone.
So, two questions:
Is this actually inefficient? I don't have enough experience rendering stuff to know intuitively.
If so, is there a more efficient way to do this? I've scoured Glium quite a bit but there isn't much specific to 2D rendering.
This won't be a very good answer, but maybe a few things here could help you.
First of all: is this really inefficient? That's really hard to say, especially the OpenGL part, as OpenGL performance depends a lot on when synchronization is required/requested.
As for the cloning of the screen buffer: you are merely copying 180kb, which is not too much. I quickly benchmarked it on my machine and cloning a 180kb vector takes around 5µs, which is really not a lot.
Note that you can create a RawImage2d without using a method, because all fields are public. This means that you can avoid the simple 5µs clone if you create a reversed vector yourself. However, reversing the vector with the method glium uses is a lot slower than just cloning the vector; on my machine it takes 170µs for a vector of the same length. This is probably still tolerable if you just want to achieve 60fps = 17ms per frame, but still not very nice.
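For illustration, here is a hedged sketch of building the RawImage2d by hand (the field names match glium's public RawImage2d struct; the 256x240 RGB dimensions are taken from the question). If your buffer already had the row order OpenGL expects, you could instead pass Cow::Borrowed(buffer) and skip the copy entirely:

use std::borrow::Cow;
use glium::texture::{ClientFormat, RawImage2d};

const WIDTH: u32 = 256;
const HEIGHT: u32 = 240;

fn raw_image_from_buffer(buffer: &[u8]) -> RawImage2d<'_, u8> {
    // Reverse the rows ourselves instead of calling from_raw_rgb_reversed,
    // then fill in RawImage2d's public fields directly.
    let mut reversed = Vec::with_capacity(buffer.len());
    for row in buffer.chunks(WIDTH as usize * 3).rev() {
        reversed.extend_from_slice(row);
    }
    RawImage2d {
        data: Cow::Owned(reversed),
        width: WIDTH,
        height: HEIGHT,
        format: ClientFormat::U8U8U8,
    }
}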
You could think about using the correct row ordering in your original array to avoid this problem. Or you could, instead of directly copying the texture to the framebuffer, just draw a fullscreen quad (one vertex for each screen corner) with the texture on it. Sure, then you need a mesh, a shader and all that stuff, but you can "reverse" the image simply by tweaking the texture coordinates.
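Below is a hedged sketch of that fullscreen-quad approach with glium; the shader sources and vertex layout are assumptions, not code from the question, and in a real program you would create the vertex buffer and program once at startup rather than every frame:

use glium::uniforms::MagnifySamplerFilter;
use glium::{implement_vertex, uniform, Surface};

#[derive(Copy, Clone)]
struct Vertex {
    position: [f32; 2],
    tex_coords: [f32; 2],
}
implement_vertex!(Vertex, position, tex_coords);

// A fullscreen quad as a triangle strip; the image is "reversed" purely by
// the choice of tex_coords, so no per-frame row reversal is needed.
const QUAD: [Vertex; 4] = [
    Vertex { position: [-1.0, -1.0], tex_coords: [0.0, 1.0] },
    Vertex { position: [ 1.0, -1.0], tex_coords: [1.0, 1.0] },
    Vertex { position: [-1.0,  1.0], tex_coords: [0.0, 0.0] },
    Vertex { position: [ 1.0,  1.0], tex_coords: [1.0, 0.0] },
];

const VERT: &str = r#"
    #version 140
    in vec2 position;
    in vec2 tex_coords;
    out vec2 v_tex_coords;
    void main() {
        v_tex_coords = tex_coords;
        gl_Position = vec4(position, 0.0, 1.0);
    }
"#;

const FRAG: &str = r#"
    #version 140
    in vec2 v_tex_coords;
    out vec4 color;
    uniform sampler2D tex;
    void main() {
        color = texture(tex, v_tex_coords);
    }
"#;

fn draw_screen(display: &glium::Display, texture: &glium::Texture2d) {
    // Created per call here only to keep the sketch self-contained.
    let vertices = glium::VertexBuffer::new(display, &QUAD).unwrap();
    let indices = glium::index::NoIndices(glium::index::PrimitiveType::TriangleStrip);
    let program = glium::Program::from_source(display, VERT, FRAG, None).unwrap();

    let mut target = display.draw();
    target
        .draw(
            &vertices,
            &indices,
            &program,
            &uniform! {
                tex: texture.sampled().magnify_filter(MagnifySamplerFilter::Nearest)
            },
            &Default::default(),
        )
        .unwrap();
    target.finish().unwrap();
}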
Lastly, I unfortunately don't know a lot about the time the GPU takes to execute the OpenGL commands. I'd guess that it's not optimal because OpenGL doesn't have a lot of room to schedule your commands, but has to execute them right away (forced synchronization). But maybe that's not avoidable in your case.
I have a question regarding the VTK class vtkPoints. The class has functionality to insert individual points, but no functionality to remove individual points. This is inconvenient when the view needs to be updated by calling vtkPoints::Modified() to drive the graphics pipeline again and re-render the view. The obvious approach of re-initializing vtkPoints, adding all the points again and re-rendering the view is too slow and resource-intensive.
Is there a possible solution to this problem?
Thanks,
timecatcher
The example http://www.vtk.org/Wiki/VTK/Examples/Cxx/PolyData/DeletePoint has a rather simple solution: copy the points to a temporary vtkPoints, filtering out the id to remove, and shallow-copy it back to the original one:
void ReallyDeletePoint(vtkSmartPointer<vtkPoints> points, vtkIdType id)
{
  vtkSmartPointer<vtkPoints> newPoints = vtkSmartPointer<vtkPoints>::New();

  for (vtkIdType i = 0; i < points->GetNumberOfPoints(); i++)
  {
    if (i != id)
    {
      double p[3];
      points->GetPoint(i, p);
      newPoints->InsertNextPoint(p);
    }
  }

  points->ShallowCopy(newPoints);
}
There is no way to remove individual points from vtkPoints. Depending on what your problem is, here are some potential solutions:
Store all the points in a single vtkPoints instance and overwrite points you want to get rid of with a replacement value. This is useful to cap the maximum amount of memory a point cloud can use.
Store all the points in a single vtkPoints instance and overwrite points you want to get rid of with a value that is far away from your scene.
Create a vtkPoints, vtkCellArray, and vtkPolyData for each point and join them together using vtkAppendPolyData. This has a RemoveInput(vtkPolyData*) method, so you could remove individual points.
This is a way to remove a point from vtkPoints in Python:
import vtk

def deletePoint(vtk_points, *args):
    # Accept either several ids or a single list of ids.
    if isinstance(args[0], list):
        args = args[0]
    points = vtk.vtkPoints()
    for i in range(vtk_points.GetNumberOfPoints()):
        if i in args:
            continue  # skip the ids to delete
        p = vtk_points.GetPoint(i)
        points.InsertNextPoint(p)
    return points
No: it has the same limitations on mutability as a float[] array. The only way to remove a point is to copy and exclude. Note that you will incur the same copy penalty when doing Insert() operations if you exceed pre-allocated storage.
Other related data structure options include vtkCollection and vtkPolyData. Also, it might be informative to look at the source for some of the PolyData clip filters to get an idea of the way these type of operations are implemented internally - those should be about as fast as they can be within the limitations of the data structure.
Allowing a point to be deleted from vtkPoints can cause a data set that uses the point to become corrupted. You would also have to delete all cells that use that point as well as shrink the point data arrays.
I would suggest that, if you have a filter creating the vtkPoints, you modify the vtkPoints object and anything that depends on it in the RequestData() method.