On the Android platform (API 19) I would like to copy a direct byte buffer into a RenderScript Allocation. Is it possible to improve the following code, for example by using the NDK?
final ByteBuffer buffer = ...src;
final byte[] bytes;
if (buffer.hasArray()) {
    bytes = buffer.array();
} else {
    bytes = new byte[buffer.capacity()];
    buffer.get(bytes);
    buffer.rewind();
}
allocation.copyFromUnchecked(bytes);
Unfortunately, no. The APIs are not constructed in a way that lets you provide the backing data store for the Allocation, or even retrieve an NIO-based buffer for the store the Allocation created. The closest thing you could use is a Bitmap-based Allocation created with USAGE_SHARED, so it can be synced as differences rather than as a full copy.
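A minimal sketch of that USAGE_SHARED approach, assuming an existing RenderScript context rs and ARGB_8888 pixel data (the variable names here are illustrative):

Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
Allocation allocation = Allocation.createFromBitmap(rs, bitmap,
        Allocation.MipmapControl.MIPMAP_NONE,
        Allocation.USAGE_SHARED | Allocation.USAGE_SCRIPT);

// Write the source bytes into the shared backing store via the Bitmap...
bitmap.copyPixelsFromBuffer(srcByteBuffer);

// ...then propagate the CPU-side changes instead of doing a full copy.
allocation.syncAll(Allocation.USAGE_SHARED);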
It seems you can do the following:
Prepare your fixed-size NIO ByteBuffer (heap-allocated, so that it is backed by an array)
Fill the buffer in the NDK (memcpy can be quite fast)
Use the yourAllocation.copyFromUnchecked(nioBuffer.array()) method, as sketched below
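A rough sketch of that flow; fillNative is a hypothetical JNI method that would do the memcpy, and the buffer must be heap-allocated because array() is not available on direct buffers:

final ByteBuffer nioBuffer = ByteBuffer.allocate(size); // heap buffer, so hasArray() == true
fillNative(nioBuffer.array());                          // hypothetical native method doing the memcpy
allocation.copyFromUnchecked(nioBuffer.array());        // bulk copy into the Allocation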
I hope this helps.
I have an 8k buffer of bytes. The data in the first part of the buffer is highly structured, with a set of variables of size u32 and u16. The data in the later part of the buffer is for small blobs of bytes. After the structured data, there is an array of offsets that point to the start of each small blob. Something like this:
struct MyBuffer {
    myvalue: u32,
    myothervalue: u16,
    offsets: [u16], // of unknown length
    bytes: [u8],    // fills the rest of the buffer
}
I'm looking for an efficient way to take an 8k blob of bytes fetched from disk, and then overlay it or cast it to the MyBuffer struct. In this way I can get/set the structured values easily (let myvar = my_buffer.myvalue), and I can also access the small blobs as slices (let myslice = my_buffer[offsets[2]..offsets[3]]).
The benefit of this approach is you get efficient, zero-copy access to the data.
The fact that the number of offsets and the number of blobs of bytes is unknown makes this tricky.
In C, it's easy; you just cast a pointer to the 8k buffer to the appropriate struct and it just works. You have two different data structures pointing at the same memory.
How can I do the same thing in Rust?
I have discovered that there is an entire Rust ecosystem dedicated to solving this problem. Rust itself handles it poorly.
There are many Rust serialization frameworks, including those that are "zero-copy". The best list is here: https://github.com/djkoloski/rust_serialization_benchmark
The zero-copy frameworks include abomonation, capnp, flatbuffers, rkyv, and alkahest.
I develop a WPF application which uses NLog.
When I profile it using dotMemory, I can see ~300 KB of memory used by a Dictionary which NLog creates during configuration.
I do not know what the ObjectReflectionCache and MruCache are used for and whether their memory will be freed at some point. Maybe someone can clarify the purpose of these classes and the huge capacity used for the Dictionary.
Thank you.
[Screenshot: stack trace showing how NLog creates the Dictionary]
[Screenshot: memory usage of the Dictionary]
NLog version: 4.7.2
Platform: .NET Framework 4.6.1
Current NLog config
LoggingConfiguration config = new LoggingConfiguration();
DebuggerTarget debuggerTarget = new DebuggerTarget { Name = "vs", Layout = DebuggerLayout };
DebuggerLoggingRule = new LoggingRule(nlogLoggerNamePattern, debuggerTarget);
config.LoggingRules.Add(DebuggerLoggingRule);
LogManager.Configuration = config;
Hats off to someone who cares about 300 KB. It has been a long time since I focused on overhead of this size (but it is still important).
From your screenshot it appears to be this collection:
https://github.com/NLog/NLog/blob/29879ece25a7d2e47a148fc3736ec310aee29465/src/NLog/Internal/Reflection/ObjectReflectionCache.cs#L52
The dictionary capacity is 10103 entries, which is presumably a requested capacity of 10000 rounded up to a prime by Dictionary.
I'm guessing the combined size of TKey + TValue per entry is close to 30 bytes; 10103 entries at ~30 bytes each gives the total of roughly 300 KB, even though the entries are probably unused. NLog could reduce its initial overhead by not allocating those 300 KB upfront.
The dictionary is used for caching object reflection for the object types logged with structured logging. Before NLog 4.7 the dictionary was only allocated when structured logging was actually used, but this changed with https://github.com/NLog/NLog/pull/3610
Update: the memory footprint has been reduced with NLog ver. 4.7.3.
I have an old application which uses CString throughout the code.
The maximum size of a string written to a CString is 8 or 9 characters, but I noticed that it allocates more (at least 128 bytes per CString).
Is there a way to limit the size of the CString buffer, for example to 64 bytes?
Thanks in advance,
No.
In detail:
The CString implementation is internal. You can find the code in CSimpleStringT::PrepareWrite2 and in the Reallocate function of the string manager.
PrepareWrite2 allocates the buffer. If there was no buffer before, it requests the exact size. If an existing buffer has to grow, the new buffer size is newLength*1.5.
The request is then passed to the Reallocate function of the string manager, which finally passes this size to the CRT function realloc.
Keep in mind that the memory manager itself decides again what block size is "effective" and might change the size again.
So as far as I can see (in VS 2010/VS 2013) you have no chance to change the block size. The job is finally done by realloc, and even this function passes its request on to HeapAlloc...
Does anyone know if it's more memory-efficient to use NSData.FromFile or FromStream vs filling an NSData.FromArray? My specific case is that I'm sending a large file via email (MFMailComposeViewController.AddAttachmentData). Right now I'm filling an NSData with the bytes that I want to send, but I was hoping that if I use NSData.FromFile or FromStream, it wouldn't ever keep ALL the file data in memory at once.
I think you are out of luck here. If you pass data to AddAttachmentData(), the mail composer will most probably copy the bytes and hold them in memory (you should be able to see this in Instruments). The best you can do is Dispose() your NSData as soon as you have passed it on, to release the memory as quickly as possible.
Does J2ME have something similar to the RandomAccessFile class, or is there any way to emulate this particular (random access) functionality?
The problem is this: I have a rather large binary data file (~600 KB) and would like to create a mobile application for using that data. The format of that data is homemade and contains many index blocks and data blocks. Reading the data on other platforms (like PHP or C) usually goes like this:
Read 2 bytes for index key (K), another 2 for index value (V) for the data type needed
Skip V bytes from the start of the file to seek to the file position where the data for index key K starts
Read the data
Profit :)
This happens many times during the program flow.
I'm investigating the possibility of doing the very same on J2ME, and while I admit I'm quite new to the whole Java thing, I can't seem to find anything beyond the InputStream (DataInputStream) classes, which lack the basic seek/skip-to-byte and position-reporting functions I need.
So, what are my chances?
You could do something like this:
try {
    DataInputStream di = new DataInputStream(is);
    di.mark(9999);              // remember the current position
    short key = di.readShort(); // 2 bytes: index key (K)
    short val = di.readShort(); // 2 bytes: index value (V)
    di.reset();                 // jump back to the marked position
    di.skip(val);               // seek forward to where the data starts
    byte[] b = new byte[255];
    di.read(b);                 // read the data block
} catch (Exception ex) {
    ex.printStackTrace();
}
I'd prefer not to use the mark/reset methods, though. I think it is better to store offsets relative to the current position rather than to the start of the file, so you can skip these methods entirely; I believe they have some issues on some devices.
One more note: I don't recommend opening a 600 KB file; it will crash the application on many low-end devices. You should split this file into multiple files.
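If you still need arbitrary seeks into one resource, a minimal sketch of emulating RandomAccessFile.seek() is to re-open the stream and skip to an absolute offset each time (assuming the data is bundled in the JAR; the class and resource names here are illustrative):

import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

public class SeekableResource {
    private final String resourceName;

    public SeekableResource(String resourceName) {
        this.resourceName = resourceName;
    }

    // Opens a fresh stream positioned at the given absolute offset.
    public DataInputStream openAt(long offset) throws IOException {
        InputStream is = getClass().getResourceAsStream(resourceName);
        if (is == null) {
            throw new IOException("resource not found: " + resourceName);
        }
        long remaining = offset;
        // skip() may skip fewer bytes than requested, so loop until done.
        while (remaining > 0) {
            long skipped = is.skip(remaining);
            if (skipped <= 0) {
                throw new IOException("could not seek to " + offset);
            }
            remaining -= skipped;
        }
        return new DataInputStream(is);
    }
}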