I'm new to memory mapping. What I want to do is share a mapped file between many threads; for that I create the file mapping and use MapViewOfFile so that every thread can access its own part of the file. Of course, I need to send each thread a view offset that respects the allocation granularity. But the part that I don't understand is dwFileOffsetHigh & dwFileOffsetLow.
MSDN says:
The combination of the high and low offsets must specify an offset within the file mapping.
So how can I set the values of these two parameters so that they specify the right offset? Do I need to do any calculations, or do I just use variables and the system handles the rest (finding the offset)? I'm really stuck with this, and every time I try I get an exception. So, assuming that I know the offset and the size of each view, how can I work out the values of these two parameters? An example is worth a thousand explanations. Here is an explanation of what I'm trying to do:
// The main thread creates the map file and specifies the view for every worker thread:
WorkerThreads[i] := WorkerThread.Create(...,bloc_offset,bloc_size,...); // So each worker writes in a specified view.
// The worker thread then opens its view and writes data into it:
data := MapViewOfFile(mapfileH, FILE_MAP_WRITE, dwFileOffsetHigh, dwFileOffsetLow, blocSize);
Thanks for answering.
If your file is <= 2GB in size, you can pass the desired offset to each thread as a DWORD and then each thread can assign its offset directly to dwFileOffsetLow and set dwFileOffsetHigh to 0.
pView := MapViewOfFile(hMapping, FILE_MAP_WRITE, 0, offset, size);
If your file is > 2GB in size, pass the desired offset to each thread as an Int64 or UInt64, and then use a ULARGE_INTEGER variable to break up the value into its low and high components, which can then be assigned to dwFileOffsetLow and dwFileOffsetHigh.
var
ul: ULARGE_INTEGER;
ul.QuadPart := offset;
pView := MapViewOfFile(hMapping, FILE_MAP_WRITE, ul.HighPart, ul.LowPart, size);
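Regarding the allocation-granularity part of your question: MapViewOfFile fails (typically with ERROR_MAPPED_ALIGNMENT) if the offset is not a multiple of the system allocation granularity, which could explain your exceptions. Here is a hedged worker-side sketch; hMapping, blocOffset and blocSize are hypothetical values passed in by the main thread:

var
  si: TSystemInfo;
  alignedOffset: Int64;
  delta: DWORD;
  ul: ULARGE_INTEGER;
  pView: Pointer;
  data: PAnsiChar;
begin
  GetSystemInfo(si);
  // Round the block offset down to the allocation granularity (usually 64 KB);
  // MapViewOfFile only accepts offsets that are multiples of it.
  alignedOffset := (blocOffset div si.dwAllocationGranularity) * si.dwAllocationGranularity;
  delta := DWORD(blocOffset - alignedOffset);

  ul.QuadPart := alignedOffset;
  // Map a view that starts at the aligned offset but is big enough to cover the whole block
  pView := MapViewOfFile(hMapping, FILE_MAP_WRITE, ul.HighPart, ul.LowPart, blocSize + delta);
  if pView = nil then
    RaiseLastOSError;

  data := PAnsiChar(pView) + delta; // the thread's block actually starts here
  // ... write blocSize bytes at data ...
  UnmapViewOfFile(pView); // unmap using the original view pointer, not data
end;

If the main thread only hands out offsets that are already multiples of the allocation granularity, the rounding step is unnecessary and you can map blocOffset directly.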
Is there any way to receive the alignment, in bytes, of the offset within the allocation required for a buffer with usage VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT on a given VkDevice?
If I already have such a VkBuffer, this value can be retrieved from the alignment field of the VkMemoryRequirements structure returned by a call to vkGetBufferMemoryRequirements.
But if I want to obtain this value without having created a buffer yet, do I need to create a "dummy" buffer of size 1 (specifying size 0 yields a validation error when the validation layer is enabled)?
The alignment requirement for a UBO is a device limitation: VkPhysicalDeviceLimits::minUniformBufferOffsetAlignment. The reason for this is that it applies not just to the requirement for the offset used when binding a buffer to a memory allocation, but also to any offsets used within a buffer to the start of UBO data when using that buffer as a UBO descriptor.
If I understand your question right, you're looking for an alignment for the memoryOffset parameter to vkBindBufferMemory that will be valid for any VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT VkBuffer you create later. Essentially you want the worst-case / most restrictive alignment you'll get in VkMemoryRequirements::alignment for any such buffer. Correct?
I don't think you can directly query such a worst-case alignment. VkPhysicalDeviceLimits::minUniformBufferOffsetAlignment is close, but is a lower bound on the buffer-to-memory alignment requirements, not an upper bound ([1]):
The alignment member satisfies the buffer descriptor offset alignment requirements associated with the VkBuffer’s usage:
If usage included VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT, alignment must be an integer multiple of VkPhysicalDeviceLimits::minUniformBufferOffsetAlignment.
(This means that any minUniformBufferOffsetAlignment-aligned chunk within the VkBuffer can be used for a uniform buffer descriptor. But the base offset of the VkBuffer might need to be more strongly aligned than the offsets of descriptors within it).
However, if you do create a proxy VkBuffer and query its alignment, you are guaranteed that other VkBuffers created with the same usage and flags will have the same alignment requirement:
The alignment member is identical for all VkBuffer objects created with the same combination of values for the usage and flags members in the VkBufferCreateInfo structure passed to vkCreateBuffer.
Since the buffer size can't affect alignment, it's okay to use a tiny proxy buffer like you proposed.
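A minimal sketch of that proxy-buffer query (C++, assuming a valid VkDevice named device; error handling omitted):

VkBufferCreateInfo info = {};
info.sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO;
info.size = 1;                                    // size does not influence 'alignment'
info.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT;
info.sharingMode = VK_SHARING_MODE_EXCLUSIVE;

VkBuffer proxy = VK_NULL_HANDLE;
vkCreateBuffer(device, &info, nullptr, &proxy);

VkMemoryRequirements reqs = {};
vkGetBufferMemoryRequirements(device, proxy, &reqs);
VkDeviceSize uboAlignment = reqs.alignment;       // valid for any buffer with the same usage/flags

vkDestroyBuffer(device, proxy, nullptr);

You would cache uboAlignment once at startup and use it when sub-allocating memory for later uniform buffers.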
I want to make a multithreaded download using IdHTTP (Indy). I have a main thread that starts secondary threads. Each secondary thread creates a file, "fileThreadNB", that is supposed to contain the downloaded data; the thread downloads a part of the file on the server using IdHTTP.Request.Range and writes the downloaded data into fileThreadNB. All these files (the files created by the secondary threads) are then copied into one file so I end up with the same file as on the server. The copy takes a lot of time, especially when the file on the server is big, so is there another way that lets the threads write data into the same file? To be clearer: thread 0 downloads from position 0 to m and writes into fileX from position 0 to m; ...; thread n downloads from position j to filesize-1 and writes into fileX from position j to filesize-1.
Note: the threads must write their data to the hard drive, so I can resume the download later if something bad occurs.
I tried this code instead:
procedure TSecondaryThread.Execute;
begin
HTTP.Request.Range := Format('%d-%d',[BeginPos ,BeginPos +BlockSize -1]);
File.Position:=BeginPos;
HTTP.Get(url,File);
end;
BlockSize is the same for all threads, BeginPos differs from one thread to another, and the two variables are initialised in TSecondaryThread.Create.
NB:
When I use one secondary thread, the file downloads fine, but when I use more I get this error: External: SIGSEGV, and the downloaded file is bigger than the file on the server.
File is a global variable.
I guess the problem is due to File.Position := BeginPos; but I don't know how to fix it. I would be grateful if someone could help me solve this.
As you know the file size, you can create an empty file pre-allocated to that size, then just take care that each thread writes to its own range. There should be no concurrency issues. A sketch of that idea is below.
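A minimal sketch, assuming each worker has its own TIdHTTP instance (HTTP) and its own TFileStream handle; FFileName, FUrl, FBeginPos and FBlockSize are hypothetical fields set in the thread's constructor:

// Main thread: create the target file once, pre-allocated to the full size
procedure PreAllocateFile(const FileName: string; FileSize: Int64);
var
  fs: TFileStream;
begin
  fs := TFileStream.Create(FileName, fmCreate);
  try
    fs.Size := FileSize; // reserve the whole file up front
  finally
    fs.Free;
  end;
end;

// Worker thread: open its own handle, seek to its block, and download into it
procedure TSecondaryThread.Execute;
var
  fs: TFileStream;
begin
  fs := TFileStream.Create(FFileName, fmOpenWrite or fmShareDenyNone);
  try
    fs.Position := FBeginPos; // this thread's slice starts here
    HTTP.Request.Range := Format('%d-%d', [FBeginPos, FBeginPos + FBlockSize - 1]);
    HTTP.Get(FUrl, fs);       // Indy writes the response starting at the stream's current position
  finally
    fs.Free;
  end;
end;

Because every thread uses its own file handle and its own HTTP component, nothing is shared between threads, so the global File variable (and the SIGSEGV) goes away.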
I have a volume stored as slices in c# memory. The slices may not be consecutive in memory. I want to import this data and create a vtkImageData object.
The first way I found is to use a vtkImageImporter, but this importer only accepts a single void pointer as data input it seems. Since my slices may not be consecutive in memory, I cannot hand a single pointer to my slice data.
A second option is to create the vtkImageData from scratch and use vtkImageData->GetScalarPointer() to get a pointer to its data, then fill it using a loop. This is quite costly (although memcpy could speed things up a bit). I could also combine the copy approach with vtkImageImport, of course.
Are these my only options, or is there a better way to get the data into a vtk object? I want to be sure there is no other option before I take the copy approach (performance heavy), or modify the low level storage of my slices so they become consecutive in memory.
I'm not too familiar with VTK for C# (ActiViz). In C++, a good and rather fast approach is to use vtkImageData->GetScalarPointer() and manually copy your slices. As you said, allocating all the memory first will speed things up; you may want to do it this more robust way (change the numbers):
vtkImageData * img = vtkImageData::New();
img->SetExtent(0, 255, 0, 255, 0, 9);
img->SetSpacing(sx , sy, sz);
img->SetOrigin(ox, oy, oz);
img->SetNumberOfScalarComponents(1);
img->SetScalarTypeToFloat();
img->AllocateScalars();
Then it is not too hard to do something like:
float * fp = static_cast<float *>(img->GetScalarPointer());
for (int i = 0; i < 256 * 256 * 10; i++) {
    fp[i] = mydata[i];
}
Another, fancier option is to create your own importer, basing the code on vtkImageImport.
I wish to record the microphone audio stream so I can do realtime DSP on it.
I want to do so without having to use threads and without having .read() block while it waits for new audio data.
UPDATE/ANSWER: It's a bug in Android. 4.2.2 still has the problem, but 5.01 IS FIXED! I'm not sure where the divide is but that's the story.
NOTE: Please don't say "Just use threads." Threads are fine, but this isn't about them, and the Android developers intended for AudioRecord to be fully usable without me having to specify threads and without me having to deal with a blocking read(). Thank you!
Here is what I have found:
When the AudioRecord object is initialized, it creates its own internal ring type buffer.
When .start() is called, it begins recording to said ring buffer (or whatever kind it really is.)
When .read() is called, it reads either half of bufferSize or the specified number of bytes (whichever is less) and then returns.
If there are enough audio samples in the internal buffer, then read() returns instantly with the data. If there are not enough yet, then read() waits until there are, then returns with the data.
.setRecordPositionUpdateListener() can be used to set a Listener, and .setPositionNotificationPeriod() and .setNotificationMarkerPosition() can be used to set the notification Period and Position, respectively.
However, the Listener seems to be never called unless certain requirements are met:
1: The Period or Position must be equal to bufferSize/2 or (bufferSize/2)-1.
2: A .read() must be called before the Period or Position timer starts counting - in other words, after calling .start(), also call .read(), and each time the Listener is called, call .read() again.
3: .read() must read at least half of bufferSize each time.
So using these rules I am able to get the callback/Listener working, but for some reason the reads are still blocking and I can't figure out how to get the Listener to only be called when there is a full read's worth.
If I rig up a button view to trigger a read, then I can tap it, and if I tap rapidly, read blocks. But if I wait for the audio buffer to fill, then the first tap is instant (read returns right away) but subsequent rapid taps are blocked because read() has to wait, I guess.
Greatly appreciated would be any insight on how I might make the Listener work as intended - in such a way that my listener gets called when there's enough data for read() to return instantly.
Below are the relevant parts of my code.
I have some log statements in my code which send strings to logcat which allows me to see how long each command is taking, and this is how I know that read() is blocking.
(And the buttons in my simple test app are also very slow to respond when it is reading repeatedly, but the CPU is not pegged.)
Thanks,
~Jesse
In my OnCreate():
bufferSize=AudioRecord.getMinBufferSize(samplerate,AudioFormat.CHANNEL_CONFIGURATION_MONO,AudioFormat.ENCODING_PCM_16BIT)*4;
recorder = new AudioRecord (AudioSource.MIC,samplerate,AudioFormat.CHANNEL_CONFIGURATION_MONO,AudioFormat.ENCODING_PCM_16BIT,bufferSize);
recorder.setRecordPositionUpdateListener(mRecordListener);
recorder.setPositionNotificationPeriod(bufferSize/2);
//recorder.setNotificationMarkerPosition(bufferSize/2);
audioData = new short [bufferSize];
recorder.startRecording();
samplesread=recorder.read(audioData,0,bufferSize);//This triggers it to start doing the callback.
Then here is my listener:
public OnRecordPositionUpdateListener mRecordListener = new OnRecordPositionUpdateListener()
{
public void onPeriodicNotification(AudioRecord recorder) //This one gets called every period.
{
Log.d("TimeTrack", "AAA");
samplesread=recorder.read(audioData,0,bufferSize);
Log.d("TimeTrack", "BBB");
//player.write(audioData, 0, samplesread);
//Log.d("TimeTrack", "CCC");
reads++;
}
@Override
public void onMarkerReached(AudioRecord recorder) //This one gets called only once -- when the marker is reached.
{
Log.d("TimeTrack", "AAA");
samplesread=recorder.read(audioData,0,bufferSize);
Log.d("TimeTrack", "BBB");
//player.write(audioData, 0, samplesread);
//Log.d("TimeTrack", "CCC");
}
};
UPDATE: I have tried this on Android 2.2.3, 2.3.4, and now 4.0.3, and all act the same.
Also: there are open bugs about this on code.google.com - one entry started in 2012 by someone else, and one from 2013 started by me (I didn't know about the first):
UPDATE 2016: Ahhhh, finally, after years of wondering whether it was me or Android, I have an answer! I tried my above code on 4.2.2: same problem. I tried the above code on 5.0.1, AND IT WORKS!!! And the initial .read() call is not needed anymore either. Now, once .setPositionNotificationPeriod() and .startRecording() are called, mRecordListener() just magically starts getting called every time there is data available, so it no longer blocks, because the callback is not called until enough data has been recorded. I haven't listened to the data to confirm it's recording correctly, but the callback is happening like it should, and it is not blocking the activity like it used to!
http://code.google.com/p/android/issues/detail?id=53996
http://code.google.com/p/android/issues/detail?id=25138
If folks who care about this bug log in and vote for and/or comment on the bug maybe it'll get addressed sooner by Google.
It's a late answer, but I think I know where Jesse made a mistake. His read call gets blocked because he is requesting as many shorts as the buffer size, but the buffer size is in bytes and a short contains 2 bytes. If we make the short array the same length as the buffer, we request twice as much data.
The solution is to make audioData = new short[bufferSize / 2]. If the buffer size is 1000 bytes, this way we request 500 shorts, which is 1000 bytes.
He should also change samplesread = recorder.read(audioData, 0, bufferSize) to samplesread = recorder.read(audioData, 0, audioData.length).
UPDATE
OK, Jesse, I can see another possible mistake - the positionNotificationPeriod. This value has to be large enough that the listener isn't called too often, and we need to make sure that when the listener is called, the bytes to read are ready to be collected. If the bytes aren't ready when the listener is called, the main thread gets blocked by the recorder.read(audioData, 0, audioData.length) call until the requested bytes have been collected by AudioRecord.
You should calculate the buffer size and the shorts array length based on the time interval you set - how often you want the listener to be called. The position notification period, buffer size and shorts array length all have to be adjusted consistently. Let me show you an example:
int periodInFrames = sampleRate / 10;
int bufferSize = periodInFrames * 1 * 16 / 8;
audioData = new short [bufferSize / 2];
int minBufferSize = AudioRecord.getMinBufferSize(sampleRate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
if (bufferSize < minBufferSize) bufferSize = minBufferSize;
recorder = new AudioRecord(AudioSource.MIC, sampleRate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize);
recorder.setRecordPositionUpdateListener(mRecordListener);
recorder.setPositionNotificationPeriod(periodInFrames);
recorder.startRecording();
public OnRecordPositionUpdateListener mRecordListener = new OnRecordPositionUpdateListener() {
    public void onPeriodicNotification(AudioRecord recorder) {
        samplesread = recorder.read(audioData, 0, audioData.length);
        byte[] out = short2byte(audioData);
        player.write(out, 0, out.length); // 'player' is assumed to be an AudioTrack
    }

    public void onMarkerReached(AudioRecord recorder) {
        // required by the interface; not used here
    }
};
private byte[] short2byte(short[] data) {
int dataSize = data.length;
byte[] bytes = new byte[dataSize * 2];
for (int i = 0; i < dataSize; i++) {
bytes[i * 2] = (byte) (data[i] & 0x00FF);
bytes[(i * 2) + 1] = (byte) (data[i] >> 8);
data[i] = 0;
}
return bytes;
}
So now a bit of explanation.
First we set how often the listener has to be called to collect audio data (periodInFrames). PositionNotificationPeriod is expressed in frames. The sampling rate is expressed in frames per second, so for a 44100 sampling rate we have 44100 frames per second. I divided it by 10 so the listener will be called every 4410 frames = 100 milliseconds - a reasonable time interval.
Now we calculate the buffer size based on periodInFrames so that no data gets overwritten before we collect it. The buffer size is expressed in bytes. Our time interval is 4410 frames; each frame contains one sample per channel, so we multiply by the number of channels (1 in your case). Each sample takes 1 byte for ENCODING_PCM_8BIT or 2 bytes for ENCODING_PCM_16BIT, so we multiply by the bits per sample (16 or 8) and divide by 8.
Then we set the audioData length to half of bufferSize, so we make sure that when the listener gets called, the bytes to read are already there waiting to be collected. That's because a short contains 2 bytes and bufferSize is expressed in bytes.
Then we check whether bufferSize is large enough to successfully initialize the AudioRecord object; if it isn't, we raise bufferSize to its minimal size - we don't need to change our time interval or the audioData length.
In the listener we read the data into the short array. That's why we use audioData.length instead of bufferSize - only audioData.length tells us the number of shorts the buffer holds.
I had it working some time ago so please let me know if it will work for you.
I'm not sure why you're avoiding spawning separate threads, but if it's because you don't want to have to deal with coding them properly, you can call .schedule on a Timer object after each .read, with the time interval set to the time it takes to get your buffer filled (number of samples in buffer / sampleRate). Yes, I know this uses a separate thread, but this advice was given assuming that the reason you were avoiding threads was to avoid having to code them properly.
This way, the longest time it can possibly block the thread for should be negligible. But I don't know why you'd want to do that.
If the above reason is not why you're avoiding using separate threads, may I ask why?
Also, what exactly do you mean by realtime? Do you intend to playback the affected audio using, let's say, an AudioTrack? Because the latency on most Android devices is pretty bad.
Is it possible to "wipe" strings in Delphi? Let me explain:
I am writing an application that will include a DLL to authorise users. It will read an encrypted file into an XML DOM, use the information there, and then release the DOM.
It is obvious that the unencrypted XML is still sitting in the memory of the DLL, and therefore vulnerable to examination. Now, I'm not going to go overboard in protecting this - the user could create another DLL - but I'd like to take a basic step to preventing user names from sitting in memory for ages. However, I don't think I can easily wipe the memory anyway because of references. If I traverse my DOM (which is a TNativeXML class) and find every string instance and then make it into something like "aaaaa", then will it not actually assign the new string pointer to the DOM reference, and then leave the old string sitting there in memory awaiting re-allocation? Is there a way to be sure I am killing the only and original copy?
Or is there in D2007 a means to tell it to wipe all unused memory from the heap? So I could release the DOM, and then tell it to wipe.
Or should I just get on with my next task and forget this because it is really not worth bothering.
I don't think it is worth bothering with, because if a user can read the memory of the process using the DLL, the same user can also halt the execution at any given point in time. Halting the execution before the memory is wiped will still give the user full access to the unencrypted data.
IMO any user sufficiently interested and able to do what you describe will not be seriously inconvenienced by your DLL wiping the memory.
Two general points about this:
First, this is one of those areas where "if you have to ask, you probably shouldn't be doing this." And please don't take that the wrong way; I mean no disrespect to your programming skills. It's just that writing secure, cryptographically strong software is something that either you're an expert at or you aren't. Very much in the same way that knowing "a little bit of karate" is much more dangerous than knowing no karate at all. There are a number of third-party tools for writing secure software in Delphi which have expert support available; I would strongly encourage anyone without a deep knowledge of cryptographic services in Windows, the mathematical foundations of cryptography, and experience in defeating side channel attacks to use them instead of attempting to "roll their own."
To answer your specific question: The Windows API has a number of functions which are helpful, such as CryptProtectMemory. However, this will bring a false sense of security if you encrypt your memory, but have a hole elsewhere in the system, or expose a side channel. It can be like putting a lock on your door but leaving the window open.
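For reference, a hedged Delphi sketch of how CryptProtectMemory could be declared and used from crypt32.dll (the buffer length must be a multiple of the 16-byte block size, and the data must be unprotected with the same flag before use):

const
  CRYPTPROTECTMEMORY_BLOCK_SIZE   = 16;
  CRYPTPROTECTMEMORY_SAME_PROCESS = 0;

function CryptProtectMemory(pDataIn: Pointer; cbDataIn, dwFlags: DWORD): BOOL; stdcall;
  external 'crypt32.dll';
function CryptUnprotectMemory(pDataIn: Pointer; cbDataIn, dwFlags: DWORD): BOOL; stdcall;
  external 'crypt32.dll';

// Keep the sensitive bytes in a buffer whose length is a multiple of the block size;
// protect it while idle and unprotect it only for the moment you need the plaintext.
procedure ProtectBuffer(var Buf: array of Byte);
begin
  Assert(Length(Buf) mod CRYPTPROTECTMEMORY_BLOCK_SIZE = 0);
  if not CryptProtectMemory(@Buf[0], Length(Buf), CRYPTPROTECTMEMORY_SAME_PROCESS) then
    RaiseLastOSError;
end;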
How about something like this?
procedure WipeString(const str: String);
var
i:Integer;
iSize:Integer;
pData:PChar;
begin
iSize := Length(str);
pData := PChar(str);
for i := 0 to 7 do
begin
ZeroMemory(pData, iSize);
FillMemory(pData, iSize, $FF); // 1111 1111
FillMemory(pData, iSize, $AA); // 1010 1010
FillMemory(pData, iSize, $55); // 0101 0101
ZeroMemory(pData, iSize);
end;
end;
DLLs don't own allocated memory, processes do. The memory allocated by your specific process will be discarded once the process terminates, whether the DLL hangs around (because it is in use by another process) or not.
How about decrypting the file to a stream, using a SAX processor instead of an XML DOM to do your verification and then overwriting the decrypted stream before freeing it?
If you use the FastMM memory manager in Full Debug mode, then you can force it to overwrite memory when it is being freed.
Normally that behaviour is used to detect wild pointers, but it can also be used for what you want.
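For instance (a sketch based on the standard FastMM4 distribution; check your copy of FastMM4Options.inc for the exact define names):

// In FastMM4Options.inc (FastMM4 must be the first unit in the project's .dpr uses clause):
{$define FullDebugMode}
// With FullDebugMode active, freed blocks are overwritten with a debug fill pattern,
// which also destroys whatever sensitive data they contained.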
On the other hand, make sure you understand what Craig Stuntz writes: do not write this authentication and authorization stuff yourself, use the underlying operating system whenever possible.
BTW: Hallvard Vassbotn wrote a nice blog about FastMM:
http://hallvards.blogspot.com/2007/05/use-full-fastmm-consider-donating.html
Regards,
Jeroen Pluimers
Messy, but you could make a note of the heap size you've used while the heap is filled with sensitive data; then, when that is released, do a GetMem to allocate a large chunk spanning (say) 200% of that, do a fill on that chunk, and assume that any fragmentation is unlikely to be of much use to an examiner.
Bri
How about keeping the password as a hash value in the XML and verifying by comparing the hash of the input password with the hashed password in your XML?
Edit: You can keep all the sensitive data encrypted and decrypt only at the last possible moment.
Would it be possible to load the decrypted XML into an array of char or byte rather than a string? Then there would be no copy-on-write handling, so you would be able to backfill the memory with #0's before freeing?
Be careful if assigning array of char to string, as Delphi has some smart handling here for compatibility with traditional packed array[1..x] of char.
Also, could you use ShortString?
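A sketch of the array-of-bytes idea (names are illustrative; DecryptedSize, DecryptFileInto and the DOM loading are assumed to exist elsewhere):

var
  buf: array of Byte;   // dynamic arrays are reference counted but not copy-on-write
  ms: TMemoryStream;
begin
  SetLength(buf, DecryptedSize);      // DecryptedSize: hypothetical
  DecryptFileInto(buf);               // hypothetical decryption routine
  ms := TMemoryStream.Create;
  try
    ms.WriteBuffer(buf[0], Length(buf));
    ms.Position := 0;
    // ... load the XML DOM from ms, authorise, free the DOM ...
  finally
    FillChar(buf[0], Length(buf), 0);  // wipe the only copy before releasing it
    FillChar(ms.Memory^, Length(buf), 0); // wipe the stream's copy too
    ms.Free;
    SetLength(buf, 0);
  end;
end;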
If you're using XML, even encrypted, to store passwords, you're putting your users at risk. A better approach is to store the hash values of the passwords instead, and then compare the hash against the entered password. The advantage of this approach is that even knowing the hash value, you won't know the password which produces that hash. Adding a brute-force check (count invalid password attempts, and lock the account after a certain number) will increase security even further.
There are several methods you can use to create a hash of a string. A good starting point would be to look at the TurboPower open source project "LockBox"; I believe it has several examples of creating one-way hash keys.
EDIT
But how does knowing the hash value help if it's one-way? If you're really paranoid, you can modify the hash value by something predictable that only you would know... say, a random number using a specific seed value plus the date. You could then store only enough of the hash in your XML to use it as a starting point for comparison. The nice thing about pseudo-random number generators is that they always generate the same series of "random" numbers given the same seed.
Be careful of functions that try to treat a string as a pointer, and try to use FillChar or ZeroMemory to wipe the string contents.
this is both wrong (strings are shared; you're screwing someone else who's currently using the string)
and can cause an access violation (if the string happens to have been a constant, it is sitting on a read-only data page in the process address space; and trying to write to it is an access violation)
procedure BurnString(var s: UnicodeString); overload;
begin
{
If the string is actually constant (reference count of -1), then any attempt to burn it will be
an access violation; as the memory is sitting in a read-only data page.
But Delphi provides no supported way to get the reference count of a string.
It's also an issue if someone else is currently using the string (i.e. Reference Count > 1).
If the string were only referenced by the caller (with a reference count of 1), then
our function here, which received the string through a var reference would also have the string with
a reference count of one.
Either way, we can only burn the string if there's no other reference.
The use of UniqueString, while counter-intuitive, is the best approach.
If you pass an unencrypted password to BurnString as a var parameter, and there were another reference,
the string would still contain the password on exit. You can argue that what's the point of making a *copy*
of a string only to burn the copy. Two things:
- if you're debugging it, the string you passed will now be burned (i.e. your local variable will be empty)
- most of the time the RefCount will be 1. When RefCount is one, UniqueString does nothing, so we *are* burning
the only string
}
if Length(s) > 0 then
begin
System.UniqueString(s); //ensure the passed in string has a reference count of one
ZeroMemory(Pointer(s), System.Length(s)*SizeOf(WideChar));
{
By not calling UniqueString, we only save on a memory allocation and wipe if RefCnt <> 1
It's an unsafe micro-optimization because we're using undocumented offsets to reference counts.
And I'm really uncomfortable using it because it really is undocumented.
It is absolutely a given that it won't change. And we'd have stopped using Delphi long before
it changes. But I just can't do it.
}
//if PLongInt(PByte(S) - 8)^ = 1 then //RefCnt=1
// ZeroMemory(Pointer(s), System.Length(s)*SizeOf(WideChar));
s := ''; //We want the callee to see their passed string come back as empty (even if it was shared with other variables)
end;
end;
Once you have the UnicodeString version, you can create the AnsiString and WideString versions:
procedure BurnString(var s: AnsiString); overload;
begin
if Length(s) > 0 then
begin
System.UniqueString(s);
ZeroMemory(Pointer(s), System.Length(s)*SizeOf(AnsiChar));
//if PLongInt(PByte(S) - 8)^ = 1 then //RefCount=1
// ZeroMemory(Pointer(s), System.Length(s)*SizeOf(AnsiChar));
s := '';
end;
end;
procedure BurnString(var s: WideString); overload;
begin
//WideStrings (i.e. COM BSTRs) are not reference counted, but they are modifiable
if Length(s) > 0 then
begin
ZeroMemory(Pointer(s), System.Length(s)*SizeOf(WideChar));
//if PLongInt(PByte(S) - 8)^ = 1 then //RefCount=1
// ZeroMemory(Pointer(s), System.Length(s)*SizeOf(WideChar));
s := '';
end;
end;