Writing a byte array to local disk fails in SL 4 - security

I get a byte array from the server which I want to write to my local disk. It's a spreadsheet, and it's part of an export function.
This is what I do on the client:
Using oFileStream As New FileStream(path, FileMode.Create)
    oFileStream.Write(excel, 0, excel.Length)
End Using
Creating the new FileStream fails with the security error mentioned in the title.
By the way: I know there are several threads about this issue, but none of them resolved my problem.

You can save the byte array on the server and provide a link to it to the client. That way, your Silverlight client won't need to write anything to the local disk.
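For example, a minimal sketch of the server side, assuming an ASP.NET backend; the method name, the Exports folder and the returned URL are hypothetical and only illustrate the idea of writing the file on the server and handing the client a link to open:

// hypothetical server-side method (ASP.NET) called by the Silverlight client
public string SaveExport(byte[] excel)
{
    // requires System.IO and System.Web
    string fileName = Guid.NewGuid() + ".xlsx";
    string path = Path.Combine(HttpContext.Current.Server.MapPath("~/Exports"), fileName);

    File.WriteAllBytes(path, excel);   // write the spreadsheet on the server

    // the Silverlight client can simply navigate to this URL to download it
    return "/Exports/" + fileName;
}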

Related

how to read an incomplete file and wait for new data in nodejs

I have a UDP client that grabs some data from another source and writes it to a file on the server. Since this is a large amount of data, I don't want the end user to have to wait until it is fully written to the server before they can download it. So I made a Node.js server that grabs the latest data from the file and sends it to the user.
Here is the code:
var stream = fs.createReadStream(filename)
    .on("data", function(data) {
        response.write(data);
    });
The problem here is that if the download starts when the file is only, for example, 10 MB, the read stream will only read the file up to 10 MB. Even if two minutes later the file has grown to 100 MB, the stream will never know about the newly appended data. How can I do this in Node? I would like to somehow refresh the fs state, or perhaps wait for new data using the fs module. Or is there some kind of fs file-content watcher?
EDIT:
I think the code below better describes what I would like to achieve; however, in this code it keeps reading forever, and fs.read gives me nothing I can use to stop it:
fs.open(filename, 'r', function(err, fd) {
    var bufferSize = 1000,
        chunkSize = 512,
        buffer = new Buffer(bufferSize),
        bytesRead = 0;
    while (true) { // check if the file has new content inside
        fs.read(fd, buffer, 0, chunkSize, bytesRead);
        bytesRead += buffer.length;
    }
});
Node has a built-in method for this in the fs module. It is tagged as unstable, so it can change in the future.
It's called fs.watchFile(filename[, options], listener).
You can read more about it here: https://nodejs.org/api/fs.html#fs_fs_watchfile_filename_options_listener
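For instance, here is a minimal sketch of how fs.watchFile could be combined with fs.createReadStream to keep pushing newly appended bytes to the response. The filename and response variables are assumed from the question, the offset bookkeeping is simplified, and overlapping reads are not handled:

var fs = require('fs');

var bytesSent = 0; // how much of the file has already been streamed

function sendNewBytes(response) {
    // read only the part of the file that has not been sent yet
    var stream = fs.createReadStream(filename, { start: bytesSent });
    stream.on('data', function(chunk) {
        bytesSent += chunk.length;
        response.write(chunk);
    });
}

// send whatever exists right now, then watch the file for growth
sendNewBytes(response);
fs.watchFile(filename, { interval: 1000 }, function(curr, prev) {
    if (curr.size > prev.size) {
        sendNewBytes(response); // the file grew: stream the appended part
    }
    // a stopping condition (e.g. curr.size reaching the expected total size)
    // would go here, followed by fs.unwatchFile(filename) and response.end()
});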
But I highly suggest using one of the good, actively maintained modules instead, like
watchr:
From its readme:
Better file system watching for Node.js. Provides a normalised API over the
file watching APIs of different node versions, nested/recursive file
and directory watching, and accurate detailed events for
file/directory changes, deletions and creations.
The module page is here: https://github.com/bevry/watchr
(I've used the module in a couple of projects and it works great; I'm not affiliated with it in any other way.)
You need to store the last known size of the file somewhere, for example in a database.
Read the file size first, then load your file.
Then write a script that checks whether the file has changed: you can poll the size with jquery.post and decide in JavaScript whether you need to reload the new data.
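A rough sketch of that polling idea; the /filesize endpoint, the file parameter and the reload behaviour are made up for illustration, and the server would simply respond with the file's current size:

var lastKnownSize = 0;

setInterval(function() {
    // ask the server for the current size of the file
    $.post('/filesize', { file: 'data.bin' }, function(result) {
        var currentSize = parseInt(result, 10);
        if (currentSize > lastKnownSize) {
            lastKnownSize = currentSize;
            location.reload(); // the file grew: fetch the new data
        }
    });
}, 5000); // poll every 5 seconds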

Websockets with Streaming Archives

So this is the setup I'm working with:
I am on an express server which must stream an archived binary payload to a browser (does not matter if it is zip, tar or tar.gz - although zip would be nice).
On this server, I have a websocket open that connects to another server which is sending me binary payloads of individual files in a directory. I get these payloads streamed, piece-by-piece, as buffers, and I'm doing this serially (that is - file-by-file - there aren't multiple websockets open at one time, and there is one websocket per file). This is the websocket library I'm using: https://github.com/einaros/ws
I would like to go through each file, open a websocket, and then append the buffers to an archiver as they come through the websockets. When data is appended to the archiver, it would be nice if I could stream the output of the archiver to the browser (via the response object with response.write). So, basically, as I'm getting the payload from the websocket, I would like that payload streamed through an archiver and then to the response. :-)
Some things I have looked into:
node-zipstream - This is nice because it gives me an output stream I can pipe directly to response.write. However, it doesn't appear to support nested files/folders, and, more importantly, it only accepts an input stream. I have looked at the source code (which is quite terse and readable), and it seems as though, if I were able to have access to the update function within ZipStream.prototype.addFile, I could just call that each time on the message event when I get a binary buffer from the websocket. This is quite messy/hacky though, and, given that this library already doesn't seem to support nested files/folders, I'm not sure I will be going with it.
node-archiver - This suffers from the same issue as node-zipstream (probably because it was inspired by it) where it allows me to pipe the output, but I cannot append multiple buffers for the same file within the archive (and then manually signal when the last buffer has been appended for a given file). However, it does allow me to have nested folders, which is a clear win over node-zipstream.
Is there something I'm not aware of, or is this just a really crazy thing that I want to do?
The only alternative I see at this point is to wait for the entire payload to be streamed through a websocket and then append with node-archiver, but I really would like to reap the benefit of true streaming/archiving on-the-fly.
I've also thought about the possibility of creating a read stream of sorts just to serve as a proxy object that I can pass into node-archiver, and then just appending the buffers I get from the websocket to this read stream. Looking at various read streams, I'm not sure how to do this though. The only way I could think of was creating a write stream, piping buffers to it, and having a read stream read from that write stream. Am I on the correct thought process here?
As always, thanks for any help/direction you can offer SO community.
EDIT:
Since I just opened this question, and I'm new to node, there may be a better answer than the one I provided. I will keep this question open and accept a better answer if one presents itself within a few days. As always, I will upvote any other answers, even if they're ridiculous, as long as they're correct and allow me to stream on-the-fly as mine does.
I figured out a way to get this working with node-archiver. :-)
It was based off my hunch of creating a temporary "proxy stream" of sorts, inspired by this SO question: How to create streams from string in Node.Js?
The basic gist is (coffeescript syntax):
archive = archiver 'zip'
archive.pipe response # where response is the http response

# and then for each file...
fileName = ... # known file name
fileSize = ... # known file size
ws = ...       # create websocket
proxyStream = new Stream()
numBytesStreamed = 0

archive.append proxyStream, name: fileName

ws.on 'message', (dataBuffer) ->
    numBytesStreamed += dataBuffer.length
    proxyStream.emit 'data', dataBuffer
    if numBytesStreamed is fileSize
        proxyStream.emit 'end'
        # function/indicator to do this for the next file in the folder

# and then when you're completely done...
archive.finalize (err, bytesOfArchive) ->
    if err?
        # do whatever
    else
        # unless you somehow knew this ahead of time
        res.addTrailers 'Content-Length': bytesOfArchive
        res.end()
Note that this is not the complete solution I implemented. There is still a lot of logic dealing with getting the files, their paths, etc. Not to mention error-handling.
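For reference, the same proxy-stream idea can be written in plain JavaScript with stream.PassThrough, which is a readable/writable stream that node-archiver's append() accepts directly. This is only a sketch; the file name, file size and websocket URL are placeholders:

var archiver = require('archiver');
var WebSocket = require('ws');
var PassThrough = require('stream').PassThrough;

var archive = archiver('zip');
archive.pipe(response); // response is the http response

// ...for each file in the directory:
var fileName = 'example.txt'; // known file name
var fileSize = 12345;         // known file size
var ws = new WebSocket('ws://other-server/file'); // source of the payload

var proxyStream = new PassThrough();
var numBytesStreamed = 0;

archive.append(proxyStream, { name: fileName });

ws.on('message', function(dataBuffer) {
    numBytesStreamed += dataBuffer.length;
    proxyStream.write(dataBuffer); // forwarded into the archive entry
    if (numBytesStreamed >= fileSize) {
        proxyStream.end(); // tells the archiver this entry is finished
        // ...move on to the next file, then call archive.finalize() at the end
    }
});

Ending the PassThrough is what signals that the current entry is complete, so the next file can be appended.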

Can MessageChannel overflow

I am working on an AS3 project in FDT 6. I am using the latest Flex 4.6 and AIR 3.7.
I have a worker.swf file that is embedded into the main application to do threading work with.
I am using the MessageChannel class to pass information between the two.
In my main class I have defined
private var mainToWorker:MessageChannel;
private var workerToMain:MessageChannel;
mainToWorker = Worker.current.createMessageChannel(worker);
workerToMain = worker.createMessageChannel(Worker.current);
On the mainToWorker channel I only ever send messages. In these messages I send a byte array of information. The information is an object that contains a 'command' property and a 'props' property, basically acting like a function call: the command is a function name, and the props is an object that contains data for that function.
mainToWorkerMutex.lock();
mainToWorker.send(ByteArrayUtils.ObjectToByteArray({command:"DoSomething", props:{propA:1,propB:7}}));
mainToWorkerMutex.unlock();
The same occurs for the workerToMain var except I only send byte data that contains the 'message' and 'props' parameters.
workerToMainMutex.lock();
workerToMain.send(ByteArrayUtils.ObjectToByteArray({command:"complete", props:{return:"result"}}));
workerToMainMutex.unlock();
As a sanity check I make sure that the message channels are getting what they should.
It works fine when I build it in FDT; however, when it is built using an Ant script through Flash Builder, I sometimes get the 'command' messages coming back through on the workerToMain channel.
I am sending quite a lot of data through the message channel. Is it possible that I am overloading it and causing a buffer overflow into the other message channel somehow? How could that only be happening in FB?
I have checked my code many times and I am sure there is nothing in my own code that is sending that message back.
I had a similar issue. When sending many byte arrays over the channels, the things I received were sometimes not the things I had actually sent. I had 4 channels (message channel to worker, message channel to main, data channel to worker, data channel to main).
I noticed that the data channel to main was affecting the message channel to worker. When I turned off the data channel to main, the message channel to worker started working just fine :D...
They seem to have a real issue there with sending byte arrays.
What helped me was using a shareable ByteArray (at first it was not shareable) for communication via the channels, but only for communication: as soon as I receive such a ByteArray, I copy it to another ByteArray and parse the copy (see the sketch below).
This removed the problem (I ran quite hard stress tests on it)...
Cheers
P.S. I'm also using static functions (like your ByteArrayUtils) to create the ByteArrays used for communication, and that seems fine; I also ran tests using non-static functions.
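A sketch of that shareable-ByteArray workaround; the channel variable names and the event wiring are placeholders, and the copy-before-parse step is the important part:

import flash.utils.ByteArray;

// sending side: the ByteArray used for channel communication is marked shareable
var transport:ByteArray = new ByteArray();
transport.shareable = true;
transport.writeObject({command: "DoSomething", props: {propA: 1, propB: 7}});
mainToWorker.send(transport);

// receiving side (inside the channel's CHANNEL_MESSAGE handler):
// copy the shared bytes first, and only ever parse the copy
var received:ByteArray = channel.receive() as ByteArray;
var copy:ByteArray = new ByteArray();
copy.writeBytes(received);
copy.position = 0;
var message:Object = copy.readObject();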
So, it looks like I have found the issue. It's the ByteArray that is doing it.
ByteArray.toString() sometimes mangles your data, meaning you can't really trust it.
http://www.actionscript.org/forums/showthread.php3?t=155067
If you read the comment by "Jim Freer", he mentions how strings sometimes do this.
My solution was to switch to using a JSON-encoded string instead of ByteArray data in the message channel. The reason I was using ByteArray data to begin with is that I wanted to preserve class definition information, which JSON doesn't do.
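A sketch of how that JSON-based exchange might look, using the native JSON class that has been available since Flash Player 11; the handler wiring is assumed to match the setup in the question:

// sending side: a plain JSON string instead of a ByteArray
mainToWorker.send(JSON.stringify({command: "DoSomething", props: {propA: 1, propB: 7}}));

// receiving side (inside the worker's CHANNEL_MESSAGE handler for this channel)
var message:Object = JSON.parse(mainToWorker.receive() as String);
trace(message.command); // "DoSomething"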

Azure Web Sites - how to write to a file

I am using abcPDF to dynamically create PDFs.
I want to save these PDFs for clients to retrieve any time they want. The easiest way (and the way I do it now on my current server) is simply to save the finished PDF to the file system. That doesn't seem to be an option here, so it seems I am stuck with using blobs. Luckily abcPDF can save to a stream as well as a file. Now, how do I wire up a stream to a blob? I have found code that shows the blob taking a stream, like:
blob.UploadFromStream(theStream, options);
The abcPDF function looks like this:
theDoc.Save(theStream)
I do not know how to bridge this gap.
Thanks!
Brad
As an alternative that doesn't require holding the entire file in memory, you might try this:
using (var stream = blob.OpenWrite())
{
    theDoc.Save(stream);
}
EDIT
Adding a caveat here: if the save method requires a seekable stream, I don't think this will work.
Given the situation, and not knowing the full list of overloads of abcPDF's Save() method, it seems that you need a MemoryStream. Something like:
using (MemoryStream ms = new MemoryStream())
{
    theDoc.Save(ms);
    ms.Seek(0, SeekOrigin.Begin);
    blob.UploadFromStream(ms, options);
}
This should do the job. But if you are dealing with big files and are expecting a lot of traffic (lots of simultaneous PDF creations), you might just go for a temp file: write the PDF to a temp file, then immediately upload the temp file to the blob, as sketched below.
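A sketch of that temp-file variant, assuming the same theDoc and blob objects as above and that abcPDF's Save() accepts a file path (the question says it can save to a file); the temp-path handling is illustrative:

// requires System.IO
string tempPath = Path.GetTempFileName();
try
{
    theDoc.Save(tempPath); // write the PDF to a temporary file instead of memory

    using (var fileStream = File.OpenRead(tempPath))
    {
        blob.UploadFromStream(fileStream, options); // stream the temp file to the blob
    }
}
finally
{
    File.Delete(tempPath); // clean up the temp file
}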

How to resolve an InvalidMd5 error returned from the Windows Azure Blob Storage service?

I am building an application that needs to allow users to upload large images (up to about 100 MB) to the Windows Azure Blob Storage service. Having read Rob Gillen's excellent article on file upload optimization for Windows Azure, I borrowed his approach for doing parallel upload of file chunks, using the CloudBlockBlob.PutBlock() method within a Parallel.For loop (code is available here).
The problem I have is that whenever I try to upload a file I get an "InvalidMd5" exception from the storage client. Suspecting that the problem may be in the development storage, I also tried running the code against my live Azure storage account, but I got the same error. Looking at the traffic with Fiddler I see that the "Content-MD5" header is set to a valid MD5 hash. The description of the error says that "The MD5 value specified in the request is invalid. The MD5 value must be 128 bits and Base64-encoded.", but to the best of my knowledge the value I see being sent in Fiddler is valid (e.g. a91c588092cedbdb1b82c2d3786fd509).
Here is the code I use for calculating the hash (courtesy of Rob Gillen):
public static string GetMD5HashFromStream(byte[] data)
{
    MD5 md5 = new MD5CryptoServiceProvider();
    byte[] retVal = md5.ComputeHash(data);
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < retVal.Length; i++)
    {
        sb.Append(retVal[i].ToString("x2"));
    }
    return sb.ToString();
}
And this is the actual call to PutBlock():
blob.PutBlock(transferDetails[j].BlockId, new MemoryStream(buff), blockHash, options);
I also tried passing the hash like so:
Convert.ToBase64String(Encoding.UTF8.GetBytes(blockHash))
but the result was the same - "InvalidMd5" error :(
The MD5 hash being passed to PutBlock() with base64 encoding (e.g. YTkxYzU4ODA5MmNlZGJkYjFiODJjMmQzNzg2ZmQ1MDk=) and without it (e.g. a91c588092cedbdb1b82c2d3786fd509) doesn't seem to make a difference.
Rob's code obviously worked for him and I really have no idea what may be causing the problem in my case. The only change I've made to Rob's code is to alter the ParallelUpload() extension method to take a Stream instead of a file name and to dynamically determine the block size depending on the size of the file being uploaded.
Please, if anyone has an idea how to solve this problem, let me know! I will be really grateful. I have already lost two days struggling with this.
Rob, thank you for offering to help and pointing out the difference in the MD5 hashes. Your answer got me thinking in the right direction. I spent another whole day digging into this, but luckily (and thanks to your remark :)) I finally managed to resolve the problem. It turned out there were actually two issues in my case:
1) The MD5 hash: I noticed the hash you pasted in your answer is shorter than the one I was getting, but it took me a while to see that yours was exactly half the length of mine. After some experimentation I found out that the GetMD5HashFromStream() method from your test application converts the 16-byte hash generated by the MD5CryptoServiceProvider into a 32-character hex string. It was this 32-character string that was causing the problem, because it was then converted to Base64 and passed to the PutBlock() method, hence the twice-as-long and thus invalid hash that the blob storage service was complaining about. Here is the code I ended up with:
Original:
public static string GetMD5HashFromStream(byte[] data)
{
    MD5 md5 = new MD5CryptoServiceProvider();
    byte[] retVal = md5.ComputeHash(data);
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < retVal.Length; i++)
    {
        sb.Append(retVal[i].ToString("x2"));
    }
    return sb.ToString();
}
and the call to PutBlock():
// calculate the block-level hash
string blockHash = Helpers.GetMD5HashFromStream(buff);
blob.PutBlock(transferDetails[j].BlockId, new MemoryStream(buff), blockHash, options);
Final:
MD5 md5 = new MD5CryptoServiceProvider();
byte[] blockHash = md5.ComputeHash(buff);
string convertedHash = Convert.ToBase64String(blockHash, 0, 16);
blob.PutBlock(transferDetails[j].BlockId, new MemoryStream(buff), convertedHash, options);
Rob, I'm really curious how your code worked in your case and why it didn't in mine - is it something specific to the setup on my machine, or perhaps a differing version of the Azure tools (I'm using v1.2)... Please let me know if you have any idea.
2) A bug in the development storage: lots of combing through the web led me to this page that mentions an obscure but apparently known bug in the development storage:
If two requests attempt to upload a block to a blob that does not yet exist in development storage, one request will create the blob, and the other may return status code 409 (Conflict), with storage services error code BlobAlreadyExists.
Here is what I came up with to work around it:
public static bool IsDevelopmentStorageRunning()
{
    return new Microsoft.ServiceHosting.Tools.DevelopmentStorage.DevStore().IsRunning();
}
You will need to add a reference to Microsoft.ServiceHosting.Tools.dll, which was located in "C:\Program Files\Windows Azure SDK\v1.2\bin" on my machine. Then, I use this method before the Parallel.For loop that processes the file chunks as follows:
bool isDevStorageRunning = StorageProxy.IsDevelopmentStorageRunning();
ParallelOptions parallelOptions = new ParallelOptions();
parallelOptions.MaxDegreeOfParallelism = isDevStorageRunning ? 1 : 4;
Parallel.For(0, transferDetails.Length, parallelOptions, j => { ... });
I hope this will save someone all the hassles I went through. Rob, thank you once again for helping out :)
tishon,
After seeing this post, I went back and re-tested my code, and I'm thinking that there is a problem with the data being passed (possibly what you are passing into the function?).
One thing that jumped out at me immediately was the MD5 hash you provided... in every case I've tested, my MD5 hashes end with two equals signs, like the following (captured from Fiddler):
Content-MD5: D1Mxthoqhlwm9cC0729mWA==
I'm not a crypto expert, but I know from working with the block IDs for block blobs that if you have invalid/unsafe characters in your blob ID prior to converting it to a base64-encoded value, you'll get invalid data and block IDs that Azure can't interpret.
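To illustrate the point about the equals signs: a raw 16-byte MD5 digest always base64-encodes to a 24-character string ending in "==", so a Content-MD5 value that doesn't look like that is a strong hint that the wrong thing (for example a hex string) is being encoded. A quick check, written here only as an illustration:

using System;
using System.Security.Cryptography;
using System.Text;

class Md5Check
{
    static void Main()
    {
        byte[] data = Encoding.UTF8.GetBytes("hello world");
        byte[] digest = new MD5CryptoServiceProvider().ComputeHash(data); // always 16 bytes

        // this is the form the Content-MD5 header expects
        string contentMd5 = Convert.ToBase64String(digest);
        Console.WriteLine(contentMd5);        // 24 characters, ending in "=="
        Console.WriteLine(contentMd5.Length); // 24

        // base64-encoding the 32-character hex representation instead
        // produces a 44-character value, which the service rejects as InvalidMd5
    }
}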
