We are going to use BigCouch to serve images. The max image size is ~5MB, but the max document size config value is 64MB. Should we lower it to something closer to 5MB? What is the reason behind this setting?
Thanks,
Kathleen
You should not change it. I'm pretty sure that value excludes attachments.
The max document size is just for the JSON part of the document (not your image attachments), and it prevents the server from using too much memory to hold an oversized document.
Also, you may ignore @ajreal; he seems to have no idea what he is saying.
Assuming you stream the attachment in as standalone (i.e., PUT /dbname/docid/attachment_name), you won't hit the max_document_size limit. Jason is correct that it only limits the size of a JSON body PUT. I also second his opinion that ajreal is talking crap.
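For instance, a minimal C# sketch of such a standalone attachment PUT against BigCouch/CouchDB (the host, database, document id, attachment name and revision below are placeholders):

using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

static class AttachmentUpload
{
    // Streams an image to BigCouch/CouchDB as a standalone attachment
    // (PUT /dbname/docid/attachment_name?rev=...), so the JSON document body
    // stays tiny and max_document_size never comes into play.
    public static async Task UploadImageAsync(string imagePath, string currentRev)
    {
        using var client = new HttpClient { BaseAddress = new Uri("http://localhost:5984") };
        using var image = File.OpenRead(imagePath);

        var content = new StreamContent(image); // streamed, not buffered in memory
        content.Headers.ContentType = new MediaTypeHeaderValue("image/jpeg");

        var response = await client.PutAsync(
            $"/images/doc-123/photo.jpg?rev={currentRev}", content);
        response.EnsureSuccessStatusCode();
    }
}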
I don't want the length (document count) of the collection; I need the size in bytes, which I then want to fetch with a GET request and use on my client side... Can anyone help?
You could use the .stats() method.
If you want the size of your collection in bytes,
use db.collection.dataSize().
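If you need that same number from C# (for example, to expose it behind your own GET endpoint), a rough sketch using the official MongoDB .NET driver might look like the following; the connection string, database and collection names are placeholders:

using MongoDB.Bson;
using MongoDB.Driver;

class CollectionSizeExample
{
    static void Main()
    {
        var client = new MongoClient("mongodb://localhost:27017");
        var db = client.GetDatabase("mydb");

        // collStats is the server command behind db.collection.stats() in the shell;
        // its "size" field is the data size in bytes (what dataSize() returns).
        var stats = db.RunCommand<BsonDocument>(new BsonDocument("collStats", "mycollection"));
        long sizeInBytes = stats["size"].ToInt64();

        System.Console.WriteLine($"Collection data size: {sizeInBytes} bytes");
    }
}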
Is there a C# way to check whether an ISO file is valid, i.e. in a valid ISO format? Is there any other check possible?
The scenario is: a text file (or a file of any other format) is renamed to .iso and handed over for further processing. I want to check whether this ISO file is a valid ISO file or not. Is there any way to do this programmatically, e.g. by checking a property of the file, the file header, or anything else?
Thanks for any reply in advance.
To quote the wiki gods:
There is no standard definition for ISO image files. ISO disc images are uncompressed and do not use a particular container format; they are a sector-by-sector copy of the data on an optical disc, stored inside a binary file. ISO images are expected to contain the binary image of an optical media file system (usually ISO 9660 and its extensions or UDF), including the data in its files in binary format, copied exactly as they were stored on the disc. The data inside the ISO image will be structured according to the file system that was used on the optical disc from which it was created.
reference
So you basically want to detect whether a file is an ISO file, rather than check whether it's valid (e.g. incomplete, corrupted, ...)?
There's no easy way to do that and there certainly is not a C# function (that I know of) that can do this.
The best way to approach this is to guess the number of bytes per block stored in the ISO.
Guess, or simply try all possible layouts one by one, unless you have an associated CUE file that actually stores this information. PS: if the ISO is accompanied by a same-name .CUE file, you can be 99.99% sure that it's an ISO file anyway.
Block sizes would be 2048 (user data) or 2352 (raw or audio) bytes per block. Other sizes are possible as well; I just mentioned the two most common ones. In the case of 2352 bytes per block, the user data starts at an offset within the block, usually 16 or 24 depending on the mode.
Next I would try to detect the CD/DVD file systems. Assume that the image starts at sector 0 (although, to be safe, you could implement a scan that checks sectors -150 to 16, for instance).
You'll need to look into the specifics of ISO 9660 and UDF for that. Sectors 16, 256 etc. will be interesting sectors to check.
Bottom line: it's not an easy task, and you will need to familiarize yourself with optical disc layouts and optical disc file systems (ISO 9660, UDF, but possibly also HFS and even FAT on BD).
If you're digging into this, I strongly suggest getting IsoBuster (www.isobuster.com) to help you see what the size per block is, what file systems there are, and to inspect the different key blocks.
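As a starting point, here is a rough C# sketch of the kind of signature probing described above: it looks for the ISO 9660 "CD001" marker in the volume descriptor at sector 16, trying the two common block sizes and user-data offsets. It is a heuristic, not validation, and it will miss UDF-only or HFS-only images.

using System;
using System.IO;
using System.Text;

static class IsoProbe
{
    // Heuristic: check for the ISO 9660 "CD001" signature at sector 16,
    // trying 2048-byte user-data blocks and 2352-byte raw blocks
    // (where user data typically starts 16 or 24 bytes into the block).
    public static bool LooksLikeIso9660(string path)
    {
        var layouts = new (int BlockSize, int DataOffset)[] { (2048, 0), (2352, 16), (2352, 24) };

        using var fs = File.OpenRead(path);
        foreach (var (blockSize, dataOffset) in layouts)
        {
            // Byte 0 of the volume descriptor is its type; "CD001" follows at byte 1.
            long pos = 16L * blockSize + dataOffset + 1;
            if (pos + 5 > fs.Length) continue;

            fs.Position = pos;
            var buf = new byte[5];
            if (fs.Read(buf, 0, 5) == 5 && Encoding.ASCII.GetString(buf) == "CD001")
                return true;
        }
        return false; // could still be UDF-only, HFS, audio, etc.
    }
}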
In addition to the answers above (and especially @peter's answer): I recently made a very simple Python tool for detecting truncated/incomplete ISO images. It's definitely not validation (which, as @Jake1164 correctly points out, is impossible), but it may nevertheless be useful for some scenarios. It also supports ISO images that contain Apple (HFS) partitions. For more details see the following blog post:
Detecting broken ISO images: introducing Isolyzer
And the software's Github repo is here:
Isolyzer
You can run the md5sum command to check the integrity of an image.
For example, here's a list of ISOs: http://mirrors.usc.edu/pub/linux/distributions/centos/5.4/isos/x86_64/
You may run:
md5sum CentOS-5.4-x86_64-LiveCD.iso
The output should match 1805b320aba665db3e8b1fe5bd5a14cc, which you can find here:
http://mirrors.usc.edu/pub/linux/distributions/centos/5.4/isos/x86_64/md5sum.txt
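If you would rather do this check from C# instead of the command line, here is a small sketch; the file name and expected hash are whatever the mirror publishes:

using System;
using System.IO;
using System.Security.Cryptography;

static class IsoChecksum
{
    // Computes the MD5 of a file and compares it to a published checksum string.
    public static bool MatchesMd5(string path, string expectedHex)
    {
        using var md5 = MD5.Create();
        using var fs = File.OpenRead(path);
        byte[] hash = md5.ComputeHash(fs);
        // Convert.ToHexString needs .NET 5+; use BitConverter on older runtimes.
        string actualHex = Convert.ToHexString(hash).ToLowerInvariant();
        return actualHex == expectedHex.ToLowerInvariant();
    }
}

// Usage, e.g.:
// bool ok = IsoChecksum.MatchesMd5("CentOS-5.4-x86_64-LiveCD.iso",
//                                  "1805b320aba665db3e8b1fe5bd5a14cc");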
I would like to know if it is possible to send a block of data, like 128 bytes (a Motorola SREC record, for example), to a Xively server. I need this to do firmware upgrades / download images to my Arduino-connected device. As far as I can see, one can only get datapoints/values?
A value of a datapoint can be a string. Firmware updates can be implemented using Xively API V2 by just storing string-encoded binaries as datapoints, provided that the size is small.
You can probably make some use of timestamps for rolling back to versions that did work, or something similar. Also, you probably want to use the datapoints endpoint so you can just grab the entire response body without needing to parse anything:
/v2/feeds/<feed_id>/datastreams/<datastream_id>/datapoints/<timestamp>.csv
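For what it's worth, a hedged C# sketch of reading such a datapoint back over HTTP (the host, feed id, datastream id, timestamp and API key below are placeholders; on the device itself you would do this with raw sockets in the bootloader, as described next):

using System;
using System.Net.Http;
using System.Threading.Tasks;

static class XivelyFirmwareFetch
{
    // Fetches one datapoint (the string-encoded firmware) as CSV
    // from the endpoint shown above.
    public static async Task<string> GetFirmwareAsync(string apiKey, string feedId,
                                                      string datastreamId, string timestamp)
    {
        using var client = new HttpClient { BaseAddress = new Uri("https://api.xively.com") };
        client.DefaultRequestHeaders.Add("X-ApiKey", apiKey); // Xively API key header
        string path = $"/v2/feeds/{feedId}/datastreams/{datastreamId}/datapoints/{timestamp}.csv";
        return await client.GetStringAsync(path); // response body is the datapoint in CSV form
    }
}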
I suppose you will need to implement this in the bootloader, which needs to be very small, so maybe you can actually skip parsing the HTTP headers and only attempt to verify whether the body looks right (i.e. it has some magic byte that you put in there; you could also try to checksum it). This would be a little opportunistic, but might be okay for an experiment. You should probably also add Xively device provisioning to this, but I wouldn't try implementing everything right away.
It is, however, quite challenging to implement reliable firmware updates, and there are several papers out there which you should read. Some suggest making the device's behaviour as primitive as you can, avoiding any logic and making it rely on what the server tells it to do.
To actually store the firmware string, you can use cURL:
Add the first version to a new datastream
Update it with a new version
Is it possible to download only the last 30 seconds of an mp3? Or is it necessary to download the whole thing and crop it after the fact? I would be downloading via http, i.e. I have the URL of the file but that's it.
No, it is not possible... at least not without knowing some more information first.
The real problem here is determining at what byte offset the last 30 seconds is. This is a product of knowing:
Sample Rate
Bit Depth (per sample)
# of Channels
CBR or VBR
Bit Rate
Even then, you're not going to get that with a VBR MP3 file, and even with CBR, who knows how big the ID3 and other crap at the beginning of the file is. Even if you know all of that, there is still some variability, as you have the problem of the bit reservoir.
The only way to know would be to download the whole file and use a tool such as FFMPEG to find out the right offset. Then if you want to play it, you'll want to add the appropriate headers, and make sure you are trimming on an eligible frame, or fix the bit reservoir yourself.
Now, if this could all be figured out server-side ahead of time, then yes, you could request the offset from the server and then download from there; since you have the file's URL, an HTTP range request would let you fetch only that tail.
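A sketch of that last step, assuming the server supports Range requests and the starting byte offset has already been worked out (for a 128 kbps CBR stream, 30 seconds is roughly 128000 / 8 × 30 = 480,000 bytes before the end of the audio data, ignoring tags and the bit reservoir):

using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

static class Mp3Tail
{
    // Downloads everything from startOffset to the end of the file
    // using an HTTP Range request ("bytes=startOffset-").
    public static async Task DownloadTailAsync(string url, long startOffset, string outPath)
    {
        using var client = new HttpClient();
        using var request = new HttpRequestMessage(HttpMethod.Get, url);
        request.Headers.Range = new RangeHeaderValue(startOffset, null);

        using var response = await client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead);
        response.EnsureSuccessStatusCode(); // expect 206 Partial Content

        using var output = File.Create(outPath);
        await response.Content.CopyToAsync(output);
    }
}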
I've built a WSSv3 application which uploads files in small chunks; as each piece of data arrives, I temporarily keep it in a SQL Server 2005 image column, for performance reasons**.
The problem comes when the upload ends; I need to move the data from SQL Server to the SharePoint document library through the WSSv3 object model.
Right now, I can think of two approaches:
folder.Files.Add("filename", (byte[])reader[0]); // OutOfMemoryException
and
SPFile file = folder.Files.Add("filename", new byte[]{ });
using (Stream stream = file.OpenBinaryStream())
{
    // ... init vars and stuff ...
    while ((bytes = reader.GetBytes(0, offset, buffer, 0, BUFFER_SIZE)) > 0)
    {
        stream.Write(buffer, 0, (int)bytes); // Timeout issues
    }
    file.SaveBinary(stream);
}
Is there any other way to complete this task successfully?
** Performance reasons: if you try to write every chunk directly to SharePoint, you'll notice performance degradation as the file grows (>100 MB).
I ended up with the following code:
myFolder.Files.Add("filename",
    new DataRecordStream(dataReader,
        dataReader.GetOrdinal("Content"), length));
You can find the DataRecordStream implementation here. It's basically a Stream that reads its data from a DbDataRecord through .GetBytes.
This approach is similar to OpenBinaryStream()/SaveBinary(stream), but it doesn't keep the whole byte[] in memory while you transfer the data. At some point, DataRecordStream will be read by Microsoft.SharePoint.SPFile.CloneStreamToSPFileStream using 64k chunks.
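For readers who don't want to follow the link, a minimal sketch of what such a read-only stream over IDataRecord.GetBytes might look like (an illustrative reconstruction, not the linked implementation):

using System;
using System.Data;
using System.IO;

// Read-only, forward-only Stream over one binary column of an IDataRecord,
// assuming the total content length is known up front.
class DataRecordStream : Stream
{
    private readonly IDataRecord _record;
    private readonly int _ordinal;
    private readonly long _length;
    private long _position;

    public DataRecordStream(IDataRecord record, int ordinal, long length)
    {
        _record = record;
        _ordinal = ordinal;
        _length = length;
    }

    public override bool CanRead => true;
    public override bool CanSeek => false;
    public override bool CanWrite => false;
    public override long Length => _length;
    public override long Position { get => _position; set => throw new NotSupportedException(); }

    public override int Read(byte[] buffer, int offset, int count)
    {
        // GetBytes copies up to 'count' bytes starting at the current field offset.
        long read = _record.GetBytes(_ordinal, _position, buffer, offset, count);
        _position += read;
        return (int)read;
    }

    public override void Flush() { }
    public override long Seek(long offset, SeekOrigin origin) => throw new NotSupportedException();
    public override void SetLength(long value) => throw new NotSupportedException();
    public override void Write(byte[] buffer, int offset, int count) => throw new NotSupportedException();
}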
Thank you all for the valuable info!
The first thing I would say is that SharePoint is really, really not designed for this. It stores all files in its own database so that's where these large files are going. This is not a good idea for lots of reasons: scalability, cost, backup/restore, performance, etc... So I strongly recommend using file shares instead.
You can increase the timeout of the web request by changing the executionTimeout attribute of the httpRuntime element in web.config.
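For example, a minimal web.config fragment (the timeout is in seconds; the value here is only a placeholder):

<configuration>
  <system.web>
    <httpRuntime executionTimeout="3600" />
  </system.web>
</configuration>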
Apart from that, I'm not sure what else to suggest. I haven't heard of such large files being stored in SharePoint. If you absolutely must do this, try also asking on Server Fault.
As mentioned previously, storing large files in Sharepoint is generally a bad idea. See this article for more information: http://blogs.msdn.com/joelo/archive/2007/11/08/what-not-to-store-in-sharepoint.aspx
With that said, it is possible to use external storage for BLOBs, which may or may not help your performance issues -- Microsoft released a half-complete external BLOB storage provider that does the trick, but it unfortunately works at the farm level and affects all uploads. Ick.
Fortunately, since you can implement your own external BLOB provider, you may be able to write something to better handle these specific files. See this article for details: http://207.46.16.252/en-us/magazine/2009.06.insidesharepoint.aspx
Whether or not this would be worth the overhead depends on how much of a problem you're having. :)