I have timeseries sensor data (temperature, vibration) in .rec files, and I am looking for a way to import them using Python. While searching for a solution, I read that .rec files usually contain images (medical images). In my case it is time series data. There are tools such as MXNet and MNE, but they use .idx files along with the .rec files to iterate. Also, their examples are limited to images.
Currently, what I am doing is changing the extension to .csv and importing with pandas (delimiter '\t'). It works as intended, but I am looking for a clean solution that lets me read, and in the best case iterate over, .rec files directly.
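For what it's worth, pandas does not look at the file extension at all, so the rename step can be dropped; a minimal sketch, assuming the .rec files are plain tab-separated text as described (sensor_data.rec is a hypothetical filename):

```python
import pandas as pd

# pandas infers nothing from the file extension, so a .rec file containing
# tab-separated text can be read directly, without renaming it to .csv.
df = pd.read_csv("sensor_data.rec", sep="\t")

# chunksize turns the reader into an iterator of DataFrames, which gives
# iteration over large files without loading everything at once.
for chunk in pd.read_csv("sensor_data.rec", sep="\t", chunksize=10_000):
    print(chunk.shape)  # replace with real per-chunk processing
```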
I have a folder which includes a lot of subfolders and files. I want to combine all those files into one single large file. That file should be expandable, reproducing the original folder and files exactly.
Another requirement is that the method should produce exactly the same output (single large file) across different platforms (Node.js, Android, iOS). I've tried the ZIP utility's store mode; it does produce one file combining all input files without compressing them, which is good. However, when I try it on Node.js and on the Windows 7-Zip software (ZIP format, store mode), the outputs are not exactly the same: the two large files' sizes differ slightly and of course they have different MD5 hashes. Though both can be expanded back into identical files, the single files don't meet my requirement.
Another option I tried is the tar file format. Node.js and 7-Zip produce different output there as well.
Do you know anything I am missing about ZIP store mode or tar files, e.g. specific versions or a customized ZIP utility?
Or could you suggest another method to accomplish this?
I need a way to combine files that follows exactly the same protocol across the Node.js, Android, and iOS platforms.
Thank you.
The problem is your requirement. You should only require that the files and directory structure be exactly reconstructed after extraction, not that the archive itself be byte-for-byte identical. Instead of running MD5 on the archive, run it on the reconstructed files.
There is no way to assure the same zip result using different compressors, or different versions of the same compressor, or the same version of the same code with different settings. If you do not have complete control of the code creating and compressing the data, e.g., by virtue of having written it yourself and assuring portability across platforms, then you cannot guarantee that the archive files will be the same.
More importantly, there is no need to have that guarantee. If you want to assure the integrity of the transfer, check the result of extraction, not the intermediate archive file. Then your check is even better than checking the archive, since you are then also verifying that there were no bugs in the construction and extraction processes.
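To make that concrete, here is a minimal Python sketch of that idea: walk the reconstructed tree in a deterministic order and fold every relative path and file body into a single digest (MD5 only because it is what the question uses). Equal digests on two platforms then mean the extracted trees are identical, regardless of which archiver produced the intermediate file.

```python
import hashlib
import os

def tree_digest(root: str) -> str:
    """Hash relative paths and file contents in sorted (deterministic) order."""
    h = hashlib.md5()
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames.sort()  # fix the traversal order of subdirectories
        for name in sorted(filenames):
            full = os.path.join(dirpath, name)
            rel = os.path.relpath(full, root)
            h.update(rel.replace(os.sep, "/").encode("utf-8"))
            with open(full, "rb") as f:
                for block in iter(lambda: f.read(65536), b""):
                    h.update(block)
    return h.hexdigest()

print(tree_digest("extracted/"))  # hypothetical extraction directory
```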
I have several images that I'm querying from a database. All of those files are .png.
My current process involves downloading each image and then aligning them next to each other with no space in between. It would be possible to stack them instead if that makes the process easier.
I am now looking for a way to take the images and save the result as a single picture, as if they were a grouped item in Excel.
Is there a way to accomplish this? I have looked around and found something that did this with TIFF, but that's not exactly what I have here, and I wasn't able to adapt the code.
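A minimal sketch of the horizontal concatenation step using Pillow, assuming the PNGs have already been downloaded to local paths (the filenames here are hypothetical):

```python
from PIL import Image

paths = ["img1.png", "img2.png", "img3.png"]  # hypothetical local files
images = [Image.open(p) for p in paths]

# Canvas wide enough for all images side by side, as tall as the tallest one.
total_width = sum(im.width for im in images)
max_height = max(im.height for im in images)
combined = Image.new("RGBA", (total_width, max_height))

x = 0
for im in images:
    combined.paste(im, (x, 0))  # no gap between images
    x += im.width

combined.save("combined.png")
```

Stacking vertically is the same idea with the roles of width and height swapped.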
The Preview application on the Mac allows one to merge multiple PDF files, although the functionality is rather obscure. I'm writing a utility in Haskell that needs to perform a similar task, that is, merge an arbitrary number of PDF files into one new file.
Does anyone have a suggestion as to where to start with this? Obviously if there's a library on Hackage that will do most of the work out of the box that would be ideal, but if not, then some pointers about where to start would be very much appreciated.
I'm working on a PDF library that supports parsing and generation. It is low level; higher-level tools are still on the todo list (because it is hard to design a good high-level API).
Here is an example of unpacking and decrypting a PDF file. It is easy to implement PDF merging, but you need to be familiar with PDF internals.
ADDED:
I created a basic example of merging PDF files in Haskell. It is 150 lines of code total, but it lacks a few features (see the comments at the top of the file). They are easy to add, so let me know if you are interested.
The PDF file format isn't that complicated. Adobe has an official specification document for it somewhere. Essentially a PDF file contains a set of numbered "objects". You'd have to get all the objects from each PDF file, renumber them so they're unique, and then you need to fiddle with the page index so all the pages actually show up.
There appear to be a couple of packages on Hackage for writing PDF files, but I don't see much for reading them. You may like to look at the source code for pdfsplit for ideas. Also HPDF.
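The question is about Haskell, but to show how small the merge step becomes once a library handles the renumbering bookkeeping described above, here is the same task sketched in Python with PyPDF2, purely for illustration (file names are hypothetical):

```python
from PyPDF2 import PdfMerger

# PdfMerger renumbers each file's objects and rebuilds the page tree,
# which is exactly the fiddly work described above.
merger = PdfMerger()
for path in ["a.pdf", "b.pdf", "c.pdf"]:
    merger.append(path)
merger.write("merged.pdf")
merger.close()
```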
I have some data stored in different Microsoft Excel worksheets (.xlsx files).
Now I want to plot a graph using these data (which are in different .xlsx files). How can I do this? I.e., which language or platform should I use? Any other help related to that is welcome too.
You can use Java APIs to create the graph if you are familiar with Java: Apache POI, JFreeChart, or the openxmlformats APIs (provided by Apache).
Matlab has the built-in function xlsread which parses data from Excel files. Depending on how the files are organized, writing some code to read them all should be easy, and concatenating the matrices of data and plotting them is also pretty easy.
If you use Save As to save them as CSV files, you can read them into Java very easily and split the values with String.split(","). If you want to keep them as .xlsx files, JExcelApi might be able to help, though personally I've never used it. As for graphing, I'd recommend JavaPlot/gnuplot, Jzy3D, or JMathPlot, depending on what you want to graph. They're free and relatively easy to use.
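If the choice of language is still open, the whole task is also a few lines of Python; a minimal sketch assuming pandas, openpyxl, and matplotlib are installed, and that each workbook has columns named x and y (hypothetical names):

```python
import glob

import matplotlib.pyplot as plt
import pandas as pd

# Read every workbook in the folder and plot its first sheet's x/y columns.
for path in glob.glob("data/*.xlsx"):
    df = pd.read_excel(path)  # uses openpyxl for .xlsx under the hood
    plt.plot(df["x"], df["y"], label=path)

plt.legend()
plt.show()
```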
I am looking for a way to automatically extract parts from audio files, something like ImageMagick for audio files.
I only need to extract random parts of a fixed length from a large set of complete Ogg Vorbis files. I know how to automatically interpret a program's output, so I would be able to write a small script if I had programs to do the following:
Get the length of the file
Extract a part of the file, given an offset in seconds and a length
Is there any program which allows me to do this under Linux? The files I am using are Ogg Vorbis files.
If there is a Python library which is able to do this, that would work as well.
You can use SoX (Sound eXchange) to do both.
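Since the question mentions Python as acceptable, here is a minimal sketch that shells out to SoX: soxi -D prints a file's duration in seconds, and the trim effect cuts out a part given a start offset and a length, both in seconds (the file paths are hypothetical):

```python
import random
import subprocess

def duration_seconds(path: str) -> float:
    # soxi -D prints the file's duration in seconds
    out = subprocess.check_output(["soxi", "-D", path])
    return float(out.decode().strip())

def extract_random_part(src: str, dst: str, length: float) -> None:
    # pick a random start so that the extracted part fits inside the file
    start = random.uniform(0.0, max(0.0, duration_seconds(src) - length))
    # sox's trim effect takes a start offset and a length, in seconds
    subprocess.run(["sox", src, dst, "trim", str(start), str(length)],
                   check=True)

extract_random_part("input.ogg", "part.ogg", 10.0)
```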