LabVIEW data writing in a TDMS file [closed]

I want to acquire pressure data from a pressure sensor. When I write the data using the "Write To Measurement File" express VI, only part of the data is saved and the rest is lost.
I also tried exporting the data to Excel from the waveform chart, but I get a message saying there is not enough memory.
What should I do to save all of the data without losing any?
Is there any way to save the data directly to the hard drive?
Thanks

I don't like the Express VIs myself. Recording directly to a TDMS file is the way to go. There are great examples of writing to a TDMS file in the LabVIEW examples. I think the TDMS API is pretty intuitive as well; look at the TDMS palette under the File I/O palette. All you really need for this is to open, write, then close.
TDMS files are incredibly fast to write to and allow multiple groupings and metadata at multiple levels. I use TDMS all the time in my applications and never lose any data.
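The TDMS VIs themselves are graphical, so they can't be shown as text here, but the same open → write → close streaming pattern can be sketched outside LabVIEW. Below is a minimal sketch using the third-party Python package npTDMS; the group name, channel name, properties, and file path are all placeholders, not anything from the original question.

    # Minimal sketch of the open -> write -> close TDMS pattern using the
    # third-party Python package npTDMS (pip install npTDMS).
    # Group/channel names, properties, and the file path are placeholders.
    import numpy as np
    from nptdms import TdmsWriter, ChannelObject

    # Open the file once, stream several acquisition blocks as segments, then close.
    with TdmsWriter("pressure_log.tdms") as writer:
        for _ in range(3):
            block = np.random.rand(1000)          # stand-in for one acquired block of samples
            channel = ChannelObject("PressureTest", "Sensor1", block,
                                    properties={"unit": "psi"})
            writer.write_segment([channel])       # each block is appended, so no data is lost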

Related

How to send image as a REST API response? What guidelines to follow? [closed]

I am making an application using Node.js and Express. I am able to save files to the server using multer, but when sending a response I have two options:
1. Send the URI of the image in the JSON, and let the front end call this URI to display the image.
2. Send the image data as part of the JSON, using some form of encoding such as base64.
Since I am new to web development, I am confused about which option to use. Some people have said that the first option requires two API calls and so can be slow, while I have also heard that the second option uses more memory.
What other things should I consider while choosing, and is there any other way of sending images to the client side?
Option 1
Is less complex since no conversion is needed. The two API calls won't slow you down; the image size matters far more. The file can be stored on and served directly from the filesystem, and a file download is quick to implement. By contrast, base64 encoding makes the payload roughly 33% larger, which has a real impact on performance for large files.
Option 2
Base64 is more "secure" in the sense that nobody can hotlink directly to the image on your site.
You only need base64 for that reason, or if you have to transfer the image data as a string because you cannot transfer it as binary.
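To make the trade-off concrete, here is a minimal sketch (in Python, purely for illustration; the question itself is about Node/Express) that builds both response payloads for the same image and shows the roughly 33% size overhead of the base64 variant. The file name and URL are placeholders.

    # Minimal sketch comparing the two response options for the same image.
    # Only the payload shapes are shown; the web framework doesn't matter here.
    # "photo.jpg" and "/static/photo.jpg" are placeholders.
    import base64
    import json
    import os

    image_path = "photo.jpg"
    raw_size = os.path.getsize(image_path)

    # Option 1: the JSON only carries a URL; the image bytes are served
    # separately (e.g. by a static file handler), so the JSON stays tiny.
    option1 = json.dumps({"imageUrl": "/static/photo.jpg"})

    # Option 2: the image bytes are base64-encoded and embedded in the JSON.
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    option2 = json.dumps({"imageData": encoded})

    print(f"raw image:        {raw_size} bytes")
    print(f"option 1 payload: {len(option1)} bytes (plus a second request for the image)")
    print(f"option 2 payload: {len(option2)} bytes "
          f"(~{100 * (len(encoded) / raw_size - 1):.0f}% larger than the raw image)")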
Use Case
If this is a private, non-production project, just try both and use the one you like; in the end you are learning something. It's only important to stay consistent!
If one option fits you better, implement it the way you like. You can always refactor a given part of the application later, when you have more experience or when the core parts of your application are finished. Sometimes, after working with one of the techniques for a while, it becomes clearer which approach to use.
For learning, it's often better to go ahead and implement something that works, then refactor as problems occur, rather than over-engineering the small details up front.

Systematically replace ALL words in a plain text file with a new library of words from Excel [closed]

I have created a list of Morse code abbreviations in Excel, and now I want to transform any kind of text into these new words (abbreviations) by loading pages upon pages of text into a script that produces the abbreviated transcript.
For instance:
Once upon a time I was going to a beach and saw a cow
ONC UPN A TI I WS GG TO A BCH SS SAW A CW
I want this so that I can add words that are not in the list and remove words that are never used (manually, though). The list dates from before the war and contains many words that are unused today, so I want to optimize it for proper usage. It's the Evans Code, if anyone is interested.
If the script could also add 1 to the cell next to each word in Excel, as a count of how many times it has been replaced, that would be great.
I have no idea how to go about it; I just want to know if it is possible, and if so, please guide me on my way. I'm not asking you to do all the work, which I'm sure is not as straightforward as it is on paper.
I've got a little knowledge of Node.js and C++, but I don't know if either of those is the right language for the task at hand.
I have no idea of how to go about it, I just want to know if it is possible, and if so, please guide me on my way.
Break the problem down into smaller tasks.
Choose a single programming language. (Any of the languages you listed would be suitable.)
Get Excel to export the spreadsheet containing the dictionary as a CSV file. (CSV files are easier for a program to read.)
Find a CSV reader library. (Google it.)
Write a method to read the CSV into an in-memory data structure, e.g. a "map" or "dictionary" that maps each word to its Evans code abbreviation.
Write code to
read your input file a line at a time.
split each line into words
for each word, look the word up in the dictionary (ignoring case) and replace it with the code word
reassemble words into lines and write to output
Punctuation might make this a bit more complicated, but your example doesn't show any punctuation.
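Putting the steps above together, here is a minimal sketch in Python (the asker mentioned Node.js and C++, but the structure is the same in any language). It assumes the Excel sheet has been exported as a two-column CSV of word,abbreviation pairs; all file names are placeholders.

    # Minimal sketch of the steps above. Assumes the Excel sheet was exported
    # as a two-column CSV, "word,abbreviation" (e.g. "once,ONC"), and that the
    # input is a plain .txt file. File names are placeholders.
    import csv
    import re
    from collections import Counter

    def load_dictionary(csv_path):
        """Read the word -> abbreviation mapping from the exported CSV."""
        mapping = {}
        with open(csv_path, newline="", encoding="utf-8") as f:
            for row in csv.reader(f):
                if len(row) >= 2:
                    mapping[row[0].strip().lower()] = row[1].strip()
        return mapping

    def abbreviate(text, mapping, usage):
        """Replace every known word with its abbreviation, counting each hit."""
        def replace(match):
            word = match.group(0)
            code = mapping.get(word.lower())
            if code is None:
                return word.upper()     # unknown words are left as-is (uppercased)
            usage[word.lower()] += 1    # the "add 1 to the cell next to the word" count
            return code
        return re.sub(r"[A-Za-z']+", replace, text)

    mapping = load_dictionary("evans_code.csv")
    usage = Counter()

    with open("input.txt", encoding="utf-8") as src, \
         open("output.txt", "w", encoding="utf-8") as dst:
        for line in src:                       # read the input a line at a time
            dst.write(abbreviate(line, mapping, usage))

    # Write the usage counts back out as a CSV that can be re-imported into Excel.
    with open("usage_counts.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        for word, count in sorted(usage.items()):
            writer.writerow([word, count])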
I'm not asking you to do all the work, which I'm sure is not as straight-forward as it is on paper.
Actually, it should be pretty much as straightforward as it is on paper, provided you get into it.
(But the longer you put off starting because it "looks hard", the harder it will actually be! That kind of thought pattern tends to be self-fulfilling.)

Command line to Render MIDI from Kontakt patch [closed]

I have a bunch of MIDI files to render with the same Kontakt patch.
I would like to automatically render these MIDI files from the command line, without needing to load any program manually or manually assign the Kontakt patch to the MIDI track.
I want to write my own program "MyProgram" and use it as follows.
For example:
MyProgram.exe -MIDI myMidiFile.mid -kontakt myPatch.nki -out myWav.wav,
which would render the MIDI file to a WAV file with the specific Kontakt patch I assigned.
I don't have any platform constraint; however, Windows would probably be best because of the VST context.
I have no idea how to achieve this. Does anyone have an idea?
Cheers
Not possible as specified unless NI has released something that I'm unaware of.
What is instead possible is generic VSTi plugin state persistence (see http://vstdev.richackard.com/doc/vstsdk/faq.html). What you need is a command-line VST host that can export and import the plugin state as a file, so you can load it back again. This is useful because Kontakt is a VST plugin, so you can automate anything you want by controlling Kontakt over VST. Once set up, it would be conceptually possible to load and apply this state via a fully automated command-line tool. This could definitely be implemented using the SDK: http://www.steinberg.net/en/company/developers.html
More help on VST host development: http://teragonaudio.com/article/How-to-make-your-own-VST-host.html
I can only find one similar tool that already exists; I don't know for sure if it will work for you, but the forum posts I'm reading suggest it supports VST save states.
http://teragonaudio.com/MrsWatson.html
MrsWatson is open source, so you can extend it to better suit your needs if necessary.
I don't think you will be able to do it directly from the .nki file because this is Kontakt's own format and there isn't a function in the VST spec that will let you load it in that form.
What you may be able to do, though, is load Kontakt into a sequencer/DAW, load the .nki file, and then save the plugin state to an .fxp file, which you can then load in your application. Note that Kontakt isn't currently VST3, so you'll need to use the VST 2.x SDK.

How text-to-audio software works [closed]

I want to create software that can convert readable text (non-English) to audio output.
After some searching, I have realized that most existing audio readers sound too robotic and lack the qualities of human speech.
I am looking for algorithms or papers that can give me some idea of how to proceed with implementing such a thing.
or
Does anyone know how some of the world's best text-reader software works?
My expectations are:
Reduced robotic sound, and more human-like sound
High-quality output
Lightweight, yet fast processing speed
Please edit this question if anyone thinks some points are missing on this aspect.
Some small steps that might help give you a basic idea of what happens:
Create a dictionary of words, each entry mapping a word to its recorded sound.
Create your own signal processor; this will help you add effects to the sound, e.g. a robotic voice, a female voice, or something else.
Parse the text file you want to read into an array, separating each word and punctuation mark. E.g. "I want to die, this isn't a correct way to live." becomes the array {I : want : to : die : , : this : isn't : a : correct : way : to : live : .}
Use the punctuation to implement life-like parameters in your audio reader, e.g. a short pause for "," and a longer pause for ".".
Use the words to look up the audio in your database (the dictionary from step 1).
Play the whole array continuously with a short pause between each element; this works similarly to spaces.
I think these are the major steps. To make it faster you can use advanced sound-processing tools to cache small chunks of sound data and add data on the fly while you modulate the sound signals.
I hope this helps.
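As an illustration of steps 3-6 above, here is a minimal Python sketch: it tokenizes the text into words and punctuation, then builds a playback sequence of clip paths and pauses. The dictionary entries, clip paths, and pause lengths are placeholders, and the actual audio playback is left out.

    # Minimal sketch of steps 3-6: tokenize the text, map each word to a
    # recorded clip from the dictionary, and turn punctuation into pauses.
    # Clip paths and pause lengths are placeholders; playback itself is omitted.
    import re

    # Step 1's dictionary: word -> path of its recorded sound (placeholders).
    SOUND_DICTIONARY = {
        "i": "clips/i.wav",
        "want": "clips/want.wav",
        "to": "clips/to.wav",
        "live": "clips/live.wav",
    }

    # Step 4: punctuation -> pause length in seconds (placeholder values).
    PAUSES = {",": 0.25, ".": 0.6, "!": 0.6, "?": 0.6}

    def tokenize(text):
        """Step 3: split the text into words and punctuation marks."""
        return re.findall(r"[A-Za-z']+|[,.!?]", text)

    def build_sequence(text):
        """Steps 5-6: turn the token array into (kind, value) playback items."""
        sequence = []
        for token in tokenize(text):
            if token in PAUSES:
                sequence.append(("pause", PAUSES[token]))
            elif token.lower() in SOUND_DICTIONARY:
                sequence.append(("clip", SOUND_DICTIONARY[token.lower()]))
                sequence.append(("pause", 0.1))      # short gap, like a space
            else:
                sequence.append(("missing", token))  # no recording for this word
        return sequence

    for item in build_sequence("I want to live, I want to live."):
        print(item)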
It would be nice if you could tell us what kind of app you'll create (mobile, web, desktop) and also what language you'll develop it in (PHP, Java, C++, etc.), because if you search Google you'll find a lot of website plugins that convert text to audio; you can download them and look at the code.
Also, it's hard to find an app that doesn't sound like a robot, and if you find one you'll probably have to pay for it.
The "robotic" aspect of text to speech that you are concerned about is a matter of the quality of "prosody". This is an active research area. You could probably get a PhD for working on improving prosody in TTS systems. If you would like to read about current research you can try searching for "improving prosody in text to speech".
A big part of the problem is having an accurate model of speech prosody in a given language. The thesis "MeLos: Analysis and Modelling of Speech Prosody and Speaking Style" by Nicolas Obin (2012) contains a survey of the state of the art in speech prosody modelling. Or try searching for "text to speech prosody survey state of the art".

WAV-MIDI matching [closed]

Let's consider a variation of the "WAV to MIDI" conversion problem. I'm aware of the complexity of such a problem, and I know that a vast literature exists on the more general Music Information Retrieval (MIR) subject.
But let's suppose here that we already have both the WAV and the MIDI representation of a music piece, so we don't actually have to discover pitches inside the WAV signal from scratch... we "just" have to match the pitches detected (using a suitable algorithm) against the NoteOn events contained in the MIDI representation. I definitely suppose we should use the information contained in the MIDI file to give some hints to the pitch-detection algorithm.
Such a matching tool could be very useful, for example for MIDI "humanization": we could make the MIDI representation more expressive using the information retrieved from the WAV signal to "fine tune" note onsets, durations, dynamics, etc...
Does anybody know if such a problem has already been addressed in literature?
Any form of contribution or assistance will be greatly appreciated.
Thanks in advance.
At the 2010 Music Hackday in London some people used the MATCH Vamp plugin to align scores to YouTube videos. It was very impressive! Maybe their source code could be of use. I don't know how well MATCH works on audio generated from MIDI files, but that could be worth a try. Here's a link: http://wiki.musichackday.org/index.php?title=Auto_Score_Tubing
This guy appears to have done something similar: http://www.musanim.com/wavalign/ His results are definitely interesting.
This seems like an interesting idea. What are you trying to do; is it just matching the notes' pitches, or do you have something else in mind?
One thing you could look into: if you know the note (as an integer value, I think; it's been a while) that is passed into the noteOn method, you may be able to use that to map it to the WAV signal. It depends on what you are trying to do.
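Building on that idea: the note integer in each NoteOn event maps directly to a pitch frequency, which is what you would compare against the pitch detected in the WAV. A minimal Python sketch (the detected-pitch values are made up purely for illustration):

    # Minimal sketch: map MIDI note numbers (as found in NoteOn events) to
    # frequencies and compare them against pitches detected in the WAV.
    # The detected pitch values below are made up for illustration.
    import math

    def midi_note_to_hz(note):
        """Equal-tempered pitch of a MIDI note number (A4 = note 69 = 440 Hz)."""
        return 440.0 * 2.0 ** ((note - 69) / 12.0)

    def cents_apart(f1, f2):
        """Distance between two frequencies in cents (100 cents = 1 semitone)."""
        return 1200.0 * math.log2(f1 / f2)

    # NoteOn note numbers taken from the MIDI file (placeholder values).
    midi_notes = [60, 64, 67]             # C4, E4, G4
    # Pitches detected in the WAV by some pitch-detection algorithm (made up).
    detected_hz = [262.1, 331.0, 391.5]

    for note, detected in zip(midi_notes, detected_hz):
        expected = midi_note_to_hz(note)
        offset = cents_apart(detected, expected)
        print(f"MIDI note {note}: expected {expected:.1f} Hz, "
              f"detected {detected:.1f} Hz, off by {offset:+.1f} cents")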
Also, there are some things you could play around with in (I think it is called) the MIDI controller, such as modulation, pitch, volume, pan, or playing a couple of notes simultaneously. What you could do with this is have a background thread that changes some of those effects as the note is being played. For example, you could have a note get quieter the longer it is played, or have a note pan between the left and right speakers, etc.
I haven't really played with this code in a long time, but there are some examples of using a MIDI controller.
