Best program to analyze data from cycle tests [closed] - excel

I've performed some cycle tests of steel joints. The test conditions involved applying 3 cycles at each amplitude value, and three different amplitudes were used.
Now I have a huge text file of rotation and moment values, and I need to determine the stiffness of each branch of the diagram with a regression analysis method, so I first need to separate the cycles.
Do you recommend
Mathematica,
Matlab,
Excel,
or another program best suited to make this task easier?
Many thanks as always for your advice.

It's not entirely clear what you're looking for in the question. I also don't know much about Mathematica or Excel, but I'll say as much as I can about how Matlab might be used to address this problem.
When you say 'separate each cycle', I assume you mean that your text file contains data from all 3 cycles and you want to partition it into 3 separate datasets, one for each individual cycle. I would guess that Matlab will import your data file (the File -> Import Data menu is quite flexible, and I've used it successfully with e.g. 30 MB files, but if your files are hundreds of MB that might be a problem).
Assuming there is some structure to the data file, I would expect that you can slice it to achieve your desired partition, e.g.
cycle1 = data(1:3:end, :); %If data from cycles are stored in alternate rows
cycle1 = data(1:end/3, :); %If data from cycles are stored in blocks of rows
cycle1 = data(:, 1); %If data from cycles are stored in separate columns
etc. If you comment with a description of the structure of the file, I may be able to help further.
Regarding regression analysis, Matlab has several tools; polyfit is quite flexible and might satisfy your requirements. I don't know anything about materials, but I may be able to give better suggestions if you explain the relationship between stiffness and the measured variables.
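For illustration only (not from the answer above), here is the same slope-fit idea sketched in Python/NumPy, whose polyfit mirrors Matlab's; the column layout and the numbers are invented assumptions:
import numpy as np

# Sketch: assume one branch of the moment-rotation curve, with rotation in
# column 0 and moment in column 1 (invented layout and values).
branch = np.array([[0.000,  0.0],
                   [0.001, 12.5],
                   [0.002, 24.8],
                   [0.003, 37.6]])

# A first-order fit gives the stiffness as the slope of moment vs rotation.
slope, intercept = np.polyfit(branch[:, 0], branch[:, 1], 1)
print('stiffness ~', slope)
The polyfit call translates directly back to Matlab if you prefer to stay there.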

Mathematica is great, but in terms of the widest range of tools I'd opt for R, perhaps starting with its glm function. There are many other suitable packages; even a neural network or random forest for regression might make an interesting alternative, and all are freely available in R.

Related

Programming Wavelets for Audio Identification [closed]

How exactly is a wavelet used digitally?
Wikipedia states
"a wavelet could be created to have a frequency of Middle C and a
short duration of roughly a 32nd note"
Would this be a data structure holding e.g. {sampleNumber, frequency} pairs?
If a wavelet is an array of these pairs, how is it applied to the audio data?
How does this wavelet apply to the analysis when using an FFT?
What is actually being compared to identify the signal?
I feel like you've conflated a few different concepts here. The first confusing part is this:
Would this be a data structure holding e.g. {sampleNumber, frequency} pairs?
It's a continuous function, so pick your favourite way of representing continuous functions in discrete computer memory (in practice, an array of samples), and that is a sensible way to represent it.
The wavelet is applied to the audio signal by convolution (this is actually the next paragraph in the Wikipedia article you referenced...), as is relatively standard in most DSP applications (particularly audio-based applications). Wavelets are really just a particular kind of filter in the broader signal-processing sense, in that they have particular properties that are desirable in some applications, but they are still fundamentally just filters!
As for the comparison being performed - it's the presence or absence of a particular frequency in the input signal corresponding to the frequency (or frequencies) that the wavelet is designed to identify.
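To make the convolution step concrete, here is a minimal Python/NumPy sketch; the sample rate, wavelet shape and test signal are all invented for illustration:
import numpy as np

fs = 44100                      # sample rate in Hz (assumed)
f0 = 261.63                     # middle C in Hz
duration = 0.06                 # roughly a 32nd-note-length window (assumed)

# Build a short Morlet-like wavelet: a sinusoid at f0 under a Gaussian window.
t = np.arange(-duration / 2, duration / 2, 1.0 / fs)
wavelet = np.exp(-0.5 * (t / (duration / 6)) ** 2) * np.cos(2 * np.pi * f0 * t)

# Invented test input: one second of middle C buried in noise.
ts = np.arange(0, 1.0, 1.0 / fs)
audio = np.sin(2 * np.pi * f0 * ts) + 0.3 * np.random.randn(ts.size)

# 'Applying the wavelet' to the audio is just convolution.
response = np.convolve(audio, wavelet, mode='same')

# A large response envelope indicates the wavelet's frequency is present.
print(np.abs(response).mean())
This only detects the frequency over the whole buffer; a full wavelet transform repeats the idea at many scales and keeps the time-localised responses.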

Compressing a string [closed]

There is a frequently asked question in interviews about compressing a string.
I'm not looking for code; I only need an efficient algorithm that solves the problem.
Given a string (e.g. aaabbccaaadd), compress it (3a2b2c3a2d).
My solution:
Walk over the string; every time I see the same letter, I count it.
I output the letter and the counter when I see a different letter coming (and start over again).
Is there a more efficient way to do this?
Thanks
That's called run-length encoding, and the algorithm you describe is basically the best you'll get. It takes O(1) auxiliary storage (save the last symbol seen, or equivalently inspect the upcoming element, plus a counter of how many identical symbols you've seen) and runs in O(n) time. Since you need to inspect each symbol at least once to know the result, you can't do better than O(n) time anyway. What's more, it can process streams one symbol at a time, and output one symbol at a time, so you actually only need O(1) RAM.
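For reference, here is a minimal Python sketch of that single-pass approach (the function name is just for illustration):
def rle_encode(s):
    # Single-pass run-length encoding: 'aaabbccaaadd' -> '3a2b2c3a2d'
    if not s:
        return ''
    out = []
    prev, count = s[0], 1
    for ch in s[1:]:
        if ch == prev:
            count += 1
        else:
            out.append(str(count) + prev)   # flush the finished run
            prev, count = ch, 1
    out.append(str(count) + prev)           # flush the final run
    return ''.join(out)

print(rle_encode('aaabbccaaadd'))           # 3a2b2c3a2d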
You can pull a number of tricks to get the constant factors better, but the algorithm remains basically the same. Such tricks include:
If you stream to a slow destination (like disk or network), buffer. Extensively.
If you expect long runs of identical symbols, you may be able to vectorize the loop counting them, or at least make that loop tighter by moving out the other cases.
If applicable, tell your compiler not to worry about aliasing between input and output pointers.
Such micro-optimizations may be moot if your data source is slow. For the level of optimization some of the points above address, even RAM can count as slow.
Use Lempel-Ziv compression if your string will be sufficiently long. The advantage is that it will efficiently shorten not only runs of a single repeated symbol but also repeated 'groups' of symbols. See Wikipedia: Lempel-Ziv-Welch.
A vague example - so that you get the idea:
aaabqxyzaaatuoiaaabhaaabi will be compressed as:
AbqxyzAtuoiBhBi
where [A = aaa] & [B = Ab = aaab]
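To make that less vague, here is a minimal LZW-style encoder sketch in Python (it emits dictionary codes rather than the letter shorthand used above):
def lzw_encode(s):
    # Start with single-character entries; grow the dictionary as we go.
    dictionary = {chr(i): i for i in range(256)}
    next_code = 256
    current = ''
    codes = []
    for ch in s:
        candidate = current + ch
        if candidate in dictionary:
            current = candidate                 # keep extending the match
        else:
            codes.append(dictionary[current])
            dictionary[candidate] = next_code   # remember the new sequence
            next_code += 1
            current = ch
    if current:
        codes.append(dictionary[current])
    return codes

print(lzw_encode('aaabqxyzaaatuoiaaabhaaabi'))
Repeated substrings like 'aaab' end up in the dictionary after their first occurrence and are emitted as single codes afterwards.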
Many compression algorithms are based on Huffman coding. That's the answer I'd give in an interview.
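For completeness, a minimal Huffman-coding sketch in Python (illustrative only; it builds a code table for a string and reports the encoded bit length):
import heapq
from collections import Counter

def huffman_codes(s):
    # Each heap entry: (total frequency, tie-breaker, {char: bitstring}).
    heap = [(freq, i, {ch: ''}) for i, (ch, freq) in enumerate(Counter(s).items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        merged = {ch: '0' + code for ch, code in left.items()}
        merged.update({ch: '1' + code for ch, code in right.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

codes = huffman_codes('aaabbccaaadd')
print(codes)
print(sum(len(codes[ch]) for ch in 'aaabbccaaadd'), 'bits')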

Haskell Linear Algebra Matrix Library for Arbitrary Element Types [closed]

I'm looking for a Haskell linear algebra library that has the following features:
Matrix multiplication
Matrix addition
Matrix transposition
Rank calculation
Matrix inversion is a plus
and has the following properties:
arbitrary element (scalar) types (in particular, element types that are not Storable instances). My elements are instances of Num, and additionally a multiplicative inverse can be computed. The elements mathematically form a finite field (GF(2^256)). That should be enough to implement the features mentioned above.
arbitrary matrix sizes (I'll probably need something like 100x100, but the matrix sizes will depend on the user's input so it should not be limited by anything else but the memory or the computational power available)
as fast as possible, but I'm aware that a library for arbitrary elements will probably not perform like a C/Fortran library that does the work (interfaced via FFI) because of the indirection of arbitrary (non Int, Double or similar) types. At least one pointer gets dereferenced when an element is touched
(written in Haskell; this is not a hard requirement for me, but since my elements are not Storable instances, the library effectively has to be written in Haskell)
I have already tried hard and evaluated everything that looked promising (most of the libraries on Hackage directly state that they won't work for me). In particular, I wrote test code using:
hmatrix, assumes Storable elements
Vec, but the documentation states:
Low Dimension : Although the dimensionality is limited only by what GHC will handle, the library is meant for 2,3 and 4 dimensions. For general linear algebra, check out the excellent hmatrix library and blas bindings
I looked into the code and the documentation of many more libraries but nothing seems to suit my needs :-(.
Update
Since there seems to be nothing, I started a project on GitHub which aims to develop such a library. The current state is very minimalistic, it is not optimized for speed at all, and only the most basic functions have tests and can therefore be expected to work. But should you be interested in using it or helping to develop it: contact me (you'll find my mail address on my web site) or send pull requests.
Well, I'm not really sure how relevant my answer is, but I've had good experiences with the GNU GSL library, and there is a wrapper for Haskell:
http://hackage.haskell.org/package/bindings-gsl
Check it out, maybe it will help you

In-depth analysis of the difference between the CPU and GPU [closed]

I've been searching for the major differences between a CPU and a GPU, more precisely the fine line that separates the two. For example, why not use multiple CPUs instead of a GPU, and vice versa? Why is the GPU "faster" at crunching calculations than the CPU? What are some types of things that one of them can do and the other can't do, or can't do efficiently, and why? Please don't reply with answers like "central processing unit" and "graphics processing unit"; I'm looking for an in-depth technical answer.
GPUs are basically massively parallel computers. They work well on problems that can use large scale data decomposition and they offer orders of magnitude speedups on those problems.
However, individual processing units in a GPU cannot match a CPU for general-purpose performance. They are much simpler and do not have optimizations like long pipelines, out-of-order execution and instruction-level parallelism.
They also have other drawbacks. Firstly, your users have to have one, which you cannot rely on unless you control the hardware. There are also overheads in transferring the data from main memory to GPU memory and back.
So it depends on your requirements: in some cases GPUs or dedicated processing units like Tesla are the clear winners, but in other cases, your work cannot be decomposed to make full use of a GPU and the overheads then make CPUs the better choice.
First watch this demonstration:
http://www.nvidia.com/object/nvision08_gpu_v_cpu.html
That was fun!
So what's important here is that the CPU can be directed to perform basically any calculation on command. For calculations that are unrelated to each other, or where each computation is strongly dependent on its neighbours (rather than merely being the same operation), you usually need a full CPU. As an example, consider compiling a large C/C++ project. The compiler has to read each token of each source file in sequence before it can understand the meaning of the next; even though there are lots of source files to process, they all have different structure, so the same calculations don't apply across the source files.
You could speed that up by having several independent CPUs, each working on separate files. Improving the speed by a factor of X means you need X CPUs, which will cost X times as much as one CPU.
Some kinds of task involve doing exactly the same calculation on every item in a dataset. Some physics simulations look like this: in each step, each 'element' in the simulation moves a little bit, according to the sum of the forces applied to it by its immediate neighbours.
Since you're doing the same calculation on a big set of data, you can duplicate some parts of a CPU but share others (in the linked demonstration, the air system, valves and aiming are shared; only the barrels are duplicated for each paintball). Doing X calculations therefore requires less than X times the cost in hardware.
The obvious disadvantage is that the shared hardware means you can't tell one subset of the parallel processor to do one thing while another subset does something unrelated; the extra parallel capacity would go to waste while the GPU performed first one task and then a different one.

Are units of measurement unique to F#? [closed]

I was reading Andrew Kennedy's blog post series on units of measurement in F# and it makes a lot of sense in a lot of cases. Are there any other languages that have such a system?
Edit: To be more clear, I mean the flexible units of measurement system where you can define your own arbitrarily.
Does TI-89 BASIC count? Enter 54_kg * (_c^2) and it will give you an answer in joules.
Other than that, I can't recall any languages that have it built in, but any language with decent OO should make it simple to roll your own. Which means someone else probably already did.
Google confirms. For example, here's one in Python. __repr__ could easily be amended to also select the most appropriate derived unit, etc.
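As a flavour of that run-time approach, here is a minimal Python sketch (not the linked implementation; the names and behaviour are invented for illustration):
class Quantity:
    # A value carrying its dimension exponents, e.g. {'m': 1, 's': -2}.
    def __init__(self, value, dims):
        self.value = value
        self.dims = {u: e for u, e in dims.items() if e}

    def __add__(self, other):
        if self.dims != other.dims:
            raise TypeError('incompatible units')
        return Quantity(self.value + other.value, self.dims)

    def __mul__(self, other):
        dims = dict(self.dims)
        for u, e in other.dims.items():
            dims[u] = dims.get(u, 0) + e
        return Quantity(self.value * other.value, dims)

    def __repr__(self):
        unit = ' '.join(u if e == 1 else '%s^%d' % (u, e)
                        for u, e in sorted(self.dims.items()))
        return ('%g %s' % (self.value, unit)).strip()

metre = Quantity(1, {'m': 1})
per_s2 = Quantity(1, {'s': -2})
print(Quantity(9.81, {}) * metre * per_s2)   # 9.81 m s^-2
Unlike F#'s units of measure, the checking here happens at run time rather than compile time.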
CPAN has several modules for Perl: Physics::Unit, Data::Dimensions, Class::Measure, Math::Units::PhysicalValue, and a handful of others that will convert but don't really combine values with units.
Nemerle had compiler-checked Units of Measure in 2006.
http://nemerle.org
http://nemerle.org/forum.old/viewtopic.php?t=265&view=previous&sid=00f48f33fafd3d49cc6a92350b77d554
C++ has it, in the form of boost::units.
I'm not sure if this really counts, but the RPL system on my HP-48 calculator does have similar features. I can write 40_gal 5_l + and get the right answer of 156.416 liters.
I believe I saw that Fortress supports this; I'll see if I can find a link.
I can't find a specific link, but the language specification makes mention of it in a couple of places. The 1.0 language specification also says that dimensions and units were temporarily dropped from the specification (along with a whole heap of other features) to match up with the current implementation. It's a work in progress, so I guess things are in flux.
F# is the first mainstream language to support this feature.
There is also a Java specification for units at http://jcp.org/en/jsr/detail?id=275, and you can already use an implementation from http://jscience.org/
Nemerle has something much better than F#!
You should check this one: http://rsdn.ru/forum/src/1823225.flat.aspx#1823225
It is really great, and you can download it here: http://rsdn.ru/File/27948/Oyster.Units.0.06.zip
An example:
def m3 = 1 g;
def m4 = Si.Mass(m1);
WriteLine($"Mass in SI: $m4, in CGS: $m3");
def x1 = Si.Area(1 cm * 10 m);
WriteLine($"Area of 1 cm * 10 m = $x1 m");
I'm pretty sure Ada has it.
Well, I made the QuantitySystem library especially for units in C#. It doesn't do compile-time checking, but I've tried to make it behave the way I wanted.
It also supports extension, so you can define your own units.
http://QuantitySystem.CodePlex.com
It can also differentiate between torque and work :) [This was important for me]
The library's approach goes from dimensions to units; everything else I've seen so far takes a units-only approach.
I'm sure you'd be able to do this in most dynamic languages (JavaScript, Python, Ruby) by carefully monkey-patching some of the base classes. You might run into problems, though, when working with imperial measurements.
