I have spectrograms which I acquired without the original sound files. They are greyscale images where the x axis represents time, the y axis represents frequency, and each pixel value represents volume (or so I believe).
I am fairly certain the files come from a few songs, and I need to identify which songs they are. There are many files like this, so I need to be able to convert them in bulk.
Is there a way to convert them back to an mp3? How would this be done?
I understand that the result won't contain all of the original information, but for my purposes any conversion will do.
The answer is: it depends on your needs and resources. It is possible, but you may not be satisfied with the result. Since you only have image files, note that you would really need separate real and imaginary (or magnitude and phase) spectra; otherwise you are missing all of the phase information. Even so, the reconstruction should still be intelligible. A linear frequency scale is also desirable. The other problem is resolution.
For audible material you need at least 4k samples/s, so each second of the recording should span at least 4000/Fpx pixels in the time domain, where Fpx is the number of pixels in the frequency domain. Assuming Fpx is 400, each second of the recording should be at least 10px wide. For HiFi quality it is about 10 times more.
I also doubt that the amplitude information mapped to greyscale (or RGB) is reliable. You will probably get only a few bits per sample, whereas decent quality starts at around 12 bits per sample.
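As a starting point, here is a minimal sketch of one way to attempt this in Python, using the Griffin-Lim algorithm to estimate the missing phase from a magnitude spectrogram. The file name, the dB scaling, and the STFT parameters are all assumptions and must be matched to however the images were actually generated:

import numpy as np
import cv2
import librosa
import soundfile as sf

# Hypothetical input: greyscale spectrogram image, low frequencies at the bottom.
img = cv2.imread('spectrogram.png', cv2.IMREAD_GRAYSCALE)
mag_db = np.flipud(img).astype(np.float32) / 255.0 * 80.0 - 80.0  # guess: pixels encode dB in [-80, 0]
mag = librosa.db_to_amplitude(mag_db)

# Griffin-Lim iteratively estimates a phase consistent with the magnitudes.
# The image height must match the STFT size: rows == n_fft // 2 + 1.
n_fft = 2 * (mag.shape[0] - 1)
audio = librosa.griffinlim(mag, n_iter=64, hop_length=n_fft // 4)
sf.write('reconstructed.wav', audio, 22050)  # the sample rate is also a guess

Converting the resulting WAV files to mp3 can then be scripted in bulk with a tool such as ffmpeg.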
The general task is to binarize the image so that only the brightest spots remain, but adaptive binarization and the Otsu method do not give an acceptable result because of light traces (shown in the image).
I think I need to go over the entire image with a small window that picks out a local minimum in each area. I am counting on the fact that, with the correct choice of threshold, only the light spots that need to be found will remain. But I do not know how to apply the standard OpenCV threshold function in sliding windows.
UPD: After the proposed adaptive threshold, the image looks like this. Not perfect, but much closer to what I need. It seems that a combination of threshold functions does not always give a better result than a single one.
This is the command:
outputimg = cv.adaptiveThreshold(img, 255, cv.ADAPTIVE_THRESH_MEAN_C, cv.THRESH_BINARY, 11, 0)
Further explanation and examples: https://docs.opencv.org/3.4/d7/d4d/tutorial_py_thresholding.html
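For context, a minimal runnable version of that call (the file names are placeholders; the 11x11 block size and the constant C = 0 come from the command above and may need tuning for your images):

import cv2 as cv

img = cv.imread('input.png', cv.IMREAD_GRAYSCALE)  # hypothetical input image
# For each pixel, the threshold is the mean of its 11x11 neighbourhood minus C (here 0);
# pixels above their local threshold become 255, the rest become 0.
outputimg = cv.adaptiveThreshold(img, 255, cv.ADAPTIVE_THRESH_MEAN_C,
                                 cv.THRESH_BINARY, 11, 0)
cv.imwrite('output.png', outputimg)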
Here I have an image of two objects/stars:
I have hundreds of images like this one, from the NASA MAST Archive. (The corners are not stars, just errors; one star is at the top, the other at the bottom.)
What algorithm should I use to determine the number of objects (in this case stars) in one picture? For a human, it is pretty obvious that there are two objects, but I want to implement this detection in Python.
For reference, here is a picture with one star only:
(The pictures are produced from FITS files with PyKE.)
You can apply a threshold and use OpenCV to analyze the number of connected components (groups).
For example:
import cv2

# Read as greyscale: Otsu thresholding expects a single-channel 8-bit image.
src = cv2.imread('/path/to/your/image', cv2.IMREAD_GRAYSCALE)
ret, thresh = cv2.threshold(src, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
connectivity = 8  # also count diagonal neighbors; choose 4 for horizontal and vertical only
# Analysis of the binary image
output = cv2.connectedComponentsWithStats(thresh, connectivity, cv2.CV_32S)
n_groups = output[0] - 1  # output[0] counts labels including the background, so subtract one
To get rid of noise, you can decide not to count groups with fewer than TH connected pixels (from the example images you uploaded I would choose something like TH = 4), as sketched below.
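A sketch of that filtering step, using the stats array returned by connectedComponentsWithStats above (TH = 4 as suggested; the exact value is a judgment call):

num_labels, labels, stats, centroids = output
TH = 4  # minimum pixel count for a component to be counted as a star
# stats[i, cv2.CC_STAT_AREA] is the pixel count of component i; label 0 is the background.
n_stars = sum(1 for i in range(1, num_labels) if stats[i, cv2.CC_STAT_AREA] >= TH)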
With regard to Event Sourcing and Domain Driven Design, I'm looking for a good software solution to help my team model our Aggregates electronically during an Event Storming session.
I have considered simple sticky note applications, but they leave a lot to be desired, such as the ability to save and share.
So what would you recommend as a good Event Storming software?
I think you will only get opinionated answers here. Tools you really need to consider:
A paper roll, preferably a plotter roll, since it is dense enough and wide enough. For more vertical space, put up two strips, one below the other. Take a photo when you are done and share it with the others. It is OK to scrap the roll afterwards: for the next session it is beneficial to re-create the picture from scratch, and it will come out better (see WET - write everything twice).
Online tools that have sticky notes of different colours and sizes. These should only be used if you are running a distributed online session. Several such tools allow real-time collaboration; we use one and have tried another, which is also very good. Choose for yourself, I do not work for any of them:
Miro
Conceptboard
Mural
How about a "whiteboard"? I actually found it quite easy to create a very large image with a very dark grey background and simply use Paint to draw on top of it. Other devs were able to add their ideas to it and use Save As... so that the original file was not overwritten.
Does that make sense?
I need to check for complete separation. I am using SPSS and need to know what steps to take to get the graphic on this site. Can someone help me?
SPSS does not provide that probability curve (SAS and Stata can do that). However, plotting the 1/0 outcome against the continuous predictor and observing how far the two horizontal bands of points overlap may be enough to give you a hint.
If you have enough data, you can also first split your data into groups (for example, 10 equal-sized groups by your continuous predictor), then compute each group's mean (i.e., the probability of a "yes" outcome) and join the points. That line should approximate the curve in the illustration you provided; see the sketch below.
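The same binned-probability idea, sketched in Python rather than SPSS (the file and column names are hypothetical):

import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical data: a continuous predictor 'x' and a 0/1 outcome 'y'.
df = pd.read_csv('data.csv')
df['bin'] = pd.qcut(df['x'], q=10)  # 10 equal-sized groups by the predictor
grouped = df.groupby('bin', observed=True).agg(x_mid=('x', 'mean'),
                                               p_yes=('y', 'mean'))
plt.plot(grouped['x_mid'], grouped['p_yes'], marker='o')  # approximates the probability curve
plt.show()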
For Windows there are many tools for extracting 3D data from programmes by intercepting the OpenGL data (e.g. 3D Ripper DX, glintercept, Ogle, OpenGLXtractor, HijackGL).
Are there any similar tools for Linux? If not, would it be possible to make one? (And would anyone be interested in starting an open source project with me?)
I will actually automate the process, but that is another story.
First a word of warning: OpenGL is not a scene graph. There is no such thing as a "scene" or "objects" (in the physical-thing sense) in OpenGL. All OpenGL does is draw points, lines, and triangles into a framebuffer, one at a time and independently of each other. So intercepting OpenGL drawing calls to extract objects is by nature unreliable. That being said, most programs use OpenGL in a way that makes it quite feasible to extract the rendered geometry and interpret it as objects.
Another member of my hackerspace wrote a tool that intercepts OpenGL calls to extract meshes (the original use case was so that we could 3D print game assets and the like on our RepRap). The sources for this tool can be found here: https://github.com/mazzoo/ogldump
However, ogldump is very limited. It doesn't support vertex buffer objects (VBOs), interleaved vertex arrays can mess things up, and things like shaders and generic vertex attributes are completely unheard of. Feel free to patch that in if you like.