Here I have an image of two objects/stars:
I have hundreds of images like this one, from the NASA MAST Archive. (The corners are not stars, just artifacts; one star is at the top, the other at the bottom.)
What algorithm should I use to determine the number of objects (in this case stars) in one picture? For a human, it is pretty obvious that there are two objects, but I want to implement this detection in Python.
For reference, here is a picture with one star only:
(The pictures are produced from FITS files with PyKE.)
You can apply a threshold and use OpenCV to count the number of connected components (groups).
For example:
import cv2

# Read the image as single-channel grayscale; Otsu's method needs an 8-bit, 1-channel input
src = cv2.imread('/path/to/your/image', cv2.IMREAD_GRAYSCALE)
ret, thresh = cv2.threshold(src, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

connectivity = 8  # also counts diagonal neighbors; choose 4 for horizontal/vertical neighbors only

# Analysis of the binary image
n_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(thresh, connectivity, cv2.CV_32S)
n_groups = n_labels - 1  # label 0 is the background
To get rid of the noise, you can decide to ignore components with fewer than TH connected pixels (from the example images you uploaded I would choose something like TH=4).
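A minimal sketch of that filtering step, reading the same kind of grayscale image as above (the path and TH=4 are placeholders to tune against your data):

import cv2
import numpy as np

TH = 4  # minimum component size in pixels; smaller blobs are treated as noise

src = cv2.imread('/path/to/your/image', cv2.IMREAD_GRAYSCALE)
_, thresh = cv2.threshold(src, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
n_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(thresh, 8, cv2.CV_32S)

# stats[:, cv2.CC_STAT_AREA] holds each component's pixel count; row 0 is the background
areas = stats[1:, cv2.CC_STAT_AREA]
n_stars = int(np.count_nonzero(areas >= TH))
print(n_stars)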
So I have one big mesh which models a building. I would like to chop the mesh into parts by floor and hallway to make geographically distinct "scenes" which I can cull/order before rendering to reduce render time. I used 3ds Max to "Slice" the model into various meshes, however the Scene Explorer still shows only one object. When I export the scene to FBX and read it in Assimp, it only reads in one mesh.
TLDR: How do I split a model in 3DS Max (or similar) such that it exports as multiple meshes which I can selectively render?
The solution is to "Slice" the model; in my case I used the Slice Plane to get clean cuts. Then use an "Edit Mesh" modifier and "Detach" each individual component.
Here is a 3ds Max forum post asking the exact same thing. Hopefully the answer in there can be useful for you too.
https://forums.autodesk.com/t5/3ds-max-forum/split-a-mesh-into-several-meshes/m-p/5927179#M109322
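If you want to double-check that the exported FBX really contains separate meshes, a quick sketch with pyassimp (assuming you have it installed; the file name is a placeholder) could look like this:

import pyassimp

scene = pyassimp.load('building.fbx')
try:
    # Each detached component should show up as its own entry in scene.meshes
    print('number of meshes:', len(scene.meshes))
    for i, mesh in enumerate(scene.meshes):
        print(i, len(mesh.vertices), 'vertices')
finally:
    pyassimp.release(scene)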
The general task is to binarize the image so that only the brightest spots remain. But adaptive binarization and the Otsu method do not give an acceptable result due to light traces (shown in the image).
I think I need to go over the entire image with a small window that picks out a local minimum in each area. I am counting on the fact that, with the correct selection of the threshold, only the bright spots that need to be found will remain; at least, that is the idea. But I do not know how to apply the standard OpenCV threshold function in sliding windows.
UPD: After the proposed adaptive threshold, the image looks like this. Not perfect, but much closer to what I need. It seems that a combination of threshold functions does not always give a better result than a single one.
This is the command:
outputimg = cv.adaptiveThreshold(img, 255, cv.ADAPTIVE_THRESH_MEAN_C, cv.THRESH_BINARY, 11, 0)
Further explanation and examples: https://docs.opencv.org/3.4/d7/d4d/tutorial_py_thresholding.html
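A self-contained sketch of that call, assuming a grayscale input (the block size of 11 and constant of 0 come from the command above and may need tuning; the paths are placeholders):

import cv2 as cv

# Adaptive thresholding computes a separate threshold for every pixel from the mean
# of its blockSize x blockSize neighborhood, which is essentially the sliding-window
# approach described in the question.
img = cv.imread('/path/to/your/image', cv.IMREAD_GRAYSCALE)
outputimg = cv.adaptiveThreshold(img, 255, cv.ADAPTIVE_THRESH_MEAN_C,
                                 cv.THRESH_BINARY, 11, 0)
cv.imwrite('/path/to/output.png', outputimg)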
I'm trying to make a chart to visualize our product backlog over time. My idea is to show a line (a "series") for each work item, with each line having a width according to its estimate, and each line stacked on all the other work items that are ahead of it (as of each day). So on any given day, a line would be at a Y-axis height representing how much work is ahead of it in the backlog.
The problem is that the ordering changes day by day, so I'd need to have the series cross over each other, and I haven't been able to find a charting tool that will let me do it.
(I'm trying to demonstrate the high-level "flow" of work items - the ones near the top of the queue will keep getting done, but the ones near the back of the queue will just sit there for a long time. New ones will be introduced periodically, and old ones will be canceled. I imagine the rendered chart will look like streaks of wind, if you will.)
Is there a way to do it in D3, maybe?
Pretty much any chart visualization you can think of can be done with D3; it's just a matter of execution.
I'm not positive exactly what you're describing, but is it something like this baby name chart?
You can look through the gallery to see a lot of different examples of what is possible with d3.
I have spectrograms which I acquired without the original sound files. These are greyscale images where the x axis represents time, the y axis represents frequency, and each pixel value represents volume (or so I believe).
I am pretty certain the files are those of a few songs and I need to be able to identify which songs those are. There are many files like these, so I need to be able to convert them in bulk.
Is there a way to convert them back to an MP3? How would this be done?
I understand that it won't contain all the original information, but for my purposes any conversion will do.
The answer is: it depends on your needs and resources. It is possible, but you may not be satisfied with the result. I understand that you only have the data as image files. Ideally you would have separate real and imaginary spectra; otherwise you lack all the phase information, although the recording should still be understandable. A linear frequency scale is desirable. The other problem is resolution.
For audible data you need at least 4k samples/s, so each second of your record should span at least 4000/Fpx pixels in the time domain, where Fpx is the number of pixels in the frequency domain. Assuming Fpx is 400, each second of your record should be at least 10px wide. For HiFi it's about 10 times more.
I also doubt that the amplitude information, mapped to RGB (or black and white), is reliable. You will probably get only a few bits per sample, whereas decent quality starts at about 12 bits per sample.
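If you want to attempt a reconstruction in Python, one common option when only the magnitude is available is Griffin-Lim phase estimation, for example via librosa. This is only a sketch: it assumes the image is a linear-frequency magnitude spectrogram in dB with low frequencies at the bottom, and the sample rate, FFT size, hop length, and dB range are guesses you would have to match to however the spectrograms were generated. It writes a WAV, which you can then convert to MP3 with an external tool.

import cv2
import librosa
import numpy as np
import soundfile as sf

SR = 22050    # assumed sample rate
N_FFT = 1024  # assumed FFT size; the frequency axis is resized to N_FFT/2 + 1 rows
HOP = 256     # assumed hop length between spectrogram columns

# Load the spectrogram image as grayscale and flip it so row 0 is the lowest frequency
img = cv2.imread('/path/to/spectrogram.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)
img = cv2.flip(img, 0)

# Map pixel intensities (0..255) onto an assumed 80 dB dynamic range, then back to magnitude
db = img / 255.0 * 80.0 - 80.0
mag = librosa.db_to_amplitude(db)
mag = cv2.resize(mag, (mag.shape[1], N_FFT // 2 + 1))

# Griffin-Lim iteratively estimates the missing phase from the magnitude alone
audio = librosa.griffinlim(mag, n_iter=32, hop_length=HOP)
sf.write('/path/to/output.wav', audio, SR)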
I need to check for complete separation. I am using SPSS and need to know what steps I have to take to get the graphic on this site. Can someone help me?
SPSS does not provide that probability curve (SAS and Stata can). However, plotting the 1/0 outcome against the continuous predictor and observing how the two horizontal bands of points overlap may be enough to give you a hint.
If you have enough data, you can also split it into groups (for example, 10 equal groups by your continuous predictor), then compute each group's mean (i.e. the probability of a "yes" outcome) and join the points. That line should approximate the curve in the illustration you provided.
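SPSS is the tool in question, but purely as an illustration of the binning idea above, a rough sketch in Python/pandas (column names and file path are placeholders) could look like this:

import pandas as pd
import matplotlib.pyplot as plt

# df is assumed to have a binary 0/1 'outcome' column and a continuous 'predictor' column
df = pd.read_csv('/path/to/data.csv')

# Split the predictor into 10 equal-sized groups and compute each group's mean outcome,
# i.e. the observed probability of a "yes" in that bin
df['bin'] = pd.qcut(df['predictor'], q=10)
grouped = df.groupby('bin', observed=True)['outcome'].mean()

# Plot the bin midpoints against the observed probabilities to approximate the curve
midpoints = [interval.mid for interval in grouped.index]
plt.plot(midpoints, grouped.values, marker='o')
plt.xlabel('predictor')
plt.ylabel('observed probability of outcome = 1')
plt.show()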