I am evaluating VictoriaMetrics for an IoT application where we sometimes have gaps in a series due to hardware or communication issues. In some time-series reporting situations it is helpful for us to interpolate values for the missing time intervals. I see that MetricsQL (which extends PromQL) has a keep_last_value() function that fills gaps by holding the last observed value until a new one appears, which will be helpful to us, but in some situations a linear interpolation between the values before and after the gap is a more realistic estimate for the missing portion. Is there a function in PromQL or MetricsQL that will do linear interpolation of missing data in a series, or is it possible to construct a more complex query that achieves this?
Clarifying the desired interpolation
What I would like is a simple interpolation between the points immediately before and after the gap; this is, I believe, what TimescaleDB's interpolate() function does. In other words, if my time series is:
(1:00, 2)
(2:00, 4)
(3:00, NaN)
(4:00, 5)
(5:00, 1)
I would like the interpolated 3:00 value to be 4.5, halfway between the points immediately before and after. I don't want it to be 6 (which is what I would get by extrapolating from the points before the missing one, ignoring the points after), and I don't want whatever value I would get if I did a linear regression on the whole series and interpolated at 3:00 (presumably 3, or something close to it).
Of course, this is a simple illustration and it's also possible that the gap could last more than one time step. But in that case I would still like the interpolation to be based solely off of the points immediately before and immediately after the gap, ignoring the rest of the series.
Final answer
Use the interpolate function, now available in VictoriaMetrics starting from v1.38.0.
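For example, assuming the series is simply named metric (the name here is just a placeholder for whatever your metric is called):

interpolate(metric)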
Original suggestion
This does not achieve the exact interpolation requested in the revised question, but it may be useful for others with slightly different requirements.
Try combining the predict_linear function with the default operator from MetricsQL in the following way:
metric default predict_linear(metric[1h], 0)
Try modifying the value in square brackets in order to get the desired level of interpolation.
I have been trying to cluster a set of text documents. I have a sparse TF-IDF matrix with around 10k documents (a subset of a large dataset), and I run the scikit-learn k-means algorithm with different numbers of clusters (10, 50, 100). All the other parameters are left at their default values.
I get very strange behavior: no matter how many clusters I specify, or even if I change the number of iterations, there is always one cluster that contains most of the documents and many clusters that contain just one document each. This is highly non-uniform behavior.

Does anyone know what kind of problem I am running into?
Here are the possible things that might be going "wrong":
Your k-means cluster initialization points may be chosen as the same set of points in each run. I recommend using 'random' for the init parameter of KMeans (http://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html). If that doesn't work, then supply k-means with your own set of random initial cluster centers. Remember to seed your random generator with the current date and time; Python's random module (https://docs.python.org/2/library/random.html) uses the current date and time as the default seed. A short sketch of this follows after this list.
Your distance function, i.e. Euclidean distance, might be the culprit. This is less likely, but it is always good to run k-means using cosine similarity, especially when you are using it for document similarity. scikit-learn doesn't have this functionality at present, but you should look here: Is it possible to specify your own distance function using scikit-learn K-Means Clustering?
These two combined should give you good clusters.
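A minimal sketch of the first suggestion, assuming X is the sparse TF-IDF matrix from the question and 50 is just one of the cluster counts mentioned:

from sklearn.cluster import KMeans

# 'random' draws fresh initial centers from the data on every run;
# n_init repeats the whole clustering and keeps the best result.
km = KMeans(n_clusters=50, init='random', n_init=10)
labels = km.fit_predict(X)
# Alternatively, pass your own ndarray of shape (n_clusters, n_features) as init=...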
With the help of the above answers and comments, I noticed that the problem was outliers and noise in the original space. For this, we should use a dimensionality reduction method that eliminates the unwanted noise in the data. I tried random projections first, but they failed to work with this text data: the problem was still not solved.
Then, using truncated singular value decomposition, I was able to get perfectly uniform clusters. Hence, Truncated SVD is the way to go with textual data, in my opinion.
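A sketch of that approach, again assuming X is the sparse TF-IDF matrix; the component count of 100 and the re-normalization step are assumptions on my part, not part of the original answer:

from sklearn.decomposition import TruncatedSVD
from sklearn.preprocessing import Normalizer
from sklearn.pipeline import make_pipeline
from sklearn.cluster import KMeans

# Reduce the sparse TF-IDF matrix to a dense low-rank representation (LSA),
# then re-normalize so that Euclidean k-means behaves much like cosine k-means.
lsa = make_pipeline(TruncatedSVD(n_components=100), Normalizer(copy=False))
X_reduced = lsa.fit_transform(X)
labels = KMeans(n_clusters=50, n_init=10).fit_predict(X_reduced)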
Please suggest a way to do efficient incremental clustering. I am trying to put similar strings into one group, and comparing every string with every other one is not efficient. What I have thought of is to compare each input string only with a cluster representative, i.e. one representative pattern for the strings in that cluster, so that a new string needs to be compared against that representative alone. So I need something to start with, so that the nearly similar strings in a cluster can be represented by one universal pattern (maybe) with the highest possible accuracy. That way each new input is compared only with the cluster representatives and is put into the cluster it is found similar to. The number of clusters and the number of inputs are not fixed: the strings are streaming and may be of any pattern length.

I hope I was clear. Just point me to some terms so I can get going.
It sounds like the part of the problem that is giving you difficulty is finding a representative pattern to use for each cluster.
The usual way to do clustering of strings is to treat them as vectors and use cosine similarity as the distance measure: http://en.wikipedia.org/wiki/Cosine_distance
When the strings in the cluster are represented as vectors, then I think the center of the cluster is just the sum of the normalized vectors. Use this sum as the representative to compare each new string against.
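A rough sketch of that idea in Python, assuming each string has already been turned into a numeric vector; the vectorization step and the similarity threshold are placeholders, not something the answer specifies:

import numpy as np

representatives = []   # one running sum of normalized member vectors per cluster

def assign(vec, threshold=0.8):
    """Assign an already-vectorized string to a cluster, or open a new one."""
    v = vec / np.linalg.norm(vec)
    best, best_sim = -1, -1.0
    for i, rep in enumerate(representatives):
        sim = np.dot(v, rep) / np.linalg.norm(rep)   # cosine against the representative
        if sim > best_sim:
            best, best_sim = i, sim
    if best_sim >= threshold:
        representatives[best] += v                   # center = sum of normalized vectors
        return best
    representatives.append(v.copy())                 # no cluster is close enough: start a new one
    return len(representatives) - 1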
I'm working on a project where I have a feature in an image described as a set of X & Y coordinates (5-10 points per feature) which are unique for this feature. I also have a database with thousands of features where each have the same type of descriptor. The result looks like this:
myFeature: (x1,y1), (x2,y2), (x3,y3)...
myDatabase: Feature1: (x1,y1), (x2,y2), (x3,y3)...
Feature2: (x1,y1), (x2,y2), (x3,y3)...
Feature3: (x1,y1), (x2,y2), (x3,y3)...
...
I want to find the best match of myFeature in the features in myDatabase.
What is the fastest way to match these features? Currently I am stepping through each feature in the database and comparing each individual point:
import math

best_score = 0
best_match = None
for feature in my_database:
    score = 0
    for point in my_feature:
        # find the minimum distance from the current point to the
        # points describing the current feature in the database
        distance = min(math.dist(point, other) for other in feature)
        if distance < threshold:
            # there is a match to the current point in the target feature
            score += 1
    if score > best_score:
        # save this feature as the new best match
        best_score = score
        best_match = feature
This search works, but clearly it gets painfully slow on large databases. Does anyone know of a faster method to do this type of search, or at least if there is a way to quickly rule out features that clearly won't match the descriptor?
Create a bitset (an array of 1s and 0s) from each feature.
Create such a bitmask for your search criteria and then just use a bitwise and to compare the search mask to your features.
With this approach, you can shift most work to the routines responsible for saving the stuff. Also, creating the bitmasks should not be that computationally intensive.
If you just want to rule out features that absolutely can't match, then your mask-creation algorithm should take care of that and create the bitmasks a bit fuzzy.
The easiest way to create such masks is probably to create a matrix as big as the matrix of your features and put a one in every coordinate that is set for the feature and a zero in every coordinate that isn't. Then turn that matrix into a one-dimensional row and compare that feature row to the search mask bitwise.

This is similar to the way bitmap indexes work on large databases (e.g. Oracle), but with a different intention and without a full bitmap image of all database rows in memory.
The power of this is in the bitwise comparisons.
On a 32-bit machine you can perform 32 comparisons per instruction, whereas a point-by-point comparison with integer coordinates does just one. The gain is even larger compared with floating-point operations, depending on the architecture.
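A rough illustration of the idea in Python, using one big integer per feature as the bitset; the grid size, the integer coordinates, the fuzz radius and the candidate threshold are all made-up example values:

GRID = 64                                    # bitmask resolution (example value)

def to_mask(points, fuzz=1):
    """One big integer per feature: set a bit for each occupied cell plus a fuzzy border."""
    mask = 0
    for x, y in points:
        for cx in range(max(0, x - fuzz), min(GRID, x + fuzz + 1)):
            for cy in range(max(0, y - fuzz), min(GRID, y + fuzz + 1)):
                mask |= 1 << (cy * GRID + cx)
    return mask

masks = [to_mask(f) for f in my_database]    # precompute once when the features are saved
query = to_mask(my_feature)
candidates = [i for i, m in enumerate(masks)
              if bin(m & query).count("1") >= 2]   # cheap bitwise AND + popcount filter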
This in general looks like a spatial index problem. It's not my field, but you'll probably need to build a sort of tree index, such as a quadtree, that you can use to easily search for features. You can find some links from this wikipedia article: http://en.wikipedia.org/wiki/Spatial_index
It might be a problem that you can easily implement in an existing spatial database. It's very GIS-like in its description.
One thing you can do is calculate a centre of gravity (centroid) for every feature and use that to whittle down the search space a bit (a search over single points is a lot easier to build an index for), but that has the downside of being just a heuristic (depending on the shapes of your features, the centroid may end up in weird places).
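A sketch of the centre-of-gravity pruning combined with a spatial index, using SciPy's k-d tree over the 2-D centroids; the search radius and the score() helper (standing in for the scoring loop from the question) are assumptions for illustration:

import numpy as np
from scipy.spatial import cKDTree

# Index one centroid per database feature once, up front.
centroids = np.array([np.mean(f, axis=0) for f in my_database])   # each f is an (N, 2) point array
tree = cKDTree(centroids)

# At query time, run the full point-by-point scoring only on nearby candidates.
candidate_ids = tree.query_ball_point(np.mean(my_feature, axis=0), r=25.0)
best = max(candidate_ids, key=lambda i: score(my_feature, my_database[i]))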
I have a function which takes two strings and gives out the cosine similarity value which shows the relationship between both texts.
If I want to compare 75 texts with each other, I need to make 75 × 75 = 5,625 single comparisons to have every text compared with every other (or 2,775 if I skip self-comparisons and use the symmetry of the measure).

Is there a way to reduce this number of comparisons? For example, with sparse matrices or k-means?
I don't want to talk about my function or about ways to compare texts. Just about reducing the number of comparisons.
What Ben says is true: to get better help, you need to tell us what the goal is.
For example, one possible optimization if you want to find similar strings is storing the string vectors in a spatial data structure such as a quadtree, where you can outright discard the vectors that are too far away from each other, avoiding many comparisons.
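A rough sketch of that pruning idea in Python, using a k-d tree instead of a quadtree and reducing the TF-IDF vectors first so the tree stays useful; the component count, the radius, and the compare() function (standing in for the asker's expensive similarity function) are all assumptions:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from scipy.spatial import cKDTree

vectors = TfidfVectorizer().fit_transform(texts)             # 'texts' is the list of 75 strings
reduced = TruncatedSVD(n_components=20).fit_transform(vectors)

tree = cKDTree(reduced)
for i, j in tree.query_pairs(r=0.5):                         # only pairs that are spatially close
    similarity = compare(texts[i], texts[j])                 # the expensive pair-wise comparison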
If your algorithm is pair-wise, then you probably can't reduce the number of comparisons, by definition.
You'll need to use a different algorithm, or at the very least pre-process your input if you want to reduce the number of comparisons.
Without the details of your function, it's difficult to give any concrete help.