Micrometer - Prometheus: Some meters are showing, others are not

I have a RabbitMQ message queue on which many other services report status updates for what's called a Point. Now, in a separate service (written with Spring Boot), I need to listen for those Point updates and convert them into a Prometheus-scrapable endpoint.
So my plan is to convert the incoming Point objects into Meters and register them in the MeterRegistry. That works, but only for some of the points. I haven't yet figured out exactly which ones are visible and which aren't, because it seems to depend on the order in which they come in after a restart of the service. I couldn't find any pattern yet that would help troubleshooting.
From what I understood reading the Micrometer documentation, the Meter is created once; we give it an object and a function that retrieves the double value for the metric from that object. Since I have new instances of Point coming in every couple of seconds, that value won't just be updated, because the Meter still references the old Point.
Assuming this is correct, I added a little wrapper around the Point (the PointWrapper) that I pass to the Meter, and I cache the PointWrapper instances myself. Now when a new Point comes in, I check whether I already have a PointWrapper for that Point, and if so, I replace the Point instance in the wrapper with the new one.
@Service
public class PointSubscriber {

    private final MetricsService metrics;

    public PointSubscriber(@Autowired MetricsService metrics) {
        this.metrics = metrics;
    }

    @Bean
    public Consumer<PointUpdate> processPoint() {
        return (update) -> {
            metrics.update(update.getPoint());
        };
    }
}

@Service
@RequiredArgsConstructor(onConstructor = @__(@Autowired))
public class MetricsService {

    private Logger logger = LoggerFactory.getLogger(getClass());
    private final MeterRegistry meterRegistry;
    private Map<String, PointWrapper> cache = new HashMap<>();

    public void update(Point point) {
        // Check if a wrapper is already in the cache
        String pointId = point.getId();
        PointWrapper cached = cache.get(pointId);
        if (cached != null) {
            // Replace the point in the wrapper to update the value
            logger.debug("Updating value for {}", point.getId());
            cached.setPoint(point);
        } else {
            // Create the wrapper, cache it and register a Meter
            PointWrapper pointMeter = PointWrapper.from(point.getId(), point);
            // Don't register Meters that will return null
            if (pointMeter.getMetricValue() == null) {
                logger.debug("Not going to register point with null value: {}", point.getId());
                return;
            }
            logger.debug("Registering point {}", point.getId());
            register(pointMeter, meterRegistry);
            cache.put(pointId, pointMeter);
        }
    }

    public Meter register(PointWrapper pointMeter, MeterRegistry registry) {
        Set<Tag> tags = new HashSet<>();
        tags.add(Tag.of("pointId", pointMeter.getPoint().getId()));
        tags.addAll(pointMeter.getPoint().getLabels().entrySet().stream()
                .map(e -> Tag.of(e.getKey(), e.getValue()))
                .collect(Collectors.toSet()));
        return Gauge.builder(pointMeter.getMetricName(), pointMeter, PointWrapper::getMetricValue)
                .tags(tags)
                .register(registry);
    }
}
@Data
@Builder
public class PointWrapper {

    public static PointWrapper from(String id, Point point) {
        return PointWrapper.builder()
                .id(id)
                .metricName("symphony_point")
                .point(point)
                .build();
    }

    private String id;
    private String metricName;

    @EqualsAndHashCode.Exclude
    private Point point;

    public Double getMetricValue() {
        if (point == null)
            return null;
        if (point instanceof QuantityPoint) {
            return ((QuantityPoint) point).getValue();
        } else if (point instanceof StatePoint<?>) {
            StatePoint<?> s = (StatePoint<?>) point;
            if (s.getState() == null)
                return null;
            return Double.valueOf(s.getState().asNumber());
        }
        return null;
    }
}
As I mentioned, this leads to a bunch of missing data points at the Prometheus endpoint. I read that Meters are uniquely identified by their name and tags. The name is always symphony_point, but I add the Point's ID as a tag called pointId, so every Meter.Id should already be unique because of that.
I can see logs like
Registering point outdoor_brightness_north
but that point is missing in the Prometheus endpoint.
Any ideas?
UPDATE
@checketts pointed out that metrics with the same name must have the same set of labels. I checked quickly and can confirm that this is not the case with the data I am using:
symphony.point area pointId device property floor room
symphony.point area pointId device property floor room
symphony.point area pointId property room floor device
symphony.point area pointId property room floor device
symphony.point area room pointId device property floor
symphony.point pointId area room device property floor
symphony.point area room pointId device property floor
symphony.point area room property pointId floor device
symphony.point pointId area property device
symphony.point area device property pointId
symphony.point area room pointId floor device property
symphony.point area pointId device property floor room
symphony.point area pointId device property room floor
symphony.point area pointId property floor device room
symphony.point area room property pointId floor device
symphony.point area property room floor pointId device
symphony.point pointId area room property floor device
symphony.point area device pointId property
symphony.point area device property pointId floor room
symphony.point area pointId room device property floor
symphony.point area pointId room device property floor
symphony.point area room pointId device property floor
symphony.point area pointId room floor device property
symphony.point pointId area device property
symphony.point area property room floor device pointId
symphony.point area pointId device room property floor
symphony.point area room device property floor pointId
symphony.point area device pointId property floor room
symphony.point area pointId property floor device room
symphony.point pointId area device property
symphony.point area pointId device property floor room
symphony.point area pointId property room floor device
symphony.point area pointId room device property floor
symphony.point pointId property area device
symphony.point area property pointId floor device room
symphony.point area room property pointId floor device
symphony.point area room pointId property floor device
symphony.point area pointId floor device property room
symphony.point area room device pointId property floor
symphony.point pointId property area device
symphony.point area room device property pointId floor
symphony.point area device property floor pointId room
symphony.point area room pointId floor device property
symphony.point area pointId property room floor device
symphony.point area room device property floor pointId
symphony.point area room device pointId property floor
symphony.point pointId area device property
symphony.point area property floor pointId device room
symphony.point area pointId device property floor room
symphony.point area property pointId device
symphony.point pointId area property floor device room
symphony.point area pointId floor device property room
symphony.point area property pointId floor device room
symphony.point area room pointId floor device property
symphony.point pointId area device property
symphony.point area room pointId property floor device
symphony.point area room pointId floor device property
symphony.point area room device property pointId floor
symphony.point area pointId room property floor device
symphony.point area room device property floor pointId
symphony.point area pointId property room floor device
symphony.point pointId area property device
symphony.point area pointId device property floor room
symphony.point area device pointId property floor room
symphony.point area room pointId property floor device
symphony.point area pointId device property floor room
symphony.point area pointId device room property floor
symphony.point area room pointId device property floor
symphony.point area property room pointId floor device
symphony.point pointId area device property
That's a big bummer, since the labels that come with the Points (which is what I build the Tags from) aren't well-defined. Still, I need to be able to query based on them. I could add them all to the metric name, but then queries like "show all indoor temperatures" become very unpleasant to write.
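For reference, the kind of label-based query I want to keep possible looks something like this (the label values here are only illustrative):

symphony_point{property="temperature", area="indoor"}

Folding those labels into the metric name would mean matching on substrings of names instead, which is what I want to avoid.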
Anyway, I will try to validate this is the root cause of my problem.

This line is suspicious:
tags.addAll(pointMeter.getPoint().getLabels().entrySet().stream()
.map (e -> Tag.of(e.getKey(),e.getValue()))
.collect(Collectors.toSet()));
Do all points have the same labels? With Prometheus, all meters with the same name need to have the same tag names (aka labels). The first point's label names become the default, and all others will be rejected.
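A minimal sketch of one way to keep the tag set consistent: decide on a fixed list of label keys up front and always emit every key, using a placeholder for points that don't carry that label. The KNOWN_LABELS list and the "none" placeholder are assumptions here, not part of the original code:

private static final List<String> KNOWN_LABELS =
        List.of("area", "device", "property", "floor", "room");

public Meter register(PointWrapper pointMeter, MeterRegistry registry) {
    Map<String, String> labels = pointMeter.getPoint().getLabels();
    List<Tag> tags = new ArrayList<>();
    tags.add(Tag.of("pointId", pointMeter.getPoint().getId()));
    // Always emit the same label keys so every symphony_point series
    // carries an identical set of label names.
    for (String key : KNOWN_LABELS) {
        tags.add(Tag.of(key, labels.getOrDefault(key, "none")));
    }
    return Gauge.builder(pointMeter.getMetricName(), pointMeter, PointWrapper::getMetricValue)
            .tags(tags)
            .register(registry);
}

That keeps the label names identical across all series of the metric, which is what the Prometheus registry expects.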

Related

coding 3 color sorting machine with 3 servo motors and 1 tcs230 sensor with c++ on arduino

I am working on a project where I need to sort objects by color. I'll put the objects on a conveyor, the sensor reads the color, and the conveyor moves each object to the box placed next to the conveyor, where a servo motor I installed flicks the object into the box. I found that the code cannot detect the color of the object (I already checked that the sensor works properly).
#include <Servo.h> // include the Servo library
// Define the pins for the TCS230 color sensors
#define TCS230_S0 4
#define TCS230_S1 5
#define TCS230_S2 6
#define TCS230_S3 7
// Define the pins for the servo motors
#define SERVO1 9
#define SERVO2 10
#define SERVO3 11
// Define the RGB color values for each color to be sorted
#define RED_R 200
#define RED_G 0
#define RED_B 0
#define GREEN_R 0
#define GREEN_G 200
#define GREEN_B 0
#define BLUE_R 0
#define BLUE_G 0
#define BLUE_B 200
#define YELLOW_R 200
#define YELLOW_G 200
#define YELLOW_B 0
Servo servo1; // create Servo object for servo1
Servo servo2; // create Servo object for servo2
Servo servo3; // create Servo object for servo3
void setup() {
// initialize the TCS230 color sensor pins
pinMode(TCS230_S0, OUTPUT);
pinMode(TCS230_S1, OUTPUT);
pinMode(TCS230_S2, OUTPUT);
pinMode(TCS230_S3, OUTPUT);
// initialize the servo motor pins
servo1.attach(9);
servo2.attach(10);
servo3.attach(11);
}
void loop() {
int red, green, blue; // variables to store the color values
// read the color values from the TCS230 color sensor
digitalWrite(TCS230_S2, LOW);
digitalWrite(TCS230_S3, HIGH);
red = pulseIn(TCS230_S0, LOW);
green = pulseIn(TCS230_S1, LOW);
digitalWrite(TCS230_S2, HIGH);
digitalWrite(TCS230_S3, HIGH);
blue = pulseIn(TCS230_S0, LOW);
// compare the color values to the predefined RGB values for each color
if (red > RED_R && green < RED_G && blue < RED_B) {
// move servo1 to sort the red object into the corresponding box
servo1.write(45);
delay(1000);
servo1.write(90);
delay(1000);
}
else if (red < GREEN_R && green > GREEN_G && blue < GREEN_B) {
// move servo2 to sort the green object into the corresponding box
servo2.write(45);
delay(1000);
servo2.write(90);
delay(1000);
}
else if (red < BLUE_R && green < BLUE_G && blue > BLUE_B) {
// move servo3 to sort the blue object into the corresponding box
servo3.write(45);
delay(1000);
servo3.write(90);
delay(1000);
}
}
I assume that your pin variable names match the sensor's pin names.
Then your code is utter nonsense. It clearly shows that you have not spent a minute reading any documentation on that sensor.
First of all, you never set TCS230_S0 or TCS230_S1 high, so the sensor is turned off. You need to set at least one of those pins high in order to configure the output frequency scaling.
Then you turn S2 low (S1 is still low) and S3 high. This configures the sensor so it outputs the blue value. You then attempt to read the red and green pulse widths from two output pins.
Then you switch the sensor to output green only and attempt to read blue, again by reading the pulse width from an output pin.
First you need to set the sensor to the desired output frequency scaling. Then you select the colour you want to measure, and then you measure the pulse width on the sensor's OUT pin. S0-S3 are configuration inputs only.
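A minimal sketch of that read sequence, reusing the pin defines from the question and assuming the sensor's OUT pin is wired to pin 8 (that pin number is an assumption, adjust it to your wiring):

#define TCS230_OUT 8

void setup() {
  pinMode(TCS230_S0, OUTPUT);
  pinMode(TCS230_S1, OUTPUT);
  pinMode(TCS230_S2, OUTPUT);
  pinMode(TCS230_S3, OUTPUT);
  pinMode(TCS230_OUT, INPUT);
  // S0 = HIGH, S1 = LOW selects 20% output frequency scaling (sensor on)
  digitalWrite(TCS230_S0, HIGH);
  digitalWrite(TCS230_S1, LOW);
}

unsigned long readColour(int s2, int s3) {
  // select the photodiode filter, then measure the pulse width on OUT
  digitalWrite(TCS230_S2, s2);
  digitalWrite(TCS230_S3, s3);
  delay(100);                      // let the output settle
  return pulseIn(TCS230_OUT, LOW); // smaller value = stronger colour
}

void loop() {
  unsigned long red   = readColour(LOW, LOW);   // red filter
  unsigned long blue  = readColour(LOW, HIGH);  // blue filter
  unsigned long green = readColour(HIGH, HIGH); // green filter
  // ...compare red/green/blue and drive the servos as before
}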

Calculate approximated distance using RSSI

I am working on a project that aims to measure the approximate distance between a single Raspberry Pi and a nearby smartphone.
The final goal of the project is to check whether there is a smartphone in the same room as the Raspberry Pi.
I thought of two ways to implement it. The first is to measure the distance using the RSSI value; the second is to calibrate the setup once, from many places inside and outside the room, and derive a threshold RSSI value.
I read that smartphones send Wi-Fi packets even when Wi-Fi is disabled, so I thought of using this to get the RSSI value from the transmitting smartphone (using Kismet passively) and check whether it is in the room. I could also use Bluetooth RSSI.
How can I calculate distance using RSSI?
This is an open problem. Measuring distance from RSSI in the ideal case is easy; the main challenge is reducing the noise produced by multipath, reflected RF signals and interference. Anyway, you can convert RSSI to distance with the code below:
#include <math.h>

double rssiToDistance(int RSSI, int txPower, double PL0) {
    /*
     * RSSI in dBm
     * txPower is a transmitter parameter calculated from its physical layer and antenna, in dBm
     * Return value in meters
     *
     * You should calculate "PL0" in a calibration stage:
     * PL0 = txPower - RSSI; // When the distance is distance0 (distance0 = 1 m or more)
     *
     * So, RSSI is modelled by the formula below:
     * RSSI = txPower - PL0 - 10 * n * log(distance/distance0) - G(t)
     * G(t) ~= 0 // This parameter is the main challenge in achieving more accuracy.
     * n = 2 (Path Loss Exponent, 2 in free space)
     * distance0 = 1 (m)
     * distance = 10 ^ ((txPower - RSSI - PL0) / (10 * n))
     *
     * Read more details:
     * https://en.wikipedia.org/wiki/Log-distance_path_loss_model
     */
    return pow(10, (txPower - RSSI - PL0) / (10.0 * 2));
}
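A rough usage sketch of the function above, with made-up numbers: in practice you would average several RSSI samples at a known distance (1 m here) during calibration, and txPower depends on your transmitter.

#include <stdio.h>

int main(void) {
    int txPower = -59;   /* assumed transmitter power in dBm, device specific */
    int rssiAt1m = -61;  /* averaged sample taken while the phone is 1 m away */
    double PL0 = txPower - rssiAt1m;

    int liveRssi = -75;  /* a fresh reading, e.g. captured passively with Kismet */
    printf("approx. distance: %.1f m\n", rssiToDistance(liveRssi, txPower, PL0));
    return 0;
}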

How could I rotate my watch's display to a variable degree?

I would like to rotate my watch's display to a variable degree, like a compass does in real life. So far I have only discovered this function within the Samsung API:
screen.lockOrientation("portrait-secondary");
I would like more control than this. If that means using the native API, that's fine, but I need help on where to look.
You may try using the evas_map_util_rotate() function to rotate an object. It rotates the map by an angle around the center coordinates of the rotation (the rotation point). A positive angle rotates the map clockwise, while a negative angle rotates it counter-clockwise.
Please have a look at the Modifying a Map with Utility Functions section of this link. It contains an example which shows how to rotate an object around its center point by 45 degrees clockwise.
You may also use the sample code snippet below.
/**
 * @param[in] object_to_rotate The object you want to rotate
 * @param[in] degree The degree you want to rotate
 * @param[in] cx The rotation's center horizontal position
 * @param[in] cy The rotation's center vertical position
 */
void view_rotate_object(Evas_Object *object_to_rotate, double degree, Evas_Coord cx, Evas_Coord cy)
{
    Evas_Map *m = NULL;

    if (object_to_rotate == NULL) {
        dlog_print(DLOG_ERROR, LOG_TAG, "object is NULL");
        return;
    }

    m = evas_map_new(4);
    evas_map_util_points_populate_from_object(m, object_to_rotate);
    evas_map_util_rotate(m, degree, cx, cy);
    evas_object_map_set(object_to_rotate, m);
    evas_object_map_enable_set(object_to_rotate, EINA_TRUE);
    evas_map_free(m);
}
Hope it'll help.
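For the compass use case, a hypothetical caller could look like this (assuming needle is the Evas_Object you want to spin and heading_degrees comes from your sensor callback):

static void
update_compass(Evas_Object *needle, double heading_degrees)
{
    Evas_Coord x, y, w, h;

    /* rotate the object around its own center */
    evas_object_geometry_get(needle, &x, &y, &w, &h);
    view_rotate_object(needle, heading_degrees, x + w / 2, y + h / 2);
}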

2D Screen coordinate to 3D position Directx 9 / Box Select

I am trying to implement box select in a 3D world. Basically: click, hold the mouse, then release the mouse, get a box, and then box select. To start, I'm trying to figure out how to get the coordinates of the clicks in 3D.
I have ray picking, and that is not getting the right coordinate (it gives an origin and a direction). It keeps returning the same origin no matter what the screen X/Y is (although the direction is different).
I've also tried:
D3DXVECTOR3 ori = D3DXVECTOR3(sx, sy, 0.0f);
D3DXVECTOR3 out;
D3DXVec3Unproject(&out, &ori, &viewPort, &projectionMat, &viewMat, &worldMat);
And it gets the same thing: the coordinates are very close to each other no matter what screen coordinates I pass in (and they are wrong). It's almost like it returns the eye position instead of the actual world coordinate.
How do I turn 2D screen coordinates into 3D using DirectX 9.0c?
This is called picking in Direct3D. To select a model in 3D space, you mainly need three steps:
Generate the picking ray
Transform the picking ray and the model you want to pick into the same space
Do an intersection test of the picking ray and the model
Generate the picking ray
When we click the mouse on the screen (say the point is s), the model is selected when its bounding box projects onto the area surrounding the point s on the projection window.
So, in order to generate the picking ray from the given screen coordinates (x, y), we first need to transform (x, y) onto the projection window; this can be done by inverting the viewport transformation. Another thing: the point on the projection window was scaled by the projection matrix, so we should divide by those scale factors.
In DirectX, the camera is always placed at the origin of view space, so the picking ray starts from the origin, and the projection window is the near clip plane (z = 1). This is what the code below does.
Ray CalcPickingRay(LPDIRECT3DDEVICE9 Device, int screen_x, int screen_y)
{
float px = 0.0f;
float py = 0.0f;
// Get viewport
D3DVIEWPORT9 vp;
Device->GetViewport(&vp);
// Get Projection matrix
D3DXMATRIX proj;
Device->GetTransform(D3DTS_PROJECTION, &proj);
px = ((( 2.0f * screen_x) / vp.Width) - 1.0f) / proj(0, 0);
py = (((-2.0f * screen_y) / vp.Height) + 1.0f) / proj(1, 1);
Ray ray;
ray._origin = D3DXVECTOR3(0.0f, 0.0f, 0.0f);
ray._direction = D3DXVECTOR3(px, py, 1.0f);
return ray;
}
Transform the picking ray and model into the same space.
We usually do this by transforming the picking ray into world space: simply get the inverse of your view matrix, then apply that inverse matrix to your picking ray.
// transform the ray from view space to world space
void TransformRay(Ray* ray, D3DXMATRIX* invertViewMatrix)
{
// transform the ray's origin, w = 1.
D3DXVec3TransformCoord(
&ray->_origin,
&ray->_origin,
invertViewMatrix);
// transform the ray's direction, w = 0.
D3DXVec3TransformNormal(
&ray->_direction,
&ray->_direction,
invertViewMatrix);
// normalize the direction
D3DXVec3Normalize(&ray->_direction, &ray->_direction);
}
Do the intersection test
If everything above went well, you can do the intersection test now. This is a ray-box intersection, so you can use the function D3DXBoxBoundProbe. You can change the visual mode of your box to see whether the picking really works, for example, set the fill mode to solid or wireframe when D3DXBoxBoundProbe returns TRUE.
You can perform the picking in response to WM_LBUTTONDOWN.
case WM_LBUTTONDOWN:
{
// Get screen point
int iMouseX = (short)LOWORD(lParam) ;
int iMouseY = (short)HIWORD(lParam) ;
// Calculate the picking ray
Ray ray = CalcPickingRay(g_pd3dDevice, iMouseX, iMouseY) ;
// transform the ray from view space to world space
// get view matrix
D3DXMATRIX view;
g_pd3dDevice->GetTransform(D3DTS_VIEW, &view);
// inverse it
D3DXMATRIX viewInverse;
D3DXMatrixInverse(&viewInverse, 0, &view);
// apply on the ray
TransformRay(&ray, &viewInverse) ;
// collision detection
D3DXVECTOR3 v(0.0f, 0.0f, 0.0f);
if (D3DXBoxBoundProbe(&box.minPoint, &box.maxPoint, &ray._origin, &ray._direction))
{
g_pd3dDevice->SetRenderState(D3DRS_FILLMODE, D3DFILL_SOLID);
}
break ;
}
It turns out I was approaching the problem the opposite way. Turning 2D into 3D didn't make sense in the end. Converting the vertices from 3D to 2D and then checking whether they fall inside the 2D box was the right answer!
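A rough sketch of that approach: project each vertex to screen space with D3DXVec3Project and test it against the 2D drag rectangle. worldMat, viewMat, projectionMat and viewPort are assumed to be the same ones used for rendering, and selection is the rectangle built from the mouse-down and mouse-up points:

bool IsVertexInSelection(const D3DXVECTOR3& vertex, const RECT& selection,
                         const D3DVIEWPORT9& viewPort,
                         const D3DXMATRIX& projectionMat,
                         const D3DXMATRIX& viewMat,
                         const D3DXMATRIX& worldMat)
{
    // project the world-space vertex to screen space (x, y in pixels, z is depth)
    D3DXVECTOR3 screen;
    D3DXVec3Project(&screen, &vertex, &viewPort, &projectionMat, &viewMat, &worldMat);

    return screen.x >= selection.left && screen.x <= selection.right &&
           screen.y >= selection.top  && screen.y <= selection.bottom;
}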

How to generate random points around the curves of characters using processing?

I would like to generate random/noise points along each character of a multi-line text. I've tried this with the Geomerative library, but unfortunately it does not support multi-line text. Any other solution?
You could find a library to get the path points of the text, or, if you simply want to add points, you could take a 2D snapshot of the text (using either get() or PGraphics) and fill in pixels. Here's a minimal example.
PImage snapshot;
int randomSize = 3;
void setup(){
//render some text
background(255);
fill(0);
textSize(40);
text("Hello",0,50);
//grab a snapshot
snapshot = get();
}
void draw(){
int rx = (int)random(snapshot.width);  // pick a random pixel location
int ry = (int)random(snapshot.height); // you can pick only the areas that have text, or the whole image for a bit of hit-and-miss randomness
// check if it's the same colour as the text; if so, pick a random neighbour and also paint it black
if(snapshot.get(rx,ry) == color(0)) snapshot.set(rx+((int)random(-randomSize,randomSize)),ry+((int)random(-randomSize,randomSize)),color(0));
image(snapshot,0,0);
}
